Chains

Chains in GenAI Stack are cohesive assemblies of interconnected and easily reusable components. They encapsulate a sequence of calls to various components such as models, document retrievers, other chains, etc., offering a streamlined and user-friendly interface to navigate through this sequence.

RetrievalQAPrompt

The Retrieval QA prompt is a chain component that allows the user to combine their own prompt with the retriever and generator models.

Parameters

  • chain_type: Different chain types with distinct combination strategies. Methods:

    • stuff: Inserts a list of small documents into a prompt, suitable for applications with a few small documents.

    • map_reduce: Applies an LLM chain to each document individually and combines the outputs into a single result.

    • map_rerank: Runs an initial prompt on each document, scoring responses and returning the highest-scoring one.

    • refine: Constructs a response iteratively by updating answers for each document using an LLM chain in the GenAI Stack.

  • LLM: Large Language Model integration used in the GenAI Stack.

  • Memory [Optional]: Stores the chat responses as conversation history.

  • Prompt: A PromptTemplate or ChatPromptTemplate containing the instructions for the chain.

  • Retriever: The retriever used to fetch relevant documents.

Example usage:

Retrieval QA with prompt requires an LLM and an optional Memory component. The Prompt specifies what the LLM should do and how it should behave when answering the user's question, using the information in the context provided by a Retriever component (a vector store can also be passed as the retriever). The variables question and context must appear in your prompt.
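
Sketched with the underlying LangChain classes, the setup might look roughly like this; the model, prompt text, and vectorstore variable are assumptions for illustration, not GenAI Stack defaults:

```python
# A rough sketch of the equivalent wiring with plain LangChain objects.
# `vectorstore` is assumed to be an existing vector store (e.g. Chroma or FAISS);
# the model and prompt text are illustrative, not GenAI Stack defaults.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Answer the question using only the context below.\n\n"
        "Context:\n{context}\n\n"
        "Question: {question}\n"
        "Answer:"
    ),
)

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",                    # stuff, map_reduce, map_rerank or refine
    retriever=vectorstore.as_retriever(),  # assumed existing vector store
    chain_type_kwargs={"prompt": prompt},  # inject the custom prompt
)

print(qa.run("What does the document say about pricing?"))
```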

LLM Chain

A chain component that utilizes a large language model (LLM) to generate responses based on a given prompt.

Parameters

  • LLM: Large Language Model integration used in the GenAI Stack.

  • Memory [Optional]: Stores the chat responses as conversation history.

  • Prompt: A PromptTemplate or ChatPromptTemplate containing the instructions for the chain.

Example usage:

The LLM chain takes a ChatPromptTemplate or PromptTemplate, along with an LLM and optional Memory, and works as a simple chatbot that relies only on the provided LLM, without any external knowledge source.
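
A minimal sketch of the same idea with plain LangChain objects; the model choice and prompt wording are illustrative only:

```python
# A minimal sketch with plain LangChain objects; model and prompt wording are illustrative.
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["chat_history", "question"],
    template="Previous conversation:\n{chat_history}\n\nUser: {question}\nAssistant:",
)

chatbot = LLMChain(
    llm=ChatOpenAI(temperature=0.7),
    prompt=prompt,
    memory=ConversationBufferMemory(memory_key="chat_history"),  # optional
)

print(chatbot.run(question="Hi! What can you help me with?"))
```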

CombineDocsChain

A chain component that takes in a list of documents and combines them into a single prompt for the LLM to process.

Parameters

  • LLM: Large Language Model integration used in the GenAI Stack.

  • chain_type: Different chain types with distinct combination strategies. Methods:

    • stuff: Inserts a list of small documents into a prompt, suitable for applications with a few small documents.

    • map_reduce: Applies an LLM chain to each document individually and combines the outputs into a single result.

    • map_rerank: Runs an initial prompt on each document, scoring responses and returning the highest-scoring one.

    • refine: Constructs a response iteratively by updating answers for each document using an LLM chain in the GenAI Stack.

Example usage:
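
A minimal sketch using LangChain's load_qa_chain helper as a stand-in for the visual component; the documents and question are made up for illustration:

```python
# A minimal sketch using LangChain's load_qa_chain helper as a stand-in for the
# visual component; the documents and question are made up for illustration.
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [
    Document(page_content="GenAI Stack chains wrap reusable LangChain components."),
    Document(page_content="The stuff method inserts all documents into a single prompt."),
]

# chain_type can be "stuff", "map_reduce", "map_rerank" or "refine"
combine_docs_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")

print(combine_docs_chain.run(input_documents=docs, question="How does the stuff method work?"))
```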

RetrievalQA Chain

The RetrievalQA chain is a chain component that integrates a retriever to fetch relevant documents and an LLM to generate answers for a given question based on the retrieved documents.

Parameters

  • CombineDocsChain: The chain used to combine the retrieved documents; it determines the combination strategy (chain type) of the RetrievalQA chain.

  • Memory [Optional]: Stores the chat responses as conversation history.

  • Retriever: The retriever used to fetch relevant documents.

Example usage:

The RetrievalQA chain requires a CombineDocsChain (which holds the LLM) and a Retriever, with Memory passed optionally. It acts as the final component that uses the attached components to answer user queries.
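
A rough sketch of this wiring with the underlying LangChain classes; the vectorstore variable is assumed to exist already:

```python
# A rough sketch of wiring a combine-documents chain into RetrievalQA;
# `vectorstore` is assumed to be an existing vector store.
from langchain.chains import RetrievalQA
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

combine_docs_chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")

qa = RetrievalQA(
    combine_documents_chain=combine_docs_chain,
    retriever=vectorstore.as_retriever(),
)

print(qa.run("Summarize the key findings of the uploaded documents."))
```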

SequentialLLMChain

SequentialLLMChain takes both documents extracted from a source and user-defined prompts. By feeding these inputs to the language model in a specific order, the chain builds on prior context and guides the model towards generating coherent and relevant text.

Parameters

  • LLM: A language model to be used for text generation.

  • Starting Prompt: This prompt template serves as the initial prompt to start the text generation process. It sets the tone, provides context, or asks for specific input to kickstart the generation.

  • Continuation Prompt: This prompt template is used for subsequent prompts in the text generation process. Once the initial prompt has been provided, the continuation prompt guides the generation of further text, maintaining coherence and relevance.

  • System Prompt: This prompt template injects system-level information or instructions into the text generation process, offering guidance based on system-level considerations.

  • Documents: Documents to be used in text generation.

Example usage:

This component is mostly used for text generation use cases. The Starting and Continuation prompts take a HumanMessagePromptTemplate as input, while the System prompt takes a SystemMessagePromptTemplate. The system message prompt contains the context to be passed to the LLM along with a guidance instruction. The starting prompt then kicks off the generation, and the continuation prompt refines the content into the required format.
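
The sketch below only sets up the three prompt templates with LangChain message templates; the SequentialLLMChain constructor shown in the trailing comment is an assumption based on this page, not a verified GenAI Stack API:

```python
# Only the prompt setup below uses verified LangChain classes; the SequentialLLMChain
# call in the comment is a hypothetical sketch of how the inputs fit together.
from langchain.prompts import HumanMessagePromptTemplate, SystemMessagePromptTemplate

system_prompt = SystemMessagePromptTemplate.from_template(
    "You are a technical writer. Use the following context:\n{context}"
)
starting_prompt = HumanMessagePromptTemplate.from_template(
    "Write the opening section of an article about {topic}."
)
continuation_prompt = HumanMessagePromptTemplate.from_template(
    "Continue the article in the same tone and cover {next_point}."
)

# Hypothetical wiring -- in the GenAI Stack UI these are connected as component inputs:
# chain = SequentialLLMChain(
#     llm=llm,
#     system_prompt=system_prompt,
#     starting_prompt=starting_prompt,
#     continuation_prompt=continuation_prompt,
#     documents=docs,
# )
```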

ConversationalRetrievalChain

Chain for having a conversation based on retrieved documents. This chain takes in the chat history (a list of messages) and a new question, and then returns an answer to that question.

Parameters

  • chain_type: The chain type that determines the combination strategy used for integrating the LLM and retriever components. Defaults to "stuff".

  • LLM: A language model to be used for text generation.

  • Memory [Optional]: Stores the chat responses as conversation history.

  • Retriever: The retriever used to fetch relevant documents.

Example usage:

This is a convenient way to give an LLM access to a Retriever, and to Memory that serves as chat history, for an application with access to an external knowledge source.
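
A minimal sketch using the underlying LangChain class; the model and the vectorstore variable are placeholders:

```python
# A minimal sketch using the underlying LangChain class; the model and the
# `vectorstore` variable are placeholders.
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chat = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),  # assumed existing vector store
    memory=memory,
    chain_type="stuff",                    # default combination strategy
)

result = chat({"question": "What were the main conclusions?"})
print(result["answer"])
```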

LLMCheckerChain

Chain for question-answering with self-verification.

Parameters

  • LLM: A language model to be used for QA and verification.

Example usage:

This component takes only an LLM as input and internally performs a chain of self-verification steps before giving the output.
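
A short sketch with the underlying LangChain class; the model and question are illustrative:

```python
# A short sketch with the underlying LangChain class; the model and question are illustrative.
from langchain.chains import LLMCheckerChain
from langchain.llms import OpenAI

checker = LLMCheckerChain.from_llm(OpenAI(temperature=0.7))

# The chain drafts an answer, lists its assumptions, checks each one,
# and only then returns a verified final answer.
print(checker.run("What type of mammal lays the biggest eggs?"))
```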

PromptRunner

The Prompt Runner chain is a chain component that executes a prompt with a specified language model (LLM) and returns the generated output. It is responsible for running a prompt through an LLM and managing the input and output processing.

Parameters

  • inputs: Key-value pairs of variables to be used in the prompt.

  • LLM: A language model used to generate the output for the prompt.

  • Prompt Template: PromptTemplate object containing the prompt to be passed to the LLM.

Example usage:

The Prompt Runner chain runs the given Prompt Template through the LLM. Additional parameters such as temperature can be specified as inputs to the chain.
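
PromptRunner itself is a GenAI Stack component; the sketch below only illustrates the equivalent behaviour with plain LangChain objects, and every name in it is illustrative:

```python
# PromptRunner is a GenAI Stack component; this sketch only illustrates the
# equivalent behaviour with plain LangChain objects, and all names are illustrative.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate(
    input_variables=["product"],
    template="Suggest three taglines for {product}.",
)

inputs = {"product": "an open-source vector database"}  # key-value pairs for the prompt
llm = OpenAI(temperature=0.9)                            # extra parameters such as temperature

print(llm(prompt_template.format(**inputs)))
```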
