Memories
Last updated
The Memory component stores all conversation turns in a buffer, enabling the chatbot to leverage chat history for more contextually relevant responses and to answer follow-up questions.
The ConversationBufferWindowMemory maintains a record of the exchanges that occur during a conversation, but it only retains the last k interactions.
For example, set a low k=5 to keep only the last 5 interactions in memory.
Parameters
input_key: The human query defined in the prompt. This key provides the user's input query to the chain.
Memory Size (k): The number of past interactions to retain in the window.
memory_key: The key under which the chat history is stored, e.g., chat_history.
output_key: When the chain produces multiple output variables, output_key specifies which one to store in memory (e.g., to pass to the next chain in a sequence).
return_messages (boolean): Defaults to False. If set to True, the history is returned as a list of messages instead of a single formatted string.
Example usage:
Chat memory is not needed for Conversation Buffer Window Memory. This component stores a fixed-size rolling window (of length equal to the memory size) of previous chat messages between the system and the user, to be passed to the LLM. Keep the LLM's context length in mind when specifying the memory size.
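The rolling-window behaviour can be sketched in plain Python. This is an illustrative stand-in with hypothetical class and method names, not the actual component implementation:

```python
from collections import deque

class BufferWindowMemory:
    """Illustrative stand-in: keeps only the last k exchanges."""

    def __init__(self, k=5, memory_key="chat_history"):
        self.memory_key = memory_key
        self.buffer = deque(maxlen=k)  # older exchanges fall off automatically

    def save_context(self, user_input, output):
        self.buffer.append((user_input, output))

    def load_memory_variables(self):
        history = "\n".join(f"Human: {q}\nAI: {a}" for q, a in self.buffer)
        return {self.memory_key: history}

memory = BufferWindowMemory(k=2)
for i in range(4):
    memory.save_context(f"question {i}", f"answer {i}")

# Only the last 2 interactions survive; questions 0 and 1 have been evicted.
print(memory.load_memory_variables()["chat_history"])
```

Because the window is fixed, the prompt size stays bounded no matter how long the conversation runs.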
This memory stores messages and later retrieves them into a variable.
Parameters
input_key: The human query defined in the prompt. This key provides the user's input query to the chain.
memory_key: The key under which the chat history is stored, e.g., chat_history.
output_key: When the chain produces multiple output variables, output_key specifies which one to store in memory (e.g., to pass to the next chain in a sequence).
return_messages (boolean): Defaults to False. If set to True, the history is returned as a list of messages instead of a single formatted string.
Example usage:
This component works similarly to the previous one, but it stores the entire chat history instead of a limited number of messages. However, this is not a good option for long conversations, since the full history can exceed the LLM's context window.
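A minimal sketch of the unbounded buffer, including the return_messages toggle described above (hypothetical class, for illustration only):

```python
class BufferMemory:
    """Illustrative stand-in: stores the entire conversation; nothing is evicted."""

    def __init__(self, memory_key="chat_history", return_messages=False):
        self.memory_key = memory_key
        self.return_messages = return_messages
        self.messages = []

    def save_context(self, user_input, output):
        self.messages.append(("human", user_input))
        self.messages.append(("ai", output))

    def load_memory_variables(self):
        if self.return_messages:
            # return_messages=True: history as a list of messages
            return {self.memory_key: list(self.messages)}
        # Default: history as a single formatted string
        text = "\n".join(f"{role.title()}: {content}" for role, content in self.messages)
        return {self.memory_key: text}

memory = BufferMemory(return_messages=True)
memory.save_context("Hi", "Hello!")
print(memory.load_memory_variables()["chat_history"])
```

Unlike the window variant, the message list here grows without bound, which is exactly why long conversations become a problem.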
The Entity Memory stores factual information about particular entities mentioned in a conversation. It uses an LLM (large language model) to extract details about these entities and gradually builds up its knowledge of them over time.
Parameters
LLM: The large language model used to extract entities.
chat_history_key: The key under which the chat history is stored.
input_key: The human query defined in the prompt. This key provides the user's input query to the chain.
Memory Size (k): The number of past interactions retained as context for entity extraction.
output_key: When the chain produces multiple output variables, output_key specifies which one to store in memory (e.g., to pass to the next chain in a sequence).
return_messages (boolean): Defaults to False. If set to True, the history is returned as a list of messages instead of a single formatted string.
Example usage:
ConversationEntityMemory extracts named entities from the chat history using an NLP model and generates summaries about those entities and their context. This is stored to be provided to the LLM instead of the entire chat history.
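The extract-and-summarize loop can be sketched as follows. The regex and the update function stand in for the LLM's entity-extraction and summary-refinement calls, and all names are hypothetical:

```python
import re

def update_entity_summary(old_summary, new_context):
    # Stand-in for the LLM call that refines what is known about an entity.
    return (old_summary + " " + new_context).strip()

class EntityMemory:
    """Illustrative stand-in: tracks facts per entity instead of the raw transcript."""

    def __init__(self):
        self.entity_store = {}

    def save_context(self, user_input, output):
        # Stand-in for LLM entity extraction: naively treat capitalized words as entities.
        for entity in re.findall(r"\b[A-Z][a-z]+\b", user_input):
            old = self.entity_store.get(entity, "")
            self.entity_store[entity] = update_entity_summary(old, user_input)

    def load_memory_variables(self, query):
        # Only summaries of entities mentioned in the query are passed to the LLM.
        relevant = {e: s for e, s in self.entity_store.items() if e in query}
        return {"entities": relevant}

memory = EntityMemory()
memory.save_context("Alice works at Acme", "Noted.")
print(memory.load_memory_variables("Tell me about Alice"))
```

The key idea is that the prompt receives a compact per-entity summary rather than the entire chat history.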
This type of memory uses an NLP model to extract subject-predicate-object style knowledge triples from the chat messages. This is then stored in a knowledge graph to recreate memory.
Parameters
input_key: The variable to be used as Chat Input when more than one variable is available.
Memory Size (k): The number of past interactions to retain.
LLM: The large language model used to extract knowledge triples.
memory_key: The key under which the chat history is stored, e.g., chat_history.
output_key: When the chain produces multiple output variables, output_key specifies which one to store in memory: the variable to be used as Chat Output (e.g., answer in a ConversationalRetrievalChain).
return_messages (boolean): Defaults to False. If set to True, the history is returned as a list of messages instead of a single formatted string.
Example usage:
The knowledge triples stored in an integrated knowledge graph are used in future conversations by querying the knowledge graph to retrieve relevant facts about mentioned entities to provide as additional context to the LLM.
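The triple store and its entity lookup can be sketched as follows (hypothetical names; in the real component an LLM performs the triple extraction from each message):

```python
class KGMemory:
    """Illustrative stand-in: stores subject-predicate-object triples."""

    def __init__(self):
        self.triples = []

    def save_triples(self, triples):
        # In the real component, an LLM extracts these triples from chat messages.
        self.triples.extend(triples)

    def get_relevant_facts(self, entity):
        # Retrieve every triple in which the entity appears as subject or object.
        return [(s, p, o) for s, p, o in self.triples if entity in (s, o)]

kg = KGMemory()
kg.save_triples([("Alice", "works at", "Acme"), ("Acme", "is located in", "Berlin")])
print(kg.get_relevant_facts("Acme"))  # both triples mention Acme
```

When an entity comes up again later in the conversation, the retrieved triples are formatted as facts and provided to the LLM as additional context.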
Conversation summary memory summarises the conversation as it happens and stores the current summary in memory. This memory can then be used to inject the summary of the conversation so far into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.
Parameters
input_key: The variable to be used as Chat Input when more than one variable is available.
LLM: The large language model used to generate the conversation summary.
memory_key: The key under which the chat history is stored, e.g., chat_history.
output_key: When the chain produces multiple output variables, output_key specifies which one to store in memory: the variable to be used as Chat Output (e.g., answer in a ConversationalRetrievalChain).
return_messages (boolean): Defaults to False. If set to True, the history is returned as a list of messages instead of a single formatted string.
Example usage:
This component reduces prompt length for long conversations by periodically summarizing the chat history, condensing the content passed to the LLM instead of including every message verbatim.
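The running-summary mechanism can be sketched like this; the summarize function below is a stand-in for the LLM call that condenses the previous summary plus the new messages, and all names are hypothetical:

```python
def summarize(previous_summary, new_lines):
    # Stand-in for the LLM call that condenses the old summary plus new messages.
    return (previous_summary + " " + new_lines).strip()

class SummaryMemory:
    """Illustrative stand-in: keeps one running summary instead of the full history."""

    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.summary = ""

    def save_context(self, user_input, output):
        new_lines = f"The human asked: {user_input} The AI replied: {output}"
        self.summary = summarize(self.summary, new_lines)

    def load_memory_variables(self):
        # Only the summary, not the verbatim messages, is injected into the prompt.
        return {self.memory_key: self.summary}

memory = SummaryMemory()
memory.save_context("What is a vector store?", "A database for embeddings.")
memory.save_context("Why use one?", "For semantic search over documents.")
print(memory.load_memory_variables()[memory.memory_key])
```

With a real LLM summarizer, the stored summary stays short even as the conversation grows, which is what keeps the token cost bounded.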
VectorStoreRetrieverMemory stores previous interactions in a vector database, where each snippet is represented as a point in high-dimensional space. When you ask a question, it searches this space for the snippets closest to your meaning, regardless of their initial order in the conversation.
Parameters
input_key: The variable to be used as Chat Input when more than one variable is available.
memory_key: The key under which the chat history is stored, e.g., chat_history.
retriever: The vector store to use as the retriever.
return_messages (boolean): Defaults to False. If set to True, the history is returned as a list of messages instead of a single formatted string.
Example usage:
In addition to the inputs specified for the previous components, this memory requires a vector store retriever (currently, only Weaviate is supported). The component stores conversation snippets in a vector store and queries the most relevant ones when required.
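Similarity-based retrieval over stored snippets can be sketched with a toy bag-of-words embedding and cosine similarity. Real deployments use a vector database such as Weaviate and a neural embedding model; every name here is hypothetical:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts (real systems use a neural embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class VectorMemory:
    """Illustrative stand-in: stores snippets and retrieves the closest by meaning."""

    def __init__(self):
        self.snippets = []

    def save(self, text):
        self.snippets.append((embed(text), text))

    def retrieve(self, query, top_k=1):
        q = embed(query)
        ranked = sorted(self.snippets, key=lambda s: cosine(q, s[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

memory = VectorMemory()
memory.save("My favorite food is pizza")
memory.save("The meeting is on Tuesday")
# Retrieval is by similarity, not by position in the conversation.
print(memory.retrieve("what food do I like"))
```

Note that the snippet about food is returned even though it was stored first: relevance, not recency or order, decides what reaches the LLM.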