Memory


Last updated 1 year ago


Although optional, memory is a crucial component of this pipeline, especially when building a chatbot. It stores every conversation in a buffer, enabling the chatbot to use chat history for more contextually relevant responses and to handle follow-up questions.

Why We Need Memory

A chatbot without memory treats every message as the start of a new conversation. By buffering each exchange, memory gives the chatbot access to the chat history, so its responses stay relevant to the ongoing conversation and users can ask follow-up questions that refer back to earlier turns.

The Use of Memory

Memory serves several crucial purposes:

  • Enhanced Contextual Responses: By retaining previous conversation history, memory enables the chatbot to better understand the context of current interactions, for example resolving a follow-up like "tell me more about that" by looking back at the earlier answer.

  • Seamless Conversation Flow: With access to past interactions, memory helps maintain continuity in conversations. The chatbot can recall previous topics discussed and smoothly transition between related subjects, creating a more natural and engaging user experience.

  • Improved User Engagement: Memory enables the chatbot to build rapport with users by recalling previous interactions and incorporating this knowledge into ongoing conversations.

Memory is an optional component in almost all chains. To store the conversation history, set the return messages option to true on the Memory component and connect it directly to either a RetrievalQA or an LLMChain chain.
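The buffering behaviour described above can be sketched in plain Python. This is a conceptual illustration, not GenAI Stack's actual implementation: the `BufferMemory` class, its method names, and the `return_messages` flag here simply mimic the idea of a conversation buffer (in LangChain terms, a `ConversationBufferMemory`) whose history is handed to the chain as structured messages when return messages is true.

```python
# Illustrative sketch of a conversation buffer memory.
# Assumption: class and method names are hypothetical, chosen to mirror
# the "return messages" flag mentioned above.

class BufferMemory:
    """Stores the running conversation so a chain can reuse it."""

    def __init__(self, return_messages=True):
        # When return_messages is True, history is returned as structured
        # (role, text) pairs; otherwise it is flattened into one string.
        self.return_messages = return_messages
        self.messages = []  # list of (role, text) tuples

    def save_context(self, user_input, ai_output):
        # Append one human/AI exchange to the buffer.
        self.messages.append(("human", user_input))
        self.messages.append(("ai", ai_output))

    def load_memory(self):
        # Hand the buffered history back to the chain.
        if self.return_messages:
            return list(self.messages)
        return "\n".join(f"{role}: {text}" for role, text in self.messages)


memory = BufferMemory(return_messages=True)
memory.save_context("What is RAG?", "Retrieval Augmented Generation combines ...")
memory.save_context("Can you give an example?", "Sure: a document QA bot ...")

# The chain prepends this history to the prompt, which is what lets the
# LLM resolve the follow-up "Can you give an example?" in context.
history = memory.load_memory()
```

In GenAI Stack you do not write this code yourself; the Memory component provides it, and connecting it to a RetrievalQA or LLMChain node wires the buffered history into the chain's prompt.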

Conclusion

For more information on Memory, check the documentation here.