Simple QA using Open Source Large Language Models

In this use case, we will build a simple question-and-answering assistant, similar to ChatGPT, with the help of open-source Large Language Models.


Last updated 1 year ago


Step 1: Define your prompt

Every Large Language Model requires a well-crafted prompt template to guide its behavior effectively. For a Q&A chatbot, a Prompt Template is essential: it gives the Large Language Model clear instructions about its task, ensuring the model understands what is expected and generates accurate responses based on those instructions.

Click on Prompt Template, and edit the template with your prompt:

Know more about Prompts: https://docs.aiplanet.com/components/prompts
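Conceptually, a prompt template is just a string with named placeholders that get filled in with the user's input at run time. Below is a minimal plain-Python sketch of the idea (not the GenAI Stack component itself; the template text is an illustrative example):

```python
# A minimal illustration of what a Prompt Template does: a string with
# named placeholders that are filled in with user input at run time.
QA_TEMPLATE = (
    "You are a helpful assistant. Answer the question concisely.\n"
    "Question: {question}\n"
    "Answer:"
)

def render_prompt(template: str, **variables: str) -> str:
    """Fill the template's placeholders with the given variables."""
    return template.format(**variables)

print(render_prompt(QA_TEMPLATE, question="What is an LLM?"))
```

In the GenAI Stack UI, editing the Prompt Template component plays the role of editing `QA_TEMPLATE` here.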

Step 2: Define your LLM - Open Source LLM

To make the responses intelligent, we need a Large Language Model that can understand instructions and generate corresponding responses. Here we use an open-source Large Language Model because of its accessibility and cost-effectiveness. With the HuggingFace Hub Large Language Model component, you simply enter your HuggingFace access token and a model name.

Notably, this approach gives you access to 7B-parameter Large Language Models without having to load them manually.
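Under the hood, a hosted HuggingFace Hub model only needs those two inputs: an access token and a model repo id. As a rough, stdlib-only sketch of the kind of HTTP call involved (the serverless Inference API endpoint is shown for illustration; the token and model repo id below are placeholders, and the component in the stack handles all of this for you):

```python
import json
import urllib.request

def build_hf_request(token: str, repo_id: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an Inference API request for a hosted model."""
    url = f"https://api-inference.huggingface.co/models/{repo_id}"
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder token and an example open-source 7B model repo id.
req = build_hf_request("hf_xxx", "mistralai/Mistral-7B-Instruct-v0.2", "Hello!")
# urllib.request.urlopen(req) would send the request; omitted to stay offline.
```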

Step 3: Build Chain

To link the prompt with the Large Language Model (LLM), we use an LLM Chain. Connect your Prompt Template and LLM directly to the LLM Chain; you can optionally include a Memory component if needed. The LLM Chain passes the formatted prompt to the LLM, giving it the instructions it needs to produce sensible responses.
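Conceptually, the chain formats the prompt template with your input, sends the result to the LLM, and (optionally) records the exchange in memory. Here is a self-contained sketch of that wiring, with a stub function standing in for a real hosted model:

```python
from typing import Callable, Optional

class SimpleLLMChain:
    """Conceptual sketch of an LLM Chain: prompt template + LLM + optional memory."""

    def __init__(self, template: str, llm: Callable[[str], str],
                 memory: Optional[list] = None):
        self.template = template
        self.llm = llm
        self.memory = memory  # optional list of (prompt, answer) pairs

    def run(self, **variables: str) -> str:
        prompt = self.template.format(**variables)   # fill the template
        answer = self.llm(prompt)                    # call the model
        if self.memory is not None:
            self.memory.append((prompt, answer))     # remember the exchange
        return answer

# Stub LLM so the sketch runs offline; a real chain would call a hosted model.
echo_llm = lambda prompt: f"(model reply to: {prompt!r})"
chain = SimpleLLMChain("Question: {question}\nAnswer:", echo_llm, memory=[])
print(chain.run(question="What is a chain?"))
```

Dragging the Prompt Template, LLM, and (optionally) Memory components onto the LLM Chain in the canvas corresponds to passing the three constructor arguments here.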

Once the chain is complete, click the build icon in the bottom-right corner of the page.

Step 4: Testing

Once the build is complete, you can click on the chat icon to test the flow.

If you receive a response, the flow is running successfully, and you can now deploy the flow to share it with others.

Know more about LLMs: https://docs.aiplanet.com/components/large-language-models

Know more about Chains: https://docs.aiplanet.com/components/chains

Check out how to use the Chat Interface: https://docs.aiplanet.com/quickstart/chat-interface-genai-stack-chat