Starter guide

Let's get started with building your first GenAI stack!

This guide introduces the use case Chat with Documentation, which lets you ask questions in plain language and get relevant answers from your documents, making your information more accessible and useful than ever before. The documents we will use here are the pages of this GitBook itself.

Once we're done, we will have a chatbot that can answer queries related to any page of AI Planet's GenAI Stack GitBook! Let's begin!

Document Loader

The first step, of course, is to load your documents. Our platform offers multiple loaders to suit your data needs. A PDF loader, for instance, requires you to upload your document by clicking the file upload icon. For this use case, we will use the GitBook loader to load this documentation! Simply drag and drop the loader and paste the URL.

(Note: the loader loads only the single documentation page at the specified URL; try it out with different pages of this GitBook.)
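Conceptually, a loader like this fetches a page and strips the HTML down to plain text. Here is a rough, hypothetical sketch of that idea using only the Python standard library; the platform's actual loader implementation may differ.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text from an HTML page, skipping scripts and styles."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def load_page_text(html: str) -> str:
    """Return the visible text of a page as one whitespace-joined string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

html = "<html><body><h1>GenAI Stack</h1><p>Chat with your docs.</p></body></html>"
print(load_page_text(html))
```

A real loader would also fetch the URL over HTTP and handle navigation chrome; this sketch only shows the text-extraction step.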

Text Splitters

For our data to be ingested, we use Text Splitters, which break the data into smaller, defined chunks, making it easier to retrieve relevant content as you will see later. We can specify the size of these chunks, the overlap between them, and the separator. The document loader serves as the input to this component, as shown below!
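The chunk size and overlap parameters can be illustrated with a minimal sketch: slide a fixed-size window over the text, stepping forward by less than the window size so consecutive chunks share some characters. (This toy version ignores the separator parameter and splits on raw characters; real splitters usually break on separators to keep sentences intact.)

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Slide a window of `chunk_size` characters across the text, stepping
    by `chunk_size - overlap` so consecutive chunks share `overlap` chars."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "GenAI Stack lets you chunk, embed and retrieve your documentation. " * 4
for chunk in split_text(doc, chunk_size=80, overlap=10):
    print(repr(chunk))
```

The overlap matters because a fact that straddles a chunk boundary would otherwise be lost to retrieval; repeating a little context in each neighbour keeps such facts whole in at least one chunk.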

Embeddings and Vectorstore

Now, this chunked data cannot be stored as is, since that would make fast retrieval difficult. Instead, each chunk is converted into a unique numerical code, like a fingerprint, using a process called embedding. This code captures the key information and meaning within the chunk.

These codes are then stored in a special database called a vectorstore. This database is optimised for efficiently searching and retrieving information based on these numerical representations.

For more details about the parameters, inputs, and outputs, check out the Components section!

Large Language Model

The soul of this entire pipeline is the LLM, or Large Language Model. This serves as the engine that answers your questions. The Hugging Face Hub offers more than 300k models to choose from. We can also use other LLMs like OpenAI's GPT, Vertex AI, and other models (check the LLMs in the Components section).

We will use Google's gemma-7b model for this guide! (This requires a Hugging Face Hub token and authorisation. As an alternative, HuggingFaceH4/zephyr-7b-alpha can also be used.)

Prompt

We can instruct our LLM to behave in a certain way, or to give responses according to a set of rules, by defining a Prompt component. We have a set of Prompt formats you can use; check the Components section for more details!

We choose a PromptTemplate component and, after clicking the Template field, specify the prompt as shown below.
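A prompt template is essentially a parameterised string with placeholders that the chain fills in at query time. The sketch below uses the `{context}` and `{question}` variable names common in retrieval-QA prompts; the exact variable names expected on the platform may differ.

```python
# Hypothetical retrieval-QA prompt; the wording and variable names are
# illustrative, not the platform's built-in template.
TEMPLATE = """You are a helpful assistant for the GenAI Stack documentation.
Answer the question using only the context below. If the answer is not in
the context, say you don't know.

Context: {context}
Question: {question}
Answer:"""

prompt = TEMPLATE.format(
    context="Text splitters chunk documents into smaller pieces.",
    question="What do text splitters do?",
)
print(prompt)
```

Grounding instructions like "use only the context below" are what keep the chatbot's answers tied to your documents rather than the model's general knowledge.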

Chain

Now it's time to integrate all our components!

Chains are sequences of calls to an LLM, a tool, or an external data-processing step. They are used for crafting multi-step workflows and simulating intricate interactions with language models. We will use the RetrievalQA Prompt chain in this guide, which lets us pass in the previously defined prompt and add memory to our chatbot as well (more on this below).

We connect our LLM, as well as our Vectorstore, which serves as the retriever, to the chain.
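Stripped of all machinery, a retrieval-QA chain is just three steps wired together: retrieve relevant chunks, fill the prompt with them, and call the LLM. The sketch below uses stand-in `retrieve` and `llm` functions (hypothetical placeholders, not real components) to show the data flow.

```python
def retrieval_qa(question: str, retrieve, llm, template: str) -> str:
    """Minimal retrieval-QA chain: fetch relevant chunks, fill the
    prompt template, and hand the result to the language model."""
    context = "\n".join(retrieve(question))
    return llm(template.format(context=context, question=question))

# Stand-ins for the real components (illustrative only):
def retrieve(question):
    # A vectorstore would do similarity search here.
    return ["Text splitters chunk documents into smaller pieces."]

def llm(prompt):
    # A real hosted model call goes here; this stub just echoes the last line.
    return "Echo: " + prompt.splitlines()[-1]

template = "Context: {context}\nQuestion: {question}"
print(retrieval_qa("What do splitters do?", retrieve, llm, template))
```

Because each step is a plain function of the previous step's output, you can swap any component (a different retriever, a different model) without touching the rest of the chain, which is exactly what the drag-and-drop canvas lets you do visually.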

Memory

We want our chatbot to remember the conversation history so it can answer questions that refer back to previous queries. For this, we have various memory-based components; for now, we use the ConversationWindowBuffer Memory.
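The idea behind a window-buffer memory is simple: keep only the last k exchanges, so old turns fall off and the prompt stays small. A minimal sketch of that behaviour (names and details are illustrative, not the component's actual API):

```python
from collections import deque

class ConversationWindowMemory:
    """Keeps only the last `k` exchanges; older turns are dropped
    automatically so the conversation context stays bounded."""
    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)

    def save(self, user: str, bot: str) -> None:
        self.turns.append((user, bot))

    def history(self) -> str:
        """Render the remembered turns as text to prepend to the next prompt."""
        return "\n".join(f"Human: {u}\nAI: {b}" for u, b in self.turns)

memory = ConversationWindowMemory(k=2)
memory.save("Hi", "Hello!")
memory.save("What is a loader?", "It ingests documents.")
memory.save("And a splitter?", "It chunks documents.")
print(memory.history())
```

With `k=2`, the first exchange has already been evicted by the third `save`; only the two most recent turns would be injected into the next prompt. The trade-off is that the bot forgets anything older than the window.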

Now we have the entire pipeline built! Hooray! Let's check if it works now.

Build Stack

Now we just click the () icon in the bottom-right corner to build the stack. This validates the entire stack and tells us whether all the components are working.

Great! Now that everything is built, the chat icon is activated, meaning the chatbot is ready to answer your queries. Click on this icon to open the chat interface!

You can now chat with this GitBook documentation and ask any queries related to GenAI Stack!

The chatbot can now answer queries related to this documentation! That's great! This brings us to the end of this guide. Note that the quality of the responses depends on each of the components used, so feel free to try out different splitters, LLMs, prompts, and so on. Only when you explore can you build something great!
