In this video, we work through building a chatbot that uses Retrieval Augmented Generation (RAG) from start to finish. We use OpenAI's gpt-3.5-turbo Large Language Model (LLM) as the "engine", implemented via LangChain's ChatOpenAI class, OpenAI's text-embedding-ada-002 model for embeddings, and the Pinecone vector database as our knowledge base.
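For reference, here is a minimal sketch of the pipeline covered in the video, assuming the pre-0.1 LangChain API and pinecone-client v2; the index name, API keys, metadata field, and prompt wording are placeholders rather than the exact code from the walkthrough:

```python
# Minimal RAG sketch (pre-0.1 LangChain API, pinecone-client v2 assumed;
# keys, index name, and metadata field are placeholders, not from the video).
import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import HumanMessage, SystemMessage

# LLM "engine" and embedding model
chat = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key="OPENAI_API_KEY")
embed = OpenAIEmbeddings(model="text-embedding-ada-002", openai_api_key="OPENAI_API_KEY")

# Pinecone index acting as the knowledge base (assumes documents are already upserted
# with their text stored under the "text" metadata field)
pinecone.init(api_key="PINECONE_API_KEY", environment="PINECONE_ENV")
index = pinecone.Index("rag-chatbot")  # placeholder index name

def augment_prompt(query: str, k: int = 3) -> str:
    # Embed the query, retrieve the top-k most similar chunks, and prepend them as context.
    xq = embed.embed_query(query)
    res = index.query(vector=xq, top_k=k, include_metadata=True)
    context = "\n".join(m["metadata"]["text"] for m in res["matches"])
    return f"Using the contexts below, answer the query.\n\nContexts:\n{context}\n\nQuery: {query}"

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content=augment_prompt("What is so special about this dataset?")),
]
print(chat(messages).content)
```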
📌 Code:
[ Link ]
🌲 Subscribe for Latest Articles and Videos:
[ Link ]
👋🏼 AI Consulting:
[ Link ]
👾 Discord:
[ Link ]
Twitter: [ Link ]
LinkedIn: [ Link ]
00:00 Chatbots with RAG
00:59 RAG Pipeline
02:35 Hallucinations in LLMs
04:08 LangChain ChatOpenAI Chatbot
09:11 Reducing LLM Hallucinations
13:37 Adding Context to Prompts
17:47 Building the Vector Database
25:14 Adding RAG to Chatbot
28:52 Testing the RAG Chatbot
32:56 Important Notes when using RAG
#artificialintelligence #nlp #ai #langchain #openai #vectordb
Chatbots with RAG: LangChain Full Walkthrough
Tags
python, machine learning, artificial intelligence, natural language processing, semantic search, similarity search, vector similarity search, vector database, retrieval augmented generation, retrieval augmented generation tutorial, pinecone vector database, vector search, langchain, langchain chatbot, large language models, chatbot python, chatbot rag, chatbot ai, rag tutorial, james briggs, openai gpt-3.5-turbo, openai chatbot, chatbot full project, chatbot full tutorial, ai