Need help or want to connect with like-minded AI developers? Join the conversation: [ Link ]
Fully Local RAG for Your PDF Docs (Private ChatGPT with LangChain, RAG, Ollama, Chroma)
Teach your local Ollama new tricks with your own data in less than 10 minutes, using RAG with LangChain and Chroma. Completely private and FREE! Upload multiple PDFs into your vector store and create embeddings so you can query the database and give the LLM the context it needs. This is a beginner tutorial with multiple examples of using PDFs in your RAG pipeline, featuring PyPDFLoader, RecursiveCharacterTextSplitter, Chroma, OllamaEmbeddings, and ChatOllama. Models used: mxbai-embed-large and llama3.2, along with OpenAI's text-embedding-3-large and gpt-4o.
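For reference, here is a minimal sketch of the ingest-and-query flow covered in the video (the PDF path, chunk sizes, and persist directory are illustrative placeholders rather than the repo's actual settings, and import paths can vary slightly between LangChain versions):

    # Load a PDF, split it, embed the chunks locally, and answer a question with context.
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_community.vectorstores import Chroma
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.chat_models import ChatOllama

    # 1. Load the PDF and split it into overlapping chunks.
    docs = PyPDFLoader("my_document.pdf").load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(docs)

    # 2. Embed the chunks with a local Ollama model and persist them in Chroma.
    embeddings = OllamaEmbeddings(model="mxbai-embed-large")
    store = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

    # 3. Retrieve the most relevant chunks and pass them to the chat model as context.
    question = "What does the document say about pricing?"
    context = "\n\n".join(d.page_content for d in store.similarity_search(question, k=4))
    llm = ChatOllama(model="llama3.2")
    answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
    print(answer.content)

If you'd rather use the hosted models shown in the video, swap OllamaEmbeddings and ChatOllama for OpenAI's text-embedding-3-large and gpt-4o.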
Included is a bonus web scraper Python script that uses pyppeteer to download any web page as a local PDF. Once it's added to the vector store, you can start chatting with your entire website!
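The bonus scraper works roughly like the sketch below (the URL and output filename are placeholders; the actual script in the repo may differ):

    # Render a web page in headless Chromium via pyppeteer and save it as a PDF.
    import asyncio
    from pyppeteer import launch

    async def save_page_as_pdf(url: str, path: str) -> None:
        browser = await launch()
        page = await browser.newPage()
        # Wait until network activity settles so dynamic content is rendered.
        await page.goto(url, {"waitUntil": "networkidle2"})
        await page.pdf({"path": path, "format": "A4"})
        await browser.close()

    asyncio.run(save_page_as_pdf("https://example.com", "example.pdf"))

The resulting PDF can then be ingested into Chroma just like any other document.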
The possibilities are endless with the custom-knowledge chatbots you can create using LangChain and Ollama. But the best part is you don't have to build one from scratch: you can simply clone the GitHub repo (link below) and drop in your own PDF files! The solution is similar to ChatPDF, which lets you chat with your docs ([ Link ]), but it is completely local, private, and fully customizable to your needs.
Ever wanted to chat with your PDFs or train ChatGPT on your own data? This video will show you how!
GitHub: [ Ссылка ]
Timestamps:
0:00 - Intro
0:52 - GitHub walkthrough
3:40 - RAG
6:15 - Models
7:35 - Ingestion
10:25 - Chatbot
14:25 - Chroma
18:30 - Scraper
20:05 - QA
22:30 - Conclusion