Build a fully local, private RAG Application with Open Source Tools (Meta Llama 3, Ollama, PostgreSQL and pgai)
🛠 𝗥𝗲𝗹𝗲𝘃𝗮𝗻𝘁 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲𝘀
📌 Try pgai PostgreSQL extension ⇒ [ Link ]
📌 GitHub repo with the code used in the video ⇒ [ Link ]
📌 Use open-source LLMs in PostgreSQL with Ollama and pgai ⇒ [ Link ]
📌 What is pgai? ⇒ [ Link ]
📌 Install TimescaleDB / PostgreSQL using Docker ⇒ [ Link ]
📌 Ollama ⇒ [ Link ]
📌 Nomic Embed ⇒ [ Link ]
📌 Meta Llama 3 ⇒ [ Link ]
📌 Try pgai on Timescale free for 30 days ⇒ [ Link ]
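The core pgai workflow from the video can be sketched in a few SQL statements. This is an illustrative fragment, not the exact code from the repo: function names like ai.ollama_embed follow the pgai documentation, but check the GitHub repo above for the precise signatures in your installed version.

```sql
-- Sketch of local RAG inside PostgreSQL with pgai (assumes Ollama is
-- running locally with the nomic-embed-text and llama3 models pulled).
CREATE EXTENSION IF NOT EXISTS ai CASCADE;  -- also installs pgvector

CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(768)  -- nomic-embed-text produces 768-dim vectors
);

-- Embed a document with a local Ollama model, entirely inside PostgreSQL.
INSERT INTO documents (content, embedding)
VALUES ('Ollama runs LLMs locally',
        ai.ollama_embed('nomic-embed-text', 'Ollama runs LLMs locally'));

-- Retrieve the chunks nearest to a question using pgvector's
-- cosine-distance operator (<=>); feed these to Llama 3 as context.
SELECT content
FROM documents
ORDER BY embedding <=> ai.ollama_embed('nomic-embed-text', 'What is Ollama?')
LIMIT 3;
```

Because the embedding call happens in SQL, no data ever leaves your database server, which is the privacy argument the video makes.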
Join Hervé Ishimye, a Developer Advocate at Timescale, as he demonstrates how to build a private Retrieval-Augmented Generation (RAG) application using open-source tools. Learn to use Meta Llama 3.2 as your language model, run it locally with Ollama, and use PostgreSQL as your vector database. Keeping everything on your own machine preserves data privacy while giving you more control, lower latency, and reduced costs. Hervé walks through setting up the RAG application with Docker, configuring Ollama and PostgreSQL, and generating embeddings, with an emphasis on data safety and on integrating pgai and pgvector in your local environment.
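The retrieval step at the heart of any RAG application — rank stored chunks by vector similarity to the query, then pass the best matches to the LLM — can be sketched without any external services. The toy 3-dimensional vectors below stand in for the 768-dimensional embeddings Nomic Embed would produce, and in the real application pgvector performs this ranking inside PostgreSQL:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, top_k=2):
    """Return the top_k document texts ranked by similarity to the query."""
    ranked = sorted(
        corpus,
        key=lambda d: cosine_similarity(query_vec, d["embedding"]),
        reverse=True,
    )
    return [d["text"] for d in ranked[:top_k]]

# Toy "embeddings"; a real pipeline would call an embedding model instead.
corpus = [
    {"text": "PostgreSQL stores the vectors", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Llama 3 generates the answer",  "embedding": [0.0, 0.2, 0.9]},
    {"text": "Ollama runs models locally",    "embedding": [0.8, 0.3, 0.1]},
]

print(retrieve([1.0, 0.0, 0.0], corpus, top_k=1))
# → ['PostgreSQL stores the vectors']
```

The retrieved texts would then be interpolated into the prompt sent to Llama 3, grounding its answer in your own data.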
📚 𝗖𝗵𝗮𝗽𝘁𝗲𝗿𝘀
00:00 Why private RAG?
02:28 Tools for Building a Private RAG Application
03:02 Deep Dive into Ollama and PostgreSQL
06:09 Setting Up Your Local Environment
10:38 Implementing the RAG Application
14:31 Conclusion and Final Thoughts
🐯 𝗔𝗯𝗼𝘂𝘁 𝗧𝗶𝗺𝗲𝘀𝗰𝗮𝗹𝗲
At Timescale, we see a world made better via innovative technologies, and we are dedicated to serving software developers and businesses worldwide, enabling them to build the next wave of computing. Timescale is a remote-first company with a global workforce backed by top-tier investors with a track record of success in the industry.
💻 𝗙𝗶𝗻𝗱 𝗨𝘀 𝗢𝗻𝗹𝗶𝗻𝗲!
🔍 Website ⇒ [ Link ]
🔍 Slack ⇒ [ Link ]
🔍 GitHub ⇒ [ Link ]
🔍 Twitter ⇒ [ Link ]
🔍 Twitch ⇒ [ Link ]
🔍 LinkedIn ⇒ [ Link ]
🔍 Timescale Blog ⇒ [ Link ]
🔍 Timescale Documentation ⇒ [ Link ]