Llama 3.1 is the new open-source state of the art, but how good is it at function calling (tool use) for agents? In this video, we test the new 70B and 8B Llamas via the Groq API on parallel and nested function calls, along with a specialized fine-tune for function calling. The results will surprise you!
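For context on what "parallel function calls" means in the video: with an OpenAI-compatible API such as Groq's, the model can return several tool calls in a single turn, each with JSON-encoded arguments, and the client executes them and feeds the results back. The sketch below simulates that client-side dispatch loop locally; the tool names and the simulated `tool_calls` payload are illustrative assumptions, not taken from the video (the actual API calls are in the linked notebooks).

```python
import json

# Illustrative local tools the model is allowed to call.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21}  # stub; real code would hit a weather API

def get_time(city: str) -> dict:
    return {"city": city, "time": "14:00"}  # stub

TOOLS = {"get_weather": get_weather, "get_time": get_time}

# A parallel function call: the model requested two tools in one turn.
# This mimics the shape of the `tool_calls` field in a chat completion.
simulated_tool_calls = [
    {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
    {"function": {"name": "get_time", "arguments": '{"city": "Paris"}'}},
]

def dispatch(tool_calls):
    """Execute each requested tool and collect results for the next model turn."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append(fn(**args))
    return results

print(dispatch(simulated_tool_calls))
# → [{'city': 'Paris', 'temp_c': 21}, {'city': 'Paris', 'time': '14:00'}]
```

A nested call works the same way, except one tool's result becomes an argument to a follow-up tool call in the model's next turn.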
LINKS:
Check out Langtrace: [ Link ]
Github: [ Link ]
Langtrace Discord: [ Link ]
Notebooks:
Llama 3.1 70B: [ Link ]
Llama 3.1 8B: [ Link ]
Llama 3 70B fine-tuned: [ Link ]
💻 RAG Beyond Basics Course:
[ Link ]
Let's Connect:
🦾 Discord: [ Link ]
☕ Buy me a Coffee: [ Link ]
🔴 Patreon: [ Link ]
💼 Consulting: [ Link ]
📧 Business Contact: engineerprompt@gmail.com
Become a Member: [ Link ]
💻 Pre-configured localGPT VM: [ Link ] (use code PromptEngineering for 50% off).
Sign up for the Newsletter, localgpt:
[ Link ]
TIMESTAMPS:
00:00 Introduction to Llama 3.1 and Function Calling
01:13 Setting Up the Environment
03:30 Testing Basic Function Calls
09:43 Exploring Parallel Function Calls
15:29 Nested Function Calls with Movie Recommendations
17:43 Comparing Models: 70B vs 8B
20:13 Specialized Function Calling Models from Groq
21:43 Final Thoughts and Recommendations
All Interesting Videos:
Everything LangChain: [ Link ]
Everything LLM: [ Link ]
Everything Midjourney: [ Link ]
AI Image Generation: [ Link ]