Meta has been at the forefront of the open-source AI ecosystem, and with the recent release of Llama 3.1, it has officially created the largest open-source model to date. So, what's the secret behind the performance gains of Llama 3.1? And what will the future of open-source AI look like?
Thomas Scialom is a Senior Staff Research Scientist (LLMs) at Meta AI and one of the co-creators of the Llama family of models. Prior to joining Meta, Thomas worked as a teacher, lecturer, speaker, and quant trading researcher.
In the episode, Adel and Thomas explore Llama 405B, its new features and improved performance, the challenges and best practices in training LLMs, pre- and post-training processes, the future of LLMs and AI, open- vs closed-source models, the GenAI landscape, the scalability of AI models, current research and future trends, and much more.
Find DataFramed on DataCamp [ Link ]
and on your preferred podcast streaming platform:
Apple Podcasts:
[ Link ]
Spotify:
[ Link ]
Links Mentioned in the Show:
Meta - Introducing Llama 3.1: Our most capable models to date - [ Link ]
Download the Llama Models - [ Link ]
[Course] Working with Llama 3 - [ Link ]
[Skill Track] Developing AI Applications - [ Link ]
Related Episode: Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com - [ Link ]
New to DataCamp?
Learn on the go using the DataCamp mobile app - [ Link ]
Empower your business with world-class data and AI skills with DataCamp for Business - [ Link ]