Join Ananya Kumar, a fifth-year PhD student at Stanford University, as he delves into the world of foundation models.
In this informative video, he discusses his work on developing better algorithms for pre-training and fine-tuning foundation models, with a focus on robustness and safety. He provides a comprehensive tutorial on foundation models, their capabilities, and how they can be adapted to a variety of tasks.
See more videos from Snorkel here: youtube.com/channel/UC6MQ2p8gZFYdTLEV8cysE6Q?sub_confirmation=1
Ananya also highlights the potential risks and harms of these models, emphasizing the need for careful usage. He then discusses the Center for Research on Foundation Models at Stanford and its interdisciplinary approach to the advancement and responsible use of foundation models. Toward the end, he takes a deep dive into a specific project showing how fine-tuning can distort pre-trained features and underperform out of distribution.
This video is a must-watch for anyone interested in machine learning, AI models, and their real-world applications.
More related videos: [ Link ]
Timestamps:
00:00 Introduction
00:43 Overview of Foundation Models
01:38 Definition of Foundation Models
02:40 Training Foundation Models
04:08 Using Foundation Models
04:43 Methods to Utilize Foundation Models
06:40 Prompt Tuning Techniques
09:59 Center for Research on Foundation Models
11:00 Social Responsibility and Technical Foundation
12:54 Interdisciplinary Research
13:06 Deep Dive into Fine-Tuning
16:25 Challenges with Fine-Tuning
18:38 Solutions for Better Model Performance
22:27 Summary of Findings
23:12 Conclusion
#foundationmodels #datascience #machinelearning