Unpacking the multilayer perceptrons in a transformer, and how they may store facts
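As a rough sketch of the video's subject, here is what a transformer's MLP block computes: an up-projection, an elementwise nonlinearity, and a down-projection added back to the residual stream. This is a minimal illustration, not the video's code; the dimensions, ReLU nonlinearity, and names are illustrative (GPT-style models typically use GELU and a hidden layer about 4x the model width).

```python
import numpy as np

def mlp_block(x, W_up, b_up, W_down, b_down):
    # Up-project: each column of W_up can be read as a direction
    # the block "asks about" in the residual stream.
    h = x @ W_up + b_up
    # Elementwise nonlinearity (ReLU here; GPT-style models use GELU).
    h = np.maximum(h, 0.0)
    # Down-project back to model width and add to the residual stream.
    return x + h @ W_down + b_down

d_model, d_hidden = 64, 256  # hidden layer 4x wider, as in GPT-style models
rng = np.random.default_rng(0)
x = rng.normal(size=(1, d_model))
out = mlp_block(
    x,
    rng.normal(size=(d_model, d_hidden)) * 0.05, np.zeros(d_hidden),
    rng.normal(size=(d_hidden, d_model)) * 0.05, np.zeros(d_model),
)
print(out.shape)  # (1, 64): same shape as the input, so blocks can stack
```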
Instead of sponsored ad reads, these lessons are funded directly by viewers: [ Link ]
An equally valuable form of support is to share the videos.
AI Alignment Forum post from the DeepMind researchers referenced at the video's start:
[ Link ]
Anthropic posts about superposition referenced near the end:
[ Link ]
[ Link ]
Additional resources, offered by Neel Nanda, for those interested in learning more about mechanistic interpretability:
Mechanistic interpretability paper reading list
[ Link ]
Getting started in mechanistic interpretability
[ Link ]
An interactive demo of sparse autoencoders (made by Neuronpedia)
[ Link ]
Coding tutorials for mechanistic interpretability (made by ARENA)
[ Link ]
Russian audio track: Vlad Burmistrov.
Sections:
0:00 - Where facts in LLMs live
2:15 - Quick refresher on transformers
4:39 - Assumptions for our toy example
6:07 - Inside a multilayer perceptron
15:38 - Counting parameters
17:04 - Superposition
21:37 - Up next
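The superposition section (17:04) rests on a geometric fact worth seeing numerically: a high-dimensional space can hold far more nearly-orthogonal directions than it has dimensions, so a model can pack many features into one layer with only small interference. The sketch below uses random unit vectors as a stand-in for feature directions; the counts and the random construction are illustrative, not taken from a trained model.

```python
import numpy as np

# Pack 10x more "feature" directions than dimensions and measure
# how much any two of them overlap.
rng = np.random.default_rng(0)
d, n_features = 100, 1000
V = rng.normal(size=(n_features, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize to unit vectors

dots = V @ V.T                 # pairwise dot products
np.fill_diagonal(dots, 0.0)    # ignore each vector's dot with itself
max_interference = np.abs(dots).max()
print(max_interference)  # well below 1: the directions barely overlap
```

Repeating this with a larger d shrinks the interference further, which is one way to see why wider layers can store disproportionately many features.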
------------------
These animations are largely made using a custom Python library, manim. See the FAQ comments here:
[ Link ]
[ Link ]
[ Link ]
All code for specific videos is visible here:
[ Link ]
The music is by Vincent Rubinetti.
[ Link ]
[ Link ]
[ Link ]
------------------
3blue1brown is a channel about animating math, in all senses of the word animate. If you're reading the bottom of a video description, I'm guessing you're more interested than the average viewer in lessons here. It would mean a lot to me if you chose to stay up to date on new ones, either by subscribing here on YouTube or otherwise following on whichever platform below you check most regularly.
Mailing list: [ Link ]
Twitter: [ Link ]
Instagram: [ Link ]
Reddit: [ Link ]
Facebook: [ Link ]
Patreon: [ Link ]
Website: [ Link ]