In this video we go back to the highly influential Google paper that introduced the Sparsely-Gated Mixture-of-Experts (MoE) layer, with authors including Geoffrey Hinton.
The paper is titled "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer". MoE is widely used today in many top Large Language Models. Interestingly, it was published at the beginning of 2017, while the "Attention Is All You Need" paper, which introduced Transformers, was published later that year, also by Google. The purpose of this video is to understand why the Mixture-of-Experts method is important and how it works.
Paper page - [ Link ]
Blog post - [ Link ]
-----------------------------------------------------------------------------------------------
✉️ Join the newsletter - [ Link ]
👍 Please like & subscribe if you enjoy this content
-----------------------------------------------------------------------------------------------
Chapters:
0:00 Why is MoE needed?
1:33 Sparse MoE Layer
3:41 MoE Paper's Figure