In this video we discuss how large language models (LLMs) can be trained to follow instructions with human feedback. The paper introduces InstructGPT, which fine-tunes GPT-3 on human feedback to align the model with user intent across a range of tasks. By collecting datasets of labeler demonstrations and rankings of model outputs, InstructGPT, despite having far fewer parameters than GPT-3, produces outputs that human evaluators prefer, while also improving truthfulness and reducing toxic output generation.
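As a quick taste of the method, here is a minimal sketch (in PyTorch, with random scores standing in for a real reward model) of the pairwise ranking loss used to train the reward model in the RLHF pipeline described in the paper:

import torch
import torch.nn.functional as F

def reward_ranking_loss(r_chosen, r_rejected):
    # Reward-model objective: -log sigmoid(r_chosen - r_rejected), where
    # r_chosen / r_rejected are scalar rewards assigned to the human-preferred
    # and the less-preferred completion of the same prompt.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage: random scores stand in for actual reward-model outputs.
r_chosen = torch.randn(8, requires_grad=True)
r_rejected = torch.randn(8, requires_grad=True)
loss = reward_ranking_loss(r_chosen, r_rejected)
loss.backward()  # in practice, gradients update the reward model's parameters
print(f"pairwise ranking loss: {loss.item():.4f}")

The trained reward model then scores sampled completions, and the policy (the language model) is fine-tuned against those scores with PPO. For the full details, see the paper and the RLHF Overview chapter below.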
*References*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
“Training language models to follow instructions with human feedback” paper: [ Link ]
*Related Videos*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Chain-of-Verification (COVE) Reduces Hallucination in Large Language Models: [ Link ]
Why Language Models Hallucinate: [ Link ]
Transformer Self-Attention Mechanism Explained: [ Link ]
Jailbroken: How Does LLM Safety Training Fail? - Paper Explained: [ Link ]
How to Fine-tune Large Language Models Like ChatGPT with Low-Rank Adaptation (LoRA): [ Link ]
Multi-Head Attention (MHA), Multi-Query Attention (MQA), Grouped Query Attention (GQA) Explained: [ Link ]
LLM Prompt Engineering with Random Sampling: Temperature, Top-k, Top-p: [ Link ]
*Contents*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
00:00 - Abstract & Intro
03:01 - Main Results - Human Preferences
04:45 - RLHF Overview
07:13 - Methods and Experiments
14:32 - Results
18:45 - Discussion & Conclusions
*Follow Me*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
🐦 Twitter: @datamlistic [ Link ]
📸 Instagram: @datamlistic [ Link ]
📱 TikTok: @datamlistic [ Link ]
*Channel Support*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The best way to support the channel is to share the content. ;)
If you'd also like to support the channel financially, donating the price of a coffee is always warmly welcomed! (completely optional and voluntary)
► Patreon: [ Link ]
► Bitcoin (BTC): 3C6Pkzyb5CjAUYrJxmpCaaNPVRgRVxxyTq
► Ethereum (ETH): 0x9Ac4eB94386C3e02b96599C05B7a8C71773c9281
► Cardano (ADA): addr1v95rfxlslfzkvd8sr3exkh7st4qmgj4ywf5zcaxgqgdyunsj5juw5
► Tether (USDT): 0xeC261d9b2EE4B6997a6a424067af165BAA4afE1a
#llm #rlhf