This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.
Resources:
▶️ Series Playlist: [ Link ]
📰 Read more: [ Link ]
💻 Example code: [ Link ]
Final Model: [ Link ]
🔢 Dataset: [ Link ]
References:
[1] Deeplearning.ai Finetuning Large Language Models Short Course: [ Link ]
[2] arXiv:2005.14165 [cs.CL] (GPT-3 Paper)
[3] arXiv:2303.18223 [cs.CL] (Survey of LLMs)
[4] arXiv:2203.02155 [cs.CL] (InstructGPT paper)
[5] 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware: [ Link ]
[6] arXiv:2106.09685 [cs.CL] (LoRA paper)
[7] Original dataset source — Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
--
Homepage: [ Link ]
Book a call: [ Link ]
Intro - 0:00
What is Fine-tuning? - 0:32
Why Fine-tune - 3:29
3 Ways to Fine-tune - 4:25
Supervised Fine-tuning in 5 Steps - 9:04
3 Options for Parameter Tuning - 10:00
Low-Rank Adaptation (LoRA) - 11:37
Example code: Fine-tuning an LLM with LoRA - 15:40 (code sketch below)
Load Base Model - 16:02
Data Prep - 17:44
Model Evaluation - 21:49
Fine-tuning with LoRA - 24:10
Fine-tuned Model - 26:50
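For reference, here is a minimal sketch of the workflow covered in the "Example code" chapters above, assuming the Hugging Face transformers, datasets, and peft libraries. The model name, dataset, LoRA hyperparameters, and training settings are illustrative assumptions, not the exact values used in the video.

# A minimal sketch (not the video's exact code): fine-tune a small model for
# sentiment classification with LoRA. Model, dataset, and hyperparameters are
# illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

# Load Base Model: a small encoder with a fresh 2-class classification head
checkpoint = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Data Prep: tokenize IMDB movie reviews, truncated to the model's max length
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)

# Fine-tuning with LoRA: freeze the base weights and train small low-rank
# adapter matrices injected into the attention layers
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # sequence classification
    r=4,                          # rank of the low-rank update matrices
    lora_alpha=32,                # scaling factor applied to the update
    lora_dropout=0.01,
    target_modules=["q_lin"],     # query projections in DistilBERT attention
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # roughly ~1% of parameters are trainable

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-sentiment",
        learning_rate=1e-3,
        per_device_train_batch_size=4,
        num_train_epochs=1,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
trainer.train()

After training, the adapter weights can be saved with model.save_pretrained(...) and loaded alongside the frozen base model for inference.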
Fine-tuning Large Language Models (LLMs) | w/ Example Code
Tags
large language models, large language model, llm, language model, hugging face, python, code, programming, example code, fine-tuning, fine tuning, fine tune, how to fine tune model, tutorial, lecture, workshop, training, for beginners, made easy, fine tune llm, fine tune llama 2, fine tune chatgpt, fine tune hugging face model, guide, fine-tuning gpt-3, python code, lesson, open-source, free, for free, no cost fine tuning, fine tune locally, local fine tuning, how to fine tune gpt 3