While cutting-edge large language models can write almost any text you like, they are expensive to run. A smaller model fine-tuned to your needs can often deliver comparable performance on your specific task at a fraction of the cost.
In this session, Andrea, a Computing Engineer at CERN, and Josep, a Data Scientist at the Catalan Tourist Board, will walk you through the steps needed to customize the open-source Mistral LLM. You'll learn about choosing a suitable LLM, getting training data, tokenization, evaluating model performance, and best practices for fine-tuning.
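As a taste of one of those preparation steps, tokenization turns raw text into the integer IDs a model consumes. Below is a minimal sketch using the Hugging Face transformers tokenizer; the checkpoint name is an illustrative assumption, not necessarily the one used in the session.

```python
# Minimal tokenization sketch with Hugging Face transformers.
# The checkpoint name below is an assumption for illustration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

encoded = tokenizer(
    "Fine-tuning adapts a pretrained model to your own data.",
    truncation=True,
    max_length=32,
)

print(encoded["input_ids"])  # token IDs the model will see
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))  # human-readable tokens
```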
Key Takeaways:
- Learn how to fine-tune a large language model using the Hugging Face Python ecosystem (a minimal sketch follows after this list).
- Learn about the steps to prepare for fine-tuning and how to evaluate your success.
- Learn about best practices for fine-tuning models.
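To give a sense of what that workflow looks like, here is a minimal sketch of causal-LM fine-tuning with the Hugging Face transformers Trainer. The base checkpoint, dataset, and hyperparameters are placeholder assumptions for illustration, not the presenters' exact setup.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder choices: the session may use a different checkpoint and dataset.
checkpoint = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Small illustrative text dataset; replace with your own task data.
raw = load_dataset("imdb", split="train[:1%]").train_test_split(test_size=0.1)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw["train"].column_names)

# For causal-LM fine-tuning the labels are built from the inputs;
# the collator handles that when mlm=False.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="mistral-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)

trainer.train()
print(trainer.evaluate())  # eval loss is one simple signal of fine-tuning success
```

Held-out evaluation loss, as printed at the end, is only one way to judge success; the session also covers task-specific evaluation and best practices.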
Resources - [ Link ]