paper link: [ Link ]
The paper presents a collection of pretrained and fine-tuned large language models (LLMs) optimized for dialogue use cases. The authors introduce Llama 2-Chat, a family of LLMs ranging in scale from 7 billion to 70 billion parameters, which outperforms open-source chat models on most benchmarks tested. The paper also describes their approach to fine-tuning and safety improvements of Llama 2-Chat, enabling the community to build on this work and contribute to the responsible development of LLMs.