Google Cloud AI Accelerators (TPUs and GPUs) enable high-performance, cost-effective training and inference for leading AI/ML frameworks: PyTorch, JAX, and TensorFlow. In this session, learn about the collaboration between Google, Meta, and partners in the AI ecosystem. Join us to see how PyTorch/XLA uses the XLA compiler to accelerate AI workloads on Cloud AI Accelerators. Discover how PyTorch/XLA enables high-performance training and inference for LLaMA 2, a state-of-the-art large language model (LLM) from Meta. Learn how PyTorch Lightning helps customers quickly and easily fine-tune LLMs on Cloud TPUs.
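For context on what the session covers, here is a minimal sketch (not from the talk) of how a PyTorch model can be placed on a Cloud TPU with PyTorch/XLA so the XLA compiler traces and compiles the training step. It assumes the torch and torch_xla packages are installed; the model and tensor shapes are illustrative only.

import torch
import torch_xla.core.xla_model as xm

# Acquire the XLA (TPU) device and move an illustrative model onto it.
device = xm.xla_device()
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step; tensors live on the TPU device.
inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)
loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()

# barrier=True marks the end of the traced step so XLA compiles and executes it.
xm.optimizer_step(optimizer, barrier=True)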
Speakers: Carlos Mocholi, Damien Sereni, Shauheen Zahirazami, Rachit Aggarwal
Watch more:
All sessions from Google Cloud Next → [ Link ]
#GoogleCloudNext