Deploying ML models to production is a delicate process full of challenges. You can serve a model via a REST API, run it on an edge device, or use it as an offline unit for batch processing. You can build the deployment pipeline from scratch, or rely on ML deployment frameworks.
In this video, you'll learn about the different strategies to deploy ML in production. I provide a short review of the main ML deployment tools on the market (TensorFlow Serving, MLflow Models, Seldon Deploy, KServe from Kubeflow). I also present BentoML - the focus of this mini-series - and describe its features in detail.
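To make the "basic ML deployment" idea concrete, here is a minimal sketch of wrapping a model behind a hand-rolled REST endpoint using only the Python standard library. The model is a stub (a simple sum), standing in for a real trained model you would load from disk; endpoint name and payload shape are illustrative assumptions, not from the video.

```python
# Basic ML deployment sketch: a hand-rolled REST prediction endpoint.
# Stdlib only; no serving framework. The "model" is a stub.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Stub model: replace with a real model's predict() call,
    # e.g. one loaded with pickle or joblib.
    return {"prediction": sum(features)}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"features": [1, 2, 3]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Silence default per-request logging for this sketch.
        pass


# To serve:
# HTTPServer(("127.0.0.1", 8000), PredictHandler).serve_forever()
```

Everything a deployment framework later automates (input validation, batching, versioning, scaling, monitoring) is missing here, which is exactly the disadvantage of the basic approach discussed in the video.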
=================
The 1st Sound of AI Hackathon (register here!):
[ Link ]
Join The Sound Of AI Slack community:
[ Link ]
Interested in hiring me as a consultant/freelancer?
[ Link ]
Connect with Valerio on LinkedIn:
[ Link ]
Follow Valerio on Facebook:
[ Link ]
Follow Valerio on Twitter:
[ Link ]
=================
Content:
0:00 Intro
0:36 ML deployment strategies
1:32 Basic ML deployment
3:27 Disadvantages of basic ML deployment
4:57 Overview of ML deployment tools
9:54 BentoML
14:00 What's next?