AI workflow for large scale deployment of far-edge ML devices
Technical session delivered by Kabir Manghnani, ML Platform Engineer, and Mark Stubbs, Principal Architect, at Shoreline IoT.
As ML on the far edge begins to reshape traditional notions of asset management, many questions remain unanswered about how specialized machine intelligence can be brought to users unfamiliar with the technology.
In this talk we discuss a set of novel ML workflows that take advantage of containerized cloud computing to enable simultaneous training and deployment of millions of ML models, each trained specifically for the sensor it operates on.
Such workflows allow ML engineers to easily develop, train, and deploy models onto many far-edge devices (without writing embedded C/C++ code), and they open the door to adaptive models that don't ask users to
Stay connected with Arm:
Website: [ Link ]
Twitter: [ Link ]
Facebook: [ Link ]
LinkedIn: [ Link ]
Instagram: [ Link ]