Walk through the steps to author, optimize, and deploy a custom TensorFlow Lite model to mobile using best practices and the latest developer tooling. This includes using the model authoring APIs, applying and debugging model optimization techniques such as quantization, benchmarking on a real device, and deploying to Android.
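As a companion to the session, here is a minimal sketch of the conversion and optimization step it describes: converting a Keras model to TensorFlow Lite with post-training dynamic-range quantization. The tiny model here is only a stand-in for your own trained model.

```python
import tensorflow as tf

# A tiny Keras model as a stand-in for your own trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite, enabling dynamic-range quantization
# (tf.lite.Optimize.DEFAULT quantizes weights to reduce model size).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the FlatBuffer for deployment to Android.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

For full-integer quantization, you would additionally supply a representative dataset via `converter.representative_dataset`; the session's linked resources cover that and the Quantization Debugger.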
Resources:
Converting your model - [ Link ]
Analyze your converted model - [ Link ]
TensorFlow Model Optimization - [ Link ]
Quantization Debugger - [ Link ]
Performance best practices - [ Link ]
TensorFlow Lite website - [ Link ]
TensorFlow Forum - [ Link ]
TensorFlow website - [ Link ]
Follow on Twitter - [ Link ]
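Before benchmarking on a real device, it is common to sanity-check the converted model with the TFLite Interpreter, the same runtime used on-device. A hedged sketch (the model built here is illustrative, not from the session):

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in model; substitute your own converted .tflite file.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the FlatBuffer into the TFLite Interpreter and run one inference.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])  # shape (1, 1)
```

On Android you would run the same FlatBuffer through the TensorFlow Lite Interpreter API; the performance best practices link above covers device-side benchmarking.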
Speakers: Arun Venkatesan, Yu-Cheng Ling, Adam Koch
Watch more:
All Google I/O 2022 Sessions → [ Link ]
ML/AI at I/O 2022 playlist → [ Link ]
All Google I/O 2022 technical sessions → [ Link ]
Subscribe to TensorFlow → [ Link ]
#GoogleIO