Apache (incubating) TVM is an open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators. It aims to close the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends.
This conference covers the state of the art of deep learning compilation and optimization, discussing recent advances in frameworks, compilers, systems and architecture support, security, training, and hardware acceleration.
This video includes the following presentations:
TVM @ OctoML – Jason Knight
TVM @ Qualcomm – Krzysztof Parzyszek
Towards cross-domain co-optimization – Nilesh Jain, Intel Labs
TASO: Optimizing Deep Learning Computation with Automated Generation of Graph Substitutions – Zhihao Jia, Stanford
Further information is available at [ Link ]
This session was recorded on December 5, 2019. This video is closed captioned.