This talk introduces the field of Explainable AI, outlines a taxonomy of ML interpretability methods, walks through an implementation deep dive of Integrated Gradients, and concludes with a discussion of picking attribution baselines and future research directions.
Chapters:
00:00 - Intro
02:31 - What is Explainable AI?
08:40 - Interpretable ML methods
14:52 - Deep dive: Integrated Gradients (IG)
39:13 - Picking baselines and future research directions
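For reference, here is a minimal TensorFlow sketch of the Integrated Gradients idea covered in the deep dive. It is not code from the talk; the names `model`, `x`, `baseline`, `target_class`, and `steps` are assumptions, with `model` a Keras classifier and `x`/`baseline` float tensors of the same shape.

import tensorflow as tf

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # Interpolation coefficients alpha in [0, 1], reshaped to broadcast over x.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1),
                        [-1] + [1] * len(x.shape))
    # Straight-line path from the baseline to the input.
    interpolated = baseline[tf.newaxis] + alphas * (x - baseline)[tf.newaxis]

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)        # shape: (steps + 1, num_classes)
        outputs = preds[:, target_class]   # score for the class of interest
    grads = tape.gradient(outputs, interpolated)

    # Trapezoidal Riemann approximation of the path integral of gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)

    # Scale by (input - baseline) to get per-feature attributions.
    return (x - baseline) * avg_grads

An all-zeros (black image) baseline is a common default; the final chapter of the talk discusses why that choice matters and what the alternatives are.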
Resources:
Integrated gradients → [Link]
Vertex AI → [Link]
What-If Tool → [Link]
Catch more ML Tech Talks → [Link]
Subscribe to TensorFlow → [Link]