As part of the Trustworthy AI series, Grégoire Montavon (TU Berlin) presents his research on eXplainable AI (XAI) and trust.
⏱ Shownotes:
00:00 Opening Remarks by ITU
01:00 Introduction by Samek
01:58 Introduction by Grégoire Montavon
02:27 Why do we need Trustworthiness in AI?
03:37 Machine Learning Decisions
04:41 Detecting horse example
10:08 How do we get these heatmaps?
10:52 Layerwise Relevance Propagation (LRP)
12:48 Can LRP be Justified Theoretically?
13:17 Deep Taylor Decomposition
14:36 LRP is More Stable than Gradient
16:00 LRP on Different Types of Data/Models
18:23 Advanced Explanation with GNN-LRP
19:00 Systematically Finding Clever Hans
19:58 Idea: Spectral Relevance Analysis (SpRAy)
22:08 The Revolution of Depth
23:08 Clever Hans on the VGG-16 Image Classifier
23:37 XAI Current Challenges
28:27 Towards Trustworthy AI
30:01 Explainable AI book
30:18 www.heatmapping.org
30:48 References
30:54 Q&A Session
31:10 How to measure trustworthiness and the certification process
33:32 How does your LRP compare with Google's XRAI algorithm?
34:33 What are your thoughts on explainability models?
35:33 Class discrimination in AI methods?
37:17 Do you think we can use explanation methods to detect attack vectors in poisoning attacks?
39:23 Where is explanation headed (the future)?
41:07 Do you think there are limits to explanations, things that are hard to explain?
42:28 What do you think about using explanation techniques to detect potentially implausible/incorrect predictions?
43:10 Have you tried to calculate heatmaps for images which have been altered with adversarial perturbations?
45:49 Closing from ITU
🔴 Watch the latest #AIforGood videos:
[ Link ]
Explore more #AIforGood content:
1️⃣ [ Link ]
2️⃣ [ Link ]
3️⃣ [ Link ]
📅 Discover what's next on our programme!
[ Link ]
Social Media:
Website: [ Link ]
Twitter: [ Link ]
LinkedIn Page: [ Link ]
LinkedIn Group: [ Link ]
Instagram: [ Link ]
Facebook: [ Link ]
WHAT IS THE TRUSTWORTHY AI SERIES?
Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness, and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance on various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which are either not noticeable to humans or which humans handle reliably. This expert talk series discusses these challenges of current AI technology and presents new research aimed at overcoming these limitations and developing AI systems that can be certified as trustworthy and robust.
The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world:
[ Link ]
What is AI for Good?
The AI for Good series is the leading action-oriented, global, and inclusive United Nations platform on AI. The Summit is organized year-round, always online, in Geneva by the ITU with the XPRIZE Foundation, in partnership with over 35 sister United Nations agencies, Switzerland, and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact.
Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.
#trustworthyAI #explainableAI