Information Leakage of Neural Networks by Johan Östman, Research Scientist @ AI Sweden
As machine learning becomes a cornerstone of society, it increasingly encounters the challenge of handling sensitive data. This issue is magnified when trained machine learning models are shared with external entities, e.g., as open-source releases or via APIs, which raises a critical question: can sensitive information be extracted from the shared models?
In this talk, we will navigate the fascinating domain of information extraction attacks targeting trained machine learning models. We will dissect various attack vectors across different adversarial settings and assess their potential to compromise data. Additionally, we will discuss strategies to mitigate these attacks and showcase the effectiveness of such techniques.
Finally, we will touch upon the legal aspects and the importance of bridging the gap between legal and technical definitions of risk.
Johan leads the privacy-preserving machine learning initiatives at AI Sweden, Sweden’s national center for applied AI. His team is dedicated to mitigating information leakage from machine learning models and advancing decentralized machine learning methodologies. He also co-leads a research group at Chalmers University of Technology focusing on the privacy-security-utility tension within federated learning. Additionally, he initiated a federated learning project with Handelsbanken and Swedbank to combat money laundering, and he leads a broader initiative investigating the nuances of information leakage. Johan holds a Ph.D. in Information Theory and dual master’s degrees in Electrical Engineering and Industrial Economics.
Recorded at the 2024 GAIA Conference on March 27 at Svenska Mässan in Gothenburg, Sweden.