Towards More Practical Threat Models in Artificial Intelligence Security
Kathrin Grosse, EPFL; Lukas Bieringer, QuantPi; Tarek R. Besold, TU Eindhoven; Alexandre M. Alahi, EPFL
Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI. For example, while models are often studied in isolation, in practice they form part of larger ML pipelines. Recent works have also argued that the adversarial manipulations introduced by academic attacks are impractical. We take a first step towards describing the full extent of this disparity. To this end, we revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice via a survey with 271 industrial practitioners. On the one hand, we find that all existing threat models are indeed applicable. On the other hand, there are significant mismatches: research is often too generous with the attacker, assuming access to information that is rarely available in real-world settings. Our paper is thus a call to action to study more practical threat models in artificial intelligence security.
View the full USENIX Security '24 program at [ Link ]