Patrick McDaniel, Director of the Institute for Network and Security Research, and Nicolas Papernot, Computer Security Graduate Research Assistant & Google PhD Fellow, join us from Penn State University to present adversarial examples and show how they transfer to other models.
From AI With The Best, an online developer conference, April 29-30, 2017
Check out our website: [ Link ]
[ Link ]
Machine learning models, including deep neural networks, have been shown to be vulnerable to adversarial examples: malicious inputs modified subtly, often imperceptibly to humans, to compromise the integrity of a model's outputs. Adversarial examples thus enable adversaries to manipulate system behavior. Potential attacks include attempts to control the behavior of vehicles, have spam identified as legitimate content, or have malware identified as legitimate software.

In fact, the feasibility of misclassification attacks based on adversarial examples has been demonstrated for image, text, and malware classifiers. Furthermore, adversarial examples that affect one model often affect another, even if the two models are very different. This effectively enables attackers to target remotely hosted victim classifiers with very little knowledge of their internals.
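To make this concrete, below is a minimal sketch (not from the talk) of the fast gradient sign method, one standard way to craft adversarial examples, plus a hypothetical transfer_rate helper that measures how often examples crafted against a local surrogate model also fool a separately trained victim model. The PyTorch models, epsilon value, and function names are illustrative assumptions.

```python
# Minimal sketch of FGSM adversarial example crafting and a transferability
# check. Models, epsilon, and helper names are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """Perturb inputs x so the model is more likely to misclassify them."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()

def transfer_rate(surrogate: nn.Module, victim: nn.Module,
                  x: torch.Tensor, y: torch.Tensor,
                  epsilon: float = 0.1) -> float:
    """Craft examples on a local surrogate, then measure how often a
    separately trained victim model is also fooled."""
    x_adv = fgsm_attack(surrogate, x, y, epsilon)
    with torch.no_grad():
        fooled = victim(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```

The transferability property sketched by transfer_rate is what lets an attacker target a remotely hosted classifier: the perturbation is computed entirely against a model the attacker controls.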