Recent progress in machine learning has led to impressive performance in computer vision and security-sensitive tasks. Understanding the security properties of learning algorithms, as well as designing suitable countermeasures, has thus become a timely, relevant, and challenging research field.
Evasion attacks (recently also referred to as adversarial examples) manipulate input data to evade a trained classifier at test time. Examples include manipulating malware code so that the corresponding sample is misclassified as legitimate (i.e., goes undetected), or manipulating images to mislead object recognition.
We were the first to demonstrate these attacks against nonlinear classifiers, including Support Vector Machines and Neural Networks [B. Biggio et al., ECML-PKDD 2013]. Notably, such classifiers were believed at the time to be more secure than linear ones, owing to their more complex input-output mappings [Šrndić & Laskov, NDSS 2013]. We showed that this apparent robustness can be overcome with a straightforward gradient-based evasion attack, and highlighted the vulnerability of these classifiers to evasion in different application settings, including handwritten digit recognition and PDF malware detection.
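The core idea of a gradient-based evasion attack can be sketched as follows: given a differentiable discriminant function g(x), the attacker descends its gradient until the sample crosses the decision boundary. The snippet below illustrates this on a hand-built RBF-SVM discriminant; the support vectors, coefficients, and step size are hypothetical, chosen purely for illustration, and the sketch omits the distance constraints and density term used in the actual attack.

```python
import numpy as np

# Hand-built RBF-SVM discriminant g(x) = sum_i alpha_i y_i k(x, x_i) + b.
# Support vectors and coefficients are hypothetical, for illustration only.
X_sv = np.array([[1.0, 1.0], [-1.0, -1.0]])   # support vectors
alpha_y = np.array([1.0, -1.0])               # alpha_i * y_i
b = 0.0
gamma = 0.5                                    # RBF kernel parameter

def g(x):
    """Discriminant: positive -> 'malicious', negative -> 'legitimate'."""
    k = np.exp(-gamma * np.sum((X_sv - x) ** 2, axis=1))
    return float(alpha_y @ k + b)

def grad_g(x):
    """Analytic gradient: d/dx exp(-gamma||x - x_i||^2) = -2*gamma*(x - x_i)*k_i."""
    k = np.exp(-gamma * np.sum((X_sv - x) ** 2, axis=1))
    return -2.0 * gamma * ((alpha_y * k)[:, None] * (x - X_sv)).sum(axis=0)

def evade(x, step=0.05, max_iter=1000):
    """Gradient-descent evasion: perturb x until the decision flips."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if g(x) < 0:              # now classified as legitimate: done
            break
        x -= step * grad_g(x)
    return x

x0 = np.array([1.0, 0.8])         # initially on the 'malicious' side
x_adv = evade(x0)
print(g(x0) > 0, g(x_adv) < 0)    # True True
```

The same loop applies to any classifier with a differentiable decision function, which is what made the attack transfer from SVMs to neural networks.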
Evasion attacks were later derived independently in the deep learning and computer vision community [C. Szegedy et al., ICLR 2014], under the name of adversarial examples: images that are misclassified by deep-learning algorithms despite being only imperceptibly distorted.
We have also recently developed a secure-learning algorithm to counter adversarial examples in Android malware detection [A. Demontis et al., IEEE TDSC 2017], deriving a robust version of Drebin, a popular malware detection tool based on static code analysis [D. Arp et al., NDSS 2014]. A similar attempt has also recently been reported in [K. Grosse et al., ESORICS 2017].
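The intuition behind that defense is that more evenly-distributed feature weights force an attacker to modify more features to evade detection. The toy model below illustrates this on a linear detector over binary features; the weights are made up, and the clip-and-rescale step is only a crude stand-in for the bounded-weight training used in the cited paper.

```python
import numpy as np

# Hypothetical linear detector over binary features (e.g., presence of
# permissions or API calls); weights are invented for illustration.
w = np.array([3.0, 2.5, 0.3, 0.2, 0.1, 0.1])
b = -1.0

def n_flips_to_evade(w, b, x):
    """Greedy evasion on binary features: disable (1 -> 0) the active
    features with the largest positive weights until the score goes negative."""
    x = x.astype(float).copy()
    flips = 0
    for i in np.argsort(-w):
        if w @ x + b < 0:
            break
        if x[i] == 1 and w[i] > 0:
            x[i] = 0
            flips += 1
    return flips

x_mal = np.ones(6)   # a detected malicious sample with all features active

# Defense intuition: bound the weights so no single feature dominates.
# Clipping and rescaling here is a crude proxy for box-constrained training.
w_sec = np.clip(w, -1.0, 1.0)
w_sec = w_sec * (w.sum() / w_sec.sum())   # keep the same clean-sample score

print(n_flips_to_evade(w, b, x_mal), n_flips_to_evade(w_sec, b, x_mal))  # 2 3
```

With the original weights the attacker only needs to remove the two dominant features; with bounded weights the same greedy strategy must touch more of them, raising the cost of evasion.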
Poisoning attacks are subtler. Their goal is to mislead the learning algorithm during the training phase by manipulating only a small fraction of the training data, in order to significantly increase the number of misclassified samples at test time, causing a denial of service. These attacks require access to the training data used to learn the classification algorithm, which is possible in some application-specific contexts.
We demonstrated poisoning attacks against Support Vector Machines in [B. Biggio et al., ICML 2012], then against LASSO, Ridge, and Elastic-net regression in [H. Xiao et al., ICML 2015], and more recently against Neural Networks and Deep Learning algorithms [L. Muñoz-González et al., AISec 2017].
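A minimal numerical illustration of the effect: injecting a handful of points with adversarial targets into a ridge-regression training set visibly degrades test error. This is a simple target-manipulation sketch with made-up synthetic data, not the gradient-based bilevel optimization used in the cited attacks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean 1-D regression task: y = 2x + small noise (synthetic data).
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + 0.05 * rng.standard_normal(50)

def ridge_fit(X, y, lam=0.1):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_clean = ridge_fit(X, y)

# Poisoning: the attacker controls ~10% of the training set and inserts
# points with inverted targets at the edge of the input domain.
X_pois = np.vstack([X, np.ones((5, 1))])
y_pois = np.concatenate([y, np.full(5, -5.0)])
w_pois = ridge_fit(X_pois, y_pois)

# Clean held-out data: test error grows sharply after poisoning.
X_test = rng.uniform(-1, 1, size=(50, 1))
y_test = 2.0 * X_test[:, 0]

def mse(w):
    return float(np.mean((X_test @ w - y_test) ** 2))

print(mse(w_clean), mse(w_pois))
```

Even this naive attack drags the learned coefficient far from its clean value; the optimization-based attacks in the cited papers choose the poisoning points to maximize this damage.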
B. Biggio, B. Nelson, P. Laskov. Poisoning Attacks against Support Vector Machines. In ICML 2012.
B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, F. Roli. Evasion Attacks against Machine Learning at Test Time. In ECML-PKDD 2013.
B. Biggio, G. Fumera, F. Roli. Security Evaluation of Pattern Classifiers under Attack. In IEEE TKDE 2014.
H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, F. Roli. Is Feature Selection Secure against Training Data Poisoning? In ICML 2015.
M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera, F. Roli. Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid. In 2017 ICCV Workshop ViPAR.
L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, F. Roli. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. In AISec 2017.
A. Demontis, M. Melis, B. Biggio, D. Maiorca, D. Arp, K. Rieck, I. Corona, G. Giacinto, F. Roli. Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection. In IEEE TDSC 2017.