Secure ML Research

Research area on the Security of Machine Learning


Secure ML Research   Tutorial: Wild Patterns   Secure ML Library
Machine-learning and data-driven AI technologies have achieved impressive performance in computer vision and in security-sensitive tasks. From self-driving cars and robot-vision systems to spam and malware detection tools, such technologies have become pervasive.
However, when deployed in the wild, these technologies may face adversarial conditions or unexpected variations of the input data. Understanding their security properties and designing suitable countermeasures have thus become timely and relevant open research challenges towards the development of safe AI systems.
Our research team has been among the first to:
  • show that machine-learning algorithms are vulnerable to gradient-based adversarial manipulations of the input data, both at test time (evasion attacks) and at training time (poisoning attacks);
  • derive a systematic framework for the security evaluation of learning algorithms (a minimal sketch of such an evaluation is given after this list); and
  • develop suitable countermeasures for improving their security.
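
As a rough illustration of the second point, a security evaluation curve reports a classifier's accuracy as a function of the attacker's strength. The sketch below is only illustrative and is not the cited framework; all names and parameters are ours. It relies on the fact that, for a linear classifier under an optimal L2-bounded test-time perturbation, a correctly classified point is evaded exactly when the budget exceeds its distance to the decision hyperplane, so the curve can be computed in closed form.

```python
# Minimal sketch (illustrative, not the cited framework): security evaluation
# curve for a linear classifier under an optimal L2-bounded test-time attack.
# A correctly classified point survives a budget eps iff its distance to the
# decision hyperplane, |w.x + b| / ||w||, exceeds eps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

w, b = clf.coef_.ravel(), clf.intercept_[0]
margins = (X_te @ w + b) / np.linalg.norm(w)           # signed distance to the hyperplane
correct = (np.sign(margins) == np.where(y_te == 1, 1, -1))

for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:
    # accuracy under the worst-case perturbation of norm <= eps
    robust_acc = np.mean(correct & (np.abs(margins) > eps))
    print(f"eps={eps:4.1f}  accuracy={robust_acc:.3f}")
```
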
Evasion attacks (more recently also referred to as adversarial examples) consist of manipulating input data to evade a trained classifier at test time. Examples include manipulating malware code so that the corresponding sample is misclassified as legitimate, or manipulating images to mislead object recognition.

Members of our research team have been among the first to demonstrate these attacks against well-known machine-learning algorithms, including support vector machines and neural networks (Biggio et al., 2013). Evasion attacks were later derived independently in the area of deep learning and computer vision (Szegedy et al., 2014) under the name of adversarial examples, i.e., images that deep-learning algorithms misclassify even though they are only imperceptibly distorted.
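
To make the idea concrete, here is a minimal, illustrative sketch of a gradient-based evasion attack against a linear (logistic-regression) classifier; it is not the attack of Biggio et al. (2013), and all names and parameters are ours. The attacker repeatedly moves a test point along the gradient that lowers the score of its true class, while projecting back onto an L2 ball of radius eps around the original point.

```python
# Minimal sketch (illustrative, not the original implementation): gradient-based
# evasion of a linear classifier under an L2 perturbation budget eps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression().fit(X, y)

def evade(x, y_true, clf, eps=2.0, steps=50, lr=0.1):
    """Move x to reduce the score of its true class (gradient-based evasion)."""
    w = clf.coef_.ravel()                # gradient of the linear score w.r.t. x
    sign = 1.0 if y_true == 1 else -1.0  # direction that hurts the true class
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv - lr * sign * w    # descend the true-class score
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > eps:                   # project back onto the L2 ball of radius eps
            x_adv = x + delta * (eps / norm)
    return x_adv

x0, y0 = X[0], y[0]
x_adv = evade(x0, y0, clf, eps=2.0)
print("original prediction:", clf.predict([x0])[0],
      "adversarial prediction:", clf.predict([x_adv])[0])
```

With a large enough budget the adversarial prediction flips, while the perturbation norm stays bounded by eps.
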
Poisoning attacks are subtler. Their goal is to mislead the learning algorithm during the training phase by manipulating only a small fraction of the training data, so as to significantly increase the number of misclassified samples at test time and effectively cause a denial of service. These attacks require access to the data used to train the classifier, which is possible in some application-specific contexts.

We demonstrated poisoning attacks against support vector machines (Biggio et al., 2012), then against LASSO, ridge, and elastic-net regression (Xiao et al., 2015; Jagielski et al., 2018), and more recently against neural networks and deep-learning algorithms (Muñoz-González et al., 2017).
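
As a toy illustration of the threat model (not the gradient-based poisoning attacks cited above), the sketch below flips the labels of a small random fraction of the training set and compares the test accuracy of a linear SVM trained on clean versus poisoned data; all names and parameters are ours.

```python
# Minimal sketch (illustrative): label-flip poisoning of a fraction of the
# training set. Random flipping is a weak baseline; optimized (gradient-based)
# poisoning points are far more damaging per poisoned sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def poison_labels(y, fraction=0.2, seed=0):
    """Flip the labels of a random fraction of the training points."""
    rng = np.random.RandomState(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

clean_acc = LinearSVC().fit(X_tr, y_tr).score(X_te, y_te)
pois_acc = LinearSVC().fit(X_tr, poison_labels(y_tr, fraction=0.2)).score(X_te, y_te)
print(f"test accuracy: clean={clean_acc:.3f}  poisoned={pois_acc:.3f}")
```
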

Timeline of Learning Security

[Figure: timeline of milestones in the security of machine learning (adversarial ML), 2020.]

