
Certifying machine learning models against evasion attacks by program analysis

Calzavara, S.; Ferrara, P.; Lucchese, C.
2023-01-01

Abstract

Machine learning has proved invaluable for a range of different tasks, yet it has also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of inputs designed to force mispredictions. In this article we propose a novel technique to certify the security of machine learning models against evasion attacks with respect to an expressive threat model, where the attacker can be represented by an arbitrary imperative program. Our approach is based on a transformation of the model under attack into an equivalent imperative program, which is then analyzed using the traditional abstract interpretation framework. This solution is sound, efficient and general enough to be applied to a range of different models, including decision trees, logistic regression and neural networks. Our experiments on publicly available datasets show that our technique yields only a minimal number of false positives and scales up to cases that are intractable for a competing approach.
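
To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of how a decision tree can be rewritten as an imperative program, the attacker modeled as a perturbation program, and robustness certified with a simple interval abstract domain. The toy tree, the ±0.5 perturbation budget on one feature, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: certify a toy decision tree against a bounded
# perturbation of feature 0, using interval abstract interpretation.

class Interval:
    """Abstract value: a closed interval [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def add(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)


def tree_concrete(x):
    """The model under attack, written as an imperative program."""
    if x[0] <= 2.0:
        return +1 if x[1] <= 5.0 else -1
    return -1


def tree_abstract(x):
    """Interval semantics of the same program: the set of labels reachable
    from any concrete input contained in the abstract input."""
    labels = set()
    if x[0].lo <= 2.0:                      # 'then' branch feasible
        if x[1].lo <= 5.0:
            labels.add(+1)
        if x[1].hi > 5.0:
            labels.add(-1)
    if x[0].hi > 2.0:                       # 'else' branch feasible
        labels.add(-1)
    return labels


def attacker_abstract(x, budget=0.5):
    """Attacker as a program: perturbs feature 0 within the given budget,
    modeled abstractly by widening its interval."""
    return [x[0].add(Interval(-budget, budget)), x[1]]


def certify(point, budget=0.5):
    """Sound certification: if only one label is reachable from the abstract
    perturbed input, no evasion attack within the budget exists."""
    abstract_input = [Interval(v, v) for v in point]
    reachable = tree_abstract(attacker_abstract(abstract_input, budget))
    return len(reachable) == 1 and tree_concrete(point) in reachable


print(certify([1.0, 3.0]))   # True: certified robust under this threat model
print(certify([1.8, 3.0]))   # False: the perturbation can flip the branch
```

Because the abstract semantics over-approximates the reachable labels, a "robust" answer is sound, while a "not robust" answer may be a false positive; the paper reports that such false positives are minimal in practice.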


Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5020963
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 1