Adversarial training of gradient-boosted decision trees
Calzavara S.; Lucchese C.
2019-01-01
Abstract
Adversarial training is a prominent approach to making machine learning (ML) models resilient to adversarial examples. Unfortunately, this approach assumes the use of differentiable learning models, so it cannot be applied to relevant ML techniques such as ensembles of decision trees. In this paper, we generalize adversarial training to gradient-boosted decision trees (GBDTs). Our experiments show that the performance of classifiers based on existing learning techniques either sharply decreases under attack or is unsatisfactory in the absence of attacks, while adversarial training provides a very good trade-off between resilience to attacks and accuracy in the unattacked setting.
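The full text is closed access, so only the abstract is available here. As a rough illustration of what adversarial training means operationally, the sketch below alternates between fitting a tree ensemble and augmenting the training set with adversarial examples crafted against the current model. It is a minimal sketch under assumptions, not the authors' algorithm: scikit-learn's GradientBoostingClassifier stands in for the GBDT, and because the ensemble is non-differentiable, the attack here is a naive random search in an L-infinity ball. The helpers random_adversarial_examples and adversarial_training and the parameters eps, trials, and rounds are hypothetical.

```python
# Hypothetical sketch of adversarial training around a GBDT.
# Tree ensembles expose no gradients, so adversarial examples are
# found by random search inside an L-infinity ball of radius eps;
# this shows the general idea only, not the algorithm of the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def random_adversarial_examples(model, X, y, eps=0.1, trials=20, rng=None):
    """Return perturbed copies of X (within an eps L-inf ball) that the
    current model misclassifies, found by simple random search."""
    if rng is None:
        rng = np.random.default_rng(0)
    adv_X, adv_y = [], []
    for x, label in zip(X, y):
        for _ in range(trials):
            x_adv = x + rng.uniform(-eps, eps, size=x.shape)
            if model.predict(x_adv.reshape(1, -1))[0] != label:
                adv_X.append(x_adv)
                adv_y.append(label)  # keep the true label for retraining
                break
    return np.array(adv_X), np.array(adv_y)

def adversarial_training(X, y, rounds=5, eps=0.1):
    """Alternate between fitting a GBDT and augmenting the training set
    with adversarial examples crafted against the current model."""
    X_aug, y_aug = X.copy(), y.copy()
    model = GradientBoostingClassifier().fit(X_aug, y_aug)
    for _ in range(rounds):
        adv_X, adv_y = random_adversarial_examples(model, X, y, eps=eps)
        if len(adv_X) == 0:
            break  # no adversarial examples found within the ball
        X_aug = np.vstack([X_aug, adv_X])
        y_aug = np.concatenate([y_aug, adv_y])
        model = GradientBoostingClassifier().fit(X_aug, y_aug)
    return model
```

Retraining from scratch on the augmented set each round is the simplest possible choice; the paper's contribution is a tighter integration of the attacker into GBDT learning, which this sketch does not reproduce.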
File | Type | License | Size | Format
---|---|---|---|---
cikm19.pdf (not available) | Post-print | Closed access (personal) | 725.17 kB | Adobe PDF
Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.