Adversarial attacks have demonstrated remarkable efficacy in altering the output of a learning model by applying a minimal perturbation to the input data. While increasing attention has been devoted to the image domain, the study of adversarial perturbations for geometric data has lagged notably behind. In this paper, we show that effective adversarial attacks can be concocted for surfaces embedded in 3D, under weak smoothness assumptions on the perceptibility of the attack. We address the case of deformable 3D shapes in particular, and introduce a general model that is neither tailored to any specific surface representation nor assumes access to a parametric description of the 3D object. In this context, we consider targeted and untargeted variants of the attack, demonstrating compelling results in either case. We further show how discovering adversarial examples, and then using them for adversarial training, leads to an increase in both robustness and accuracy. Our findings are confirmed empirically over multiple datasets spanning different semantic classes and deformations.
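The title's "band-limited perturbations" suggests one common way to enforce the smoothness assumption mentioned in the abstract: restrict the vertex displacement to the span of the lowest-frequency eigenvectors of a mesh or graph Laplacian, so only smooth, low-perceptibility deformations are possible. The sketch below illustrates this construction on a toy graph; all function names and the example data are illustrative assumptions, not the paper's actual implementation (which optimizes such coefficients against a classifier loss).

```python
import numpy as np

def graph_laplacian(adjacency):
    """Combinatorial Laplacian L = D - A of an undirected graph
    (a stand-in for the mesh Laplacian used on actual surfaces)."""
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

def band_limited_perturbation(vertices, adjacency, k, coeffs):
    """Displace vertices by a perturbation restricted to the k
    lowest-frequency Laplacian eigenvectors; `coeffs` has one
    coefficient per eigenvector and coordinate axis (shape (k, 3))."""
    L = graph_laplacian(adjacency)
    # eigh returns eigenvalues in ascending order, so the first k
    # columns of phi are the smoothest modes of the shape.
    _, phi = np.linalg.eigh(L)
    basis = phi[:, :k]              # (n_vertices, k)
    delta = basis @ coeffs          # (n_vertices, 3), band-limited
    return vertices + delta

# Toy example: a 4-vertex path graph standing in for a surface mesh.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
V = np.zeros((4, 3))
alpha = np.zeros((2, 3))
alpha[0, 2] = 0.1                   # excite only the smoothest mode, z-axis
V_adv = band_limited_perturbation(V, A, k=2, coeffs=alpha)
```

Because the lowest eigenvector of a connected graph's Laplacian is constant, this particular choice of coefficients produces a uniform z-displacement of magnitude 0.05 across all vertices; in an attack, the coefficients would instead be found by gradient steps on the classifier's loss.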
|Publication date:||2020|
|Title:||Generating Adversarial Surfaces via Band-Limited Perturbations|
|Journal:||COMPUTER GRAPHICS FORUM|
|Digital Object Identifier (DOI):||http://dx.doi.org/10.1111/cgf.14083|
|Item type:||2.1 Journal article|