Evaluating the Faithfulness of Causality in Saliency-Based Explanations of Deep Learning Models for Temporal Colour Constancy
Matteo Rizzo
2024-01-01
Abstract
The opacity of deep learning models constrains their debugging and improvement. Augmenting deep models with saliency-based strategies, such as attention, has been claimed to help better understand the decision-making process of black-box models. However, recent work in Natural Language Processing (NLP) has challenged the faithfulness of in-model saliency, questioning the causal relationship between the highlights provided by attention weights and the model's prediction. More generally, the adherence of attention weights to the model's actual decision-making process, a property called faithfulness, has been called into question. We add to this discussion by evaluating, for the first time, the causal faithfulness of in-model saliency applied to a video processing task, namely temporal color constancy. We do so by adapting two faithfulness tests from the recent NLP literature to our target task, refining their methodology as part of our contributions. We show that attention does not offer causal faithfulness, while confidence, a particular type of in-model visual saliency, does.
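To make the kind of test the abstract refers to concrete, below is a minimal sketch (not the authors' code) of an attention-permutation faithfulness check in the spirit of the NLP literature the paper builds on: if the prediction barely changes when the learned attention weights are randomly permuted, attention is unlikely to be causally faithful. The toy model, tensor shapes, and all names (TemporalAttentionPooler, angular_error, etc.) are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn
from typing import Optional

class TemporalAttentionPooler(nn.Module):
    """Toy model: scores each frame embedding, pools with softmax attention,
    and predicts an RGB illuminant (as in color constancy)."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # per-frame attention logits
        self.head = nn.Linear(dim, 3)    # RGB illuminant estimate

    def forward(self, frames: torch.Tensor, weights: Optional[torch.Tensor] = None):
        # frames: (T, dim) sequence of per-frame embeddings.
        # If weights are supplied, they override the learned attention.
        if weights is None:
            weights = torch.softmax(self.scorer(frames).squeeze(-1), dim=0)
        pooled = weights @ frames        # (dim,) attention-weighted pooling
        return self.head(pooled), weights

def angular_error(a: torch.Tensor, b: torch.Tensor) -> float:
    """Angular distance in degrees, the standard color constancy metric."""
    cos = torch.dot(a, b) / (a.norm() * b.norm() + 1e-8)
    return torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0))).item()

torch.manual_seed(0)
model = TemporalAttentionPooler(dim=16)
frames = torch.randn(8, 16)              # stand-in for 8 frame embeddings

with torch.no_grad():
    pred, attn = model(frames)
    # Permute the learned attention weights and re-run the forward pass.
    permuted = attn[torch.randperm(attn.numel())]
    pred_perm, _ = model(frames, weights=permuted)

# A near-zero error under permutation suggests attention is NOT causally faithful.
print(f"angular error after permutation: {angular_error(pred, pred_perm):.2f} deg")

In practice such a test is repeated over many permutations and inputs, and the resulting error distribution is compared against the change induced by genuinely informative interventions; a single permutation, as above, only illustrates the mechanism.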



