Stop Overkilling Simple Tasks with Black-Box Models, Use More Transparent Models Instead

Rizzo M. (Writing – Original Draft Preparation); Marcuzzo M. (Writing – Original Draft Preparation); Zangari A. (Writing – Original Draft Preparation); Schiavinato M. (Writing – Review & Editing); Albarelli A. (Supervision); Gasparetto A. (Supervision)
2024-01-01

Abstract

The ability of deep learning-based approaches to extract features autonomously from raw data while outperforming traditional methods has led to several breakthroughs in artificial intelligence. However, these models suffer from intrinsic opacity, making it difficult to explain their predictions. This is problematic not only because it hinders debugging but, more importantly, because it negatively affects the perceived trustworthiness of the systems. What is often overlooked is that many relatively simple tasks can be solved efficiently and effectively with data processing strategies paired with traditional models that are inherently more transparent. This work highlights the frequently neglected perspective of knowledge-based, explainability-driven problem-solving in machine learning. We introduce a set of guidelines for designing explainable models, planning explainability and model design together, and apply them to the task of classifying the ripeness of banana crates. We show how the task can be solved both with opaque deep learning models and with more transparent strategies: the latter incur a minimal loss of accuracy while providing a significant gain in explainability that is faithful to the model's inner workings. Finally, we perform a user study to evaluate end users' perception of explainability and discuss our findings.
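As an illustration of the transparent route the abstract describes (hand-crafted, human-readable features paired with an interpretable classifier), the sketch below trains a shallow decision tree on simple color statistics. The feature set, the toy data, and the classifier choice are assumptions made here for illustration; the paper's actual pipeline is described in the full text.

```python
# Minimal sketch, assuming hand-crafted RGB color statistics and a shallow
# decision tree; these specific features and this classifier are illustrative
# assumptions, not the exact pipeline from the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURE_NAMES = ["mean_R", "mean_G", "mean_B", "std_R", "std_G", "std_B"]

def color_features(rgb_image: np.ndarray) -> np.ndarray:
    """Per-channel mean and std: six human-readable features per image."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Toy data standing in for crate photos: greenish images labeled 0 (unripe),
# yellowish ones labeled 1 (ripe).
rng = np.random.default_rng(0)

def toy_image(red_boost: int) -> np.ndarray:
    img = rng.integers(0, 60, size=(32, 32, 3))
    img[..., 1] += 120        # strong green channel in every image
    img[..., 0] += red_boost  # red + green reads as yellow, i.e. ripe
    return np.clip(img, 0, 255)

X = np.stack([color_features(toy_image(r)) for r in [10] * 20 + [110] * 20])
y = np.array([0] * 20 + [1] * 20)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# With a depth-limited tree, the learned thresholds on named color features
# can be printed verbatim: the explanation is the model itself.
print(export_text(tree, feature_names=FEATURE_NAMES))
```

A depth-limited tree is used here because its decision rules can be printed and audited directly, which is the kind of explanation, faithful to the model's inner workings, that the abstract contrasts with post-hoc explanations of opaque deep models.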
2024, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Files in this item:
There are no files associated with this item.

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10278/5093692
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: ND