
Assessing the Value of Explainable Artificial Intelligence for Magnetic Resonance Imaging

Frasson, Giada; Rizzo, Matteo; Nobile, Marco Salvatore
2025-01-01

Abstract

Recent advances in Artificial Intelligence (AI) have improved the accuracy of medical diagnostics in several fields, such as cancer detection and the diagnosis of cardiovascular and neuromuscular diseases. However, the opaque nature of AI decision-making can limit its adoption in clinical settings, as physicians require clear and interpretable explanations to trust these tools. To address this issue, the field of eXplainable Artificial Intelligence (XAI) aims to clarify the rationale behind AI predictions while ensuring compliance with ethical standards and emerging regulations such as the GDPR and the AI Act. This study applies multiple explainability methods to a diagnostic support model for Distal Myopathies (DMs), a group of rare neuromuscular disorders marked by subtle, early-stage tissue alterations. Beyond classification, our approach generates detailed explanations for the model’s predictions. We propose novel techniques, including a hierarchical occlusion method and an ensemble framework that combines individual explanations to produce refined, interpretable visualizations. Feedback from expert radiologists is used to assess the effectiveness of these methods, highlighting their potential to enhance trust and usability in clinical practice. Our results show that pretrained convolutional networks achieve high classification accuracy, exceeding 88%, with perfect recall in identifying affected cases, while underscoring the need for adaptive, user-centric approaches to explainability in AI-driven diagnostic tools.
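The record does not include an implementation, but as a rough illustration of the two techniques the abstract names, the Python sketch below shows (a) plain occlusion sensitivity, (b) a coarse-to-fine "hierarchical" variant that refines only regions flagged as important at coarser scales, and (c) a simple ensemble that min-max normalizes individual saliency maps and averages them. All function names, the patch schedule, and the refinement threshold are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def occlusion_map(model, image, patch, stride, baseline=0.0):
    """Occlusion sensitivity: slide an occluding patch over the image and
    record how much the model's score drops when each region is hidden.
    `model` is assumed to be a callable mapping an image to a scalar
    probability for the class of interest (an illustrative assumption)."""
    h, w = image.shape[:2]
    base_score = model(image)
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = base_score - model(occluded)  # large drop => important region
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)

def hierarchical_occlusion(model, image, patches=(64, 32, 16), keep=0.25):
    """Coarse-to-fine variant: at each scale, keep only the top `keep`
    fraction of currently active pixels and refine those at the next,
    smaller patch size. Schedule and threshold are assumptions."""
    mask = np.ones(image.shape[:2], dtype=bool)
    heat = np.zeros(image.shape[:2])
    for p in patches:
        level = occlusion_map(model, image, patch=p, stride=max(p // 2, 1))
        level = np.where(mask, level, 0.0)  # ignore regions pruned earlier
        heat += level
        thresh = np.quantile(level[mask], 1.0 - keep)
        mask &= level >= thresh
        if not mask.any():
            break
    return heat

def ensemble_explanations(maps):
    """Combine saliency maps from different XAI methods by min-max
    normalizing each and averaging (one plausible ensembling rule)."""
    norm = [(m - m.min()) / (m.max() - m.min() + 1e-8) for m in maps]
    return np.mean(norm, axis=0)
```

With a dummy model such as `lambda img: float(img.mean())`, `hierarchical_occlusion(model, np.random.rand(128, 128))` returns a 128×128 heat map; per-method maps from, e.g., occlusion and gradient-based attributions could then be fused with `ensemble_explanations`.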
Explainable Artificial Intelligence. xAI 2025.
Files in this record:

File: 978-3-032-08317-3_20.pdf (open access)
Type: publisher's version
License: Creative Commons
Size: 3.88 MB
Format: Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5105029