The Dynamics of Trust in XAI: Assessing Perceived and Demonstrated Trust Across Interaction Modes and Risk Treatments

Zhang, Chao; Nobile, Marco S.
2025-01-01

Abstract

The increasing use of artificial intelligence (AI) models across various fields has raised concerns about whether these models can meet user trust expectations. As a result, researchers are focusing on assessing AI models’ performance relative to user expectations to determine trust levels. Evidence suggests that effective interaction with eXplainable AI (XAI) techniques can mitigate over-reliance on AI models and better align user expectations with the models’ actual decision-making capabilities. In this study, we analyze trust from two perspectives: perceived trust, based on users’ self-reported trust, and demonstrated trust, which evaluates whether users, when given a choice, prefer to rely on the AI or make decisions independently. We also explore how different modes of interaction between human subjects and XAI models, along with varying levels of task risk, influence trust. Our findings reveal that these two types of trust differ substantially: human subjects do not always exhibit trusting behavior in actual decision-making tasks, even when they report trusting the AI. Furthermore, we show that an AI model’s low error rate can shape human subjects’ mental models, leading them to report a higher tendency to trust the AI. Finally, we conclude that human perceptions of trust are fragile and may change over the course of ongoing interaction with the model.
Explainable Artificial Intelligence (xAI 2025).
Files in this record:

File: 978-3-032-08317-3_15.pdf
Access: open access
Type: Publisher's version
License: Creative Commons
Size: 2.92 MB
Format: Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5105030