
Inhibitors and enablers to explainable AI success: a systematic examination of explanation complexity and individual characteristics

Please always quote using this URN: urn:nbn:de:bvb:20-opus-297288
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving closer together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been systematically studied regarding their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent that advised them on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users’ individual characteristics particularly reinforced the fulfillment of those objectives. The strongest predictor of objective fulfillment was the degree to which human characteristics were attributed to the agent: the more human characteristics were attributed, the more trust was placed in the agent, the more likely its advice was to be accepted and understood, and the better important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of how to design explanations for AI-based agent systems that take individual characteristics into account and meet the demand for both explainable and human-centered agent systems.

Metadata
Author: Carolin Wienrich, Astrid Carolus, David Roth-Isigkeit, Andreas Hotho
URN: urn:nbn:de:bvb:20-opus-297288
Document Type: Journal article
Faculties: Fakultät für Mathematik und Informatik / Institut für Informatik
Fakultät für Humanwissenschaften (Philos., Psycho., Erziehungs- u. Gesell.-Wissensch.) / Institut Mensch - Computer - Medien
Language: English
Parent Title (English): Multimodal Technologies and Interaction
ISSN: 2414-4088
Year of Completion: 2022
Volume: 6
Issue: 12
Article Number: 106
Source: Multimodal Technologies and Interaction (2022) 6:12, 106. https://doi.org/10.3390/mti6120106
DOI: https://doi.org/10.3390/mti6120106
Other Participating Institutions: Zentrum für soziale Implikationen künstlicher Intelligenz (SOCAI)
Dewey Decimal Classification: 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 006 Special computer methods
Tags: explainable AI; explanation complexity; human-centered AI; individual differences; recommender agent
Release Date: 2023/11/27
Date of First Publication: 2022/11/28
Licence: CC BY: Creative Commons Attribution 4.0 International