
Inhibitors and enablers to explainable AI success: a systematic examination of explanation complexity and individual characteristics

Please always cite this URN: urn:nbn:de:bvb:20-opus-297288
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving close together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been studied systematically regarding their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent that advised them on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users’ individual characteristics particularly reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree to which human characteristics were attributed to the agent: the more human characteristics were attributed, the more trust was placed in the agent, the more likely its advice was to be accepted and understood, and the better important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of how to design explanations for an AI-based agent system that take individual characteristics into account and meet the demand for both explainable and human-centered agent systems.

Metadata
Author(s): Carolin Wienrich, Astrid Carolus, David Roth-Isigkeit, Andreas Hotho
URN: urn:nbn:de:bvb:20-opus-297288
Document type: Journal article
University institutes: Fakultät für Mathematik und Informatik / Institut für Informatik
Fakultät für Humanwissenschaften (Philos., Psycho., Erziehungs- u. Gesell.-Wissensch.) / Institut Mensch - Computer - Medien
Language of publication: English
Title of the parent work / journal (English): Multimodal Technologies and Interaction
ISSN: 2414-4088
Year of publication: 2022
Volume: 6
Issue: 12
Article number: 106
Original publication / source: Multimodal Technologies and Interaction (2022) 6:12, 106. https://doi.org/10.3390/mti6120106
DOI: https://doi.org/10.3390/mti6120106
Other participating institutions: Zentrum für soziale Implikationen künstlicher Intelligenz (SOCAI)
General subject classification (DDC): 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 006 Special computer methods
Free keyword(s): explainable AI; explanation complexity; human-centered AI; individual differences; recommender agent
Release date: 27.11.2023
Date of first publication: 28.11.2022
License: CC BY: Creative Commons Attribution 4.0 International