Towards a Better Understanding of Information Disclosure in Human-AI Interactions

Citable link (URI): http://hdl.handle.net/10900/151416
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1514169
http://dx.doi.org/10.15496/publikation-92756
Document type: Dissertation
Date of publication: 2026-01-28
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Psychology
Reviewer: Sassenberg, Kai (Prof. Dr.)
Date of oral examination: 2024-01-29
DDC classification: 150 - Psychology
License: http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en

Abstract:

Humans interact ever more frequently with increasingly capable AI (artificial intelligence) systems. While these interactions offer many opportunities, they also come with certain risks. This is particularly evident when AI is designed to provide personalized output: on the one hand, people can benefit from personalized interactions (e.g., receiving personalized recommendations or a personalized usage experience when using conversational AI); on the other hand, personalization requires the collection of personal data, which is often accompanied by concerns for users' privacy. As people nonetheless often use such AI systems and willingly disclose their personal data, the current dissertation aimed to develop a better understanding of disclosure in the increasingly common context of human-AI interactions. For this purpose, this dissertation used a human-oriented approach building on the ideas of the computers-as-social-actors paradigm (Nass & Moon, 2000; Reeves & Nass, 1996) and anthropomorphism (Epley et al., 2007).

Three empirical chapters investigated how perceptions of (1) the interaction partner (i.e., the technology and its provider) and (2) the interaction (i.e., perceived interactivity and output quality) are related to individuals' decisions to disclose personal information in private use contexts of AI. Furthermore, considering that individuals cannot always decide whether AI is used (for instance, in the work context), this dissertation also addressed the perspective of decision-makers (i.e., managers) and how their AI acceptance is associated with perceptions of AI and the relevance of personal data in certain business areas.

Taken together, the findings of the current dissertation highlight that the human-oriented approach (i.e., focusing on how humans perceive the interaction partner and the interaction rather than on the technical details of implementation) is useful for gaining a better understanding of disclosure in human-AI interactions. Across several empirical studies, the current thesis showed that perceptions of the interaction partner (i.e., the technology and the provider) as well as of the interaction (i.e., the perceived interactivity and output quality) seem important for individuals' disclosure (and for decision-makers' AI acceptance). Most importantly, it became evident that the capabilities of the AI as an interaction partner can evoke positive reactions (e.g., users were more willing to disclose information when they perceived intellectual competencies in a conversational AI) but, once those capabilities cross a critical boundary, can also lead to adverse reactions (e.g., non-users being less willing to disclose, users and non-users being more concerned about privacy when perceiving meta-cognitive heuristics in conversational AI, and managers reporting lower acceptance of AI implementation). Thus, very high AI capabilities might not always be perceived as desirable. Accordingly, future research needs to develop a better understanding of why such negative reactions arise and of how humans can interact safely with AI technologies, harnessing the full potential of these increasingly capable technologies while avoiding, or at least minimizing, the associated risks and reducing adverse reactions.
