Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs
Event Title
Technology in the face of global challenges (Sociology of Science and Technology Network, RN24/SSTNET)
Year (definitive publication)
2025
Language
English
Country
--
More Information
Web of Science®
This publication is not indexed in Web of Science®
Scopus
This publication is not indexed in Scopus
Google Scholar
This publication is not indexed in Google Scholar
Overton
This publication is not indexed in Overton
Abstract
The growing adoption of Large Language Models (LLMs) has become a societal issue that requires deeper understanding. Trust enables engagement and adoption,
and is critical for safe human-AI collaboration. It is shaped not only by LLM capabilities
but also by interaction intent and by users’ pre-existing AI attitudes, familiarity, and
literacy. Previous research shows that text-based chatbots can become addictive for
certain interaction types. While prior usage and AI literacy positively correlate with
emotional dependence, attitudes can influence trust even before users actually engage with
a system. This study examines how different types of interaction (task-oriented vs.
reflexive) affect user trust in AI, while accounting for these user characteristics.
Participants engaged with ChatGPT through prompts that had been pilot-tested in
previous research. For each interaction type, assessments were administered in two
phases (pre and post). The study employed a 2 (Interaction type) × 2 (Phase)
within-subjects design. In addition, baseline measures of user characteristics (age,
gender, AI attitudes, familiarity, literacy) were collected. A total of 110 participants from
diverse nationalities and occupations completed the study. Despite their non-expert
backgrounds, participants reported high AI literacy, especially in critical appraisal, and
most used LLMs for both professional and personal purposes. Linear Mixed Models
revealed that pre-existing attitudes towards AI and age significantly predicted trust.
Interaction type and the interaction between assessment phase and interaction type
were strongly significant across all models, while interaction order was not. Overall,
AI trust was higher for task-oriented interactions. However, reflexive interactions led to
a significant post-exposure increase, suggesting that direct engagement can enhance
trust in specific contexts. Attitudes, shaped by prevailing societal discourse, appear
central to trust formation. AI literacy did not predict trust directly but may still foster
more responsible and critical AI use, calling for further exploration. The findings
highlight that trust is not an outcome determined solely by technological performance
but is dynamically shaped at the intersection of AI design, interaction intent, and user
context. This demands educational initiatives and public discourse to foster responsible
and critical engagement with LLMs and other AI agents.
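As a minimal illustration of the analysis the abstract describes, the Python sketch below fits a linear mixed model with a random intercept per participant to simulated data matching the 2 (Interaction type) × 2 (Phase) within-subjects design. All column names, covariate codings, and the simulated values are assumptions for illustration only; they do not reproduce the authors' data or exact model specification.

# Hypothetical sketch of the within-subjects LMM described in the abstract.
# Variable names and simulated values are illustrative assumptions, not the
# authors' actual data or model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 110  # number of participants, as reported in the abstract

# Each participant contributes four trust ratings: 2 interaction types x 2 phases.
design = pd.DataFrame(
    [(pid, itype, phase)
     for pid in range(n)
     for itype in ("task_oriented", "reflexive")
     for phase in ("pre", "post")],
    columns=["participant", "interaction_type", "phase"],
)

# Simulated person-level covariates standing in for the baseline measures.
covariates = pd.DataFrame({
    "participant": range(n),
    "age": rng.integers(18, 65, n),
    "attitudes": rng.normal(0, 1, n),   # pre-existing AI attitudes (standardized)
    "literacy": rng.normal(0, 1, n),    # AI literacy (standardized)
})
df = design.merge(covariates, on="participant")
df["trust"] = rng.normal(4, 1, len(df))  # placeholder outcome on a rating scale

# Random intercept per participant; fixed effects mirror the reported predictors,
# including the interaction-type x phase term the abstract highlights.
model = smf.mixedlm(
    "trust ~ interaction_type * phase + attitudes + age + literacy",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())

With real ratings in place of the placeholder outcome, the summary table would show the fixed-effect estimates for attitudes, age, interaction type, and the interaction-type × phase term that the abstract reports as significant.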
Acknowledgements
--
Keywords
Large Language Models, Trust, Artificial Intelligence, Literacy, Attitudes
Fields of Science and Technology Classification
- Psychology - Social Sciences
Contributions to the Sustainable Development Goals of the United Nations
--