Ciência_Iscte
Communications
Detailed Description of the Communication
Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs
Event Title
Technology in the face of global challenges (Sociology of Science and Technology Network, RN24/SSTNET)
Year (definitive publication)
2025
Language
English
Country
--
More Information
Web of Science®
This publication is not indexed in Web of Science®
Scopus
This publication is not indexed in Scopus
Google Scholar
This publication is not indexed in Google Scholar
Overton
This publication is not indexed in Overton
Abstract
The growing adoption of Large Language Models (LLMs) has become a societal issue that requires deeper understanding. Trust enables engagement and adoption,
and is critical for safe human-AI collaboration. It is shaped not only by LLM capabilities
but also by interaction intent and users’ pre-existing AI attitudes, familiarity, and
literacy. Previous research shows that text-based chatbots can become addictive for
certain interaction types. While prior usage and AI literacy positively correlate with
emotional dependence, attitudes can influence trust even before users engage with
a system. This study examines how different types of interaction (task-oriented vs.
reflexive) affect user trust in AI, while taking user characteristics into account.
Participants engaged with ChatGPT through prompts that had been pilot-tested in
previous research. For each interaction type, assessments were administered at two
phases (pre, post). The study employed a 2 (Interaction type) × 2 (Phase)
within-subjects design. In addition, baseline measures of user characteristics (age,
gender, AI attitudes, familiarity, literacy) were collected. A total of 110 participants from
diverse nationalities and occupations completed the study. Despite their non-expert
backgrounds, participants reported high AI literacy, especially in critical appraisal, and
most used LLMs for both professional and personal purposes. Linear Mixed Models
revealed that pre-existing attitudes towards AI and age significantly predicted trust.
Interaction type and the interaction between assessment phase and interaction type
were strongly significant across all models, while interaction order was not. Overall,
AI trust was higher for task-oriented interactions. However, reflexive interactions led to
a significant post-exposure increase, suggesting that direct engagement can enhance
trust in specific contexts. Attitudes, shaped by prevailing societal discourse, appear
central to trust formation. AI literacy did not predict trust directly but may still foster
more responsible and critical AI use, calling for further exploration. The findings
highlight that trust is not an outcome determined by technological performance but is
dynamically shaped at the intersection of AI design, interaction intent, and user context.
Fostering responsible and critical engagement with LLMs and other AI agents
demands educational initiatives and public discourse.
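As an illustration of the reported analysis, a minimal Python sketch of a Linear Mixed Model for a 2 (interaction type) × 2 (phase) within-subjects design with participant-level covariates could be specified with statsmodels as below. This is not the authors' analysis code; the variable names (trust, phase, itype, attitudes, age, pid) and the simulated placeholder data are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data in the shape of the reported design:
# 110 participants x 2 interaction types x 2 assessment phases.
rng = np.random.default_rng(0)
n = 110
pid = np.repeat(np.arange(n), 4)                   # participant identifier
itype = np.tile(["task", "reflexive"], 2 * n)      # interaction type
phase = np.tile(np.repeat(["pre", "post"], 2), n)  # assessment phase
age = np.repeat(rng.integers(18, 66, n), 4)        # participant-level covariate
attitudes = np.repeat(rng.normal(0, 1, n), 4)      # pre-existing AI attitudes (standardised)
trust = rng.normal(4, 1, 4 * n)                    # placeholder trust ratings

df = pd.DataFrame({"pid": pid, "itype": itype, "phase": phase,
                   "age": age, "attitudes": attitudes, "trust": trust})

# Fixed effects: phase, interaction type, their interaction, and covariates;
# a random intercept per participant accounts for the repeated measures.
model = smf.mixedlm("trust ~ phase * itype + attitudes + age",
                    data=df, groups=df["pid"])
print(model.fit().summary())

In this sketch, the phase × interaction-type term corresponds to the reported interaction between assessment phase and interaction type, and the random intercept captures between-participant variability in baseline trust.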
Acknowledgements
--
Keywords
Large Language Models, Trust, Artificial Intelligence, Literacy, Attitudes
Fields of Science and Technology Classification
- Psychology - Social Sciences
Contributions to the United Nations Sustainable Development Goals