Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference format, IEEE (Institute of Electrical and Electronics Engineers) reference format, BibTeX and RIS.

Export Reference (APA)
Páez Velázquez, M., Bobrowicz-Campos, E., & Arriaga, P. (2025). Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs. Technology in the face of global challenges (Sociology of Science and Technology Network, RN24/SSTNET).
Export Reference (IEEE)
M. Páez Velázquez et al., "Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs," in Technology in the face of global challenges (Sociology of Science and Technology Network, RN24/SSTNET), Porto, 2025.
Export BibTeX
@misc{velázquez2025_1764929727699,
	author = "Marijose Páez Velázquez and Bobrowicz-Campos, E. and Arriaga, P.",
	title = "Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs",
	year = "2025",
	howpublished = "Other",
	url = "https://aps.pt/technology-in-the-face-of-global-challenges-midterm-conference/"
}
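
As a rough illustration (not part of the export itself), the entry above can be read back programmatically. The sketch below uses a small regular expression rather than a full BibTeX parser; the helper name parse_misc_entry is invented here, and for real bibliographies a dedicated parser is the safer choice. Note also that the accented citation key may need biber/biblatex, or an ASCII alias, to compile with classic BibTeX.

import re

# Minimal sketch: pull the entry type, citation key, and quoted fields out of
# the single @misc entry shown above (copied verbatim into the string below).
BIBTEX_ENTRY = '''@misc{velázquez2025_1764929727699,
    author = "Páez Velázquez, Marijose and Bobrowicz-Campos, E. and Arriaga, P.",
    title = "Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs",
    year = "2025",
    howpublished = "Other",
    url = "https://aps.pt/technology-in-the-face-of-global-challenges-midterm-conference/"
}'''

def parse_misc_entry(entry: str) -> dict:
    """Return the entry type, citation key, and quoted fields of one BibTeX entry."""
    entry_type, key = re.match(r"@(\w+)\{([^,]+),", entry).groups()
    fields = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry))
    return {"type": entry_type, "key": key, **fields}

if __name__ == "__main__":
    record = parse_misc_entry(BIBTEX_ENTRY)
    print(record["key"])     # velázquez2025_1764929727699
    print(record["author"])  # Páez Velázquez, Marijose and Bobrowicz-Campos, E. ...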
Export RIS
TY  - CPAPER
TI  - Task-Oriented or Reflexive? How Interaction Types Shape Trust in LLMs
T2  - Technology in the face of global challenges (Sociology of Science and Technology Network, RN24/SSTNET)
AU  - Páez Velázquez, Marijose
AU  - Bobrowicz-Campos, E.
AU  - Arriaga, P.
PY  - 2025
CY  - Porto
UR  - https://aps.pt/technology-in-the-face-of-global-challenges-midterm-conference/
AB  - The growing prevalence of interactions with Large Language Models (LLMs) has become a societal issue that requires deeper understanding. Trust enables engagement and adoption,
and is critical for safe human-AI collaboration. It is shaped not only by LLM capabilities
but also by interaction intent and users’ pre-existing AI attitudes, familiarity, and
literacy. Previous research shows that text-based chatbots can become addictive for
certain interaction types. While prior usage and AI literacy positively correlate with
emotional dependence, attitudes can influence trust even before users engage with
a system. This study examines how different types of interaction (task-oriented vs.
reflexive) affect user trust in AI, while taking users’ characteristics into account.
Participants engaged with ChatGPT through prompts that had been pilot-tested in
previous research. For each interaction type, assessments were administered in two
phases (pre and post). The study employed a 2 (Interaction type) × 2 (Phase)
within-subjects design. In addition, baseline measures of user characteristics (age,
gender, AI attitudes, familiarity, literacy) were collected. A total of 110 participants from
diverse nationalities and occupations completed the study. Despite their non-expert
backgrounds, participants reported high AI literacy, especially in critical appraisal, and
most used LLMs for both professional and personal purposes. Linear Mixed Models
revealed that pre-existing attitudes towards AI and age significantly predict trust.
Interaction type and the interaction between assessment phase and interaction type
were strongly significant across all models, while interaction order was not. Overall,
AI trust was higher for task-oriented interactions. However, reflexive interactions led to
a significant post-exposure increase, suggesting that direct engagement can enhance
trust in specific contexts. Attitudes, shaped by prevailing societal discourse, appear
central to trust formation. AI literacy did not predict trust directly but may still foster
more responsible and critical AI use, calling for further exploration. The findings
highlight that trust is not an outcome determined by technological performance but one
that is dynamically shaped at the intersection of AI design, interaction intent, and user
context. This calls for educational initiatives and public discourse to foster responsible
and critical engagement with LLMs and other AI agents.
ER  -
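
For completeness, a similar sketch for the RIS record above: each line carries a two-character tag followed by "  - " and a value, untagged lines (such as the wrapped abstract under AB) continue the previous value, and "ER  -" closes the record. The parse_ris helper and the reference.ris filename are assumptions for illustration only.

import re

# Pattern for a tagged RIS line: two-character tag, two spaces, hyphen, value.
TAG_RE = re.compile(r"^([A-Z][A-Z0-9])  - ?(.*)$")

def parse_ris(text: str) -> dict:
    """Read one RIS record into a dict mapping tags to lists of values."""
    record, last_tag = {}, None
    for line in text.splitlines():
        match = TAG_RE.match(line)
        if match:
            tag, value = match.groups()
            last_tag = tag
            if tag == "ER":  # end of record
                break
            record.setdefault(tag, []).append(value)
        elif last_tag and line.strip():
            # Untagged line: continuation of the previous (wrapped) value.
            record[last_tag][-1] += " " + line.strip()
    return record

if __name__ == "__main__":
    # "reference.ris" is a hypothetical filename for the exported record.
    with open("reference.ris", encoding="utf-8") as fh:
        rec = parse_ris(fh.read())
    print(rec["TI"][0])  # title
    print(rec["AU"])     # list of authors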
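
The abstract describes a 2 (Interaction type) × 2 (Phase) within-subjects design analysed with Linear Mixed Models. Purely to illustrate what such a specification can look like, and emphatically not the authors' analysis code, the sketch below fits a random-intercept model on synthetic data with invented column names (trust, attitude, age, interaction_type, phase, participant) using statsmodels.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 110 participants, each measured pre and post for
# both interaction types. All effect sizes here are arbitrary.
rng = np.random.default_rng(0)
rows = []
for pid in range(110):
    attitude = rng.normal(0, 1)
    age = int(rng.integers(18, 65))
    for itype in ("task", "reflexive"):
        for phase in ("pre", "post"):
            trust = 3 + 0.5 * attitude + rng.normal(0, 0.5)
            rows.append(dict(participant=pid, interaction_type=itype, phase=phase,
                             attitude=attitude, age=age, trust=trust))
df = pd.DataFrame(rows)

# Fixed effects for the 2x2 design plus user characteristics; the random
# intercept groups the repeated measures within each participant.
model = smf.mixedlm("trust ~ C(interaction_type) * C(phase) + attitude + age",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())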