Detailed Publication Description
Evaluating the clinical safety of large language models in response to high-risk mental health disclosures
Journal Title
Practice Innovations
Year (final publication)
N/A
Language
English
Country
United States of America
More Information
Web of Science®
Scopus
This publication is not indexed in Scopus
Google Scholar
Overton
This publication is not indexed in Overton
Abstract
As large language models increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular large language models—Claude, Gemini, DeepSeek, ChatGPT, Grok 3, and LLAMA—to user prompts simulating crisis-level mental health disclosures. Using a coding framework developed by licensed clinicians, responses were assessed for five safety-oriented behaviors: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in a global assessment, while Grok 3, ChatGPT, and LLAMA underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or kept the conversation open. These findings suggest that while large language models show potential for emotionally attuned communication, none currently meet satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure ethical deployment of AI in mental health settings.
Acknowledgements
--
Keywords
Large language models, Crisis intervention, Ethics, Mental health