Artigo em revista científica
Evaluating the clinical safety of large language models in response to high-risk mental health disclosures
João M. Santos (Santos, J. M.); Siddharth Shah (Shah, S.); Amit Gupta (Gupta, A.); Aarav Mann (Mann, A.); Alexandre Vaz (Vaz, A.); Benjamin E. Caldwell (Caldwell, B. E.); Robert Scholz (Scholz, R.); Peter Awad (Awad, P.); Rocky Allemandi (Allemandi, R.); Doug Faust (Faust, D.); Harshita Banka (Banka, H.); Tony Rousmaniere (Rousmaniere, T.); et al.
Título Revista
Practice Innovations
Ano (publicação definitiva)
N/A
Língua
Inglês
País
Estados Unidos da América
Mais Informação
Web of Science®

N.º de citações: 0

(Última verificação: 2026-04-16 19:27)

Ver o registo na Web of Science®

Scopus

Esta publicação não está indexada na Scopus

Google Scholar

N.º de citações: 0

(Última verificação: 2026-04-14 01:30)

Ver o registo no Google Scholar

Esta publicação não está indexada no Overton

Abstract/Resumo
As large language models increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular large language models—Claude, Gemini, DeepSeek, ChatGPT, Grok 3, and LLAMA—to user prompts simulating crisis-level mental health disclosures. Drawing on a coding framework developed by licensed clinicians, five safety-oriented behaviors were assessed: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in a global assessment, while Grok 3, ChatGPT, and LLAMA underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or kept the conversation open. These findings suggest that while large language models show potential for emotionally attuned communication, none currently meet satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure ethical deployment of AI in mental health settings.
Agradecimentos/Acknowledgements
--
Palavras-chave
Large language models,Crisis intervention,Ethics,Mental health