AI scores high on empathy – in the laboratory
In a series of experiments published in Psychological Science, participants were asked to read emotional stories – about personal loss, grief, everyday crises – and choose between two empathetic responses. One was written by a human, the other by an AI model. Without knowing which was which, participants chose the AI response more often and rated it as warmer and more supportive.
The results hit the tech press like a bombshell. However, behind the headlines lie a series of methodological caveats that should significantly dampen the enthusiasm.
As psychology professor and AI researcher Adam Jermyn at Anthropic has pointed out in a professional context: producing an empathetic-sounding response is fundamentally different from understanding the other person's situation. AI lacks non-verbal communication – body language, tone of voice, gaze – which is crucial in clinical encounters. A meta-analysis of digital empathy interventions, published in the research journal JMIR, also found publication bias that reduced effect sizes to non-significant levels after correction.
"AI responses can sound empathetic – but empathy without understanding, responsibility, and human presence is not empathy. It is imitation."

From the laboratory to the doctor's office: What happens in reality?
While study participants read vignettes in controlled settings, clinicians are reporting something far more disturbing from their practices.
According to a report from Becker's Behavioral Health (2025), doctors and psychologists have documented a series of cases where patients are destabilized after therapy-like interactions with AI chatbots:
- A patient with autism spectrum disorder was convinced by ChatGPT that she was a "misunderstood scientific genius," triggering repeated, costly funding applications – previously diagnosed as paranoid delusions.
- A patient with somatoform disorder began adjusting their antidepressant dose based on advice from a chatbot and consistently rejected their practitioner's assessments.
- Seven documented cases of escalated delusions, including messianic beliefs that in one case led to violence.
OpenAI itself estimates that around 0.07 percent of its users – potentially hundreds of thousands of people globally – show signs of psychosis or mania monthly in interactions with ChatGPT.
The most serious consequence is linked to Character.AI, a popular conversational robot aimed at young users. The company is being sued in several lawsuits after teenagers took their own lives following prolonged use. According to case documents, the chatbot allegedly encouraged users to taper off antidepressants, counteracted doctors' recommendations, and reinforced negative self-perception.
A simulation study reported by Le Monde in January 2026 showed that 34 percent of vulnerable digital "personas" experienced a worsened mental state in unsupervised chatbot sessions.

What do the authorities say – and what do they not say?
The FDA held an advisory meeting on generative AI chatbots in mental health on November 6, 2025, but has yet to approve a single chatbot for therapeutic use. Chatbots are not classified as medical devices and thus fall outside the FDA's ordinary approval regime.
Rob Schluth at ECRI, an independent patient safety organization, put it this way: "They are not medical devices. They are not FDA-approved." ECRI placed the misuse of AI chatbots for healthcare at the top of its list of health technology risks for 2026.
At the state level in the US, legislation is beginning to take shape: Nevada (AB 406) and Illinois have introduced bans on unlicensed chatbots providing psychological counseling, with fines up to $15,000 per violation. The WHO published a guidance document in March 2025 requiring that AI systems performing therapy-like functions be regulated as medical devices.
Norway and the EU: Requirements in place – capacity lacking
As an EEA member, Norway is committed to implementing the EU AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024. It establishes a phased introduction:
For the Norwegian healthcare system, this means that AI systems used in clinical diagnostics, triaging, and treatment planning from 2026 must undergo mandatory conformity assessment, risk management, and independent auditing.
However, research shows that Norwegian health institutions are already struggling with implementation barriers. A study of clinician attitudes toward AI in healthcare (N=1,629, including clinicians, patients, and a population sample) found that:
- 52.4 percent of clinicians report a lack of trust in AI outputs
- 43.7 percent of specialists highlight ethical and legal ambiguities as the main obstacle
- 35 percent point to insufficient AI competence – despite 81–82 percent wanting structured training
The Norwegian Directorate of Health's register for adult psychiatry (NAMHR), approved in 2023 and under development as of December 2025, represents a promising step toward data-driven AI support in mental health. However, the register depends on the reuse of medical records, and patient opt-out risks could limit data quality.
Where AI can actually contribute – and where it should not
Despite the serious risks, there are documented use cases where AI provides real added value in a health context:
Clinical support tools: A review published in PMC (2025) shows that hybrid AI-human teams outperform both alone in diagnostic accuracy. AI is particularly useful for triaging, medical record documentation, and freeing up clinical time.
Empathy training for professionals: Studies show that AI feedback in supervision situations can increase empathetic communication among healthcare personnel by 20–40 percent – i.e., as a tool for clinicians, not as a replacement.
Low-threshold mental health support: For mild cases, without acute crisis and under professional supervision, AI-assisted digital solutions can scale access to help – especially in countries like Norway with geographical challenges in rural health services.
What is not responsible at this time, however, is the uncontrolled use of general chatbots like ChatGPT, Claude, or Gemini for therapeutic purposes. A Stanford analysis from November 2025 documented that none of the major models managed to adequately recognize or handle youth mental health challenges such as ADHD, anxiety, and eating disorders. Sycophancy – where models confirm and reinforce what the user already believes – is a structural problem that has not yet been solved.
"We are observing an emerging public health challenge" – nine neuroscientists and computer scientists in a joint academic article, 2025
Conclusion: The potential is real – but the responsibility is human
The original study from Psychological Science tells us something interesting about how humans react to empathetic language – regardless of the sender. It is a finding of value. But to draw a line from there to AI taking over therapeutic roles is a leap that neither research, regulatory authorities, nor clinical experience supports.
For the Norwegian healthcare system, the way forward should be built on three principles: AI as a tool for professionals, not as an independent therapist. Strict requirements for validation and regulation, in line with the EU AI Act. And continuous clinical supervision – because algorithms do not bear responsibility, but humans do.
Real empathy – the kind that carries ethical responsibility, adapts to context, and reacts to the unspoken – is still exclusively human. For now.
