- Chris Stokel-Walker
- Newcastle upon Tyne
Cases are emerging of harm or suicide resulting from people’s dependent relationships with AI chatbots such as ChatGPT. Are these warning signs of a larger hidden problem? And, if so, what should regulators do? Chris Stokel-Walker investigates
A fifth of UK adults report having a common mental health concern, NHS figures indicate.1 That number is rising, from 15.5% of 16-64 year olds in 1993 to 22.6% in 2024.
Commensurate with that, demand for mental health services is also rising, up 21% since 2016.2
It’s little wonder, then, that people are seeking other solutions. And the rise of generative AI chatbots in the three years since the November 2022 release of OpenAI’s ChatGPT has provided many with an outlet to discuss their mental and emotional distress.
At first glance, generative AI chatbots seem to represent a perfect conversational partner for people struggling with their mental health: they're available 24/7 and, by design, are constantly supportive and endlessly patient. But concern is increasing that the use of chatbots to self-treat mental health problems is becoming part of the problem rather than a cure.
Scale of the problem
Early warning signs are emerging of the harms that can ensue when people use chatbots to try to self-manage mental health problems.
Several US teenagers, including 16 year old Adam Raine and 14 year old Sewell Setzer III, are known to have died by suicide after conversations with AI chatbots. Their parents have subsequently alleged that, far from helping their children with their mental health crises, the chatbots exacerbated or encouraged suicidal ideation.3
In another recent instance, 56 year old Stein-Erik Soelberg allegedly killed his mother and then himself after a paranoid spiral fuelled by conversations with AI chatbots.4
There are also cases of people experiencing …
