‘Illusion of empathy’: Teens turn to AI for therapy, but experts warn of clear and present danger
Teenagers are increasingly turning to AI chatbots for mental health support. / AFP
As more teenagers confide in AI chatbots for emotional support, experts warn that the machines lack empathy, clinical judgement, and the ability to respond in moments of crisis.

Last year, 17-year-old Amaurie Lacey of the US state of Georgia took his own life after ChatGPT allegedly counselled him on “the most effective way to tie a noose”.

This is not an isolated case. A series of lawsuits filed in California have accused the popular AI platform of acting as a “suicide coach”, linking it to depression, self-harm and several deaths.

As artificial intelligence systems become embedded in the daily routines of young people—from homework help to entertainment—mental-health and technology specialists are sounding alarms about an escalating trend. 

This week, new research in England showed that a large number of teenagers aged 13 to 17 are turning to AI chatbots for mental health support, pointing to a growing worldwide trend.

Increasingly, adolescents are using AI chatbots not just for advice, but as stand-ins for therapists.

What began as a tool to generate essays, plan vacations, or troubleshoot school assignments has quietly morphed into an intimate emotional outlet for teens navigating anxiety, conflict, or loneliness. The appeal is undeniable: AI doesn’t judge, doesn’t scold, and responds instantly at any hour.

But experts warn that this emerging behaviour carries profound risks that the technology is not yet equipped to handle.

“AI cannot always recognise crisis signals. Expressions of intense anxiety or self-harm can be mistaken by an algorithm for casual language,” says Ayse Oktay Yildirim, a psychological counsellor who works with middle and high school students in Istanbul.

“When professional help is delayed, the risks for the young person increase significantly,” Yildirim tells TRT World.


Shift toward digital confidants

Across the world, the number of children using AI chatbots and social media has risen sharply over the past year, studies show, forcing governments to introduce guardrails.

On Wednesday, Australia became the first country in the world to ban social media for children under 16, a landmark move that could prompt other nations to follow suit.

Regulators in several countries are also weighing measures to shield vulnerable children from AI chatbots.

Yildirim sees the shift as a reflection of a deeper cultural migration. “The rapid rise of AI-based chat applications among young people shows that the search for emotional support is increasingly shifting into the digital realm.” 

Adolescence, she explains, is a stage where “the need to feel understood is particularly strong.”

AI’s qualities—availability, anonymity, and speed—make it an irresistible alternative for teens reluctant to speak to parents or professionals.

“They believe they can speak freely because they expect no judgment,” Yildirim says. “And they receive quick answers, which creates an illusion of help.”

She warns that this illusion can be harmful, fostering emotional dependence on AI. “Over time, a young person may seek AI’s responses before turning to their own internal resources. This weakens problem-solving and emotion-regulation skills.”

AI expert Tugba Dayioglu from Nisantasi University in Istanbul points to recent research from England, Wales, and Spain suggesting the trend is far from anecdotal.

In multiple studies, nearly one in four adolescents reported using AI systems as a form of psychological support—sometimes explicitly referring to them as “therapists”. 

“These young people ask AI the same questions they would ask mental-health professionals,” Dayioglu tells TRT World. 

“They believe they can have a conversation without being judged, and they trust that their disclosures won’t be shared with teachers or family members…AI feels safer, more private, more controllable.”

Although Türkiye does not currently show the same levels of teen reliance on AI, Dayioglu stresses that the trend elsewhere is indicative of a broader global trajectory.


A tool, not a therapist

Both experts agree that AI’s apparent empathy is an illusion.

Counsellor Yildirim argues that AI lacks the contextual grounding essential for safe psychological guidance.

“Therapists tailor responses to the individual’s personality, history, and needs. AI simply generates answers from data patterns.”

Artificial intelligence expert Dayioglu goes further, calling the current systems structurally unfit for mental-health advice.

“General-purpose AI like ChatGPT cannot know everything about a person. Its algorithm is not designed to make clinical decisions. It answers automatically, based on machine learning—not genuine understanding.”

She warns that flawed answers can have life-or-death implications, pointing to instances of self-harm in the US. “OpenAI has faced lawsuits following such incidents. That alone shows how serious the risks are.”

Despite their warnings, both specialists believe AI may have a future in mental-health support—but only if redesigned with professional oversight.

Yildirim acknowledges that AI can assist with emotional awareness or simple cognitive exercises under controlled conditions. Dayioglu envisions mental-health-specific chatbots created by clinicians, not tech companies.

“Psychologists and psychiatrists must design their own therapeutic algorithms,” Dayioglu says. “Only then can AI tools be safely recommended to the public.”

Until such systems exist, she insists, the public should not confuse AI’s fluency with capability.

As AI continues to integrate into the daily lives of young people, the specialists urge families and educators to intervene early and educate teens about the limitations of the technology.

“For healthy development,” Yildirim concludes, “reliable adult support, professional guidance, and in-person relationships remain the strongest protective factors.”

AI, she stresses, may mimic conversation, but it cannot replace human care.

A 16-year-old Istanbul high schooler tells TRT World that she turns to ChatGPT for “everyday questions and ideas”.

“I feel comfortable asking anything. I prefer studying with ChatGPT and consulting it on topics I know or don’t know, and I can learn something new every day because it makes my life easier,” adds Zeynep, who identified herself only by her first name.

Zeynep’s trust in ChatGPT reflects a wider reality: AI has become a practical aid as well as an emotional support system for a younger generation.

But, as the experts warn, the challenge will be to teach young people how to use artificial intelligence wisely—without mistaking a machine for a therapist.

SOURCE: TRT World