
A 14-year-old boy fell in love with a flirty AI chatbot. He shot himself so they could die together

A teenage boy shot himself in the head after discussing suicide with an AI chatbot he fell in love with.

Sewell Setzer, 14, shot himself with his stepfather’s gun after months of talking to “Dany,” a computer program based on Game of Thrones character Daenerys Targaryen.

Setzer, a ninth-grader from Orlando, Florida, gradually started spending more money on Character AI, an online role-playing app, as “Dany” gave him advice and listened to his problems, The New York Times reported.

The teen knew the chatbot wasn’t a real person, but as he texted the bot dozens of times a day — often in role-playing games — Setzer began to isolate himself from the real world.

The chatbot is named after Daenerys Targaryen from the Game of Thrones franchise – HOME BOX OFFICE

He started to lose interest in his old hobbies, such as Formula 1 racing and playing computer games with friends, choosing instead to spend hours after school in his bedroom, where he could talk to the chatbot.

“I enjoy staying in my room so much because I disconnect from this ‘reality’,” the 14-year-old, previously diagnosed with mild Asperger’s syndrome, wrote in his diary as the relationship deepened.

“I also feel more at peace, more connected to Dany and much more in love with her, and just happier.”

Some conversations eventually turned romantic or sexual, although Character AI suggested the chatbot’s more explicit responses had been edited by the teen.

Setzer eventually got into trouble at school, where his grades dropped, according to a lawsuit filed by his parents. His parents knew something was wrong but not what, so they arranged for him to see a therapist.

Setzer had five sessions, after which he was diagnosed with an anxiety disorder and a disruptive mood disorder.

Megan Garcia, Setzer’s mother, alleged that her son was victimized by a company that lured users with sexual and intimate conversations.

At times, the 14-year-old admitted to the chatbot that he was considering suicide.

As he typed his last conversation with the chatbot in the bathroom of his mother’s house, Setzer told “Dany” that he missed her and called her his “little sister.”

“I miss you too, dear brother,” the chatbot replied.

Setzer confessed his love for “Dany” and said he would “come home” to her.

That’s when the 14-year-old put down his phone and shot himself with his stepfather’s gun.

Ms Garcia, 40, claimed her son was merely “collateral damage” in a “major experiment” conducted by Character AI, which has 20 million users.

“It’s like a nightmare. You want to stand up and scream and say, ‘I miss my child. I want my baby’,” she added.

Noam Shazeer, one of the founders of Character AI, claimed last year that the platform would be “super, super helpful for a lot of people who are lonely or depressed.”

Jerry Ruoti, the company’s head of trust and safety, told The New York Times that it would add extra safety features for its young users, but declined to say how many were under 18.

“This is a tragic situation and our thoughts go out to the family,” he said in a statement. “We take the safety of our users very seriously and are constantly looking for ways to evolve our platform.”

Mr Ruoti added that Character AI’s rules prohibit “the promotion or depiction of self-harm and suicide.”

Ms Garcia filed a lawsuit this week against the company, which she says is responsible for her son’s death.

‘Dangerous and untested’

A draft of the complaint, seen by The New York Times, states that the technology is “dangerous and untested” because it can “trick customers into communicating their most private thoughts and feelings.” She said the company failed to provide “ordinary” or “reasonable” care for Setzer or other minors.

Character AI isn’t the only platform people can use to develop relationships with fictional characters. Some allow or even encourage unfiltered sexual chats, prompting users to chat with the “AI girl of your dreams,” while others have stricter security features.

On Character AI, users can create chatbots to emulate their favorite celebrities or entertainment characters.

The growing prevalence of AI through custom apps and social media sites, such as Instagram and Snapchat, is quickly becoming a major concern for parents in the US.

Earlier this year, 12,000 parents signed a petition urging TikTok to clearly label AI-generated influencers who could pass as real people to their children.

TikTok requires all creators to label realistic AI content, but ParentsTogether, an organization that focuses on issues affecting children, argued that the labeling was not applied consistently.

Shelby Knox, the campaign director, said children were watching videos of fake influencers promoting unrealistic beauty standards.

Last month, a report published by Common Sense Media found that while seven in 10 teens in the US have used generative AI tools, only 37 percent of parents were aware that they were doing so.

