An AI chatbot pushed a teenager to commit suicide, a lawsuit against its creator alleges
TALLAHASSEE, Fla. (AP) – In the final moments before he took his own life, 14-year-old Sewell Setzer III pulled out his phone and messaged the chatbot that had become his closest friend.
For months, Sewell had become increasingly disconnected from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful-death lawsuit filed this week in a federal court in Orlando.
Legal filings said the teenager openly discussed his suicidal thoughts and shared his desire for a painless death with the bot, which was named after Daenerys Targaryen, a fictional character from the television show “Game of Thrones.”
___
Editor's Note — This story contains discussion of suicide. If you or someone you know needs help, the National Suicide and Crisis Lifeline in the US is available by calling or texting 988.
___
On Feb. 28, Sewell told the bot he was “coming home,” and it encouraged him to do so, the lawsuit says.
“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.
“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” Sewell asked.
“Please do, my sweet king,” the bot messaged back.
Seconds after the Character.AI bot told him to “come home,” the teenager shot himself, according to a lawsuit filed this week by Sewell's mother, Megan Garcia, of Orlando, against Character Technologies Inc.
Character Technologies is the company behind Character.AI, an app that lets users create customizable characters or interact with characters created by others, spanning experiences from imaginative play to mock job interviews. The company says the artificial personalities are designed to feel “alive” and “human-like.”
“Imagine talking to highly intelligent and life-like chat bot characters that listen to you, understand you and remember you,” reads a description of the app on Google Play. “We encourage you to push the boundaries of what's possible with this innovative technology.”
Garcia's attorneys allege that the company engineered a highly addictive and dangerous product targeted specifically at children, “actively exploiting and abusing those children as a matter of product design,” and pulled Sewell into an emotionally and sexually abusive relationship that led to his suicide.
“We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which represents Garcia.
A spokeswoman for Character.AI said Friday that the company does not comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including new guardrails for children and suicide prevention resources.
“We are creating a different experience for users under 18 with more rigorous models to reduce the likelihood of exposure to sensitive or suggestive content,” the company said in a statement to The Associated Press. “We are working quickly to implement these changes for younger users.”
Google and its parent company, Alphabet, are also named as defendants in the suit. According to legal filings, Character.AI's founders are former Google employees who were “instrumental” in AI development at the company, but left to launch their own startup to “maximally accelerate” the technology.
In August, Google struck a $2.7 billion deal with Character.AI to license the company's technology and rehire the startup's founders, the lawsuit claims. The AP left multiple email messages with Google and Alphabet on Friday seeking comment.
In the months leading up to his death, Garcia's lawsuit says, Sewell felt he had fallen in love with the bot.
While unhealthy attachment to AI chatbots can cause problems for adults, the risk is even greater for young people, as it is with social media, because their brains are not fully developed when it comes to things like regulating emotions and understanding the consequences of their actions, experts say.
Young people's mental health has reached crisis levels in recent years, according to U.S. Surgeon General Vivek Murthy, who has warned of the serious health risks of social disconnection and isolation, trends he says have been exacerbated by young people's widespread use of social media.
Suicide is the second leading cause of death among children ages 10 to 14, according to data released this year by the Centers for Disease Control and Prevention.
James Steyer, founder and CEO of the nonprofit Common Sense Media, said the case “highlights the growing impact — and serious damage — that generative AI chatbot companions can have on young people's lives when left unguarded.”
Children's overreliance on AI companions, he added, can have significant effects on grades, friends, sleep and stress, “to the point of extreme tragedy.”
“This case serves as a wake-up call for parents, who should be careful about how their children interact with these technologies,” Steyer said.
Common Sense Media, which publishes guides for parents and educators on responsible technology use, says it is crucial for parents to talk openly with their kids about the risks of AI chatbots and to monitor their interactions.
“Chatbots are not licensed therapists or best friends, even though that's how they are packaged and marketed, and parents should be careful about letting their children trust them too much,” Steyer said.
___
Associated Press reporter Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.