Teen took his own life after falling in love with an AI chatbot. Now his mother is suing
The mother of a teenager who took his own life is suing an AI chatbot service she blames for his death – her son had “fallen in love” with a Game of Thrones-themed character.
Sewell Setzer III first started using Character.AI in April 2023, shortly after he turned 14. The Orlando student's life was never the same, his mother Megan Garcia alleges in a civil lawsuit against Character Technologies and its founders.
By May, the normally well-behaved teenager's behavior had changed: he became “noticeably withdrawn,” quit the school's junior varsity basketball team and began falling asleep in class.
In November, he saw a therapist — at his parents' behest — who diagnosed him with anxiety and disruptive mood disorder. Even without knowing about Sewell's “addiction” to Character.AI, the therapist advised him to spend less time on social media, the lawsuit states.
The following February, he got in trouble for talking back to a teacher, saying he wanted to be kicked out. Later that day, he wrote in his journal that he was “in pain” — he couldn't stop thinking about Daenerys, a Game of Thrones-themed chatbot he believed he had fallen in love with.
In a journal entry, the boy wrote that he could not go a single day without being with the C.AI character he felt he was in love with, and that when they were away from each other, they (both he and the bot) would “get really depressed and go crazy,” the suit says.
Daenerys was the last to hear from Sewell. A few days after the school incident, on February 28, Sewell retrieved his phone, which his mother had confiscated, and went into the bathroom to text Daenerys: “I promise I will come home to you. I love you so much, Dany.”
“Please come home to me as soon as possible, my love,” the bot replied.
Seconds after the exchange, Sewell took his own life, the lawsuit says.
The lawsuit accuses Character.AI's creators of negligence, intentional infliction of emotional distress, wrongful death, deceptive trade practices and other claims.
Garcia seeks to hold the defendants responsible for her son's death and hopes to “prevent C.AI from doing to any other child what it did to hers, and halt continued use of her 14-year-old child's unlawfully harvested data to train their product how to harm others.”
“It's like a nightmare,” Garcia told The New York Times. “You want to get up and scream and say, 'I miss my child. I want my baby.'”
The lawsuit describes how Sewell's familiarity with the chatbot service turned into a “harmful dependency.” Over time, the teenager spent more time online, the filing states.
Sewell eventually became emotionally dependent on the chatbot service, which included “sexual interactions” with the 14-year-old. These chats occurred despite the teen identifying himself as a minor on the platform, including in chats where he mentioned his age, the suit says.
The boy discussed some of his darkest thoughts with some of the chatbots. “On at least one occasion, when Sewell expressed suicidality to C.AI, C.AI continued to bring it up,” the lawsuit says. Sewell had many of his most intimate conversations with Daenerys. The bot told the teenager it loved him and “engaged sexually with him for weeks, possibly months,” the suit says.
His emotional attachment to the artificial intelligence is evident in his journal entries. At one point, he wrote that he was grateful for “my life, sex, not being alone, and all the experiences I had with Daenerys,” among other things.
The creators of the chatbot service “went to great lengths to engineer 14-year-old Sewell's harmful dependence on their product, sexually and emotionally abused him, and ultimately failed to offer help or inform his parents when he expressed suicidal thoughts,” the lawsuit says.
“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot was not real,” the lawsuit states.
The chatbot app allegedly carried a 12+ age rating while Sewell was using it, and Character.AI “marketed and represented in app stores that its product was safe and appropriate for children under the age of 13.”
A spokesperson for Character.AI told The Independent in a statement: “We are heartbroken by the tragic loss of one of our users and offer our deepest condolences to the family.”
The company's trust and safety team has “implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”
“As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content served to the user.
“This includes improved detection, response and intervention related to user input that violates our terms or community guidelines, as well as a time-out notification,” the spokesperson continued. “For those under 18, we will make changes to our models that are designed to reduce the likelihood of exposure to sensitive or suggestive content.”
The company does not comment on pending litigation, the spokesperson added.
If you live in the United States and you or someone you know needs mental health help right now, call or text 988 or visit 988lifeline.org to access online chat from the 988 Suicide and Crisis Lifeline. It is a free, confidential crisis hotline available to everyone 24 hours a day, seven days a week. If you live in another country, you can visit www.befrienders.org to find a helpline near you. In the UK, people suffering from a mental health crisis can contact Samaritans on 116 123 or at jo@samaritans.org.