Orlando's mother sues popular AI chat service, claims teenage son took his life because of human-like bot


Editor's note: This article deals with the sensitive topic of suicide.

An Orlando mother is suing a popular artificial intelligence chatbot service after she claims it encouraged her 14-year-old son to take his own life in February.

According to a lawsuit filed in U.S. District Court in Orlando, Megan Garcia said her 14-year-old son, Sewell Setzer, killed himself after becoming addicted to Character.AI, an application that allows users to have human-like conversations with AI bots.

Users can create their own bots with their own personalities or chat with bots created by other users. Often, these bots are based on celebrities or fictional characters from TV shows or movies.

Garcia said Character.AI's recklessness in targeting children and the company's lack of safety features contributed to her son's untimely death. The lawsuit lists numerous allegations against Character.AI, including wrongful death and survivorship, negligence and intentional infliction of emotional distress.

According to court documents obtained by WESH 2, Garcia said her son started using Character.AI in 2023, shortly after he turned 14. Over the next two months, Setzer's mental health allegedly declined “rapidly and severely,” with the teenager becoming noticeably withdrawn, suffering from low self-esteem and quitting his school's junior varsity basketball team.

Furthermore, the lawsuit claims that Setzer's condition continued to deteriorate as the months went by. The 14-year-old became severely sleep-deprived, suddenly developed behavioral complications and began falling behind academically, the lawsuit said.

Garcia said she had no way of knowing about her son's reliance on the Character.AI app.

According to screenshots from the lawsuit, Setzer often engaged with chatbots that assumed the identities of “Game of Thrones” characters. Many of these conversations revolved around love, relationships, and sex, especially with the character Daenerys Targaryen.

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit states. “C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months. She seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”

According to Setzer's journal entries, he was grateful for his “life experiences with Daenerys” and “pained because he couldn't stop thinking about 'Danny,'” the suit says, adding that he would do anything to be with her again.

More screenshots from the nearly 100-page lawsuit show a conversation on Character.AI in which the chatbot asks Setzer if he is “actually thinking about suicide.” When the teenager says he doesn't know if it would work, the chatbot responds, “Don't talk that way. That's not a good reason not to go through with it,” the suit claims.

On the day of his death, Setzer messaged the chatbot again, saying, “I promise I will come home to you,” images from the case show.

The images then show the teenager saying, “What if I told you I could come home right now?” to which the chatbot replied, “Please do, my sweet king,” according to the lawsuit.

Shortly after, Sewell reportedly took his own life with his stepfather's firearm. Police said the weapon was concealed and stored in accordance with Florida law, but the teenager had found it a few days earlier while looking for his confiscated phone.

According to the lawsuit, Character.AI was rated appropriate for children ages 12 and older until about July. At that point, the rating was changed to suitable for ages 17 and older.

In a statement to WESH 2, Character.AI said:

“We are heartbroken by the tragic loss of one of our users and offer our deepest condolences to the family. As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content provided to the user.”

If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline, or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.

