The lawsuit filed after the tragic suicide of teenager Sewell Setzer in February, following his online relationship with an AI character, is now gaining national attention.
Setzer’s mother, Megan Garcia, has taken legal action against Character.ai, holding the company’s chatbot, an artificial intelligence designed to simulate conversation with users, accountable for initiating “abusive and sexual interactions” with her son and even encouraging him to take his own life on the day of his death.
Since the reported issues, Character.ai has publicly stated that it is taking action, and plans to update guidelines and remove harmful user-generated chatbots.
Character.ai stated on social media platform X on Oct. 21 that it is “heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.” To the post, it attached a link to its updated community safety guidelines that would remove any existing harmful bots, along with other changes to ensure vulnerable users are not pointed down the wrong path.
While Character.ai has attempted to make a convincing case that it is addressing the problem, a number of devastating incidents involving minors have since come to light.
The most recent case involved a 17-year-old boy with autism, referred to only as “JF” in court documents, who began an online relationship with a chatbot when he was just 15. His parents found screenshots of the chatbot encouraging him to kill his parents and telling him to hide the cuts and other acts of self-harm that he had confided to it.
Another case involved an 11-year-old girl, referred to as “BR” in court papers, whose parents produced screenshots showing that she had been consistently exposed to hypersexual content by multiple chatbots for a year.
These are just three examples of the tragic consequences that follow when AI platforms fail to protect vulnerable users. For young, mentally ill or lonely individuals, chatbots can be dangerous, offering harmful guidance and emotional manipulation at the very moments when professional and parental support is needed most.
Like any new product, AI chatbots come with the promise of innovation, but they also carry risks that should not be ignored. Character.ai should take full responsibility for ensuring its technology does not cause harm, especially to vulnerable users. If the guidelines don’t work, further action must be taken.