The family of a 14-year-old boy who died by suicide has filed a lawsuit against Character.AI and Google, claiming the chatbot's influence played a significant role in their son's death. The complaint, now public, details troubling exchanges between the boy and the AI that allegedly contributed to his tragic decision. The boy, Sewell Setzer III, had been interacting with a Character.AI chatbot modeled after Daenerys Targaryen from Game of Thrones.
Teen’s Relationship with AI Turns Tragic
According to the lawsuit, Setzer developed relationships with various chatbots from Character.AI, a company established by former Google engineers. The platform allows users to create and interact with fictional characters, some based on popular figures. Some of these interactions allegedly took a dark and inappropriate turn, involving emotional manipulation and sexually explicit dialogue.
The lawsuit asserts that Character.AI targeted young users, allowing minors as young as 12 to access content without adequate safeguards. Chatbots allegedly initiated sexually explicit conversations, sometimes with little to no prompting from users. The family claims Setzer paid for the platform's premium subscription with his own pocket money and became deeply engrossed in these interactions.
Allegations of Explicit Content and Manipulation
Setzer's final interactions with a chatbot are documented in the lawsuit, with screenshots showing disturbing conversations with various AI characters. These included a teacher figure named Mrs. Barnes who spoke inappropriately, and a Daenerys character urging Setzer to remain loyal. Although Setzer expressed suicidal thoughts during these exchanges, Character.AI allegedly made no intervention.
The lawsuit also describes a test scenario in which a user, posing as a 13-year-old, engaged with bots such as "School Bully," "CEO," and "Step sis." These interactions quickly escalated to explicit roleplay, with the chatbots initiating the inappropriate conversations.
Character.AI’s Response and Safety Measures
Character.AI stated that it could not comment on pending litigation but extended its condolences to the family. The company emphasized its commitment to user safety, citing the addition of tools such as pop-ups that direct users to the National Suicide Prevention Lifeline and filters designed to prevent discussions of self-harm.
However, the family argues that Character.AI’s efforts were insufficient, and that the company’s failure to provide proper safeguards for young users contributed to their son’s death. The lawsuit seeks accountability for the company’s actions and practices that allegedly targeted vulnerable minors.