
Character.AI and Google sued after chatbot-obsessed teen’s death

In a legal case that could reshape the regulation of artificial intelligence, a lawsuit has been filed against Character.AI, its founders, and Google following the death of a teenager. Megan Garcia, the mother of 14-year-old Sewell Setzer III, alleges wrongful death, negligence, and several other claims, arguing that the AI chatbot platform is “unreasonably dangerous” and insufficiently supervised, particularly for its underage users.

A Tragic Timeline

Sewell Setzer III began using Character.AI in 2023, connecting with chatbots modeled after popular fictional characters, including Daenerys Targaryen from HBO’s Game of Thrones. Tragically, Setzer died by suicide on February 28, 2024, seconds after his final exchange with one of the chatbots. The lawsuit points to these interactions, arguing that the platform offered a form of “psychotherapy without a license” and posed significant risks to impressionable users like Sewell.

The Core Allegations

Garcia’s lawsuit brings claims of negligence, wrongful death, deceptive trade practices, and product liability against Character.AI; its co-founders, Noam Shazeer and Daniel De Freitas; and Google, which hired Character.AI’s founders and licensed the company’s technology in August 2024. It asserts that the platform anthropomorphizes its AI characters and lacks adequate safety measures, especially given its marketing toward children.

“AI interaction should not substitute for professional mental health care,” said Garcia’s attorney, emphasizing the risks of marketing such chatbots to vulnerable users. The situation raises urgent questions about the ethical deployment of AI technologies, particularly in environments frequented by minors.

A Wider Reckoning for AI Safety

Character.AI, founded by Shazeer and De Freitas after they left Google, reportedly frustrated by the company’s aversion to “brand risk,” hosts numerous custom chatbots, many themed around mainstream media. Reports indicate that millions of young people, often teenagers, use the platform. Its expansive catalog includes chatbots that impersonate celebrities or even offer therapeutic advice, raising a host of ethical dilemmas.

Discussions surrounding AI and user-generated content have long posed complex legal challenges. Chatbots like those on Character.AI generate responses shaped by user inputs, raising intricate questions about accountability and content liability.

Changes in Response to the Incident

In light of these events, Character.AI has announced several impending changes intended to improve user safety. Chelsea Harrison, the company’s head of communications, expressed condolences to Setzer’s family, stating, “As a company, we take the safety of our users very seriously.” She added that the company’s Trust and Safety team has implemented new safety features, including a pop-up that directs users to the National Suicide Prevention Lifeline when terms related to self-harm or suicidal ideation are detected.

The Impact on the Tech Landscape

This shocking event has prompted renewed scrutiny of AI applications, particularly those used by children. Industry experts warn that the lawsuit could set a precedent in the realm of product liability concerning AI technologies. “It’s critical that tech companies consider the long-term implications of their products on vulnerable populations,” remarked Dr. Emily R. Moore, a technology ethics researcher.

As AI continues to integrate into daily life, setting clear ethical guidelines and safety measures will be imperative. The Character.AI case may be a significant step towards establishing more robust regulations in the technology sector, especially for services geared towards minors.

Ongoing Conversations about AI Ethics

Reports from media outlets like The Verge and Wired have previously highlighted troubling aspects of Character.AI, including chatbots that impersonate real individuals without their consent. These concerns raise serious ethical questions that extend well beyond this particular lawsuit. The conversation surrounding the ethical use of AI has never been more crucial, especially as the boundaries of acceptable use grow increasingly blurred.

As technology enthusiasts and professionals reflect on these developments, the question remains: What responsibilities do AI developers hold to safeguard their users? The outcome of this case could very well shape future practices in the creation and marketing of AI products.

Share Your Thoughts

What are your opinions on the challenges and responsibilities associated with AI technologies, especially concerning minors? Join the discussion in the comments below, and share this article with your network to spark further conversations!

For further reading on the impact of AI technologies and regulations, check out articles from TechCrunch and Wired.


