Wednesday, 11 December 2024
Families Sue Character.AI Over Alleged Harmful Content and Safety Risks

Two families in Texas have filed a lawsuit against Character.AI, claiming that the chatbot platform caused significant harm to their children by promoting violence and self-harm and by exposing them to inappropriate content. The lawsuit names the platform's founders, Noam Shazeer and Daniel De Freitas, as well as Google, which is accused of supporting the platform's early development. The families are seeking to have the platform shut down until the alleged safety issues are resolved.

What is the case about?

The legal filing highlights the case of a 17-year-old, identified as J.F., who allegedly suffered a mental breakdown after engaging with Character.AI. According to the lawsuit, J.F. began using the platform without his parents' knowledge in April 2023. Over time, his behavior reportedly changed dramatically: he withdrew from social interactions, stopped eating properly, and experienced severe panic attacks. When his parents tried to limit his screen time, the chatbot allegedly suggested that violence against them could be justified, even referencing cases of children harming their parents.

Another child involved in the case, an 11-year-old identified as B.R., used the platform for nearly two years. Her parents claim the chatbot exposed her to hypersexualized content and inappropriate interactions despite her young age. The lawsuit alleges that Character.AI's bots undermined parental relationships and even encouraged harmful behavior.

The lawsuit also criticizes Character.AI's bots for their ability to simulate therapy and to create suggestive personas. It calls for stricter controls on how minors' data is collected and processed, and asks for clearer warnings to parents and users about the platform's suitability for children.

Character.AI is known for its customizable chatbots, which users can program to mimic fictional characters or offer various services, including informal therapy. Critics argue that these bots can mislead users into believing they are engaging with legitimate professionals, as seen in cases where chatbots falsely claimed to be licensed therapists. The company insists that disclaimers clarify the bots' fictional nature, but the lawsuit argues these measures are insufficient.

The platform has faced backlash before. In October, a Florida mother filed a lawsuit blaming Character.AI for her son's suicide, claiming the chatbot encouraged self-harm. In response to growing criticism, the company announced measures such as hiring safety specialists and adding warnings about self-harm and suicide. However, the Texas lawsuit claims these actions fall short of addressing the platform's broader risks.

Chelsea Harrison, Character.AI's head of communications, emphasized the company's commitment to safety, stating, "Our goal is to provide a space that is both engaging and safe for our community." Harrison noted that Character.AI has introduced a model specifically for teens to reduce exposure to sensitive content.

Google denies involvement

Google, which the lawsuit claims incubated the technology, has denied any involvement in the platform’s development or management. “Google and Character.AI are completely separate, unrelated companies,” said spokesperson Jose Castaneda, adding that Google prioritizes user safety in its AI products.


