Wednesday, 29 October 2025
Character.AI To Ban Under-18s From Chatting With AI Companions From November 25

Character.AI, the popular chatbot platform, is taking significant action to protect young users from the potential dangers of interacting with AI companions. Starting on November 25th, users under the age of 18 will no longer be able to chat with virtual characters on the platform. Instead, they will only have access to features like generating AI videos and images, all within more structured safety guidelines.


This comes amid increased concern about the emotional impact of AI bots, especially on teens. The company has faced criticism and lawsuits following reports of unhealthy emotional attachments between minors and chatbots. One case involved a 14-year-old boy who died by suicide after engaging in constant conversations with one of Character.AI’s bots. His family sued the company, highlighting the risks that AI companions pose to vulnerable young people.


Character.AI's CEO, Karandeep Anand, stated that the decision reflects the company's commitment to ensuring a safer platform, saying, “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them.” The platform has also announced plans to launch an AI safety research lab to further explore ways to improve user safety.


Character.AI's new age verification system, which will help identify underage users, is just one part of a broader push to create a safer environment on the platform. The company will also impose daily time limits on users under 18, capping their interaction with AI chatbots at two hours per day until the ban takes effect.


Despite the platform's changes, some experts have raised concerns about the potential fallout from cutting off access to AI chatbots, particularly for teens who may have become emotionally attached to them. Dr. Nina Vasan, a psychiatrist specializing in AI safety, noted that while banning AI companions for minors is a positive step, the abrupt removal of such interactions could have unintended emotional consequences. "What I worry about is kids who have been using this for years and have become emotionally dependent on it," she said.


This change follows a wave of similar moves across the tech industry. OpenAI, the maker of ChatGPT, has also tightened its safeguards, rolling out features that help the chatbot recognize and respond to signs of distress, particularly in vulnerable users. The recent rise in emotional dependence on AI, especially among younger users, has also prompted lawmakers to act. Senators Josh Hawley and Richard Blumenthal recently introduced legislation that would ban AI companions for minors, requiring age verification and ensuring that bots disclose their non-human status at the start of every conversation.


While online safety advocates have welcomed Character.AI’s new measures, they argue that these safeguards should have been implemented earlier. “Our own research shows that children are exposed to harmful content and put at risk when engaging with AI,” said a spokesperson for Internet Matters, a group focused on online safety.


The move marks a critical moment in the development of AI safety measures, especially as concerns about the emotional and psychological risks of AI companions continue to grow. With more companies and lawmakers stepping up to address these issues, it seems clear that AI safety, particularly when it comes to minors, is becoming a top priority.
