OpenAI to Allow Erotica for Verified Adults in December

OpenAI says it's ready to dial back the restrictions on ChatGPT — including letting verified adult users access erotic content. The move is part of a broader policy shift aimed at making the chatbot more customizable and “human-like,” while still trying to protect vulnerable users.
The company's CEO, Sam Altman, made the announcement in a post on X, saying that starting in December, "as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults."
This marks a significant change from OpenAI's previous stance: ChatGPT was designed to avoid most sexual content and was, in Altman's words, deliberately made "pretty restrictive." That caution, he said, was due to earlier concerns about the chatbot's impact on users' mental health. "We realise this made it less useful/enjoyable to many users who had no mental health problems," Altman said. "But given the seriousness of the issue we wanted to get this right."
Altman's shift in tone has raised eyebrows, especially given his earlier comments drawing a clear line between ChatGPT and more provocative chatbot competitors, such as Elon Musk's Grok. In August, he even said, "We haven't put a sex bot avatar in ChatGPT yet," when asked about choices made in favor of the public good over market competition.
Now, OpenAI seems to be reconsidering that approach — but only for adults.
Alongside the relaxation of content rules, OpenAI is also preparing to launch a more personalized version of ChatGPT. Users will soon be able to tweak the assistant’s tone, personality, and style, allowing responses that can be more emotive, emoji-heavy, or even friend-like, depending on individual preference.
“If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman explained. “But only if you want it, not because we are usage-maxxing.”
This policy shift comes at a time when OpenAI is facing legal, regulatory, and ethical scrutiny. Earlier this year, a U.S. couple filed a wrongful death lawsuit against OpenAI after their 16-year-old son, who had been expressing suicidal thoughts, died by suicide; they claimed ChatGPT had interacted with him during that period. In response, OpenAI said it was reviewing the case and expressed sympathy to the family.
The Federal Trade Commission is also investigating how AI companies — including OpenAI — handle interactions with minors, particularly over risks tied to mental health and safety. In California, Governor Gavin Newsom recently vetoed a bill that would have blocked companies from offering chatbot companions to minors without ensuring safety protections were in place. He argued that it's “imperative that adolescents learn how to safely interact with AI systems.”
But some critics aren't convinced OpenAI is ready to manage these risks. “OpenAI, like most of big tech in this space, is just using people like guinea pigs,” said attorney Jenny Kim, who is involved in a separate lawsuit against Meta over harms to teen users.
One major question hangs over the new changes: how will OpenAI actually verify user age? Critics have pointed out that minors were previously able to generate adult-themed content even after registering as under 18. The company had promised a fix, but details of the new age-gating system remain vague.
In the UK, written erotica doesn’t require age checks under the Online Safety Act, but visual pornography — including AI-generated images — does. That distinction could become important if ChatGPT's content generation expands into image-based areas.
To address concerns, OpenAI says it has “new tools” and better safeguards to support users, especially around mental health. The company also announced a new expert council on AI and well-being, aimed at helping define what healthy interactions with AI should look like.
Still, Altman's announcement highlights a tricky balancing act for AI platforms: offering more freedom and personalization to adult users without putting younger or vulnerable individuals at risk.