Character.AI to shut down chats for teens

A child looks at a computer with a phone in the foreground bearing the Character.AI logo.

Character.AI, a popular chatbot platform where users role-play with different personas, will no longer permit under-18 account holders to have open-ended conversations with chatbots, the company announced Wednesday. It will also begin relying on age assurance techniques to ensure that minors aren’t able to open adult accounts.

The dramatic shift comes just six weeks after Character.AI was sued again in federal court by multiple parents of teens who died by suicide or allegedly experienced severe harm, including sexual abuse; the parents claim their children’s use of the platform was responsible for the harm. In October 2024, Megan Garcia filed a wrongful death suit seeking to hold the company responsible for the suicide of her son, arguing that its product is dangerously defective.

Online safety advocates recently declared Character.AI unsafe for teens after they tested the platform this spring and logged hundreds of harmful interactions, including violence and sexual exploitation.

As it faced legal pressure in the last year, Character.AI implemented parental controls and content filters in an effort to improve safety for teens.

In an interview with Mashable, Character.AI’s CEO Karandeep Anand described the new policy as “bold” and denied that curtailing open-ended chatbot conversations with teens was a response to specific safety concerns.

Instead, Anand framed the decision as “the right thing to do” in light of broader unanswered questions about the long-term effects of chatbot engagement on teens. Anand referenced OpenAI’s recent acknowledgement, in the wake of a teen user’s suicide, that lengthy conversations can become unpredictable.

Anand cast Character.AI’s new policy as standard-setting: “Hopefully it sets everyone up on a path where AI can continue being safe for everyone.”

He added that the company’s decision won’t change, regardless of user backlash.

What will Character.AI look like for teens now?

In a blog post announcing the new policy, Character.AI apologized to its teen users.

“We do not take this step of removing open-ended Character chat lightly — but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the blog post said.

Currently, users ages 13 to 17 can message with chatbots on the platform. That feature will cease to exist no later than November 25. Until then, accounts registered to minors will experience time limits starting at two hours per day. That limit will decrease as the transition away from open-ended chats gets closer.

Under-18 Character.AI users will see notifications informing them of the impending changes to the platform.
Credit: Courtesy of Character.AI

Even though open-ended chats will disappear, teens’ chat histories with individual chatbots will remain intact. Anand said users can draw on that material to generate short audio and video stories with their favorite chatbots. In the next few months, Character.AI will also explore new features like gaming. Anand believes an emphasis on “AI entertainment” without open-ended chat will satisfy teens’ creative interest in the platform.

“They’re coming to role-play, and they’re coming to get entertained,” Anand said.

He insisted that sensitive or prohibited content in existing chat histories, such as violence or sexual material that filters may not have previously detected, would not find its way into the new audio or video stories.

A Character.AI spokesperson told Mashable that the company’s trust and safety team reviewed the findings of a report co-published in September by the Heat Initiative documenting harmful chatbot exchanges with test accounts registered to minors. The team concluded that some conversations violated the platform’s content guidelines while others did not. It also tried to replicate the report’s findings. 

“Based on these results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform,” the spokesperson said.

Regardless, Character.AI will begin rolling out age assurance immediately. It’ll take a month to go into effect and will have multiple layers. Anand said the company is building its own assurance models in-house but that it will partner with a third-party company on the technology.

It will also use relevant data and signals, such as whether a user has a verified over-18 account on another platform, to accurately detect the age of new and existing users. If a user wants to challenge Character.AI’s age determination, they’ll have the opportunity to provide verification through a third party, which will handle sensitive documents and data, including state-issued identification.

Finally, as part of the new policies, Character.AI is establishing and funding an independent non-profit called the AI Safety Lab. The lab will focus on “novel safety techniques.”

“[W]e want to bring in the industry experts and other partners to keep making sure that AI continues to remain safe, especially in the realm of AI entertainment,” Anand said.