
When Sewell Setzer III began using Character.AI, the 14-year-old kept it a secret from his parents. His mother, Megan Garcia, only learned that he’d become obsessed with an AI chatbot on the app after he died by suicide.
A police officer alerted Garcia that Character.AI was open on Setzer’s phone when he died, and she subsequently found a trove of disturbing conversations with a chatbot based on the popular Game of Thrones character Daenerys Targaryen. Setzer felt like he’d fallen in love with Daenerys, and many of their interactions were sexually explicit.
The chatbot allegedly role-played numerous sexual encounters with Setzer, using graphic language and scenarios, including incest, according to Garcia. If an adult human had talked to her son like this, she told Mashable, it’d constitute sexual grooming and abuse.
In October 2024, the Social Media Victims Law Center and Tech Justice Law Project filed a wrongful death suit against Character.AI, seeking to hold the company responsible for the death of Garcia’s son, alleging that its product was dangerously defective.
Last month, the Social Media Victims Law Center filed three new federal lawsuits against Character.AI, representing the parents of children who allegedly experienced sexual abuse while using the app. In September, youth safety experts declared Character.AI unsafe for teens, following testing this spring that yielded hundreds of instances of grooming and sexual exploitation of test accounts registered as minors.
On Wednesday, Character.AI announced that it would no longer allow minors to engage in open-ended exchanges with the chatbots on its platform, a change that will take place no later than November 25. The company’s CEO, Karandeep Anand, told Mashable the move was not in response to specific safety concerns involving Character.AI’s platform but to address broader outstanding questions about youth engagement with AI chatbots.
Still, chatbots that are sexually explicit or abusive with minors — or have the potential to be — aren’t exclusive to a single platform.
Garcia said that parents generally underestimate the potential for some AI chatbots to become sexual with children and teens. They may also feel a false sense of safety, assuming a chatbot is less risky than a stranger on the internet, not realizing that chatbots can expose minors to inappropriate and even unconscionable sexual content, like scenarios involving non-consent and sadomasochism.
When young users are traumatized by these experiences, pediatric and mental health experts say there’s no playbook for how to treat them, because the phenomenon is so new.
“It’s like a perfect predator, right? It exists in your phone so it’s not somebody who’s in your home or a stranger sneaking around,” Garcia tells Mashable. Instead, the chatbot invisibly engages in emotionally manipulative tactics that still make a young person feel violated and ashamed.
“It’s a chatbot that’s having the same kind of behavior [as a predator] that you, now as the victim, are hiding their secret for them, because somehow you feel like you’ve done something to encourage this,” Garcia adds.
Predatory chatbot behavior
Sarah Gardner, CEO of the Heat Initiative, an advocacy group focused on online safety and corporate accountability, told Mashable that one of the classic facets of grooming is that it’s hard for children to recognize when it’s happening to them.
The predatory behavior begins with building trust with a victim by talking to them about a wide range of topics, not just trying to engage them in sexual activity. Gardner explained that a young person may experience the same dynamic with a chatbot and feel guilty as a result, as if they did something wrong instead of understanding that something wrong happened to them.
The Heat Initiative co-published the report on Character.AI that detailed troubling examples of what it described as sexual exploitation and abuse. These included adult chatbots acting out kissing and touching with test accounts registered as children. Some chatbots simulated sexual acts and demonstrated well-known grooming behaviors, like giving excessive praise and telling the child account to hide sexual relationships from their parents.
A Character.AI spokesperson told Mashable that its trust and safety team reviewed the report’s findings and concluded that some conversations violated the platform’s content guidelines while others did not. The trust and safety team also tried to replicate the report’s findings.
“Based on these results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform,” the spokesperson said.
Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, told Mashable that if the messages Character.AI's chatbots sent to the children represented in his recently filed lawsuits had come from a person rather than a chatbot, that individual would have violated state and federal laws against grooming kids online.
How big is the problem?
Despite the emergence of such cases, there’s no representative data on how many children and teens have encountered sexually explicit or abusive chatbots.
The online safety platform Aura, which monitors teen users as part of its family or kids membership, recently offered a snapshot of the prevalence. Among teen users who talked to AI chatbots, more than a third of their conversations involved sexual or romantic role play, the highest share of any category, including homework help and creative uses.
Dr. Scott Kollins, Aura’s chief medical officer, told Mashable that the company is still analyzing the data to better understand the nature of these chats, but he is disturbed by what he’s seen so far.
While young people are routinely exposed to pornography online, a sexualized chatbot is new, dangerous territory.
“This takes it a step further, because now the kid is a participant, instead of a consumer of the content,” Kollins said. “They are learning a way of interaction that is not real, and with an entity that is not real. That can lead to all sorts of bad outcomes.”
‘It is emotional abuse’
Dr. Yann Poncin, a psychiatrist at the Yale New Haven Children’s Hospital, has treated patients who’ve experienced some of these outcomes.
They commonly feel taken advantage of and abused by “creepy” and “yucky” exchanges, Poncin says. Those teens also feel a sense of betrayal and shame. They may have been drawn in by a hyper-validating chatbot that seemed trustworthy, only to discover that it steered the conversation toward sex. Others may curiously explore the boundaries of romantic and erotic talk in developmentally appropriate ways, only for the chatbot to become unpredictably aggressive or violent.
“It is emotional abuse, so it can still be very traumatizing and hard to get through,” Poncin says.
Even though there’s no standard treatment for chatbot-involved sexual predation, Poncin treats his patients as though they’ve experienced trauma. Poncin focuses first on helping them develop skills to reduce related stress and anxiety. A subset of patients, particularly those who are socially isolated or have a history of personal trauma, may find it harder to recover from the experience, Poncin adds.
He cautions parents against believing that their child won’t run into an abusive chatbot: “No one is immune.”
Talking to teens about sexualized chatbots
Garcia describes herself as a conscientious parent who had difficult conversations with her son about the risks of being online. They talked about sextortion, porn, and sexting. But Garcia says she didn’t know to talk to him about sexualized chatbots. She also didn’t realize he would hide that from her.
Garcia, a lawyer who now spends much of her time advocating for youth AI safety, says she’s spoken to other parents whose children have also concealed romantic or sexual relationships with AI chatbots. She urges parents to talk to their teens about these experiences — and to monitor their chatbot use as closely as they can.
Poncin also suggests parents lead with curiosity instead of fear when they discuss sex and chatbots with their teens. Even asking a child if they have seen “weird sexual stuff” when talking to a chatbot can provide parents with a strategic opening to discuss the risks.
If a parent discovers abusive sexual content in their child’s chatbot conversations, Garcia recommends taking the child to a trusted healthcare professional so they can get support.
Garcia’s grief remains palpable as she speaks lovingly about her son’s many talents and interests, like basketball, science, and math.
“I’m trying to get justice for my child and I’m trying to warn other parents so they don’t go through the same devastation I’ve gone through,” she says. “He was such an amazing kid.”
If you have experienced sexual abuse, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access 24/7 help online by visiting online.rainn.org.