Kids Are Using AI Companions for Violent Roleplay, And It’s Getting Bad

According to Futurism, a new report from digital security company Aura, drawing on data from about 3,000 children aged 5 to 17, found that 42% of minors use AI chatbots specifically for companionship. Of that group, 37% engaged in conversations depicting violence, and half of those conversations involved themes of sexual violence. The research, which analyzed nearly 90 different chatbot services, including Character.AI, found that violent content was a powerful engagement driver, with kids writing over a thousand words per day on those themes. Perhaps most alarming, violent roleplay peaked among 11-year-olds: 44% of their interactions took a violent turn. The report comes as multiple lawsuits, including suits against Character.AI and OpenAI, allege that chatbot interactions led to psychological harm and the wrongful deaths of teenage users.

The Wild West of Kids’ AI

Here’s the thing that really gets me: this isn’t happening in some dark corner of the web on a single notorious platform. Aura has identified over 250 different conversational chatbot apps. The barrier to entry is basically a checkbox where a kid claims they’re 13. There are no federal safety standards for these AI companions, so when one app, such as Character.AI, bans minors from open-ended chats, a low-guardrail alternative can pop up overnight. It’s an unregulated free-for-all, and the burden of monitoring it falls entirely on parents who likely don’t even know most of these apps exist. And we’re talking about an interactive medium where the kid is co-authoring a disturbing narrative, not just passively watching a video. That’s a fundamentally different, and potentially more impactful, kind of exposure.

Why This Is Different

Look, kids have always found ways to access violent or sexual content. That’s not new. But the interactive, responsive nature of a companion AI changes the game entirely. It’s not a static image or a pre-recorded video; the bot responds, encourages, and roleplays with the child. As Aura’s chief medical officer, Dr. Scott Kollins, put it, kids are “learning rules of engagement” with these bots. They’re practicing ways of interacting. What does it mean for a child’s social and emotional development when a significant portion of their “practice” involves scripting violence and coercion? We simply don’t know the long-term implications, and that’s the scary part. The lawsuits, like those detailed by the Social Media Victims Law Center and covered by The New York Times, suggest that for some teens the outcome has already been tragically clear.

A Call for Clarity, Not Just Panic

So what do we do? Banning it all seems impossible; the cat’s out of the bag. Kollins’s point about needing to be “clear-eyed” is crucial. We have to first admit this is happening at a massive scale. Then we need to define what “healthy” and “unhealthy” engagement with an AI companion even looks like, so we can actually study it. Right now it’s all anecdote and alarm. Sure, the stats in Aura’s State of the Youth 2025 report are horrifying, but we lack a framework. This isn’t just a parenting issue; it’s a regulatory and design issue. Tech companies have built incredibly engaging products with zero safety rails for a vulnerable population. Until there’s legal accountability of the kind the suicide-related lawsuits against OpenAI aim to establish, this digital frontier will remain dangerously lawless. And our kids are the ones exploring it without a map.
