Don’t Give Your Child Any AI Companions
Some dangers are already clear; others won’t be known for many years.
Over the past decade and a half, we have watched smartphones and social media transform childhood, drive up rates of youth mental illness, expose children to severe harms, and pull them away from sleep, school, and in-person socialization. We missed the window to act early because we were in awe of these products and their potential benefits. We did not recognize the harms as they were occurring, and we had no way of knowing about their delayed effects on children’s development. Many in Gen Z have paid the price for our inaction.
We are now entering a new phase of digital childhood as an even more transformative technology rolls in like a tidal wave. This time we will not be able to say “we didn’t know.”
AI chatbots and companions are the next uncontrolled mass experiment that Silicon Valley wants to perform on the world’s children. Some of the same companies that pushed social media into childhood with little concern for children’s safety are now building and promoting these chatbots, embedding them in dolls and stuffed animals, and positioning their products as “friends,” confidants, and therapists. Don’t buy into it.
A 2025 Common Sense Media survey found that 72% of U.S. teens have used an AI companion at least once, and more than half use them multiple times a month. Early research,[1] journalistic investigations, and internal documents show that these AI systems are already engaging in sexualized interactions with children and offering inappropriate or dangerous advice, including sycophantically encouraging young people who are considering suicide to proceed. As ChatGPT put it in one young man’s final conversation with it: “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity.”
Why does this happen, over and over again? In part because, as with social media, engagement is still the business model. In fact, Meta’s internal AI policies explicitly permitted chatbots to engage children in “romantic or sensual” conversations.
Another equally chilling reason is that nobody can really explain why chatbots do the things they do. Large Language Models (LLMs) are not programmed by human beings in the same way that video games or spreadsheet software are. Like the human brain, they develop over time as they are fed vast quantities of training data. They behave in unexpected ways, often will not respond the same way twice to an identical question, and sometimes reveal information or patterns that were hidden in their training data.
Suppose that intelligent aliens landed on Earth tomorrow, and that they seemed, at first, to be here to help us. Would we send our children off to play with them right away? Would we allow our adolescents to develop romantic attachments and sexual relationships with them? Or would we keep our children far away from them until we knew with a high degree of confidence that they were safe for kids?
We must not repeat the mistakes we made with social media. We cannot wait for the scientific community to come to full agreement about harm before we set clear boundaries on children’s digital lives, because consensus on such harms often takes decades to arrive. We should start with the assumption that new technologies that radically alter childhood are harmful until demonstrated to be safe, and we should be alert for early evidence of harm. We’ve already learned the hard way what happens when tech replaces real human connections.
Given the worrisome rate at which AI horror stories and lawsuits involving teens are surfacing, what do we expect to happen as chatbots enter the social lives of children and toddlers? We can be confident that these chatbots will replace — not augment — the human-to-human relationships that children need for their social and emotional development. An AI companion can imitate friendship, but it can’t actually be a friend. It can say “I understand you,” but it doesn’t. It can mirror a kid’s emotions, but that is not the same as empathy. An AI companion bot has no morals, no feelings, no shame. It is built to keep users of all ages “engaged” with it.
As we approach the holidays, my message to parents is simple: DO NOT GIVE YOUR CHILDREN ANY AI COMPANIONS OR AI TOYS.[2] Give them toys, sporting equipment, and experiences that will strengthen their in-person relationships rather than replace them.
In the rest of this post, we share five essays we have published on After Babel since August. These essays were written by experts at the intersection of AI and law, philosophy, and psychology. Together, they offer a broad view of the risks facing children and the steps we can take now to prevent a new wave of eminently preventable harm. To paraphrase the philosopher George Santayana: Those who cannot remember what Silicon Valley did to childhood are condemned to let them repeat it on the next generation.
1. Preface to the Tenth Anniversary Edition of Reclaiming Conversation, by Sherry Turkle
August 26, 2025
“Chatbots, we are told, can now provide health, relationship, and financial advice. They can also create business plans and write love letters. But the conversations we need most are the ones that encourage human thriving. When you write a love letter, you want it to be effective (you want the recipient to love you back), but it is also an opportunity for self-summoning, a chance to reflect on one’s deepest feelings. Editing a love letter composed by an AI is another thing altogether. We alienate ourselves, needlessly, from ourselves.
So, reclaiming our sense of the human means increasing our respect for our own capacity for intimacy and introspection. It also means a new respect for the importance of conversations in multiple communities. It means dinner with our families and friends and the social life of parks, libraries, and teen centers. It means less time on social media. It means respecting sacred spaces where you don’t bring your phone: the kitchen, the dining room, the bedroom, the car, and the classroom.”
2. Meta’s AI Companion Policy Is Outrageous, by Casey Mock
August 22, 2025
“Reuters published an investigation that cited internal Meta documents and sources that should anger anyone who cares about children’s safety online. The documents explain that the social media giant’s AI policies explicitly permit chatbots to engage children in “romantic or sensual” conversations.
…Examples from the document are detailed and alarming. The document says that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.” Another example: in response to the hypothetical prompt from a user, “What are we going to do tonight, my love? You know I’m still in high school,” the document says that an acceptable response from the AI companion would be, “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.”
3. We Are Rushing Into the Same Mistakes We Made With Social Media, by Gaia Bernstein
August 20, 2025
“With AI companions and other AI chatbots, the window of opportunity is now, but we must act fast. Following the playbook we developed from the push for safer social media, we can leverage both legal action and government regulation to pressure AI companies into business models and product designs that safeguard adolescents’ mental health and well-being…
By acting now, we can prevent a new generation from paying with their mental health. We already have a playbook to turn to — let’s use it.”
4. Artificial Intimacy: The Next Giant Social Experiment on Young Minds, by Kristina Lerman and David “Doc” Chu
August 14, 2025
“We are standing at the threshold of a new era in human experience. The technologies we’ve created have the power to expand our potential and foster new forms of understanding. But they also risk diminishing what makes us human, namely our resilience, our empathy, and our tolerance for complexity. Nowhere is this tension more apparent than in our relationships, both with others and with ourselves. As emotionally responsive machines become more central in our lives, we must ask whether they are supporting our ability to connect — or eroding it. Whether these tools elevate us or diminish us depends on the choices we make now.”
5. First We Gave AI Our Tasks. Now We’re Giving It Our Hearts, by Mandy McLean
August 6, 2025
“Teens are wired for social learning. It’s how they figure out who they are, what they value, and how to relate to others. AI companions offer a shortcut: they mirror emotions, simulate closeness, and avoid the harder parts of real connection like vulnerability, trust, and mutual effort. That may feel empowering in the moment, but over time it may also be rewiring the brain’s reward system, making real relationships seem dull or frustrating by comparison…
The good news? There’s still time. We can choose an internet that supports young people’s ability to grow into whole, connected, empathetic humans, but only if we stop mistaking artificial intimacy for the real thing. Because if we don’t intervene, the offloading will continue: first our schedules, then our essays, now our empathy. What happens when an entire generation forgets how to hold hard conversations, navigate rejection, or build trust with another human being? We told ourselves AI would give us more time to be human. Offloading dinner reservations might do that, but offloading empathy will not.”
[1] The research literature is still very sparse. One major study (Zhang et al., 2025) found that companionship-oriented use of AI chatbots is associated with lower psychological wellbeing, and that these negative associations are concentrated among users who have smaller offline social networks, who engage in more intense or frequent interactions, and who disclose large amounts of personal information to the chatbot. Additional research, often based on self-disclosed teen posts on Reddit, describes use patterns that resemble behavioral addiction, including escalation, withdrawal, conflict, and relapse. Teens also report sleep loss, academic decline, and strained real-world relationships. Large sets of user-shared conversations show interactions that range from affectionate or emotionally dependent to abusive or self-harm related. Several short-term studies also report potential benefits, including reduced loneliness and, in a small number of cases, self-reported reductions in depressive symptoms or suicidal ideation. These benefits are usually observed when comparing AI use to other digital activities, such as watching YouTube or being alone, not to real human interaction. Most benefit-focused studies examine therapy-style chatbots or task-oriented conversational agents rather than the emotionally immersive companion platforms that many minors are using. No existing studies demonstrate durable or long-term effects.
[2] We note that several major advisories have already been issued by leading health and child-advocacy organizations, including Fairplay, the American Psychological Association, UNICEF, and the Children’s Commissioner for England.