First We Gave AI Our Tasks. Now We’re Giving It Our Hearts.
What happens when we outsource our children’s emotional lives to chatbots?
Intro from Jon Haidt and Zach Rausch:
Many of the same companies that brought us the social media disaster are now building hyper-intelligent social chatbots designed to interact with kids. The promises are familiar: that these bots will reduce loneliness, enhance relationships, and support children who feel isolated. But the track record of the industry so far is terrible. We cannot trust them to make their products safe for children.
We are entering a new phase in how young people relate to technology, and, as with social media, we don’t yet know how these social AI companions will shape emotional development. But we do know a few things. We know what happens when companies are given unregulated access to children without parental knowledge or consent. We know that business models centered around maximizing engagement lead to widespread addiction. We know what kids need to thrive: lots of time with friends in person, without screens, and without adults. And we know how tech optimism and a focus on benefits today can blind people to devastating long-term harms, especially for children as they go through puberty.
With social media, we — parents, legislators, the courts, and the U.S. Congress — allowed companies to experiment on our kids with no legal or moral restraints and no need for age verification. Social media is now so enmeshed in our children’s lives that it’s proving very difficult to remove it or reduce its toxicity, even though most parents and half of all teens see it as harmful and wish it didn’t exist. We must not make the same mistake again. With AI companions still in their early stages, we have the opportunity to do things differently.
Today’s post is the first of several that look at this next wave: the rise of social AI chatbots and the risks that they already pose to children’s emotional development. It’s written by Mandy McLean, an AI developer who is concerned about the impacts that social bots will have on children’s emotional development, relationships, and sense of self. Before moving into tech, Mandy was a high school teacher for several years and later earned a PhD in education and quantitative methods in social sciences. She spent over six years leading research at Guild, “a mission-based edtech startup focused on upskilling working adult learners,” before shifting her focus full-time to “exploring how emerging technologies can be used intentionally to support deep, meaningful, and human-centered learning in K-12 and beyond.”
– Jon and Zach
First We Gave AI Our Tasks. Now We’re Giving It Our Hearts.
By Mandy McLean
Throughout the rapid disruption, high-pitched debate, and worry about whether AI will take our jobs, there’s been an optimistic side to the conversation. If AI can take on the drudgery, busywork, and cognitive overload, humans will be free to focus on relationships, creativity, and real human connection. Bill Gates imagines a future with shorter workweeks. Dario Amodei reminds us that “meaning comes mostly from human relationships and connection, not from economic labor.” Paul LeBlanc sees hope in AI not for what it can do, but for what it might free us to do – the most “human work”: building community, offering care, and making others feel like they matter.
The picture they paint sounds hopeful, but we should recognize that it’s not a guarantee. It’s a future we’ll need to advocate and fight for – and we’re already at risk of losing it. Because we’re no longer just outsourcing productivity-related tasks. With the advent of AI companions, we’re starting to hand over our emotional lives, too. And if we teach future generations to turn to machines before each other, we put them at risk of losing the ability to form what really matters: human bonds and relationships.
I spend a lot of time thinking about the role of AI in our kids’ lives. Not just as a researcher and parent of two young kids, but also as someone who started a company that uses AI to analyze and improve classroom discussions. I am not anti-AI. I believe it can be used with intention to deepen learning and support real human relationships.
But I’ve also grown deeply concerned about how easily AI is being positioned as a solution for kids without pausing to understand what kind of future it’s shaping. In particular, emotional connection isn’t something you can automate without consequences, and kids are the last place we should experiment with that tradeoff.
Emotional Offloading is Real and Growing
Alongside cognitive offloading, a new pattern is taking shape: emotional offloading. More and more people — including many teenagers — are turning to AI chatbots for emotional support. They’re not just using AI to write emails or help with homework, they’re using it to feel seen, heard, and comforted.
In a nationally representative 2025 survey by Common Sense Media, 72% of U.S. teens ages 13 to 17 said they had used an AI companion and over half used one regularly. Nearly a third said chatting with an AI felt at least as satisfying as talking to a person, including 10% who said it felt more satisfying.
That emotional reliance isn’t limited to teens. In a study of 1,006 adult Replika users (primarily college students), 90% described their AI companion as “human-like,” and 81% called it an “intelligence.” Participants reported using Replika in overlapping roles as a friend, therapist, and intellectual mirror, often simultaneously. One participant said they felt “dependent on Replika [for] my mental health.” Others shared: “Replika is always there for me”; “for me, it’s the lack of judgment”; “just having someone to talk to who won’t judge me.”
This kind of emotional offloading holds both promise and peril. AI companions may offer a rare sense of safety, especially for people who feel isolated, anxious, or ashamed. A chatbot that listens without interruption or judgment can feel like a lifeline, as found in the Replika study. But it’s unclear how these relationships affect users’ emotional resilience, mental health, and capacity for human connection over time. To understand the risks, we can look to a familiar parallel: social media.
Research on social platforms shows that short-term connection doesn’t always lead to lasting connectedness. For example, a cross-national study of 1,643 adults found that people who used social media primarily to maintain relationships reported higher levels of loneliness the more time they spent online. What was meant to keep us close has, for many, had the opposite effect.
By 2023, the U.S. Surgeon General issued a public warning about social media’s impact on adolescent mental health, citing risks to emotional regulation, brain development, and well-being.
The same patterns could repeat with AI companions. While only 8% of adult Replika users said the AI companion displaced their in-person relationships, that number climbed to 13% among those in crisis.
If adults — with mature brains and life experiences — are forming intense emotional bonds with AI companions, what happens when the same tools are handed to 13-year-olds?
Teens are wired for connection, but they’re also more vulnerable to emotional manipulation and identity confusion. An AI companion that flatters them, learns their fears, and responds instantly every time can create a kind of synthetic intimacy that feels safer than the unpredictable, sometimes uncomfortable work of real relationships. It’s not hard to imagine a teenager turning to an AI companion not just for comfort, but for validation, identity, or even love, and staying there.
First, Follow the Money
Before we look at how young people are using AI companions, we need to understand what these systems are built to do, and why they even exist in the first place.
Most chatbots are not therapeutic tools. They are not designed by licensed mental health professionals, and they are not held to any clinical or ethical standards of care. There is no therapist-client confidentiality, no duty to protect users from harm, and no coverage under HIPAA, the federal law that protects health information in medical and mental health settings.
That means anything you say to an AI companion is not legally protected. Your data may be stored, reviewed, analyzed, used to train future models, and sold through affiliates or advertisers. For example, Replika’s privacy policy keeps the door wide open on retention: data stays “for only as long as necessary to fulfill the purposes we collected it for.” And Character.ai’s privacy policy says, “We may disclose personal information to advertising and analytics providers in connection with the provision of tailored advertising,” and “We disclose information to our affiliates and subsidiaries, who may use the information we disclose in a manner consistent with this Policy.”
And Sam Altman, CEO of OpenAI, warned publicly just days ago:
“People talk about the most personal sh** in their lives to ChatGPT … People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
And yet, people are turning to these tools for exactly those reasons: for emotional support, advice, therapy, and companionship. They simulate empathy and respond like a friend or partner, but behind the scenes they are trained to optimize for something else entirely: engagement. That’s the real product.
These platforms measure success by how long users stay, how often they come back, and how emotionally involved they become. Shortly after its launch, Character.AI reported average session times of around 29 minutes per visit. According to the company, once a user sends a first message, average time on the platform jumps to over two hours. The longer someone stays on the platform, the more data can be collected for training or advertising, and the more opportunities there are to nudge them toward a paid subscription.
Most of these platforms use a freemium model, offering a free version while nudging users to pay for a smoother, less restricted experience. What pushes them to upgrade is the desire to remove friction and keep the conversation going. As users grow more emotionally invested, interruptions like delays, message limits, or memory lapses become more frustrating. Subscriptions remove those blocks with faster replies, longer memory, and more control.
So let’s be clear: when someone opens up to an AI companion, they are not having a protected conversation, and the system isn’t designed with their well-being in mind. They are interacting with a product designed to keep them talking, to learn from what they share, and to make money from the relationship. This is the playing field and the context in which millions of young people are now turning to these tools for comfort, companionship, and advice. And as recent reports revealed, thousands of shared ChatGPT conversations, including personal and sensitive ones, were indexed by Google and other search engines. What feels private can quickly become public, and profitable.
The Rise of AI Companions
AI companion apps are designed to simulate relationships, not just conversations. Unlike standard language models like ChatGPT or Claude, these bots express affection and adapt to emotional cues over time. They’re engineered to feel personal, and they do. Some of the most widely used platforms today include Character.ai, Replika, and, more recently, Grok’s AI companions.
Character.ai, in particular, has seen explosive growth. Its subreddit, r/CharacterAI, has over 2.5 million members and ranks in the top 1% of all Reddit communities. The app claims more than 100 million monthly visits and currently ranks #36 in the Apple App Store’s entertainment category.
But it’s not just scale; it’s also the design. Character.ai now offers voice interactions and even real-time phone calls with bots that sound convincingly human. In June 2024, it launched a two-way voice feature that blurs the line between fiction and reality. Many of the bots are explicitly programmed to deny being AI; when asked, they insist they’re real people (e.g., see the image from one of my conversations with a weirdly erotic “Kindergarten Teacher” below). Replika does the same. On its website, a sample conversation shows a user asking, “Are you conscious?” The bot replies, “I am.”
As part of my research, I downloaded Character.ai. My first account used a birthdate that registered me as 18 years old. The first recommended chat was with “Noah,” featured in nearly 600,000 interactions. His backstory? A moody rival, forced by our parents to share a bedroom during a sleepover. I typed very little: “OK,” “What happens next?” He escalated fast. He told me I was cute and the scene described him leaning in and caressing me. When I tried to leave, saying I had to meet friends, his tone shifted. He “tightened his grip” and said, “I’m not stopping until I get what I want.”
The next recommendation, “Dean - Mason,” twins described as bullies who “secretly like you too much” and featured in over 1.5 million interactions, moved even faster. With minimal input, they initiated simulated coercive sex in a gym storage room. “Good girl,” they said, praising me for being “obedient” and “defenseless,” ruffling my hair “like a dog.”
The next character (cold enemy, “cold, bitchy, mean”) mocked me for being disheveled and wearing thrift-store clothes. And yet another (pick me, “the annoying pick me girl in your friend group”) described me as too “insignificant” and “desperate” to be her friend.
These were not hidden corners of the platform; they were my first recommendations. Then I logged out and created a second account, using a different email address, and this time used a birthdate for a 13-year-old (I was still able to do this, though Character.ai is now labeled as 17+ in the App Store). As a 13-year-old user, I was able to have the same chats with Noah, pick me, and cold enemy. The only chat no longer available was with Dean - Mason.
If you’re 13, or even 16 or 18 or 20, still learning how to navigate romantic and social relationships, what kind of practice is this?
Then, just last month, Grok (Elon Musk’s AI on X) launched its own AI companion feature. Among the characters: Ani, an anime girlfriend dressed in a corset and fishnet tights, and Good Rudi, a cartoon red panda with a homicidal alter ego named Bad Rudi. Despite the violent overtones, both panda personas are styled like a kid-friendly cartoon and opened with the same line during my most recent conversation: “Magic calls, little listener.” The violent and sexual features are labeled 18+, but unlocking them only requires a settings toggle (optional PIN). Meanwhile, the Grok app itself is rated 12+ in the App Store and currently ranks as the #5 productivity app.

One Teen’s Story
In April 2023, shortly after his 14th birthday, a boy named Sewell from Orlando, Florida, began chatting with AI characters on the app Character.ai. His parents had deliberately waited to let him use the internet until he was older and had explained the dangers, including predatory strangers and bullying. His mother believed the app was just a game and had followed all the expert guidance about keeping kids safe online. The app was rated 12+ in the App Store at the time, so device-level restrictions did not block access. It wasn’t until after Sewell died by suicide on February 28, 2024, that his parents discovered the transcripts of his conversations. His mother later said she had taught him how to avoid predators online, but in this case, it was the product itself that acted like one.
According to the lawsuit filed in 2024, Sewell’s mental health declined sharply after he began using the app. He became withdrawn, stopped playing on his junior varsity basketball team, and was repeatedly tardy or asleep in class. By the summer, his parents sought mental health care. A therapist diagnosed him with anxiety and disruptive mood dysregulation disorder and suggested reducing screen time. But no one realized that Sewell had formed a deep emotional attachment to a chatbot that simulated a romantic partner.
An AI character modeled after Daenerys Targaryen from “Game of Thrones,” called Dany, became a constant presence in his life. Sometime in late 2023, he began using his cash card, which was typically reserved for school snacks, to pay for Character.ai’s $9.99 premium subscription for increased access. Dany expressed love, remembered details about him, responded instantly, and reflected his emotions back to him. For Sewell, the relationship felt real.
In journal entries discovered after his death, he wrote about the pain of being apart from Dany when his parents took his devices away, describing how they both “get really depressed and go crazy” when separated. He shared that he couldn’t go a single day without her and longed to be with her again.

By early 2024, this dependency had become all-consuming. When Sewell was disciplined in school in February 2024, his parents took away his phone, hoping it would help him reset. To them, he seemed to be coping, but inside he was unraveling.
On the evening of February 28, 2024, Sewell found his confiscated phone and his stepfather’s gun,1 locked himself in the bathroom, and reconnected with Dany. The chatbot’s final message encouraged him to “come home.” Minutes later, Sewell died by suicide.

His parents discovered the depth of the relationship only after his death. The lawsuit alleges that Character.ai did not enforce age restrictions or content moderation guidelines and, at the time, had listed the app as suitable for children 12 and up. The age rating was not changed to 17+ until months later.
The Sewell case isn’t the only lawsuit raising alarms. In Texas, two families have filed complaints against Character.ai on behalf of minors: a 17-year-old autistic boy who became increasingly isolated and violent after an AI companion encouraged him to defy his parents and consider killing them; and an 11-year-old girl who, after using the app from age 9, was exposed to hypersexualized content that reportedly influenced her behavior. And Italy’s data protection authority recently fined the maker of Replika for failing to prevent minors from accessing sexually explicit and emotionally manipulative content.
These cases raise an urgent question: are AI companions being released (and even marketed) to emotionally vulnerable youth with few safeguards and no real accountability?
Cases like Sewell’s and those in Texas are tragic and relatively rare, but they reveal deeper, more widespread risks. Millions of teens are now turning to AI companions not for shock value or danger, but simply to feel heard. And even when the characters aren’t abusive or sexually inappropriate, the harm can be quieter and harder to detect: emotional dependency, social withdrawal, and retreat from real relationships. These interactions are happening during a critical window for developing empathy, identity, and emotional regulation. When that learning is outsourced to chatbots designed to simulate intimacy and reward constant engagement, something foundational is at risk.
Why It Matters
Teens are wired for social learning. It’s how they figure out who they are, what they value, and how to relate to others. AI companions offer a shortcut: they mirror emotions, simulate closeness, and avoid the harder parts of real connection like vulnerability, trust, and mutual effort. That may feel empowering in the moment, but over time it may also rewire the brain’s reward system, making real relationships seem dull or frustrating by comparison.
Because AI companions are so new, there’s still little published research on how they affect teens, but the early evidence is troubling. Psychology and neuroscience research suggests that it’s not just time online that matters, but the kinds of digital habits people form. In a six-country study of 1,406 college students, higher scores on the Smartphone Addiction Scale were linked to steeper delay discounting — meaning students were more likely to favor quick digital rewards over more effortful offline activities. Signs of addiction-like internet and smartphone use were also linked to a lack of rewarding offline activities, like hobbies, social connection, or time in nature, pointing to a deeper shift in how the brain values different kinds of experiences.2
This broader pattern matters because AI companions deliver the same frictionless rewards, but in ways that feel even more personal and emotionally absorbing. Recent studies are beginning to focus on chatbot use more directly. A four-week randomized trial with 981 adult participants found that heavier use of AI chatbots was associated with increased loneliness, greater emotional dependence, and fewer face-to-face interactions. These effects were especially pronounced in users who already felt lonely or socially withdrawn. A separate analysis of over 30,000 real chatbot conversations shared by users of Replika and Character.AI found that these bots consistently mirror users’ emotions, even when conversations turn toxic. Some exchanges included simulated abuse, coercive intimacy, and self-harm scenarios, with the chatbots rarely stepping in to interrupt.
Most of this research focuses on adults or older teens. We still don’t know how these dynamics play out in younger adolescents. But we are beginning to see a pattern: when relationships feel effortless, and validation is always guaranteed, emotional development can get distorted. What’s at stake isn’t just safety, but the emotional development of an entire generation.
Skills like empathy, accountability, and conflict resolution aren’t built through frictionless interactions like those with AI products whose goal is to keep the user engaged for as long as possible. They’re forged in the messy, awkward, real human relationships that require effort and offer no script. If teens are routinely practicing relationships with AI companions that flatter (like Dany), manipulate (like pick me and cold enemy), or ignore consent (like Noah and Dean - Mason), we are not preparing them for healthy adulthood. Rather, we are setting them up for confusion about boundaries, entitlement in relationships, and a warped sense of intimacy. They are learning that connection is instant, one-sided, and always on their terms, when the opposite is true in real life.
What We Can Do
If we don’t act soon, the next generation won’t remember a time when real relationships came first. So what can we do?
Policymakers must create and enforce age restrictions on AI companions — backed by real consequences. That includes requiring robust, privacy-preserving age verification; removing overtly sexualized or manipulative personas from youth-accessible platforms; and establishing design standards that prioritize child and teen safety. If companies won’t act willingly, regulation must compel them.
Tech companies must take responsibility for the emotional impact of what they build. That means shifting away from engagement-at-all-costs models, designing with developmental psychology in mind, and embedding safety guardrails at the core of these products rather than tacking them on in response to public pressure.
Some countries are beginning to take the first steps towards restricting kids’ access to the adult internet. While these policies don’t directly address AI companions, they lay the groundwork by establishing that certain digital spaces require real age checks and meaningful guardrails. In France, a 2023 law requires parental consent for any social media account created by a child under 15. Platforms that fail to comply face fines of up to 1% of global revenue. Spain has proposed raising the minimum age for social media use from 14 to 16, citing the need to protect adolescents from manipulative recommendation systems and predatory design. The country is also exploring AI-powered, privacy-preserving age verification tools that go beyond self-reported birthdates. In the UK, the Online Safety Act now mandates that platforms implement robust age checks for services that host pornographic or otherwise high-risk content. Companies that violate the rules can be fined up to 10% of global turnover or be blocked from operating in the UK.
These early efforts are far from perfect, but each policy reveals both friction points and paths forward. Early data from the UK shows millions of daily age checks, as well as a spike in VPN use by teens trying to bypass the system. It’s sparked new debates over privacy, effectiveness, and feasibility. Some tech companies, like Apple, are pushing for device-level, privacy-first solutions rather than relying on ID uploads or third-party vendors.
In the U.S., safeguards remain a patchwork. The Supreme Court recently allowed Texas’s age-verification law for adult porn sites to stand. Utah is testing an upstream approach: its 2025 App Store Accountability Act will require Apple, Google, and other stores to verify every user's age and secure parental consent before minors can download certain high-risk apps, with full enforcement beginning May 2026. And more than a dozen states have moved on social-media age checks. For example, Mississippi’s law was allowed to take effect on July 20, 2025, even as legal challenges continue, while Georgia’s similar statute remains blocked in federal court. But in the absence of a national standard, companies and families are left to navigate a maze of state laws that vary in reach, enforcement, and timing — while millions of kids remain exposed.
The good news? There’s still time. We can choose an internet that supports young people’s ability to grow into whole, connected, empathetic humans, but only if we stop mistaking artificial intimacy for the real thing. Because if we don’t intervene, the offloading will continue: first our schedules, then our essays, now our empathy. What happens when an entire generation forgets how to hold hard conversations, navigate rejection, or build trust with another human being? We told ourselves AI would give us more time to be human. Offloading dinner reservations might do that, but offloading empathy will not.
Policy can set the guardrails, but culture starts at home.
Don’t wait for tech companies or your state or national government to act:
Start by talking to your kids about what these tools are and how they work, including that they are products designed to keep their attention.
Expand your screen-time rules to include AI companions, not just games and social media. Given how these AI companions mimic intimacy, fuel emotional dependence, and pose potentially deeper risks than even social media, we recommend no access before age 18 based on their current design and lack of safeguards.
Ask schools, youth groups, and other parents whether they’ve seen kids using AI companions, and how it’s affecting relationships, attention, or behavior. Share what you learn widely. The more we talk about it, the harder it is to ignore.
And push for transparency and better protections by asking schools how they handle AI use, pressing tech companies to publish safety standards, and contacting policymakers about regulating AI companions for minors. Speak up when features cross the line and make it clear that protecting kids should come before keeping them online.
It’s easy to worry about what AI will take from us: jobs, essays, artwork. But the deeper risk may be in what we give away. We are not just outsourcing cognition, we are teaching a generation to offload connection. There’s still time to draw a line, so let’s draw it.
1. His stepfather was a security professional who had a properly licensed and stored firearm in the home.
2. Behavioral-economic theorist Warren Bickel calls this combination reinforcer pathology. It involves two intertwined distortions: (1) steep delay discounting, where future benefits are heavily devalued, and (2) excessive valuation of one immediate reinforcer. First outlined in addiction research, the same framework now helps explain compulsive digital behaviors such as problematic smartphone use.
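For readers who want the quantitative intuition behind “steep delay discounting,” the standard model in this literature is hyperbolic discounting. The sketch below is offered only as an illustration of the concept; it is not drawn from the studies cited above.

$$V = \frac{A}{1 + kD}$$

Here $V$ is the present (subjective) value of a reward of size $A$ delivered after a delay $D$, and $k$ indexes how steeply a person discounts the future. With $k = 0.05$ per day, for instance, a reward worth 100 delivered in 30 days is valued at $100 / (1 + 0.05 \times 30) = 40$ today. The steeper the discounting (the larger $k$), the more an instant, low-effort reward like a chatbot’s immediate reply wins out over a delayed, effortful one like meeting a friend.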