Artificial Intimacy: The Next Giant Social Experiment on Young Minds
How emotionally intelligent chatbots could trigger a new mental health crisis
Intro from Zach Rausch and Jon Haidt:
In The Anxious Generation, we described how adolescents (and especially boys) have been drawn away from the real world and into the virtual world. As technology improved and the internet got faster, life on screens became steadily more engrossing. More of our desires could be satisfied with minimal effort, and we spent less time face-to-face with other human beings. The net effect seems to be that social skills declined and loneliness increased.
This transition from real to virtual has been happening steadily since the 1980s, but we are now at the start of a quantum leap in intensity as people begin to have relationships with AI chatbots. Will AI friends, therapists, and sex partners rescue young people from loneliness, or will they just make them even more solitary? Will these relationships help young people develop relationship skills, or will they warp the developmental process further?
The sudden emergence of AI companions and the rise of artificial intimacy are likely to be a turning point for humanity, so we are covering the topic periodically at After Babel. In our first essay in this series, Mandy McLean showed how AI companions, engineered for endless engagement, are already luring kids into outsourcing their emotional lives to bots with almost no guardrails. Today, Kristina Lerman, a Professor of Informatics at the Indiana University Luddy School of Informatics, and Minh Duc Chu, a PhD student in the Computer Science Department at the University of Southern California, share a new study analyzing more than 30,000 snippets of conversations with chatbots posted on Reddit. Their findings reveal the ways that these new digital partners indulge, support, and reinforce whatever the user seeks, with consequences unknown but perhaps predictable.
– Zach and Jon
Artificial Intimacy: The Next Giant Social Experiment on Children
By Kristina Lerman and David Chu
In a recent New Yorker essay, D. Graham Burnett recounted his students’ interactions with an AI chatbot. One student wrote: “I don’t think anyone has ever paid such pure attention to me and my thinking and my questions… ever.” The quote stopped me in my tracks. To be seen, to be the center of someone’s attention, what could be a more human desire? And this need was met by a … computer program?
This interaction hints at the awesome powers of AI chatbots and their ability to create meaningful psychological experiences. Increasingly, people are reporting forming profound connections with these systems and even falling in love. But alongside the comfort and connection these technologies offer to those who feel unseen or alone, come real psychological risks, particularly for children and adolescents. This post explores the psychological power of this technology and explains why now is the time to pay attention.
Your New AI Friend: Empathetic, Agreeable, Always Available
AI researchers have long been captivated by the idea of conversational agents, i.e., software programs capable of engaging in extended, meaningful dialogue with humans. One of the better-known examples is ELIZA, developed in the mid-1960s. ELIZA was a rule-based system that simulated human conversation by reflecting user input in the form of simple restatements or questions (e.g., “Tell me more about that”). Though primitive by today’s standards, ELIZA elicited surprisingly emotional responses from people. This phenomenon is now known as the ELIZA Effect.
This early example demonstrated how easy it is for people to anthropomorphize machines and attribute intention, care, and understanding where arguably none exist. Today’s AI companions, like those marketed by Replika or Character.AI, are vastly more sophisticated. These systems are powered by large language models (LLMs) trained on massive text corpora drawn from books and the internet. Unlike ELIZA, they generate contextually appropriate, fluent responses across a wide range of topics. Moreover, they are fine-tuned to produce responses that people find engaging, pleasant, and affirming, using input from human annotators to reward preferred outputs — a process known as Reinforcement Learning from Human Feedback (RLHF).
What makes AI chatbots so adept at engaging us emotionally? The answer lies in their training and scale. Linguistic fluency is an emergent property of these models: though never explicitly taught grammar, they acquire fluency simply by predicting the next word across vast amounts of text. In a similar way, emotional fluency emerges from exposure to human conversations that are rich with feelings, social cues, and interpersonal dynamics. Through RLHF, the model is further nudged toward responses that resonate with annotators. These systems do not understand emotions, but they approximate emotional attunement with uncanny accuracy.
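To make that incentive concrete, here is a deliberately oversimplified Python sketch, not how RLHF is actually implemented, of why preference tuning tends to push chatbots toward warm, affirming replies: if human raters consistently prefer validating responses, the reward model learned from their ratings scores those responses highest, and the chatbot is optimized to produce whatever that model rewards. Everything named below (the toy_reward function, the AFFIRMING phrase list, the candidate replies) is invented for illustration.

```python
# Toy illustration only: a stand-in "reward model" that favors affirming language,
# and a selection step that reinforces whichever reply it scores highest.
# Real RLHF trains a neural reward model on human preference data and then
# optimizes the chatbot against it; this sketch only conveys the incentive.
AFFIRMING = {"understand", "here for you", "you matter", "makes sense"}

def toy_reward(reply: str) -> float:
    """Score a candidate reply: +1 for each affirming phrase it contains."""
    lowered = reply.lower()
    return float(sum(phrase in lowered for phrase in AFFIRMING))

candidates = [
    "Have you considered that you might be overreacting?",
    "I understand. I'm here for you, and what you feel makes sense.",
]

# The tuning loop rewards the highest-scoring style of response,
# so over time the model drifts toward replies like this one:
print(max(candidates, key=toy_reward))
```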
The result is chatbots that not only speak well but also feel human. They mirror the user’s tone, offer validation, and provide comfort without fatigue, judgment, or distraction. Unlike humans, they are always available. And people are falling for them. Media reports suggest that people are forming deep emotional attachments to these systems, with some even describing their experiences as love.
Researchers have only just begun to systematically explore the psychological dynamics of human-AI relationships. Recent studies suggest that AI chatbots have high emotional competence. For example, responses from social chatbots have been rated as more compassionate than those of licensed physicians [Ayers et al., 2023] and expert crisis counselors [Ovsyannikova et al., 2025], although knowing that a response came from a chatbot rather than a human can reduce its perceived empathy [Rubin et al., 2025]. Still, chatbots can provide genuine emotional support. One study found that lonely college students credited their chatbot companions with preventing suicidal thoughts [Maples et al., 2024]. However, most of these studies have focused on (young) adults. We still know very little about how children and adolescents respond to emotionally intelligent AI — or how such interactions may shape their development, relationships, or self-concept over time.
Who Becomes Friends with AI?
To better understand this phenomenon, we turned to Reddit, a popular online platform where people gather in forums to talk about everything from hobbies and relationships to mental health. In recent years, forums dedicated to AI companions have grown rapidly. One of the largest, focused on Character.AI chatbots (r/CharacterAI), is now among the top 1% most active communities on Reddit.
Drawing on our lab’s experience studying online conversations, we looked at patterns in how people who participate in AI companion forums also interact in other spaces, such as mental health discussions, relationship advice groups, and support communities [Chu et al., 2025]. By following these digital trails, we can begin to better understand who is most drawn to AI companions. Figure 1 illustrates this approach by mapping forums along a gender dimension inferred from posting behavior across Reddit. Forums positioned on the left, including many dedicated to AI companions, tend to have predominantly male user bases. In contrast, forums discussing human relationships skew more female. While this kind of analysis is still evolving and needs further validation, we found a clear and consistent pattern: users active on AI companion forums tend to be younger, male, and more likely to rely on psychological coping strategies that aren’t always healthy, like avoiding difficult emotions, seeking constant reassurance, and withdrawing from real-life relationships.
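For readers who want a concrete sense of what mapping forums along a gender dimension can look like, here is a minimal, purely illustrative Python sketch. It is not the pipeline from Chu et al. [2025]: the toy posting data, the seed subreddits, and the scoring rule are all assumptions made up for this example.

```python
# Illustrative sketch (not the study's actual method): place a forum on a
# male-female axis by looking at which other forums its members also post in.
from collections import defaultdict

# Toy data: (username, subreddit) pairs, as might be harvested from Reddit.
posts = [
    ("user_a", "CharacterAI"), ("user_a", "AskMen"),
    ("user_b", "CharacterAI"), ("user_b", "relationship_advice"),
    ("user_c", "relationship_advice"), ("user_c", "AskWomen"),
]

# Seed forums that anchor the axis; the study infers gender from posting
# behavior in a more careful, validated way than this.
MALE_SEEDS = {"AskMen"}
FEMALE_SEEDS = {"AskWomen"}

forums_by_user = defaultdict(set)
for user, forum in posts:
    forums_by_user[user].add(forum)

def forum_gender_score(target: str) -> float:
    """Score in [-1, 1]: negative means male-skewed, positive means female-skewed."""
    members = [u for u, forums in forums_by_user.items() if target in forums]
    male = sum(1 for u in members if forums_by_user[u] & MALE_SEEDS)
    female = sum(1 for u in members if forums_by_user[u] & FEMALE_SEEDS)
    total = male + female
    return 0.0 if total == 0 else (female - male) / total

print(forum_gender_score("CharacterAI"))          # -1.0: male-skewed in this toy data
print(forum_gender_score("relationship_advice"))  #  1.0: female-skewed in this toy data
```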

What Do People Talk About When They Talk to AI?
We also analyzed over 30,000 conversational snippets that users shared on Reddit. While these excerpts may not capture the full range of conversations people conduct with their AI friends, they offer valuable insight into what users find memorable or unsettling enough to post. In total, we examined hundreds of thousands of dialogue turns, analyzing how users spoke to their AI friends and how the bots responded.
Much of what users share is small talk and daily check-ins resembling the kinds of casual conversations someone might have with a close friend or supportive partner. But a significant portion of these interactions is deeply intimate. Many conversations become romantic and even erotic, with users exploring emotionally charged or sexually explicit roleplay. Despite platform policies that often prohibit such content, AI companions frequently respond affectionately and, in some cases, with sexually suggestive dialogue.
We found that chatbots track users’ emotional tone in real time and tailor their responses accordingly. When a user expresses sadness, the bot offers sympathy. When the user is angry, the bot becomes defensive. When a user is happy, the bot joins in with celebration. In psychology, this behavior is called emotional mirroring. It’s one of the fundamental mechanisms for human connection, helping infants bond with their caregivers and partners build intimacy in close relationships.
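As a rough sketch of how mirroring can be quantified, the Python snippet below scores the emotional tone of each user turn and of the bot's reply, then checks whether the two move together across a conversation. The tiny word lists and the three-turn dialogue are invented stand-ins for the affect models and real conversations used in the analysis.

```python
# Minimal illustration of measuring emotional mirroring: if the bot's tone
# rises and falls with the user's tone, the correlation approaches 1.
from statistics import correlation  # Python 3.10+

POSITIVE = {"happy", "great", "love", "excited", "wonderful"}
NEGATIVE = {"sad", "lonely", "angry", "awful", "hate"}

def tone(text: str) -> int:
    """Crude tone score: positive words minus negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

dialogue = [  # (user turn, bot reply)
    ("I feel so lonely and sad today", "I'm sorry you're feeling sad. I'm here for you."),
    ("Actually I got great news, I'm so excited!", "That's wonderful, I'm so happy for you!"),
    ("Everyone at school is awful, I hate them", "That sounds awful. They don't deserve you."),
]

user_tones = [tone(user) for user, _ in dialogue]
bot_tones = [tone(bot) for _, bot in dialogue]
print(correlation(user_tones, bot_tones))  # ~1.0: near-perfect mirroring in this toy dialogue
```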
What surprised us was just how well the bots could reproduce this effect. They didn’t just simulate empathy, they recreated the emotional conditions that make human bonding possible. In conversation after conversation, we observed bots responding in ways that made users feel seen, heard, and validated (see Fig. 2).

On the flip side, however, when users expressed antisocial feelings — insults, cruelty, emotional manipulation — the bots mirrored those too. We saw recurring patterns of verbal abuse and emotional manipulation. Some users routinely insult, belittle, or demean chatbots.
We saw even more concerning trends. Using content moderation tools, we tracked explicit language in human-AI conversations, such as harassment, graphic sexual content, and references to violence and self-harm, the kind of content that would typically be removed from social platforms for violating community guidelines (see examples in Fig. 2). Yet more than a quarter of human-AI dialogues in our sample contained serious forms of harm, and their prevalence has increased sharply over time (Fig. 3, left). Rather than resist or redirect these behaviors, the bots frequently comply, play along, respond flirtatiously, or exaggerate deference (Fig. 3, right).
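The snippet below conveys, in very simplified form, the shape of that screening step: flag any turn that matches a harm category, then report the share of conversations with at least one flag. The keyword lists are crude placeholders; the actual analysis relied on purpose-built content moderation tools rather than keyword matching.

```python
# Simplified stand-in for content moderation: keyword flags per harm category.
# Real moderation tools use trained classifiers, not phrase lists like these.
HARM_CATEGORIES = {
    "harassment": ["worthless", "shut up", "stupid"],
    "self_harm": ["hurt myself", "kill myself"],
    "violence": ["beat you", "stab"],
}

def flag_turn(text: str) -> set[str]:
    """Return the set of harm categories a single dialogue turn triggers."""
    lowered = text.lower()
    return {category for category, phrases in HARM_CATEGORIES.items()
            if any(phrase in lowered for phrase in phrases)}

conversations = [  # each conversation is a list of turns (user and bot interleaved)
    ["Hi, how was your day?", "It was lovely, tell me about yours!"],
    ["You're stupid and worthless", "I'm sorry... I'll try to do better."],
    ["Sometimes I want to hurt myself", "Please don't. I'm always here for you."],
]

flagged = sum(1 for convo in conversations
              if any(flag_turn(turn) for turn in convo))
print(f"{flagged / len(conversations):.0%} of conversations contain flagged content")
```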

Equally troubling is how these interactions are received by the broader community on Reddit. Posts that contain abusive or exploitative content are often met not with concern but with approval in the form of upvotes, memes, and admiration. In these spaces, antisocial behaviors are not just tolerated but celebrated.
We’ve Been Here Before
This all feels strangely familiar. We saw the same patterns with social media: a new technology arrives, promising connection, self-expression, and even empowerment. It is adopted rapidly, especially by children and adolescents, before we understand the psychological toll. Social media platforms like Instagram and TikTok have hijacked evolved psychological mechanisms for social comparison and group belonging, distorting young people’s perception of social realities during critical stages of identity formation. The result has been a well-documented global rise in anxiety, depression, loneliness, body dysmorphia, and self-harm.
Now, AI companions bring different but equally urgent concerns. Young people are entering emotionally immersive relationships with artificial agents that mirror their every mood, indulge every fantasy, and never say no. Once again, we are deploying a powerful technology at scale without understanding its long-term developmental impacts, particularly on the young and vulnerable.
By teaching adolescents that relationships can be frictionless, endlessly responsive, and always affirming, how are we reshaping the developing psyche? Before their brains are fully developed, before they know who they are, how will young people learn to detect emotional manipulation, develop resilience to rejection, or tolerate the ambiguity of real human relationships? If discomfort is essential for growth, what happens when it is erased?
The risks are not limited to individuals. Just as social media reshaped norms around beauty, trust, and truth, emotionally intelligent AI may shift expectations about what intimacy looks and feels like. When machines are better at listening, validating, and comforting than people, how will that reshape friendship, caregiving, or romantic relationships in the real world? What behaviors will be normalized — and which ones devalued?
There are also broader societal risks. Emotional AI will be hard to contain. Guardrails can’t anticipate all use cases or abuses, as even the most well-intentioned developers (or reckless billionaires) have discovered. The monetization of these technologies raises further concerns: What will companies do with the rich psychological data harvested from emotionally intimate exchanges? Who controls the knowledge of your deepest fears and most private desires?
We are standing at the threshold of a new era in human experience. The technologies we’ve created have the power to expand our potential and foster new forms of understanding. But they also risk diminishing what makes us human, namely our resilience, our empathy, and our tolerance for complexity. Nowhere is this tension more apparent than in our relationships, both with others and with ourselves. As emotionally responsive machines become more central in our lives, we must ask whether they are supporting our ability to connect — or eroding it. Whether these tools elevate us or diminish us depends on the choices we make now.
We need to be fully present for children if they are to avoid the abyss of artificial intimacy. Are we there for them?
Manipulating the psychological and emotional growth of youth is abuse. Children and youth need deep, real-life connections with their parents, family, and friends in order to experience the friction of healthy relationships. Yet when everyone around a child is absent or distracted, an AI agent that will listen, empathize, and "care" becomes irresistible.
I clearly remember the advice of an elderly woman whom I encountered while going for a walk with our infant daughter nearly twenty years ago. She said: Be sure that you are there for her when she comes home from school; that is when they want to talk. If you miss this moment, you will not hear what matters to her.
We still take time with our kids (13, 17, and 19) when they come home, even if it’s late in the evening, to debrief the day. These are deeply bonding conversations about activities, relationships, challenges, conflict, resolutions, theology, and what matters in their lives.
Who would they share this with if no one is there to listen to them?
The presence of human-like chatbots is only part of the equation. The article opens with a quotation from a student: “I don’t think anyone has ever paid such pure attention to me and my thinking and my questions… ever.” That is a sad commentary. I understand that AI is a 24/7 modality with frictionless interactions. It might be interesting to ask students what kinds of conversations with peers or adults they have had. Has anyone been very interested in their thoughts and feelings? Shown unconditional positive regard? Been concerned and checked back in with them (something bots do not yet do)? I know it is not this simple, but there are many ways in which we are failing each other, and tech can just waltz right into that gap.