Reclaiming Conversation in the Age of AI
Sherry Turkle on AI, empathy, and the fight for human connection
Introduction from Jon Haidt:
Neil Postman was the greatest media analyst of the late 20th century, showing us how new technologies—from the printing press through television and the early internet—brought about changes to so many life domains, and to consciousness itself. I often wish that Postman were here with us today as the pace of change and concerns about harms increase rapidly.
But we do have a Neil Postman and her name is Sherry Turkle. She is the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at MIT, where she has been writing about the ways that humans and computers interact, and how those interactions change humans, since the 1980s. Her 2015 book Reclaiming Conversation was a landmark work of media scholarship, coming at the end of the long run of internet techno-optimism (from the 1990s through the Arab Spring) and just before the "techlash" of the late 2010s.
I first discovered Sherry through her 2012 TED talk, titled "Connected but alone?" By the time she published Reclaiming Conversation, it seemed that the question mark was no longer needed. In fact, Sherry gives us the most powerful summation of how smartphones and social media, these powerful technologies of connection, have damaged close human relationships. She does it in four words: "We are forever elsewhere."
It is now ten years since Reclaiming Conversation first appeared. The task of reclaiming face-to-face conversation with human beings in the age of TikTok, ChatGPT, and sex bots is much harder now than it was in 2015. We are so pleased that Sherry is releasing an updated version of the book, and so honored that Sherry is letting us present to you the new preface that she wrote for the updated version.
– Jon
Preface to the Tenth Anniversary Edition of Reclaiming Conversation
By Sherry Turkle

In June 2024, I was invited to speak about my research on the human effects of generative AI at my Harvard/Radcliffe College reunion. Just a few days before, OpenAI had released GPT-4o, a conversational program that speaks out loud and has this beguiling feature: its voice suggests emotion, catching in a way that sounds like concern or interest. It pauses for emphasis. This chatbot and its many cousins are designed to act as mentors, best friends, even lovers. They offer what I’ve called artificial intimacy, our new AI.
I begin my presentation with a demonstration of Chat to my undergraduate classmates. Most have never seen it before. I hold up my phone and tell the program that I’m at my college reunion and that it’s emotionally difficult. Many classmates have lost partners or are themselves struggling with illness. We have lost seventy-one members of our class since our last reunion five years ago. I’m stressed. I couldn’t sleep the night before. Can Chat help? A resonant male voice comes out of my smartphone: “Sherry, that is very hard. Be sure you are taking care of yourself. You are under a lot of stress.” Even I, who had been talking to Chat over the past several days and thought I had steeled myself against its charms, am taken aback. It’s just too good.
I spend the rest of my talk making my pitch that chatbots cannot know us, cannot hear us. But in the days that follow, my classmates don’t engage with my concerns about the ethical issues posed by generative AI. They ask me to help them put GPT-4o on their phones. Rather than a psychologist warning about the effects of pretend empathy, I am most useful as their IT support.
I wrote Reclaiming Conversation after a decade of studying what I call “relational artifacts,” computational objects that declare their caring intentions. I began with studies of Tamagotchis, digital creatures in tiny plastic eggs that ask to be tended to—to be fed and amused, to have their digital poop cleaned up. I graduated from Tamagotchis to Furbies, My Real Babies, Aibos, and finally Paros, social robots shaped like baby seals that are designed to be companions for the elderly. When people are drawn into the most primitive exchanges with a sociable object—avatar, robot, or chatbot—they believe it cares for them. And we are wired to care for it in return. My work with relational artifacts left me with this: We nurture what we love, but we love what we nurture. We love what we allow ourselves to relate to. It’s important to remember that this love is unrequited.
I was a young faculty member at MIT in the late 1970s when the psychoanalyst Erik Erikson visited to talk about engineering education. After his presentation, he asked me what I was doing as a humanist at an engineering school. I told him I was studying how computers change people’s ideas about themselves, and he made this comment: “Engineers, they’re not convinced that people have an interior. It’s not necessary for their purposes.” They see the complexity of inner life not as a feature but as a bug.
All that uncertainty, friction, ambivalence, pushback, all that drama. Who needs it?
Turns out, we need it.
When you reach out to make common cause with another person, accepting all the ways they are different from you, you increase your capacity for human understanding. That feeling of “friction” in human exchange is a good thing—it comes from bringing our whole selves to the encounter. When we communicate on screens, we distance ourselves from one another. We lose the ability to put ourselves in the place of others and negotiate differences. Intimacy and empathy are compromised, and civil society suffers as well.
This book is animated by my alarm about a flight from conversation—a retreat to social media and texting. But why would we do this to ourselves? Why would this seem so attractive? My fieldwork in homes, workplaces, classrooms, and schools demonstrates that what people most want is to avoid the “stress” of face-to-face interaction. To flee vulnerability, people in the 2010s mostly turned from talk to text. Today, in the flush of generative AI, we opt for even less risk and talk directly to machines. The urgency to reclaim conversation is even greater in the face of this seductive new threat.
It’s fall 2023, and I am talking about ChatGPT with Eric Schmidt, former CEO and chair of Google and now chair of the National Security Commission on Artificial Intelligence. The conversation turns to using chatbots as psychotherapists. Surely, I argue, this seems an area where a machine would have no standing. Schmidt disagrees. Information trumps experience. He insists that a chatbot will have every paper on anxiety at its disposal; it will know everything that all the greatest therapists have ever said about depression. In the future, he expands, there will be little need for person-to-person conversation. How could the accumulated knowledge of billions not be superior to the knowledge of one? The AI composite will always be better than any individual person. I found this viewpoint stunning.
In Erikson’s terms, are we all engineers now? Chatbots know how to deliver pleasing conversations, but they work, in fact, by predicting the appropriate next words in a sentence. All they can deliver is a performance of empathy. Pretend empathy. When you tell your troubles to a machine, it has no stake in the conversation. It can’t feel pain. It has no life in which you play a part. It takes no risk; it has no commitment. When you turn away from an exchange, the chatbot doesn’t care if you cook dinner or commit suicide. Without a body or a human life cycle, it has no standing to talk about loss, love, passion, or joy.
The idea that individual people, with their specificity and history, are less than an AI composite is a central dogma of generative AI. It’s the embodiment of Erikson’s warning. It does more than make human conversation transactional; it declares a lack of interest in what lies beneath.
It’s a new form of behaviorism that devalues the richness and complexity of the human.
Conversation is about more than information. In conversation, we reveal ourselves to each other in our conflicts, contradictions, and fears. There, we nurture our capacity for empathy by connecting to other humans who have experienced the attachments and losses of human life. What was a flight from conversation becomes the death of conversation itself.
These days, social media’s problems are in the news. It’s addictive and divisive, and it undermines the emotional growth of children. But just as we contemplate the first steps away from Facebook, TikTok, and Instagram, we fall in love with generative AI chatbots. Social media was our gateway drug to conversations with machines. And now, we live as addicts poised to substitute one drug for another, using chatbot “relationships” where we once used social media. Our criticism of technology lags behind its seductive power.
It’s an old story, actually—one where technology creates a problem and then offers a new technology as its solution. Silicon Valley began with the fairy dust of 1960s dreams sprinkled on it. The revolution had failed, but engineers and computer scientists claimed they would carry that dream into the early personal computer industry. Apple was countercultural, Google would do no evil, and Facebook would connect the world into a peaceful network of friends.
But all of this was an illusion. When I first encountered social media, which replaced friendship with friending, I saw us in the cold, hard center of a perfect storm. We came to expect more from technology and less from each other. And now, with the intimate machines of generative AI, we are so much further along this path of being satisfied with less.
Silicon Valley is there to make money—by keeping people at their screens. Now the industry has a new claim: the conversations of artificial intimacy will finally “cure” loneliness by offering more gratifying and supportive conversations than people ever could.
At our new point of inflection, we should see ourselves not as victims but as empowered consumers. If we don’t want to talk to machines, we must learn to avoid the hype.
In the wave of enthusiasm about generative AI, there has been renewed talk of technological determinism and “inevitable” next steps to integrate algorithms into our intimate lives. But nothing is inevitable—conversation is something we can forget, but also something we can remember. We can come back to each other and to ourselves.
I argued for this assertion of agency in 2015, and now I argue ever more fervently. There is more than a threat to empathy at stake; there is a threat to our sense of what it means to be human. The performance of pretend emotion does not make machines more human. But it challenges what we think makes people special. Our human identity is something we need to reclaim for ourselves.
That means making face-to-face conversation a priority because it is truly the most human and humanizing thing we do. It’s what has always allowed us to have common cause with other people. This preface is not only a warning against a new assault on conversation; it’s a reminder that our old tool works.
The philosopher Emmanuel Levinas wrote about the power of our embodied presence. The human face initiates an ethical compact. It signals the presence of a self that can recognize another. It calls us to compassion, to deep knowing. When we are present to each other in conversation, our mirror neurons cause our facial muscles to be in tune with those of our interlocutor. We experience the emotion of conversation from within our bodies. When I talk to engineers and computer scientists about this, as an argument for why chatbots should “stay in their lane,” my colleagues get my point but usually follow up with a shrug. They have already defined conversation as a transfer of information. They admit there is more, but it is, at least for the moment, superfluous. Superfluous because it cannot be implemented on a machine. When we make chatbots into our friends, we take up residency in that world where human relations are seen through the prism of what machines can do.
Chatbots, we are told, can now provide health, relationship, and financial advice. They can also create business plans and write love letters. But the conversations we need most are the ones that encourage human thriving. When you write a love letter, you want it to be effective (you want the recipient to love you back), but it is also an opportunity for self-summoning, a chance to reflect on one’s deepest feelings. Editing a love letter composed by an AI is another thing altogether. We alienate ourselves, needlessly, from ourselves.
So, reclaiming our sense of the human means increasing our respect for our own capacity for intimacy and introspection. It also means a new respect for the importance of conversations in multiple communities. It means dinner with our families and friends and the social life of parks, libraries, and teen centers. It means less time on social media. It means respecting sacred spaces where you don’t bring your phone: the kitchen, the dining room, the bedroom, the car, and the classroom.
I continue to believe in human resilience and resistance—and in our capacity to turn to conversation. When faced with worsening conditions, it’s time to double down on what we think will help.
Here is where I hope you, reader, will double down: take this moment to question our common practice of using the “marvels” of machine behavior to redefine human capacities that are as old as life. When Alan Turing defined artificial intelligence as the successful performance of human intelligence, he left out so much of what we rely on when we meet human intelligence. Human intelligence takes the social world into account. It is situated in the life of the body. Intelligent people relate to one another on a playing field of shared social experience. Nevertheless, the Turing behavioral definition of intelligence—a machine that fools you into thinking it is a person—became a gold standard. It was concrete and measurable.
Now that chatbots might be said to pass the Turing test, we pay the price for years of nodding our assent to its narrow behaviorism. And if we say that generative AI chatbots are empathic, our thinking about empathy is similarly downgraded. Empathy is putting yourself in someone’s place, caring you are there, and committing to stay the course. You have a stake in helping your neighbor make things better. You can’t get bored or turn away. Empathy enlarges those who offer it and binds them to others. It makes people feel part of something larger than themselves.
While the discourse around generative AI is hyperbolic (We’re leaving for the metaverse! AI will bring the end of human relevancy!), the language of reclaiming empathy and conversation is granular, humble, and concerned with the day-to-day. The family table. The garden club. These simple settings bring me back to where this book begins—with Thoreau’s image of chairs as spaces for conversation. The chairs connote the places—in the home and the public square—where individuals can find their inner voices and communities can gather. The chairs call us to places where we don’t consider our thoughts and feelings as commodities.
To me, the arguments in this book are more poignant because the pandemic stands between today and when I wrote them. Not surprisingly, it was in that time of isolation that a first generation of chatbots was proposed as a cure for loneliness. I tried them all but became ever more skeptical of the chair that machines can pull up to the conversation. When I cultivated solitude, I could hear my own voice. Chatbots led me to the pretend desires of beings that did not exist.
During the COVID years, we had all the time in the world to communicate through machines and to be alone with our machines, but more than anything, we missed each other and how we find ourselves in the presence of one another.
Can we summon ourselves to reclaim that longing and respect for the complexities of our communities and our inner lives? Right now, the culture may be smitten with the idea of pretend conversation with chatbots, but there is another choice: to turn our cultural resources to remaking the spaces in which the real thing can happen.