AI Will Soon Make Social Media Much More Harmful to Liberal Democracy, and to Children
My new essay with Eric Schmidt lays out four imminent threats and five doable reforms
The After Babel Substack is about “the profound psychological and sociological changes that occurred in the 2010s when human social and political life migrated onto platforms curated by a few for-profit companies whose business models drove them to maximize engagement.” I’m using this Substack to help me write two books on the topic. The first will be titled The Anxious Generation: How smartphones and overprotection damage mental health and block the path to adulthood. (This is the new title, much better than the old title of “Kids In Space.”)
The second book will be titled Life After Babel: Adapting to a world we can no longer share. It will be about how social media (and related technologies) changed the structure of social relationships and knowledge flow so profoundly that we may never again have shared understandings, and we may be living in a way that cannot sustain liberal democracy as we have known it since the 18th century.
So far, this Substack has been entirely focused on the issues of the first book: documenting the epidemic of teen mental illness that began around 2012, in many countries, and laying out the evidence that a major cause of it was the sudden move of teen social life from flip phones (which are only good for communication) to smartphones (which are good for gaining and holding children’s attention, taking them away from everything else). In the coming weeks, I’ll pivot to the second major cause of the epidemic: the loss of unsupervised free play and especially risky play that began with the rise of “safetyism” (the worship of safety) in the 1990s, at least in the USA, UK, and Canada.
But today, I’m putting up my first post about the second book, Life After Babel, because I have a new essay out in The Atlantic with Eric Schmidt (the former CEO of Google), titled AI IS ABOUT TO MAKE SOCIAL MEDIA (MUCH) MORE TOXIC. I draw out the main points here.
AI IS ABOUT TO MAKE SOCIAL MEDIA (MUCH) MORE TOXIC
About one year ago, I published an essay in The Atlantic titled WHY THE PAST 10 YEARS OF AMERICAN LIFE HAVE BEEN UNIQUELY STUPID. It was a very dark essay, laying out the view from social psychology on what social media has done to the processes and institutions upon which a liberal democracy relies. Eric read an early version of the essay, along with my book proposal for Life After Babel, and reached out to me to talk.
It took a while for us to get together, but we finally met for lunch in NYC last October. I laid out my full argument about the harms to democracy from social media, and I showed him the early versions of the graphs about teen mental health featured throughout this Substack. Eric, an engineer who was at the center of things from the beginning, laid out his growing concerns about the effects of social media. He had been a techno-optimist in the early days of the internet and social media. Here’s a passage that got cut from our first draft of today’s Atlantic essay, giving Eric’s reflections on how he came to change his mind in recent years:
When I look back, I see two ways in which we in the tech community were naively optimistic. First, like many in Silicon Valley, I had an overly rosy view of human nature. Most of us thought that it was inherently good to just connect everybody and everything. But now I can see that even though most people are good––or, at least, they behave well when interacting with strangers––a small number of trolls, foreign agents, and domestic jerks gain access to the megaphone that is social media, and they can do a lot of damage to trust, truth, and civility.
Second, I didn’t fully understand human tribalism and the way that social media could supercharge it. All platforms wanted to grow their user bases and increase their engagement, and we all thought that social media was a healthy way to help small communities form and flourish. But as political polarization rose steadily, not just in the USA but in many parts of the world in the 2010s, we discovered that issues of partisanship, identity, and us-versus-them were among the most powerful drivers of engagement.
Eric had recently published a book on AI titled The Age of AI (with Henry Kissinger and Daniel Huttenlocher; see this video summary). The book is not alarmist — it is a balanced examination of the benefits and dangers that AI may bring us. But Eric had been thinking about the interactions of AI with social media, and once I shared my perspective on the social psychology of that interaction, it became clear to both of us that generative AI could make social media much, much worse. Given the concerns that Eric described above, it seemed likely that AI was going to super-empower bad actors by giving them each an army of assistants, and it was going to supercharge intergroup conflict by drowning us all in high-quality video evidence that the other side is worse than Hitler.
We decided to write an essay together, joining his understanding of the technology with my research on social and moral psychology. We converged upon a short list of four imminent threats, all described in our essay:
1) AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation.
2) Personalized super-influencers will make it much easier for companies, criminals, and foreign agents to influence us to do their bidding via social media platforms.
3) AI will make social media much more addictive for children, thereby accelerating the ongoing teen mental illness epidemic.
4) AI will change social media in ways that strengthen authoritarian regimes (particularly China) and weaken liberal democracies, particularly polarized ones, such as the USA.
We then began talking about potential reforms that would reduce the damage. We share a general wariness of heavy-handed government regulations when market-based solutions are available. Still, we saw that social media and AI both create collective action problems and market failures that require some action from governments, at least for setting rules of the road and legal frameworks within which companies can innovate. We workshopped a list of ideas with an MIT engineering group organized by Eric’s co-author Dan Huttenlocher (we thank Aleksander Madry, Asu Ozdaglar, Eric Fletcher, Gregory Dreifus, Simon Johnson, and Luis Videgaray), and with members of Eric’s team (thanks especially to Robert Esposito, Amy Kim, Eli Sugarman, Liz McNally, and Andrew Moore). We also got helpful advice from experts including Ravi Iyer, Renee di Resta, and Tobias Rose-Stockwell.
We ended up selecting five reforms aimed mostly at increasing everyone’s ability to trust the people, algorithms, and content they encounter online:
1. Authenticate all users, including bots
2. Mark AI-generated audio and visual content
3. Require data transparency with users, government officials, and researchers
4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote
5. Raise the age of “internet adulthood” to 16 and enforce it
I hope you’ll read the essay for our explanations of why these reforms are needed and how they could be implemented.
The arrival of social media in the early 2000s was our most recent encounter with a socially transformative technology that spread like wildfire with almost no regulation, oversight, or liability. It has proven to be a multidimensional disaster. Generative AI promises to be far more transformative and is spreading far more quickly. It has the potential to bring global prosperity, but that potential comes with the certainty of massive global change. Let’s not make the same mistake again. Liberal democracy and child development are easy to disrupt, and disruption is coming. Let’s get moving, this year, to protect both.