AI Will Soon Make Social Media Much More Harmful to Liberal Democracy, and to Children
My new essay with Eric Schmidt lays out four imminent threats and five doable reforms
The After Babel Substack is about “the profound psychological and sociological changes that occurred in the 2010s when human social and political life migrated onto platforms curated by a few for-profit companies whose business models drove them to maximize engagement.” I’m using this Substack to help me write two books on the topic. The first will be titled The Anxious Generation: How smartphones and overprotection damage mental health and block the path to adulthood. (This is the new title, much better than the old title of “Kids In Space.”)
The second book will be titled Life After Babel: Adapting to a world we can no longer share. It will be about how social media (and related technologies) changed the structure of social relationships and knowledge flow so profoundly that we may never again have shared understandings, and we may be living in a way that cannot sustain liberal democracy as we have known it since the 18th century.
So far, this Substack has been entirely focused on the issues of the first book: documenting the epidemic of teen mental illness that began around 2012, in many countries, and laying out the evidence that a major cause of it was the sudden move of teen social life from flip phones (which are only good for communication) to smartphones (which are good for gaining and holding children’s attention, taking them away from everything else). In the coming weeks, I’ll pivot to the second major cause of the epidemic: the loss of unsupervised free play and especially risky play that began with the rise of “safetyism” (the worship of safety) in the 1990s, at least in the USA, UK, and Canada.
But today, I’m putting up my first post about the second book, Life After Babel, because I have a new essay out in The Atlantic with Eric Schmidt (the former CEO of Google), titled AI IS ABOUT TO MAKE SOCIAL MEDIA (MUCH) MORE TOXIC. I draw out the main points here.
AI IS ABOUT TO MAKE SOCIAL MEDIA (MUCH) MORE TOXIC
About one year ago, I published an essay in The Atlantic titled WHY THE PAST 10 YEARS OF AMERICAN LIFE HAVE BEEN UNIQUELY STUPID. It was a very dark essay, laying out the view from social psychology on what social media has done to the processes and institutions upon which a liberal democracy relies. Eric read an early version of the essay, along with my book proposal for Life After Babel, and reached out to me to talk.
It took a while for us to get together, but we finally met for lunch in NYC last October. I laid out my full argument about the harms to democracy from social media, and I showed him the early versions of the graphs about teen mental health featured throughout this Substack. Eric, an engineer who was at the center of things from the beginning, laid out his growing concerns about the effects of social media. He had been a techno-optimist in the early days of the internet and social media. Here's a passage that got cut from our first draft of today's Atlantic essay, giving Eric's reflections on how he came to change his mind in recent years:
When I look back, I see two ways in which we in the tech community were naively optimistic. First, like many in Silicon Valley, I had an overly rosy view of human nature. Most of us thought that it was inherently good to just connect everybody and everything. But now I can see that even though most people are good––or, at least, they behave well when interacting with strangers––a small number of trolls, foreign agents, and domestic jerks gain access to the megaphone that is social media, and they can do a lot of damage to trust, truth, and civility.
Second, I didn’t fully understand human tribalism and the way that social media could supercharge it. All platforms wanted to grow their user bases and increase their engagement, and we all thought that social media was a healthy way to help small communities form and flourish. But as political polarization rose steadily, not just in the USA but in many parts of the world in the 2010s, we discovered that issues of partisanship, identity, and us-versus-them were among the most powerful drivers of engagement.
Eric had recently published a book on AI titled The Age of AI (with Henry Kissinger and Daniel Huttenlocher; see this video summary). The book is not alarmist — it is a balanced examination of the bounties and dangers that AI might confer upon us. But Eric had been thinking about the interactions of AI with social media, and once I shared my perspective on the social psychology of that interaction, it became clear to both of us that generative AI could make social media much, much worse. Given the concerns that Eric described above, it seemed likely that AI was going to super-empower bad actors by giving them each an army of assistants, and it was going to supercharge intergroup conflict by drowning us all in high-quality video evidence that the other side is worse than Hitler.
We decided to write an essay together, joining his understanding of the technology with my research on social and moral psychology. We converged upon a short list of four imminent threats, all described in our essay:
1) AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation.
2) Personalized super-influencers will make it much easier for companies, criminals, and foreign agents to sway us into doing their bidding via social media platforms.
3) AI will make social media much more addictive for children, thereby accelerating the ongoing teen mental illness epidemic.
4) AI will change social media in ways that strengthen authoritarian regimes (particularly China) and weaken liberal democracies, particularly polarized ones, such as the USA.
We then began talking about potential reforms that would reduce the damage. We share a general wariness of heavy-handed government regulations when market-based solutions are available. Still, we saw that social media and AI both create collective action problems and market failures that require some action from governments, at least for setting rules of the road and legal frameworks within which companies can innovate. We workshopped a list of ideas with an MIT engineering group organized by Eric's co-author Dan Huttenlocher (we thank Aleksander Madry, Asu Ozdaglar, Eric Fletcher, Gregory Dreifus, Simon Johnson, and Luis Videgaray), and with members of Eric's team (thanks especially to Robert Esposito, Amy Kim, Eli Sugarman, Liz McNally, and Andrew Moore). We also got helpful advice from experts including Ravi Iyer, Renée DiResta, and Tobias Rose-Stockwell.
We ended up selecting five reforms aimed mostly at increasing everyone’s ability to trust the people, algorithms, and content they encounter online:
1. Authenticate all users, including bots
2. Mark AI-generated audio and visual content
3. Require data transparency with users, government officials, and researchers
4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote
5. Raise the age of “internet adulthood” to 16 and enforce it
I hope you’ll read the essay for our explanations of why these reforms are needed and how they could be implemented.
The arrival of social media in the early 2000s was our most recent encounter with a socially transformative technology that spread like wildfire with almost no regulation, oversight, or liability. It has proven to be a multidimensional disaster. Generative AI promises to be far more transformative and is spreading far more quickly. It has the potential to bring global prosperity, but that potential comes with the certainty of massive global change. Let’s not make the same mistake again. Liberal democracy and child development are easy to disrupt, and disruption is coming. Let’s get moving, this year, to protect both.
Comments

Eric Schmidt! I don't think so. Google has played along with this schtick from the beginning, and Eric has been in the driver's seat since 2001. He has played along with and capitalized on Section 230 since the beginning of Google.
https://en.wikipedia.org/wiki/Section_230
Section 230 is the reason that it is nearly impossible to hold social media companies legally liable for the content their users post.
Also, Google, with Eric Schmidt at the helm, has made a pile of money off of licensing the Android operating system, which runs on Samsung cell phones.
Likely, a primary reason that Eric is so concerned about AI is that open-source AI poses an existential threat to Google Search (his cash-cow monopoly).
Eric never had a moral compass and I doubt that he has suddenly developed one.
Thank you for taking on this formidable challenge, Jonathan. I have read all of your books and frequently recommend them.
I understand that AI poses a serious threat to the information landscape, but we must vigilantly guard against authoritarian tendencies in our efforts to thwart those threats, lest we inadvertently empower the state and other authorities to infringe on our inalienable human rights, much as chemotherapy destroys healthy cells along with malignant ones.
Over the past three years, we have witnessed how governments have used the excuse of suppressing “misinformation” to silence dissident voices exposing their disinformation and lies—to lethal effect—as I’ve covered extensively at my Substack:
• “Letter to US Legislators: #DefundTheThoughtPolice” (https://margaretannaalice.substack.com/p/letter-to-us-legislators-defundthethoughtpolice)
• “Letter to the California Legislature” (https://margaretannaalice.substack.com/p/letter-to-the-california-legislature)
• “Dispatches from the New Normal Front: The Ministry of Truth’s War on ‘Misinformation’” (https://margaretannaalice.substack.com/p/dispatches-from-the-new-normal-front)
My concern with the proposed reforms you have outlined here is that they can easily be abused by totalitarian forces. #1, for example, would eliminate the protective cloak of privacy for whistleblowers and others attempting to expose corruption and other regime crimes, thus endangering the ability of individuals to share information that incriminates the powers enforcing this rule.
#2 is an excellent idea and one I support; same goes for #5.
#3 is a bit amorphous. I would need to understand more about what you mean by requiring data transparency, but I am strongly in favor of transparency for government officials, agencies, and other public entities.
#4 worries me greatly, as it could threaten the very platform this piece has been published on. I am extremely grateful to Chris Best and Hamish McKenzie for taking a strong stance in favor of free speech despite ongoing pressure from pro-censorship advocates. The discussion provoked by this Note from Hamish is well worth perusing for those who wish to understand the nuances of this contentious debate:
• https://substack.com/profile/3567-hamish-mckenzie/note/c-15043731
As you formulate solutions to address the challenges of AI, I ask that you never lose sight of the necessity to protect our freedom of expression. As Michelle Stiles writes in “One Idea to Rule Them All: Reverse Engineering American Propaganda”:
“The greatest attack on language is censorship and this must be resisted at every level. You cannot have a free society without free speech, period. Any attempt to argue that others must be protected from offense and hurt feelings should be utterly repudiated. No government, no company, no fact-checkers can ever be the arbiters of truth.”