Solving the Social Dilemma: Many Paths to Social Media Reform
A guide to the laws and policies proposed around the world
[A note from Jon Haidt]
Those of us concerned about the effects of social media on society often feel that we’re battling a hydra—the many-headed monster of Greek mythology who grew two new heads each time Hercules cut one off. The variety, reach, and damage of these platforms keep expanding, even though we have not yet been able to cut off any heads.
Hercules finally killed the hydra with Athena’s golden sword. What tools or weapons do we have available to us today? What forces can possibly incentivize or force Meta, TikTok, and the rest to make changes that would benefit adolescent mental health or the stability and viability of liberal democracy? Market forces are not working because, as Tristan Harris and other experts explained in The Social Dilemma, social media has put us all into a social dilemma, also known as a collective action problem: it’s hard for anyone to quit as long as everyone else is on a platform. It’s hard for any parent to say no as long as a child says that “everyone else” in her class is on Instagram. But if we could all coordinate, we could escape from the trap.
The main tool that modern societies use to solve collective action problems—especially when children are being harmed—is the law. Laws set boundaries and minimum standards, creating a fair, safe, level playing field on which companies can innovate and compete. We simply don’t allow harmful products to be sold to minors. We don’t allow companies to expose children to danger, especially without the knowledge and consent of their parents.
What golden swords are available to us? What laws have been enacted, or proposed? Are they working as intended? Who knows! There are so many laws being proposed at the state and federal levels in the USA, to say nothing of the UK, EU, and Australia. Two years ago I tried making a list of them but gave up. I was making and managing too many other lists (see the long list of collaborative review docs that Zach Rausch and I maintain).
So, I reached out to my friends at the Center for Humane Technology (CHT) in May of 2022 and suggested that we work together on this. They put Camille Carlton on the job, and I put Zach on the job. The three of us worked together to figure out the best structure, and then Zach and Camille did most of the work to populate the document. We then sent it out to a select group of experts to critique our work and add to the doc.
At last, it is ready for public viewing and (more) criticism. This Substack post lays out the review doc, beginning with an argument for the necessity of regulation and then giving an overview of what you’ll find in the review doc.
We hope the review doc will be a resource for many audiences: for legislators around the world who are looking for ideas and model bills; for entrepreneurs who hope to build more humane technology; and for social scientists and policy experts who can help us better understand the effects—intended and unintended—of each approach.
CHT’s goal is humane technology. My goal is to restore adolescent mental health and reduce political chaos. The major tech companies are so unhelpful, so committed to ignoring the problems they create, that legislation and litigation are essential. Let’s pick the best tools for the job. As Hercules found out, not all swords work.
— Jon Haidt
Death Traps
On March 25, 1911, a fire broke out in the heart of Greenwich Village, Manhattan, at the Triangle Shirtwaist Factory. The factory, which occupied the eighth, ninth, and tenth floors of a building, became the stage for one of the deadliest industrial disasters in U.S. history. Within its walls, hundreds of girls and young women, many of them Italian and Jewish immigrants between the ages of 14 and 23, were operating sewing machines. The fire spread rapidly, trapping the workers behind locked doors—a common practice at the time, meant to prevent unauthorized breaks and reduce theft. With only one working elevator, many girls plunged down the shaft to their deaths. Others burned alive or jumped from the windows as thousands of bystanders watched. A total of 123 women and girls were killed, and 23 men.
The Triangle Shirtwaist Fire was just one of many industrial accidents of its era. Among the most tragic aspects of these fires was the fact that they were not inevitable; rather, they were the consequence of a lack of accountability and oversight that put thousands of young workers at extreme risk of harm.
If these fires were not inevitable, how could so many people have let them happen so many times? The answer is that the factories were caught in a trap, known by social scientists as a collective action problem. Any factory that spent money to improve worker safety was at a disadvantage compared to its competitors. There were no incentives to improve conditions for workers, so market pressures caused a “race to the bottom” in which a sleazy move by one factory put pressure on all competitors to follow suit.1 You can see how the dynamics of collective action problems work in Figure 1 below.
Figure 1. Collective Action Problem: A situation where an action would be beneficial to many people, yet if only one person takes the action while others do not, it becomes too costly for that individual. Consequently, it's unlikely that any one person will take the necessary action. (Image source: Green, 2015; black arrow, block, and text added by Zach).
Fortunately, there were also collective action solutions that liberated factories from the trap. These solutions required coordination among workers, the government, and ultimately, the factories themselves. The Triangle fire spurred ardent activism from many working women. Their activism put pressure on the government to eventually create the Committee on Public Safety, which in turn led to the formation of the Factory Investigating Commission. This Commission ran hundreds of investigations into worker safety conditions at factories around New York State. The Commission’s work led to 38 new labor laws, a monumental turning point for child protection and worker safety, and the start of a century of reforms.
Figure 2. Liberating ourselves from collective action problems. Coordination among groups of people enables individuals to escape from the trap. As more people participate, individual benefit increases. (Image source: Green, 2015; black arrow and text added by Zach).
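To make the trap in these two figures concrete, here is a minimal sketch in Python. The payoff numbers are illustrative assumptions of ours, not values from Green (2015): each actor pays a fixed cost to act, while the benefit grows with the number of participants, so acting is a net loss for early movers and a net gain once coordination pushes the group past a threshold.

```python
# A minimal sketch of the collective action dynamic in Figures 1 and 2.
# All payoff numbers are illustrative assumptions, not values from Green (2015).

def net_payoff(participants: int, group_size: int, cost: float = 3.0) -> float:
    """Net benefit to one actor who acts, given how many in the group act.

    The shared benefit grows with the fraction of the group participating,
    while the personal cost of acting stays fixed.
    """
    benefit = 10.0 * participants / group_size
    return benefit - cost

group_size = 100
for participants in (1, 10, 30, 50, 100):
    net = net_payoff(participants, group_size)
    verdict = "worth acting" if net > 0 else "too costly"
    print(f"{participants:3d}/{group_size} acting: net payoff {net:+.1f} ({verdict})")

# With few participants, acting is a net loss, so no rational individual
# moves first; that is the trap. Once coordination pushes participation
# past the threshold (just above 30 here), everyone gains by joining.
```

Laws act as the coordination device in this toy model: by raising the floor for every factory (or platform) at once, they move the whole group past the threshold together.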
The "race to the bottom" was stopped by state and then federal governments creating new safety and accountability measures that raised the "bottom" significantly. Companies consequently reduced their fire risk and improved their working conditions—for example by building more and better entrances and exits, and electrifying factories to replace the use of oil lamps or gas lights. They had to spend money upfront to make these improvements, but so did their competitors.
Social Media Companies Are Racing to the Bottom of the Brain Stem
The 21st century has many industrial disasters analogous to the Triangle Shirtwaist Fire, although today the flames often burn behind screens rather than in the middle of factories. One such fire has been raging since the early 2010s, as millions of adolescents and young adults have been ‘working’ for social media companies without pay or safety protection, generating the content needed (posts, likes, shares) to hook other young people. These “users” are not customers (and they are not even just free laborers); they are the product that the companies deliver to their real customers: the advertisers.
Like oil companies that compete to extract and sell as much oil as possible, social media companies vie for human attention, aiming to sell this attention to advertisers. Due to intense competition and limited regulation, these companies are engaged in their own race to the bottom, but this time, to the bottom of the brain stem.
This race is characterized by tactics aimed at maximizing engagement, and thus attention—think autoplay, infinite scroll, and content that evokes strong, and often extreme, emotion.
Such strategies, while profitable, fragment users’ attention, erode social trust, and jeopardize children’s mental health. The leaders of the social media companies know what they are doing, and some of them surely have a conscience, so why don’t they stop doing it? Because any company that drops out of the race will lose its young customers to the platforms that stay in it. The companies are caught in a collective action problem.
Reforming social media is extremely challenging, and the difficulties are compounded by what is called the complexity gap: the technology that underlies these platforms advances much faster than most of us can understand it, making reform a constant game of catch-up. Indeed, reform efforts for almost any cause are hard to carry out in our age of information overload and fragmentation, which makes truth, and effective reforms, exceptionally hard to find.
How do we navigate through the confusion, find common ground, and decide which avenues to pursue as we try to protect our children and improve our democracy? One difficulty is that there are so many reform efforts happening—locally, federally, internationally—and so many approaches being tried that we can’t even remember them all. If only there were one website or document that kept track of all of these efforts, organized them into categories, and linked them to research on their efficacy. Well, now there is.
Social Media Reform in a Rapidly Changing World
Jon and I have teamed up with the Center for Humane Technology to provide an ongoing, evolving, open-source Google document that contains citations and summaries of current and proposed social media reforms, at both the policy and platform levels, with comments and criticisms from leading experts in each content area. We hope that this resource will help legislators, reform advocates, and law firms coordinate their activities and find the most effective ways to incentivize (or force) tech companies to change their current practices.
The document is broken up into six sections, each proposing solutions for a distinct goal: reducing harmful impacts on children, improving civil discourse, protecting privacy and liberty, enhancing platform accountability, updating our existing institutions, and incentivizing technology that better serves the public interest. Each topical section is broken down into two subsections, policy fixes and platform fixes, indicating how each solution would be implemented.
Platform Fix: A technical solution implemented by changing how the platform is designed, such as disabling notifications by default to reduce addictive behaviors. Platform fixes can be adopted by companies themselves or incentivized via policy.
Policy Fix: An existing or proposed regulatory solution, such as the Age Appropriate Design Code, which aims to protect young people from social media harms. While policy fixes can include platform fixes, they can also include non-technical elements such as establishing new rights and standards or clarifying liability.
Here is the table of contents:
Let’s take section 2 as an example to illustrate the complexities of the reform efforts and how the document provides a tool to help us parse them. Section 2 highlights solutions designed to improve social media for children and adolescents. We’ve been writing a lot about this topic here at the After Babel Substack, and it has become clear that social media’s harms to mental health fall most acutely on children and adolescents.
Section 2 provides a list of solutions that approach the topic from a variety of perspectives. As can be seen in the Google doc, the platform and policy solutions for section 2 tend to fall into three large buckets: (1) content-based solutions, (2) age-gating solutions, and (3) design-based solutions. These solutions fall along different parts of the technology development cycle (from design to deployment), thus addressing the harms in different ways. Let’s dive into each to get a sense of what solutions are currently being implemented and considered:
1. Content-based solutions
The content-based solutions focus on regulating and filtering the type of content that young users can see and interact with on the platform. This has been a common approach used by social media platforms. For example, Facebook uses AI tools to detect and remove content that goes against their community standards before users can see it.
These content-based strategies have led to the removal of millions of posts per year across platforms. But such moderation is often dubbed “a whack-a-mole harm reduction approach”: it carries a high risk of censorship and bias, and it often fails to actually mitigate exposure to harmful content.
Content-based solutions are implemented at the end of the technology development cycle (i.e., after the products have been deployed to consumers). Because of this, they try to clean up harms after they’ve happened, working more like a band-aid than a remedy for the root cause.
2. Age-Gating Solutions
Age-gating solutions sidestep concerns about content decisions by simply delaying children’s entry onto social media platforms. At the same time, some age-gating solutions may introduce unintended risks of their own, such as threats to privacy, depending on the type of solution and how it is designed.
Nonetheless, these solutions are effective tools in addressing two of the major collective action problems we face: ending the incentive for companies to compete for young users and ending the need for young people to be on social media in order to avoid social isolation.2
Regarding the first: an effective age-gated barrier to social media platforms would end companies’ ability to compete for younger users. As long as young users can be on their sites, business incentives push companies to do everything they can to engage and hook them.
Age-gating social media platforms can be done in multiple ways. Appendices C and D go through the user- and age-authentication methods that companies are currently working to build, and the challenges (e.g., privacy concerns) that these approaches face.
Here is just one of many examples of age-gating solutions:
Policy solution (see 2.1.3): Raising the age of internet adulthood to 16 through COPPA 2.0 (2023), an evolution of the 1998 Children’s Online Privacy Protection Act (COPPA), which currently sets the age of internet adulthood at 13. This legislation, championed by Senators Edward Markey and Bill Cassidy, would restrict data collection for adolescents aged 13-15. (See Appendix C for platform-based age-gating solutions).
Age-gating solutions are implemented at the end of the technology development cycle (but ideally before the products have been deployed to consumers). While age-gating solutions do not address the root causes of platform harms, they can (if successfully implemented) be effective in reducing harm to young users by restricting access altogether.
3. Design-based solutions
Design-based solutions center on the design choices that companies make when developing their products, rather than on what content is being posted on the platforms. Because of this, they are also able to get around issues of censorship, bias, and the sometimes arbitrary decision-making around what is and is not “harmful content.” Common design changes that address social media’s addictive nature include: replacing endless scroll with a feed that requires users to take an action to see more, ending autoplay on videos, or hiding social validation features such as the number of likes or comments.
The most well-known design-focused solution to improve social media is the UK Age Appropriate Design Code (AADC). Section 2.1.1 examines the AADC in detail. For two years now, the UK AADC has incentivized companies to design their products with child welfare in mind. It has compelled platforms to switch off geolocation services, avoid using nudge techniques, provide a high level of privacy by default, and undergo regular third-party assessments of products developed for children (see here for other deceptive design practices used by social media companies).
The UK AADC has been a blueprint for a number of similar proposals introduced in the United States, most notably the California Age Appropriate Design Code, along with others that you can see in the doc.
One bill that has garnered much attention in the United States and also focuses on design changes is the Kids Online Safety Act (KOSA). KOSA would (1) impose a “duty of care” on social media platforms to prevent and mitigate specific harms to minors, (2) require platforms to provide minors with options to protect their information, (3) require them to disable addictive product features, and (4) enable users to opt out of algorithmic recommendations, among a variety of other requirements.
Design-based solutions are implemented at the beginning of the technology development cycle before a product makes it to market. Because of this, design-based solutions tackle the root causes, ensuring that products mitigate harm upstream before it occurs, as opposed to implementing fixes to reduce harm downstream, after it’s happened.
Overall, there are hundreds of proposals, many in different stages of development, and each with its own strengths and limitations. In our fast-paced and constantly changing world, it can be difficult to keep track of these solutions. We hope the Google doc will help you make sense of these options and decide which solutions would best move us toward a better social media future.
Creating Humane Technology
The forces that currently drive social media have trapped the companies in a collective action problem: they compete ruthlessly with one another to capture users’ attention, because any company that eases off faces a competitive disadvantage.
In order to pull companies out of this race, we need a paradigm shift that addresses the complexities of the system. There is no silver bullet for the change we need; instead, the best way forward is by coordinating together and pushing for a spectrum of solutions—from design changes to regulatory guardrails to shifts in governance structures.
In fact, many grassroots organizations and governing bodies have already been moving in this direction, especially regarding youth mental health. For example, on October 24th, 2023, 42 Attorneys General across the United States came together to push back against Big Tech, suing Meta for the ways it has harmed young users through addictive designs and privacy violations. Youth organizations have been springing up across the country, pushing for safer social media platforms and even choosing to log off from social media entirely. See Appendix A of the Google Doc for a list of organizations that work on social media reform across a wide range of issues.
There is now a growing community of technologists, entrepreneurs, researchers, and advocates who are working to ensure that technology products are designed with some concern for the public interest, and particularly for the welfare of children and adolescents. Together, we’re imagining a new era of social media. We ask questions such as:
What if social media platforms fostered stronger critical thinking and deeper human relationships instead of exacerbating isolation, addiction, and loneliness?
What if social media platforms fostered nuanced democratic debates and productive outcomes instead of outrage and polarization?
This world is possible if we choose to design technology for the public interest, and if we free the companies and ourselves from collective action traps.
Please check out the Social Media Reforms Collaborative Review.
If you are a researcher, legislator, or industry insider and have reforms or criticisms to add, follow directions in the doc to contact us and request comment access.
A major part of the problem for adolescent mental health, in addition to social media, is the decline of the “play-based childhood” and the loss of free play and childhood independence. We do not discuss legislative solutions to this problem in this review doc. See our other collaborative review doc, Free Play and Mental Health: A Collaborative Review, where we lay out the evidence of the problem.
For a longer discussion of collective action problems and how we can break free from them, please pre-order The Anxious Generation (due out March 26).
Note on the incentives not to improve working conditions: The influx of immigrants at the time, desperate for work, created a labor surplus. This surplus meant that factories didn’t have to compete for workers by improving conditions, because there were always more people willing to take whatever jobs were available. Thus, any factory that improved working conditions would have been at a disadvantage to its competitors. First, factories operating with lower costs (due to poorer conditions) could offer cheaper products or achieve higher profits, a significant incentive at the time. Second, factory owners tended to focus on short-term profit maximization, which overshadowed the long-term benefits of better working conditions. This short-term focus can be linked to the lack of regulations and the absence of a strong labor movement at the time.
Note that most age-gating solutions look for parental consent, meaning that companies will still be incentivized to attract young users. And, given the network effects and peer dynamics tied to social media, youth will likely push for consent to be online, and parents, willingly or not, are likely to grant it. Additionally, parental supervision of media usage varies by socioeconomic status, with lower-income youth using devices more. This pattern is likely to continue with age-gating, where wealth, race, location, and other factors determine which kids are granted “permission” to use these products. Finally, many experts and young users themselves note that kids will find ways around these types of solutions, especially in an era of generative AI.