How Indonesia is Protecting 80 Million Children from Online Harm
Their first-of-its-kind regulation cracks down on harmful design.
Introduction from Jon Haidt and Ravi Iyer:
After two watershed verdicts in the social media trials in New Mexico and LA this week, we’ve entered a new era in the fight to protect children from online harms. Momentum is growing internationally, and we’re excited to see Indonesia’s groundbreaking new regulation take effect this weekend. The country has mandated a minimum age of 16 for account creation on any online platform that uses features that expose children to documented categories of risk — meaning the regulation applies not only to social media platforms, but also to AI chatbots, gaming apps, and beyond. By addressing harmful features like autoplay, engagement-based algorithms, and ephemeral content, Indonesia’s approach protects kids while preserving their ability to access information, and holds the platforms accountable while incentivizing safer tech design. The regulation will solve the collective action trap for Indonesian families and serve as a model that other nations can build upon.
In this post, Anindito Aditomo, senior researcher for Indonesia’s Center for Education and Policy Studies — a key advisor in the design of the regulation — and his co-authors offer an inside look at how the regulation was built and what makes it unlike anything attempted before.
Bravo, Indonesia!
–Jon and Ravi
Indonesia is home to more than 80 million children, and roughly 8 in 10 of them are already online. That’s 64 million young people (a population larger than Italy) navigating an online world with very few guardrails. According to survey data from UNICEF, 48% of Indonesian children ages 8–18 have experienced cyberbullying, more than 50% have been exposed to sexually explicit content online, and 2% have been threatened with or experienced sexual violence. So, in early 2025, Indonesia joined a pioneering group of countries regulating digital platforms that pose risks of harm to children. Championed by the Minister of Communication and Digital Affairs, Meutya Hafidz, Indonesia’s new regulation will restrict access to high-risk online platforms for users under 16.
Though much of the initial media coverage has called this a “social media ban,” Indonesia’s approach applies to all digital platforms that children access, including AI chatbots, social media, and online games. Instead of imposing a blanket ban on platform categories, its design-based risk assessment treats each platform differently based on the level of risk its features pose to children. This design-focused regulation is the first of its kind internationally, differing from Australia’s law, which designates a minimum age for account creation on specific social media platforms. When it goes into effect on March 28, it will both shield kids from harm and create real incentives for the tech industry to build its products more responsibly.
What the Regulation Requires
The digital design features that lead to harm are well established. These include engagement-based algorithms, which can lead to compulsive use and exposure to unwanted or harmful content; autoplay, which removes opportunities to reconsider how long one has been using a platform; and the quantification of engagement (e.g., displaying how many likes a post or photo has), which can fuel negative social comparison.
Indonesia’s new regulation requires digital platforms used by under-16s to conduct a self-assessment that evaluates whether the platform’s features expose kids to various categories of risk, including interaction with strangers; exposure to harmful or inappropriate content; misuse or exploitation of personal data; exploitative consumer practices; risks of addiction; and other mental and physical harm.
Digital platforms must submit the results of their self-assessment to the Ministry of Communication and Digital Affairs — the body tasked with implementing the law — along with supporting evidence. The Ministry will then evaluate the submission and determine the platform’s risk level. Platforms certified as sufficiently low-risk can continue to provide services to children, while platforms deemed high-risk must implement risk mitigation measures to continue serving under-16s. Required mitigation measures may include disabling features linked to addiction risk (e.g., infinite scrolling, “like” counts, and content recommendations based on user data); protecting minors’ accounts from discovery by strangers, including via search engines; and preventing underage users from being exposed to violent and sexually explicit materials.
If the platform’s mitigation measures are insufficient, the Ministry will require the company to implement age verification, revoke accounts for current users under the age of 16, and prevent under-16s from creating new accounts. A specialized Ministry team will audit the high-risk platforms to ensure compliance.
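The enforcement sequence described above can be modeled as a simple decision procedure. This is purely an illustrative sketch: the `Platform` and `RiskLevel` types, the field names, and the action strings are assumptions made for clarity, not anything taken from the regulation’s text.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    HIGH = "high"

@dataclass
class Platform:
    name: str
    ministry_risk_level: RiskLevel  # assigned by the Ministry after review
    mitigations_sufficient: bool    # outcome of the Ministry's audit

def required_actions(p: Platform) -> list[str]:
    """Model of the compliance flow: low-risk platforms continue serving
    children as-is; high-risk platforms must mitigate, and if mitigation
    is insufficient, they must verify ages and remove under-16 accounts."""
    if p.ministry_risk_level is RiskLevel.LOW:
        return ["continue serving under-16s"]
    actions = ["implement risk mitigation (disable addictive features, etc.)"]
    if not p.mitigations_sufficient:
        actions += [
            "implement age verification",
            "revoke existing under-16 accounts",
            "block new under-16 account creation",
        ]
    return actions
```

The key design point this captures is that age verification is a fallback, triggered only when mitigation fails, rather than a blanket requirement.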
Because this regulation is fundamentally an age minimum for account creation, it protects children from the most harmful design features without infringing on their right to information, as Jonathan Haidt and Ravi Iyer explain:
[I]f they do not have an account and have not signed a contract with the company, then they cannot compare the popularity of pictures of themselves, receive tailored late night notifications, be served more and more extreme content, or be contacted by strangers via messaging. Without this inappropriate business relationship and access to the extensive data they currently collect from kids, companies will find it much harder to train algorithms and use design features to manipulate and exploit kids.
The law also imposes other accountability metrics that platforms need to fulfill, including the allocation of resources for public education (e.g., parental guidance workshops at schools), and regular reporting and analysis that supports continuous improvements.
The Advantages of a Design-Based Risk Approach
Indonesia’s unique, design-based risk approach to protecting kids online recognizes that platforms within the same broad category can pose very different levels of risk. Not all online games or chat applications are equally risky, just as not all apps marketed as “educational” are necessarily safe for children. A risk-based approach allows children to continue using platforms where the benefits plausibly outweigh the risks, while restricting access to platforms where the risks are unacceptably high.
The focus on design over platform type also incentivizes the innovation of safer tech. Companies will have a clear opportunity: create safer technology for kids and attract the users displaced from high-risk platforms. New platforms can build safer spaces that meet the regulation’s design standards without fear of being out-competed by companies that are willing to compromise children’s safety for growth. Families will also benefit from the transparent risk framework the regulation establishes, which will allow them to compare platforms more easily and choose the right ones for their kids.
In addition, homing in on risks rather than specific technologies helps future-proof the regulation by anticipating categories of technology that do not yet exist. New platforms, formats, or business models can be assessed within the same framework without the need to constantly rewrite the law. Our evolving understanding of harmful design can also be incorporated into the regulation’s risk assessments, just as improved knowledge of fire or earthquake safety continues to inform new building codes.
Much like the platform-based age-minimum approach used by Australia, Indonesia’s regulation places responsibility on the platform providers, reflecting a shift in the nation’s regulatory thinking from “policing bad outcomes” to “preventing predictable risks.” The burden is on the platforms to understand and mitigate the risks their products create; the tech companies themselves will be held accountable for violations (not the parents or children). The framework also creates a shared language that is conducive to better dialogue between regulators and platforms.
The design-based risk framework will also support productive public discussion about online safety in Indonesia. The risk-assessment criteria will broaden and enrich public understanding of digital harms, which is often narrowly focused on ill-intentioned users who post harmful content or who attempt to contact children, rather than the platform designs that enable and incentivize such behavior. This approach also draws attention to less visible but equally serious risks, such as data exploitation, exploitative monetization practices, addiction, and longer-term mental and physical health effects.
Determining Risk Levels
At the heart of implementation is a deceptively simple question: What makes a platform risky for children? A rigorous, evidence-based assessment that answers this question is essential to the regulation’s success. The Indonesian government has taken this task seriously, using empirical data and scientific expertise to inform the assessment design.
To gather critical information about the types and degree of online harms that Indonesian children experience — and therefore what the assessment should evaluate — the Ministry examined user data from the major platforms[1] as well as simulations of children’s digital platform usage. From this data and expert advice, they identified seven categories of risk for the assessment to evaluate: content, contact, consumer (i.e., extracting payment from underage users via targeted ads or gambling-like features), data privacy, addiction, mental health, and physical health. They then leveraged existing research, including the companies’ own randomized controlled trials and experiments,[2] as well as academic studies on features like autoplay,[3] to identify key risk indicators and determine how best to measure each. Across categories, they devised a scoring method and set the threshold at which a platform qualifies as high-risk.
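As a rough illustration of how a category-based scoring instrument like this might work: the seven category names below come from the post, but the 0–4 scale, the equal weighting, and the threshold value are entirely hypothetical, since the actual instrument’s details are not described here.

```python
# Hypothetical risk-scoring sketch. Category names follow the post's
# seven categories; the 0-4 per-category scale and the cutoff are assumed.
CATEGORIES = [
    "content", "contact", "consumer", "data_privacy",
    "addiction", "mental_health", "physical_health",
]

HIGH_RISK_THRESHOLD = 14  # assumed cutoff on a 0-28 total

def total_risk_score(scores: dict[str, int]) -> int:
    """Sum per-category scores (0 = no risk, 4 = severe risk)."""
    missing = set(CATEGORIES) - scores.keys()
    if missing:
        raise ValueError(f"missing categories: {sorted(missing)}")
    return sum(scores[c] for c in CATEGORIES)

def is_high_risk(scores: dict[str, int]) -> bool:
    """A platform is high-risk when its total meets the threshold."""
    return total_risk_score(scores) >= HIGH_RISK_THRESHOLD
```

A real instrument would likely weight categories differently and use richer per-indicator evidence, but the basic shape — score each risk category, aggregate, compare to a cutoff — is what makes the framework applicable to any platform type.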
After building the risk assessment instrument, the Ministry convened a panel of experts for a Delphi study that examined the proposed risk indicators and evaluated the validity of the assessment. To ensure consistency and reliability, they also asked dozens of raters to independently apply the assessment on a set of platforms and compared the resulting scores. These third-party raters consistently identified platforms known to cause harm, suggesting that the assessment is both credible and rigorous.
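The inter-rater consistency check the authors describe could be quantified with something as simple as pairwise percent agreement across raters. This sketch is hypothetical; the post does not say which reliability statistic the Ministry actually used.

```python
from itertools import combinations

def pairwise_agreement(ratings: list[list[str]]) -> float:
    """Fraction of rater pairs assigning the same label, pooled over
    platforms. ratings[i][j] is rater j's label for platform i."""
    agree = total = 0
    for platform_labels in ratings:
        for a, b in combinations(platform_labels, 2):
            total += 1
            agree += (a == b)
    return agree / total if total else 0.0
```

Chance-corrected statistics such as Fleiss’ kappa or Krippendorff’s alpha would be more rigorous choices for a study with dozens of raters, but raw agreement conveys the idea.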
Potential Implementation Challenges
1. The Self-Assessment
The decision to have companies complete a self-assessment as an initial step makes the regulation scalable across a large number of platforms, including future services that don’t yet exist. Testing by third-party raters indicates that the assessment’s specificity regarding design requirements should produce objective, consistent results, even when applied by the companies themselves. This contrasts with broader, general risk-assessment frameworks (such as those in the EU), which have not meaningfully addressed platform design because they allow companies to focus on content risks instead of design choices.
Still, the self-assessment creates the very real possibility that providers will take a liberal interpretation of the risk indicators to claim that their platforms pose little risk to users.
Independent review of the companies’ self-assessments is therefore critical. The Ministry will need to equip itself with the resources and technical capabilities necessary to identify and refute unfounded claims. Fortunately, this challenge is not unique to Indonesia, and the Ministry has already taken steps to coordinate such capabilities with like-minded regulators.
2. Age Verification and Enforcement
Another key technical challenge revolves around age verification. The regulation will require high-risk platforms to both prevent under-16s from creating accounts and delete existing under-16 accounts. As part of enforcement, the Ministry will need to monitor whether high-risk platforms have implemented accurate age verification and barred underage users from creating accounts. This will require the Ministry to rapidly build technical expertise and gather additional resources.
Some tech industry stakeholders have pushed back against age verification, citing technical limitations; others have raised privacy concerns. These concerns are valid, but given worldwide momentum toward protecting children, providers are already improving age verification technology and addressing the need for user privacy. The latest version of iOS, for example, allows users in select jurisdictions to validate their age without sharing any identity information with third-party applications. As more countries demand it, such innovations will only improve.
Another common refrain is that teens will find ways to circumvent age verification measures (e.g., through VPNs). While this may be true for some, even partial success will protect a great number of children. Compare this thinking to other regulations that protect kids: The fact that some drivers still speed does not eliminate the utility of speed limits, and we don’t encounter underage drinking and decide to get rid of the minimum drinking age.
3. Balancing Safety with Children’s Rights
Some critics of the regulation have also raised concerns about its effect on children’s rights to information and freedom of expression. While these concerns have some validity, we believe they are overstated. Indonesia’s age minimum applies specifically to account creation on high-risk platforms; it doesn’t prevent children from accessing the vast majority of information available online. YouTube content, for example, is fully accessible without an account (and other platforms could follow suit if they choose). Still, this is an area the Ministry will continue to monitor and address.
Empowering Collective Action
If implemented well, Indonesia’s age minimum for high-risk platforms has the potential to solve a persistent collective-action problem. As parents and educators, we are locked in a losing battle against online addiction, forced to act as individual “digital police” for our kids. Because the large majority of adolescents are currently on these platforms, keeping your child off of them can feel like a sentence of social isolation. This regulation fundamentally changes that calculus. By addressing addiction at the architectural level of the platform, Indonesia’s approach empowers parents to stop being enforcers and start being mentors.
For educators, this is equally transformative. Schools have long struggled to manage the behavioral and cognitive effects on kids who spend too much time online (the majority), from sleep deprivation to attention fragmentation. Because this regulation focuses on limiting the most predatory, “sticky” features of digital platforms, children will be more able to disengage. With the collective-action problem solved and kids freed from mechanisms designed to addict them, schools and communities will have the chance to reclaim the “unmediated” spaces — playgrounds, sports fields, and face-to-face social circles — where crucial social-emotional skills are forged.
With this groundbreaking regulation, Indonesia is stepping up to protect its 80 million children and is showing the world that this is no longer a private uphill battle for parents to wage against the platforms; it’s a public concern that demands a bold government response. Ultimately, the true measure of Indonesia’s risk-based approach will be found not only in the absence of digital harm, but also in the presence of a flourishing analog life.
1. They based their survey design on similar measurements from other regulators and the companies themselves.
2. Lawsuits from across jurisdictions are continually sourcing new evidence that can be used to further inform risk indicators as they continue to hone the assessment.
3. The studies on autoplay show that it often leads to regretted usage, which helps explain why many teenagers themselves feel that they use these products too much and feel manipulated by them.