The UK Is Doing the Hard Work of Protecting Children Online
Commentary and advice from Beeban Kidron, who helped start it all
Intro from Jon Haidt:
In December 2019, Tristan Harris (of the Center for Humane Technology) sent me an email introducing me to Beeban Kidron. He said she is “leading the charge on UK tech policy for kids’ use and ‘Duty of Care.’” Thus began my long collaboration with Beeban, who has been making films and advocating for children’s rights for decades. She was recognized for these contributions to British society with an appointment to the House of Lords in 2012. It was her 2013 documentary film InRealLife that opened her eyes to what was happening to teens growing up online. In response, she founded the 5Rights Foundation, a charity named after a series of fundamental rights that children should be entitled to in the digital world.
As Beeban shuttled between London, Washington, and Silicon Valley, she advocated for the novel idea that children on the internet are not actually adults, even when they claim to be 13 (which is the internet’s current “age of adulthood”). Beeban argued that the companies that had come to own much of childhood, and that were turning children’s attention into profits, should bear responsibility for the harms children were encountering on their services.
Beeban was the architect of the UK’s landmark and world-leading Age Appropriate Design Code, which took effect in 2021. The AADC became the model for laws in many U.S. states and influenced the Kids Online Safety Act in the U.S. Congress. In other words, just as the British Parliament is sometimes called “the mother of Parliaments,” Beeban can be called “the mother of parliamentary efforts to protect children online.”
Now the UK has gone further, with its Online Safety Act of 2023, which began to be implemented in July of this year. The UK is doing the necessary and difficult work of figuring out how to translate the general principles of children’s rights and protections into specific and enforceable rules that the companies must respect. How is it going?
Now that so many nations are joining in the global effort1 to reduce the exploitation of children online, I thought it would be helpful for legislators and reformers around the world to hear from Beeban about what exactly they are doing in the UK, what controversies and objections have arisen, and what advice she has for the rest of the world (e.g., focus more on design than on content).
Thank you Great Britain! Thank you Beeban!
– Jon
The UK Is Doing the Hard Work of Protecting Children Online
By Beeban Kidron
By the age of five, 20 percent of UK children are using social media apps without parental supervision. By 13, half of children in the UK have seen pornography online. Among 13- to 15-year-olds, one in eight report receiving unwanted sexual advances on Instagram in the past week. Nearly two-thirds of British 16- to 18-year-olds have tried to cut back on their smartphone use. A clear majority of young people in the UK aged 16 to 24 (62%) say social media “does more harm than good,” and more than half of Gen Z respondents (55%) believe life would be better if social media were banned for under-16s. Only 22% think it would be worse.
That is the online world that children inhabit daily. I have spent a decade and a half as a crossbench (non-partisan) peer in the UK’s House of Lords bringing forward legislation and regulations that require tech companies to make the internet safer and more age-appropriate for children. My goal has been to secure for under-18s the kinds of protections and privileges that exist in all other settings. I contributed to the development and passage of the Online Safety Bill, as a member of the Joint Committee on the Draft Online Safety Bill, and worked with colleagues to amend and strengthen the provisions for children.
After eight years of development and debate, the groundbreaking Online Safety Act (OSA) finally became UK law in 2023. The OSA addresses concerns, echoed worldwide, about the impact of an unregulated internet on children, adults, and the fabric of civil society. The objective is in the title: its purpose is to make the online world safe for all those living in the UK. By placing legal duties on search services and user-to-user services (largely social media companies), the Act protects users from illegal content and children from harmful content and activity, as defined by the Act.
The passage of the OSA was dignified by a completely non-partisan approach. A brief look at Hansard (the official record of Parliament) shows that it stands out as a victory because it prioritized children’s safety over political animosity. Nonetheless, the Act did not pass without controversy. Some argued that the age assurance measures violate adult free speech rights, while others contended that the measures fail to adequately protect children.
Disappointingly, after the Act’s passage some politicians have sought to turn these differing opinions into major political fault lines. In reality, there is overwhelming bipartisan support for ensuring children’s online safety. While some concerns about the law are valid and worthy of discussion — and additions may be necessary to plug obvious gaps in the novel legislation — the OSA is a first step in pursuing this critical goal.
The Protection Afforded by the Children’s Codes
Since the OSA reached the statute book, Ofcom, the UK regulator responsible for enforcing the Act, has published several “codes of practice,” which define the safety standards digital services must meet to comply with the OSA. The Illegal Content Codes and the Protection of Children Codes both establish measures that specifically protect children. After the Children’s Codes came into force this past July, the positive effects were immediate: not least, there are strong indications that platforms have vastly reduced how often children are recommended pornography and self-harm content.
The Illegal Content Codes of Practice are relatively straightforward: they outline the steps services must take to remove illegal content, including but not limited to terrorism content, child sexual abuse material, hate speech, specified categories of violence, and some content related to banned substances. In other words, platforms and services are not allowed to host content that is illegal under UK law, nor can they recommend such content to any user, regardless of age. The Illegal Content Codes also mandate measures that make it harder for adults to contact children online, including requirements that platforms exclude children from adults’ suggested connections lists and prevent users from direct-messaging children they are not connected to (i.e., do not follow, subscribe to, etc.).
The Children’s Codes, which came into force on July 25, 2025, apply to online products and services that are likely to be accessed by children. Under the codes, digital companies providing services likely to be accessed by children must undertake risk assessments and identify and mitigate the ways in which children may experience harm when using their service. This includes implementing robust filtering of harmful content, providing reporting and support mechanisms, and embedding protective measures into service/product infrastructure.
The Act defines content that is harmful to children in three categories:
“Primary priority content” refers to pornography and content that encourages, promotes, or provides instruction for suicide, self-harm, and eating disorders.
“Priority content” refers to content such as abuse and hate, bullying, violent content, harmful substances, and dangerous stunts/challenges.
“Non-designated content” refers to other content that may be harmful to children, including content that stigmatizes body types and content relating to depression.
The Children’s Codes stipulate that companies must be able to prove that they have “systems and processes” in place that would, in the normal run of things, prevent a child from accessing primary priority content, and exclude, down-rank, or hide priority content and non-designated content. In their risk assessments, companies must also include a consideration of how likely their service design is to expose children to harmful content.
Those that allow content or activity that is harmful to children are required to implement “highly effective age assurance,” verifying the age of the user. Age assurance enables adults to exercise their right to view content that the government has a duty to protect children from. Anticipating valid concerns about user privacy, the code stipulates that age assurance methods must comply with the standards set out by the Information Commissioner’s Office (ICO), including “data protection by design and by default” which minimizes data collection. If Ofcom is concerned that providers have not met these privacy standards, it may refer them to the ICO for investigation.
Sensibly, the measures set forth by the Children’s Codes only apply if the platform or service meets both the 1) “likely to be accessed by children” and 2) “allows content and activity that is harmful to children” conditions. If a service is not likely to be accessed by children or does not carry harmful content as defined by the Act, these stipulations do not apply, nor is an age check required.
Two Critiques: Freedom of Speech and Right to Privacy
The fractious public debate has revolved around a couple of themes: the alleged infringements to freedom of speech and concerns about the privacy and efficacy of age assurance. While both are legitimate concerns, opponents have unhelpfully shared plenty of misleading information as a way to undermine the OSA. It’s worth establishing the facts.
1. Freedom of Speech Concerns
Those who claim that the OSA represents a threat to freedom of speech argue that the Act enables tech companies and/or the government to censor legitimate acts of expression. They list several high-profile examples of social media services censoring content that is entirely legitimate and legal in a democratic society, including newspaper articles about Joe Biden’s police funding plan and a statement by a UK politician promoting the founding of a new party.
To be clear, the OSA does not give the government the direct power to remove content, nor does it offer the tech companies any general power to curtail speech; indeed, the Act carefully defines harmful content and its categories. While it is reasonable to be concerned that, in practice, tech companies may choose to comply with the OSA by using a “bypass strategy” in which a company removes more content than required to mitigate the risk of legal sanctions, this is not the intention and is not required by the Act. It is a deflection to blame the OSA for such an overzealous approach.
There have been some reports that children have been prevented from seeing material such as content about Gaza, Ukraine, and a parliamentary speech by a Conservative MP about grooming gangs. The OSA and Ofcom are clear that access to journalistic content, particularly for older children, should be protected. Some tech companies may be removing content too hastily, and many of these cases are likely teething problems as companies refine their approach to compliance. As time goes on, companies will get better at distinguishing between content that is harmful as defined by the Act and sensitive content that may aid children’s development.
Freedom of speech concerns are critical, but they don’t arise as a consequence of the Children’s Codes themselves; they arise as a consequence of companies failing to meet both the spirit and letter of the law. This is why effective, privacy-protecting age assurance is key: if age assurance is implemented with privacy at the fore, providers can comply with the Children’s Code while showing adults the same type of content they saw before.
2. Age Assurance & Privacy Concerns
But Ofcom’s “highly effective age assurance” requirement has seen its share of criticism as well.
A lot has been made of a reported increase in VPN use, which allows users to circumvent age assurance requirements by appearing to be based in another country. While it is reasonable to think that some children are using VPNs to get around age assurance, that doesn’t render the entire effort pointless. To start, it’s likely that many VPN users are adults with concerns about their own privacy — for example, the three in ten men in Britain and more than half in the U.S. who regularly look at pornography.
Meanwhile, those children who are using VPNs understand they are transgressing — an important change from allowing them unfettered access to adult content. The restriction sets a new cultural norm that says this content, which damages their emotional and social development, is not ok. It is better that a small number of children consciously take themselves where they should not be, than that all children are inundated with adult material whether they want it or not.
It’s not surprising that some adults have chosen to use VPNs rather than input information into an age assurance process: many are concerned about their privacy. As the architect of the UK’s Age Appropriate Design Code — which established a higher bar of privacy for under 18s and has been widely copied around the globe — I believe that privacy is a fundamental right, and I am entirely behind the need for privacy, for adults and children. I previously warned that maintaining privacy would be essential to ensuring confidence in the Act. Yet the regulators have not put that privacy front and center of their efforts, and it has left a weakness for opponents of the OSA to exploit.
Some commentators have wrongly argued that age assurance requires users to show ID such as a passport or driver’s licence, or give away other sensitive information. This is not what the OSA asks for. There are many privacy-preserving approaches to age verification that do not involve legal IDs or sensitive information, which I outline in “But how do they know it is a child?”, a report I co-authored for the 5Rights Foundation. Examples include third-party age assurance, which can establish a person’s age in a secure setting; or cross-account authentication in which a user’s age — and only their age — established by one provider is confirmed to other providers. Many services also already have a very clear understanding of age from existing behavioral signals.
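To make the “age and only age” idea concrete, here is a minimal illustrative sketch, not drawn from the 5Rights report or from any particular provider’s system: a hypothetical age-assurance provider issues a signed attestation containing a single over-18 claim, and the relying service checks the signature without ever learning the user’s identity or what evidence was used. The names, fields, and the toy shared-secret signature are assumptions for illustration; real schemes use accredited providers and standard signed-token formats.

```python
# Illustrative sketch only: an "age-only" attestation, loosely modelled on the
# cross-account authentication idea described above. All names and fields are
# hypothetical; a real deployment would use an accredited age-assurance provider
# and standard public-key signed tokens rather than this toy HMAC example.
import hmac, hashlib, json, time

VERIFIER_SECRET = b"shared-secret-between-verifier-and-service"  # stand-in for a real key pair

def issue_age_attestation(is_over_18: bool) -> dict:
    """The age-assurance provider asserts one fact only: whether the user is over 18.
    No name, date of birth, or ID document is included in the attestation."""
    claim = {"over_18": is_over_18, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def relying_service_accepts(attestation: dict) -> bool:
    """The social media service checks the signature and the single age claim;
    it never sees who performed the check or what evidence was used."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"]) and attestation["claim"]["over_18"]

token = issue_age_attestation(is_over_18=True)
print(relying_service_accepts(token))  # True: access granted with no identity data changing hands
```

The design point of the sketch is simply data minimization: the only information that crosses the boundary between the verifier and the service is the single claim the law actually requires.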
Our report found that some service providers collect and share more information than is necessary, such as gender and geographical location. Where companies exploit the Children’s Codes to grab data whilst checking age, they are acting in bad faith and fuelling distrust. If and where that is happening, they can and should be subject to regulatory action by Ofcom and the ICO.
Ofcom’s Focus on Content Over Safer Design
A third critique is one I wholeheartedly endorse: Ofcom’s Children’s Codes focus too much on content moderation and too little on safety by design. Keeping children safe is not simply about moderating the content that they encounter, but about shaping the online environment they occupy.
As plenty have shown, including Anna Lembke on this Substack, many of the online harms children suffer are the consequence of digital services designed to extract as much of their attention for as long as possible. Addictive design features can be isolating and unfulfilling, and time spent online glued to a smartphone has an opportunity cost. A third of vulnerable children — those with special educational needs or a health condition which requires professional help — say they don’t feel like they can control how much time they spend online. More than a fifth of all children turn down opportunities to see friends to stay online. A similar number say they stay online instead of playing sports. The proportion of parents who worry about these trends is even higher.
During the development of the Online Safety Bill, I asked the (then) government to concentrate on the design not the content, saying,
“At the heart of our debates should not be content but the power of algorithms that shape our experiences online. Those algorithms could be designed for any number of purposes, including offering a less toxic digital environment, but they are instead fixed on ranking, nudging, promoting and amplifying anything to keep our attention, whatever the societal cost.”
On this issue, there has been a tussle with the regulator. Ofcom has stated, “We do not think the Act gives us the power to tackle features and functionalities as harms in their own right, including those leading to addictive behaviour, without reference to how they affect children’s exposure to harmful content.” Many parliamentarians disagree with this interpretation; they believe functionalities which are designed to extend how long a child uses a service are part and parcel of the Act.
Ultimately, Ofcom must robustly enforce the Act, enact the will of parliament, and focus on the outcomes for children, not their own internal processes. Addictive design — which is ultimately at the heart of children’s harmful online experiences — must be central to the enforcement of the Act if it is to be effective in the long term.
Does Regulation Work?
It has been more than a decade since I stood in front of a UK minister of state and told them about the GPs who had visited me in parliament, worried about the eye-catching number of young girls arriving in their surgeries with anal tears — a byproduct of young men who learned (or didn’t) about sex from hardcore pornography. It has been two years since a barrister called me, horrified, to find herself defending more than a dozen young freshmen accused of strangling during sex in just one week — “it is epidemic”; “the boys are victims too.” And as I write, yet another bereaved parent has just left my office, failing to understand how digital services are allowed to hide what drove her son to his death behind a cloak of “tech exceptionalism.”
The arrival of the OSA means companies are beginning to adapt their services to protect children from content that can be detrimentally life-changing in its ability to overwhelm and harm. Many said this was impossible. Ten years ago, it was widely felt that only passing a US law could make the difference for children. Yet things are changing here in the UK, and shortly in the EU, where the Digital Services Act, which includes specific measures to keep children safe, has just been published.
In recent weeks, some politicians — including a startling number from outside the UK — have sought to undermine the OSA. But they are wrong to call for its repeal rather than its improvement. I speak to parents, teachers, experts, and (most importantly) children weekly; they want an OSA that works. The polling shows the Act’s ambitions enjoy cross-party support. Now is the time to build on the OSA by plugging the gaps and enforcing its provisions. Parents cannot be expected to bear the entire burden of keeping their children safe online. The companies that profit from them should bear much of the responsibility. We need governments and regulators across the globe to demand a digital world that anticipates the presence of children, and that meets their needs.
For the most thorough review of the many legislative and regulatory efforts taking place around the world, see this report from my NYU colleagues at the NYU Stern Center for Business and Human Rights.



