Retired teacher here (taught 2005-2018). I saw the whole phone/social media thing unfold right in front of my eyes, and was keenly aware of the damage it was doing. It was real. I was competing with Silicon Valley Mensans for the students' most important asset -- their attention -- and I was losing.
This effort might have started sooner had we not gone through the COVID debacle. But now that it has started, I hope the momentum continues. One caveat: many districts have distributed laptops (Chromebooks) to high school students. The kids are tech savvy and find their way around firewalls. Just a thought ...
Interesting that Minnesota will protect kids from social media companies because they are vulnerable but the governor also thinks that kids should be able to get a sex change operation.
So encouraging. Thank you for the strong, focused, and consistent efforts to protect people, especially youth, from the predatory practices of social media platforms. This quote says it all for me:
“Kids and teens are being harmed by a race to the bottom to grab their attention spans, with toxic content and harassment as the collateral damage of profit.”
Hopefully a step in the right direction. It’s difficult to see why anyone would disagree with this on moral grounds.
Thanks John.
And just to be clear for the folks commenting on this - the law says nothing about what content can be posted or consumed. Rather, it addresses design issues that are known to subvert what users want, especially kids, in favor of what businesses make money from. For example, 70+% of kids feel that they are being manipulated to spend more time on these products per https://www.commonsensemedia.org/press-releases/common-sense-research-reveals-everything-you-need-to-know-about-teens-use-of-social-media-in-2018. Many kids (just like adults) would prefer to spend less time using these products, even as their social media feeds are optimized for time spent.
Hi Ravi,
Thanks for taking the time to respond to comments. I was wondering how effective you expect regulations like these to be, or whether they will simply end up being perfunctory. You say that the current design subverts user preferences, but with kids, do they even understand what they prefer? Yes, many kids are self-aware about the self-destructive nature of their social media usage, but few are willing to do anything about it. If the argument is to have algorithms focus on higher-quality content, wouldn't such a content loop still be engaging and thus keep users engaged for extended periods?
The other concern I have is that, especially with minor-adult interactions on social media, many of those interactions are not algorithmically driven but rather byproducts of the platform design. Take, for instance, Omegle, which used random pairing rather than any specifically tailored algorithm, yet the platform facilitated sexual abuse and the propagation of harmful materials. Omegle went under as a result, but many Omegle alternatives (e.g., monkey.app) have sprouted up in its place. I don't see how legislating transparency around algorithmic design fixes these issues.
I think kids are more self-aware than we might think here. see https://www.commonsensemedia.org/kids-action/articles/what-teens-want-adults-to-know-about-their-relationships-with-smartphones - It's hard to know what the ultimate effect would be since we don't live in that world, but we know that we could design a "better" world that listened to their explicit preferences more in order to find out.
Agreed on legislating transparency not fixing things like Omegle. We didn't get everything we wanted passed in this effort and one thing we wanted was device level protections that would protect against every app. see https://www.afterbabel.com/p/how-apple-google-and-microsoft-can
I'm not claiming that kids aren't self-aware about their predicament; many addicts are conscious of their addiction and its consequences. While in principle I am not against arguments for youth agency, the specific circumstances of Gen Z and Gen Alpha have rendered them uniquely inept; their academic performance is beyond disastrous.
https://www.nationsreportcard.gov/ltt/?age=13
It becomes even more appalling when you look at the individual questions. The majority of high school kids are incapable of computing simple mathematical operations (e.g., dividing 158 by 4), and many cannot summarize a few basic paragraphs. It's a hard sell to expect them to properly articulate their explicit preferences when they are more cognitively impaired than the prior generation.
The idea of changing algorithms to champion quality content over engaging content seems to overlook the possibility that quality content can also be highly engaging. Of course, I think most people would rather their feeds be filled with higher-quality content (the debate then moving to what counts as 'quality'), but I don't see how this would reduce user engagement. Not to be crass, but it seems analogous to substituting crack for cocaine, which fundamentally doesn't address the motivations behind teens' compulsive use of social media. I don't see how this change will amount to anything more than security theatre.
Onwards: from reading your description of Device-based Age Verification, I feel that mandating social media companies to establish more age-sensitive approaches to comply with state legislation is untenable. Your proposal seems to have parental device authorization be interpreted as device authentication by social media companies, which is unfavourable for a couple of reasons. In legislating that social media companies create 'bubbles' on their platforms for kids to exist in, defined by better moderation and more age-appropriate material, you increase their legal exposure, since they would be liable for unauthorized content entering these 'bubbles'. Moreover, using authorization as authentication for access to these spaces makes it quite easy for adults to infiltrate them.
I was wondering if you have considered pivoting toward device authorization as a matter of giving device owners more power over permissions, in a way that wouldn't require compliance from social media platforms. Just as user permissions on computers are regulated by administrators, have you considered a similar approach in which device owners (parents) regulate users (their kids) on phones, given the existing precedent? Such a system wouldn't change what content is accessible to social media users, but it would give parents control over which apps are allowed on the phone, letting them do things such as delay the introduction of platforms like Instagram until 16 (as Haidt suggests); a rough sketch of what that could look like follows below. Other features like the ability to block websites or to use SafeSearch would use existing systems to regulate users rather than require social media platforms to create and maintain new ones. Obviously this is not a one-to-one alternative, but I think it is a more feasible approach than trying to strong-arm the social media industry. That's not to say that legislative reforms for social media shouldn't be pursued, rather that splitting up the issues increases the chance of at least a partial adoption.
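To make the idea concrete, here is a minimal sketch in Python of what an owner-administered policy might look like. Everything in it is hypothetical - `DevicePolicy`, `can_install`, the app id - none of it is a real OS API; the point is only that the age gate lives on the device, not on the platform.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an owner-administered device policy.
# None of these names correspond to any real OS API.

@dataclass
class DevicePolicy:
    user_birth_year: int                                  # set by the device owner (parent)
    min_install_age: dict = field(default_factory=dict)   # app id -> minimum age
    blocked_domains: set = field(default_factory=set)     # enforced by the OS, not the platform
    force_safesearch: bool = True

    def user_age(self, current_year: int) -> int:
        return current_year - self.user_birth_year

    def can_install(self, app_id: str, current_year: int) -> bool:
        """Owner-set age gate, enforced at install time by the device."""
        return self.user_age(current_year) >= self.min_install_age.get(app_id, 0)

# Example: delay Instagram until 16, as Haidt suggests.
policy = DevicePolicy(
    user_birth_year=2011,
    min_install_age={"com.instagram.android": 16},
    blocked_domains={"omegle.com"},
)
print(policy.can_install("com.instagram.android", 2025))  # False - the user is 14
```

The design point is that nothing here requires the platform's cooperation: the parent sets the rule once and the operating system enforces it for every app.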
Hey, Ravi, that's a good, important distinction. You're trying to rein in manipulative conduct without treading into content, which always gets complicated with First Amendment considerations. If we had a federal design framework - maybe like the one proposed by Baroness Kidron at 5Rights - we could audit against something and force accountability. But with Congress in a mess, it's good to see states trying to do their part.
It would be constitutional grounds rather than moral ones.
There are several moral arguments against this type of law.
The first is that it's a form of harassment of the employees of social media companies. Implying that they're bad people on the back of baseless rumors, requiring them to jump through hoops and risk non-compliance fines -- it's all intended to bully them. The output of this law (a bunch of documents) isn't going to actually be used for anything concrete. "Officials" (who?) will "compare how well platforms are doing at protecting young people" (how?), so after the first round of compliance harassment come yet more rounds of harassment in which vague and badly sourced insults are spread by left-wing activists working for the government.
It may be easier to see the moral argument here by applying the golden rule: would these state officials accept social media companies bullying them into writing reports on all the ways government fails to protect children? Would they accept social media executives making public rankings of how well states protect people from themselves? No, they would find it unacceptable to be treated that way.
The second argument is that this is abusive overreach. It appears to apply to companies that aren't based in Minnesota, potentially even foreign companies (perhaps the wording is careful enough to exclude this and it's just the blog post that gives this impression). Extra-territorial law is inherently illegitimate, as it attempts to rule those who have no say in the process.
The third is that it's supreme hubris on the part of Iyer and the others who passed this legislation. They believe their own tastes and decisions to be morally and intellectually superior not only to teenagers - all of them, every single one - but also to everyone else! ("protecting young people, and the rest of us, too, from potentially harmful content")
This belief is not only repellent when expressed in personal terms - you would not accept your neighbor writing public reports of your morality based on what TV you watch, for instance - but when combined with government power it lies at the root of so much evil and tyranny throughout history. It is a stagnant form of pseudo-elitism, the sort of belief that if only everyone spent their time listening to Mozart instead of Eminem then everything would be right with the world.
Who are these people to place themselves in that position? What gives them the moral right? Nothing does.
Well argued! But I think we can agree that grown men extorting nudes from 12,600 teenagers and 1 in 8 people under 16 getting sexually harassed is Bad, right?
I like all your points. I guess it’s just another one of those situations where a law that attempts to flatten a complex system into a specific rule that applies equally to all circumstances can do nothing but fall short of addressing that complexity. Of course, what else could a law do?
A more liminal approach is attractive to me, where we attempt to Meet the crisis of teen mental health and sexual harassment, rather than Match it, as Nora Bateson writes in her book on systems theory approaches to the polycrisis, Combining.
I know it’s generally considered not very “policy friendly” or “realistic”, but working from a place where we all agree that tools should not be actively designed to facilitate the exploitation of minors AND that people should have a say in the laws that govern their behavior might help eliminate some of that moral elitism, and allow us to move forward with imperfect but better-fitted complex responses to complex issues.
All that said, I like the sound of the government producing a public report explaining how they fail us. Something that goes even farther than crime, homelessness and economic statistics—a report that is equivalent to “disclose how your algorithms operate” where they make public the incentive structures and cultural landscapes that encourage them to continue to fuck us over. Hahah
As per usual, a Substack comment section is full of people who are afraid of censorship and skeptical of government intervention... seems like those two feelings are inherent to Substack culture, for understandable reasons.
Anyway, I love the experiment this bill represents. I have doubts about enforceability, but I love that this is written into law: "A social media platform's algorithmic ranking system must not optimize content that is not related to a user's expressed preferences in order to maximize the user's engagement with the platform."
I haven't touched Twitter, Facebook, Instagram, or the like in many years now, as I noticed the way their changed algorithms were changing my behavior and didn't like it. But boy would it be nice to return to a healthier social media and reconnect with those in my life who prefer to stay connected online.
A huge thank you for the critical work you are doing Ravi!
While legislating barriers to protect minors from predatory adults on social media platforms makes sense, mandates regarding data sharing, algorithm behavior, or content moderation seem like overreach, could be abused, and may be unconstitutional.
What kind of abuse are you concerned about with regard to transparency requirements? We require some transparency about how the food we consume is prepared (e.g. ingredient labels). Here, we are hoping for analogous transparency about how our information system is designed. Still, this really is a "laboratory of democracy" and I don't at all believe the bill is perfect. So I'd be glad to hear well-meaning ideas about what kinds of abuses we would want to address in future iterations.
We don't require such transparency of food. Notable places that lack ingredient labels: restaurants or food stalls of any kind.
Governments only require these labels on packaged food, which is inconsistent and nonsensical. Clearly nobody cares about this information, otherwise they'd demand it from restaurants too, but they never do. What does matter is things like allergen presence, but ingredient labels are the worst possible way to present that kind of info.
Laws requiring ingredient labels should just be rolled back.
Agree with @Hallie Rose Taylor. I always read labels: for myself, for nutritional content, and for my allergic son, for allergen information. When you’re at a restaurant and you want to know what’s in your food, you can ask, which I often do. I can’t think of one benefit from removing food labels from packaged food.
The benefit is lower costs. Why does it seem that nobody in this thread recognizes that there are costs to things?
Wanting to know about allergies is perfectly reasonable, as are a few other aspects of food, but please note we're discussing ingredient labels, not all food information. Restaurants don't provide ingredient lists but do provide allergen information, and sometimes calorie counts too.
Of course everything costs money. I’m not sure why you think everyone in this thread thinks everything is free. But some things are worth the cost, food labeling being one of them. I’m sorry…but this is such an odd, nonsensical argument, I’m stepping away.
Because of comments like these:
"I can’t think of one benefit from removing food labels from packaged food."
"I have no clue what the benefit of rolling these things back would actually be."
"It’s difficult to see why anyone would disagree with this on moral grounds."
Your circle may not be representative. I've never seen anyone look at these labels. Saying people love eating at restaurants more than they love knowing the ingredients is just another way of saying nobody cares.
It's based on observing that nobody I've ever known in any capacity reads these labels, nor have I ever seen anyone do so in all the media I've consumed over a lifetime so far (books, TV, film, games), nor is putting ingredient labels in places where they're missing a political cause anywhere in any society on Earth. That may not be an academic study (lol), but it's good enough for me!
You didn't mention veganism or any allergies, so I'm still skeptical that you really care. When was the last time you refused to eat something because it was unpackaged? People who truly care about ingredients are people like vegetarians, people with nut allergies or people with religious requirements. It's obvious they care because they will forgo food entirely rather than eat something that might have a problematic ingredient. But even they don't want to read a list of 30 ingredients: they just want to know the food is nut-free or Halal.
> Moreover, I have no clue what the benefit of rolling these things back would actually be.
No of course not. Laws are free and compliance costs are paid by fairies! We can make food manufacturers do unlimited amounts of work and it won't have any downsides at all.
Can we guess that you've never worked in an engineering, manufacturing, or regulatory compliance role before?
How does this bill affect small businesses that offer what might be termed social media? I’m building integrally.one, a participatory debate platform, and we have little budget to conform to onerous new regulations. And the part of the problem that you and Haidt focus on so heavily, social media itself, is only really a problem for liberal girls; conservative girls are significantly less prone to bad outcomes. It’s culture, not structure, once again. https://x.com/zachg932/status/1249764370458062850?s=46
I think you are fundamentally missing one of the main points that Haidt has made. It's not just that social media has been bad for liberal girls; rather, it's been bad for all youth, with girls and liberals getting the worst of it. But also remember that, with regard to depressive affect, the divide across sex and politics already existed before social media.
I mean I would argue that it's both structure and culture.
The issue with the figure you link is that it does not consider the circumstances of each mental health diagnosis, nor does it stratify by significant factors like sexual orientation, which are more prevalent in the liberal demographic. Also, there is extremely high variance in these samples, with error bars often overlapping across sex and political identity.
Social media being the driver is Haidt's theory, but it's not proven, and there were strong counterarguments to his claims that he never successfully addressed.
Looking for definitive causal links is difficult when several variables may be codependent, and when ethics limit what you can feasibly test, so sometimes correlations are the best you are going to get. And the correlations between compulsive digital media consumption and poor mental health outcomes, especially among the youth, are undeniably strong. Haidt's work certainly has blind spots that I've addressed in writing, but I do think social media/internet access is the largest contributor.
I would be curious what strong counterarguments you mean. Responses to the social media argument - https://www.nature.com/articles/d41586-024-00902-2 - often cite contemporary events as the cause, with far weaker correlations. Suggesting that school shootings are responsible, but then picking only a select few 'landmark incidents' to match the data, heavily weakens an already spurious argument. Dishonestly pretending that the current year is the worst year for the average person to be alive in, or that people's brains are being cooked by climate change, seem like much weaker arguments than those for social media. Especially since many of the arguments from contemporary events fit the trendlines poorly or fail to explain why the effects are localized to just the youth.
I was referring to Aaron's argument in Reason. He spot-checked a lot of studies presented as evidence and found them to be either uselessly weak or not relevant to the question. Haidt's response was to claim people shouldn't demand strong evidence for a public health crisis, which is circular logic, as the reason he believes there's a "crisis" is that same collection of pseudo-science.
I lost all the respect that I'd gained for him up to that point, from when he was doing Heterodox Academy work. You can't present a giant pile of papers that often aren't even related to social media, claim they're proof of your thesis, hope nobody checks, and then try to BS your way out of it when called on it. To Haidt's credit, he did at least engage with his critics and offer a well-written response, rather than just ignoring them. He's still one of the better social scientists out there. It's just that this is a very low bar, and he's presenting his case as a fait accompli when in fact he still hasn't left the starting line. This is why calling for legislation on the back of social science is so often immoral: it's an incredible application of force and violence to make people comply with rules pulled out of the murkiest of intellectual wells.
Reading the bill, it looks like it only affects social media platforms that have 10,000 monthly active users in the state of Minnesota. It seems very likely to me that by the time your platform has reached that benchmark, you will have the budget to comply with the legal requirements.
That seems unlikely. Social media is a highly scalable business. A few people can take a successful site to the tens of millions of users range. 10k MAU isn't much.
There are a lot of other problems with it. For instance, if your site suffers an undetected bot attack, it will appear that your site has a lot of users in Minnesota even if it doesn't. Since bots try to appear like regular users, and will show up that way on the dashboards that MN is presumably going to audit, this could put even very small sites in the nasty position of suddenly finding themselves in non-compliance.
It also means that even tiny sites, like blogs or forums, have to set up software to geolocate all their users (a hard problem in and of itself), regularly monitor how many are in MN, and then alert the operators if that threshold is crossed (a sketch of what that looks like is below). If you don't set it up at the start, then by the time you realize you might need it, it's probably too late, and now you're a criminal.
It's clear that people who think this stuff is trivial have never had to code up or run a UGC-driven service before.
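For what it's worth, here is a minimal Python sketch of the monitoring burden being described. The helper names are hypothetical, and `geolocate` is a stand-in for whatever IP-to-region lookup a site would actually have to wire up:

```python
# Minimal sketch of the compliance monitoring described above.
# All names are hypothetical; geolocate() stands in for a real
# IP-to-region lookup (e.g. a GeoIP database).

MN_THRESHOLD = 10_000  # the bill's monthly-active-user trigger

def geolocate(ip_address: str) -> str:
    """Stand-in: return a region code like 'US-MN' for an IP."""
    raise NotImplementedError("wire up a GeoIP database here")

def alert_operators(message: str) -> None:
    print(f"[compliance alert] {message}")

def check_mn_threshold(monthly_active_ips: set) -> None:
    # Geolocation by IP is imprecise (VPNs, carriers, bots), which is
    # exactly the bot-attack problem raised above: bots that look like
    # Minnesota users inflate this count just like real ones.
    mn_users = sum(1 for ip in monthly_active_ips if geolocate(ip) == "US-MN")
    if mn_users >= MN_THRESHOLD:
        alert_operators(f"MN MAU at {mn_users}: the statute may now apply")
```

Even this toy version has to run continuously, keep per-user records it otherwise wouldn't need, and trust an inherently noisy signal.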
https://t.co/VAMXl41SZj
“AI recommendation systems optimize for clicks, likes, and time engaged rather than the quality of content.” And who decides what quality content is?
Often it is done by users. It isn't perfect, but asking users for their aspirational preferences for content (e.g. what they think is quality), rather than assuming that what they watch is what they want more of, leads to better (and safer) experiences for users. see https://arxiv.org/pdf/2402.06831 for more on how platforms have done it in the past
Since everyone is now conditioned to like and seek out inflammatory and provocative content, I’m skeptical. But I will check out the article. Thanks!
I don't necessarily disagree, but it's somewhat a function of helping people feed their aspirational selves. Many people read such things and may even "like" it to show social support, but they also realize it is often sensational, misleading, overly divisive, etc. If you instead ask them what they explicitly want to see more of or what they think is higher quality, you end up with more informative, less divisive content. Some people still want that stuff, but the other parts of the law address the outsized and often unwanted impact of that small group of people on everyone else.
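For a concrete (and entirely made-up) illustration of the difference, a ranking score could blend the usual engagement prediction with a prediction of what users would *say* is worth their time if surveyed. The weights and item fields below are invented for the example, not taken from any real platform:

```python
# Illustrative sketch: blend predicted engagement with users' stated
# quality judgments (e.g. from surveys) instead of ranking on
# engagement alone. All signals and weights here are hypothetical.

def ranking_score(item: dict, w_engage: float = 0.4, w_quality: float = 0.6) -> float:
    # p_engage:  model's predicted probability the user clicks/watches
    # p_quality: model's predicted probability the user would say
    #            "this was worth my time" if asked directly
    return w_engage * item["p_engage"] + w_quality * item["p_quality"]

feed = [
    {"id": "rage-bait", "p_engage": 0.9, "p_quality": 0.2},
    {"id": "explainer", "p_engage": 0.5, "p_quality": 0.8},
]
feed.sort(key=ranking_score, reverse=True)
print([item["id"] for item in feed])  # the explainer now outranks the rage-bait
```

The design choice is simply which signal gets the larger weight; pure engagement ranking is the special case where the stated-preference weight is zero.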
It's done already. That's literally what the like button is: asking the user "do you like this" and then trying to give users more of it. Engagement just means people deciding that the content is high quality, vs content they immediately navigate away from because it's low quality.
You aren't solving any actual issues with any of this. There is no problem to solve.
Yes, in some world where people are perfect optimization robots. In the real world we restrict alcohol, harder drugs, gambling, prostitution, nicotine, etc. because we know that we are weak apes and what we want isn’t always what’s good for us. So we as a collective restrict them, to one degree or another.
Algorithmic infinite scrolls and short-form video are at least as bad for us as many of these other things. Just as addictive, just as harmful. It’s fully within normal precedent to regulate them.
Please check the interesting story of prohibition before proceeding.
I agree in the sense that there are few positive takeaways from doomscrolling through short-form videos, but I see the 'how' of regulation as more important than the decision to regulate itself.
Short-form videos do have a place on the internet and on social media. However, as you rightly acknowledge, their allure is the low opportunity cost of their consumption, owing to their brevity. If you assume that the rate of quality is uniform regardless of content length, then you would still consume, on average, the same quality of content; short-form just allows those dopamine hits to occur at shorter intervals.
The infinite scroll is another issue in and of itself, and while I certainly do not think it should be enabled by default, I don't think that the elimination of devices like the infinite scroll or auto-play would be anything more than security theatre. Making users manually select the next video or refresh their feed isn't going to hinder social media addiction.
Think for a second of the operant conditioning chamber. The current state of the infinite scroll requires the rat to press the lever only once to receive an infinite supply of social media. Forcing users to manually refresh is the same as expecting the rat to pull the response lever an additional time whenever it wants more from the dispenser. All you will do in this scenario is recondition people rather than alter their behavior patterns.
This is complete nonsense. We already know they removed the Dislike button because they don't want content to be labeled as bad.
People don't share high-quality content, they share highly engaging content. Engagement doesn't mean high quality at all. It's the complete opposite. Everything that is shocking or disturbing gets shared; nothing educational, not even fun, nothing interesting.
You just have to spend 5 minutes on the YT or IG feed to realize that. And it's even worse for kids who can't tell the difference between that and reality.
That sounds like a different internet. The highest engagement content on YouTube is stuff like MrBeast, which is famously wholesome.
Sounds like a back door to government censorship of dissenting views. We should all take note of laws that aim to “protect” us from information. A better way is for PARENTS to limit their child’s access to social media platforms and to educate our children to question all regulations put forward by government.
Surveillance capitalism is a huge problem. It's about time sensible guidelines of engagement were applied. Respect for civil liberties needs to be weighed against moral age-appropriateness. Sticking it to the oligarch owners of social media is reasonable. They need to be held accountable for all the harm their algorithms cause. I see a lot of lawsuits coming down the pike because these assholes will not go gently into the night.
What about safety standards for the phone itself? Social media works through its mechanism of action - blue light, which depletes dopamine, along with the magnetic field that lulls children into a suggestible state.
Interesting article. I’m not sure what the answer is on this topic, but I do find part of this perspective interesting. Regulating companies to stop allowing this type of communication seems like a simple solution to a complex problem. What are your thoughts on empowering parents to understand the implications of their children being on devices?
I’m a dad, and we are very cautious with how much time our son spends on devices. Especially after reading “The Anxious Generation”, and now that we have a new baby girl, it’s even more important for us. I do, however, look back at us 1-2 years ago, when we didn’t place significant roadblocks or limits. We became more aware and have completely shifted. There are highs and lows, of course, because all our son’s friends seem to have no limits. This puts pressure on us, but we take pride in considering the future impacts for our kids. I wonder what the switch is for other parents to become more aware and care enough to do something about it.
Amazing. Individuals (though maybe not youths) CAN oppose social media mechanisms, but it doesn't mean they HAVE TO. It's never too late for stronger forces (government) to start acting. Thanks for sharing!
The problem is in this graph: it’s not social media, it’s the distorting view of a liberal mindset. https://x.com/zachg932/status/1249764370458062850?s=46
We can only hope that the SCOTUS will defang US Surgeon General Vivek Murthy if someone can obtain cert.
Thank you.