A former Meta product manager suggests changes that would make the reforms more effective for mitigating the coming impact of AI.
As an educator and mother of two teenagers, I am so thankful that these conversations are occurring. In addition to putting an age limit on social media (one that I am sure many kids will try to bypass, but that will hopefully create a barrier), I think we also need to look at our education system and its push to make all students 1:1 with technology, meaning every child has their own device and, in some cases, access to it all day at school. This mindset that kids need to be educated by computers, even as we know those same kids must constantly divert their attention from the strong pull of the internet and so-called "educational games" (our latest version of the "fat free" SnackWell's cookie), is a huge part of the problem. Yes, social media is bad, but so is giving 10-year-olds free access to a computer all day at school and expecting them to be "working on math" or "reading" when we know these other forces, designed to steal their attention, are so easy for them to reach. We have also forgotten the importance of multisensory learning and face-to-face communication, something that is lost with tapping on a keyboard. The stronger the foundation of social skills we can develop, the better our children will be able to navigate their behavior online. These skills need to be developed through real human interaction, not through a screen.
> Some social media platforms have introduced reputation-based functionality with successful results. For example, Reddit’s upvote/downvote and Karma system have proven useful for improving social discourse while avoiding the privacy issues that could come with identifying all users. Using this model, we could require accounts to earn trust from the community before giving them all the power (and responsibility) of widespread distribution and develop ways to make the loss of community trust consequential.
Not sure you want to be using Reddit as your model to emulate. It has a widespread reputation as "the [insert unflattering body part here] of the Internet," based largely on the ability to create new accounts entirely anonymously and unaccountably. When anyone can trivially create an alt "for free" and wade into a discussion pretending to be new, (or have multiple well-established alts amplifying each other's voices, or any number of other bits of bad behavior,) you get... well... the toxic mess that is Reddit.
> Consider an example from one of the leaked Facebook paper documents revealing that a small set of users are responsible for nearly half of all uncivil comments. The absence of an effective downvote system ironically amplifies their visibility when others engage to contest their behavior. What if we could diminish this group's social sway by holding them accountable, possibly through a history of downvoted comments?
This might be more effective, but it's also possible that it could lead to whole new forms of bullying and harassment. Unless the threshold for making it onto the downvoted comments list was unreasonably high, it wouldn't be particularly difficult for malicious users to brigade somebody they didn't like and make them look like a problem.
I think the best solution to this that I've seen comes from StackOverflow: downvoting decreases the target user's reputation, but it also decreases your reputation as well, by a smaller amount. You're allowed to hold other users accountable, but there's a cost to doing so. (A few years back they significantly weakened that cost, and the site's quality has gotten a lot worse ever since.)
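The StackOverflow-style mechanic described above is simple to sketch. Here is a minimal illustration in Python; the `User` class, the function name, and the penalty values (2 for the target, 1 for the voter) are all hypothetical choices for illustration, not StackOverflow's actual implementation:

```python
# Sketch of a downvote that costs the voter as well as the target,
# so holding others accountable carries a small price. All names and
# numbers here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class User:
    name: str
    reputation: int = 0


def downvote(voter: User, target: User,
             target_penalty: int = 2, voter_cost: int = 1) -> None:
    """The target loses reputation, and the voter pays a smaller cost,
    which discourages casual brigading while still allowing criticism."""
    target.reputation -= target_penalty
    voter.reputation -= voter_cost


alice = User("alice", reputation=100)
bob = User("bob", reputation=50)
downvote(alice, bob)
# alice ends at 99, bob at 48
```

The asymmetry is the point: making a downvote free invites pile-ons, while making it too expensive suppresses legitimate accountability, which is why tuning `voter_cost` matters so much.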
> Questions of causality are pervasive in debates about social media (e.g., is social media a reflection of our societal polarization, or is it causing that polarization?)
Seems to me the best answer is "both." It's a feedback loop; existing polarization leads to polarizing content, which drives further polarization.
> Social media has been hailed as removing gatekeepers, but those gatekeepers may not all be bad.
Hear, hear! The principle of Chesterton's Fence (Chesterton's Gate?) applies here; as more and more "gatekeepers" are removed, we see more and more clearly the costs and harms of gates inadequately kept.
Don't agree with everything in this piece, but I like the analogy to "building codes" a lot. I feel like most people's eyes glaze over when you mention the word "algorithm" but everyone knows what you mean by electrical box. However, it took a lot of terrible catastrophes before there were building codes enforced. How do we limit the damage today? What is the tipping point to say, "Now we must put into practice these codes?" And how do we agree on those codes?
I think we need some independent organization(s), research groups, and/or government organizations to head this up - people with real working knowledge of how these platforms work, how they got the way they are, and how we might fix them. I'm very tired of politicians/lawyers acting like "experts" in other fields when really they need to take advice from smart people like Haidt and Ravi. I hope they are doing just that.
As always, the fundamental issue will be who determines what counts as 'mis-' or 'dis-' information.
Reddit's thumbs up/down system does not reward good content. It famously penalizes contrarian thinking and rewards conformity. What defines conformity depends on the subreddit, but center and center-right commentary anywhere on Reddit gets downvoted outside of a minuscule number of subreddits. In addition, contrarian voices on Reddit are often tracked down outside of a specific subreddit and the user subsequently banned within that subreddit, even if nothing written by that user in that subreddit was outside the bounds. Reddit, if anything, is a warning, not a positive example.
Secondly, the social reward system advocated in this piece isn't new either. We saw it in the _Black Mirror_ episode "Nosedive," and we see it on display in China. Is this what we want? Hardly. If anything, a system such as this again penalizes contrarians and anyone with a "conspiracy theory"—you know, like "SARS-CoV-2 came out of the virus lab in Wuhan," the "crazy" stuff that we learn after the fact was spot on.
I agree with everything Ravi suggests but as a child and adolescent psychiatrist I don’t think we should leave the role of social media gate keeper to parents. There is a very real risk that young people who are already disadvantaged by coming from troubled/very stressed families will be further disadvantaged by having unrestricted access to social media. Not to mention the huge difficulty parents already report trying to set limits on their adolescents’ use of social media/technology. If we feel social media is damaging for young people, society should find ways to protect them from the risk as we currently do with alcohol, smoking etc even if these efforts are only partly successful.
I’m of the opinion that social media providers are morally culpable for the marked rise in teen and young adult depression and suicide since 2011. They made only token efforts to safeguard the most psychologically and emotionally vulnerable segment of our population, claiming the mantle of Section 230 protection at the same time their employees were curating content to suppress that with which they disagree.
"We should help parents regain their rights to be gatekeepers for their children."
I sympathize with this, but I fear it’s a losing battle. Children and youth will always be the primary targets of big tech: get them young, and get them hooked, thus ensuring years of profit.
There is also an underlying values problem. Our culture has lost the ability to say a healthy “No”, let alone to encourage anything like the reclamation of parental “rights”.
Ravi's suggestion to "Focus on accountability, not identity" seems clearly right to me. Anonymous accounts can develop a strong reputation—e.g., the DRASTIC group, many of its members anonymous, who contributed so much to understanding the origins of Covid. Anonymity is crucial to the ability to state dissenting views without fear of personal repercussions.
Three fundamental issues are not addressed. In the first place, digital devices and platforms create addictive behavior by stimulating the dopamine effect. In the second place, the algorithms aim to elicit strong, engaged reactions in the users. Therefore, the business model basically automates an abusive environment. And in the third place—as we have seen in the Twitter Files—the application of 'visibility filters' really means tech platforms create an alternate reality that, inasmuch as people are not warned explicitly, innocent users are going to mistake for the world out there. Isn't that induced psychosis?
All of this smells like the revenge of the nerds, which is what happens when you subject (formerly) human relations to a system that calculates responses.
Today when I went on Facebook, it was nothing but ads. I think most people including myself are burnt out by it. Zuckerberg et al really made out like bandits by monetizing, as people gradually caught on that what they were seeing was not real anymore. Now we’re all just a lot sadder, more frightened and suspicious, and less likely to trust anything. Zuckerberg brought us Trump. The reforms listed above are excellent, but are too little too late for the mind f’ed generation. Get off the screens and get out of the house and connect with people and, for God’s sake, don’t post any pictures of yourself doing so. You’ll instantly become part of the problem.
Reddit is well-known for the effectiveness of its woke thought police - strange to cite it as any kind of model. Rather, dissenters from the woke regime should realize that speech platforms can never escape politics, and redouble efforts to build alternatives to Big Tech's woke regime.
Jon Askonas and Ari Schulman, "Why Speech Platforms Can Never Escape Politics"
I couldn't agree more with the concept of accountability versus authentication of a real identity. Although I understand the motivation to identify the real-life person behind these comments, that just opens things up to future abuse by intelligence agencies when they get their hands on this information. It could be Trumpers going after wokesters or vice versa. Not being able to downvote comments on social media reminds me of the phenomenon where all participants in a competition get a ribbon at the end.
It's already a source of frustration for me that I have to lie when I create my kids' accounts on various websites, because as they're both under 13, certain sites are horribly crippled. Having an under-13 Google account, for instance, made it unusable for us, and fixing it was complicated though not impossible. (We set my son's birthday to a day before he turned 13 and then let it roll over.)
It's fine if parents want to create child accounts for their kids, but existing legislation, aside from being easily surmountable, actually reduces parental choice by forcing restrictions onto child accounts with no way to remove them.
I'm extremely doubtful that any additional legislation wouldn't just be more of the same, but worse.
Super interesting piece on a critical issue. As a crypto professional, one of the aspects of our industry that I feel is most misunderstood is that we can be a tool to make the internet SAFER. Point 1 of this piece is possible on a mass scale applying cryptographic identity solutions. Authentication is one of the primary uses of blockchains. Frank McCourt and Project Liberty are very articulate about the need to make the internet safer and how crypto is a huge part of doing so.
I’m just a lowly stay-at-home mom… why are tech people doing this to us? Tech people must really hate normal human beings. I’m definitely a terrible parent because I have this bizarre idea that if I need to remind my child to brush his teeth, he probably doesn’t need a phone or anything with unlimited access to the internet.