Brigading and ratio'ing are common terms, but the person being brigaded or ratio'ed has little visibility into the context. I'd like to see more visibility into the technique, the actual message being delivered, and by whom (or what).
I know for a fact from personal and professional experience that this kind of information exists and is used by some platforms to shape policy and direction (for advertisers, political messaging, or actual benevolent purposes). IMO, we (the users) should have more visibility into, and use of, the tools used to "push the sheep around the fields".
No social interaction can be assured to be 100% safe. However, we can and should (IMO) do better at providing social tools/cues to people who do choose to interact for whatever reason. We have tone, volume, facial and body expressions that are learned subconsciously and taught to us throughout our entire lives in "meat space".
IMO, We need better tools in cyberspace to increase the likelihood of positive or at least productive (by whatever measure you deem productive) interactions.
Don't agree with everything in this piece, but I like the analogy to "building codes" a lot. I feel like most people's eyes glaze over when you mention the word "algorithm," but everyone knows what you mean by an electrical box. However, it took a lot of terrible catastrophes before building codes were enforced. How do we limit the damage today? What is the tipping point to say, "Now we must put these codes into practice"? And how do we agree on those codes?
I think we need some independent organization(s), research groups, and/or government organizations to head this up - people with real working knowledge of how these platforms work, how they got the way they are, and how we might fix them. I'm very tired of politicians/lawyers acting like "experts" in other fields when really they need to take advice from smart people like Haidt and Ravi. I hope they are doing just that.
It's a great strategy to keep the debate going forever while nothing gets done in the meantime, which favors the "mis- and disinformation" economy of the platforms. Genius!
I agree with everything Ravi suggests but as a child and adolescent psychiatrist I don’t think we should leave the role of social media gate keeper to parents. There is a very real risk that young people who are already disadvantaged by coming from troubled/very stressed families will be further disadvantaged by having unrestricted access to social media. Not to mention the huge difficulty parents already report trying to set limits on their adolescents’ use of social media/technology. If we feel social media is damaging for young people, society should find ways to protect them from the risk as we currently do with alcohol, smoking etc even if these efforts are only partly successful.
I would suggest a slight modification to your point. I nod to your experience as a psychiatrist. However, your comment "I don't think we should leave the role of social media gatekeeper to parents" really 'jerks my chain,' as my grandmother used to say.
As a parent of two adult sons who grew up in the first social media cohort, I can attest firsthand that social media can be harmful to adolescents, both middle-school and teenage children. However, no one, and I mean no one, gets to make first-order (safety, comfort, happiness) decisions about my kids without me and my partner (wife in this case) being the gatekeeper. Give me better tools, help me get to where we want/need to go, and provide compassionate assistance to those without guidance, but don't TELL me what I can or can't do with my family.
Obviously, a bit of hyperbole to get my point across. I don't intend to pick a fight.
I am 100% on board with protective guardrails for kids, families, and adults who don't know or can't help themselves, in the form of product labels, content warnings, and usage criteria. But PLEASE don't suggest creating yet another social program (regulation) that will solve this issue. It has to start with each of us as individuals, taking our own responsibility to do the best we can. Give me better tools, visibility into the way the system works, and options to control things for myself and my family, but please don't allow the government or the product maker to assume they know what's best for each of us or else they will (based on history) make designs for all of us that don't really work for any of us.
I think you are right that parents need to be the ultimate gatekeepers here. I think Penny’s point is that there are many kids whose parents are unwilling to perform that function, and they need to be protected too.
My statement... "and provide compassionate assistance to those without guidance"... means helping parents, grandparents, extended family, teachers, friends, churches, local city programs, Boy/Girl Scouts (the original ones, not the ones that exist today), etc...(except the federal gov.) assist in this issue.
The best way for them to be able to assist (IMO) is to give everyone more visibility into, tools to utilize and the ability to influence the systems our children interact with (including public school).
On a personal level, I have compassion, sympathy, and empathy for kids that live in situations where they don't have adequate leadership. I am a champion of personal responsibility to give back into our youth. However, I am very wary, scared, and disillusioned with overarching/overreaching "programs" intended to help but end up as vehicles for graft, corruption, and power for the government too far away from the source of the issue. Many issues are cultural, local, and specific to the individual child.
IMO, From a product development perspective, we need to incentivize the platforms to build more tools for the above list to utilize. That is a detailed and technical conversation maybe best had in other locations but that is what I am trying to build consensus for.
Let the consumer see, manage and control our own experiences without manipulation, trickery, and outright fraud.
I totally agree, and as a clinical social worker I especially appreciate your point about more and more social programming. I have found over the years that social programming is usually aiming a firehose at the smoke rising from the fire when it needs to aim at the base of the flame, and I don't know that social programming is equipped to do that.
OK, I can't resist the opportunity to expand and play off your statement to "build a better metaphor"
IMO,
Social programming today is a large and powerful hose emerging from marble buildings and nondescript office buildings.
Aimed across vast distances that lie between there and the blaze.
Millions, billions, trillions of gallons of water($) are needed to propel it the distance.
It spreads out in a fan and drops water on everyone in between like a cold rain unasked for but endured "for the good of everyone".
Its power and spread knock down some of the smoke, making a splash and impact of visual sight.
Watching from afar it looks impressive and provides an excuse to turn and "go about your business".
The firefighters on the ground and the residents of the structure in flames fight on with the resources they have, knowing the mist does little but make them wet.
For those caught in the blaze?
Will someone there rush in to help? Will the local assistance get to them in time?
They wait to be rescued or to be burned alive. Quickly if lucky or slowly if not.
Damaged for life either way.
Ok... prose moment is over... Thank you for the opportunity to scratch an itch. Full credit for the original metaphor.
Reddit's thumbs up/down system does not reward good content. It famously penalizes contrarian thinking and rewards conformity. What defines conformity depends on the subreddit, but center and center-right commentary anywhere on Reddit gets downvoted outside of a minuscule number of subreddits. In addition, contrarian voices on Reddit are often tracked down outside of a specific subreddit and the user subsequently banned within that subreddit, even if nothing written by that user in that subreddit was outside the bounds. Reddit, if anything, is a warning, not a positive example.
Secondly, the social reward system advocated in this piece isn't new either. We saw it in the _Black Mirror_ episode "Nosedive," and we see it on display in China. Is this what we want? Hardly. If anything, a system such as this again penalizes contrarians and anyone with a "conspiracy theory"—you know, like "SARS-CoV-2 came out of the virus lab in Wuhan," the "crazy" stuff that we learn after the fact was spot on.
I’m of the opinion that social media providers are morally culpable for the marked rise in teen and young adult depression and suicide since 2011. They made only token efforts to safeguard the most psychologically and emotionally vulnerable segment of our population, claiming the mantle of Section 230 protection at the same time their employees were curating content to suppress that with which they disagree.
Thank you, William; I'm happy to see that someone is keeping hold of the core issue of this entire conversation.
It would already be a great change if ANYONE from those companies speaking on this issue started the conversation by apologizing for the millions of victims their products have generated, and with a public commitment that never again will they allow so much harm to be done. Without an apology, without repentance, and most of all without punishment, what behavior can we expect?
"We should help parents regain their rights to be gatekeepers for their children."
I sympathize with this, but I fear it’s a losing battle. Children and youth will always be the primary targets of big tech: get them young, and get them hooked, thus ensuring years of profit.
There is also an underlying values problem. Our culture has lost the ability to say a healthy “No”, let alone to encourage anything like the reclamation of parental “rights”.
I hope you’re right. As a parent with concerns about digital tech use, I have definitely found it easier to say “yes” than “no”, not only because of the addictive nature of many forms of digital tech, but because there is an implicit assumption in our culture that “no” is somehow harsh or unfair.
And yet, interestingly, whenever my wife and I have been able to set strong limits for our children, as well as opportunities for them to shift their energies elsewhere, the result has been pleasantly surprising: the kids suddenly find creativity they didn’t know they had.
Ravi's suggestion "Focus on accountability, not identity" seems clearly right to me. Anonymous accounts can develop a strong reputation - eg the DRASTIC group, many anonymous, who contributed so much to understanding the origins of Covid. Anonymity is crucial to the ability to state dissenting views without fear of personal repercussions.
Agreed. Good anonymous accounts develop a reputation and strive to maintain it. The same was true over 200 years ago with the anonymous essays published in the Federalist.
Norman, what if you knew that "anonymity is crucial to the ability to state dissenting views without fear of personal repercussions" for the 5% of people in some remote countries who need it, but that it also served the other 95% to do harm, including to children and people in your own country? Would you consider that price a democratic deal?
Can you give me an example of how it will harm people?
Also note that on the need for anonymity to state dissenting views without fear of repercussion, I was specifically thinking of my own country, Canada, not just remote countries.
You really can't think of any? Let me help... millions of children suddenly available to tons of predators (according to NCOSE, there are 750,000 predators online at any time of the day).
What if the efficacy of "dissenting" online is a little inflated (skilfully, by the companies)? After all, our ancestors achieved so much (more) with so much less than us. It's not as though dissent was born with social media. We know social media exploit human weaknesses and beliefs, and they are very devious. If they were exploiting our need to "dissent" and to protect "dissent," how could we tell?
Eckhart Tolle explains that we humans are addicted to our words. These guys in Silicon Valley created a genius machine that makes us talk while we lose all our fundamental rights. With children in the front line.
Three fundamental issues are not addressed. In the first place, digital devices and platforms create addictive behavior by stimulating the dopamine effect. In the second place, the algorithms aim to elicit strong, engaged reactions in the users. Therefore, the business model basically automates an abusive environment. And in the third place, as we have seen in the Twitter files, the application of 'visibility filters' really means tech platforms create an alternate reality that, inasmuch as people are not warned explicitly, innocent users are going to mistake for the world out there. Isn't that induced psychosis?
All of this smells like the revenge of the nerds, which is what happens when you subject (formerly) human relations to a system that calculates responses.
I apologize if I'm being unnecessarily argumentative, but the "revenge of the nerds" comment struck a chord. Most of the "nerds" I know (being one of them) are good people who do not want revenge. They want to develop cool products, get rich, and live normal lives (pretty much like the rest of us).
The challenge we as a society have to address (IMO) is that the tools/products/technologies they have developed, are developing, and will develop are being hijacked and manipulated by the same class of people who have always wanted, and will always want, more power (money/influence/etc.).
From the printing press, to the TV, to email, to "activism", we all have to be on the watch for that/those who are manipulating us in a way that is unhealthy, without taking the easy route by blaming the tool (internet, guns, hammers, the wheel).
I don’t mind your being argumentative at all! I use the term as a provocation, of course, though not to instill anger, but rather to consider and discuss matters that are not often addressed.
In a certain sense, I am a nerd myself.
What I try to address is the fact that there is a huge irony (and I believe a risk) in entrusting human interaction (on ‘social’ media) to people with a numerical predisposition. I am not saying that such people have less value. I believe people with all sorts of talents and tendencies have value and should be considered as contributing to the whole of human interactions called society. But I do believe that in this case we have a serious mismatch. I have written about this issue twice on my Substack, History is Now. You are welcome to have a look and disagree or debate.
Appreciate the response, tone, and offer (I'll take a look).
I agree with the principle of your premise about the use of addictive behaviors and chemical responses. Gambling, cigarettes, and sugar are all regulated items exactly because of this. In the turn of the wheel, most of them became regulated not because of the initial act/chemical/amount of the substance in question but because of the eventual abuse of them by those who had the power to manipulate them in a way to take advantage of weaknesses in our biology-psychology.
IMO we could more successfully focus on the regulation of social media at the level of targeted audience, subject matter appropriateness (age/content/time), and exposition of manipulative techniques used in algorithmic advertising/content management.
The tools/content/activity (with a few exceptions) are not inherently bad/evil/harmful. Their use can/could be if we don't keep our eye on them. And we have absolutely taken our eye off the ball.
As a classical liberal, I am skeptical of the morality or effectiveness of (legal) prohibitions (the grand victory of drugs in the war against them being exhibit no. 1). But what I think would be a sensible step in the right direction is an insert/informed-consent process before registration. The fact is that most people have no idea what actually happens in the field of data privacy/tracking/profiling/targeting, etc. It is rather silly that we blindly submit our data and person to all these invisible and unknowable processes.
Even a consumer organization could create a number of categories that cover the range of processing involved. By rendering this transparent beforehand, and assuming that the cheapest platforms indeed commodify the user and so will become less of a default option, people are at least and at last given a choice.
I share your skepticism. However, I do see some benefit in providing some legal control over, say, manufacturers, service providers, and publishers; regulation around the edges of the system (kids, the elderly, the disabled, the abused/trafficked) to protect the consumer/user from abuse/manipulation of content designed to elicit a behavior not expected or desired.
Informed consent is great if implemented, and implemented well. I have issues with bright lights and sounds; warnings about this type of content help me mitigate or avoid things that would generate bad health consequences. Some people are allergic to peanuts: peanut labels. Parents want to be able to regulate the use of content for their families: give them back laws/options that allow them to dictate what their children are exposed to.
Buyer beware is a fundamental tenet of my own personal moral/ethical foundation. However, not all people have been prepared for life the same way. Not all are aware of the potential or existence of the kinds of manipulation that exist or that are even possible. I personally want an unadulterated experience, but one that clearly outlines the risks if they are not apparent (big scary teeth, thorns, or bright colors denoting poison). IMO, the "wild west" is not sustainable for any society; "from cradle to grave" is just as dangerous and even more insidious, even if it takes longer to kill. Guardrails are good, but they have to be tweaked constantly/consistently and require an informed populace.
IMO the answer lies in the soup-mix of all of these approaches at different times, in different places, and for different populations. The fact that this vehicle (Substack) and others exist and that a plurality of conversations seems to be constructive means to me that there is hope in the complexity.
It's clear that a single measure is not going to drag us out of this massive mess. Let's not forget about litigation, either, as some corporations simply defraud their customers. Ultimately, I think the answer should be cultural. Technology should be used for our good, yet often it is attributed intrinsic value. And I believe it triggers our negativity bias in a way that sends us into a vicious cycle of an increased sense of insecurity (mental obesity).
I can agree with your points. I am probably less a nerd than just an old fool but the greed that drives most enterprise is as innate in humanity as is breath.
A hammer is both a great tool and a deadly weapon. Operator error for the latter.
Eric, what you describe resonates, except the point about the tool. As extensively explained by the Center for Humane Technology in the Foundations of Humane Technology course https://www.humanetech.com/course it is misleading to approach persuasive technology as just another "tool." The difference is huge, as the asymmetry of power between the tool and the user is new. The supercomputer pointed at you, studying your every emotion and weakness and using them to manipulate you into doing what it wants, has no precedent. This technology is ALIVE and has an agenda (and it comes with a legal shield, no transparency obligation toward customers, and user illiteracy). Guns don't have an agenda; hammers don't either.
It's like a cigarette built not only to make you addicted but also to rewire your brain in the way that "third parties" pay for, against your best interest and society's best interest. It's a little different from just another tool whose effects depend on those who use it... isn't it?
Thank you, Benjamin, for recentering the conversation on the core issues.
1. Dopamine: Prof. Andrew Huberman explains that the pain of artificially generated dopamine goes way beyond just creating addiction: "Neuroscientist - What Overusing Social Media Does To Your Brain" https://www.youtube.com/watch?v=Zh-AcF_4Hao
2. Algorithm: I love the "automated abusive environment" concept to describe the toxic business model. Unless that is disabled, I am afraid it is like debating whether to give children water wings to swim in the radioactive waters of the Fukushima nuclear plant after the tsunami.
3. Alternate reality effects: the creation of an alternate, fake reality has real consequences for people's minds and lives, but also for governments' agendas, which brings us to the greater issue that Prof. Shoshana Zuboff has brought up extensively. Another elephant in the room.
Globally, as someone who listened many times former FB/META employee Frances Haugen hearings with the US Senate, French Senate, and EU Parliament I find the disconnection between the story told in the post and the evidence-supported story she told discomforting.
Yes, I noticed, too. She specified the algorithms as a point of concern, but nobody was interested in pursuing it. I believe the explanation is not that complicated: Big Tech wants them for their bottom line and Big Gov wants them for control.
Thank you, Benjamin, for keeping the focus on the core issues.
I love the "automated abusive environment" that the business model creates by default.
While I understand the need to explore solutions, unless the two components above are disabled, all solutions appear as effective as providing water wings to let children (and citizens in general) swim in the radioactive waters of the Fukushima nuclear plant after the tsunami.
Interestingly, although all the studies and popular wisdom highlight that the best childhood a child can have is a social media free childhood, NEVER have we heard those companies considering solutions like giving up children as their customers and acknowledging their ineptitude (in the best case).
Perhaps because parents are complicit. Or more compassionately spoken: they are addicted, too. I am reading Andy Crouch’s book right now. I would recommend it, as well - of course - as my essay Fear No More. It’s a battle, Sara, but one worth fighting.
True. Parents were the first to be enticed to enter the dehumanizing and God-erasing cage. It's crazy how they captured and reprogrammed our collective relational dynamics and depleted them of meaning, the very source of strength and resilience for humans.
At a very small scale, I do my best by raising awareness and promoting, through digital hygiene, technology that brings us closer, helps us thrive, and, most of all, can be turned off without regrets.
And YES, it is a battle worth fighting until the very last breath. Because of people like Prof Jonathan Haidt, Frances Haugen, Tristan Harris, you and I and many people here and everywhere in the world, the Good will triumph, in the end.
Reddit is well-known for the effectiveness of its woke thought police - strange to cite it as any kind of model. Rather, dissenters from the woke regime should realize that speech platforms can never escape politics, and redouble efforts to build alternatives to Big Tech's woke regime.
Jon Askonas and Ari Schulman, "Why Speech Platforms Can Never Escape Politics"
I couldn't agree more with the concept of accountability versus authentication of a real identity. Although I understand the motivation to identify the real-life person behind these comments, that just opens things up to future abuse by intelligence agencies when they get their hands on this information. It could be Trumpers going after wokesters or vice versa. Not being able to downvote comments on social media reminds me of the phenomenon where all participants in a competition get a ribbon at the end.
It's already a source of frustration for me that I have to lie when I create my kids' accounts on various websites. Because as they're both under 13, certain sites are horribly crippled. Having an under 13 Google account, for instance, made it unusable for us, and fixing it was complicated though not impossible. (We set his birthday to a day before he turned 13 and then let it roll over).
It's fine if parents want to create child accounts for their kids, but existing legislation, aside from being easily surmountable, actually reduces parental choice by forcing restrictions onto child accounts with no way to remove them.
I'm extremely doubtful that any additional legislation wouldn't just be more of the same, but worse.
I hear this a lot. The perversion is in so many aspects of these products that it takes a manual to list them. Cognitive dissonance all over the place. Catch-22 situations.
For sure #1 is: "You have to choose between marginalizing your child by not having a social account, or letting them be maimed by the tech companies' products and the anonymous malevolent 'third parties' hiding behind the social media legal shield..."
#2: "You have to lie to create an account for your child, so you teach your child that lying is OK."
#3: "Your child grows through the relationship with their parents, which is based on trust, but parents are led to use all kinds of surveillance tools and fake accounts to supervise their child." And so on...
It's quicksand. We must extract kids, not give them a lifebuoy.
Super interesting piece on a critical issue. As a crypto professional, one of the aspects of our industry that I feel is most misunderstood is that we can be a tool to make the internet SAFER. Point 1 of this piece is possible on a mass scale applying cryptographic identity solutions. Authentication is one of the primary uses of blockchains. Frank McCourt and Project Liberty are very articulate about the need to make the internet safer and how crypto is a huge part of doing so.
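For what it's worth, here is a tiny sketch of the cryptographic-identity idea at its simplest: an account proves it controls a key by signing, without revealing a real-world identity. It uses the Python "cryptography" library and is only an illustration; nothing in it is specific to Project Liberty or to any particular blockchain.

```python
# Minimal sketch (illustrative only): authentication by keypair, not by legal identity.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # stays with the user
public_key = private_key.public_key()        # published as the account's identity

message = b"this comment was posted by the holder of this key"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises InvalidSignature if forged
    print("signature valid: the key holder posted this")
except InvalidSignature:
    print("signature invalid")
```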
I'm just a lowly stay-at-home mom... why are tech people doing this to us? Tech people must really hate normal human beings. I'm definitely a terrible parent because I have this bizarre idea that if I need to remind my child to brush his teeth, he probably doesn't need a phone or anything with unlimited access to the internet.
Not sure whether it's "Why are tech people doing this to us?" or "Why are you parents letting tech people do this to us, the kids?"
When did we decide that protecting our children could be outsourced to lobbies, porn industry, and all the deviance market possible and imaginable? We allow what we tolerate.
Exactly! My words just weren’t helpful. I lost the phone war today. Since none of his friends live in our neighborhood and, and, and,… the ‘you don’t want a miserable child, do you?”
Regarding Reform 4 (Building Codes): When there are many individually developed products supporting each other, where does the buck stop? How do we think through - from a product perspective - building technical safeguards into individual products and sub products when competing with launch timelines and code revision limitations? Are there suggestions for keeping teams accountable in how they structure and build products given that the question of "who is facilitating what" is sometimes hard to pin down?
That's kinda technical jargon that flies WAY over my widdle head. I think John Khoury hit the nail on the head with the notion that reputation will guide people wishing to avoid toxic and unproductive verbal brawling. Some do want a more confrontational exchange.
As an educator and mother of 2 teenagers, I am so thankful that these conversations are occurring. In addition to putting an age limit on social media (one that I am sure many kids will try to bypass, but which hopefully will create a barrier), I think we also need to look at our education system and its push to make all students 1:1 with technology (meaning every child has their own device and, in some cases, access to this device all day at school). This mindset that kids need to be educated by computers, while knowing that these same kids have to divert their attention from the strong pull of the internet and so-called "educational games" (our latest version of the "fat free" SnackWell's cookie), is a huge part of the problem. Yes, social media is bad, but so is allowing 10-year-olds free access to a computer all day at school and expecting them to be "working on math" or "reading" when we know these other forces, designed to steal their attention, are so easy for them to use. Also, we have forgotten the importance of multisensory learning and face-to-face communication, something that is lost with tapping on a keyboard. The stronger the foundation of social skills we can develop, the better our children will be able to navigate their behaviors online. These skills need to be developed through real human interaction and not through a screen.
Good point! I wish more educators held your views!
I am hopeful that more do than not! The more I ask others about this, the more I hear "we need to get rid of computers!" I am not naive enough to think we can completely get rid of them, but I intend to keep speaking out about it so that maybe we can make some changes. I would hate to see us keep moving in this direction.
Excellent point about the paradox we are putting children into (and a reason for children to think adults are not reliable people).
As Big Tech (and whoever is behind them) fights to cancel wisdom, prudence, and Love from humanity (the preconditions for creating the dehumanized people they need for their innovations), let us be inspired by the quote attributed to President Lincoln:
“You can have anything you want if you want it badly enough. You can be anything you want to be, do anything you set out to accomplish if you hold to that desire with singleness of purpose.”
We have to stop looking for compromises and negotiating the non-negotiable; we must be brave:
To learn the ABCs of life and how to become fully functioning adults, kids don't need digital stuff. How do we create the conditions for kids to learn the ABCs of life until they are 18, and ONLY THEN give them access to the tech? Meanwhile, we pressure decision makers to SANITIZE tech of its addiction/OCD-generating components and to impose ethical algorithms. Pilots are a great example (safe pilots!): they can't fly computerized planes unless they first master manual flight skills. And that makes a whole difference in an emergency, as the sad case of the Air France Rio-Paris flight showed.
Sara, I love your comparison of training pilots to first master manual flights before they can fly computerized planes! Such a great point!
Couldn't agree more, well said!
> Some social media platforms have introduced reputation-based functionality with successful results. For example, Reddit’s upvote/downvote and Karma system have proven useful for improving social discourse while avoiding the privacy issues that could come with identifying all users. Using this model, we could require accounts to earn trust from the community before giving them all the power (and responsibility) of widespread distribution and develop ways to make the loss of community trust consequential.
Not sure you want to be using Reddit as your model to emulate. It has a widespread reputation as "the [insert unflattering body part here] of the Internet," based largely on the ability to create new accounts entirely anonymously and unaccountably. When anyone can trivially create an alt "for free" and wade into a discussion pretending to be new (or have multiple well-established alts amplifying each other's voices, or any number of other bits of bad behavior), you get... well... the toxic mess that is Reddit.
> Consider an example from one of the leaked Facebook paper documents revealing that a small set of users are responsible for nearly half of all uncivil comments. The absence of an effective downvote system ironically amplifies their visibility when others engage to contest their behavior. What if we could diminish this group's social sway by holding them accountable, possibly through a history of downvoted comments?
This might be more effective, but it's also possible that it could lead to whole new forms of bullying and harassment. Unless the threshold for making it onto the downvoted comments list was unreasonably high, it wouldn't be particularly difficult for malicious users to brigade somebody they didn't like and make them look like a problem.
I think the best solution to this that I've seen comes from StackOverflow: downvoting decreases the target user's reputation, but it also decreases your reputation as well, by a smaller amount. You're allowed to hold other users accountable, but there's a cost to doing so. (A few years back they significantly weakened that cost, and the site's quality has gotten a lot worse ever since.)
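To make that mechanic concrete, here is a rough sketch of the idea; it is not Stack Overflow's actual implementation, and the numbers and names are purely illustrative:

```python
# Rough sketch of a "downvotes cost the voter too" reputation rule.
# The values below are made up, not Stack Overflow's real ones.
REP_LOSS_TARGET = 2   # what the downvoted author loses
REP_COST_VOTER = 1    # what the downvoter pays for casting it

def apply_downvote(reputation: dict, voter: str, author: str) -> None:
    """Apply one downvote: the author takes a hit, and the voter pays a smaller cost."""
    reputation[author] = reputation.get(author, 0) - REP_LOSS_TARGET
    reputation[voter] = reputation.get(voter, 0) - REP_COST_VOTER

# Casual drive-by downvoting gets expensive quickly.
rep = {"alice": 50, "bob": 50}
for _ in range(10):
    apply_downvote(rep, voter="alice", author="bob")
print(rep)  # {'alice': 40, 'bob': 30}
```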
> Questions of causality are pervasive in debates about social media (e.g., is social media a reflection of our societal polarization, or is it causing that polarization?)
Seems to me the best answer is "both." It's a feedback loop; existing polarization leads to polarizing content, which drives further polarization.
> Social media has been hailed as removing gatekeepers, but those gatekeepers may not all be bad.
Hear, hear! The principle of Chesterton's Fence (Chesterton's Gate?) applies here; as more and more "gatekeepers" are removed, we see more and more clearly the costs and harms of gates inadequately kept.
great points in this thread and love the idea of Stackoverflow as a better example. I worked on some ideas for downvotes (e.g. https://www.socialmediatoday.com/news/facebook-tests-updated-up-and-downvoting-for-comments-in-groups/598096/) and know they hold promise, but there are definitely nuances here as to how it is implemented.
thank you for your efforts in this space, Ravi 🙏🏼
much appreciated,
I've often wondered if a two-part/multipart system might work better. Instead of a single "down-vote" mechanism, which could mean anything from "I disagree with you" to "you, sir/madame, are being an a$$hole," I wonder if having an "I agree with this statement" heart emoji, a "karma like" emoji, or a rating-scale item like a "this was a well-argued discussion" or "thanks for at least being civil" thumbs up would work as a statistical tool.
The main topic of the title post is, after all, the psychological impact of platforms. I know from my own background that the algorithmic tools and data analytics exist under the covers to show the platform devs/managers which users/posts are getting "good" interactions, but I would like to see if we could change "good" from "drives engagement/advertising/shock value" to "this is a constructive post/discussion/comment." The devs/mods mean the existing system to be that, but it's been turned into an abusive system used to brigade and silence dissent, even well-formed and cogent dissent.
Much of this is subjective, and in daily life it means the mods and users of the communities police themselves, but exposing more tools to the platform's user base in this increasingly tech-savvy world, not fewer, would be good in the long run. Giving the mods the ability to point out "hey, this was a good conversation," even if they did not agree with the conclusions, might be a good or at least helpful tool. In my school days, middle school through college, the great teachers took time to recognize those who took the time to at least try to communicate respectfully and with good arguments. We should try to move back in that direction. IMO.
However, in the end, no technology, platform or algorithm can/will take the place of personal responsibility for one's own behavior. That starts in the "meat world".
yes, the interactions on a post need to change. I can think of 4 good ones:
- accurate/inaccurate
- positive/negative tone
- relevant/not relevant (I understand that this will be personal to the user)
- important/unimportant
would love to hear what everyone thinks of this list.
As for the "meat world", IMHO the best experiences online will mirror our traditional offline existence as much as possible.
I like the list of options. As a recovering product manager, this would be a great place to start a storyboard/user story for requirements... must... resist...
With the caveat that all of these are subjective unless they are tied to comments for context, (debate, Socratic discussion, verbal tennis, etc.) I can see them being very useful. I've seen versions of some of them but not a good implementation of all of them together.
"- accurate/inaccurate" (By what measure, does this option require the inclusion of contextual accuracy/inaccuracy citation, for a lightweight app this is pretty hard to include but on the other end it leads to a Wiki-like situation, which had its own bias.
"- positive/negative tone" (Seems like a good measurement over a large sample pool)
"- relevant/not relevant (I understand that this will be personal to the user)" (given to mods and "power users" this can be a very good tool
"- important/unimportant" (Included in the context of the rest of these items it presents a really good way to replace the "upvote" with something with more context.)
ha! recovering product mgr... one day at a time...
thanks for your reply (see my reply would be unimportant, but positive, and n/a for the other 2... )
oh, and I've implemented a scale, so you can do -2, -1, +1, +2 on each interaction
I like the scale idea. It gives the ability to show the "force" of a reaction, but it also still allows groups to manipulate the system if they are determined to make a point.
The n/a option is good as well, but I would think you could just assume n/a if someone read the item (a metric that already exists) and chose not to vote. The right set of options should allow the user to vote without forcing them to choose n/a.
I could be wrong, and multiple systems should probably be tried. My main concern is with overly simplified systems that drive toward overly simplified assumptions. Overly complex ones would drive toward less interaction.
A middle ground should be strived for. Something like strongly agree, agree, neutral, disagree, strongly disagree (your +2, +1, 0, -1, -2) mirrors that. I just think, IMO, there should be multiple variables used to measure the interaction.
oh, and I believe it best to make these interactions anonymous. Whatever I label a post, only the system knows it, not the other users. That way, we keep the interactions honest, and deprive the poster of the dopamine hit when they get a lot of likes. It'll keep the post more honest in nature and the interactions as well.
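Purely as an illustration of how the pieces discussed above might fit together (the four axes, the -2/-1/+1/+2 scale, n/a, and anonymous aggregation), here is a minimal sketch; every name and number in it is invented for the example:

```python
from dataclasses import dataclass, field
from statistics import mean

AXES = ("accurate", "tone", "relevant", "important")  # the four proposed dimensions
SCALE = (-2, -1, 1, 2)                                # strength of reaction

@dataclass
class PostRatings:
    """Collects anonymous multi-axis ratings; only aggregates are ever shown."""
    votes: dict = field(default_factory=lambda: {axis: [] for axis in AXES})

    def rate(self, **scores) -> None:
        # Each rating addresses every axis; None marks n/a for an axis the voter skips.
        for axis in AXES:
            value = scores.get(axis)            # missing or None means n/a
            if value is not None and value not in SCALE:
                raise ValueError(f"{axis} must be one of {SCALE} or None")
            if value is not None:
                self.votes[axis].append(value)  # voter identity is never stored

    def summary(self) -> dict:
        return {axis: round(mean(vals), 2) if vals else None
                for axis, vals in self.votes.items()}

post = PostRatings()
post.rate(accurate=2, tone=1, relevant=None, important=-1)
post.rate(accurate=1, tone=-2, relevant=1)
print(post.summary())  # {'accurate': 1.5, 'tone': -0.5, 'relevant': 1, 'important': -1}
```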
I agree with the option to have anonymous interactions, however, they should be verified and tied to a real person IMO. Bots are a large part of the manipulation problem I and many others are concerned about.
How we implement this is purely a technical problem. How we keep it anonymous and protect it is a policy/compliance issue requiring penalties and civil/criminal punishment for the platforms/governments/civilian actors who violate the policy (e.g., GDPR/HIPAA).
It won't keep it from happening but, just like good forest fire prevention, it would keep the massive, town-destroying, life-taking acts from happening often.
IMO, A system that requires the user to utilize "all" of the voting options if they use "any" of the options could tamp down some of the random "I'm in a bad mood and feel like kicking sand in someone's face" actions that are common in anonymous, quick, and spiteful social media interactions.
Many platforms are starting to build multipart survey options (LinkedIn, Facebook, Twitter) that allow the user to give the dev/mods feedback on what kind of experience they want to have. Yes, this allows for echo chambers but they could/should be by choice. We as individuals get to have freedom of association (or we should).
If a user wants to have interactions beyond the bubble they find comfortable, let them poke their head out or stick their toe in the larger stream "when and if they want to".
Not by force, fake, or fraud.
yes, all interactions must be tied to a user. Interesting point about forcing all the interactions to be used, but then, as per my reply above, an n/a option might be useful.
IMO, I think it's healthy to provide vacations from the echo chamber. Forcibly, yes, but no one will get hurt. And a solid reputation system will ensure that contrarian points will come from a reputable source.
“I think it's healthy to provide vacations from the echo chamber. Forcibly, yes, but no one will get hurt.” ...
Sounds an awful lot like reeducation...
I don't think you mean it that way, because I've been interacting with you enough to have an inclination to believe that. I don't KNOW you or your motives, but I'm choosing to give YOU the benefit of the doubt at the moment.
IMO, however, force WILL eventually be abused. Choice is critical. If one chooses to be ignorant of the world, then one also chooses the consequences. If one is forced into something, it might be helpful, but it also might injure. Informed consent is required for freedom.
Convince me I should listen to your voice if I choose to engage. Don’t force me to listen.
"This might be more effective, but it's also possible that it could lead to whole new forms of bullying and harassment. Unless the threshold for making it onto the downvoted comments list was unreasonably high, it wouldn't be particularly difficult for malicious users to brigade somebody they didn't like and make them look like a problem." I was just thinking this.
I came here to say something similar, but you said it way better. I feel like Reddit is a place where anyone with a dissenting opinion gets "shouted down", to the point where I don't ever post/comment on anything. I'm not even an avid user and I can see how it has become an echo chamber (or series of echo chambers) for groupthink. There's probably a better way - I like the StackOverflow model that you mentioned
Reddit shows many of the signs of the social media system in the _Black Mirror_ episode "Nosedive," where conformity is prized and contrarian thinking punished. Even attempting a middle ground on some of the larger subreddits will be downvoted to oblivion and risk the user a banning.
I was naturally sceptical of downvotes. In some areas, like the trans debate, the activist bases on both sides seem to be more hostile to reasoned heterodox voices which take a more empirical view, aiming at some form of compromise (like cross-sex hormones, in the 20% of acute cases of GD, somewhat earlier than 18, to preserve the ability to pass).
But reading your explanation of a reputational hit, it seems more plausible. However, I disagree with the presumption that it should be small. I guess it would be a matter of calibration, but I would start from the basis of an equal hit to reputation for those who do the 'punishing'.
I also think we should look to the example of speeding fines. There are plenty of cities where wealthy individuals driving expensive cars rack up significant numbers of tickets before their behaviour is checked. Perhaps a flat tax: reputationally, those with higher status and greater influence should probably pay a percentage of their reputation for smacking down the little guy, provided the little guy isn't in the habit of harassing them first.
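A rough sketch of what that reputational "flat tax" on punishing could look like, purely illustrative; the base percentage and the punching-down multiplier below are arbitrary placeholder numbers, not anything proposed in the article:

```python
def downvote_cost(voter_reputation: float, target_reputation: float,
                  base_rate: float = 0.01) -> float:
    """Hypothetical rule: a downvote costs the voter a fixed percentage of
    their own reputation, so high-status accounts pay more in absolute terms
    for smacking down the little guy."""
    cost = base_rate * voter_reputation
    # Optional calibration knob: punching down (large reputation gap) costs extra.
    if voter_reputation > target_reputation:
        cost *= 1.5
    return cost

# Example: a heavyweight account pays far more than a newcomer for the same downvote.
print(downvote_cost(voter_reputation=10_000, target_reputation=50))  # 150.0
print(downvote_cost(voter_reputation=100, target_reputation=50))     # 1.5
```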
A system that allows users to "see" the statistical view of the "voters" history could provide some context.
For example, say I'm in a conversation with a group on a topic I find important, and a set of "votes" is provided. I would love to see whether particular, some, or all of the "voters" have a statistically relevant history, to me. Do they regularly make negative votes? Do they contribute to lifting topics that are important/relevant to me? Do they show a particular bias that is relevant to me?
This information could give me (the user) the ability to understand both my impact on the overall conversation and the type of interactions that are occurring, and whether I should care based on my own moral variables. It would also allow lurkers to follow the flow and extract information behind the scenes without having to read every message or expose themselves to unwanted attention. (Social anxiety is real and disabling, and can create resentment at being excluded and fear of inclusion because of "bullies in the play-yard.")
For me personally, I don't, and shouldn't, care "who" they are, where they come from, or what they think from a moral/ethical standpoint. I just want to know if they are good or bad contributors so I can make a more informed choice about whether or not to interact with or ignore them.
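To make the idea concrete, here is a minimal sketch of the kind of voter-history summary I have in mind; the field names and the -2..+2 vote values are assumptions carried over from the thread above, not any platform's actual metrics:

```python
from collections import Counter

def voter_profile(vote_history: list[int]) -> dict:
    """Summarize one account's past votes (on the -2..+2 scale) so a user can
    see, statistically, whether this voter mostly tears down or lifts up.
    Purely illustrative; a real platform would compute this server-side."""
    counts = Counter(vote_history)
    total = len(vote_history) or 1  # avoid dividing by zero for a new account
    return {
        "total_votes": len(vote_history),
        "negative_share": sum(n for vote, n in counts.items() if vote < 0) / total,
        "positive_share": sum(n for vote, n in counts.items() if vote > 0) / total,
        "mean_vote": sum(vote_history) / total,
    }

# Example: an account that mostly downvotes.
print(voter_profile([-2, -2, -1, 0, 1]))
```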
Brigading/ratio'ing are common terms, but ones that offer little visibility to the one being brigaded/ratio'ed in terms of context. I'd like to see more visibility into the technique, the actual message being delivered, and by whom/what.
I know for a fact from personal and professional experience that this kind of information exists and is used by some platforms to shape policy and direction of movement (for advertisers, political messaging, or actual benevolent purposes). IMO, we (the users) should have more visibility into, and use of, the tools used to "push the sheep around the fields".
No social interaction can be assured to be 100% safe. However, we can and should (IMO) do better at providing social tools/cues to people who do choose to interact for whatever reason. We have tone, volume, facial and body expressions that are learned subconsciously and taught to us throughout our entire lives in "meat space".
IMO, We need better tools in cyberspace to increase the likelihood of positive or at least productive (by whatever measure you deem productive) interactions.
Don't agree with everything in this piece, but I like the analogy to "building codes" a lot. I feel like most people's eyes glaze over when you mention the word "algorithm" but everyone knows what you mean by electrical box. However, it took a lot of terrible catastrophes before there were building codes enforced. How do we limit the damage today? What is the tipping point to say, "Now we must put into practice these codes?" And how do we agree on those codes?
I think we need some independent organization(s), research groups, and/or government organizations to head this up - people with real working knowledge of how these platforms work, how they got the way they are, and how we might fix them. I'm very tired of politicians/lawyers acting like "experts" in other fields when really they need to take advice from smart people like Haidt and Ravi. I hope they are doing just that.
As always, the fundamental issue will be who determines what is 'mis' or 'dis' information.
It's a great strategy to keep the debate going forever while nothing gets done in the meantime, which favors the "mis and dis information" economy of the platforms. Genius!
Exactly! Who watches the watchers?
I agree with everything Ravi suggests but as a child and adolescent psychiatrist I don’t think we should leave the role of social media gate keeper to parents. There is a very real risk that young people who are already disadvantaged by coming from troubled/very stressed families will be further disadvantaged by having unrestricted access to social media. Not to mention the huge difficulty parents already report trying to set limits on their adolescents’ use of social media/technology. If we feel social media is damaging for young people, society should find ways to protect them from the risk as we currently do with alcohol, smoking etc even if these efforts are only partly successful.
I would suggest a slight modification to your point. I nod to your experience as a psychiatrist. However, your comment "I don’t think we should leave the role of social media gate keeper to parents" really 'jerks my chain', as my grandmother used to say.
As a parent of two adult sons who grew up in the first social media cohort, I can attest firsthand that social media can be harmful to adolescents, middle school, and teenage children. However, no one, and I mean no one gets to make first-order (safety, comfort, happiness) decisions about my kids without me and my partner (wife in this case) being the gatekeeper. Give me better tools, help me get to where we want/need to go, and provide compassionate assistance to those without guidance, but don't TELL me what I can or can't do with my family.
Obviously, a bit of hyperbole to get my point across. I don't intend to pick a fight.
I am 100% on board with protective guardrails for kids, families, and adults who don't know or can't help themselves, in the form of product labels, content warnings, and usage criteria. But PLEASE don't suggest creating yet another social program (regulation) that will solve this issue. It has to start with each of us as individuals, taking our own responsibility to do the best we can. Give me better tools, visibility into the way the system works, and options to control things for myself and my family, but please don't allow the government or the product maker to assume they know what's best for each of us or else they will (based on history) make designs for all of us that don't really work for any of us.
I think you are right that parents need to be the ultimate gatekeepers here. I think Penny’s point is that there are many kids whose parents are unwilling to perform that function, and they need to be protected too.
I heartily agree with Penny on that point.
My statement... "and provide compassionate assistance to those without guidance"... means helping parents, grandparents, extended family, teachers, friends, churches, local city programs, Boy/Girl Scouts (the original ones, not the ones that exist today), etc...(except the federal gov.) assist in this issue.
The best way for them to be able to assist (IMO) is to give everyone more visibility into, tools to utilize and the ability to influence the systems our children interact with (including public school).
On a personal level, I have compassion, sympathy, and empathy for kids who live in situations where they don't have adequate leadership. I am a champion of personal responsibility and of giving back to our youth. However, I am very wary, scared, and disillusioned with overarching/overreaching "programs" that are intended to help but end up as vehicles for graft, corruption, and power for a government too far away from the source of the issue. Many issues are cultural, local, and specific to the individual child.
IMO, From a product development perspective, we need to incentivize the platforms to build more tools for the above list to utilize. That is a detailed and technical conversation maybe best had in other locations but that is what I am trying to build consensus for.
Let the consumer see, manage and control our own experiences without manipulation, trickery, and outright fraud.
I totally agree, and as a clinical social worker I especially appreciate your point about more and more social programming. I have found over the years that social programming is usually aiming a firehose at the smoke rising from the fire when it needs to aim at the base of the flame, and I don't know that social programming is equipped to do that.
OK, I can't resist the opportunity to expand and play off your statement to "build a better metaphor"
IMO,
Social programming today is a large and powerful hose emerging from marble buildings and nondescript office buildings.
Aimed across vast distances that lie between there and the blaze.
Millions, billions, trillions of gallons of water($) are needed to propel it the distance.
It spreads out in a fan and drops water on everyone in between like a cold rain unasked for but endured "for the good of everyone".
Its power and spread knock down some of the smoke, making a splash and a visible impact.
Watching from afar it looks impressive and provides an excuse to turn and "go about your business".
The firefighters on the ground and the residents of the structure in flames fight on with the resources they have, knowing the mist does little but make them wet.
For those caught in the blaze?
Will someone there rush in to help? Will the local assistance get to them in time?
They wait to be rescued or to be burned alive. Quickly if lucky or slowly if not.
Damaged for life either way.
Ok... prose moment is over... Thank you for the opportunity to scratch an itch. Full credit for the original metaphor.
Reddit's thumbs up/down system does not reward good content. It famously penalizes contrarian thinking and rewards conformity. What defines conformity depends on the subreddit, but center and center-right commentary anywhere on Reddit gets downvoted outside of a minuscule number of subreddits. In addition, contrarian voices on Reddit are often tracked down outside of a specific subreddit and the user subsequently banned within that subreddit, even if nothing written by that user in that subreddit was outside the bounds. Reddit, if anything, is a warning, not a positive example.
Secondly, the social reward system advocated in this piece isn't new either. We saw it in the _Black Mirror_ episode "Nosedive," and we see it on display in China. Is this what we want? Hardly. If anything, a system such as this again penalizes contrarians and anyone with a "conspiracy theory"—you know, like "SARS-Cov-2 came out of the virus lab in Wuhan," the "crazy" stuff that we learn after the fact was spot on.
I’m of the opinion that social media providers are morally culpable for the marked rise in teen and young adult depression and suicide since 2011. They made only token efforts to safeguard the most psychologically and emotionally vulnerable segment of our population, claiming the mantle of Section 230 protection at the same time their employees were curating content to suppress that with which they disagree.
Thank you William, happy to see that someone keeps hold of the helm on the core issue of this entire conversation.
It would already be a great change if ANYONE from those companies speaking on this issue started the conversation by apologizing for the millions of victims their products have generated, along with a public commitment never to allow so much harm to be done again. Without an apology, without repentance, and most of all without punishment, what behavior can we expect?
You are probably right about the negative impact on young adults mental health: https://open.substack.com/pub/causalinf/p/did-facebook-hurt-our-mental-health
"We should help parents regain their rights to be gatekeepers for their children."
I sympathize with this, but I fear it’s a losing battle. Children and youth will always be the primary targets of big tech: get them young, and get them hooked, thus ensuring years of profit.
There is also an underlying values problem. Our culture has lost the ability to say a healthy “No”, let alone to encourage anything like the reclamation of parental “rights”.
The first step to solving a problem is recognizing that it exists.
Our culture is finally beginning to accept that not saying "no" is a serious problem. Don't despair; things are slowly but surely looking up.
I hope you’re right. As a parent with concerns about digital tech use, I have definitely found it easier to say “yes” than “no”, not only because of the addictive nature of many forms of digital tech, but because there is an implicit assumption in our culture that “no” is somehow harsh or unfair.
And yet, interestingly, whenever my wife and I have been able to set strong limits for our children, as well as opportunities for them to shift their energies elsewhere, the result has been pleasantly surprising: the kids suddenly find creativity they didn’t know they had.
Yeah, limits tend to do that. "Necessity is the mother of invention" and all that...
Congratulations for (at least sometimes) being able to set strong limits for your children!
....and who forged that culture? It's a well-crafted incident.
Leading you to give up is the devil's work when it has not conquered you.
Ravi's suggestion "Focus on accountability, not identity" seems clearly right to me. Anonymous accounts can develop a strong reputation - eg the DRASTIC group, many anonymous, who contributed so much to understanding the origins of Covid. Anonymity is crucial to the ability to state dissenting views without fear of personal repercussions.
Agreed. Good anonymous accounts develop a reputation and strive to maintain it. The same was true over 200 years ago with the anonymous essays published in the Federalist.
Norman, what if you knew that "Anonymity is crucial to the ability to state dissenting views without fear of personal repercussions" for the 5% of people in some remote countries who need it, but that it also served the 95% who use it to do harm, including to children and people in your own country? Would you consider that price a democratic deal?
Can you give me an example of how it will harm people?
Also note that on the need for anonymity to state dissenting views without fear of repercussion, I was specifically thinking of my own country, Canada, not just remote countries.
You really can't think of any? Let me help... millions of children suddenly available to tons of predators (according to NCOSE there are 750,000 predators online at any time of day).
What if the efficacy of "dissenting" online is a little inflated (skilfully by the companies)? After all our ancestors achieved so much (more) with so much less than us. It's not that dissent was born with Social Media. We know Social Media exploit human weaknesses and beliefs, and they are very devious. If they were exploiting our need to "dissent" and protect "dissent" how could we tell?
Eckhart Tolle explains that we humans are addicted to our words. These guys in Silicon Valley created a genius machine that makes us talk while we lose all our fundamental rights. With children in the front line.
Three fundamental issues are not addressed. In the first place, digital devices and platforms create addictive behavior by stimulating the dopamine effect. In the second place, the algorithms aim to elicit strong, engaged reactions in the users. Therefore, the business model basically automates an abusive environment. And in the third place, as we have seen in the Twitter Files, the application of ‘visibility filters’ really means tech platforms create an alternate reality that, inasmuch as people are not warned explicitly, innocent users are going to mistake for the world out there. Isn’t that induced psychosis?
All of this smells like the revenge of the nerds, which is what happens when you subject (formerly) human relations to a system that calculates responses.
I apologize if I'm being unnecessarily argumentative, but the "revenge of the nerds" comment struck a chord. Most of the "nerds" I know (being one of them) are good people who do not want revenge. They want to develop cool products, get rich, and live normal lives (pretty much like the rest of us).
The challenge we as a society have to address (IMO) is that the tools/products/technologies they have developed, are developing, and will develop are being hijacked and manipulated by the same class of people who have always wanted, and will always want, more power (money/influence/etc.).
From the printing press, to the TV, to email, to "activism", we all have to be on the watch for that/those who are manipulating us in a way that is unhealthy, without taking the easy route by blaming the tool (internet, guns, hammers, the wheel).
Getting off soap box...
I don’t mind your being argumentative at all! I use the term as a provocation, of course, though not to instill anger, but rather to consider and discuss matters that are not often addressed.
In a certain sense, I am a nerd myself.
What I try to address is the fact that there is a huge irony (and I believe a risk) in entrusting human interaction (on ‘social’ media) to people with a numerical predisposition. I am not saying that such people have less value. I believe people with all sorts of talents and tendencies have value and should be considered as contributing to the whole of human interactions called society. But I do believe that in this case we have a serious mismatch. I have written about this issue twice on my Substack, History is Now. You are welcome to have a look and disagree or debate.
Appreciate the response, tone, and offer (I'll take a look).
I agree with the principle of your premise about the use of addictive behaviors and chemical responses. Gambling, cigarettes, and sugar are all regulated items exactly because of this. In the turn of the wheel, most of them became regulated not because of the initial act/chemical/amount of the substance in question but because of the eventual abuse of them by those who had the power to manipulate them in a way to take advantage of weaknesses in our biology-psychology.
IMO we could more successfully focus on the regulation of social media at the level of targeted audience, subject matter appropriateness (age/content/time), and exposition of manipulative techniques used in algorithmic advertising/content management.
The tools/content/activity (with a few exceptions) are not inherently bad/evil/harmful. Their use can/could be if we don't keep our eye on them. And we have absolutely taken our eye off the ball.
As a classical liberal, I am skeptical of the morality or effectiveness of (legal) prohibitions (the grand victory of drugs in the war against them being exhibit no. 1). But what I think would be a sensible step in the right direction is an insert/informed consent process before registration. The fact is that most people have no idea what actually happens in the field of data privacy/tracking/profiling/targeting etc. It is rather silly that we blindly submit our data and person to be subjected to all these invisible and unknowable processes.
Even a consumer organization could create a number of categories that cover the range of processing involved. By rendering this transparent beforehand, and assuming that the cheapest platforms indeed commodify the user and so will become less of a default option, people are at least and at last given a choice.
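As a sketch of how that pre-registration disclosure might be structured (the category names and wording below are invented for illustration, not an existing standard):

```python
# Hypothetical disclosure categories a consumer organization might standardize.
PROCESSING_CATEGORIES = {
    "tracking":  "Your activity is tracked across other sites and apps.",
    "profiling": "A behavioral profile is built and kept about you.",
    "targeting": "That profile is used to target ads and content at you.",
    "resale":    "Your data may be sold or shared with third parties.",
}

def registration_gate(acknowledged: set[str]) -> bool:
    """Only allow sign-up once every applicable category has been shown and
    explicitly acknowledged, so consent is informed rather than implied."""
    return acknowledged >= PROCESSING_CATEGORIES.keys()

# Example: sign-up blocked until all four disclosures are acknowledged.
print(registration_gate({"tracking", "profiling"}))   # False
print(registration_gate(set(PROCESSING_CATEGORIES)))  # True
```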
I share your skepticism. However, I do see some benefit in providing legal controls over, say, manufacturers, service providers, and publishers that regulate around the edges of the system (kids, elderly, disabled, abused/trafficked) to protect the consumer/user from abuse/manipulation of content meant to elicit a behavior not expected or desired.
Informed consent is great if implemented well. I have issues with bright lights and sounds; warnings about this type of content help me mitigate or avoid things that would have bad health consequences. Some people are allergic to peanuts, hence peanut labels. Parents want to be able to regulate the use of content for their families; give them back laws/options that allow them to dictate what their children are exposed to.
Buyer beware is a fundamental tenet of my own personal moral/ethical foundation. However, not all people have been prepared for life the same way. Not all are aware of the potential or existence of the types of manipulation that exist or that are even possible. I personally want an unadulterated experience, but one that clearly outlines the risks if they are not apparent (big scary teeth, thorns, or bright colors denoting poison). IMO, the "wild west" is not sustainable for any society, and "from cradle to grave" is just as dangerous and even more insidious, even if it takes longer to kill. Guardrails are good, but they have to be tweaked constantly/consistently and require an informed populace.
IMO the answer lies in the soup-mix of all of these approaches at different times, in different places, and for different populations. The fact that this vehicle (Substack) and others exist and that a plurality of conversations seems to be constructive means to me that there is hope in the complexity.
It’s clear that a single measure is not going to drag us out of this massive mess. Let’s not forget about litigation, either, as some corporations simply defraud their customers. Ultimately, I think the answer has to be cultural. Technology should be used for our good, yet it is often attributed intrinsic value. And I believe it triggers our negativity bias in a way that sends us into a vicious cycle of increased insecurity (mental obesity).
I think you stump well, Nerd person.
I can agree with your points. I am probably less a nerd than just an old fool, but the greed that drives most enterprise is as innate in humanity as breath.
A hammer is both a great tool and a deadly weapon. Operator error for the latter.
Eric, what you describe resonates, except for the point about the tool. As extensively explained by the Center for Humane Technology in its Foundations of Humane Technology course https://www.humanetech.com/course, it is misleading to approach persuasive technology as just another "tool". The difference is huge, as the asymmetry of power between the tool and the user is new. The supercomputer pointed at you, studying every single emotion and weakness and using it to manipulate you into doing what it wants, has no precedent. This technology is ALIVE and has an agenda (and it comes with a legal shield, no transparency obligation toward customers, and user illiteracy). Guns don't have an agenda; hammers don't either.
It's like a cigarette built not only to make you addicted but also to rewire your brain in the way that "third parties" pay for, against your best interest and society's best interest. It's a little different from just another tool whose effects depend on those who use it... isn't it?
Thank you Benjamin for recentering the conversation on the core issues.
1. Dopamine: Prof Andrew Huberman explains that the pain of artificially generated dopamine goes way beyond just creating addiction: Neuroscientist – What Overusing Social Media Does To Your Brain https://www.youtube.com/watch?v=Zh-AcF_4Hao
2. Algorithm: I love the "Automated Abusive Environment" concept to describe the toxic business model. Unless that is disabled I am afraid that it is like debating about giving children water wings to swim in the radioactive waters of the Fukushima nuclear plant after the Tsunami.
3. Alternate Reality Effects: The creation of an alternate, fake reality has real consequences in people's minds and lives, but also in governments' agendas, which brings us to the greater issue that Prof Shoshana Zuboff has covered extensively. Another elephant in the room.
Globally, as someone who listened many times former FB/META employee Frances Haugen hearings with the US Senate, French Senate, and EU Parliament I find the disconnection between the story told in the post and the evidence-supported story she told discomforting.
Yes, I noticed, too. She specified the algorithms as a point of concern, but nobody was interested in pursuing it. I believe the explanation is not that complicated: Big Tech wants them for their bottom line and Big Gov wants them for control.
Thank you Benjamin for keeping the focus on the core issues.
I love the "automated abusive environment" that the business concept creates by default.
While I understand the need to explore solutions, unless the two components above are disabled, all solutions appear as effective as providing water wings to let children (and citizens in general) swim in the radioactive waters of the Fukushima nuclear plant after the tsunami.
Interestingly, although all the studies and popular wisdom highlight that the best childhood a child can have is a social-media-free childhood, NEVER have we heard those companies consider solutions like giving up children as customers and acknowledging their ineptitude (in the best case).
Perhaps because parents are complicit. Or more compassionately spoken: they are addicted, too. I am reading Andy Crouch’s book right now. I would recommend it, as well - of course - as my essay Fear No More. It’s a battle, Sara, but one worth fighting.
True. Parents were the first to be enticed to enter the dehumanizing and God-erasing cage. It's crazy how they captured and reprogrammed our collective relational dynamics and depleted them of meaning- the very source of strength and resilience for humans.
At a very small scale I do my best by raising awareness and promoting, through digital hygiene, technology that brings us closer, helps us thrive, and most of all can be turned off without regrets.
And YES, it is a battle worth fighting until the very last breath. Because of people like Prof Jonathan Haidt, Frances Haugen, Tristan Harris, you and I and many people here and everywhere in the world, the Good will triumph, in the end.
PS: thank you for the book tip, I will have a look!
Whoa, bruh, you nailed it.
Reddit is well-known for the effectiveness of its woke thought police - strange to cite it as any kind of model. Rather, dissenters from the woke regime should realize that speech platforms can never escape politics, and redouble efforts to build alternatives to Big Tech's woke regime.
Jon Askonas and Ari Schulman, "Why Speech Platforms Can Never Escape Politics"
https://www.nationalaffairs.com/why-speech-platforms-can-never-escape-politics
I couldn't agree more with the concept of accountability versus authentication of a real identity. Although I understand the motivation to identify the real-life person behind these comments, that just opens things up to future abuse by intelligence agencies when they get their hands on this information. It could be Trumpers going after wokesters or vice versa. Not being able to downvote comments on social media reminds me of the phenomenon where all participants in a competition get a ribbon at the end.
It's already a source of frustration for me that I have to lie when I create my kids' accounts on various websites. Because as they're both under 13, certain sites are horribly crippled. Having an under 13 Google account, for instance, made it unusable for us, and fixing it was complicated though not impossible. (We set his birthday to a day before he turned 13 and then let it roll over).
It's fine if parents want to create child accounts for their kids, but existing legislation, aside from being easily surmountable, actually reduces parental choice by forcing restrictions onto child accounts with no way to remove them.
I'm extremely doubtful that any additional legislation wouldn't just be more of the same, but worse.
I hear this a lot. The perversion is in so many aspects of these products that it takes a manual to list them. Cognitive dissonance all over the place. Catch-22 situations.
For sure #1 is: "You have to choose whether to marginalize your child by not having a social account, or let them be maimed by tech companies' products and anonymous malevolent 'third parties' (hiding behind the social media legal shield...)."
#2: "You have to lie to create an account for your child, so you teach your child that lying is OK."
#3: "Your child grows through the relationship with their parents, which is based on trust, but parents are led to use all kinds of surveillance tools and to create fake accounts to supervise their child." And so on....
It's quicksand. We must extract kids, not give them a lifebuoy.
Super interesting piece on a critical issue. As a crypto professional, one of the aspects of our industry that I feel is most misunderstood is that we can be a tool to make the internet SAFER. Point 1 of this piece is possible on a mass scale by applying cryptographic identity solutions. Authentication is one of the primary uses of blockchains. Frank McCourt and Project Liberty are very articulate about the need to make the internet safer and how crypto is a huge part of doing so.
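For readers who want a concrete picture of what "cryptographic identity" can mean, here is a minimal sketch using the third-party Python `cryptography` package; it shows a generic keypair signature, not any specific blockchain or Project Liberty protocol, and every name in it is mine for illustration:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The private key never leaves the user's device; the public key acts as the
# persistent pseudonym that accumulates (or loses) reputation over time.
private_key = Ed25519PrivateKey.generate()
pseudonym = private_key.public_key()

post = b"my comment, posted under a stable but anonymous identity"
signature = private_key.sign(post)

# Anyone (the platform, other users) can verify the post really came from the
# holder of that pseudonym: accountability without revealing a real name.
try:
    pseudonym.verify(signature, post)
    print("verified: same account as before")
except InvalidSignature:
    print("rejected: not the pseudonym's owner")
```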
I’m just a lowly stay-at-home mom… why are tech people doing this to us? Tech people must really hate normal human beings. I’m definitely a terrible parent because I have this bizarre idea that if I need to remind my child to brush his teeth, he probably doesn’t need a phone or anything with unlimited access to the internet.
Not sure whether it's "why are Tech people doing this to us"? or "Why are you parents letting Tech People doing this to us the kids"?
When did we decide that protecting our children could be outsourced to lobbies, porn industry, and all the deviance market possible and imaginable? We allow what we tolerate.
Exactly! My words just weren’t helpful. I lost the phone war today. Since none of his friends live in our neighborhood and, and, and… the “you don’t want a miserable child, do you?”
Regarding Reform 4 (Building Codes): When there are many individually developed products supporting each other, where does the buck stop? How do we think through - from a product perspective - building technical safeguards into individual products and sub products when competing with launch timelines and code revision limitations? Are there suggestions for keeping teams accountable in how they structure and build products given that the question of "who is facilitating what" is sometimes hard to pin down?
That’s kinda technical jargon that flies WAY over my widdle head. I think John Khoury hit the nail on the head with the notion that reputation will guide people wishing to avoid toxic and unproductive verbal brawling. Some do want a more confrontational exchange.