123 Comments

Eric Schmidt! I don't think so. Google has played along with this schtick from the beginning, and Eric has been there in the driver's seat since at least 2001. He has played along with, and capitalized on, Section 230 since the beginning of Google.

https://en.wikipedia.org/wiki/Section_230

Section 230 is the reason that it is nearly impossible to hold social media companies legally liable for the content posted on their platforms.

Also, Google, with Eric Schmidt at the helm, has made a pile of money from licensing the Android operating system, which is the operating system in Samsung cell phones.

Likely, a primary reason that Eric is so concerned about AI is that open-source AI poses an existential threat to Google Search (his cash-cow monopoly).

Eric never had a moral compass and I doubt that he has suddenly developed one.

People can change -- thank God! I started as a communist, embraced libertarianism (to become a better communist -- read Marx if you don't know why), re-embraced libertarianism just to be libertarian, and have gradually become a Burkean, common-good conservative. Those changes happened as I gained life experience and also as I found my country facing different problems over time.

Eric has had a lot of life experience. The man who led Google (who really did believe the "don't be evil" mantra -- even if he failed to see it in himself) is undoubtedly very different from the man writing an AI-skeptical book today.

The Atlantic article says that using algorithms to boost content is not what was meant by Section 230. I think this is a plausible angle.

Section 230 protects companies from content that is posted on their platforms, not from problems that occur because of their algorithms.

https://arstechnica.com/tech-policy/2021/10/algorithms-shouldnt-be-protected-by-section-230-facebook-whistleblower-tells-senate/

What I'm saying is that Section 230 should be eliminated so that social media and search companies would be exposed to the same liability as other companies. Why the special carve-out for social media companies?

I'm also arguing that companies should be exposed to liability for algorithms that promote addictive content or push users to more radical content.

Right. The algorithms that steer people to trouble should be grounds for suing. That is what the Ars Technica and Atlantic articles are saying. That could happen without repealing Section 230, which protects the companies that host websites, community bulletin boards, and eBay. If Section 230 went away, I could sue Substack over your comment. That is not a good way to go. What would help, and what Jon and your friend Eric are saying, is to make social media companies liable for the algorithms promoting content. That seems winnable.

Obviously, if Section 230 were repealed, you would not want trivial lawsuits.

The problem is that right now, companies are protected not only from being sued over frivolous online disagreements, but also from liability for online trafficking, the grooming of minors, identity theft, misrepresentation, and behavior that is known to elevate anxiety disorders and even suicide.

In the real world, almost all frivolous disagreements between two parties never see the light of day in the courts; the courts won't hear these cases. If Section 230 were repealed, most frivolous online disagreements that did lead to lawsuits would likely be thrown out of court.

But the trafficking of minors by older persons is covered by numerous laws. As it stands right now, Section 230 protects companies from responsibility for crimes that, in the world outside the internet, most companies would be held liable for.

Other, less serious, online problems are identity theft and misrepresentation. These are covered by existing laws, yet due to Section 230, social media companies are not exposed to the full extent of the law the way other companies are.

I realize that there are other online problems that are less clear-cut with regard to Section 230. But Section 230, as it stands right now, is simply providing social media companies with immunity from liability for an array of serious and egregious crimes.

The magnitude of the harm should be given priority over "freedom of speech." I understand it's your First Amendment, but it has been skillfully weaponized against you by Big Tech, just as in Europe they have weaponized "privacy" against us: violating it for their own purposes but raising it as a wall when parents wanted to see what their children liked on Instagram, or when French senators asked to get rid of anonymity because it was at the root of cyberbullying! These people's core business is exploiting the vulnerabilities of individuals and societies... and they excel at doing it inside and outside their platforms.

I think that if a social media platform doesn't moderate or limit the speech of its posters, Section 230 is OK. But since most of them end up pruning publicly posted content (and even private content at times), they are no longer just hosting the platform. They are taking an active role, so they shouldn't get Section 230 protection.

We have dwelled on this one too long. We can't fight for democratic values outside our borders while being unable to take courageous action to protect those values at home. Does anyone in the US notice that Section 230 is not just a domestic conversation, and that the toll is being paid by plenty of countries across the planet? The answer should take into account the size of the damage done (and still being done). Section 230 has taken the internet outside the house of democracy, and tech companies outside the rule of law... everywhere at once.

Amending it is one thing, repealing is another.

OK.

Comment deleted (May 12, 2023, edited)

Some things shielded by Section 230 that have very little or nothing to do with protecting free speech:

Human trafficking

Hardcore porn (I know, some would argue against this, but the Pornhub of today isn't anything like the Playboy or even Hustler of yesterday.)

Online grooming of minors

Surveillance by social media companies and their collaborators (and use of Section 230 to hide it).

Use of algorithms to direct social media users to addictive content that they would not otherwise be interested in.

Protection from liability for presenting demonstrably false information.

...because Section 230 was never created to protect free speech; it was created to launch the platform economy. Later, Big Tech waved freedom of speech around to trigger Americans (and to keep the legal shield for its own dark business).

I was at the University of British Columbia studying electrical engineering between 1991 and 1994. This happened while I was there:

The Origin of Silicon Valley's Dysfunctional Attitude Towards Hate Speech

https://www.newyorker.com/tech/annals-of-technology/origin-silicon-valley-dysfunctional-attitude-toward-hate-speech

It's a long description of the various efforts of professors at Stanford's computer science department to dismiss concerns about hate speech. The original joke that started that discussion just wasn't that funny. It's doubtful that people really needed to hear yet another stereotypical joke about Jews and Scots. That is why someone at MIT complained and why McGill University stopped hosting the newsgroup rec.humor.funny. But not Stanford. The fight to preserve a distasteful, unfunny joke became their raison d'être. Peter Thiel, then a philosophy student at Stanford, was a staunch defender of an absolutist position, defending all speech, no matter how distasteful, damaging, or petty it might be, consequences be damned:

https://stanfordpolitics.org/2017/11/27/peter-thiel-cover-story/

So, yeah, it's not an accident that Section 230 has ended up providing blanket immunity for tech companies from serious crimes. It was baked in on purpose.

Footnote:

https://www.gawker.com/this-is-why-billionaire-peter-thiel-wants-to-end-gawker-1778734026

And he did shut down Gawker. But we're still stuck with Section 230.

Well-spotted, Marnie!

I was also very surprised given his background.

But hey, I am a believer, and the situation is getting so terrible that if he is looking for an opportunity to redeem himself, then let him. At this point any help is welcome and we cannot afford to be picky. Can we?

Thank you for taking on this formidable challenge, Jonathan. I have read all of your books and frequently recommend them.

I understand AI poses a serious threat to the information landscape, but we must vigilantly guard against authoritarian tendencies in our efforts to thwart those threats lest we inadvertently empower the state and other authorities to infringe on our inalienable human rights—much as chemotherapy indiscriminately destroys healthy cells along with malignant ones.

Over the past three years, we have witnessed how governments have used the excuse of suppressing “misinformation” to silence dissident voices exposing their disinformation and lies—to lethal effect—as I’ve covered extensively at my Substack:

• “Letter to US Legislators: #DefundTheThoughtPolice” (https://margaretannaalice.substack.com/p/letter-to-us-legislators-defundthethoughtpolice)

• “Letter to the California Legislature” (https://margaretannaalice.substack.com/p/letter-to-the-california-legislature)

• “Dispatches from the New Normal Front: The Ministry of Truth’s War on ’Misinformation’” (https://margaretannaalice.substack.com/p/dispatches-from-the-new-normal-front)

My concern with the proposed reforms you have outlined here is they can easily be abused by totalitarian forces. #1, for example, would eliminate the protective cloak of privacy for whistleblowers and others attempting to expose corruption and other regime crimes, thus endangering the ability of individuals to share information that incriminates the powers enforcing this rule.

#2 is an excellent idea and one I support; same goes for #5.

#3 is a bit amorphous—I would need to understand more what you mean by requiring data transparency but am strongly in favor of transparency for government officials, agencies, and other public entities.

#4 worries me greatly as it could threaten the very platform this piece has been published on. I am extremely grateful to Chris Best and Hamish McKenzie for taking a strong stance in favor of free speech, despite ongoing pressures from pro-censorship advocates. The discussion provoked by this Note from Hamish is well worth perusing for those who wish to understand the nuances of this contentious debate:

https://substack.com/profile/3567-hamish-mckenzie/note/c-15043731

As you formulate solutions to address the challenges of AI, I ask that you never lose sight of the necessity to protect our freedom of expression. As Michelle Stiles writes in “One Idea To Rule Them All: Reverse Engineering American Propaganda”:

“The greatest attack on language is censorship and this must be resisted at every level. You cannot have a free society without free speech, period. Any attempt to argue that others must be protected from offense and hurt feelings should be utterly repudiated. No government, no company, no fact-checkers can ever be the arbiters of truth.”

See my comments about Eric Schmidt. His ties to the NSA, the Council on Foreign Relations and the Pentagon deserve scrutiny.

Yes, Kissinger is NOT pro human.

Just read his own writings and you will quickly grasp his population-control mindset: take over and OWN all the resources (food, power, water) and you CONTROL the world.

He sounds like a greedy dictator enabler.

Just like the new finger puppet of WEF, Harari.

Listening to his hatred of the "useless eater" class is almost like watching a VENTRILOQUIST show: Harari, the dummy, says what his masters actually intend to do to the masses if they are not stopped.

Then there’s Eric Schmidt... a BOARD member of Bilderberg.

Well, read the Bilderberg history and you'll conclude that Eric cannot be pro human while being part of this nefarious group.

Everyone wants to build their empire and dominate - without any consideration for the freedom of others. They have no shame about enslaving people and denying them opportunities to prosper and thrive.

https://www.theguardian.com/world/2023/may/20/bilderberg-meeting-group-lisbon-kissinger

An excerpt from the attached Guardian article on the Bilderberg meetup of the puppet masters of the world and wannabe mini-dictators who want to deny you representation and destroy nation-states. They have no loyalty to any country or to you, only to themselves as financial speculators:

“Longtime Bilderberg kingpin (Kissinger) will be delighted, or whatever dull ache he feels instead of delight, to see so many US intelligence officials at this year’s meeting. They’re Kissinger’s kind of people. Biden sent his director of national intelligence, Avril Haines, and his senior director for strategic planning at the national security council, Thomas Wright, plus a shadowy gaggle of White House strategists and spooks.”

Yes, I covered much of that in my Anatomy of a Philanthropath series :-)

• “Part 1: A Mostly Peaceful Depopulation” (https://margaretannaalice.substack.com/p/anatomy-of-a-philanthropath-dreams)

• “Part 2: Downloadable Digital Dictatorships” (https://margaretannaalice.substack.com/p/anatomy-of-a-philanthropath-dreams-947)

• “Part 3: Yuval Noah Harari: Not the Man We Think He Is?” (https://margaretannaalice.substack.com/p/anatomy-of-a-philanthropath-dreams-3fd)

👍 Will read your insightful writings, Margaret.

PS: You are also a resident of CA, I presume, where Lord Nuisance NIGHTMARE Newsom reigns. (Another infiltrated, indoctrinated, globalist WEF puppet.)

Thank you, chris, and heavens no, I am not a resident of Commiefornia 😅 I just wrote that letter to object to the medical tyranny bills under consideration at the time :-)

The fact that he of all people won a Nobel Peace Prize shows just how low the bar has been set for the past 50 years.

Oh, don’t forget Obama got one too.

For being the USA’s first HALF black President …

why is the world so “superficially color” obsessed…

It’s a joke - he received the prize for “peace”.

Really? He dropped more bombs than Bush did…

https://www.independent.co.uk/news/world/americas/us-president-barack-obama-bomb-map-drone-wars-strikes-20000-pakistan-middle-east-afghanistan-a7534851.html

The Nobel prize these days is a joke - Yasser Arafat and Henry Kissinger both won Nobel Peace Prizes, while a person TRULY DESERVING under the original founder's intent of the prize, and nominated MULTIPLE times, like Gandhi, did NOT win.

I now regard the Nobel prize as the NOT so NOBLE prize.

It goes to those who do NOT deserve any reward, for these "winners" do NO good for humanity but merely advance evil's agenda.

I like the comparison to chemo. There are situations when chemo is necessary, and I am convinced we have reached that state. This is not to say I support the specific measures recommended in the post. Stable systems can allow themselves more liberty than transitional ones, and right now the entire world appears to be in transition. I would argue that any restriction must be regularly reviewed and re-evaluated, and should be considered temporary. Chemo is to be stopped after a while.

Comment deleted (May 12, 2023, edited)

Hi Sara! I am having difficulty parsing your comment.

When you say, “consider that your first amendment led to where we are today,” I don’t know what you mean by “where we are today,” nor can I see how our First Amendment [rights] led to it.

By referencing COVID and the Twitter files, I *think* we are on the same page in recognizing that we are now living in a technocratic totalitarian state, BUT it is *because* of the censorship (Big Tech, government, MSM, and otherwise) of free speech that we are in this situation, whereas you appear to be implying the opposite.

It is *because* the voices of scientists, physicians, data analysts, independent journalists, and other individuals of integrity exposing lies, corruption, and COVID policy/product harms were silenced and smeared that the masses sleepwalked into their own enslavement and destruction, the suppression of ivermectin being but one example:

• “Letter to a Scientifically-Minded Friend” (https://margaretannaalice.substack.com/p/letter-to-a-scientifically-minded)

• “Letter to Alex Berenson on World Ivermectin Day” (https://margaretannaalice.substack.com/p/letter-to-alex-berenson-on-world)

“electing social media as the only way to grant freedom of speech”

This is a straw-man argument as I said nothing of the sort. Big Tech is only one of many tentacles of the Censorship Industrial Complex. It is arguably the most important because it effectively serves as a Ministry of Truth by memoryholing any information that contradicts their mono-narrative, so the non–critical thinkers who automatically swallow propaganda are not even aware there are contrary opinions because free speech has been abridged.

“other venues where freedom of speech can be exercised and become creative about it”

What venues would those be? Standing on a street corner and reaching a handful of random strangers instead of potentially awakening millions to the very COVID lies and tyranny you rightly acknowledge as threats?

Are you saying we should cede our inalienable right to free speech when it comes to the most powerful form of mass communication on the planet—indeed, the only avenue that puts an ordinary human being on an equal playing field with trillion-dollar corporations, which already control every other form of public speech (television, radio, newspapers, magazines, movies, the arts, education)—in the name of the dictators’ age-old trope of “do it for the children”?

As Hitler said:

“The state must declare the child to be the most precious treasure of the people. As long as the government is perceived as working for the benefit of the children, the people will happily endure almost any curtailment of liberty and almost any deprivation.”

If you truly care about saving the children, you would guard their precious right to freedom of speech with your life as it is the only thing standing between them and their future slavery.

Parents already have tools to protect their children from illicit content, and it is their responsibility to do so rather than depriving the rest of humanity of their freedom of expression. As adults, we have the right to freely exchange information without Big Mother interfering with our ability to do so in the name of “the children,” “the good of society,” and every other purported excuse totalitarian regimes have used to strip people of their rights throughout history.

I highly recommend reading the book I referenced in my original comment to understand how the muting of free speech while amplifying propaganda has created the conditions for the dystopian prison state we are hurtling toward at meteoric speed. As Michelle Stiles writes in that book:

“Individuals do not need to be ‘protected’ from ideas. Protection from ideas is a way to disable and cripple free people. The First Amendment to the Constitution ensures that nothing can stand in the way of free speech and a robust marketplace of ideas, whether they be good, bad, or ugly. Of course, therein lies the problem. Labeling and censoring ideas as ‘good’ or ‘bad,’ ‘harmful,’ or ‘offensive’ ultimately leads to control of ideas in general and cannot be done without suppressing everyone’s right to free speech. It’s either all or none.”

—“One Idea To Rule Them All: Reverse Engineering American Propaganda”

"If you truly care about saving the children, you would guard their precious right to freedom of speech with your life as it is the only thing standing between them and their future slavery."

Not to mention their present oppression.

Even with our current, albeit constrained, freedom of speech, we still had to endure the lunacy of COVID lockdowns—despite it being patently obvious even at the time that they were destructive.

https://newsletter.allfactsmatter.us/p/destroying-society-is-no-way-to-save

Yet even at that, we were spared the hell of Shanghai’s Zero COVID lockdown, the videos of which can only be described as Dantesque.

https://newsletter.allfactsmatter.us/p/shanghai-the-grim-reality-of-pandemic

We at least have some opportunity to learn from these mistakes—even the corporate media has largely conceded the lockdowns did not work.

https://newsletter.allfactsmatter.us/p/research-shows-i-was-right-in-2020

China has no such hope. If Xi Jinping decides to lock the country down again he will do so, and no one in China will stand up to stop him.

Ours may not be a perfect society, but one with at least a chance of turning away from the precipice is infinitely preferable to one fated to rush headlong over it.

Comment deleted (May 13, 2023)

This makes absolutely no sense whatsoever.

If the parents are diligent, the children will be free, particularly from any addictive aspects of technology.

In every generation, this is the order of things.

Comment deleted (May 14, 2023)

Comment deleted (May 13, 2023, edited)

Thank you for taking the time to clarify your comment, Sara, and now that I have a better understanding of your position, I think we’re more in alignment than it initially appeared.

The First Amendment already has an exception for obscenity and pornography, so those are not protected forms of speech:

https://mtsu.edu/first-amendment/article/1004/obscenity-and-pornography

The argument that we must choose between preserving free speech (the First Amendment) and protecting the children is actually a false dichotomy under US law, which already has tools to support both.

Tech companies that shield and enable criminal behavior such as pedophilia (as the Twitter files reveal) can and should also be held liable for those actions under the existing laws.

A June 25, 2021, Texas Supreme Court ruling indicated that Section 230 does not exempt tech companies from liability for sex-trafficking that occurs on their platforms:

“We do not understand Section 230 to ‘create a lawless no-man’s-land on the Internet’ in which states are powerless to impose liability on websites that knowingly or intentionally participate in the evil of online human trafficking.… Holding internet platforms accountable for the words or actions of their users is one thing, and the federal precedent uniformly dictates that Section 230 does not allow it. Holding internet platforms accountable for their own misdeeds is quite another thing. This is particularly the case for human trafficking. Congress recently amended section 230 to indicate that civil liability may be imposed on websites that violate state and federal human-trafficking laws.” (https://www.txcourts.gov/media/1452449/200434.pdf)

The UN has a dark history of association with sexual exploitation of minors, so it certainly should not be looked to as a moral authority:

https://www.independent.co.uk/voices/un-child-rape-sex-exploitation-united-nations-antonio-guterres-prosecutions-immunity-trial-a7956816.html

https://www.jpost.com/International/UN-staff-allegedly-responsbile-for-over-60000-cases-of-sexual-exploitation-542817

With the exception of crimes against humanity, international guidelines should not supersede national laws, a painful lesson that was made abundantly clear by the WHO’s abuse of power demonstrated during COVID and their escalating efforts to secure even greater controls:

• “Letter to the WHO” (https://margaretannaalice.substack.com/p/letter-to-the-who)

• “Letter to the US HHS Office of Global Affairs” (https://margaretannaalice.substack.com/p/letter-to-the-us-hhs-office-of-global)

• “What If They Threw a Pandemic and Nobody Came?” (https://margaretannaalice.substack.com/p/what-if-they-threw-a-pandemic-and)

I realize laws vary across nations, but in the US, the First Amendment has proven a robust defense in combination with laws that penalize criminal behavior, and the government should not be empowered to further infringe on our right to free speech as history shows it will indubitably abuse that power to advance totalitarian aspirations.

Thank you for your patience and kindness, Margaret.

Yes, it is not useful to reduce the debate to a dichotomy between freedom of speech and the protection of children.

"Tech companies that shield and enable criminal behavior such as pedophilia (as the Twitter files reveal) can and should also be held liable for those actions under the existing laws."

With EXPONENTIAL TECHNOLOGY, the number of harms generated exceeds the capacity of democratic institutions to deal with them. Existing laws help if they are enforceable and if enough people are there to counter the criminal behavior. PYE reported that the police department could deal with 782 CSAM cases but had been notified of 100,000!

Also, tech companies have enabled a NEW CRIME, and therefore new laws are needed to address it: BRAIN HACKING FOR MALICIOUS PURPOSES.

So, through these platforms, children have been exposed to an unsafe product: a super-powerful computer pointed at a child's brain to transform it "on demand" for rich, anonymous third parties. A NEW CRIME must be defined, and that requires new laws to protect children/users.

British Baroness Beeban Kidron's study strengthens the point made by Prof. Jonathan Haidt during his interview on The Megyn Kelly Show, when he described smartphones as the most powerful operant-conditioning machines (https://www.youtube.com/watch?v=sj87AB_Yvqo) and said that we've given over the training of our children to millions of complete strangers (I add: not equipped with the best intentions!).

"One of the things we did was we ran workshops in 28 different countries. (...) with children between the age of seven and 19, I think. What absolutely confounded all the people who crunched the numbers, as it were, about what their attitudes were was that wherever they were in the world, children had probably four fifths of their concern, 80% of their concern, absolutely identical. (...) But for the most part, the kids were talking about the hostility, the addiction, the nudges, the social and personal sort of despair that they felt in the competition, et cetera, et cetera, all the things. And of course the sexual and the violence and so on. The children put the same things because they’re all using the same services. And so now what you’re actually saying is that the development of a child, which used to be school, family and state as the sort of cultural envelope, is actually more determined by using Snap, TikTok, Instagram, YouTube, et cetera, than it is by those things." (https://techpolicy.press/a-conversation-with-baroness-beeban-kidron-on-child-online-safety/)

As to the UN, I hear your disappointment, and yet they have already made an important move:

"During its 86th session, the Committee adopted the General Comment No. 25 on children’s rights in relation to the digital environment. This General Comment sets out how the UN Convention on the Rights of the Child applies in the digital world. Indeed, it is the first international authoritative legal document recognizing explicitly that children’s rights apply both offline and online."

Let me share a beautiful quote from your 38th vice president, Hubert Humphrey: “the moral test of government is how that government treats those who are in the dawn of life, the children; those who are in the twilight of life, the elderly; and those who are in the shadows of life, the sick, the needy and the handicapped.”

Again, thank you for our exchange of thoughts, which certainly helps me make progress.

I apologize for the long-delayed response to your comment, Sara! This got buried amidst hundreds of tabs, and I only just unearthed it.

Thank you for your thoughtful response and your passion for protecting children, which I definitely empathize with. I will take your concerns into consideration and be on the lookout for a solution that preserves the sacred right of freedom of speech while also shielding children from the harms associated with technology.

"...a small number of trolls, foreign agents, and domestic jerks gain access to the megaphone that is social media, and they can do a lot of damage to trust, truth, and civility."

What does it say that two of the three examples here are purely subjective? What is a "troll" or a "jerk," other than someone you don't like? You might define these characters in one way, but that doesn't mean the next person would.

And Taibbi’s disclosures about Hamilton 68 show that “foreign agents” are more likely to be some grandma from Peoria than a bot from Moscow.

The social media megaphone is generally commanded by the angriest and most obnoxious people in the room. There's a reason we're all here commenting on a closed substack newsletter instead of on Twitter.

The fact that the terms are subjective doesn't mean they're not getting at a real truth.

Great article at The Atlantic. As a software engineer with some security background, I think the argument for marking AI content as such should be flipped on its head. It is much harder to get all models (many of which are open source) to mark generated pictures. That genie is out; there is absolutely no way around it. What you CAN do is the exact opposite: require phone and camera makers to digitally sign all pictures at the point of capture with a smart card (like the one in your credit card). That is much more viable and requires far more work to break.
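The sign-at-capture idea above can be sketched in a few lines. This is only a toy illustration, not the commenter's actual proposal in detail: a real design would use an asymmetric signature (e.g. ECDSA) generated inside a tamper-resistant secure element, so anyone could verify without holding the secret. The HMAC, the `DEVICE_KEY`, and the function names below are stand-ins invented for the sketch.

```python
import hashlib
import hmac

# Toy stand-in for a camera's secure element. In a real device the private
# key never leaves the tamper-resistant chip, and the signature would be
# asymmetric so third parties could verify with a public key.
DEVICE_KEY = b"secret-key-burned-into-secure-element"

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Produce an authentication tag at the moment the photo is taken."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, tag: bytes) -> bool:
    """Check that the image is byte-identical to what the camera signed."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

photo = b"\x89PNG...raw sensor data..."
tag = sign_at_capture(photo)

assert verify(photo, tag)             # the untouched image verifies
assert not verify(photo + b"x", tag)  # any edit invalidates the signature
```

The design choice that matters is signing the raw bytes at capture time: any later alteration, AI-generated or otherwise, breaks the signature, so authenticity is proven positively rather than fakery being detected after the fact.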

I really appreciate this collaboration with Eric. I'm also concerned about including Renee DiResta, given what Michael Shellenberger has recently been writing about her. Hopefully your desire to preserve liberal democracy will prevail.

Agreed. I hope the good intentions of Mr. Haidt aren't being coopted by the illiberal authoritarians on the left determined to silence the unruly masses.

The risk is always there. I remember some EU officials taking pictures next to Frances Haugen to look good and exploit the "halo effect," but later ignoring many of the recommendations she made to protect children online in the (already outdated) Digital Services Act.

Predispositions determine the conversation. How about a "conservative" democracy??? Besides the fact it doesn't exist, because it was pre-Disposed of. How about that word "Republic," which was prevalent at the 1776 beginning but has now been replaced by "democracy" in the new USSA? Perhaps I could predispose the continued conversation on Demoncrazy.

genearly.substack.com

Take a breath. Compose a sentence with a clear intent, one that has a subject and a predicate. Your weird punctuation and little name-calling schtick are profoundly stupid.

And what about privacy? Authenticating users? Does that mean the end of using pseudonyms? Look, personally, if I were independently wealthy and didn’t have to worry about losing my livelihood I might use my real name. But for now my employer doesn’t need to know what I think about certain things let alone my creative work. Employers are looking for any excuse to fire employees (they’re liabilities on the books, not assets). No thought police for me, thank you very much!

Expand full comment

The irony, the irony. Without search, anything said under one's own name would stay hidden in some dark corner of the internet, and it wouldn't really matter. Employers probably wouldn't bother trying to find stuff. Now everything one says under their own name is easily searchable on Google by employers and potential employers. Thank you very much, Eric Schmidt!

I'm not talking about anything illegal here, by the way. Years ago, in about 1995, I attended a graduate student organizing meeting when I was a grad student. The issues we were advocating for were things like better access to dental care (at the time, UBC had no dental care insurance policy), and doing something about the fact that some UBC university computer servers were being used to serve hardcore online pornography. There were some female graduate students in the crowd that called themselves feminists. At the time, I think I was rather ambivalent about using the term feminist. Anyway, a student journalist attended the meeting, got the names of everyone who attended the meeting, and wrote an article about the meeting in the UBC student newspaper. In the title, he referred to those who attended the meeting as "radical feminists".

This article, labelling me as a "radical feminist", came to the top of a Google search on my name for the next ten years. I think it was finally taken down in about 2004. (Note: While I do align with some aspects of feminism, most of my views do not align with what is popularly understood to be radical feminism. Regardless, I don't think that government-funded computers in a university lab should be used to serve hardcore pornography.)

UBC didn't get around to addressing the fact that their computer servers were being used to serve hardcore pornography until about 2010. The perpetrators were given a handslap and put on leave for a few weeks.

This is the kind of B.S. that Google Search has been enabling for more than 25 years.

Expand full comment

I empathize.

Expand full comment

Thank you. It was a long time ago now. Still, a sign of things that were to come.

Expand full comment

The full essay is behind a paywall, so I apologize in advance if I’m misinterpreting your reforms based on this Substack.

You write: “we saw that social media and AI both create collective action problems and market failures that require some action from governments, at least for setting rules of the road and legal frameworks within which companies can innovate.”

The idea of “market failure” is a nonsense idea. It is based on the (false) premise that proponents of free markets promise perfect outcomes, and that any time less-than-perfect outcomes arise, it’s time to send in the troops.

Of course, free-market proponents DON’T promise this. Our claim is simply that free markets provide the mechanisms by which people may arrive at better outcomes.

What are those mechanisms? Primarily, it comes down to individual accountability: If as a business owner, you fail to give your customers what they want, you will go out of business. Etc.

This is key. Because the solution proposed by the “market failure” folks is that we get the government to come in and enforce the kind of outcome we/they want.

But think about it. The defining feature of government is that there IS no real accountability. When the FDA fails, over and over again, to do what it says it is doing – protect the public from dangerous drugs – does it go out of business? No. Maybe the higher-ups will be replaced. But the (dysfunctional) “business” will keep on doing what it’s doing. And the same goes for every single government agency.

The idea that we’re sold in our econ 101 classes, about “market failure”, and the necessity for the state to come in and regulate markets, is nonsense because it does not take into account the nature of the state. What we’re told in those classrooms is: “Look at how the market didn’t provide the outcome we want!” (Which may or may not be true, let’s say it is true.) The instructor then makes the fantastical leap in logic to assert that “this is why we need the government to regulate X, Y, or Z market.”

With no attempt to explain how or why government decision-making will result in better outcomes than the market-based, individual decision making. It’s like fairy dust: Just sprinkle it on whatever you don’t like, and it will magically transform into The Right Thing.

Except it doesn't. And we have a whole history of the regulatory state to demonstrate that it doesn’t. And the reason it doesn’t is the thing that those promoting it never even consider, or attempt to explain: The nature of the state.

The assumption that is built into the “market failure”/need-to-regulate model of thinking is that the government a) is benevolent, or at least is morally neutral, and b) that it produces the outcomes it says it will produce.

Any attempt to regulate AI that is based on this childish assumption will fail just as spectacularly as we are witnessing the highly regulated medical industry fail right in front of us.

Let’s please not bang our collective heads against this same wall one more time.

Expand full comment

No Brian, what is really (really, really, really) important is that we recognize that government will not - and can not - solve these problems for us.

Expand full comment

The only institution conservatives have the ability to gain control of is government. We aren't going to get control of the media, the universities, the NGOs, the K-12 schools, the social media companies, the govt bureaucracy, the labor unions, or the corporate boards. The progressives who do control all those institutions do not hesitate to use them to hurt you and those who agree with you. If you're unwilling to use the one institution that you CAN get control of to fight those that you can't, you're going to lose.

Expand full comment

You have articulated the pure libertarian argument perfectly:

forget the raving mobs organized on social media but burning down entire city blocks;

forget the Civil War level tribalism engulfing our families;

forget the teen girls slicing off their private parts;

forget the remaking of the town square into a digital corporate-owned space;

forget universities overrun by mobs of uninformed students empowered by social media;

forget the increasingly common ritual Two Minutes of Hate...

What's really important is that we don't let the government do anything about it!

Expand full comment

and multiply that for dozens of democracies (and not) around the world and add victimhood culture that breeds entitlement.

Expand full comment

Interesting that what is missing from the proposals is the obligation for (social) media companies to provide transparency regarding the algorithms they use. A consumer cannot be expected to agree to the kind of manipulation that is taking place under the hood without being properly informed. Unfortunately, informed consent has been under extreme pressure from the powers that be. And Big Tech continues to act as if nothing’s wrong with the secret sauce driving their bottom line, the algorithms.

We are all being manipulated by a series of machines and their operators.

Expand full comment

If they provide transparency into the algorithms, they would have to also explain who placed the order, wouldn't they? Can you imagine an ideological or political movement, or a porn industry targeting children, wanting that?

Expand full comment

It’s clear they don’t want it. It’s just as clear that - besides the issue of addiction induced by dopamine hits - people do not realize in far too great numbers how they are being manipulated. I don’t think subjecting users to such treatment is moral. That it is harmful should be clear to everyone.

Expand full comment

This is covered in the Atlantic article

Expand full comment

"We both share a general wariness of heavy-handed government regulations when market-based solutions are available."

A potentially good way to do this is to change the incentives in the economy via taxes. Note that if one is interested in tax as a light-handed small government solution to collective action problems, one can tax without handing money to the government. One can pass on the tax as a dividend to citizens. The carbon fee and dividend is a well-known example.

Along these lines one could think about an AI fee and dividend.

Another idea would be to cut taxes on labour and increase taxes on material resources. This could at least buy us some time and slow down the impact that AI will have on the labour market. (Note that much of the danger that arises from AI and other algorithms is the speed with which they transform society, giving us little time to adapt.)

Third, network effects are likely to increase the power of corporations to make profits from rent extraction instead of from value creation. Estimating network effects to be quadratic, we should think about quadratic taxes, that is, taxes increasing quadratically with income and wealth.

These proposals also address an issue that this article unfortunately ignores. While I agree that AI and other algorithms (such as those pushing adverts on us users) pose a danger to democracy, the article does not investigate the dangers to democracy arising from increasing income and wealth inequality. All three example proposals above will work in the direction of reducing inequality.
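The fee-and-dividend mechanics proposed above can be sketched in a few lines of Python. The constant `k`, the sample incomes, and the function names are purely illustrative assumptions, not a calibrated proposal:

```python
def quadratic_tax(income: float, k: float = 1e-7) -> float:
    """A fee rising with the square of income; k is an illustrative constant."""
    return k * income ** 2

def fee_and_dividend(incomes: list[float], k: float = 1e-7) -> list[float]:
    """Collect the quadratic fee and hand it straight back as an equal
    per-citizen dividend, so no revenue passes through a government budget."""
    fees = [quadratic_tax(y, k) for y in incomes]
    dividend = sum(fees) / len(incomes)
    return [y - f + dividend for y, f in zip(incomes, fees)]

incomes = [50_000, 100_000, 200_000]
after = fee_and_dividend(incomes)
# Total income is unchanged (the fee is fully recycled as a dividend)...
assert abs(sum(after) - sum(incomes)) < 1e-6
# ...but the gap between top and bottom shrinks.
assert max(after) - min(after) < max(incomes) - min(incomes)
```

Because the fee rises with the square of income while the dividend is flat, the scheme is revenue-neutral by construction yet compresses the income distribution, which is the inequality-reducing direction argued for here.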

Expand full comment

Did you and Eric Schmidt discuss requiring AI to disclose its sources, e.g. what sites and authorities were used to create the content generated by the AI? Did you also discuss royalties to the creators of that? In its current form, AI feels a bit like a plagiarism machine. Kind of like Google.

Expand full comment

How about: Did you and Eric Schmidt discuss the repeal of Section 230 so that members of the public can sue social media companies?

Real thing that happened in San Francisco in 2020:

My daughter, a student at Lowell High School, wanted to find a way to socialize in May of 2020. Due to the pandemic, her school had been in remote-only mode for several months and she wanted to reconnect with her friends. Some of her school friends were playing Minecraft using a Discord server. It was supposed to be a group only of her school friends. Someone infiltrated the group and started grooming my daughter. I won't go into the months of time-consuming, all-absorbing nightmare that this created for our family or how we extricated ourselves from this. What I will say here is that when my husband, an MIT PhD working in Silicon Valley, discovered what was going on with the Discord server on my daughter's school computer, he used his MIT alumni affiliation to contact the CFO of Discord. She never responded. For a few months, it seemed that things tightened up at Discord, but I read about other cases of teen gaming groups being infiltrated by older sexual predators for the next several years. It is probably still going on. Discord knows this is going on.

There are no laws on the books to stop social media and gaming companies from allowing this sort of behavior.

Repealing Section 230 would allow social media companies to be sued. This might overwhelm the courts for a while, but it would likely effect a lot of change for the better.

Until I see Eric Schmidt talk about Repeal of Section 230, I don't buy it that he cares two beans about what is happening to teenagers online.

Expand full comment

Wow, that Discord story is scary. Luckily, my kids are a little older and missed social media as teenagers. And I am optimistic it will get straightened out before grandchildren are ready for it.

I think repealing Section 230 would make the internet pretty useless. However, using algorithms to boost content should NOT be covered by Section 230. Here is a quote from the Atlantic article - worth reading, even if you read this post: https://www.theatlantic.com/technology/archive/2023/05/generative-ai-social-media-integration-dangers-disinformation-addiction/673940/

"But today’s platforms are not passive bulletin boards. Many use algorithms, AI, and architectural features to boost some posts and bury others. (A 2019 internal Facebook memo brought to light by the whistleblower Frances Haugen in 2021 was titled “We are responsible for viral content.”) Because the motive for boosting is often to maximize users’ engagement for the purpose of selling advertisements, it seems obvious that the platforms should bear some moral responsibility if they recklessly spread harmful or false content in a way that, say, AOL could not have done in 1996."

Expand full comment

Yes, I do hope this gets fixed before our grandkids have to deal with this!

To be honest, I don't think that repealing Section 230 would make the internet useless at all. It could make it much better. Likely, without Section 230, new laws could develop. Because of the blanket immunity of Section 230, we hardly ever hear about the kids who have been trafficked through the internet, or kids who were pushed to suicide by what happened online. Companies largely escape from responsibility. We only hear about it in the very rare cases where the police actually managed to track down the perpetrator. But in most other cases, we don't hear about it because platforms allow for anonymity and are able to protect themselves from having to disclose online interactions.

I agree that the algorithms are a problem too. I love YouTube, but almost every time I use it, I have to stop myself from being sucked into a rat hole.

Expand full comment

It would do two things: 1) make the above-ground internet largely useless, heavily censored and micromanaged, and 2) lead to the exponential growth of the dark web in response. Think of the virtual equivalent of speakeasies during Prohibition, only much, much worse.

Yes, the status quo is problematic, but it calls for a scalpel, not a sledgehammer.

Excluding algorithms specifically (and AI more generally) from Section 230 would be good I think. But repealing Section 230 entirely would be throwing out the baby with the bathwater.

Expand full comment
Comment removed
May 5, 2023

Expand full comment

The problem as the law now stands is that it is very difficult to sue a social media company for anything. Small claims court and mediation won't help. Section 230 provides blanket immunity for social media companies such as Facebook, Instagram, Twitter and Discord.

While Google/Alphabet is not a social media company per se, it enables Twitter in particular by connecting Google Search to Twitter.

One suggestion I do have about filing a small claim is that you hire a lawyer and ensure that your case is sealed. Otherwise, for decades afterward, your small claim filing will show up on a Google search of your name.

Expand full comment

My understanding is that loss of attribution is baked into the way machine learning like ChatGPT happens. There is literally no way to determine after the fact where a particular bit of a response came from, since the evaluation of information is based in part on frequency of reference. This isn't necessarily unique to AI; could you tell me precisely where you learned that water freezes at 32 degrees F? Your reference to plagiarism is spot-on.

Expand full comment

I haven't read the article, but your suggestion for making platforms liable sounds like it would converge toward heavy censorship ("we have to worry about liability, so your criticism of government COVID policy has to be deleted") and your suggestion about "age of adulthood" sounds like it would keep kids trapped in schools, unable to take advantage of potential learning opportunities from AI tutors.

Expand full comment

"Unable to take advantage of potential learning opportunities from AI tutors"... maybe what we have learned from the first act of artificial intelligence (social media) is that children need anything but technology to develop the skills they need as humans and to interact in a kind way with the people and the world around them. Later they can learn to rely on technology for SOME things; it should be a tool, not the master of a cognitively and emotionally illiterate slave. May I recommend you watch THE AI DILEMMA (almost 2 million views): https://www.youtube.com/watch?v=xoVJKj8lcNQ

Expand full comment

Liability is a core principle of American justice. One can argue that it works in the opposite direction than censorship. Censorship is heavy handed regulation by the government, liability leaves the actual decisions to the courts.

Expand full comment

But the best defense in court is going to be "The government approved of this." No one is ever going to be held liable for repeating what the CDC says about COVID, but they will be at risk if they contradict the CDC. What will be censored in practice is what the government disapproves of.

Expand full comment

This is crazy – anything with Eric Schmidt should set alarm bells ringing. Google and the other big platforms have been angling for a long time for govt regulation that would effectively hamper smaller competing platforms.

BUT repealing Section 230 is even worse. It would destroy Substack, and given the current use of lawfare to target dissidents in the USA, it would completely destroy whatever free expression (and there certainly is some) exists on the internet.

And authenticating all users could mean the loss of any anonymity. Someone, whether Big Tech or the FBI, will see to it that no one can post anything that arouses mob anger, because their identities and addresses will soon be leaked.

And finally, while there is evidence that social media use does seem to be harming young people, we do not have much evidence on what the mechanisms are. For one thing, Western countries like the USA have been experiencing a massive cultural revolution (sometimes described as Woke) starting around 2010, and social media (together with public education) have become enormously powerful transmitters of this cultural shift. By destroying free expression on social media, one may simply be killing the messenger, when the problem is the message. And killing free expression on the internet will probably deliver the final blow to whatever still remains of freedom and democracy in our societies. Is that what we want?

Expand full comment

Indeed, anything with Eric Schmidt should be seen as a bright red flag.

Big Tech hates regulation, until they figure out a way to capture it for their own Machiavellian machinations against their competitors.

Expand full comment

This might be out of left field but it sounds to me like you’ve been writing about two problems that will solve each other. If social media is as bad for us as you say, and AI will make social media more unusable, why not let it rip, let social media be overrun by toxic fake content, so then we can finally just move past it and get our lives back into the real world?

Expand full comment

Indeed, I made a similar "devil's advocate" argument yesterday on this thread, half-jokingly. If it becomes *unbearably* toxic under the influence of AI, and especially if no one knows who is a bot or a human anymore, a critical mass of people will then quit en masse, and the whole Big Tech house of cards will then collapse. Problem solved. (Collateral damage along the way notwithstanding.)

May AI and Big Tech mutually destroy each other, and the real world rise up from the ashes reborn. Let the planetary healing begin!

Expand full comment

ah ah ah! thanks for making me smile.

Expand full comment

You're very welcome :)

Expand full comment

The reform ideas are interesting, but I don't think they will work (sorry for my directness) because they do not address structural issues. You can see how on Twitter verification hasn't done much, because of the infinity of nuances of what is "right" and "wrong". Algorithms have limited effect on how people think or do.

The problem is seen too much from a psychological point of view. The assumptions may be correct (from a behavioural perspective), but I am not convinced that social media is the cause of so many ills. If we do not reform educational institutions, we are just doing band-aid patching.

I anticipated the social media problem at about the same time, when I was on Facebook doing research on the social adoption of ideas. The explosion of negative side effects around 2012 has less to do with social media itself than with changes within society, changes which are reflected on social media. I just think the causal order is reversed here.

The new AI is a civilisation-altering event. In fact I believe we are on our way out (I explained my reasons for that in a prior post and will expand on them in a follow-up), but what happens to us could be the difference between heaven and hell. What AI does is a conversation worth having. Must have.

Thanks.

Expand full comment

Questions: Have you seen the Social dilemma? or the AI Dilemma https://www.youtube.com/watch?v=xoVJKj8lcNQ

Please do.

I find it very scary that at Facebook you were doing research on "social adoption of ideas", especially in the light of what FB has done in terms of implanting ideas and values in children "on demand", thus creating the possibility of selling the normalization of what is not normal at all, and all sorts of stuff that made them sick.

Changes within society in 2012 may well have been driven by social media, dating apps, etc., which through their business models were carrying out a "digital colonization" of the minds of millions of users of all ages.

Expand full comment

One way to read what has happened the last however many decades is that building a tower of babel is a poor choice and its fall is a good thing.

Expand full comment

Agree. The question is, how should the Tower of Babel be disassembled? Unlike its Biblical analogue, it is not just going to fall down.

The problem is not AI per se.

The Tower of Babel was assembled through:

1. Section 230 that exempted social media companies and online search from legal liability.

2. Monopoly concentration of social media companies and technology companies. A good source on that is Matt Stoller. He has a substack if you're interested in that.

3. The proliferation of anonymous accounts on platforms such as Twitter.

4. The development of algorithms that optimize advertising revenue and guide users to content that is known to be addictive. Youtube, a subsidiary of Google/Alphabet, is particularly guilty of this. Sheryl Sandberg and Susan Wojcicki, both proteges of Eric Schmidt during points in their careers, were instrumental in developing advertising focused content algorithms that steer users, especially teenagers, to addictive content.

Creating a Stasi-like (https://en.wikipedia.org/wiki/Stasi) overlordship of the Internet is unlikely to disassemble the Tower of Babel. It would likely change the nature of Babel, but it would not undo Babel.

Anything involving Eric Schmidt, the Council on Foreign Relations, or the Stanford Internet "Observatory" will likely lead to more Babel, not less.

Expand full comment

Marnie - In the Bible, the tower didn’t fall down, it was abandoned.

Expand full comment

You are right:

Genesis, Chapter 11

1 And the whole earth was of one language, and of one speech.

2 And it came to pass, as they journeyed from the east, that they found a plain in the land of Shinar; and they dwelt there.

3 And they said one to another, let us make brick, and burn them thoroughly. And they had brick for stone, and slime had they for morter.

4 And they said, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth.

5 And the Lord came down to see the city and the tower, which the children of men built.

6 And the Lord said, behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.

7 Let us go down, and there confound their language, that they may not understand one another's speech.

8 So the Lord scattered them abroad from thence upon the face of all the earth: and they left off to build the city.

9 Therefore is the name of it called Babel; because the Lord did there confound the language of all the earth: and from thence did the Lord scatter them abroad upon the face of all the earth.

Expand full comment

The monolith before that one was also bad.

Expand full comment

Are you going to publish this essay on Substack for your subscribers ?

Atlantic has a paywall and I already subscribed to your Substack page.

Expand full comment