Let’s have a look at two closely related topics:
- What is the impact of Section 230 of the 1996 Communications Decency Act on the nature of public discourse?
- What about the recent Supreme Court decision in a case called “Gonzalez v. Google” as it relates to Section 230?
My contention is that Section 230 has created a class of rights and exemptions for digital platforms (like Facebook, X, and Google) that has unleashed an unprecedented amount of digitally driven hatred and abuse. I contend that this digitally driven hatred has reached near-intolerable levels, and that, if we are to continue as a civilization, this dynamic needs to change.
First, let’s be clear on what Section 230 of the 1996 Communications Decency Act actually says:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
At the same time, Section 230 permits those same platforms to decide whether or not to carry information provided by another content provider. The argument in this formulation is that their right to “not amplify” is a form of free speech. Perhaps surprisingly, I could not agree more. This provision grants them the primary right of a publisher, and why should it not? Publishers are not forced to publish anything. But traditional (that is, non-social-media) publishers are responsible for what they do publish. This simple relationship has been held to be perfectly rational forever and a day.
But the tech giants want to say that they should be able to operate with impunity no matter what. In fact, Section 230 grants them that free pass. Not content to hide behind 230, they also contend that they must be permitted to continue this way simply because it would be bad for their business to do otherwise. This was one of the arguments Google put forth in defending against the plaintiff’s claims in Gonzalez v. Google, where Gonzalez claimed Google aided and abetted the ISIS terrorists who shot and killed Gonzalez’s daughter in Paris in 2015. I will address this case in more detail below.
We have established that Section 230 accomplishes two things that, together, represent the underpinnings of social media business models: First, the platforms enjoy near-total immunity from libel and damage claims arising from content their users post. Second, they can choose not to amplify you, or even ban you from their platforms, solely at their discretion. There is a surface argument to be made that both of these frameworks are plausible. But in the real world, this pair of rights adds up to total unaccountability PLUS unchecked power. And that is a combination we may no longer be able to afford.
The tech giants maintain that they must retain the rights of a publisher while also enjoying immunity from the responsibilities of one.
For instance, the New York Times cannot publish an article that calls for the death of an individual and then claim they have no responsibility if someone acts upon that prompt. But Google can amplify the exact same article, sell millions of dollars of advertising around it—they can even recommend hateful articles!—but at the same time enjoy total immunity from consequence. Their claim is that the method by which they amplify is meaningfully different from amplification via print and physical distribution. This claim is questionable at best and most likely false.
Section 230 has already led to a communications disaster of a type that threatens to undermine the very topology of truth. Witness the sheer volume of misinformation, disinformation and outright fabulism that clogs our news feeds, and you may get a sense of why I am making this argument. That said, Section 230 does have its enemies. And I don’t agree with all of them.
The Censorship Gambit
A common right-wing objection to 230 is that platforms like Facebook and Google are in the business of categorically censoring conservative viewpoints. These so-called “conservative” critics, concealing themselves under the mantle of “free speech,” are looking at the problem from the wrong end of the telescope. If anything, the big digital platforms are far too deferential to obvious misinformation and disinformation, most of it with a rightward slant. Worse, much of this misinformation is of a particularly pernicious sort that tends to make fantastical claims about liberals, climate change, and cannibalism. All of it is published by the platforms, and amplified by the platforms, with impunity. And make no mistake: it is hate speech.
Today’s so-called free speech advocates claim there ought to be no “censorship” of any kind. This is a misuse of the word, because only a government can censor. Not being published is not censorship, and here’s why: You may not get onto YouTube, but that’s just too bad. You can still hand out handbills, shout through a megaphone, paint your wagon, write letters, and harangue everyone at the bar, and because of the First Amendment, that is your right.
What you never had was the right to be amplified. These platforms choose to amplify your message. They know exactly what’s on their platforms. They know it because that is their product, and that’s how they sell ads. And they ought to be responsible for what’s on the platform. After all, they are allowed to keep all the money!
But what would happen if we stripped the platforms of the right to curate, forcing them to amplify everything that’s posted? Perhaps we ought to grant the free-speech folks their wish! The platforms might well rapidly destroy themselves, sinking under an avalanche of unthinkable garbage. Maybe then it will become clear how absurd the so-called “free speech” crusade is, for it is not about free speech, and never was. It is about the amplification of hate speech, and always has been.
Gonzalez v. Google
Gonzalez v. Google was a case before the Supreme Court of the United States in 2023. The Court decided in Google’s favor, though it ultimately sidestepped the Section 230 question itself, disposing of the claims on other grounds. I cite it here as an illustration of how this law works in the real world.
According to Oyez.org, “Nohemi Gonzalez, a U.S. citizen, was killed by a terrorist attack in Paris, France, in 2015—one of several terrorist attacks that same day. The day afterwards, the foreign terrorist organization ISIS claimed responsibility by issuing a written statement and releasing a YouTube video. Gonzalez’s father filed an action against Google, Twitter, and Facebook, claiming, among other things, that Google aided and abetted international terrorism by allowing ISIS to use its platform—specifically YouTube—“to recruit members, plan terrorist attacks, issue terrorist threats, instill fear, and intimidate civilian populations.” Specifically, the complaint alleged that because Google uses computer algorithms that suggest content to users based on their viewing history, it assists ISIS in spreading its message.”
Clearly Gonzalez wanted to pierce the veil of Section 230. At the heart of the case is the common social-media practice of “recommended content.” Google’s defense claimed that its proprietary recommendation algorithms are neutral (and therefore beyond reproach). In other words, the algorithm might recommend anything to anyone, depending on the circumstance. One might as well suggest that swinging a flaming baseball bat in a crowded bar is neutral, insofar as anyone might get hit by it.
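The complaint described such systems only in general terms: algorithms that suggest content based on viewing history. A minimal, hypothetical sketch of a co-occurrence recommender (toy data and toy logic, not Google’s actual system) shows what “neutral” means in practice: the code is entirely content-blind, recommending whatever tends to be watched together, whether bread recipes or extremist clips.

```python
from collections import Counter, defaultdict

# Toy watch histories: each user is a list of video IDs (all invented).
histories = [
    ["cooking_101", "knife_skills", "bread_basics"],
    ["cooking_101", "bread_basics", "sourdough_deep_dive"],
    ["extremist_clip", "cooking_101", "extremist_clip_2"],
]

# Count how often each pair of videos is watched by the same user.
co_views = defaultdict(Counter)
for history in histories:
    for a in history:
        for b in history:
            if a != b:
                co_views[a][b] += 1

def recommend(video_id, n=2):
    """Return the n videos most often co-watched with video_id."""
    return [v for v, _ in co_views[video_id].most_common(n)]

# 'bread_basics' ranks first for cooking viewers (co-watched by two users),
# but the same blind counting will happily recommend extremist clips to
# anyone whose history correlates with them.
print(recommend("cooking_101"))
print(recommend("extremist_clip"))
```

Nothing in the logic inspects what the content *is*; in that narrow sense the algorithm is “neutral,” which is precisely the author’s point about the flaming baseball bat.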
Those who want to say that Gonzalez might as well have targeted a bookstore that sold an ISIS book are missing the point. Google is not a bookstore. It is not a newsstand. It is a globally dominant, vertically integrated, content-leveraging platform that sucks data from your web site, sells ads around that content, repeatedly recommends that content with little regard for any reasonable publication standard, and profits enormously from the collected mass of content that may or may not include instructions on murdering innocent people at nightclubs. To equate it with a bookstore, or a bulletin board in a coffee shop, is the silliest type of reductivism, and blind to the fact that global reach plus universal recommendation creates an unsustainable double whammy that threatens to undermine civilization itself. The Supreme Court decided in Google’s favor, declining to rule on the scope of Section 230 and instead holding, in light of the companion case Twitter v. Taamneh, that the plaintiffs had failed to state a viable claim; the practical effect was that Google’s immunity stood. An excellent review of the amicus briefs filed in this case can be found at bipartisanpolicy.org.
Section 230 Needs a Major Update
What’s clear is that Section 230 is outdated. It was created in 1996 to help shield primitive electronic bulletin boards, like those on CompuServe, from liability for what someone might post there. But it was written during the Internet’s horse-and-buggy era. CompuServe did not command, as today’s platforms do, the lion’s share of global digital advertising dollars. Nor did it have a recommendation engine. Nor was it advertised as the answer to all questions for everyone.
The big tech platforms today need guardrails. They must keep hate speech off of their platforms, exactly in the way that publishers must. Any suggestion from the platforms that it is too difficult, and the messages too voluminous, misses the point. For if Google can monetize to the tune of billions of dollars a year in profit, via a complex system that no human can track in real time, then it must also be able to construct a system that protects the public from physical harm directly related to its recommendation of hateful content.
An Engagement Tax?
And if it turns out that Google really cannot manage the content on its own site, then perhaps it ought to pay a tax for having burdened the world with hate speech. After all, the platforms make no pretense of forgoing the profit from ads that surround hateful content. They are shoveling the problem off onto society at large. And what do civilizations do to fund solutions to a problem too big for any one company to solve? They generate tax revenue and develop programs to help ameliorate the problem. Much as environmental polluters are regulated, and much as tax dollars go toward such regulation, we need the digital-media equivalent of a carbon tax.
I call it an Engagement Tax: a levy on a social media platform based on the engagement figures it reports to its advertisers. The revenue could be used to help fight the hate speech that the platforms cannot be bothered to remove from their own profitable sites.
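As a purely hypothetical sketch of the mechanics (the rate, the per-thousand base, and the figures below are invented for illustration, not proposed policy), such a levy could be computed directly from the engagement numbers a platform already reports to advertisers:

```python
def engagement_tax(reported_engagements: int,
                   rate_per_thousand: float = 0.05) -> float:
    """Hypothetical Engagement Tax: a flat levy per thousand reported
    engagements. Both the rate and the base are illustrative assumptions,
    not anything drawn from statute or from any real proposal."""
    return (reported_engagements / 1_000) * rate_per_thousand

# A platform reporting 2 billion engagements at $0.05 per thousand:
print(f"${engagement_tax(2_000_000_000):,.2f}")  # $100,000.00
```

The design point is that the tax base is the platform’s own advertised metric: the same engagement numbers used to sell ads would set the liability, so understating them to the taxman would mean understating them to advertisers.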
I am aware that we hardly ever talk about regulation as it relates to the Internet, but my contention is that we are at the end of the unregulated era. It’s time to approach digital media with a much more mature set of policies.
A case can be made that any major change to 230 could mean the end of social media as we know it. Will Google remain enormously profitable? Will Facebook? Of course they will. But the world will be a better place without the avalanche of hate.
The alternative is something far worse. That alternative is the media equivalent of raw sewage spilling down the center of your street, into your home, into your schools, into your places of business—all day, every day. And with artificial intelligence now beginning its ascendancy, this will only get worse.
Time to fix 230, one way or another.
Verity7 pioneers truth in media, arming organizations against disinformation. We assess threats, train through Prevency’s SaaS module, and provide a clear roadmap for preparedness. Elevate your resilience with Verity7 – where truth meets action.