Republicans are eager to stop social media companies from removing COVID-19 misinformation posts, flagging Stop the Steal rhetoric, banning racist users and purging propaganda bots. But to make that happen, the GOP needs to overturn or reform Section 230 of the 1996 Communications Decency Act, which has become a fixture of the right wing's culture wars, and the centerpiece of two US Supreme Court cases argued last week. But Section 230 isn't just the law that protects Twitter from being sued every time Dril dunks on Republicans; it's also what holds together the internet as we know it.
Following the high court's obvious trepidation during last Tuesday and Wednesday's hearings, experts are skeptical that the justices will hand down rulings against Google and Twitter. During Tuesday's hearing of Gonzalez v. Google, the family of Nohemi Gonzalez, an American killed in the Islamic State's 2015 Paris attack, asked the court to hold Google responsible for promoting ISIS' videos via YouTube algorithms. Relatives of Nawras Alassaf, who was killed in another ISIS attack — in Istanbul in 2017 — argued Wednesday in Twitter v. Taamneh that the platform should have done more to restrict content generated by Islamist militants.
In both cases, the justices admitted their ignorance about the tech specifics at play, but also seemed baffled by the plaintiffs' shaky claims, suggesting that remedies should be found in Congress. Even if the court rules in favor of Google and Twitter in June, it will get another chance to upend Section 230 during its October term. The court is then likely to consider challenges to a Florida law barring social media companies from suspending politicians, as well as a Texas law that could stop platforms from taking down neo-Nazi content.
Since 2018, Republicans have used Section 230 as a political piñata whenever Twitter or Facebook brings down the hammer on a right-wing politician for violating content policies (or inspiring new ones). Social media platforms are indeed private companies, but they have become the primary engine for amplifying extreme right-wing political messaging to targeted groups. The hunger to seize control of that engine has only grown stronger as Republican leadership shifts ever more toward fringe candidates, whose xenophobic, racist or overtly violent rhetoric wouldn't otherwise make it onto the nation's regulated airwaves.
Regardless of how it rules on any of these cases, the Supreme Court is poised to do what Congress so far won't: go beyond current copyright and child protection laws and compel the speech of private companies by forcing them to host content that violates their established standards. One potential outcome is that the court, which by the justices' own admission is poorly positioned to rule on such cases, will unleash a tidal wave of costly, frivolous lawsuits. These lawsuits could force tech giants to become de facto partisan mouthpieces, and could also bankrupt small blogs, subreddits or news websites with comments sections that require moderation.
What is Section 230?
Section 230 protects free speech and an open internet by ensuring that individual users -- instead of a site's publishers -- are held accountable for their own public posts. If you're falsely accused of a crime in a news article, you can sue the news site (by way of its publisher) for libel; but if you're falsely accused in someone's tweet, you can only sue the person who posted it, not Twitter itself.
The protections of Section 230 allow social media platforms to moderate user-generated content, and sound simple enough on the surface: "[N]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
It's important to grasp that those 26 words cut both ways on the partisan scale. When it has to take down either nude photos or posts that encourage lynching, Facebook is immune from lawsuits from both amateur porn purveyors and the creators of neo-Confederate memes. Both forms of expression are constitutionally protected speech that the government cannot restrict — and so are Facebook's content-removal policies. Facebook's right to set the rules under its own roof is, for now, protected under 230.
What's up with Google's Supreme Court case?
In the Gonzalez case, the plaintiff's central claim was clumsily defended but important to understand: There's a crucial difference, the argument goes, between YouTube simply failing to find and remove ISIS-generated content and YouTube actually recommending that content. You could describe it as the difference between negligent inaction and malignant action. Under Section 230, YouTube would have legal immunity here for its failure to catch and remove all ISIS recruitment videos. But YouTube's wildly profitable business model is about pushing content (and of course targeted advertising) at individual users based on the unregulated collection and algorithmic exploitation of personal data.
YouTube's algorithm isn't some blind worm eating through the sludge of user data; it's a precise, aggressive and highly adaptive instrument devoted to generating revenue. As with Facebook, Twitter and other attention-economy platforms, YouTube's algorithmic user-persuasion is the addiction-by-design product of millions of dollars in human behavioral research, which has yielded success rates rarely seen outside of Las Vegas and Philip Morris.
As University of Virginia professor Danielle Citron pointed out on Twitter, no algorithm is neutral.
"It is built by engineers who have strong ideas and goals for what they are building. Given that business is online behavioral ads, they are building algos that mine personal data to optimize chance of like, clicks and shares. That ain't neutral," she wrote.
That brings us back to the problems with that 26-word legal shield meant to cover a vast array of internet services, including email providers, chat apps, file hosts and AI content generators.
Even the slightest change to 230 could trigger sweeping changes across an industry that lawmakers in 1996 could never have imagined. Unless justices or lawmakers alter 230 with a surgeon's eye, experts say, the internet could be polarized into two types of sites: completely unmoderated depravity bins like 8chan, or sites where content is choked by extreme censorship.
How much right-wing bias is enough?
Democrats and Republicans have both eyed changes to 230 — but for very different reasons.
Democrats say they want to see greater content moderation from platforms to stem foreign election interference, disinformation campaigns from political operatives and medical misinformation. They also want platforms to do more to stop the spread of child exploitation material and the proliferation of cybersecurity threats -- two points where Republicans generally agree.
Republicans, unsurprisingly, want more moderation restricting LGBTQ content and abortion advocacy. But they want less moderation of medical misinformation (read: ivermectin prescriptions), election interference campaigns and far-right user content. Republicans also want social media platforms to break their own encrypted messaging protocols and provide law enforcement agencies a backdoor to spy on private user conversations. That last bit has now become a standalone debate, but remains part of the GOP's Section 230 efforts.
Democrats argue that social media platforms are still plagued with election-targeted propaganda, pseudoscientific disinformation and extremist organizing of various kinds. Indeed, it's impossible to argue that any company designed to profit from users' engagement with persuasively promoted ideas operates in an ethical and legal vacuum -- especially not when chaotic flood-the-zone campaigns push dangerous lies about the pandemic or patently false claims about the legitimacy of the 2020 election, while also promoting the spread of anti-LGBTQ paranoia and curating white supremacist echo chambers.
Republicans have a point of their own, sort of, although it contradicts some of their other points: If government agencies actually had pressured Twitter executives to yank down the infamous New York Post article on Hunter Biden's laptop, and if Twitter had complied because of the threat of government penalty, then yes, that would have been censorship. But that isn't really what happened, as the nothing-burger of the Twitter Files story makes clear. Rather, a private company exercised its right to craft a moderation policy — one which may have been flawed, but was not the result of government pressure. Similarly, Twitter and Facebook both exercised their right to coordinate with government officials on public safety measures, like flagging the worst kinds of COVID-19 conspiracy theory posts, even if the impetus was mainly a desire to evade potential antitrust enforcement.
Republicans like to say that social media platform moderation should offer "political neutrality," a phrase that does not appear in Section 230. But of course neutrality isn't really what they want. Social media moderation in recent years has largely helped to spread right-wing viewpoints. Facebook's moderation and algorithmic changes didn't censor conservative posters -- they boosted them, at the expense of liberal and left-wing posters. A 2021 study found that although right-leaning Facebook posts made up only 26% of American political posts, they accounted for 43% of users' political-post interactions.
"For years, right-wing media and conservative politicians have claimed that social media and tech companies are biased against them and censor their content, despite copious data proving otherwise," wrote Media Matters, the study's author. "In fact, we found that right-leaning pages consistently earn more interactions than left-leaning or ideologically nonaligned pages."
House and Senate Republicans can hold as many theatrical hearings as they like about why conservative content isn't as popular as they think it ought to be — whether that content came from Diamond and Silk or Marjorie Taylor Greene. But as long as Section 230 is in place and private companies still have First Amendment protection, neither Donald Trump nor Joe Biden can force Twitter's engineers to give them an edge in online popularity contests.
Elon Musk, on the other hand, is entirely free to do that — and a great deal more.