Over the past two decades, the internet has transformed from a clunky, pixelated patchwork of sites into the digital domain we know and (mostly) love today. Unfortunately, a small but vocal minority of lawmakers, led by Sen. Josh Hawley (R-Mo.), is trying to dismantle this progress. His proposed “Limiting Section 230 Immunity to Good Samaritans Act” would expose digital platforms to near-endless liability if they are deemed to have unfair content moderation policies. Meanwhile, other pieces of legislation from across the political aisle would tie Section 230 protections to everything from addressing user complaints (no matter how bogus) to undermining encryption. These policies would backfire spectacularly, causing companies to defer to bureaucrats on the “proper” level of political speech and balance. Americans deserve a vibrant digital marketplace filled with new ideas, not paranoid platforms hounded by overzealous government officials.
Top online platforms such as Facebook, Twitter, and YouTube give users an endless array of choices about whom to follow and which videos to watch. As seen in these interesting side-by-side comparisons of conservative versus liberal Facebook, users are free to get virtually whatever experience they would like on leading websites. To some critics, however, these platforms’ (limited) content moderation policies are akin to having “speech suppressed in Communist China.” Lawmakers such as Sen. Hawley have taken these complaints seriously and even proposed curtailing the liability protections that have allowed the World Wide Web to flourish. Other lawmakers, such as Sen. Lindsey Graham (R-S.C.), propose the same kinds of damaging policies even if their immediate concerns (i.e., too much privacy and encryption) differ from Hawley’s qualms.
Under these lawmakers’ proposed restrictions, digital platforms would have to constantly watch their backs for fear of user lawsuits. If Twitter were to remove an over-the-top post, it could be sued for thousands of dollars on top of attorney’s fees. Large digital platforms would initially respond to this avalanche of liability by retreating into caution, leaving posts up even if they contain libelous claims.
The Twitters and Facebooks of the world would be precluded from barring posts that link to wildly false reports, say, ones falsely tying individuals to child sex abuse. Online platforms currently have significant leeway to restrict this content and lay the ground rules for engagement.
Now, it’s true that these platforms sometimes overreach in restricting content.
But by and large, these companies operate like any private business and realize that restricting content with a heavy hand will invite backlash, bad optics, and less revenue. That’s why, for example, Twitter allows countless right-of-center voices to be heard even if media executives find their content thoroughly distasteful. Candace Owens’ posts assailing George Floyd likely strike the vast majority of liberals (and many conservatives) as offensive and tone-deaf, yet Twitter and Facebook have allowed the airing of these viewpoints. Owens’ video attacking Floyd was the number one video on Facebook the day it was posted, hardly the sign of liberal censors run amok. Fox News host Tucker Carlson constantly complains about Big Tech and the unfairness of these platforms, yet Carlson has 3.5 million Twitter followers and has surely had his voice amplified by the same social media companies he criticizes.
These platforms usually reserve the heavy hand for users who hurl derogatory terms or advocate violence against entire groups. Yet if the anti-Section 230 crowd succeeds, even these reasonable restrictions could trigger a slew of liability. And if the long-run result were a wholesale elimination of Section 230 protections, large platforms would respond with “content moderation” policies more erratic than ever before. In an environment where any platform could be sued for any libelous content posted by its users, large internet companies would respond by banning content carrying even an iota of legal risk.
The problem is that poorly specified liability rules would create a “darned if you do, darned if you don’t” environment in which banning content is punishable but leaving it up is punishable too. The current liability system may result in some overzealous content moderation, but it succeeds in (mostly) allowing a free marketplace of ideas. Twitter and Facebook users can take a look at the mad ramblings of Candace Owens or Bill Mitchell, then take a detour and peruse Antifa accounts. It’s highly unlikely that a system of liability run amok would preserve this vibrant digital ecosystem.
Policymakers must oppose lawmakers’ misguided quest for faux fairness and find ways to unleash – rather than curtail – the digital domain.
Ross Marchand is the Vice President of Policy for the Taxpayers Protection Alliance.