Can Trump's Lawsuits Prevent Big Tech from Becoming Big Brother?
We’re “Never-Trumpers” who support former President Donald Trump’s lawsuits against Facebook, Twitter, and YouTube. That disclaimer’s necessity is one reason we’re writing: We live in a period of unprecedented polarization and partisanship — a problem exacerbated by the conduct at issue in Mr. Trump’s case.
In its rebuttal to The Social Dilemma, Facebook insisted that, contrary to the documentary, it does work to remove “misinformation” and “hate speech.” In multiple hearings before legislators, Facebook and other platforms were warned that they must remove more such content — or else. And now, after revoking Mr. Trump’s executive order concerning the legal protections for online platforms under Section 230 of the 1996 Communications Decency Act, the Biden Administration seems to be calling in the favor by encouraging Facebook and other platforms to remove coronavirus “misinformation.” Yet the removal of some of this content — and the de-platforming of some who posted it — forms the basis for Mr. Trump’s lawsuits. Why? Because much of this content is, however baseless or offensive, nonetheless legal, and therefore protected by the First Amendment against censorship by state actors.
This content removal, now often done openly and even self-aggrandizingly, kicked into overdrive during the pandemic-election season. Leading platforms wholly prohibited content that has since been rehabilitated: speculating about the Wuhan lab-leak theory, doubting the efficacy of lockdowns or mask-wearing, touting the efficacy of Ivermectin or other drugs — and yes, discussing or sharing the contents of Hunter Biden’s laptop.
These platforms’ “moderation” policies hinder the quest for truth about vital issues. Moreover, they’re a poorly fitting band-aid, slapped on the wounds created by the platforms’ own engagement-driven content curation practices. These algorithms attempt to identify content a user will find “engaging” and insert it into his or her feed, displacing content that would otherwise have been there. Such curation is thus a subtler form of content throttling. While perhaps wonderful for a platform’s bottom line, it has been blamed for driving our culture’s polarization problem and for impairing the exercise of human autonomy.
Facebook notes that “polarization and populism have existed long before…online platforms were created….” True. But that doesn’t mean that algorithms haven’t made them worse. Nor does it exonerate those who double down on the practice by officiously removing only the algorithms’ byproducts, leaving the algorithms themselves free from scrutiny. Our politicians, unfortunately, seem to approve of this arrangement, so long as they get credit for pressuring the platforms to remove content they and their “base” find undesirable. And the tech companies seem happy to continue the charade, self-identifying as private actors operating in a free market.
This band-aid practice of culling “misinformation” and “hateful” content has increased polarization because it reinforces conspiracy-style thinking, which rests on the idea that information is or must somehow be controlled by authorities. What better way to fertilize it than to show exactly how: by blatantly and self-righteously anointing oneself or others as authorities over what may be said, seen, heard, or thought.
Platforms’ removal of this “objectionable” content is, in relation to the original content curation which fostered it, as affirmative action is in relation to the evil of racism. In both cases, we must try to choose a cure that does not worsen the affliction. In the case of censorship by tech giants the stakes are quite high, as choosing the wrong cure can spawn Big Brother.
Enter Mr. Trump’s lawsuits. Vivek Ramaswamy recently argued that if Trump’s lawyers improve upon their initial filing, they can successfully prove that tech platforms, in removing legal content, sometimes operate as state actors. Building upon an argument he presented earlier this year with Jed Rubenfeld, he noted that (1) they enjoy legal immunity for this conduct, (2) politicians have pressured them to do more of it, and (3) there’s evidence of deliberate coordination with government officials as to which content or individuals to remove or ban. If, as part of a cronyist, mutual back-scratching arrangement, a company performs actions that would be illegal if done by government directly, legal redress is appropriate.
Note also that the immunity provided by Section 230, while correct in principle, has, according to Justice Clarence Thomas and others, been applied in an overbroad manner. If they’re right, and given that politicians have threatened to abolish this immunity entirely (a power they don’t properly have), the state action argument is further strengthened.
Moreover, depending on exactly how Mr. Trump’s cases are presented and decided, they might help establish as precedent the narrower interpretation of Section 230 for which Thomas has argued. If so, this would redress grievances arising from coordinated action between tech giants and government officials, in a way that has the least chance of making the problem worse. Compare proposed legislative solutions, which would likely result in government officials having more authority over what may be said, seen, or heard online.
We believe these lawsuits could solve one of our world’s most pressing problems. (Did we forget to mention that we’re also optimists?) Thanks to Mr. Trump, and to our system of checks and balances, judges may soon be able to excise the cancer of cronyist censorship, while establishing precedent that will discourage its recurrence. Then freer minds, operating in the context of a freer market, can undergo some desperately needed healing.
Amy Peikoff is chief policy officer at Parler. Benjamin Chayes is a historian.