Is Censorship the Answer?
This February marked the 25th anniversary of Section 230, enacted as part of the Communications Decency Act of 1996. Establishing rules that have shaped how the Internet has grown and operated, Section 230 has become a hot topic in recent months. Republicans and Democrats alike want to reform the law, albeit in different directions and for different reasons.
My first job out of college was for Prodigy Services, one of the companies that pioneered what became the Internet. There was no Facebook back then; Mark Zuckerberg was eight years old. Amazon and Google didn’t exist, either. We’re talking 1992, long before the graphical web browsers that ushered in mass usage of the Internet. Working as a “content moderator” at the dawn of the Internet age, I never imagined that people in this role would one day assume the power to de-platform world leaders and circumscribe the boundaries of political speech.
Back then, I was the only one among my coworkers who was also an active Prodigy user. I loved Prodigy’s online bulletin boards. They operated like discussion groups, in which any of us could post whatever we wanted within certain categories of topics. I gravitated to the “Alternative Music” forum, as it connected me with a community of super-knowledgeable music fans who shared my quirky tastes. This was a revelation to me. Prior to that, I had no friends who even recognized the names of my favorite bands. On Prodigy’s online bulletin boards, I was making friends around the country, learning about their local scenes, exchanging mix tapes through the mail, and discovering older influences of my contemporary heroes.
By contrast, my job at Prodigy – making sure that the bulletin boards were “family friendly” – was mostly a bore. An automated system screened incoming messages before they went live on the service. Obscenities would get a message automatically rejected. Other words (“kill,” for instance) would be sent to us for context review. Was it a benign “I’d kill to have seen that Pavement show!” or was it a user actually threatening to kill another user?
We’d also browse the subject lines of messages that had gone live on the bulletin boards to catch what scanners might have missed. After a while, you got a feel for which users were trying to stir up trouble. I still remember how my heart raced the first time I saw a subject line with “F,” then a few spaces, “U,” then a space and a comma, then a “C,” and you know the rest. It was a simple hack, but I couldn’t conceive of how Prodigy would ever be able to automate removal of this kind of disguised indecency.
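To see why that simple hack stumped the automated screening, consider a toy sketch of a word-level filter of the kind described above (the blocklist and function names here are hypothetical, for illustration only, not Prodigy's actual system):

```python
import re

# Hypothetical list of words flagged for human context review.
BLOCKLIST = {"kill"}

def flag_for_review(message: str) -> bool:
    """Naive word-boundary scan: flag a message if any whole word
    matches the blocklist, roughly how early automated screening worked."""
    words = re.findall(r"[a-z]+", message.lower())
    return any(word in BLOCKLIST for word in words)

# Caught: the flagged word appears intact as a single token.
flag_for_review("I'd kill to have seen that Pavement show!")  # True

# Evaded: spacing the letters out splits the word into harmless
# single-letter tokens, so a word-level match never fires.
flag_for_review("k i l l")  # False
```

A filter that matches whole words sees “k i l l” as four innocent one-letter tokens, which is why disguised obscenities slipped past the scanners and had to be caught by human eyes.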
Some other infractions that we were asked to censor seem odd in retrospect. Smoking prohibitions on airplanes were pretty new, and we were told to be vigilant in preventing users from posting advice for how to turn off smoke detectors in airplane lavatories. We also had to prevent people from posting copyrighted material, defaming others, and crossing the line from flirty to creepy. There were lots of gray areas and inconsistencies that drove our users crazy. As a user myself, I could sympathize with this frustration. Overall, our censoring operation was doomed to fail. Staffed by people who didn’t understand online culture, Prodigy often punished its most passionate users.
Ironically, when drama on a Prodigy bulletin board wound up in court – in Stratton Oakmont, Inc. v. Prodigy Services Co. – it was the existence of our team of content monitors that led the New York Supreme Court to rule that the company was a publisher, and thus could be held liable for defamatory content posted by a bulletin board user. Prodigy’s competitor, CompuServe, had escaped liability in a similar case because its discussion forums were completely unmoderated.
The Communications Decency Act was passed within a year of the suit against Prodigy. Section 230 of the CDA established that online companies could not be held liable for user content, as Prodigy had been. It also granted online companies civil immunity, so they could not be sued over “good faith” efforts to remove offensive content from their services. The rise of Facebook and Twitter – and many other important technology companies – would have been impossible without the protections of Section 230.
Some think that Section 230 has made companies lax in addressing scourges like child pornography and sex trafficking. Others worry that efforts to deputize platforms to help law enforcement to combat harmful online activity will erode online privacy.
After making accusations of anti-conservative bias among Big Tech companies, President Donald Trump signed his Executive Order on Preventing Online Censorship in May 2020. The order sought to change Section 230’s interpretation so that media companies could lose liability protections if their editorial decisions go beyond “good faith” efforts to eliminate offensive materials.
Subsequently, Trump and his adversaries seemed to validate one another’s worst suspicions with erratic behavior. When Minneapolis descended into anti-police riots, Trump carelessly tweeted “when the looting starts, the shooting starts.” Thinking back to my days as a content moderator at Prodigy, I can imagine a debate among reasonable people about whether Trump’s words should be censored. Were they an incitement to violence?
In late October, Twitter suppressed the New York Post’s reporting about Hunter Biden’s abandoned laptop. Twitter’s explanations for locking the Post out of its account and preventing the sharing of the story among its users were disingenuous in the extreme. The Post’s story was not based on “leaked documents,” as Twitter alleged, and even if it had been, the service had shown no reservations in circulating leaked tax returns damaging to Trump only weeks earlier. If Twitter and Facebook didn’t want Trump’s populist base to engage in conspiracy theories, they shouldn’t have conspired to suppress news stories favorable to his cause.
Fast forward two months, and, in the aftermath of pro-Trump rioters breaching the U.S. Capitol, Twitter and Facebook have banned Trump from their services entirely. And they find themselves under attack from both sides.
Democratic leaders want to revisit Section 230, believing that its liability shield leaves social media giants with too little incentive to preempt violent action such as what took place at the Capitol. Republicans are aghast at the power of Big Tech firms: if they behave like traditional publishers, rather than as a simple forum for free speech, why do they enjoy the liability protections of Section 230?
The way forward is not clear. Private companies have the right to set their own terms of service, but legal scholars such as Richard Epstein have argued that Twitter and Facebook have become monopolies, and that a “common carrier” solution is needed. That would mean forcing the social media behemoths to give up control of access to their platforms to a third party, presumably run or regulated by government, with the goal of ideological neutrality. I’m skeptical about such proposals, as government takeovers rarely stay neutral.
Our most straightforward goals should be protecting First Amendment rights and encouraging, not discouraging, new entrants. The trials of Parler are especially relevant here. In the first weeks of 2021, the startup alternative to Twitter went from overnight success to being quashed through the market power of Apple, Google, and Amazon.
Whether you see Parler as having been unfairly persecuted or having deserved its fate for negligence in policing vile content, we should be rooting for a viable, competitive market in social media. Through this lens, it’s clear that Section 230’s legal protections will be especially important to future upstarts if they are ever to compete with behemoths like Facebook and Twitter.
While it is always tempting to imagine that legislation could achieve perfect outcomes, experience shows this is unlikely. Market pressures and the liability carve-outs in Section 230 already motivate social media platforms to be vigilant against truly harmful content. I’m hopeful that public opinion can inspire humility among the titans of Big Tech, whose content-moderation operations have become politicized.
An authentically free society tolerates ideas that appear wrong and expects its citizens to use their individual rights to advocate for the truth as they see it. Greater civility in public dialogue can be achieved through persuasion, but not by fiat – at least not without sacrificing core American values.
Brad Lips is the chief executive officer of Atlas Network, a global network of over 500 independent civil society organizations working to promote individual freedom and remove barriers to human flourishing.