Section 230 Debate Obscures Some Real Concerns
Section 230 of the Communications Decency Act continues to play a surprisingly large role in our political discourse, given its status as the last remaining vestige of a quarter-century-old law that was largely struck down by the courts long ago. The immunity shield the law grants to online platforms has been implicated in issues ranging from Twitter’s decision to ban former President Trump to whether Instagram exacerbates eating disorders among teens.
But while much of the Section 230 discussion from both the political left and the political right misses the mark, and many of the reform proposals put forward to address their concerns are incompatible with the First Amendment, that doesn't mean Section 230 is sacrosanct, or that it's working perfectly. There are real harms and illegal behavior on the internet. Defenders of the Section 230 status quo argue that addressing these by imposing any form of liability on platforms would squelch free speech online. But not all speech is high-value speech, and not all speech is protected by the First Amendment. The First Amendment does not, for example, prevent the suppression of illegal content like child pornography, nor does it protect fraud, perjury, true threats, or incitement to violence.

At the same time, there may well be good reasons to hold online platforms liable for their failure to moderate this material, such as in cases where it would be too costly to pursue users individually, or where the nature of the platform itself encourages harms that wouldn't otherwise occur. Speech and conduct are commonly deterred by self-restraint, enforced by fear of reprisal, the threat of social sanction, and people’s baseline sense of morality. Where there is less incentive for self-restraint, as in anonymous online forums, there will be more speech and conduct, and more of it will be harmful or illegal. Intermediaries may therefore be facilitating harmful speech that would otherwise be deterred.

Had Section 230 never passed, we would expect the common law to have developed standards for when intermediaries should be held liable, just as it has for other sorts of third parties — such as hotel owners, bar owners, and operators of private parks — who may be held liable for the actions of third parties in cases where they knew or should have known their customers or the general public would be harmed.

For lawmakers who are currently considering changes to the Section 230 regime, the central question ought to be whether any proposed reform passes a cost-benefit test: does it reduce unlawful or harmful online content enough to outweigh the costs it imposes? Section 230's proponents argue that reform or repeal would produce catastrophic litigation costs, especially for smaller platforms. It’s difficult to know in advance whether that’s a real threat, and it’s certainly possible that our civil litigation system would impose too much liability. But it could just as well be that the current liability risk is too low. Some litigation costs exceed what's needed to properly assign liability, but liability, where properly found, should be borne by the party best positioned to prevent harm. That may often be an intermediary like a platform.

It’s also not clear that reform would create particular barriers to entry for startups. If content moderation at the scale of Facebook or YouTube is as difficult as many contend, and otherwise meritorious lawsuits are currently deterred by Section 230, it could be that some platforms are larger than they would be had the law never been passed.

Because we've gone nearly a quarter-century without such cases, it would probably be disruptive to simply repeal the law and throw it to the courts. Hence, we propose an intermediate step of creating a duty of care for online intermediaries, who would have to demonstrate that their moderation practices comport with best practices certified by a body overseen by an agency like the Federal Trade Commission.
It’s crucial that any changes to the law preserve, to the extent feasible, the large social gains the Internet economy has provided over the last quarter century. But that doesn’t preclude adjusting the law to better align platforms’ incentives with reducing illegal and tortious online conduct. Free speech is an important American value, but it is not the only or the final word.

Geoffrey Manne is Founder and President of the International Center for Law & Economics (ICLE) in Portland, Oregon. Kristian Stout is ICLE’s Director of Innovation Policy. Their new paper on Section 230 reform, co-authored with Ben Sperry, is “Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet.”