Abstract

Because private companies now control the most prominent communication platforms, the most pressing question in content moderation is how to ensure that the governance of public discourse responds to public values. The prevailing approach, given that the state cannot regulate speech directly, substitutes state regulation with audited self-regulation, broad stakeholder participation, and negotiated rulemaking. In this model, which this article calls the “new governance model for content moderation,” companies include advocates as representatives of the public in their processes for governing online speech. Ideally, the parties negotiate policy goals and share responsibility for achieving them; the end goal is a process that gives effect to public values. This article argues, however, that this governance model is unsound in both theory and practice. In content moderation, the ambition of constructing public values through a collaborative process between companies and stakeholders is conceptually incoherent: interests that cannot elicit cooperation from corporate actors, or that are inconsistent with the values of participating advocates, are excluded by design. In practice, no past or present experiment has shown that the inclusion of advocates in speech governance, or the agreements they reach with companies, carries the epistemic credibility needed to construct the public interest. Although these flaws might seem unsurprising, scholars and activists have doubled down on independence, diversity, and expertise as design strategies for self-regulatory bodies that could adequately set policy goals. This article advocates pluralism as a framework that more effectively achieves the participatory goals of new governance, and it argues that the state has a central role to play in creating a plural and contested public sphere. A robust legal system can complement self-regulation and push it structurally toward public values.