The Record

The introduction of any new technology challenges judges to determine how it fits into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough. The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges. This technology will alter the foundation of the largest source of civil liability in the United States.

This essay unpacks the practices of trusted flagging. We first discuss self-regulatory flagging partnerships on several major platforms. We then review various forms of government involvement and regulation, focusing especially on the EU context, where law-making on this issue is especially prevalent. On this basis, we conceptualize different variants of trusted flagging in terms of their legal construction, their position in the content moderation process, and the nature of their flagging privileges.

Reducing the visibility of risky, misleading, or salacious content is becoming a commonplace and large-scale part of platform governance. Using machine learning classifiers, platforms identify content that is sufficiently misleading, risky, or otherwise problematic to warrant reducing its visibility by demoting it in, or excluding it from, algorithmic rankings and recommendations.
The goal of this essay is to explore the key actors involved in platform governance more systematically than has been done so far. What exactly does it mean to be a “governance stakeholder,” and how do such definitions shape our frames of analysis in terms of who and what is centered? Who are the key “platform governance stakeholders”? And what combinations of actors matter in different domains of platform governance?
This essay discusses the types of experts engaged in platform governance before analyzing the tactics and methods at their disposal.