The Record

This submission introduces the phrase “networked governance” as a term to describe how a broad array of platforms, not just Facebook, conceptualize the engagement of external actors and organizations in the creation and implementation of content standards. This terminology builds on theories from new institutionalism/neo-institutionalism and organizational sociology, taking as its starting point the “demise of the isolated and sovereign actor or organization” and placing an emphasis on “understanding interaction” between interdependent actors and organizations.

The paper discusses the types of experts engaged in platform governance before analyzing the tactics and methods at their disposal.

The goal of this essay is to explore the key actors involved in platform governance more systematically than has been done so far. What exactly does it mean to be a “governance stakeholder,” and what do such definitions imply for our frames of analysis, in terms of who and what is centered? Who are the key “platform governance stakeholders”? And which combinations of actors matter in different domains of platform governance?
 
Reducing the visibility of risky, misleading, or salacious content is becoming a commonplace and large-scale part of platform governance. Using machine learning classifiers, platforms identify content that is misleading enough, risky enough, or problematic enough to warrant reducing its visibility by demoting it within, or excluding it from, algorithmic rankings and recommendations.
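As a rough illustration of this mechanism, the practice can be sketched as a classifier score compared against thresholds that either down-weight an item’s ranking score (“demotion”) or drop it from recommendations altogether. The Python sketch below is purely illustrative; the item fields, thresholds, and demotion factor are hypothetical assumptions, not the implementation of any particular platform.

```python
from dataclasses import dataclass

# Hypothetical sketch of score-based visibility reduction ("demotion").
# All names, thresholds, and factors are illustrative assumptions.

@dataclass
class Item:
    item_id: str
    relevance: float   # base ranking score from the recommender
    risk_score: float  # classifier estimate (0-1) that the item is misleading/risky

DEMOTE_THRESHOLD = 0.6   # above this, visibility is reduced
EXCLUDE_THRESHOLD = 0.9  # above this, the item is dropped from recommendations
DEMOTION_FACTOR = 0.3    # multiplier applied to demoted items' ranking scores

def rank_with_visibility_reduction(items):
    """Order items for recommendation, demoting or excluding risky ones."""
    ranked = []
    for item in items:
        if item.risk_score >= EXCLUDE_THRESHOLD:
            continue  # exclude from rankings and recommendations entirely
        score = item.relevance
        if item.risk_score >= DEMOTE_THRESHOLD:
            score *= DEMOTION_FACTOR  # demote: keep the item but push it down
        ranked.append((score, item))
    return [item for score, item in sorted(ranked, key=lambda pair: pair[0], reverse=True)]

if __name__ == "__main__":
    feed = [
        Item("a", relevance=0.90, risk_score=0.10),
        Item("b", relevance=0.80, risk_score=0.70),  # demoted
        Item("c", relevance=0.95, risk_score=0.95),  # excluded
    ]
    print([item.item_id for item in rank_with_visibility_reduction(feed)])  # ['a', 'b']
```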

This essay unpacks the practices of trusted flagging. We first discuss self-regulatory flagging partnerships on several major platforms. We then review various forms of government involvement and regulation, focusing on the EU context, where law-making on this issue is especially prevalent. On this basis, we conceptualize different variants of trusted flagging in terms of their legal construction, their position in the content moderation process, and the nature of their flagging privileges.