Reduction / Borderline Content / Shadowbanning

Tarleton Gillespie
24 Yale J.L. & Tech. 476
Reducing the visibility of risky, misleading, or salacious content is becoming a commonplace and large-scale part of platform governance. Using machine learning classifiers, platforms identify content that is misleading enough, risky enough, problematic enough to warrant reducing its visibility by demoting or excluding it from algorithmic rankings and recommendations. The offending content remains on the site, still available to any user who seeks it out directly; but the platform limits the conditions under which it circulates: how it is offered up as a recommendation, a search result, part of an algorithmically generated feed, or “up next” in users’ queues.
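To make the mechanism concrete, the following is a minimal, purely illustrative sketch of how such a demotion step might work in principle: a classifier score marks an item as “borderline,” and the ranking function down-weights it rather than removing it. Every name, threshold, and weight here is an assumption invented for illustration; it does not describe any actual platform’s system.

```python
# Illustrative sketch only: a hypothetical ranking step in which items flagged
# as "borderline" by a classifier are demoted in recommendations but never
# removed from the platform. All names, thresholds, and weights are invented
# for illustration; they do not describe any actual platform's system.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    relevance: float          # relevance to the user's query or feed, 0..1
    borderline_score: float   # classifier's estimate that the content is "borderline", 0..1

# Hypothetical policy parameters (assumptions, not documented values).
BORDERLINE_THRESHOLD = 0.8    # above this, content is treated as borderline
DEMOTION_FACTOR = 0.1         # weight applied to demoted items in ranking

def rank_for_recommendation(items: list[Item]) -> list[Item]:
    """Order items for a feed or 'up next' queue, demoting borderline content.

    Demoted items stay in the catalog and remain reachable by direct link;
    they are simply pushed down (or out of) algorithmic surfaces.
    """
    def score(item: Item) -> float:
        if item.borderline_score >= BORDERLINE_THRESHOLD:
            return item.relevance * DEMOTION_FACTOR  # reduced, not removed
        return item.relevance

    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    catalog = [
        Item("a", relevance=0.9, borderline_score=0.95),  # highly relevant but borderline
        Item("b", relevance=0.7, borderline_score=0.1),
        Item("c", relevance=0.5, borderline_score=0.2),
    ]
    for item in rank_for_recommendation(catalog):
        print(item.item_id)  # prints b, c, a: the borderline item sinks to the bottom
```

The point of the sketch is that the intervention happens entirely in the ranking layer: nothing is deleted, yet the item’s circulation through recommendations, feeds, and queues is sharply curtailed.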
 
In this essay, I will call these “reduction policies.” There are several emergent terms for this practice, as I will discuss, and they are all problematic. But the fact that there is not yet a settled industry term is itself revealing. Understandably, platforms are wary of being scrutinized for these reduction policies. Some platforms have not publicly acknowledged them; those that have are circumspect. It is not that they are hidden entirely, but the major platforms are only just beginning to acknowledge these techniques as a significant element of how they now manage problematic content. Consequently, reduction policies remain largely absent from public, policy, and scholarly conversations about content moderation and platform governance.