Reducing the visibility of risky, misleading, or salacious content is becoming a commonplace and large-scale part of platform governance. Using machine learning classifiers, platforms identify content that is misleading enough, risky enough, or otherwise problematic enough to warrant demotion or exclusion from algorithmic rankings and recommendations.
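As a rough illustration of the mechanism at issue, the sketch below shows how a classifier score might be folded into a ranking pipeline, with one threshold triggering demotion and a higher one triggering exclusion. The thresholds, the demotion factor, and the `borderline_score` field are hypothetical and are not drawn from any particular platform's system.

```python
# Illustrative sketch only: thresholds, field names, and the demotion
# factor are hypothetical, not taken from any real platform's system.
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    relevance: float         # base ranking score from the recommender
    borderline_score: float  # classifier estimate that the item is "borderline" (0-1)

DEMOTE_THRESHOLD = 0.7    # above this, visibility is reduced
EXCLUDE_THRESHOLD = 0.95  # above this, the item is dropped from recommendations
DEMOTION_FACTOR = 0.2     # demoted items keep only a fraction of their score

def rank(items: list[Item]) -> list[Item]:
    """Demote or exclude items the classifier flags, then sort by adjusted score."""
    adjusted = []
    for item in items:
        if item.borderline_score >= EXCLUDE_THRESHOLD:
            continue  # excluded from recommendations entirely
        score = item.relevance
        if item.borderline_score >= DEMOTE_THRESHOLD:
            score *= DEMOTION_FACTOR  # still visible, but ranked far lower
        adjusted.append((score, item))
    return [item for score, item in sorted(adjusted, key=lambda p: p[0], reverse=True)]

if __name__ == "__main__":
    feed = [
        Item("a", relevance=0.90, borderline_score=0.10),
        Item("b", relevance=0.95, borderline_score=0.80),  # demoted
        Item("c", relevance=0.99, borderline_score=0.97),  # excluded
    ]
    print([i.id for i in rank(feed)])  # -> ['a', 'b']
```

The point of the sketch is simply that reduction is not a separate act of removal: it is a small adjustment inside an ordinary ranking step, which is part of what makes it so easy to apply at scale and so hard for users to detect.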