Current Issue: Volume 23
As an FTC Commissioner, I aim to promote economic and social justice through consumer protection and competition law and policy. In recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy. This article describes harms caused by algorithmic decision-making in the high-stakes spheres of employment, credit, health care, and housing, which profoundly shape the lives of individuals. These harms are often felt most acutely by historically disadvantaged populations, especially Black Americans and other communities of color. And while many of the harms I describe are not entirely novel, AI and algorithms are especially dangerous because they can simultaneously obscure problems and amplify them—all while giving the false impression that these problems do not or could not possibly exist.
Our aim for this special issue is to present novel approaches to platform governance applicable to social media and other online platforms. The scholars included in this issue approach social media governance through different lenses, and sometimes use different terminology (e.g., “platforms” vs. “technology firms” vs. “social media companies”). Yet the common thread is the importance of exploring new ideas for managing the social impact, good and bad, that these large players have on our society. Our hope is that this issue will spur as lively a conversation about these topics as we had at the mini-conference at which each of these papers was presented. These papers reflect not only the ideas of their authors but also the feedback from the distinguished group of scholars convened to comment upon them. To make progress on these ideas, we will need a dedicated cohort of people willing to think about these problems in a different way. This issue represents our effort to create such a group.
Cyber risk insurance coverage has become an increasingly vital tool permitting both public- and private-sector organizations to mitigate an array of cyber risks, including the prevalent issue of ransomware. Despite the rapid uptake of these policies, however, a series of issues has emerged. Litigation has centered on matters ranging from what constitutes “covered computer systems” to questions of negligence. Yet the literature to date has largely ignored the critical issue of when a cyber attack attributed to a foreign nation constitutes an act of war, thus excluding coverage.
Cooperation between companies developing artificial intelligence (AI) can help them create AI systems that are safe, secure, and broadly beneficial. Researchers have proposed a range of cooperation strategies, from redistributing “windfall” profits to providing assistance to address the harmful dynamics of a competitive race for technological superiority. A critical tension arises, however, between cooperation and the goal of competition law, which is to protect the very process of competition between rival companies. Whilst these potential conflicts are significant, they are currently underexplored in the literature. This paper examines the relationship between proposed forms of AI cooperation and competition law, focusing on the competition law of the European Union.
The legal framework governing online speech relies on a distinction between the public and private sphere. A direct consequence of this distinction is the bifurcation between user and citizen. While the former is largely governed by private contractual norms—like a platform’s terms of service—the latter is traditionally governed by public law norms. Governments, however, increasingly exploit this distinction and treat citizens as users: by engaging with the interpretation of private companies’ self-regulation policies, governments are circumventing public law norms and fostering a new system of informal governance. This article suggests the term informal governance to capture the nonbinding and opaque interplay between state actors and private content intermediaries, taking place in the shadow of the law and affecting online content moderation. Informal governance rests on the border of the public/private legal infrastructure and facilitates the circumvention of public law constraints. A distinctive feature of informal governance involves state institutions that subject their action to a private governance apparatus of a market player and engage with it to achieve their interests. Whereas informal governance is a conceptual framework, Internet Referral Units (IRUs) are its device in the content moderation enterprise.
COVID-19 has created pressing and widespread needs for vaccines, medical treatments, PPE, and other medical technologies, needs that may conflict—indeed, have already begun to conflict—with the exclusive rights conferred by United States patents. The U.S. government has a legal mechanism to overcome this conflict: government use of patented technologies at the cost of government-paid compensation under 28 U.S.C. § 1498. But while many have recognized the theoretical possibility of government patent use under that statute, there is today a conventional wisdom that § 1498 is too exceptional, unpredictable, and dramatic for practical use, to the point that it ought to be invoked sparingly or not at all, even in extraordinary circumstances such as a pandemic. Yet that conventional wisdom is a recent one, and it conflicts with both history and theory. This Article considers the role of § 1498 in the context of national crises and emergencies like COVID-19, a context so far not addressed substantially in the literature on the statute.
With the advent of artificial intelligence (AI), the end of patent law is near. Though it may not happen today or tomorrow, the system’s decline is underway. Groundbreaking innovations in AI technology have made inventions “made by AI” a reality. Today, AI is able to “invent” not only new materials and machines but also manufacturing processes, pharmaceutical drugs, and household products. Soon, our life will be replete with artificial artifacts. In a sense, humans no longer stand at the center of the creative universe—we are no longer the masters of innovation.
This article presents a different analysis of deepfakes’ First Amendment status—and that of other fabricated evidence. Deepfakes will often deserve less First Amendment protection than a verbal lie. But this isn’t because they are a more harmful form of expression. It is because some uses of deepfakes are not expression of the kind the First Amendment protects. They are in some respects at least partly outside the First Amendment’s “coverage.” And the reason they must fall outside it is that deepfakes would otherwise extend the “authorship” that the First Amendment provides to speakers beyond the sphere that the Constitution sets aside for it (and can afford to set aside for it).
The antitrust “essential facilities” doctrine is reawakening. After decades of rejection and decline, the doctrine’s approach of granting access rights to facilities for which there is no reasonable alternative in the market has received several high-profile endorsements across the political spectrum. While courts have mainly applied the doctrine to physical infrastructure, its potential now lies in addressing the gatekeeping power of online platforms.