Volume 26
Predictive and generative artificial intelligence (AI) have both become integral parts of our lives through their use in making highly impactful decisions. AI systems are already deployed widely—for example, in employment, healthcare, insurance, finance, education, public administration, and criminal justice. Yet severe ethical issues, such as bias and discrimination, privacy invasiveness, opaqueness, and environmental costs of these systems, are well known. Generative AI (GAI) creates hallucinations and inaccurate or harmful information, which can lead to misinformation, disinformation, and the erosion of scientific knowledge. The Artificial Intelligence Act (AIA), Product Liability Directive, and the Artificial Intelligence Liability Directive reflect Europe’s attempt to curb some of these issues. With the legal reach of these policies going far beyond Europe, their impact on the United States and the rest of the world cannot be overstated.
This Essay examines racial formation in the context of the digital public sphere with a focus on how artificial intelligence (AI) systems’ understanding of social identities—especially racial identities—translates into real-world policy decisions about “bias,” “risk,” and “impact” as commonly interpreted by industry, government, and philanthropy. Drawing on examples in business advertising and consulting, I illustrate the ethical costs of uncritically integrating the notion of race as a data point, a drop-down menu of physical features one can mix and match. I turn then to three case studies of artist-technologists of color whose work models radical alternatives to techno-instrumentalist notions of race that often invisibly inform the quantification of social justice impact (sometimes referred to as effective altruism or strategic philanthropy). Rashaad Newsome, Amelia Winger-Bearskin, and Catie Cuan challenge discourses that frame racialized populations primarily in terms of negatively “impacted” communities, as the grateful recipients of largesse deserving of “access” to digital tools or technological literacy, or as those who can be best uplifted through the so-called “blessings of scale” and other maximalist approaches to social impact. In radical contrast, these three artist-technologists refigure those “impacted” as agentive co-producers of knowledge and imagination. Their art and performance engage alternative cultural values and metrics that counter the technological vision embracing Mark Zuckerberg’s refrain of “move fast and break things.” Instead, the aesthetic values of friction, duration, and liveness in their work offer counter-narratives and experience to more fully effect both joy and justice in the digital public sphere.
Industry will take everything it can in developing artificial intelligence (AI) systems. We will get used to it. This will be done for our benefit. Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. In this Essay, I argue that no matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good.
Debate about the regulation of “digital labor platforms” abounds globally among scholars, legislators, and other analysts concerned about the future of work(ers). In 2024, the European Parliament passed a first-of-its-kind “Platform Work Directive” aimed at extending and growing protections for workers who labor for firms that utilize “automated systems to match supply and demand for work.” In this Essay, we consider the problematics of regulating the digital labor platform as a distinct subtype of firm and “platform work” as a novel form of employment. We propose that digital platforms are not firms, but rather labor management machines. Thus, the Directive is vastly underinclusive in its extension of much-needed rights to workers who toil under algorithmic decision-making systems.
The recent popularization of generative artificial intelligence (GAI) applications, such as ChatGPT and other large language model (LLM)-powered chatbots, has led many to expect transformative changes in legal practice. However, the actual use of LLM chatbots in the legal field has been limited. This Essay identifies China’s public legal services (PLS) sector as a potential use case where AI chatbots may become widely and quickly adopted. China’s political economy is generally conducive to such adoption, as the government must rely on technological solutions to fulfill its commitment to universal access to PLS. The Legal Tech industry is keen to find a practical use case for its LLM chatbots, which, with proper development and fine-tuning, could function adequately in meeting a significant popular demand for basic legal information. The use of AI chatbots in China’s PLS sector could contribute not only to narrowing the gap in access to justice but also to strengthening the degree of legality in governance that the country has achieved through years of deliberate efforts. But such use could also raise a range of concerns, including loss of confidentiality, errors and inaccuracies, fraud and manipulation, and unequal service quality. On balance, however, AI chatbots would be a positive innovation in the PLS sector, and the risks associated with their adoption appear manageable through pragmatic approaches.
This Article addresses the copyright regime of artistic works generated by artificial intelligence (AI). I argue that the law of authorship as developed by courts, together with the Intellectual Property Clause in the U.S. Constitution, entails that, if anyone is entitled to copyright ownership of these works, it is the AI itself. Arguments advanced in the literature that programmers, developers, or similarly situated humans should own the copyright instead are rejected. However, I argue further that countervailing policy considerations suggest that AI-generated works should remain in the public domain for the time being. In particular, the fundamental differences between AI-generated artworks and traditional artworks justify thinking of the former not as art, but rather as what I call “pseudo art.” Considerations concerning the nature of pseudo art support the position of the U.S. Copyright Office, which has so far denied copyright protection to AI-generated material.
Artificial Intelligence (AI) is not all artificial. Despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step used to create AI. From data selection, decisional design, training, testing, and tuning to managing AI’s development as it is used in the human world, humans exert agency and control over the choices and practices underlying AI products. AI is now ubiquitous: it is part of every sector of the economy and many people’s everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to remedy any wrongs. This Article introduces the myriad of choices humans make to create safe and effective AI products and explores key issues in existing liability models. Significant issues in negligence and products liability schemes, including contractual limitations on liability, distance the organizations creating AI products from the actual harm they cause, obscure the origin of issues relating to the harm, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the limits of tort law, challenging long-held divisions and theoretical constructs. From the perspectives of both businesses licensing AI and AI users, this Article identifies key impediments to realizing tort doctrine’s goals and proposes an alternative regulatory scheme that shifts liability from humans in the loop to humans outside the loop.
We are witnessing the birth of a Platform Federation. Global platforms wield growing power over our public sphere, and yet our politics and public debates remain stubbornly state-based. In the platform age, speech can transcend international boundaries, but the repercussions of speech are mainly felt within our own domiciles, municipalities, and national territories. This mismatch puts countries in a difficult place, in which they must negotiate the tension between steering the public sphere to protect local speech norms and values and the immense benefits of free transboundary communication. This Article explores the outcome of this balancing act—what we call platform federalism: where it comes from, how it is unfolding, and how to make it better. The rise of global digital platforms has brought about a crisis that has not yet been fully diagnosed. Until their appearance, the public sphere was disciplined by gatekeepers such as traditional mass media and other civil society institutions. They acted to enforce a common set of norms over public discourse. These gatekeepers fulfilled crucial social functions. They enacted and enforced the fundamental social norms that made public communication possible, while at the same time avoiding direct state intervention in public discourse. Through social media, people are now able to bypass these institutions and reach mass audiences directly—what we call the “bypass effect.” Countries are reacting to the consequences of the bypass effect by enforcing local social norms directly. Autocracies might enjoy the dubious luxury of shutting down Internet borders completely. This option, however, is not available to democracies, nor is it desirable. Democracies have embraced softer forms of regulation, which we call “state federalism.” As civil-society gatekeepers are bypassed, states take the mission of curating the public sphere upon themselves: they forcefully impose their own civility norms on platforms’ users (like Germany) or directly forbid fake news on them (like France). State federalism might work in restoring the public sphere’s civility, but it risks unduly imposing the state’s (as opposed to the community’s) values upon the population. State federalism, in other words, can quickly become incompatible with liberalism. We propose a new set of policy tools to maintain domestic civility in the public sphere while keeping state power at bay: civil society federalism. In civil society federalism, the state does not police the public sphere by itself, but rather requires platforms to invite civil society back into their gatekeeping role. These policies ask civil-society organizations to shape the norms that constitute public discourse; as in the past, they are the ones to exclude hate speech, profanity, or misinformation from the public sphere. By bringing civil society back, states can ensure the civility of the public sphere without exerting undue power over it.
Crypto industry attorneys have argued in litigation and before regulatory agencies that the First Amendment immunizes their line of business from ordinary market regulation. On the merits, these arguments range from weak to frivolous. But they nevertheless create value for the crypto industry in two ways. First, they help to drive a predatory marketing strategy that attracts retail investors with appeals to individual liberty and resistance to “financial censorship.” Second, they tee up arguments that financial regulators’ jurisdiction should be interpreted narrowly under the “canon of constitutional avoidance” and the “major questions doctrine.” Overall, crypto’s First Amendment opportunism interferes with public efforts to protect investors, collect taxes, and fight financial crime—and ultimately, it debases the First Amendment itself. At every opportunity, agencies and courts should debunk these arguments in terms that are clear enough for the industry’s target audiences to understand.
In March of 2023, OpenAI released GPT-4, an autoregressive language model that uses deep learning to produce text. GPT-4 has unprecedented ability to practice law: drafting briefs and memos, plotting litigation strategy, and providing general legal advice. However, scholars and practitioners have yet to unpack the implications of large language models, such as GPT-4, for long-standing bar association rules on the unauthorized practice of law (“UPL”). The intersection of large language models with UPL raises manifold issues, including those pertaining to important and developing jurisprudence on free speech, antitrust, occupational licensing, and the inherent-powers doctrine. How the intersection is navigated, moreover, is of vital importance in the durative struggle for access to justice, and low-income individuals will be disproportionately impacted. In this Article, we offer a recommendation that is attuned to technological advances and avoids the extremes that have characterized the past decades of the UPL debate. Rather than abandon UPL rules, and rather than leave them undisturbed, we propose that they be recast as primarily regulation of entity-type claims. Through this recasting, bar associations can retain their role as the ultimate determiners of “lawyer” and “attorney” classifications while allowing nonlawyers, including the AI-powered entities that have emerged in recent years, to provide legal services—save for a narrow and clearly defined subset. Although this recommendation is novel, it is easy to implement, comes with few downsides, and would further the twin UPL aims of competency and ethicality better than traditional UPL enforcement. Legal technology companies would be freed from operating in a legal gray area; states would no longer have to create elaborate UPL-avoiding mechanisms, such as Utah’s “legal sandbox”; consumers—both individuals and companies—would benefit from better and cheaper legal services; and the dismantling of access-to-justice barriers would finally be possible. Moreover, the clouds of free speech and antitrust challenges that are massing above current UPL rules would dissipate, and bar associations would be able to focus on fulfilling their already established UPL-related aims.
Over the last seven decades, mainstream U.S. torts jurisprudence has shifted dramatically from rigid formal rules—focused on duty and culpability—to more flexible norms and principles of accountability. This shift was part of a general transformation of tort law that can be observed in the case law, the Restatements, and academic scholarship. Recently, however, where internet platforms such as Amazon are involved, courts appear to have reverted to a formalistic approach to limit duty, and hence liability, for personal injuries caused by the sale of defective products through the platform. With a few notable exceptions, courts have focused on the word “seller” in § 402A of the Second Restatement of Torts and have concluded that Amazon is not a “seller” when it facilitates a sale between a customer and a third-party merchant. This Article is the third in a series of articles that develop a functional, control-based approach to platform liability. It proceeds in five steps. First, we develop the general tort principles that govern liability for transactions in defective consumer products. Second, we show how Amazon, as a platform situated squarely between a third-party seller and the customer, has control over both sides of that transaction. This places Amazon in a position where it should be held accountable as a non-manufacturing seller when the third-party seller is not amenable to suit. Third, we give an example of how courts have resisted this conclusion, taking shelter in formal concepts of title rather than traditional understandings of culpability and loss allocation. Fourth, we develop a functional approach to platform liability that uses traditional tort principles to evaluate the platform’s role in a transaction and apply those principles to Amazon. Lastly, we consider how these principles should apply to platforms generally.