Benjamin T. Seymour
24 Yale J.L. & Tech. 1

This Article is the first to identify the New Fintech Federalism, examining how its disparate set of legal experiments could revolutionize U.S. financial regulation. It also details a statutory intervention that would promote the interests of entrepreneurs and consumer protection advocates alike by codifying this emergent approach. Far from jettisoning federalism, this Article’s proposed legislation would harness the distinctive strengths of the state and federal governments to bolster America’s economic vitality and global competitiveness.

Shrutarshi Basu, Nate Foster
24 Yale J.L. & Tech. 75

This Article presents Orlando, a programming language for expressing conveyances of future interests, and Littleton, a freely available online interpreter (at https://conveyanc.es) that can diagram the interests created by conveyances and model the consequences of future events. Doing so has three payoffs. First, formalizing future interests helps students and teachers of the subject by allowing them to visualize and experiment with conveyances. Second, the process of formalization is itself deeply illuminating about property doctrine and theory. And third, the computer-science subfield of programming language theory has untapped potential for legal scholarship: the programming-language approach takes advantage of the linguistic parallels between legal texts and computer programs.
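To give a flavor of the approach, conveyances in Orlando are written in a stylized legal English; the following is an illustrative example in the spirit of those the Article discusses (the exact grammar is defined by Orlando, and Littleton's output format may differ):

    To A for life, then to B and her heirs.

Under settled doctrine, this creates a life estate in A followed by a vested remainder in fee simple absolute in B; Littleton can diagram these interests and then model how the state of title changes upon future events, such as A's death.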

Drew Simshaw
24 Yale J.L. & Tech. 150

Artificial intelligence (AI) has been heralded for its potential to help close the access to justice gap. It can increase efficiencies, democratize access to legal information, and help consumers solve their own legal problems or connect them with licensed professionals who can. But some fear that increased reliance on AI will lead to one or more two-tiered systems: the poor might be stuck with inferior AI-driven assistance; only expensive law firms might be able to effectively harness legal AI; or, AI’s impact might not disrupt the status quo where only some can afford any type of legal assistance. The realization of any of these two-tiered systems would risk widening the justice gap. But the current regulation of legal services fails to account for the practical barriers preventing effective design of legal AI across the landscape, which make each of these two-tiered systems more likely.

Jonathan Gingerich
24 Yale J.L. & Tech. 227

Much scholarly attention has recently been devoted to ways in which artificial intelligence (AI) might weaken formal political democracy, but little attention has been devoted to the effect of AI on “cultural democracy”—that is, democratic control over the forms of life, aesthetic values, and conceptions of the good that circulate in a society. This work is the first to consider in detail the dangers that AI-driven cultural recommendations pose to cultural democracy. This Article argues that AI threatens to weaken cultural democracy by undermining individuals’ direct and spontaneous engagement with a diverse range of cultural materials. It further contends that United States law, in its present form, is ill equipped to address these challenges, and suggests several strategies for better regulating culture-mediating AI. Finally, it argues that while such regulations might run afoul of contemporary First Amendment doctrine, the most normatively attractive interpretation of the First Amendment not only allows but encourages such interventions.

Jasper L. Tran
24 Yale J.L. & Tech. 317

Aspiring scholars are often asked: What is your research agenda? If my research agenda were honest, my response would unapologetically be that I have no research agenda and that I, like Toni Morrison and possibly many others, mostly write what I want to read but has yet to be written. As one’s own ideas, especially her perspective and her view of the whole, change as she gains experience, her writings after all become just little fragments of her fleece left upon the hedges of life.

Mike Ananny
24 Yale J.L. & Tech. 342

Algorithmic “mistakes” are windows into the social and technological forces that create computational systems. They reveal the assumptions that drive system designers, the power that some people have to define success and call out errors, and how far institutions are willing to go to fix failures. Using a recent case of facial detection and remote proctoring, I suggest “seeing like an algorithmic error” as a way to turn seemingly simple quirks and individually felt glitches into shared social consequences that shape social life—that is, into public problems.

Jasmine McNealy
24 Yale J.L. & Tech. 365

Advances in data collection and processing have facilitated ultra- and infra-sonic machine listening and learning. These advances require the recognition of sonic privacy: protection for our “sonic data,” those representations or observations that define the characteristics of sound and its cognitive and emotive forces. This right would protect (non)participation in the public sphere.

Anupam Chander
24 Yale J.L. & Tech. 393

Despite its local origins, Section 230 serves as a key architectural norm of the global internet: internet service providers are not typically responsible for the speech of their users. Section 230 underpins what we might describe as the International Law of Facebook—the global community guidelines written and enforced by internet platforms, largely leaving these platforms to regulate the speech they host. Reviewing Section 230 cases involving foreign events, foreign parties, or foreign law, this essay reveals how Section 230 made the U.S. a safe home base from which to offer a global speech platform.

Chand Rajendra-Nicolucci, Ethan Zuckerman
24 Yale J.L. & Tech. 421

As Silicon Valley giants sketch their preferred future for digital advertising, an infrastructure with significant implications for life online and offline, there are startlingly few alternatives to their vision. In response, we propose "forgetful advertising," a vision for responsible digital advertising structured around a single design choice: avoiding the storage of behavioral data. Forgetful advertising can still target ads using information like geography, intent, context, and whatever else can be gleaned from a single interaction between a user and a website, but it cannot remember any previous interactions to inform its targeting. We believe our proposal can make digital advertising compatible with the values of human agency and privacy, and we offer it as a bottom-up solution for organizations that find existing digital advertising systems inconsistent with their values.

Naomi Appelman, Paddy Leerssen
24 Yale J.L. & Tech. 452

This essay unpacks the practices of trusted flagging. We first discuss self-regulatory flagging partnerships on several major platforms. We then review various forms of government involvement and regulation, focusing especially on the EU context, where law-making on this issue is particularly prevalent. On this basis, we conceptualize different variants of trusted flagging in terms of their legal construction, their position in the content moderation process, and the nature of their flagging privileges. We then discuss competing narratives about the role of trusted flaggers: as a source of expertise and representation; as an unaccountable co-optation by public and private power; and as a performance of inclusion. In this way, we illustrate how “trusted flagging,” in its everyday operationalization and critique, serves as a site of contestation between competing interests and legitimacy claims in platform governance.

Tarleton Gillespie
24 Yale J.L. & Tech. 476

Reducing the visibility of risky, misleading, or salacious content is becoming a commonplace and large-scale part of platform governance. Using machine learning classifiers, platforms identify content that is misleading enough, risky enough, problematic enough to warrant reducing its visibility by demoting or excluding it from algorithmic rankings and recommendations. The offending content remains on the site, still available to users who seek it out directly, but the platform limits the conditions under which it circulates: how it is offered up as a recommendation, a search result, part of an algorithmically generated feed, or “up next” in users’ queues. In this essay, I will call these “reduction policies.” There are several emergent terms for this practice, as I will discuss, and they are all problematic. But the fact that there is not yet a settled industry term is itself revealing. Understandably, platforms are wary of being scrutinized for these reduction policies. Some platforms have not publicly acknowledged them; those that have are circumspect. It is not that these policies are hidden entirely, but the major platforms are only just beginning to acknowledge these techniques as a significant element of how they now manage problematic content. Consequently, reduction policies remain largely absent from public, policy, and scholarly conversations about content moderation and platform governance.

Robert Gorwa
24 Yale J.L. & Tech. 493

The goal of this essay is to explore the key actors involved in platform governance more systematically than has been done so far. What exactly does it mean to be a “governance stakeholder,” and how does that definition shape who and what is centered in our frames of analysis? Who are the key “platform governance stakeholders”? And what combinations of actors matter in different domains of platform governance? This essay engages directly with these questions by presenting a typology of platform governance stakeholders intended to help structure more systematic thinking about the politics of platform capitalism on a global, trans-jurisdictional, and trans-sectoral scale. Drawing on a brief review of extant literature in both global governance more generally and platform governance more specifically, I break down the key actors across four levels (“supra-organizational,” “organizational,” “sub-organizational,” “individual”) that correspond to various groupings of actors across different political and economic levels of analysis, from the individual worker all the way up to large constellations of firms, governments, or other actors. The essay then suggests that the relative importance of these actors will vary depending on the specific policy issue, the specific context, and the dominant platform type being discussed.

Amre Metwally
24 Yale J.L. & Tech. 510

The paper discusses the types of experts engaged in platform governance before analyzing the tactics and methods at their disposal. The central contention in this paper is that the platforms themselves are the “foreground deciders” and that these experts, or “people with projects,” operate in the background, instead “advis[ing] and interpret[ing] by inhabiting modes of knowledge and communication through which they can pursue projects with some plausible deniability of agency.” The ideological agendas and facts that serve as a basis for a preferred vision of platform governance are socially constructed.

Robyn Caplan
24 Yale J.L. & Tech. 541

This submission introduces “networked governance” as a term to describe how a broad array of platforms—not just Facebook—are conceptualizing the engagement of external actors and organizations in the creation and implementation of content standards. The terminology builds on theories from new institutionalism/neo-institutionalism and organizational sociology, taking as its starting point the “demise of the isolated and sovereign actor or organization” and placing an emphasis on “understanding interaction” between interdependent actors and organizations. “Networked governance” can be useful for researchers in platform governance who are theorizing about how platform companies, like Facebook and Google, use strategies such as trusted flagger programs, trust and safety councils, and external stakeholder engagement teams to engage relevant organizations and experts in providing feedback on platform rules and content standards.

Gali Racabi
24 Yale J.L. & Tech. 554

Tech drift is the phenomenon where a change in technological context alters policy outcomes. Tech drift can cause legal vulnerabilities. Socially empowering policies such as labor and employment laws can become inaccessible or ineffective because of changing technological contexts. But tech drift begets more than legal weakness. It also constitutes tech politics, as one of tech drift’s outcomes is a shift in the locus of struggles over policy outcomes: from policy enactment, blocking, or reform to the very shaping of the tech context. Tech drift can be designed as a power move, reallocating power from one set of actors and institutions to others. Tech drift has profound effects on tech change’s ecosystem of actors and institutions. Specifically, I survey how drift divides losing coalitions and dissident groups. Using qualitative interviews, I document how Uber’s tech drift split labor actors on institutional, substantive, and jurisdictional grounds. Following these findings, I offer a set of structural remedies. These remedies aim at intervening in tech change’s ecosystem of actors, its ethos, and the initial allocation and attainment of legal powers. With these novel intervention points, we can find new levers to bend the arc of tech change toward justice and democracy.

Peter Lee
24 Yale J.L. & Tech. 611

How can law help translate great ideas into great innovations? Venture capital (VC) markets play an increasingly important role in funding innovation, and they have benefited from substantial public support. While venture capital is almost synonymous with innovation, the ability of VC markets to catalyze innovations is often overstated. This Article examines the innovation limitations of VC and the role of law and policy in enhancing its innovative capacity. It draws upon academic commentary and original interviews with thirty-two early-stage investors, entrepreneurs, lawyers, and other innovation professionals in Northern California. This Article explores, in an integrated fashion, three mutually reinforcing features that limit the capacity of VC markets to fund a wide range of socially valuable innovations. First, social ties are critical to connecting entrepreneurs and venture capital. This phenomenon shrinks the pool of entrepreneurs with a realistic chance of obtaining funding and distorts capital allocations in favor of those with greater social capital. Second, VCs exhibit a surprising degree of herd mentality, investing in trendy technologies while shying away from truly radical innovations. Finally, the VC business model favors innovations that promise large returns in a medium time frame with minimal risk. Such criteria necessarily deprioritize large swaths of socially valuable innovations with longer, riskier development timelines. While such practices are privately expedient in many contexts, they may leave significant profits unrealized. At a societal level, such practices are problematic to the extent that policymakers support VC markets to help effectuate innovation policy objectives. This Article argues that law and policy have an important role to play in addressing these structural deficiencies and enhancing the innovative capacity of venture capital. It proposes a holistic suite of prescriptions to increase diversity and inclusiveness within the VC-startup ecosystem and to nudge VCs toward greater funding of certain technologies of high social value.

Jeffrey J. Rachlinski, Andrew J. Wistrich
24 Yale J.L. & Tech. 706

The introduction of any new technology challenges judges to determine how it fits into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough. The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges. This technology will alter the foundation of the largest source of civil liability in the United States. Although regulatory agencies will determine when and how autonomous cars may be placed into service, judges will likely play a central role in defining the standards of liability for them. Will judges express the same negative biases that lay people commonly express against technological innovations? In this Article, we present data from 967 trial judges showing that judges are biased against self-driving vehicles. They both assigned more liability to a self-driving vehicle than to a human-driven vehicle for an accident caused under identical circumstances and treated injuries caused by a self-driving vehicle as more serious than identical injuries caused by a human-driven vehicle. These results suggest that judges harbor suspicion or animosity towards autonomous vehicles that might lead them to burden manufacturers and consumers of autonomous vehicles with more liability than the tort system currently imposes on conventional vehicles.