Current Issue: Volume 24
This Article is the first to identify the New Fintech Federalism, examining how its disparate set of legal experiments could revolutionize U.S. financial regulation. It also details a statutory intervention that would promote the interests of entrepreneurs and consumer protection advocates alike by codifying this emergent approach. Far from jettisoning federalism, this Article’s proposed legislation would harness the distinctive strengths of the state and federal governments to bolster America’s economic vitality and global competitiveness.
This Article presents Orlando, a programming language for expressing conveyances of future interests, and Littleton, a freely available online interpreter (at https://conveyanc.es) that can diagram the interests created by conveyances and model the consequences of future events. Doing so has three payoffs. First, formalizing future interests helps students and teachers of the subject by allowing them to visualize and experiment with conveyances. Second, the process of formalization is itself deeply illuminating about property doctrine and theory. And third, the computer-science subfield of programming language theory has untapped potential for legal scholarship: the programming-language approach takes advantage of the linguistic parallels between legal texts and computer programs.
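The core idea of formalizing a conveyance can be illustrated with a minimal sketch. The following is hypothetical Python, not Orlando's actual syntax, and the names (`interests`, `life_tenant`, `remainderman`) are invented for illustration: it represents "To A for life, then to B" as data and models the consequence of A's death.

```python
# Hypothetical sketch (not Orlando syntax): a conveyance as data,
# with a future event shifting interests among the parties.

def interests(conveyance):
    """Return current interests under 'To X for life, then to Y'."""
    x, y = conveyance["life_tenant"], conveyance["remainderman"]
    if conveyance["life_tenant_alive"]:
        # While X lives, X holds a life estate and Y a vested remainder.
        return {x: "life estate", y: "vested remainder in fee simple"}
    # On X's death, Y's remainder becomes possessory.
    return {y: "fee simple absolute"}

grant = {"life_tenant": "A", "remainderman": "B", "life_tenant_alive": True}
print(interests(grant))

grant["life_tenant_alive"] = False  # model the event of A's death
print(interests(grant))
```

Even this toy version shows the payoff the abstract describes: once a conveyance is data rather than prose, students can experiment with events and watch the interests change.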
Artificial intelligence (AI) has been heralded for its potential to help close the access to justice gap. It can increase efficiencies, democratize access to legal information, and help consumers solve their own legal problems or connect them with licensed professionals who can. But some fear that increased reliance on AI will lead to one or more two-tiered systems: the poor might be stuck with inferior AI-driven assistance; only expensive law firms might be able to effectively harness legal AI; or AI’s impact might not disrupt the status quo, where only some can afford any type of legal assistance. The realization of any of these two-tiered systems would risk widening the justice gap. But the current regulation of legal services fails to account for the practical barriers preventing effective design of legal AI across the landscape, which make each of these two-tiered systems more likely.
Much scholarly attention has recently been devoted to ways in which artificial intelligence (AI) might weaken formal political democracy, but little attention has been devoted to the effect of AI on “cultural democracy”—that is, democratic control over the forms of life, aesthetic values, and conceptions of the good that circulate in a society. This work is the first to consider in detail the dangers that AI-driven cultural recommendations pose to cultural democracy. This Article argues that AI threatens to weaken cultural democracy by undermining individuals’ direct and spontaneous engagement with a diverse range of cultural materials. It further contends that United States law, in its present form, is ill-equipped to address these challenges, and suggests several strategies for better regulating culture-mediating AI. Finally, it argues that while such regulations might run afoul of contemporary First Amendment doctrine, the most normatively attractive interpretation of the First Amendment not only allows but encourages such interventions.
Aspiring scholars are often asked: What is your research agenda? If my research agenda were honest, my response would unapologetically be that I have no research agenda and that I, like Toni Morrison and possibly many others, mostly write about what I want to read but has yet to be written. As one’s own ideas, especially one’s perspective and view of the whole, change as she gains experience, her writings after all become just little fragments of her fleece left upon the hedges of life.
Seeing Like an Algorithmic Error: What are Algorithmic Mistakes, Why Do They Matter, How Might They Be Public Problems?
Algorithmic “mistakes” are windows into the social and technological forces that create computational systems. They reveal the assumptions that drive system designers, the power that some people have to define success and call out errors, and how far institutions are willing to go to fix failures. Using a recent case of facial detection and remote proctoring, I suggest “seeing like an algorithmic error” as a way to turn seemingly simple quirks and individually felt glitches into shared social consequences that shape social life—that is, into public problems.
Advances in data collection and processing have facilitated ultra- and infrasonic machine listening and learning. This requires the recognition of sonic privacy, protection for our “sonic data”: those representations or observations that define the characteristics of sound and its cognitive and emotive forces. This right would protect (non)participation in the public sphere.
Despite its local origins, Section 230 serves as a key architectural norm of the global internet: internet service providers are not typically responsible for the speech of their users. Section 230 underpins what we might describe as the International Law of Facebook—the global community guidelines written and enforced by internet platforms, largely allowing these platforms to regulate the speech on their platforms. Reviewing Section 230 cases involving foreign events, foreign parties, or foreign law, the essay reveals how Section 230 made the U.S. a safe home base from which to offer a global speech platform.
As Silicon Valley giants sketch their preferred future for digital advertising, an infrastructure with significant implications for life online and offline, there are startlingly few alternatives to their vision. In response, we propose "forgetful advertising", a vision for responsible digital advertising structured around a single design choice: avoiding the storage of behavioral data. Forgetful advertising can still target ads using information like geography, intent, context, and whatever else can be gleaned from a single interaction between a user and a website, but it cannot remember any previous interactions to inform its targeting. We believe our proposal can make digital advertising compatible with the values of human agency and privacy and offer it as a bottom-up solution for organizations that find existing digital advertising systems inconsistent with their values.
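The single design choice at the heart of the proposal can be sketched in code. The following is a hypothetical illustration (the names `choose_ad` and `AD_INVENTORY` are invented, not drawn from the article): an ad is selected using only signals present in the current request, and nothing about the user or the interaction is ever written down, so later requests cannot be informed by earlier ones.

```python
# Sketch of "forgetful advertising" (hypothetical interface): targeting
# uses only the single current interaction; no behavioral profile is
# read or stored.

AD_INVENTORY = {
    ("boston", "cooking"): "Local knife-sharpening service",
    ("boston", "news"): "Regional newspaper subscription",
    ("anywhere", "cooking"): "Cast-iron skillet",
}

def choose_ad(request):
    """Target on geography and page context gleaned from this request alone."""
    geo = request.get("geo", "anywhere")
    context = request.get("page_topic", "news")
    # Prefer a geo-specific ad, fall back to a context-only ad.
    ad = AD_INVENTORY.get((geo, context)) or AD_INVENTORY.get(("anywhere", context))
    # Deliberately no write step: the request is discarded after serving,
    # which is the whole of the "forgetful" guarantee.
    return ad

print(choose_ad({"geo": "boston", "page_topic": "cooking"}))
```

The design choice is enforced by omission: because the function has no store to read from or write to, behavioral tracking is impossible by construction rather than by policy.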
Tech drift is the phenomenon where a change in technological context alters policy outcomes. Tech drift can cause legal vulnerabilities. Socially empowering policies such as labor and employment laws can become inaccessible or ineffective because of changing technological contexts. But tech drift begets more than legal weakness. It also constitutes tech politics, as one of tech drift’s outcomes is a shift in the locus of struggles over policy outcomes: from policy enactment, blocking, or reform to the very shaping of tech context. Tech drift can be designed as a power move, reallocating power from one set of actors and institutions to others. Tech drift has profound effects on tech change’s ecosystem of actors and institutions. Specifically, I survey how drift divided losing coalitions and dissident groups. Using qualitative interviews, I document how Uber’s tech drift split labor actors on institutional, substantive, and jurisdictional grounds. Following these findings, I offer a set of structural remedies. These remedies aim at intervening in tech change’s ecosystem of actors, ethos, and the initial allocation and attainment of legal powers. With these novel intervention points, we can find new levers to bend the arc of tech change toward justice and democracy.
How can law help translate great ideas into great innovations? Venture capital (VC) markets play an increasingly important role in funding innovation, and they have benefitted from substantial public support. While venture capital is almost synonymous with innovation, the ability of VC markets to catalyze innovations is often overstated. This Article examines the innovation limitations of VC and the role of law and policy in enhancing its innovative capacity. It draws upon academic commentary and original interviews with thirty-two early-stage investors, entrepreneurs, lawyers, and other innovation professionals in Northern California. This Article explores, in an integrated fashion, three mutually reinforcing features that limit the capacity of VC markets to fund a wide range of socially valuable innovations. First, social ties are critical to connecting entrepreneurs and venture capital. This phenomenon shrinks the pool of entrepreneurs with a realistic chance of obtaining funding and distorts capital allocations in favor of those with greater social capital. Second, VCs exhibit a surprising degree of herd mentality, investing in trendy technologies while shying away from truly radical innovations. Finally, the VC business model favors innovations that promise large returns in a medium time frame with minimal risk. Such criteria necessarily deprioritize large swaths of socially valuable innovations with longer, riskier development timelines. While such practices are privately expedient in many contexts, they may leave significant profits unrealized. At a societal level, such practices are problematic to the extent that policymakers support VC markets to help effectuate innovation policy objectives. This Article argues that law and policy have an important role to play in addressing these structural deficiencies and enhancing the innovative capacity of venture capital. It proposes a holistic suite of prescriptions to increase diversity and inclusiveness within the VC-startup ecosystem and to nudge VCs toward greater funding of certain technologies of high social value.
The introduction of any new technology challenges judges to determine how it fits into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough. The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges. This technology will alter the foundation of the largest source of civil liability in the United States. Although regulatory agencies will determine when and how autonomous cars may be placed into service, judges will likely play a central role in defining the standards of liability for them. Will judges express the same negative biases that lay people commonly express against technological innovations? In this Article, we present data from 967 trial judges showing that judges are biased against self-driving vehicles. They both assigned more liability to a self-driving vehicle than to a human-driven vehicle for an accident caused under identical circumstances and treated injuries caused by a self-driving vehicle as more serious than identical injuries caused by a human-driven vehicle. These results suggest that judges harbor suspicion or animosity towards autonomous vehicles that might lead them to burden manufacturers and consumers of autonomous vehicles with more liability than the tort system currently imposes on conventional vehicles.