Current Issue: Volume 24
This Article is the first to identify the New Fintech Federalism, examining how its disparate set of legal experiments could revolutionize U.S. financial regulation. It also details a statutory intervention that would promote the interests of entrepreneurs and consumer protection advocates alike by codifying this emergent approach. Far from jettisoning federalism, this Article’s proposed legislation would harness the distinctive strengths of the state and federal governments to bolster America’s economic vitality and global competitiveness.
This Article presents Orlando, a programming language for expressing conveyances of future interests, and Littleton, a freely available online interpreter (at https://conveyanc.es) that can diagram the interests created by conveyances and model the consequences of future events. Doing so has three payoffs. First, formalizing future interests helps students and teachers of the subject by allowing them to visualize and experiment with conveyances. Second, the process of formalization is itself deeply illuminating about property doctrine and theory. And third, the computer-science subfield of programming language theory has untapped potential for legal scholarship: the programming-language approach takes advantage of the linguistic parallels between legal texts and computer programs.
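To make the idea of formalizing conveyances concrete, here is a minimal sketch, in Python rather than Orlando’s actual syntax, of how a conveyance such as “To A for life, then to B” might be modeled as a sequence of interests whose state changes as events occur. All names and structures here are illustrative assumptions, not the Article’s implementation.

```python
# Hypothetical sketch (NOT Orlando's actual syntax or semantics):
# represent a conveyance as an ordered list of interests, each of which
# may terminate on a named event; the first unexpired interest is in
# possession, and later interests are future interests awaiting events.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Interest:
    holder: str                 # who holds the interest
    until: Optional[str] = None # event that terminates it (None = fee simple)


def current_holder(interests):
    """The first unexpired interest is the present possessory estate."""
    return interests[0].holder if interests else None


def apply_event(interests, event):
    """Model a future event: interests terminated by it drop away,
    so the next interest in line takes effect."""
    return [i for i in interests if i.until != event]


# "To A for life, then to B"
conveyance = [Interest("A", until="A dies"), Interest("B")]
print(current_holder(conveyance))              # A holds a life estate
conveyance = apply_event(conveyance, "A dies")
print(current_holder(conveyance))              # B now takes in fee simple
```

Even this toy model shows the payoff the abstract describes: once a conveyance is data rather than prose, one can diagram its interests and replay hypothetical events against it.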
Artificial intelligence (AI) has been heralded for its potential to help close the access-to-justice gap. It can increase efficiencies, democratize access to legal information, and help consumers solve their own legal problems or connect them with licensed professionals who can. But some fear that increased reliance on AI will lead to one or more two-tiered systems: the poor might be stuck with inferior AI-driven assistance; only expensive law firms might be able to effectively harness legal AI; or AI’s impact might not disrupt the status quo, in which only some can afford any type of legal assistance. The realization of any of these two-tiered systems would risk widening the justice gap. But the current regulation of legal services fails to account for the practical barriers preventing effective design of legal AI across the landscape, which make each of these two-tiered systems more likely.
Much scholarly attention has recently been devoted to ways in which artificial intelligence (AI) might weaken formal political democracy, but little attention has been devoted to the effect of AI on “cultural democracy”—that is, democratic control over the forms of life, aesthetic values, and conceptions of the good that circulate in a society. This work is the first to consider in detail the dangers that AI-driven cultural recommendations pose to cultural democracy. This Article argues that AI threatens to weaken cultural democracy by undermining individuals’ direct and spontaneous engagement with a diverse range of cultural materials. It further contends that United States law, in its present form, is ill-equipped to address these challenges, and suggests several strategies for better regulating culture-mediating AI. Finally, it argues that while such regulations might run afoul of contemporary First Amendment doctrine, the most normatively attractive interpretation of the First Amendment not only allows but encourages such interventions.
Aspiring scholars are often asked: What is your research agenda? If my answer were honest, it would unapologetically be that I have no research agenda and that I, like Toni Morrison and possibly many others, mostly write what I want to read but has yet to be written. As one’s ideas, especially one’s perspective and view of the whole, change with experience, one’s writings become, after all, just little fragments of her fleece left upon the hedges of life.
Algorithmic “mistakes” are windows into the social and technological forces that create computational systems. They reveal the assumptions that drive system designers, the power that some people have to define success and call out errors, and how far institutions are willing to go to fix failures. Using a recent case of facial detection and remote proctoring, I suggest “seeing like an algorithmic error” as a way to turn seemingly simple quirks and individually felt glitches into shared social consequences that shape social life—that is, into public problems.
Advances in data collection and processing have facilitated ultra- and infra-sonic machine listening and learning. This requires recognizing a right to sonic privacy that protects our “sonic data”: those representations or observations that define the characteristics of sound and its cognitive and emotive forces. This right would protect (non)participation in the public sphere.
Despite its local origins, Section 230 serves as a key architectural norm of the global internet: internet service providers are not typically responsible for the speech of their users. Section 230 underpins what we might describe as the International Law of Facebook—the global community guidelines written and enforced by internet platforms, largely allowing these platforms to regulate the speech on their platforms. Reviewing Section 230 cases involving foreign events, foreign parties, or foreign law, the essay reveals how Section 230 made the U.S. a safe home base from which to offer a global speech platform.
As Silicon Valley giants sketch their preferred future for digital advertising, an infrastructure with significant implications for life online and offline, there are startlingly few alternatives to their vision. In response, we propose "forgetful advertising", a vision for responsible digital advertising structured around a single design choice: avoiding the storage of behavioral data. Forgetful advertising can still target ads using information like geography, intent, context, and whatever else can be gleaned from a single interaction between a user and a website, but it cannot remember any previous interactions to inform its targeting. We believe our proposal can make digital advertising compatible with the values of human agency and privacy and offer it as a bottom-up solution for organizations that find existing digital advertising systems inconsistent with their values.
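The single design choice above can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration (the field names, scoring rule, and ad inventory are all invented for this example): ads are scored using only the current request, and nothing about the interaction is retained afterward.

```python
# Hypothetical sketch of "forgetful" ad selection: targeting may use
# geography, page context, and anything else visible in the single
# current interaction, but no behavioral history is stored or consulted.

def select_ad(request, inventory):
    """Score each candidate ad against the current request alone."""
    def score(ad):
        s = 0
        if ad.get("region") == request.get("region"):
            s += 1  # geographic match
        # contextual match: overlap between ad keywords and page topics
        s += len(set(ad.get("keywords", [])) & set(request.get("page_topics", [])))
        return s
    return max(inventory, key=score)


request = {"region": "US-MN", "page_topics": ["bicycles", "commuting"]}
inventory = [
    {"id": "ad-1", "region": "US-MN", "keywords": ["bicycles"]},
    {"id": "ad-2", "region": "US-CA", "keywords": ["cars"]},
]
print(select_ad(request, inventory)["id"])  # ad-1: matches region and topic
# Crucially, `request` is never written to any store; the next request
# is scored entirely from scratch, with no memory of this one.
```

The design choice is enforced by omission rather than policy: because the selection function has no access to any datastore of past interactions, remembering users is structurally impossible rather than merely forbidden.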