Volume 27
AI is unfair. It can be inaccurate (in several ways), biased (in several ways, and to several groups), disproportionate, exploitable, and opaque. The policy world is awash in AI-governance frameworks, ethical guidelines, and other policy documents, but these lack concrete standards and provide little guidance on how to select between competing versions of (un)fairness. In other words, they abdicate the responsibility of setting priorities among values. At the same time, many of the policy documents harshly criticize AI and algorithmic tools for deficiencies in some particular aspect of fairness without considering whether alternative designs that fix the problem would make the system more “unfair” in other aspects. Ad-hoc criticism is abundant and hard to predict. This article offers a strategy for AI ethics officers to navigate the “abdication/ad-hoc criticism” problems in the regulatory landscape. After explaining the meaning and sources of the most important forms of AI unfairness, we argue that AI developers should make the inevitable trade-offs between fairness goals as consciously and intentionally as the context will allow. Beyond that, in the absence of clear legal requirements to prioritize one form of fairness over another, an algorithm that makes well-considered trade-offs between values should be deemed “fair enough.”
Public libraries face a digital lending crisis. Even as library patrons demand greater access to digital materials, eBook publishers have subjected libraries to onerous licensing terms. Some publishers are releasing new books only in digital formats, making it even more costly for libraries to maintain robust collections. eBook publishers also compel libraries to use specific digital lending platforms that pose risks to patron privacy. At the same time, many public libraries face budget cuts as well as politically motivated book bans, reducing their ability to meet local patrons’ needs and forcing patrons to search for materials from other sources. To better serve patrons, some libraries have resorted to self-help in the form of controlled digital lending (CDL): producing their own scans of printed materials, lending each digital copy to only one patron at a time, and making the print copy unavailable for the duration of the digital loan. However, under precedents interpreting the first sale rule and the fair use defense, CDL is likely to constitute copyright infringement. The better solution is to amend federal copyright law to ensure that nonprofit libraries can obtain eBook licenses on reasonable terms. Such an amendment could draw inspiration from the Model Law as well as the European Union’s rental right, and could take the form of either an exception or a compulsory license. Consistent with the long tradition of library exceptions already included in federal copyright law, such an amendment would recognize the critical role that libraries play in maintaining an informed electorate.
Increasingly, social media companies have engaged in the creation, development, and deployment of “worlds” within a virtual reality setting, leading to significant interactions among users within these engineered spaces. However, this expansion has also been accompanied by harms. While some harms are unique to immersive reality technology, many mirror harms that occur in the analog environment, including fraud, theft, verbal abuse, and child sexual exploitation. Others replicate harms that have already exploded in non-immersive online spaces, including image-based sexual exploitation, cyberstalking, and invasion of privacy. Unfortunately, the architecture and infrastructure of these spaces have created what we coin here a “veil of scale”: a shield behind which bad actors are able to hide and which criminal and civil actions are systemically unable to reach. Moreover, because of Section 230 of the Communications Decency Act, which has been consistently held to limit social media platforms' liability for third-party “content,” plaintiffs who attempt to make themselves whole by suing the platforms have routinely been thwarted by courts. In this Article, we make the case for using premises liability doctrine within the metaverse to address these harms and hold platform companies accountable. Specifically, using this doctrine to hold corporations liable for harms within their engineered venues would incentivize platforms to use their superior knowledge of ongoing risks within their properties to prevent harm to others, just as premises law has done with regard to physical space for centuries. The premises framework provides a path of redress for victims of foreseeable, preventable, and egregious harm, while also recognizing that not all harms are preventable, and not all precautions are reasonable. As we face emerging harms facilitated by a new, engineered space of interaction, premises liability offers a familiar legal paradigm that (1) has sound jurisprudential foundations, (2) is well-aligned, for concrete technological reasons, with dilemmas of place-built risk and third-party harms, and therefore (3) can be applied with minimal adjustments to real-world harms effectuated via the metaverse.
Privacy law has long centered on the individual. But we observe a meaningful shift toward group harms and group rights. There is growing recognition that data-driven practices, including the development and use of artificial intelligence (AI) systems, affect not just atomized individuals but also their neighborhoods and communities, including and especially situationally vulnerable and historically marginalized groups. This Article explores a recent shift in both data privacy law and the newly developing law of AI: a turn toward stakeholder participation in the governance of AI and data systems, specifically by impacted groups, often though not always representing historically marginalized communities. We chart this development across an array of recent laws in both the United States and the European Union. We explain the reasons for the turn, both theoretical and practical. We then offer an analysis of the legal scaffolding of impacted stakeholder participation, establishing a catalog of both existing and possible interventions. We close with a call for reframing impacted stakeholders as rights-holders, and for recognizing several variations on a group right to contest AI systems, among other collective means of leveraging and invoking rights individuals have already been afforded.
Faced with spiking computer crime, the United Nations adopted a global cybercrime convention in December 2024. This watershed moment was the result of rushed, combative negotiations that involved a wide range of stakeholders. Occupied with contentious fights over overbroad substantive provisions, negotiating states paid little attention to the convention’s jurisdictional provisions. Yet a provision granting states jurisdiction over crimes committed anywhere in the world against their nationals, known as passive personality jurisdiction, represents a major expansion of jurisdiction under international and domestic law. Adoption of this type of jurisdiction in the treaty consummates a rise that has taken it from spurned to ubiquitous in a few short decades. Passive personality jurisdiction threatens sovereignty, due process, and human rights. It remains both ill-advised and unnecessary. Other jurisdictional bases better address the challenges posed by cybercrime. But if passive personality jurisdiction is here to stay, states can take steps to mitigate its harm: from limiting it to violent, universal offenses to taking unilateral measures that impose costs on abusive passive personality prosecutions.