Vol. 26 - Issue 3

Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond

Authors: 
Sandra Wachter
Volume: 
26
Issue: 
Spring
Starting Page Number: 
671
Year: 
2024
Preview: 
Predictive and generative artificial intelligence (AI) have both become integral parts of our lives through their use in making highly impactful decisions. AI systems are already deployed widely—for example, in employment, healthcare, insurance, finance, education, public administration, and criminal justice. Yet severe ethical issues, such as bias and discrimination, privacy invasiveness, opaqueness, and environmental costs of these systems, are well known. Generative AI (GAI) creates hallucinations and inaccurate or harmful information, which can lead to misinformation, disinformation, and the erosion of scientific knowledge. The Artificial Intelligence Act (AIA), Product Liability Directive, and the Artificial Intelligence Liability Directive reflect Europe’s attempt to curb some of these issues. With the legal reach of these policies going far beyond Europe, their impact on the United States and the rest of the world cannot be overstated.

Digital Griots, Wampum Codes, and Choreo-Robotics: Artist-Technologists of Color Reshaping the Digital Public Sphere

Authors: 
Michele Elam
Volume: 
26
Issue: 
Spring
Starting Page Number: 
645
Year: 
2024
Preview: 
This Essay examines racial formation in the context of the digital public sphere with a focus on how artificial intelligence (AI) systems’ understanding of social identities—especially racial identities—translates into real-world policy decisions about “bias,” “risk,” and “impact” as commonly interpreted by industry, government, and philanthropy. Drawing on examples in business advertising and consulting, I illustrate the ethical costs of uncritically integrating the notion of race as a data point, a drop-down menu of physical features one can mix and match. I turn then to three case studies of artist-technologists of color whose work models radical alternatives to techno-instrumentalist notions of race that often invisibly inform the quantification of social justice impact (sometimes referred to as effective altruism or strategic philanthropy). Rashaad Newsome, Amelia Winger-Bearskin, and Catie Cuan challenge discourses that frame racialized populations primarily in terms of negatively “impacted” communities, as the grateful recipients of largesse deserving of “access” to digital tools or technological literacy, or as those who can be best uplifted through the so-called “blessings of scale” and other maximalist approaches to social impact. In radical contrast, these three artist-technologists refigure those “impacted” as agentive co-producers of knowledge and imagination. Their art and performance engage alternative cultural values and metrics that counter the technological vision embracing Mark Zuckerberg’s refrain of “move fast and break things.” Instead, the aesthetic values of friction, duration, and liveness in their work offer counter-narratives and experience to more fully effect both joy and justice in the digital public sphere.

Two AI Truths and a Lie

Authors: 
Woodrow Hartzog
Volume: 
26
Issue: 
Spring
Starting Page Number: 
595
Year: 
2024
Preview: 
Industry will take everything it can in developing artificial intelligence (AI) systems. We will get used to it. This will be done for our benefit. Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. In this Essay, I argue that no matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good.

Digital Labor Platforms as Machines of Production

Authors: 
Veena Dubal
Vitor Araújo Filgueiras
Volume: 
26
Issue: 
Spring
Starting Page Number: 
560
Year: 
2024
Preview: 
Debate about the regulation of “digital labor platforms” abounds globally among scholars, legislators, and other analysts concerned about the future of work(ers). In 2024, the European Parliament passed a first-of-its-kind “Platform Work Directive” aimed at extending and growing protections for workers who labor for firms that utilize “automated systems to match supply and demand for work.” In this Essay, we consider the problematics of regulating the digital labor platform as a distinct subtype of firm and “platform work” as a novel form of employment. We propose that digital platforms are not firms, but rather labor management machines. Thus, the Directive is vastly underinclusive in its extension of much-needed rights to workers who toil under algorithmic decision-making systems.

Who Wants a Robo-Lawyer Now?: On AI Chatbots in China’s Public Legal Services Sector

Authors: 
Xin Dai
Volume: 
26
Issue: 
Spring
Starting Page Number: 
527
Year: 
2024
Preview: 
The recent popularization of generative artificial intelligence (GAI) applications, such as ChatGPT and other large language model (LLM)-powered chatbots, has led many to expect transformative changes in legal practice. However, the actual use of LLM chatbots in the legal field has been limited. This Essay identifies China’s public legal services (PLS) sector as a potential use case where AI chatbots may become widely and quickly adopted. China’s political economy is generally conducive to such adoption, as the government must rely on technological solutions to fulfill its commitment to universal access to PLS. The Legal Tech industry is keen to find a practical use case for its LLM chatbots, which with proper development and fine-tuning could function adequately in meeting a significant popular demand for basic legal information. The use of AI chatbots in China’s PLS sector could contribute not only to narrowing the gap in access to justice but also to strengthening the degree of legality in governance that the country has achieved through years of deliberate efforts. But such use could also raise a range of concerns, including loss of confidentiality, errors and inaccuracies, fraud and manipulation, and unequal service quality. On balance, however, AI chatbots offer benefits in the PLS sector as a positive innovation, and the risks associated with their adoption appear manageable through pragmatic approaches.
