Significant attention has been devoted to the question of how best to govern artificial intelligence (AI). In addition to legislation, many policy proposals focus on extra-legal regulatory instruments. Notably, AI evaluations offer a particularly attractive option, imposing seemingly neutral measurements across the diverse contexts in which AI operates. Because AI evaluations are driven by a wide range of actors, their adoption as a governance tool is shifting power in AI policymaking. In particular, the companies that create AI are also key players in designing and marketing AI evaluations. This Essay examines how large technology companies and government actors conceptualize self-regulation by technology companies as a legitimate policy intervention. We note that AI evaluations are often described using the language of standards, another, more established soft law regulatory instrument. Drawing on the history of standards, we discuss how AI companies leverage the metaphor of standards to describe benchmarks and evaluations in order to legitimate corporate expertise. We then examine the implications of this metaphor, describing where it is useful in the context of AI and where it obscures important policy decisions.
Privacy is a fundamental right. I begin with this statement, which is both widely recognized as fact and heavily contested in its meaning. Certainly, it has been contested in practice, especially in recent decades, as data flows have arisen, grown to rivers, expanded to floods, and reshaped the economy. But we should not conflate the challenges that practical economic choices pose for fundamental rights (and that fundamental rights may pose for those who desire to collect, use, sell, and share personal data) with the fundamental rights themselves. Accordingly, if we take our given topic, “Governing Data,” literally, we should consider privacy a principal component of data governance. Relatedly, we should consider issues of “privacy” as compared to “data protection,” and approaches to “informational privacy” (in data) in relation to autonomy and other fundamental interests as they are expressed in privacy protections. With that in mind, I am going to discuss the role of state privacy law in data governance, focusing on California. That’s partly because I am standing here as the Board Chairperson of the California Privacy Protection Agency, and partly because I think California provides an illuminating example of policymakers’, and the public’s, ongoing tussles with the governance of personal data. Not only does California enshrine the right to privacy in its state constitution, but it has also led the way in innovating privacy protections as data has become more and more economically and societally significant.
The recent surge in regulation seeking to establish age-based governance online is part of a decades-long effort to create online zoning. It is driven by the active development of technologies that estimate or verify user age based on various characteristics of users, their credentials, or their activities. These developments, however, have heightened prevailing concerns that online age-gating technology will inevitably be abused and misused, causing a variety of privacy harms and rights infringements. This paper examines this ongoing debate by bridging technical and legal scholarship to explore the current state of online age-based governance. We discuss the legal and policy landscape and the status of online age-gating technologies, and we offer recommendations to guide legal and technological scholarship and practice. Our interdisciplinary assessment is particularly important and timely, given the recent flurry of state and federal laws that aim to implement age gating online and the ongoing litigation challenging such laws.