Volume 21
The Supreme Court’s decision in WesternGeco LLC v. ION Geophysical Corp. had the potential to reach into a number of trans-substantive areas, including the nature of compensatory damages, proximate cause, and extraterritoriality. Instead of painting with a broad brush, however, the Supreme Court opted to take a modest, narrow approach to the issue of whether lost profits for foreign activity were available to a patent holder for infringement under 35 U.S.C. § 271(f)(2). In addressing this issue, the Court utilized the two-step framework for assessing the extraterritorial reach of U.S. law that it adopted in RJR Nabisco Inc. v. European Community. Step one under RJR Nabisco entails an assessment of whether the presumption against extraterritoriality has been rebutted. Step two requires a court to examine whether activity relevant to the focus of the statute occurred within the United States, even if other acts occurred abroad. If so, then the statute still applies to the conduct. The Court skipped step one in WesternGeco, but its analysis of step two confirmed that the territorial limits of damages are tied to the corresponding liability provision. Ultimately, the Court allowed damages for the relevant foreign activity. This decision clarified a few important aspects of the extraterritorial application of U.S. law. By skipping step one of RJR Nabisco, the Court made clear that the presumption against extraterritoriality is distinct from the focus analysis of step two. The Court passed on the opportunity to elaborate further on step one and to answer definitively whether the presumption applies to remedial provisions. The Court did elaborate on step two and embraced a methodology that tied the extraterritorial reach of a general remedy provision to the corresponding liability provision. The Court’s decision also leaves a number of questions open. Specifically, it remains unclear whether the Federal Circuit’s decisions in Power Integrations, Inc. v. Fairchild Semiconductor International, Inc. and Carnegie Mellon University v. Marvell Technology Group, Ltd. survive WesternGeco, along with other decisions regarding the extraterritorial reach of U.S. patent law. I contend that the ultimate conclusions in Power Integrations and Carnegie Mellon are correct, even though the methodology used in the original decisions was wrong. I also discuss how the Court failed to explore the important role that proximate cause may play in future patent cases, particularly those involving global theories of damages. The Federal Circuit could—and should—embrace a narrower conception of proximate cause to limit these types of global theories of patent damages.
A “Democracy Index” is published annually by the Economist. For 2017, it reported that half of the world’s countries scored lower than the previous year. This included the United States, which was demoted from “full democracy” to “flawed democracy.” The principal factor was “erosion of confidence in government and public institutions.” Interference by Russia and voter manipulation by Cambridge Analytica in the 2016 presidential election played a large part in that public disaffection. Threats of these kinds will continue, fueled by the growing deployment of artificial intelligence (AI) tools to manipulate the preconditions and levers of democracy. Equally destructive is AI’s threat to decisional and informational privacy. AI is the engine behind Big Data Analytics and the Internet of Things. While these technologies confer some consumer benefit, their principal function at present is to capture personal information, create detailed behavioral profiles, and sell us goods and agendas. Privacy, anonymity, and autonomy are the main casualties of AI’s ability to manipulate choices in economic and political decisions. The way forward requires greater attention to these risks at the national level, and attendant regulation. In its absence, technology giants, all of whom are heavily investing in and profiting from AI, will dominate not only the public discourse, but also the future of our core values and democratic institutions.
Personal jurisdiction has been a time-honored judicial concept since the 1800s. The Supreme Court has considered the ramifications of personal jurisdiction and its application in various factual scenarios over the years, often leading to plurality opinions in which the Justices disagreed on the reasoning behind the judgments. The confusion resulting from this lack of consensus over the doctrine’s application has been further compounded by advances in technology. Technology has enabled people to connect in new ways, and the Court has struggled to reconcile this with the traditional minimum contacts analysis it first employed in International Shoe v. Washington. Virtual Private Networks and proxies facilitate internet connections to servers located outside internet users’ home states. Some internet users rely on these technologies to target a specific geographic area in order to obtain access to geographically restricted content. Others do not intentionally target a location, but have only a general awareness of their connection. Still others have no knowledge of the ultimate location of their IP address. By accessing servers outside their home state, these internet users could be establishing connections that give rise to the exercise of personal jurisdiction. This Article argues that the proper way to address this challenge is to continue to adapt the traditional personal jurisdiction analysis of International Shoe, with a focus on whether the user intentionally availed themselves of a particular forum.
At the November 2017 oral arguments in the case Carpenter v. United States, Justice Sotomayor commented that many individuals even carry their cell phones into their beds and public restrooms: “It’s an appendage now for some people.” On June 22, 2018, in a 5-4 opinion written by Chief Justice Roberts and joined by Justices Ginsburg, Breyer, Sotomayor, and Kagan, the Supreme Court held that the government will generally need a warrant to access cell-site location information (CSLI). Ostensibly, Carpenter is only about CSLI, and the language of the decision carefully limits its application. However, the Court’s reasoning behind why the third-party doctrine should not apply is broadly applicable: the information was involuntarily exposed, incidental to merely having a cell phone, which is an item necessary for functioning in modern society. Indeed, technology’s constant forward march leads one to wonder, what privacy issue awaits around the next corner? What technological innovation will pose yet another Fourth Amendment challenge? Our cell phones commonly have health apps that monitor our activity, sleep, mindfulness, and nutrition. Internet of Things (IoT) devices, which have the ability to connect to and interface with a network, include “smart” light bulbs, refrigerators, and even a mattress cover that starts your Bluetooth or WiFi-enabled coffee maker when you wake up in the morning. The private genomic testing industry, too, in which intimate genealogical and genetic health information is sent to third-party laboratories and medical researchers, and even sold to pharmaceutical companies for profit, has seen tremendous growth recently. IoT devices and private DNA testing seem vastly different from each other and from cell phones, and yet both are increasingly popular consumer technologies whose functioning, by design, necessitates a third party. Like CSLI, the data sent to third parties by smart devices and genomic testing services involves no voluntary act, let alone affirmative sharing. This lack of voluntariness was a significant part of the Carpenter Court’s basis for holding—in a decision lauded by privacy advocates—that the cell phone owner has an expectation of privacy in CSLI, despite the fact that the data is owned by a third party. Thus, notwithstanding its limiting language, Carpenter opens the door to a slew of questions about consumers’ privacy expectations in multitudes of other burgeoning technologies that, like cell phones and the location data they produce, also necessitate a third party. This Article, therefore, proposes extending the third-party doctrine in Carpenter’s wake to reflect the realities of the digital age, both to protect privacy and to provide some limits to the third-party doctrine. Given that a third party has control over a consumer’s personal data, a meaningful test for whether an expectation of privacy remains or has been forfeited should include two inquiries: first, whether the consumer understands that the technology’s very design necessitates a third party; and second, whether the consumer has a meaningful opportunity to opt out of sharing data with that third party. This Article begins by describing the common law background that preceded Carpenter, explains why the Carpenter analysis is incomplete, and offers the new, extended test for the third-party doctrine as one that balances decisional analysis with technological reality and provides a principled framework to encompass technologies beyond CSLI.
Next, this Article offers normative explanations for why digital data is a square peg in the round hole of the third-party doctrine, but argues that privacy in the digital era should nonetheless survive the disconnect. Finally, this Article applies the newly extended third-party doctrine test to two specific examples of increasingly popular technologies in which private data is necessarily shared with third parties: IoT devices and private DNA testing. This Article illustrates the inability of smart devices and private genomic testing services to pass the two inquiries of the proposed extended test, and affirms the consumer’s expectation of privacy in the absence of any voluntary act.
This Article critically examines the analogies scholars use to explain the special relation between the author and her work that copyright law protects under the doctrine of moral rights. Authors, for example, are described as parents and their works as children. The goal of this Article is to determine “when to drop the analogy and get on with developing” the content of the relation between the author and the work. Upon examination, that moment approaches rather quickly: none of these analogies provides a helpful framework for understanding the purported relation. At best, these analogies are first attempts at describing the relation between the author and her work. At worst, they are misleading rhetorical devices used to gain support for moral rights. I therefore treat the analogies as valuable starting points for thinking about the relation between the author and her work, rather than as explanations of the nature of that relation. Even when viewed this way, however, the analogies raise more questions than they purport to answer. Because the analogies discussed do not explain the author-work relation, scholars must look elsewhere for arguments to support moral rights.
As the global policymaking capacity and influence of non-state actors in the digital age rapidly increase, the protection of fundamental human rights by private actors becomes one of the most pressing issues in global governance. This Article combines the business & human rights and digital constitutionalist discourses, and uses the changing institutional context of Internet governance and the Internet Corporation for Assigned Names and Numbers (“ICANN”) as a case study to argue that economic incentives fundamentally act against the voluntary protection of human rights by informal actors in the digital age. I further contend that the global policymaking role and increasing regulatory power of informal actors such as ICANN necessitate a reframing of their legal duties by subjecting them to directly binding human rights obligations in international law. I argue that such reframing is particularly important in the digital age for three reasons. First, it is needed to rectify an imbalance between hard legal commercial obligations and soft human rights law. This imbalance is well reflected in ICANN’s policies. Second, binding obligations would ensure that individuals whose human rights have been affected can access an effective remedy. This is not envisaged under the new ICANN bylaw on human rights precisely because of the fuzziness around the nature of ICANN’s obligations to respect internationally recognized human rights in its policies. Finally, I suggest that because private actors such as ICANN are themselves engaged in the balancing exercise around such rights, an explicit recognition of their human rights obligations is crucial for the future development of access to justice in the digital age.
In a controversial decision in Goldman v. Breitbart, the U.S. District Court for the Southern District of New York ruled that, by embedding a tweet containing a copyrighted photograph in a webpage, the defendants violated the copyright owner’s exclusive display right. In reaching this decision, the Goldman court explicitly rejected the “server test,” which was first established over a decade ago by the Ninth Circuit in Perfect 10 v. Google and has since become a de facto bright-line rule upon which many Internet actors rely. Because of the ubiquity of website embedding, this ruling has created significant legal uncertainty for online publications. Through the lens of statutory interpretation, this Note concurs with the Goldman court that the “server test” has a weak legal footing. However, this Note explains that none of the alternative defense mechanisms suggested by the Goldman court, including fair use, the DMCA safe harbor, and the implied license doctrine, is adequate to protect legitimate embedding from copyright liability. Accordingly, this Note advocates for the enactment of a statutory exemption to protect legitimate embedding on the Internet, which promises to serve as a bright-line rule for online publications and to resolve the legal uncertainty created by the Goldman court.
Advances in healthcare artificial intelligence (AI) will seriously challenge the robustness and appropriateness of our current healthcare regulatory models. These models primarily regulate medical persons using the “practice of medicine” touchstone or medical machines that meet the FDA definition of “device.” However, neither model seems particularly appropriate for regulating machines practicing medicine or the complex man-machine relationships that will develop. Additionally, healthcare AI will join other technologies such as big data and mobile health apps in highlighting current deficiencies in healthcare regulatory models, particularly in data protection. The article first suggests a typology for healthcare AI technologies based in large part on their potential for substituting for humans, and follows with a critical examination of the existing healthcare regulatory mechanisms (device regulation, licensure, privacy and confidentiality, reimbursement, market forces, and litigation) as they would be applied to AI. The article then explores the normative principles that should underlie regulation and sketches out the imperatives for a new regulatory structure, such as quality, safety, efficacy, a modern data protection construct, cost-effectiveness, empathy, health equity, and transparency. Throughout, it is argued that the regulation of healthcare AI will require some fresh thinking underpinned by broadly embraced ethical and moral values, and the adoption of holistic, universal, contextually aware, and responsive regulatory approaches to what will be major shifts in the man-machine relationship.
Artificial intelligence (AI) looks to transform the practice of medicine. As academics and policymakers alike turn to legal questions, a threshold issue involves what role AI will play in the larger medical system. This Article argues that AI can play at least four distinct roles in the medical system, each potentially transformative: pushing the frontiers of medical knowledge to increase the limits of medical performance, democratizing medical expertise by making specialist skills more available to non-specialists, automating drudgery within the medical system, and allocating scarce medical resources. Each role raises its own challenges, and an understanding of the four roles is necessary to identify and address major hurdles to the responsible development and deployment of medical AI.
Suicidal thoughts and behaviors are an international public health problem contributing to 800,000 annual deaths and up to 25 million nonfatal suicide attempts. In the United States, suicide rates have increased steadily for two decades, reaching 47,000 per year and surpassing annual motor vehicle deaths. This trend has prompted government agencies, healthcare systems, and multinational corporations to invest in artificial intelligence-based suicide prediction algorithms. This article describes these tools and the underexplored risks they pose to patients and consumers. AI-based suicide prediction is developing along two separate tracks. In “medical suicide prediction,” AI analyzes data from patient medical records. In “social suicide prediction,” AI analyzes consumer behavior derived from social media, smartphone apps, and the Internet of Things (IoT). Because medical suicide prediction occurs within the context of healthcare, it is governed by the Health Insurance Portability and Accountability Act (HIPAA), which protects patient privacy; the Federal Common Rule, which protects the safety of human research subjects; and general principles of medical ethics. Medical suicide prediction tools are developed methodically in compliance with these regulations, and their developers’ methods are published in peer-reviewed academic journals. In contrast, social suicide prediction typically occurs outside the healthcare system, where it is almost completely unregulated. Corporations maintain their suicide prediction methods as proprietary trade secrets. Despite this lack of transparency, social suicide predictions are deployed globally to affect people’s lives every day. Yet little is known about their safety or effectiveness. Though AI-based suicide prediction has the potential to improve our understanding of suicide while saving lives, it raises many risks that have been underexplored. These risks include the stigmatization of people with mental illness, the transfer of sensitive personal data to third parties such as advertisers and data brokers, unnecessary involuntary confinement, violent confrontations with police, exacerbation of mental health conditions, and paradoxical increases in suicide risk.
We are witnessing an interesting juxtaposition in medical decision-making. Increasingly, health providers are moving away from traditional substitute decision-making for patients who have lost decisional capacity, towards supported decision-making. Supported decision-making increases patient autonomy because the patient—with the support and assistance of others—remains the final decision-maker. By contrast, doctors’ decision-making capacity is diminishing due to the increasing use of AI to diagnose and treat patients. Health providers are moving towards what one might characterize as substitute decision-making by AIs. In this article, we contemplate two questions. First, does thinking about AI as a substitute decision-maker add value to the development of AI policy within the health sector? Second, what might the comparison with traditional substitute decision-making teach us about the agency and decisional autonomy of doctors, as AI further automates medical decision-making?
What does it mean to give professional advice, and how do things change when various forms of technology, such as decision-support software or predictive advice-generating algorithms, are inserted into the process of professional advice-giving? Professional advice is valuable to clients because of the asymmetry between lay and expert knowledge: professionals have knowledge that their clients lack. But technology is increasingly changing the traditional process of professional advice-giving. This Article considers the introduction of artificial intelligence (AI) into the healthcare provider-patient relationship. Technological innovation in medical advice-giving occurs in a densely regulated space. The legal framework governing professional advice-giving exists to protect the values underlying the provider-patient relationship. This Article first sketches the regulatory landscape of professional advice-giving, focusing on the values protected by the existing legal framework. It then considers various technological interventions into the advice-giving relationship, identifying the changes that result. Finally, it outlines legal responses aimed at integrating AI-based innovations into medical advice-giving while at the same time upholding the values underlying the professional advice-giving relationship. To the extent the existing regulatory framework is responsive to these changes, it ought to be kept in place. But when the introduction of AI into medical advice-giving changes the dynamics of the relationship in a way that threatens the underlying values, new regulatory responses become necessary.
The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it was, and remains, the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts. Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent, in which the researcher fully and carefully explains study goals to subjects, is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to forms of data use and re-use as varied as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing. Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology. Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives.’ But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.
For well over a decade the U.S. Food and Drug Administration (FDA) has been told that its framework for regulating traditional medical devices is not modern or flexible enough to address increasingly novel digital health technologies. Very recently, however, the FDA introduced a series of digital health initiatives that represent important experiments in medical product regulation, departing from longstanding precedents applied to therapeutic products like drugs and devices. The FDA will experiment with shifting its scrutiny from the pre-market to the post-market phase, shifting the locus of regulation from products to firms, and shifting from centralized government review to decentralized non-government review. This Article evaluates these new regulatory approaches, explains how they depart from previous approaches, and discusses why these experiments themselves require evaluation moving forward.