Special Issue

Of Regulating Healthcare AI and Robots

Authors: 
Nicolas Terry
Volume: 
Issue: 
Fall
Starting Page Number: 
133
Year: 
2019
Preview: 
Advances in healthcare artificial intelligence (AI) will seriously challenge the robustness and appropriateness of our current healthcare regulatory models. These models primarily regulate medical persons using the “practice of medicine” touchstone or medical machines that meet the FDA definition of “device.” However, neither model seems particularly appropriate for regulating machines practicing medicine or the complex man-machine relationships that will develop. Additionally, healthcare AI will join other technologies such as big data and mobile health apps in highlighting current deficiencies in healthcare regulatory models, particularly in data protection. The article first suggests a typology for healthcare AI technologies based in large part on their potential for substituting for humans and follows with a critical examination of the existing healthcare regulatory mechanisms (device regulation, licensure, privacy and confidentiality, reimbursement, market forces, and litigation) as they would be applied to AI. The article then explores the normative principles that should underlie regulation and sketches out the imperatives for a new regulatory structure, such as quality, safety, efficacy, a modern data protection construct, cost-effectiveness, empathy, health equity, and transparency. Throughout, it is argued that the regulation of healthcare AI will require some fresh thinking underpinned by broadly embraced ethical and moral values, and adopting holistic, universal, contextually aware, and responsive regulatory approaches to what will be major shifts in the man-machine relationship.

Artificial Intelligence in the Medical System: Four Roles for Potential Transformation

Authors: 
W. Nicholson Price II
Volume: 
Issue: 
Fall
Starting Page Number: 
122
Year: 
2019
Preview: 
Artificial intelligence (AI) looks to transform the practice of medicine. As academics and policymakers alike turn to legal questions, a threshold issue involves what role AI will play in the larger medical system. This Article argues that AI can play at least four distinct roles in the medical system, each potentially transformative: pushing the frontiers of medical knowledge to increase the limits of medical performance, democratizing medical expertise by making specialist skills more available to non-specialists, automating drudgery within the medical system, and allocating scarce medical resources. Each role raises its own challenges, and an understanding of the four roles is necessary to identify and address major hurdles to the responsible development and deployment of medical AI.

Artificial Intelligence-Based Suicide Prediction

Authors: 
Mason Marks
Volume: 
Issue: 
Fall
Starting Page Number: 
98
Year: 
2019
Preview: 
Suicidal thoughts and behaviors are an international public health problem contributing to 800,000 annual deaths and up to 25 million nonfatal suicide attempts. In the United States, suicide rates have increased steadily for two decades, reaching 47,000 per year and surpassing annual motor vehicle deaths. This trend has prompted government agencies, healthcare systems, and multinational corporations to invest in artificial intelligence-based suicide prediction algorithms. This article describes these tools and the underexplored risks they pose to patients and consumers. AI-based suicide prediction is developing along two separate tracks. In “medical suicide prediction,” AI analyzes data from patient medical records. In “social suicide prediction,” AI analyzes consumer behavior derived from social media, smartphone apps, and the Internet of Things (IoT). Because medical suicide prediction occurs within the context of healthcare, it is governed by the Health Insurance Portability and Accountability Act (HIPAA), which protects patient privacy; the Federal Common Rule, which protects the safety of human research subjects; and general principles of medical ethics. Medical suicide prediction tools are developed methodically in compliance with these regulations, and the methods of their developers are published in peer-reviewed academic journals. In contrast, social suicide prediction typically occurs outside the healthcare system, where it is almost completely unregulated. Corporations maintain their suicide prediction methods as proprietary trade secrets. Despite this lack of transparency, social suicide predictions are deployed globally to affect people’s lives every day. Yet little is known about their safety or effectiveness. Though AI-based suicide prediction has the potential to improve our understanding of suicide while saving lives, it raises many risks that have been underexplored. The risks include stigmatization of people with mental illness, the transfer of sensitive personal data to third parties such as advertisers and data brokers, unnecessary involuntary confinement, violent confrontations with police, exacerbation of mental health conditions, and paradoxical increases in suicide risk.

AIs as Substitute Decision-Makers

Authors: 
Ian Kerr
Vanessa Gruben
Volume: 
Issue: 
Fall
Starting Page Number: 
78
Year: 
2019
Preview: 
We are witnessing an interesting juxtaposition in medical decision-making. Increasingly, health providers are moving away from traditional substitute decision-making for patients who have lost decisional capacity, towards supported decision-making. Supported decision-making increases patient autonomy as the patient—with the support and assistance of others—remains the final decision-maker. By contrast, doctors’ decision-making capacity is diminishing due to the increasing use of AI to diagnose and treat patients. Health providers are moving towards what one might characterize as substitute decision-making by AIs. In this article, we contemplate two questions. First, does thinking about AI as a substitute decision-maker add value to the development of AI policy within the health sector? Second, what might the comparison with traditional substitute decision-making teach us about the agency and decisional autonomy of doctors, as AI further automates medical decision-making?

Artificial Professional Advice

Authors: 
Claudia E. Haupt
Volume: 
Issue: 
Fall
Starting Page Number: 
55
Year: 
2019
Preview: 
What does it mean to give professional advice, and how do things change when various forms of technology, such as decision-support software or predictive advice-generating algorithms, are inserted into the process of professional advice-giving? Professional advice is valuable to clients because of the asymmetry between lay and expert knowledge: professionals have knowledge that their clients lack. But technology is increasingly changing the traditional process of professional advice-giving. This Article considers the introduction of artificial intelligence (AI) into the healthcare provider-patient relationship. Technological innovation in medical advice-giving occurs in a densely regulated space. The legal framework governing professional advice-giving exists to protect the values underlying the provider-patient relationship. This Article first sketches the regulatory landscape of professional advice-giving, focusing on the values protected by the existing legal framework. It then considers various technological interventions into the advice-giving relationship, identifying the changes that result. Finally, it outlines legal responses aimed at integrating AI-based innovations into medical advice-giving while at the same time upholding the values underlying the professional advice-giving relationship. To the extent the existing regulatory framework is responsive to these changes, it ought to be kept in place. But when the introduction of AI into medical advice-giving changes the dynamics of the relationship in a way that threatens the underlying values, new regulatory responses become necessary.

Big Data: Destroyer of Informed Consent

Authors: 
A. Michael Froomkin
Volume: 
Issue: 
Fall
Starting Page Number: 
27
Year: 
2019
Preview: 
The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts. Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent in which the researcher fully and carefully explains study goals to subjects is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to as varied forms of data use and re-use as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing. Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology. Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives.’ But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.

Digital Health and Regulatory Experimentation at the FDA

Authors: 
Nathan Cortez
Volume: 
Issue: 
Fall
Starting Page Number: 
4
Year: 
2019
Preview: 
For well over a decade the U.S. Food and Drug Administration (FDA) has been told that its framework for regulating traditional medical devices is not modern or flexible enough to address increasingly novel digital health technologies. Very recently, however, the FDA introduced a series of digital health initiatives that represent important experiments in medical product regulation, departing from longstanding precedents applied to therapeutic products like drugs and devices. The FDA will experiment with shifting its scrutiny from the pre-market to the post-market phase, shifting the locus of regulation from products to firms, and shifting from centralized government review to decentralized non-government review. This Article evaluates these new regulatory approaches, explains how they depart from previous approaches, and discusses why these experiments themselves require evaluation moving forward.
