The Record

Suicidal thoughts and behaviors are an international public health problem, contributing to roughly 800,000 deaths and up to 25 million nonfatal suicide attempts annually. In the United States, suicide rates have risen steadily for two decades, reaching 47,000 deaths per year and surpassing annual motor vehicle fatalities. This trend has prompted government agencies, healthcare systems, and multinational corporations to invest in artificial intelligence-based suicide prediction algorithms. This article describes these tools and the underexplored risks they pose to patients and consumers.

Artificial intelligence (AI) is poised to transform the practice of medicine. As academics and policymakers alike turn to the legal questions it raises, a threshold issue is what role AI will play in the larger medical system.

Advances in healthcare AI will seriously challenge the robustness and appropriateness of our current healthcare regulatory models. These models primarily regulate medical persons using the "practice of medicine" touchstone, or medical machines that meet the FDA definition of a "device." Neither model, however, seems particularly well suited to regulating machines that practice medicine or the complex human-machine relationships that will develop.

The Supreme Court's decision in WesternGeco LLC v. ION Geophysical Corp. had the potential to reach a number of trans-substantive areas, including the nature of compensatory damages, proximate cause, and extraterritoriality. Instead of painting with a broad brush, however, the Court opted for a modest, narrow approach to the question of whether lost profits from foreign activity are available to a patent holder for infringement under 35 U.S.C. § 271(f)(2).