The Reasonable Robot Standard: Bringing Artificial Intelligence Law into the 21st Century

By: Michael Conklin

Authored on: Friday, September 4, 2020

“The new generations bring with them their new problems which call for new rules, to be patterned, indeed, after the rules of the past, and yet adapted to the needs and justice of another day and hour.”
- Benjamin Cardozo, 1925

This is a review of The Reasonable Robot: Artificial Intelligence and the Law, by Ryan Abbott.[1] The book does an excellent job of providing insight into the legal challenges that arise from the proliferation of artificial intelligence (AI).[2] It is well organized, divided into the four main areas of AI legal impact: tax, tort, intellectual property, and criminal law. While each area can be read on its own, all four share an underlying theme: as AI increasingly occupies roles once held by people, the law will need to treat it more like a person. This review largely praises Abbott’s common-sense proposals for legal changes that accommodate AI technology, but it also discusses areas of criticism.

Tax

Abbott is technically correct when he states, “AI does not pay income taxes or generate employment taxes. It does not purchase goods and services, so it is not charged sales taxes. It does not purchase or own property, so it does not pay property taxes.”[3] But it is highly misleading to claim that “[i]f all work were to be automated tomorrow, most of the tax base would immediately disappear.”[4] This claim neglects the human labor required to design, code, deliver, maintain, and replace the technology that provides the automation. All of the people involved in those processes would pay taxes. Furthermore, under the current progressive federal tax rates, higher-wage earners (such as computer programmers) pay more federal tax, both in real dollars and as a percentage of income, than lower-wage earners (such as the workers being replaced by robots). It is therefore unclear that replacing the latter with the former would produce a net decrease in tax revenue. The last sixty years of tax-revenue data support this: the stark trend is that the more technologically advanced society becomes, the more, not less, tax revenue is raised.[5]

While Abbott may be misguided as to the effects of increased automation on tax revenues, his assessment of the tradeoffs businesses face between human labor and automation is valuable. Current employment law already heavily incentivizes automation over human labor, even before tax policy is considered. Robots are never paid unemployment benefits or workers’ compensation; they never create liability for sexual harassment; they never call in sick, unionize, divulge trade secrets, whistleblow, accept another job, take vacation, embezzle money, or collect retirement benefits; they do not require health insurance, overtime pay, or racial sensitivity training; and they cannot sue for wrongful termination.

Some of Abbott’s proposals for tax policy rest on a faulty assumption. Like many technology analysts, Abbott overemphasizes the threat that adopting new technology will displace human workers.[6] This is surprising given that Abbott himself identifies that when people, such as John Maynard Keynes and the Luddites, made similar claims in the past, they were emphatically proven wrong.[7] History provides little evidence that technological advances lead to increased unemployment. In 2019, when the U.S. was more technologically advanced than at any time before, the unemployment rate was at a fifty-year low.[8] Abbott does not provide adequate justification for his belief that this long-term trend will suddenly reverse.

It is worth noting that Abbott’s arguments regarding the effects of automation on tax revenues would apply equally to earlier tools that made humans more efficient, such as chainsaws, word processors, and automobiles. Much like robots, these tools displaced workers when they were introduced, but only from specific jobs, not from the labor force. Simply put, history provides no good reason to believe that the adoption of tools that increase human productivity results in an aggregate increase in unemployment.

Tort

Abbott persuasively argues that AI should be held to a modified theory of products liability that functions more like a negligence standard, without strict liability.[9] Under such a theory, AI would be treated like a person, and the law would therefore focus on the AI’s act rather than its design.[10] Essentially, if the AI performed an act that would fall below the reasonable person standard if performed by a human, then liability would exist.[11] This prevents the undesirable outcome under products liability in which AI bears greater liability than a person even though it is safer. Abbott’s proposed standard would incentivize safer AI behavior over more dangerous human behavior, as it should.

Relatedly, Abbott provides an illuminating hypothetical to illustrate how transitioning people to AI matters more than making AI safer.[12] Imagine that a self-driving car is currently ten times safer than a human driver. Here, it would be preferable to replace one human driver with a self-driving car than to make one self-driving car ten times safer. To prove this, Abbott explains that if a human driver causes one fatality per 100 million miles, then the self-driving car, at ten times safer, causes one fatality per one billion miles. Over ten billion miles driven by each, there will be an average of 110 combined fatalities. Improving the self-driving car tenfold only reduces the fatalities to 101, but replacing the human driver with a self-driving car reduces them to twenty. Therefore, society should focus on incentivizing the adoption of self-driving cars (through reduced liability, for example) rather than on incentivizing safer self-driving cars (through the strict liability of products liability).
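To make the arithmetic explicit, the calculation behind Abbott’s hypothetical can be written out as follows (the mileage and fatality-rate figures are Abbott’s; the layout of the calculation is mine):

\begin{align*}
\text{Status quo:} \quad & \underbrace{\frac{10^{10}}{10^{8}}}_{\text{human}} + \underbrace{\frac{10^{10}}{10^{9}}}_{\text{self-driving}} = 100 + 10 = 110 \text{ fatalities} \\
\text{Tenfold-safer AI:} \quad & \frac{10^{10}}{10^{8}} + \frac{10^{10}}{10^{10}} = 100 + 1 = 101 \text{ fatalities} \\
\text{Replace the human driver:} \quad & \frac{10^{10}}{10^{9}} + \frac{10^{10}}{10^{9}} = 10 + 10 = 20 \text{ fatalities}
\end{align*}

Each term simply divides miles driven (ten billion per driver) by miles per fatality. The comparison makes plain why, under these assumptions, adoption dominates marginal safety improvement.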

The more interesting aspect of Abbott’s tort liability proposals is not how to treat AI that causes harm but how to treat humans who cause harm through their refusal to use AI. For example, when self-driving cars become significantly safer and more ubiquitous than human drivers, a human driver should be held to the standard of the reasonable robot, making him liable for nearly every accident he causes, because a similarly situated self-driving car would have avoided the accident.[13] This theory has thought-provoking implications, as it would drastically raise human liability and serve to coerce people into adopting AI technology. The standard would apply not only to self-driving cars but to all implementations of AI. For example, a doctor could one day be liable for not using AI to spot a tumor on an X-ray, even if most doctors would also have missed it.

Intellectual Property

AI raises many nuanced issues in the field of intellectual property. Abbott largely focuses on the patentability of AI-generated inventions and on who should own such patents. Currently, to receive a patent, one must show that the invention would not be obvious to a skilled person in that field.[14] Since AI can consider billions of potential inventions, Abbott argues that the skilled person standard should be altered to a “skilled person using AI” standard.[15]

Abbott also argues in favor of issuing patents for AI-generated inventions and of recognizing the AI as the inventor.[16] The argument for allowing patents on AI-generated inventions is straightforward: it would encourage the disclosure of AI involvement in inventions, which would in turn provide more accurate information to future inventors.[17] The argument for recognizing AI as the inventor is less clear. Abbott claims that it would “safeguard human moral rights because it would prevent people from receiving undeserved acknowledgement.”[18]

Abbott goes on to explain the complications that arise in intellectual property law when one considers that AI is both a tool that assists inventors and something that functionally automates conception.[19] Not allowing AI-generated inventions to be patented incentivizes AI owners to rely instead on trade secrets, which is detrimental to the AI owners specifically and to the advancement of innovation more broadly.[20] The issue of intellectual property and AI will surely become more complicated and more relevant in the future. As AI’s inventive capacity grows exponentially, it will surpass that of humans, whose capacity to improve their own inventiveness is far more limited.[21]

Criminal Law

Currently, nearly all crimes involving AI can be reduced to a human crime.[22] But as AI grows more autonomous, cases in which no natural person can be held criminally liable for an AI-generated crime will inevitably increase.[23] The belief that AI should be punished for its criminal acts may initially seem nonsensical, but it is gaining adherents in legal academia.[24] These advocates often note that punishing AI for criminal acts is not very different from the current practice of holding corporations criminally liable.[25] Critics may object that, because AI is not deterrable, punishing it will not produce any affirmative harm-reduction benefits.[26] This objection is misplaced. Punishing AI for engaging in the functional equivalent of criminal acts would shape the incentives of AI developers, owners, and users.[27] Additionally, the restorative function of criminal law would be furthered by victims of AI-generated crimes seeing such acts condemned.[28]

Abbott is upfront about the many potential harms of attempting to hold AI criminally liable. For example, it could be interpreted as sending an implicit message that AI is the moral equivalent of a human and therefore deserves the same rights as humans.[29] Adding to the difficulty is the nebulous question of how criminal law’s mens rea requirement would apply.[30] Crimes generally require a voluntary act, and it is unclear how this requirement could apply to AI, which lacks consciousness.[31] Advocates for AI criminal liability address this concern by pointing out that corporations also lack consciousness, and that it is a well-established tenet of criminal law that a human’s mental state may be imputed to a corporation.[32] Additionally, a strict liability standard for AI-generated crimes could sidestep the problem of AI’s lack of consciousness.[33]

After an excellent presentation of both sides, Abbott advocates a more moderate course of action. Instead of attempting to prosecute and punish AI under criminal law, he explains, the more pragmatic course is to expand civil penalties directed at the developers and supervisors of AI.[34] But he cautions that this must be done carefully so as not to chill valuable AI development.[35]

Conclusion

Given the scope of the topics discussed, Abbott understandably does not propose specific legislation or engage in deep, nuanced analysis. Rather, he considers the issues in broad terms and gives various suggestions for the reader to consider. He is not dogmatic in any of his suggestions; sometimes he even provides competing alternative solutions. Abbott prudently explains that AI is not guaranteed to improve lives; for it to do so, appropriate laws and policies must be implemented.

Michael Conklin is the Powell Endowed Professor of Business Law at Angelo State University. His research focus is broad, often applying statistical analysis to areas previously only covered by a theoretical approach. He has been published in journals at thirty-one of the top 100 law schools. Reach him at michael.conklin@angelo.edu.

***

[1] Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (2020).

[2] AI is defined by Abbott as “[a]n algorithm or machine capable of completing tasks that would otherwise require cognition.” Id. at 22.

[3] Abbott, supra note 1, at 6.

[4] Id.

[5] Kimberly Amadeo, US Federal Government Tax Revenue, Balance (July 1, 2020), https://www.thebalance.com/current-u-s-federal-government-tax-revenue-33….

[6] Abbott, supra note 1, at 5.

[7] Id. at 4–5. “Indeed, in each previous era when concerns have been expressed about automation causing mass unemployment, the new technology has created more jobs than it destroyed.” Id. at 38.

[8] U.S. Unemployment Rate Falls to 50-Year Low, White House (Oct. 4, 2019), https://www.whitehouse.gov/articles/u-s-unemployment-rate-falls-50-year-….

[9] Abbott, supra note 1, at 50.

[10] Id. at 9.

[11] Id.

[12] Id. at 58.

[13] Id. at 9.

[14] Id. at 92.

[15] Id. at 12.

[16] Id. at 10.

[17] Id. at 10–11 (“Patent offices have likely been granting patents on AI-generated inventions for decades—but only because no one’s disclosing AI’s involvement”).

[18] Id. at 11.

[19] Id. at 78.

[20] Id. at 83.

[21] Id. at 12.

[22] Id. at 13.

[23] Id.

[24] Id. (“A small but growing number of academics are advancing such arguments.”).

[25] Id. at 111; id. at 119 (noting that although “[c]orporations cannot literally satisfy mental state (mens rea) elements or the voluntary act requirement, criminal law has developed doctrines that allow culpable mental states to be imputed to corporations”).

[26] Id. at 116.

[27] Id. at 112.

[28] Id.

[29] Id. at 16 (drawing a parallel to how criminal liability for corporations has led to ever-increasing rights for corporations).

[30] Id.

[31] Id. at 14.

[32] Id. at 15.

[33] Id.

[34] Id. at 16.

[35] Id.