Age of Ultron, Part 2

(This post contains spoilers for Avengers: Age of Ultron.  You have been warned.)

In the first part of this series, I examined whether Tony Stark and Bruce Banner could be held liable for the damage caused by Ultron.  Now we turn to Ultron itself: could Ultron be held legally liable for damage that it caused?

I. Legal Personhood for Robots in the Real World

To be clear, under the current real-world legal regime the answer is a pretty firm no.  Computer programs and robots, no matter how complex, are not legal persons and have neither legal rights nor legal responsibilities.  They are always property.

For example, a computer program cannot be the author of a copyrighted work, even if the program does significant independent creative work and the only human involvement is in causing (even indirectly) the program to be run.  There are arguments that this should change if artificial intelligences (AIs) become sufficiently sophisticated, but those arguments are purely speculative at this point.  See, e.g., Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131 (1997).

Indeed, no non-human animals are currently considered legal persons in the US.  Cetacean Community v. Bush, 386 F.3d 1169 (9th Cir. 2004).  Recently the Nonhuman Rights Project brought a case in New York state court in the name of two chimpanzees, Hercules and Leo, kept at Stony Brook University.  The judge in that case initially granted an order for a writ of habeas corpus, which would indicate a degree of legal personhood for the animals, but the judge promptly amended the order to remove any reference to the writ.  Nonhuman Rights Project v. Stanley, No. 152736/15 (Sup. Ct. N.Y. Apr. 21, 2015).  So far that’s the closest any non-human animal has gotten in the US.

II. Legal Personhood and Criminal Liability for Robots in the MCU

There’s not a lot of reason to believe that robots or artificial intelligences are considered legal persons in the MCU, either.  Jarvis, for example, seems to be regarded as the property of Tony Stark (or possibly his company), not as an independent entity that just happens to work for Stark.  Similarly, as best I can recall, no one discusses arresting Ultron, only destroying him.  But, let’s suppose that the legal framework is in place in the MCU for a robot to be held legally liable for its actions.  How would the criminal sanction even work for a robot?

The legal justification and feasibility of criminal sanctions for robots have been discussed by a few commentators, most notably Gabriel Hallevy.  See, e.g., Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities—From Science Fiction to Legal Social Control, 4 Akron Intell. Prop. J. 171 (2010).  In Hallevy’s view, “If all of its specific requirements are met, criminal liability may be imposed upon any entity—human, corporate, or AI entity.”  Id. at 199.

But how would this work as a practical matter?  There are four major justifications for the criminal sanction: retribution, deterrence, incapacitation, and rehabilitation.  Hallevy rejects retribution and deterrence, since an AI cannot experience suffering.  Rachel Charney, Can Androids Plead Automatism? A Review of When Robots Kill: Artificial Intelligence Under the Criminal Law by Gabriel Hallevy, 73 U.T. Fac. L. Rev. 69, 71 (2015).  But he supports rehabilitation (through learning) and incapacitation (through reprogramming or being shut down).

(Although an AI may not be able to experience suffering, it could seek to preserve its own existence.  Thus, I’m not sure I agree with Hallevy that the criminal sanction has no deterrence value for AIs, though that kind of deterrence presupposes a very sophisticated AI.  Similarly, humans can definitely derive significant retributive value from punishing non-sentient animals and even inanimate objects.)

Practically speaking, incarceration serves little purpose for an AI, since computers are nothing if not patient.  Being shut down or deleted is effectively a death sentence, which carries with it all the same difficulties we have with capital punishment in the human context.  Reprogramming is perhaps analogous to involuntary neurosurgery, which the law generally does not condone, although involuntary medical treatment of other kinds is common.  Rehabilitation may or may not be possible, depending on the nature of the AI.  It certainly did not appear to be possible in Ultron’s case.

So, then, what is to be done with an AI that cannot be rehabilitated or reprogrammed?  Should we incarcerate or delete it because it makes us feel better?  Or perhaps because we think it will teach other robots a lesson?  Or maybe just to prevent it from causing more harm?  The last is the only one that makes sense to me, but these are difficult questions, and ones that I don’t think have very satisfying answers at the moment.
