Age of Ultron, Part 2

(This post contains spoilers for Avengers: Age of Ultron.  You have been warned.)

In the first part of this series, I examined whether Tony Stark and Bruce Banner could be held liable for the damage caused by Ultron.  Now we turn to Ultron itself: could Ultron be held legally liable for damage that it caused?

I. Legal Personhood for Robots in the Real World

To be clear, under the current real-world legal regime the answer is a pretty firm no.  Computer programs and robots, no matter how complex, are not legal persons and have neither legal rights nor legal responsibilities.  They are always property.

For example, a computer program cannot be the author of a copyrighted work, even if the program does significant independent creative work and the only human involvement is in causing (even indirectly) the program to be run.  There are arguments that this should change if artificial intelligences (AIs) become sufficiently sophisticated, but those arguments are purely speculative at this point.  See, e.g., Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131 (1997).

Indeed, no non-humans are currently considered legal persons in the US.  Cetacean Community v. Bush, 386 F.3d 1169 (9th Cir. 2004).  Recently the Nonhuman Rights Project brought a case in New York state court in the name of two chimpanzees, Hercules and Leo, kept at Stony Brook University.  The judge in that case initially granted an order for writ of habeas corpus, which would indicate a degree of legal personhood for the animals, but the judge promptly amended the order to remove any reference to the writ.  Nonhuman Rights Project v. Stanley, 152736/15 (Sup. Ct. N.Y. Apr. 21, 2015).  So far that’s the closest any non-human has gotten in the US.

II. Legal Personhood and Criminal Liability for Robots in the MCU

There’s not a lot of reason to believe that robots or artificial intelligences are considered legal persons in the MCU, either.  Jarvis, for example, seems to be regarded as the property of Tony Stark (or possibly his company), not as an independent entity that just happens to work for Stark.  Similarly, as best I can recall, no one discusses arresting Ultron, only destroying him.  But, let’s suppose that the legal framework is in place in the MCU for a robot to be held legally liable for its actions.  How would the criminal sanction even work for a robot?

The legal justification and feasibility of criminal sanctions for robots have been discussed by a few commentators, most notably Gabriel Hallevy.  See, e.g., The Criminal Liability of Artificial Intelligence Entities—From Science Fiction to Legal Social Control, 4 Akron Intell. Prop. J. 171 (2010).  In Hallevy’s view, “If all of its specific requirements are met, criminal liability may be imposed upon any entity—human, corporate, or AI entity.”  Id. at 199.

But how would this work as a practical matter?  There are four major justifications for the criminal sanction: retribution, deterrence, incapacitation, and rehabilitation.  Hallevy rejects retribution and deterrence, since an AI cannot experience suffering.  Rachel Charney, Can Androids Plead Automatism? A Review of When Robots Kill: Artificial Intelligence Under the Criminal Law by Gabriel Hallevy, 73 U.T. Fac. L. Rev. 69, 71 (2015).  But he supports rehabilitation (through learning) and incapacitation (through reprogramming or being shut down).

(Although an AI may not be able to experience suffering, it could seek to preserve its own existence.  Thus, I’m not sure I agree with Hallevy that there is no deterrence value in the criminal sanction for AIs, but it does presuppose a very sophisticated AI.  Similarly, humans can definitely derive significant retributive value from punishing non-sentient animals and even inanimate objects.)

Practically speaking, incarceration serves little purpose for an AI, since computers are nothing if not patient.  Being shut down or deleted is effectively a death sentence, which carries with it all the same difficulties we have with capital punishment in the human context.  Reprogramming is perhaps analogous to involuntary neurosurgery, which the law generally does not condone in the human context, although involuntary medical treatment is common.  Rehabilitation may or may not be possible, depending on the nature of the AI.  It certainly did not appear to be possible in Ultron’s case.

So, then, what is to be done with an AI that cannot be rehabilitated or reprogrammed?  Should we incarcerate or delete it because it makes us feel better?  Or perhaps because we think it will teach other robots a lesson?  Or maybe just to prevent it from causing more harm?  The last is the only one that makes sense to me, but these are difficult questions, and ones that I don’t think have very satisfying answers at the moment.

28 responses to “Age of Ultron, Part 2”

  1. I would think that if an AI possesses enough sentience and sapience to bear criminal liability, then it can probably experience something close enough to suffering, and a strong enough desire to avoid suffering or destruction, that retribution and deterrence would both apply. If it doesn’t have enough self-awareness, sense of identity, and self-preservation for those to apply, then it’s probably not enough of a person for liability to be a useful concept as applied.

    • Philo Pharynx

      Imagine if Ultron were tasked with serving as the server for Yahoo Answers – that would certainly be suffering, though it would probably be considered cruel and unusual punishment.

  2. On the subject of sapient robots, we saw in the classic Short Circuit 2 that Johnny Five successfully applied for and gained US citizenship. I suppose the question is: should he have had to? I mean, if they granted Johnny Five the ability to receive US citizenship, shouldn’t he have gained it automatically, since he was “born” in the US when he was first assembled there? There is also the thorny question of Johnny’s assault on the sleazy double-crossing criminal.

    • I’ll admit I’m more familiar with the original Short Circuit, although I vaguely recall having seen the sequel. In any case, it could be that he had to apply for citizenship because, as a new and unique case, there was no legal presumption that he would be a US citizen, as in the case of a human born in the US or born to a US parent. It’s a bit circular to say that since he was granted citizenship he shouldn’t have had to apply: if he hadn’t applied, we wouldn’t know if he should have it or not.

    • Daniel Taylor

      Johnny’s assault on the criminal is unlikely to be a problem under the circumstances, since:

      (a) Johnny’s “assault” consisted of: (1) some acts in blatant self-defence, followed by (2) removing an escaping criminal from his boat, without injury – technically a crime, but:

      (b) Authorities are generally reluctant to prosecute people who restrain or defeat kidnappers and bank robbers, without serious injury to any party, and then immediately turn them over to the police.

      (Can it be considered to be a citizen’s arrest? It may actually matter whether Johnny’s citizenship is retroactive to his construction – or to the lightning strike.)

      • A ‘citizen’s arrest’, despite the name, is not actually dependent on being a citizen. Any person can make one.

        Assuming Johnny 5 is legally a person at the time (and if he’s not, he can do whatever the hell he wants), he can make a citizen’s arrest. I don’t know the exact laws in New York, but generally you can temporarily detain someone if you *know* they have committed crimes, or sometimes just felonies. The threshold at which you are deemed to ‘know’ this varies from state to state, but Johnny 5 (again, assuming he’s a person, because none of this matters if he’s not) was *personally* the victim of attempted murder at their hands, so he certainly ‘knows’ they have committed felonies under any rules. And he seems to have used the least lawbreaking possible to detain them.

        I think the movie was using the naturalization ceremony at the end to indicate the courts had decided his personhood before that. It is fun to try to figure out the actual *reason* he’d need to do that…the 14th amendment says it applies to people *born* in the US…so perhaps he doesn’t count as being ‘born’. So he could only get citizenship via the laws…and most of the laws end up talking about people’s *parents*, which Johnny 5 also doesn’t have. It’s entirely possible, if slightly weird, that he legally ended up a person while not legally ending up a citizen. (He is, however, in the country legally and in no danger of being deported. He didn’t *enter* the country, so couldn’t have done it illegally.)

        Heh. And now I’m remembering the ‘Foundling’ article here a while back. Johnny 5 does not know who his parents are, because he literally does not have parents. He *can’t* know. So it appears that, on his 21st birthday, he might actually qualify as a foundling and get citizenship that way….but maybe he just didn’t want to wait that long.

        Whether or not that personhood was defined retroactively is an interesting question, but if it was, he probably was behaving legally. (And if it wasn’t, well, legally, he couldn’t have committed crimes in the first place.)

      • Ken Arromdee

        Does “knowing who your parents are” mean “knowing the contents of the set ‘your parents’” or does it mean “being able to name specific people who are your parents”? These normally go together, but someone without parents would “know who his parents are” in the first sense but not the second sense.

  3. Retribution, Deterrence, Incapacitation, and Rehabilitation?

    I’m not sure I see the difficulty; if an AI is recognized as a “person” and held liable, the only Retribution we allow ourselves in the modern criminal justice system is the Death Penalty. If you have a homicidal AI, and he was designed to be homicidal, you can rehabilitate him through reprogramming–or if not possible, you can incapacitate him through electronic isolation. If he “evolved” through his own self-programming choices then the Death Penalty is certainly an option, and just retribution.

    Deterrence? Not exactly an issue unless you have a whole population of sophisticated AIs calculating their own self-interests.

    • Retribution is a rationale for punishment, not a category of sanction. Imprisoning someone can be imposed for reasons of retribution (you did something wrong, so we will limit your freedom), deterrence (limiting your freedom will make you or others think twice before doing something wrong), incapacitation (limiting your freedom will prevent you from doing more wrong things), or rehabilitation (while we limit your freedom we will try to improve whatever personal or moral failing led you to do the wrong thing).

      Even the death penalty is not (in theory) purely retributive. It also incapacitates the offender, fully deters them from future criminal acts, and in theory it’s a deterrent to others. But imprisonment, fines, mandatory community service, etc, can all be imposed out of a desire for retribution.

      • Understood; I merely wished to point out that criminal sanctions against AIs would not generally present a problem, although the methods might certainly be different.

      • James Pollock

        Criminal sanctions against AIs might present a problem, in that they’re cruel and unusual if applied to human people. Will the 14th amendment permit human people and non-human people to be treated differently under the law? To what degree? This is, of course, a question no court can answer, yet, because there is no case or controversy over this question currently before any court.

  4. In a sense, didn’t Vision already take care of this in the movie? I literally just got back from seeing it a second time, and Vision’s first attack on Ultron during the final battle was to purge him from the internet. That’s a form of incarceration, since it kept Ultron from jumping away from Sokovia, and just left him with his thousands of copies. You could say that the Avengers were destroying his alternate escape routes, which also adds to the incarceration perspective.

    Even though their goal was to destroy all of the Ultron-bots, it was Wanda who dealt the blow to the main Ultron copy when she ripped out his heart/power core after Pietro died. Of course, Vision did deliver the coup de grace to the final copy. But that was just one of the repurposed Iron Legion bots, I think, not the Ultron that had been improving himself with the vibranium.

    I think, though, that Tony and Vision/Jarvis together might have been able to reprogram Ultron, if his ultimate purpose hadn’t been the destruction of the world. Tony was clearly a little pissed off by that, and Vision just seemed…sad…that Ultron couldn’t see the beauty in the human world. Vision did succeed in cutting off all of Ultron’s ties to the outside world, so if the goal had been capture, rather than destruction, my guess is that they could have eventually found a way to purge the homicidal parts of him and reintroduce him to the beauty of things that Vision saw. I see it as less a case of involuntary neurosurgery, and more along the lines of intense behavioral conditioning/rehabilitation/re-education.

    • The fact remains, however, that reprogramming a sufficiently advanced AI like Ultron is still equivalent to cracking open a human’s skull and rewiring their brain. The fact that it’s considerably easier, and involves less goopy stuff, doesn’t really negate the moral issue of whether or not you can just edit people with criminal behaviors.

    • The Vision *was* the improved Ultron body containing vibranium. The Avengers stole that body from Ultron before it was completed, then Tony put JARVIS into it and Thor animated it with the power of Mew-Mew, and the Vision was born.

      My problem with the issue of “reprogramming” is that a truly intelligent AI, by definition, would not be guided purely by programming, but would be able to make choices beyond its programming. Humans have programming in the form of our emotions, instincts, drives, cognitive biases, and so forth, but our intellect gives us the ability to transcend those things, to choose whether or not — and in what way — we act upon them. By the same token, a truly sapient AI would act out of choice rather than mindless programming. It would have an adaptive neural network capable of learning, growing, and modifying its behavior. So it stands to reason that a sapient AI could be taught or convinced to make more benevolent, law-abiding choices in the same way a human could. Reprogramming might not be necessary, or even certain to work, since a conscious mind can make choices beyond its programming. Even Asimov’s robots could sometimes reason their way into making exceptions to the Three Laws.

      • The Vision *was* the improved Ultron body containing vibranium.

        The Vision was the body that was going to be the ultimate Ultron, yes. I’m talking about the one that was clearly the main focus of the final battle – the one that Thor and Vision tag teamed so effectively, and that Wanda ripped apart at the end. All the other Ultron-bots that the team took out were “inferior” copies of Ultron, or the remaining Iron Legion bots.

        And you have a point about choice vs. programming – I think, to that extent, Ultron doesn’t qualify as a fully sentient AI, per se. His first actions are born out of reaction to the data he downloads on Tony and all the wars that humanity has caused. If anything, I think the only thing that we could say was truly his “choice” was his attempt to evolve himself with the body that became Vision. Everything else was reactionary, based on what he heard Tony say about “peace in our time”.

        But if that’s true, and he doesn’t fully qualify as sentient, then the reprogramming option would be completely valid, and would probably be satisfactory in terms of “punishment”, although Tony and the others might still be held to blame for creating him in the first place.

  5. In a related point, “cracking open a human’s skull and rewiring their brain” is exactly what S.H.I.E.L.D. appears to have done to Mr. Hyde in the Agents of S.H.I.E.L.D. finale. Of course, I would postulate that such a procedure is not exactly legal in the MCU, else why would S.H.I.E.L.D. have kept T.A.H.I.T.I. secret?

  6. Didn’t you write a post a while ago that went into the definition of personhood? I think it was in the context of aliens. If I remember the argument correctly, Ultron and the Vision would IMO qualify. The tl;dr was that the definition isn’t grounded in any biological criteria or even appearance, where it’s defined at all… I think a clearly sentient entity with independent agency would be considered a person, should the question go to court. But then again IANAL (or even American), just interpreting what I remember from your old post 😉

    • There was an old post about that subject, but it was written by Ryan Davidson and was more philosophical than legal. The chapter on the subject in The Law of Superheroes addresses those topics from a much more concrete legal perspective. From that perspective, it’s pretty clear that the law as it stands today would not regard Ultron, the Vision, Superman, or any number of other non-human characters as legal persons.

      • Lalo Martins

        thanks

      • James Pollock

        “it’s pretty clear that the law as it stands today would not regard Ultron, the Vision, Superman”

        Respectfully, I’m not certain that it is all that clear. Present-day law has never been asked to differentiate between human people and aliens of intelligence and moral capability at least equal to a human’s. It HAS been asked to differentiate between humans and other humans (and didn’t get that one right in the case of first impression, either… sorry, Mr. Scott, you’re still a slave because you’re still a black man, and (oops) we need to adjust the Constitution to make you a person) and between humans and animals not clearly equal or superior to us in intelligence and moral capability (sorry, cetaceans, go stand over there with Mr. Scott).
        Superman’s going to be judged as a person without any difficulty, I think. Ultron is not, but may get there; things don’t look good for the Lizard, unless he can prove he’s actually Dr. Connors.
        It’s possible that you could get laws that treated X-Men differently (via Korematsu), but I don’t think you’d get a ruling that they weren’t people, even in a situation as hostile as the start of the Marvel Civil War.

        So, I don’t think it’s clear whether or not Superman is a “person” under the law, because there’s no precedent. The fact that he can stand there and argue with you about it, though, I think pushes the answer to “yes”.

  7. Terry Washington

    I am reminded, in this context of the legal rights of non-human entities, of the 1972 “Night Gallery” episode “You Can’t Get Help Like That Anymore” (which I suspect is an allegory for African American slavery), in which the humans for whom a robot “servant” works say out loud “We own you!” (shades of the 1857 US Supreme Court ruling Dred Scott!)

  8. What about the two OUTER LIMITS adaptations of Eando Binder’s “I, Robot”? While it’s not in the original short story, the 1964 and 1995 TV adaptations (both of which had Leonard Nimoy, in different roles!) involved the robot Adam Link being put on trial for the murder of his creator. Going by Wikipedia’s summaries, the original episode simply had the robot on trial for murder (although I don’t think there was a jury, so maybe it was more of a hearing), with its legal culpability evidently established already, whereas the 1995 version focused on a hearing to determine whether or not Adam was a legal person and thus able to stand trial for murder, with the verdict being that he was. I’d be interested to know how those hold up legally.

  9. Megan wrote: “And you have a point about choice vs. programming – I think, to that extent, Ultron doesn’t qualify as a fully sentient AI, per se. His first actions are born out of reaction to the data he downloads on Tony and all the wars that humanity has caused. If anything, I think the only thing that we could say was truly his “choice” was his attempt to evolve himself with the body that became Vision. Everything else was reactionary, based on what he heard Tony say about “peace in our time”.”

    I don’t know if I’d go that far. Just because an entity doesn’t use its capacity to make choices, that doesn’t mean it lacks that capacity. Plenty of human beings tend to be impulsive and reactive rather than truly thinking through their choices. Indeed, I’d say that’s part of the process of rehabilitating wrongdoers; they usually blame their actions on factors outside themselves, insisting their victims brought it on themselves or whatever, and it’s only by recognizing their responsibility for their own choices and actions that they can really reform and moderate their behavior. The potential for choice was always there, even when it wasn’t really embraced.

    And I’d say Ultron made a choice of his own right off the bat. Sure, he was motivated by Tony’s imperative for peace, but Ultron decided on his own that the Avengers themselves, and the rest of humanity, were the problem. So he had an inbuilt drive, but he used his free will to decide how to direct it and act on it. Which is just the sort of thing I’m talking about as the basis of intelligence. It’s not independent of our instincts and emotions and other “programming,” but it modulates and guides our expression of those things.

    To return to an earlier point, I think Ultron in the movie is very much a child. He has intelligence and free will, but he doesn’t have experience or understanding, so he’s trying to figure out the world and is acting on what information and goals he has to start out with. It’s hard to make free choices when you have too little experience to know what your options are.

  10. This all depends very much on the form the hypothetical AI takes. To borrow a phrase, this is a situation where the devil is very much in the details.

    First, how do we know that an AI will not be able to experience suffering? It may be able to experience something that is at least sufficiently close to be indistinguishable. Andrew in Asimov’s “The Positronic Man” seemed to experience many human emotions, and the whole plot revolves around his desires and what he is willing to go through to satisfy them. I suspect Andrew would be able to experience suffering, or at least come close enough to it for all practical purposes.
    Similarly, we don’t know that time will necessarily be meaningless to an AI. Andrew ultimately dies of old age (albeit only after consciously sacrificing his immunity to it for the sake of becoming more human). Indeed, any AI that is bound to a particular piece of hardware will likely be subject to aging in a sense, and if it is capable of emotions then it may find forced confinement to be extremely unpleasant. Incarceration may well make sense for such an AI in the same sense it does for us.

    These are difficult, nigh impossible, questions right now best contemplated through the lens of sci-fi. But I suspect much of this difficulty may vanish in the face of reality if and when we know the particulars of AI in the real world. Androids like Asimov’s Andrew and Star Trek’s Data would probably best be treated as human for all practical purposes. But those are both bound to a physical body and at least appear to experience emotions. An AI that is purely software (capable of running on numerous machines simultaneously and independently) and clearly has no emotions may well best be treated as software under its owner’s control, even if it can occasionally take surprising and seemingly autonomous actions. After all, software in the real world already has a certain degree of autonomy and is very often surprising even to its programmers.

  11. Chris Bussard

    4 thoughts:

    1. The comics version of Ultron is clearly sentient. The movie version is somewhat more interesting in that it’s a closer case whether his behavior derives from free will or buggy design. That muddiness, in turn, muddies up both today’s topic — is Ultron a person? — and the prior topic — how much is Stark to blame?

    2. Clearly Ultron is capable of suffering because Vision states that he *is* suffering. So Hallevy’s analysis is probably inapplicable on this point (unless there turns out to be no feasible mechanism for inflicting additional suffering).

    3. As for the uselessness of incarceration: The comics version of Ultron appears to have infinite patience. In the comics Age of Ultron storyline he waited thousands of years to conquer the world and invent time travel so that he could go back and conquer it sooner. The movie version seems more human-like in this respect. He demonstrates impulsiveness, which would make me suspect he’s not infinitely patient.

    4. On the topic of incarceration for robots, Copernicus Jones: Robot Detective (Monkeybrain Comics) has an interesting take: In that world, robots are capable of boredom, so incarceration works to impose suffering. The problem is that robot criminals with long sentences sometimes opt to delete themselves rather than serve the sentence. The penal authorities’ solution is to make forced backups at regular intervals so that a self-deleting prisoner can be revived to serve out the rest of his sentence. Who wants to analyze the legal implications of that one?

  12. James Pollock

    In Jabba’s palace in ROTJ, there is a scene where droids are being tortured. So, in at least one fantasy setting, it IS possible to inflict suffering on an artificial intelligence. (Of course, C-3PO fairly frequently whines about his suffering, so…)

  13. Very interesting thoughts here. I wonder, in all of this discussion of Ultron, where Vision stands. If Ultron is considered a person, is Vision also? In a case against Ultron, would Vision be allowed to testify against him, given that he isn’t exactly “human” but clearly has sentience?

  14. The problem with any form of suffering for an AI, if it should be possible, is that without understanding the experience we would have a very difficult time figuring out when it might pass the line into outright torture.

    Personally, I’d say that we might want to avoid creating AI not because people keep saying that AIs would destroy us, but because the legal headaches would keep popping up.
