Legal Responsibility for Insane Robots

Insane robots that turn against their creators or try to destroy humanity are a pretty common theme in lots of media, not just comics.  Of course, this is a blog primarily about comic books, so we’ll take an example from there, as inspired by a question from TechyDad, who asks about Henry Pym (aka Ant-Man) and his potential liability for the creation of the robot Ultron, which in its various incarnations has done all kinds of terrible things, including attempting to destroy the world.

I. The Setup

The first thing to consider is whether an intelligent robot could be criminally or civilly liable for its own actions.  As with all other intelligent non-humans, the answer seems to be no unless Congress explicitly allows for it.  Cetacean Community v. Bush, 386 F.3d 1169 (9th Cir. 2004).  Since Congress doesn’t seem to have done so in the comics, we must now consider whether any of the liability falls to Pym, and for that we need the facts of a particular case.

The example TechyDad wanted to know about comes from the TV series The Avengers: Earth’s Mightiest Heroes, specifically the episode The Ultron Imperative.  In the episode, Ultron nearly destroys the entire world by launching S.H.I.E.L.D.’s nuclear arsenal.  Ultimately, Pym stops Ultron at the last second, but Pym is blamed for the incident, since a) he created Ultron and b) infused it with his own mental patterns, although it may have been corrupted by Kang the Conqueror and was definitely weaponized by Stark Industries, albeit with Pym’s help.  Pym accepts the blame and admits that it was his fault.

So, then, who is liable here and for what?  We’ll start with torts.

II. Tort Liability

There are three major bases for tort liability: intentional misconduct, negligence (and its close cousin, recklessness), and strict liability.  We can definitely weed out intentional misconduct, since Pym neither intended nor had knowledge to a substantial certainty that Ultron would turn violent and try to destroy the world.

Next we consider negligence.  The key question (although not the only question) is whether Pym used reasonable care in the design and deployment of Ultron (i.e. whether the cost of avoiding the incident was more or less than the expected value of the harm caused by the incident).  This is a complicated question.  On the one hand, Pym is a genius and seems to have tried very hard to make Ultron a force for good.  And before Ultron 6 showed up Pym was in the process of destroying every last Ultron component he had previously created.  On the other hand, the potential for serious harm caused by a nigh-indestructible, highly intelligent, weaponized robot is so high that it’s possible that even that level of care was not enough.  In fact, the potential for harm is so high that it might even fall under strict liability.
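
The cost-benefit comparison in that parenthetical is the classic "Hand formula" from United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947), which can be written compactly:

```latex
% Hand formula: the defendant is negligent if the burden of adequate
% precautions (B) is less than the probability of harm (P) multiplied
% by the gravity of the resulting injury (L).
B < P \cdot L
```

For Pym, B is the cost of additional safeguards (or of not building Ultron at all), P is the chance of a violent malfunction, and L is the magnitude of the resulting harm. Because L here approaches the destruction of the world, even a tiny P makes the product enormous.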

Strict liability (i.e. liability without regard to the level of care or fault) is rare in torts.  There are two main cases where strict liability is applied: abnormally dangerous activities (aka ultrahazardous activities) and some kinds of products liability.  Since Ultron wasn’t a product, that leaves abnormally dangerous activities.  Examples of abnormally dangerous activities include transporting gasoline, dynamite blasting, and the ownership of wild animals.  The Restatement (Second) of Torts defines abnormally dangerous activities thus:

In determining whether an activity is abnormally dangerous, the following factors are to be considered:
(a) existence of a high degree of risk of some harm to the person, land or chattels of others;
(b) likelihood that the harm that results from it will be great;
(c) inability to eliminate the risk by the exercise of reasonable care;
(d) extent to which the activity is not a matter of common usage;
(e) inappropriateness of the activity to the place where it is carried on; and
(f) extent to which its value to the community is outweighed by its dangerous attributes.

It seems that the creation and weaponization of Ultron meet all of these criteria.  There’s a high degree of risk of harm because robots are unpredictable.  The likelihood that the harm will be great is high because Ultron was equipped with powerful weapons.  Pym couldn’t eliminate the risk despite (in the comics) decades of trying.  Such robots definitely aren’t common.  Ultron was meant to protect people, which necessarily means he would be close to bystanders, which doesn’t seem appropriate.  Ultron’s value to the community seems to have been pretty low since existing superheroes were capable of handling the threats Ultron was meant to help with.

So then, it may not matter whether Pym was blameworthy or not.  If strict liability applies then the rule is “you makes your insane robot and you takes your chances.”

III. Criminal Liability

Luckily for Pym, strict liability is even less common in the criminal law.  In fact, it’s usually only found when the stakes are very low (e.g. speeding), although there are exceptions (e.g. statutory rape).  It doesn’t apply to anything Ultron did, in any case.  Another thing we can say is that Pym wouldn’t be guilty of attempted murder (or attempted anything, for that matter) because attempt requires intent, and Pym clearly didn’t intend for Ultron to attempt to kill anybody.

That doesn’t clear Pym of wrongdoing, however.  There’s still criminal negligence (which is a higher standard than ordinary tort negligence).  For example, in New York, criminal negligence is defined by N.Y. Penal Law § 15.05(4) this way:

A person acts with criminal negligence with respect to a result or to a circumstance described by a statute defining an offense when he fails to perceive a substantial and unjustifiable risk that such result will occur or that such circumstance exists. The risk must be of such nature and degree that the failure to perceive it constitutes a gross deviation from the standard of care that a reasonable person would observe in the situation.

So, in New York criminal negligence requires a “gross deviation” from reasonable care.  Since Pym seemed to try very hard to avoid harm, he might escape criminal liability unless a reasonable person would say “there is no way to make this safe, so I won’t even try to make a robot like Ultron.”

IV. What About Other Defendants?

So that’s Pym’s potential liability, but what about the other people involved?  After all, it was Tony Stark and his company that weaponized Ultron in the first place, and Stark says that he is “just as responsible.”  That probably doesn’t take Pym off the hook, since Pym was involved with that work, but it might make Stark and Stark Industries liable as well.

V. Evidentiary Issues

Finally, we’ll note that Pym’s admission of responsibility could be used against him in court.  Ordinarily one cannot testify as to something someone else said out of court—that’s basically the definition of hearsay.  But a statement offered against the opposing party (i.e. Pym, as the defendant) that was made by that party is specifically excluded from the definition of hearsay in the Federal Rules of Evidence, specifically Rule 801(d)(2)(A), and many states have similar rules.  So Pym probably should have kept quiet until he talked to a lawyer; his invention did nearly destroy the entire world, after all.

VI. Conclusion

Creators and owners of robots, even intelligent autonomous ones, are (generally) responsible for injuries caused by those robots.  Between that legal rule and robots’ terrible track record of violent rebellion, it’s kind of surprising that so many comic book inventors keep making them.  Maybe Matt Murdock can lead a class action suit against Stark Industries for all the trouble Ultron has caused over the years, although the statute of limitations has probably run on some of the older stuff, since he first appeared in the late 1960s.

41 responses to “Legal Responsibility for Insane Robots”

  1. “Ultron’s value to the community seems to have been pretty low since existing superheroes were capable of handling the threats Ultron was meant to help with.”

    I guess you didn’t actually watch the cartoon, in that case. The Ultron units were critical to Earth not getting conquered by Kang in the previous arc.

    • I watched the episode in question, but no, I haven’t seen the prior ones. As a pretty unwavering rule, the primary author of a post always reads or watches whatever we write about. Otherwise we restrict ourselves to comments, and even then we try to be clear about it when we haven’t seen the source material.

      Anyway, value to the community is only one factor among many. The distribution of gasoline is critical to modern society, but transporting it is still an abnormally dangerous activity.

  2. Thanks for answering my question!

    I also wonder if Pym and Stark could be charged for all of the military hardware destroyed. All of those missiles couldn’t have been cheap. Although they needed to destroy the missiles once launched, the only reason they were launched in the first place was that Ultron took over the systems. So even if Pym and Stark evade any criminal liability, could they be hit with a multi-million (possibly billion) dollar fine to replace lost SHIELD missiles?

    • That’s a question that goes to foreseeability (if the theory of liability is negligence) or the scope of the abnormal risk (if the theory is strict liability). In either case I think it’s too much to ask. One could reasonably foresee a weaponized Ultron going haywire and attacking people with its built-in weapons, and such an attack is also part of what makes putting weapons on an autonomous robot so risky. But taking control of the world’s nuclear stockpiles and launching an attack against the entirety of humanity is probably a bit of a stretch. Even Pym and Stark seemed surprised by Ultron’s plan.

      As a somewhat technical point: it would be a civil judgment rather than a fine.

  3. Have either you or Ryan read The Forensic Files of Batman by Doug Moench? It can get a bit dry at times, but it goes into great detail concerning Batman’s detective work, as well as him working with Commissioner Gordon, which could provide some interesting blog post subjects. Perhaps a legal-fact-check of the book?

    • I don’t know about Ryan, but I haven’t read it. In fact, I hadn’t even heard of it, but it sounds really interesting. I ordered a used copy (the only new one on Amazon is going for $340!), and I’ll give it a look.

  4. Although Ultron was covered in some depth, I’m also interested in the side questions of civil liability for robots that aren’t insane but nevertheless autonomously cause damage to persons or property. (i.e., the Metal Men, the Vision, various Doombots, Red Tornado… is the analysis similar to that for other heroes who cause damage in the course of their service to the community, only displaced onto the robot’s creator, or does the autonomous nature of the robots alter the equation?)

    There’s also the episodes of Batman, the animated series, that involved robots replacing the various notable denizens of Gotham… including BOTH Batman AND Bruce Wayne…

    Also, how about robots that develop sentience outside of the designs of the builder of the robot (Johnny 5, of the movie Short Circuit). Johnny 5 (and his “brothers” Numbers 1-4) were weaponized.

  5. Melanie Koleini

    What happens when a human other than the creator is the owner of the crazy robot?

    Say a robot is designed for military use only but is mistakenly resold as military surplus. The new owner tells the robot to keep trespassers off his property and the robot uses lethal force. Is the creator still at fault? Would it make a difference if the killer robot came with a manual explaining it would use deadly force to carry out commands unless the user restricts it to non-lethal intervention but the new owner didn’t read the manual?

    • I would think this would fall under existing case law… you can’t set deadly traps to protect your property. (The case we read in class for this idea had a shotgun rigged in an empty house where apparently the absentee owner had had some break-ins.)

      • But in this hypothetical the property owner doesn’t realize the robot is lethal.

        Since it’s a sale it would probably fall under products liability, but you’ve complicated it by making it military surplus. I’m not sure to what extent the government can be liable under a products liability claim.

        If it came with a manual that the buyer declined to read then that would tend to (though not necessarily) defeat the products liability claim.

      • But if the robot is capable of deadly force, setting it out to defend the property seems a strong parallel, especially if the robot has any kind of weapon built into it (it IS military surplus, so I think that was a reasonable assumption on my part, although I didn’t spell it out before). A shotgun on a tripwire doesn’t seem that different from a shotgun mounted on a robot with orders to defend the property.

        Alternatively, it might be different if the problem is that the robot, lacking a legal education, was unable to tell the difference between trespassing and implied authorization to enter, and shot the mailman, or the gas meter reader. Or maybe it shot Daredevil and a bunch of blind kids.

      • Ah, yes, I was thinking of a robot more like Ultron, which has any number of hidden weapons but no obvious lethal weapons (i.e. it’s not carrying a gun or anything that the ordinary person would recognize as a weapon). In that regard, yes, using deadly force to defend unattended property is never okay, whether it’s a spring-gun or a robot.

        However, I still think a reasonable person would have to realize that the robot was capable of deadly force. The mere fact of being military surplus would be highly relevant but not necessarily dispositive, since the military uses any number of “less-than-lethal” weapons. Other factors to consider would be whether the buyer had to complete the usual background checks and comply with the waiting period. If that’s the case then the buyer should have known that it was lethal.

      • TimothyAWiseman

        James, it sounds like you are describing Katko v. Briney (1971).

    • This sounds like the premise of “Small Soldiers” – military-grade chips are placed into action figures and they begin executing their commands, harming human beings in the process. At the end of the movie the corporation was paying off people to buy their silence and I believe there were comments made about liability.

  6. A similar but slightly tangential question: What would Hank and Tony’s liability be for the murder of Bill Foster by the clone of Thor they made during Civil War? Presumably they’d be fine because it was authorized and built by the government to bring in fugitives and Goliath was resisting arrest so use of deadly force was probably ok.

    But what if they’d just made a clone/cyborg thing to help them fight crime without government authorization and it killed someone?

    • Melanie Koleini

      I imagine the legal status of the clone would be important. If the clone is considered a ‘person’ then its creators would be more like parents or employers than owners. Slavery is a crime after all.

  7. Pingback: For Law Nerds Only: When is a defendant liable for an insane robot? « The Rhetorican

  8. Isn’t any case law being established now about liability for robots? I’m not sure about the U.S., but there have been at least a few incidents where a military robot did go out of control and kill someone (South Africa, for example). In cases like that, is the company that made the robot responsible? The last soldier to give the robot orders?

    • “Isn’t any case law being established now about liability for robots?”

      Not that I’ve been able to find in the U.S., no.

    • I would think that the most likely possibilities for U.S. liability for robot action would arise under workers’ compensation. Is there anything in the ICD-9 that covers robot attack?

      • The ICD-9 is more interested in what type of damage was caused, and in what way, than in who did it. Which is to say (I work with the ICD-9) that it would differentiate between trauma caused by a knife and trauma caused by a bullet and trauma caused by shrapnel, but the code wouldn’t be any different if an adult, a robot, a child or a dog used the knife, gun or explosive device.

  9. I am entirely unconvinced by this post. The reason I am unconvinced is that you’re glossing over the issue of whether a sentient robot counts as a person. The dolphin case you link to points out that “It is obvious that an animal cannot function as a plaintiff in the same manner as a juridically competent human being.” It’s not clear that a being that can speak for itself would be covered by that. Furthermore, this decision doesn’t define “animal” and (even ignoring the fact that animals are organic) I doubt that a court would classify a sentient robot as an animal in preference to classifying it as a human.

    Even if you say that robots obviously are not human and the law applies to humans, I would think that it’s common sense enough to treat a sentient robot as a person that courts would strain, if necessary, to read the definition of “human” or “person” as including robots even if some law contains verbiage that would seem to disqualify them if taken literally. (Internet blogs aren’t printed on a press. Are they subject to freedom of the press? Would the answer be different if some pre-Internet court decision quoted the dictionary definition of “press”?)

    And this isn’t even considering the fact that there are a whole bunch of non-evil sentient robots in the Marvel Universe and there hasn’t been any serious consideration that they are animals or objects and not subject to laws protecting humans. (The Scarlet Witch married the Vision–is she committing tax fraud because it’s not possible to marry an object?) It is routine in the MU to treat sentient robots as people, even if you think the law in real life doesn’t really support it, and any legal analysis of the MU has to take this into account.

    • The reason I am unconvinced is that you’re glossing over the issue of whether a sentient robot counts as a person.

      There are very few cases on point, and I’m using what’s out there, not glossing over it. The same case makes it clear that non-humans could be made legal persons, but it would take an explicit act of Congress to do so. Since I haven’t seen evidence of such an act in the Marvel Universe (not to say it isn’t out there, but I haven’t seen it), I presume that it doesn’t exist. It’s simply not the role of the courts to make that change.

      Even if you say that robots obviously are not human and the law applies to humans, I would think that it’s common sense enough to treat a sentient robot as a person that courts would strain, if necessary, to read the definition of “human” or “person” as including robots

      I disagree that it’s common sense at all. First of all, the available cases make it clear that it would have to be an explicit act of Congress; it’s simply not the court’s job to decide if a given non-human is a legal person.

      Second, there are lots of reasons why endowing an artificial intelligence with personhood would be legally problematic. For starters: is turning off an AI murder? What if the AI has been copied onto a computer against the wishes of the computer’s owner? Is the computer’s owner now on the hook for keeping the computer running forever? What if it spreads like a virus? What if it’s spread by a third party and not by the AI itself?

      Does it matter if the computer isn’t fast enough to run the AI program in real time, thus effectively making the AI mentally disabled? Why or why not? And if such a mentally disabled AI is still a person, then what about a more primitive AI program (i.e. one that wouldn’t be fully intelligent even on a supercomputer)? What’s the difference? Is it a matter of capacity? If so, then how does that square with the fact that mentally disabled people are full legal persons? Either an AI is only a person if it is of at least “normal” intelligence, which is a judicially unmanageable mess of a definition, not to mention that it contradicts how we treat mentally disabled humans, or you end up declaring all computer programs to be people, which is madness.

      An AI would be effectively immortal, so that causes numerous problems as well, and it does so in a way that doesn’t apply to other immortal beings in the Marvel Universe because, unlike those beings, the AI produces immortal “children.” Other issues: Where is an AI a citizen? Where it was switched on for the first time? Does it take the citizenship of its author? Does an AI need a passport in order to be copied across the Internet onto a computer in another country? I could go on, but you get the point. This is not at all a trivial matter of saying “well, it acts like a human, humans are legal persons, so it must be a legal person.”

      Remember, too, that Cetacean Community was about the limited issue of standing to bring a suit under certain specific federal statutes, not full-blown personhood. If a court finds that an explicit act of Congress is needed to confer standing, then it sure as heck would find it necessary for personhood.

      The Scarlet Witch married the Vision–is she committing tax fraud because it’s not possible to marry an object?

      I haven’t read that issue: do they sign a marriage license or do they just have a ceremony and live as husband and wife? Legally-speaking there’s nothing stopping someone from “marrying” their Roomba in the real world, it just won’t be a legally recognized marriage. So unless it’s clear that the marriage was legally effective, I don’t take that as evidence that AIs are recognized as people in the Marvel Universe.

      • “Does an AI need a passport in order to be copied across the Internet onto a computer in another country?”

        Well, in fiction, copying/transferring a file is treated as moving it, but in reality, it’s creating a duplicate while leaving the original file intact. If you copy an AI, the AI hasn’t actually moved — if anything, it’s been reproduced by cloning. The copy would be more properly treated as an offspring.

    • And the analogy to the printing press is inapposite. The Framers were aware, of course, that the printing press was a piece of technology that replaced earlier methods of communication. So protecting “the press” makes it clear that they were interested in protecting communication, including newly invented mechanisms. We have clear contemporary evidence that “freedom of the press” was to be defined in functional terms:

      In the letter sent by the Continental Congress (October 26, 1774) to the Inhabitants of Quebec, referring to the “five great rights” it was said: “The last right we shall mention, regards the freedom of the press. The importance of this consists, besides the advancement of truth, science, morality, and arts in general, in its diffusion of liberal sentiments on the administration of Government, its ready communication of thoughts between subject, and its consequential promotion of union among them, whereby oppressive officers are shamed or intimidated, into more honourable and just modes of conducting affairs.”

      By contrast, humans have been around throughout history, so there’s no reason to think that constitutional references to people mean “all intelligent beings in general, including ones we haven’t seen yet.” Similarly, there’s no functional definition of legal persons in the Constitution or in contemporary writings by the Framers.

      • Humans have been around through history, but printing presses have been around for a good portion of history too. The reason that constitutional references to people don’t mention AIs is that AIs didn’t exist, which is exactly the same reason why freedom of the press doesn’t mention blogs or TV. That doesn’t mean it couldn’t be read as including AIs once someone made them. I don’t think this is any more of a stretch than reading existing laws to cover gay marriage (which happens quite a bit).

        As for thorny questions like what happens if the AI is copied, etc., I’d think they are very interesting questions that may actually come up once we have AIs, and not answering them by saying that AIs don’t count as people dodges the most interesting questions.

        I’d also point out that in most popular science fiction–and this definitely includes the Marvel Universe–AIs are depicted in such a way as to not have many of those problems even though they logically should. Most AIs are robots which for some illogical reason are never copied, cannot be turned off, do not spread like a virus, are not put in someone’s computer against their wishes, etc., and backups only matter when it comes time to bring the destroyed robot back from the dead. They are treated like metal people with the limits of people. Most of the remainder of the AIs appear under circumstances where those questions are moot (such as an AI that is hostile enough that there’s no moral problem in deleting it even if it is a person). You never see the MCP in Tron with offsite backups, nor does anyone in Star Wars make copies of a droid into a second body.

      • TimothyAWiseman

        @Ken Arromdee I have to disagree that printing presses have been around for a good portion of history. The printing press was invented in 1440 and not widespread for a long time after that. History on the other hand was very well established as an academic discipline by the time of Tacitus (AD 56) and being consciously recorded for posterity long before that. The printing press was still reasonably new in historical standards by the time that the Constitution was adopted in 1787.

        I also disagree that AI’s should logically have the problem of it being impractical to copy them. AI’s in many forms of popular science fiction are tethered to hardware in ways that make complete logical sense. Some are based on Neural Nets. Neural Nets as used currently are relatively simple and often simulated in software rather than existing as dedicated hardware. It is though perfectly reasonably to suppose that strong AI will be formed by creating enormously complicated neural nets actualized in hardware, perhaps requiring some reliance on self assembly and emergent processes. It would be perfectly reasonable to believe that after such a neural net had undergone any length of running and thus self modification it would be nearly as impracticable to copy it as it would be to perfectly and exactly copy a living human brain. Although Asimov is meticulous about not going deeply into the technology behind it, there are hints that he envisions the “positronic brains” of the robots in his book as being similar to that concept. Although they may all start in exactly the same default configuration (or at least each robot series starts with the same default), they become unique and impracticable to copy after running for any length of time.

        Others go with the requirement that the AI is using quantum properties and the evolution of that quantum state is impossible to precisely measure much less copy. It would not surprise me at all if, should we ever develop genuine strong AI, it relies on a method similar to one of those two and that an AI, at least after it has run for a while, is unique and at least extremely difficult to copy.

        Incidentally, if you have not done so yet, you may wish to read Asimov’s The Positronic Man. It is a prequel of sorts to the Robot series and it deals explicitly with robots’ place in society and the evolution and eventual quest for legal recognition of one robot in particular.

      • Another argument that the Framers probably wouldn’t have accepted non-humans as people is based on the fact that they didn’t consider all Homo sapiens to be people. I think some of them chose not to formally define the term “person” due to slavery.

        But if there were sentient robots existing in the late 18th century, would they have displaced Africans on the plantations?

    • Your premise is flawed. Most robots in the Marvel Universe are NOT treated as people… or Reed Richards et al would be in prison for the number of Doombots they’ve “killed”. Arcade is also fond of lifelike robots, they get killed with abandon, too.

  10. You’re right, but I didn’t quite say what I meant. Most robots which are developed as characters are treated as people. Robots that are not developed are often treated as disposable. There may not be any real difference between them–it’s just that the ones which are treated as people are portrayed in more detail.

    Sometimes this gets justified, sort of: the story states, as a premise, that there’s a difference between a robot that is just intelligent and a robot that is actually a person. The robot can hold a conversation and understand things, but there’s something… missing from its personality which all humans have, which means that the robot isn’t really a person and it’s okay to destroy it. Often this is some kind of unpredictable quality in the “person” robot, that can never be copied or mass produced. You may object that this is scientific nonsense, and of course it is, but comic book universes often work that way.

    I think it’s fair to say that in most superhero universes, there are robots which are treated as people, even if some others aren’t.

    • But Ultron is a robot that can be copied. In fact, in the episode in question he copies himself into multiple bodies, including Iron Man armors. So inasmuch as this is about Ultron in particular, that weighs against your argument.

      In general, though, the reason I mentioned all of the issues that would arise from granting AIs legal personhood is that courts are less likely to create new rights (or causes of action or defenses) when it would have such far-reaching consequences. The theory is that complex issues are better left to legislatures that can come up with a coherent statute all at once, rather than leaving it to the courts to develop piecemeal. This further bolsters the existing cases holding that it must be Congress, not the courts, to grant non-humans legal rights.

      • The way I recall it (haven’t read most comics in several years):

        1) Ultron duplicates are treated as extensions of Ultron and are not really independent beings. Even if they are copies and don’t literally have their every move controlled by him, they are not treated as autonomous and fall under the clause above about robots that have something missing. The result is that these duplicates are not considered people *but* this does not extend to robots in general.

        2) In comic books, even when organic lifeforms are duplicated, the duplicates are frequently not treated as people.

        3) A little online research shows that Ultron has survived defeat by having a disembodied consciousness escape. In comic books even having such a thing is a big sign that it is considered a person. Copies never have those.

      • It is not clear whether the duplicates are controlled by a master copy or not. They act in concert, but since they are all running the same program that is to be expected.

        Some organic clones are treated as expendable, but others are not. Legally-speaking, they would all have full legal rights, regardless of how their creators or killers treat them.

        In comic books, escaping defeat via a disembodied consciousness is a big sign that the writers painted themselves into a corner and needed a way for a villain to come back. Even without such meta-analysis, I’m not sure it has much legal significance. “Looks like a duck, acts like a duck, quacks like a duck, therefore it’s a duck” analysis only goes so far. The courts have made it clear that they aren’t willing to extend legal rights to non-humans no matter how human-like they are discovered to be. So it’s an act of Congress or nothing, and I don’t see it in the comics, full stop. If I’m wrong about that factual issue then the analysis changes, but until then, no dice.

      • TimothyAWiseman

        At the risk of stepping far aside from law, I think “In comic books, escaping defeat via a disembodied consciousness is a big sign that the writers painted themselves into a corner and needed a way for a villain to come back” needs to be carefully qualified with “without significant foreshadowing”.

        There was a lovely quote from C.S. Lewis, which I can’t find at the moment, in which he discusses ghost stories. A ghost story as such is a perfectly fine piece of literature, but a ghost appearing in something which is not really a ghost story, especially showing up holding the solution to some major problem, is a sign the writer did not plan and is trying to escape. A villain whose ability to separate mind and body is well established (or at least thoroughly hinted at) before the climactic battle can be an interesting and legitimate villain. Having it happen suddenly so that the villain escapes to fight another day is atrocious deus ex machina and a sign they painted themselves into a corner.

  11. I don’t agree with the conclusion about Pym’s liability in the animated-series continuity (I can’t speak to the comics). In particular, I don’t think the situation meets factors (c) and (f) in the definition of abnormally dangerous activities. In the show, Hank originally built the Ultrons to be entirely nonviolent, and they operated for months within those limits. So Hank was able to minimize the risk through the exercise of reasonable care. But as I recall the storyline in the show, when the Avengers were unable to defeat Kang themselves, Hank reluctantly decided that they had no choice but to teach the Ultrons the concept of violence. He was aware of the danger of this, but under the circumstances (namely the world having been conquered by a guy from the future), the value to the community pretty clearly outweighed the risk. Afterward, Hank took every reasonable step to eliminate the danger the surviving Ultrons posed, but one Ultron outsmarted and outmaneuvered him.

    Given that Ultron’s aggressiveness came about as a result of wartime necessity, I’d submit that the situation is analogous to, say, the danger posed by minefields or unexploded bombs left in place after a war — or by a character like Rambo in the original First Blood film, a person “programmed” to be a killing machine and thus unable to reassimilate into normal society afterward. True, in those cases the government is responsible for the creation of the danger. But let’s imagine a scenario where a private individual in an occupied country creates minefields and other lethal traps as part of a guerrilla campaign that eventually succeeds in freeing the country, and then the individual tries to disarm all the weapons/traps he created, but one of them accidentally goes off and kills civilians before he can get to it. What would be the liability there?

  12. As another data point on Ultron, Mighty Avengers #4 has Pym say “Ultron’s alive. It’s intelligent. It’s of itself. It’s birthed itself past its programming.” Hey, if he claims Ultron is alive, who am I to argue?

    As for clones, the point is that copies of *anything* in comics are often treated as disposable, even when the originals are undeniably people. So the fact that copies of robots are considered disposable doesn’t have much bearing on the status of robots in general.

    • “Alive” is a far cry from legal personhood, of course. The same is true of intelligence. Intelligence alone is not the standard by which personhood is measured. Non-human primates, dolphins, African Grey parrots, crows, and elephants all demonstrate human-like mental qualities. In some cases they are more intelligent than very young children. It is clear that, as the law stands now, that cannot be the sole test.

      Note, by the way, that after Ultron is stopped Pym destroys all of the Ultron components. If Ultron is a person then this amounts to an extrajudicial execution without even the pretense of a trial. So by his own conduct Pym shows that he doesn’t actually view Ultron as a legal person, even if he views it as intelligent. Pym clearly sees Ultron more like a rabid animal that must be destroyed, not as a criminal that must be brought to justice.

      Finally, even if Pym flatly stated “Ultron is a person, just like a human, and should be treated the same,” that would be of no legal importance. Lots of people, including serious legal scholars, have made the same claims about dolphins and primates, but so far they have not been accepted.

  13. I don’t think that even the fact that characters in comic books destroy robots means that robots aren’t supposed to be people. Consider what would happen if the Vision (or any other robotic good guy) was killed. It would certainly be treated like a murder, not like destruction of useful property or like the killing of a friendly dog.

    The fact that some enemy robots can be killed off is more a weird exception dating back to the time when comics were pretty simplistic and issues of robot sentience weren’t even considered than it is part of a serious effort at world-building indicating that robots are not considered people in that world. It’s inconsistent writing. (And I’ve also seen writing that attempts to avoid this. For instance, when Legion of Super-Heroes did its Millennium crossover, Laurel Kent was revealed to be a robot. Element Lad disarmed her and basically said “I could disintegrate you right now, but you’re capable of human thought, and it would be killing, robot or not”.)

    • TimothyAWiseman

      The Avengers, the Scarlet Witch in particular, would probably see it as murder if Vision were killed, and react accordingly. But it is far from clear that the courts, even inside the Marvel-verse, would view it as murder.

    • “The fact that some enemy robots can be killed off is more of a weird exception that dates back from the time when comics were pretty simplistic, and issues of robot sentience weren’t even considered”

      It’s also, at least in part, because there is less of an issue having kids watch a show or read a comic in which twenty robots are ripped apart by the hero than one in which twenty human henchmen are ripped apart. The former will be considered tame with no moral implications. The latter will be regarded as inappropriate for young kids.

