Non-Human Intelligences III: Categories

Now, at last, we get to the good stuff. We’ve laid the foundation and talked about existing law; now it’s time to talk about specific types of non-human intelligences and how the law might treat them.

From the main comic book stories, we can identify three main types of non-human intelligences. The first are individuals from animal species who become intelligent for one reason or another. Gorilla Grodd would be a good example here, but there are also Gorr and the New Men. Then there are genuinely alien intelligences, like the Skrull and Shi’ar, i.e. species we’ve never encountered before. Finally, there are machine and non-biological intelligences like Bastion or one incarnation of The Thinker. Unsurprisingly, the law is likely to treat these categories differently.

I. Elevated Animals

The first category we can safely call “elevated animals,” i.e. species with which we are already familiar and do not generally consider to be sapient, at least not in the way that humans are. Dolphins and chimpanzees may be smart and may even display a lot of human-like characteristics, but we do not, by and large, consider them to be people. The law certainly does not, for good or ill.

So what happens when one of them turns out to be actually intelligent, to the point of being able to have real conversations with human people? Who can do basically everything that you or I can do but just happens to be a canid? It seems likely that a being capable of asking for civil rights would probably be granted them, though if the being doing the asking is of a species that is not generally so capable, it seems unlikely that a court would recognize any rights beyond the being in question. So Gorilla Grodd would probably be afforded due process, but that doesn’t mean that we’re emptying out the local zoo’s primate house and getting the former residents into public housing. Since most comics stories don’t suggest that elevated animals are naturally occurring or likely to be all that common (though there are exceptions), the courts would probably be content to do a case-by-case analysis here.

II. Alien Species

But entirely new species? Particularly ones that are extraterrestrial in origin? Here it seems likely that the courts would be a little more willing to paint with a broader brush. If an alien showed up exhibiting all the marks of a civilized race, like the Shi’ar, Skrull, or Kree, counting the entire species as being sapient and entitled to civil rights seems plausible. If a race is trying to establish diplomatic relations with human governments, whether or not they count as people seems like a foregone conclusion. But most comic books aren’t generally about space exploration as such, and while both DC and Marvel have extraterrestrial settings, those races usually discover humanity rather than the other way around, so the borderline cases have not really been explored in comics as much as they have in other kinds of speculative fiction. For example, Alastair Reynolds’ Revelation Space series has a species known as the Pattern Jugglers, a kind of collective marine “intelligence” of sorts that spans the entire surface of the ocean worlds where they are found. Individual organisms are basically mindless, but together, they constitute a massive distributed intelligence, after a fashion. The Pattern Jugglers don’t necessarily have anything resembling personality, and there’s some debate in the books about whether they are actually intelligent or are simply massive biological machines used to store recordings of the races they encounter. How’s a court going to treat that? The closest analog in the comics universe is probably Ego the Living Planet, but he still has an identifiable personality and will. Still, being an actual planet, it’s not like a human court is going to try to assert jurisdiction over him, so the point may be moot.

All that by way of saying that as far as comics go, alien intelligences are probably going to fare pretty well legally, because most of them are more or less obviously people.

III. Non-biological Intelligences

The same cannot necessarily be said about non-biological intelligences. Here, the courts may well decide that enough is too much. Sure, we can recognize the personhood of non-humans that are obviously acting like people, but machines? Machines that people manufacture? That can be turned off and on? Whatever else they may be, and however smart they appear, people, as people, aren’t like that. Getting the courts to recognize the personhood of machine intelligences is going to be really tough.

This is in no small part due to the fact that trying to draw the line here is going to be a huge mess. Compared to computer science, biological taxonomy is downright obvious. That’s a dog, and that’s a cat, and even if I don’t know the genetics, the differences are there to see. But deciding which programs are intelligent and which aren’t? Depending on how we present artificial intelligence in our comics, that has the potential to be a lot fuzzier.

If an artificial intelligence is simply a really well-designed, powerful program, even one that can regularly pass a Turing Test, asserting that there is some kind of metaphysical difference between that and a calculator which makes the AI more like a human being than a video game is going to be almost impossible to do with the kind of rigor a court is going to require. Besides, recognizing AI personality raises a whole host of other issues:

If the program is a person, is powering down the computer on which it is running murder?  Does a powered-down AI have a right to be powered back on? If the program is copied, do we now have two people? Is deleting one of the copies homicide?  If the program is installed on a person’s computer against their will, do they have to take care of it forever or can they delete it? Does introducing a trojan horse constitute assault, or is trespass a better analogy? Is the essence of the being the code, the running program, or what? Does the consciousness reside on the hard drive or in RAM? Maybe the CPU cache? What if the program is installed on a really slow computer?  In a theoretical sense it’s still the same program, but is it still intelligent?  We don’t even really think about regular programs with this kind of rigor, so something as sophisticated as an AI is likely to make a court draw a bright line that means it doesn’t have to think about that stuff.
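To see why these questions resist bright lines, here is a toy sketch (hypothetical code, not from the post; the `Agent` class and its method names are invented for illustration). Duplicating a running program’s state produces two instances that immediately begin accumulating different “memories,” so asking which copy is “the” person quickly becomes unanswerable.

```python
import copy

class Agent:
    """A toy stand-in for an AI's running state: code plus memories."""
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

original = Agent()
original.experience("first boot")

# "Copying the program": a deep copy duplicates all state at this instant.
duplicate = copy.deepcopy(original)

# From here on, the two instances accumulate different memories.
original.experience("met Alice")
duplicate.experience("met Bob")

# The copies are now distinct individuals by any memory-based test.
assert original.memories != duplicate.memories
```

The legal puzzle tracks the technical one: the moment after the copy, both instances have equal claim to the pre-copy history, and neither is obviously the “original” person.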

On the other hand, if our non-biological intelligence is somehow tied to particular hardware, similar to Asimov’s positronic brain, so that it is more than merely software but an actual artificial brain, that’s a closer case. If a plaintiff can point to a particular physical object, the destruction of which would result in the death of the plaintiff, that’s a lot more like human persons than a program which can run on any compatible hardware. This is particularly true if, as in the case of Data, no one is really quite sure how it works, making the entity more or less unique.

Then there’s the issue of property law. People own computers. They are personal property. Suddenly recognizing that the AI running in my PC has civil rights does weird things to my property interests there, particularly as disposing of the computer would theoretically represent the end of the life of the AI. Might such an AI be immediately guilty of trespass to chattels, giving me the right to “evict” the AI from my computer at will? And since every other computer in the world is owned by someone, where is the AI going to go?

Again, a lot of these questions are dealt with more in science fiction literature than comic books, but enough AI characters show up from time to time that it’s worth thinking about.

IV. Conclusion

So it would seem that how a particular non-human intelligence is treated would depend on what exactly we’re talking about. “Elevated” animals can probably expect a case-by-case recognition of their personality, while intelligent alien species would probably be recognized as persons wholesale, particularly if they act like civilized beings. Non-biological intelligences are going to have a harder time, and traditional AI conceived as intelligent programs running on more-or-less ordinary computer hardware are going to be a tougher sell than some kind of software/hardware amalgam which is categorically different from mundane computers.

35 responses to “Non-Human Intelligences III: Categories”

  1. Gorilla Grodd is not a single individual of a normally nonintelligent species. He comes from a hidden civilization of intelligent gorillas (though they were created by an alien from normal gorillas).

In the DCU, Gorilla City has also joined the United Nations, which may have some implications for how such beings are treated.

    • Thanks for the correction. But do the Gorilla City gorillas require continued intervention to maintain their intelligent status? For example, if Grodd had a child with a normal gorilla, would the child be normal or hyperintelligent?

      • There seems to be no clear answer to this online. Here’s a question: would sexual intercourse between an “enlightened gorilla” and a regular mundane one constitute bestiality? Even if they are the same species, one might question if the other could give consent.

        It is somewhat moot however, except possibly for GC gorillas living abroad.

      • They’d be intelligent. Gorilla City is its own civilization and doesn’t take external forces to maintain.

      • Sorry, misread. I don’t even know if a superintelligent gorilla could have a child with a normal one. I suspect not. They’re mostly treated like their own species.

In reply to Tom Pile: Whether or not it would be bestiality would probably depend on the political issue of what kind of jurisdiction the enlightened gorilla is in. If the laws of humans apply, then you might have some case; if not, then it really depends on the norms and laws of Gorilla City. Consent might not be a problem. I’m not sure about our definition of consent, but I don’t think it can only be verbal in the U.S. Since regular gorillas obviously can’t give verbal consent, you might be able to argue that movements and sounds could constitute consent (but at that point you’d probably find it easier to institute a gorilla court system to deal with gorilla issues).

To Ken Arromdee: There’s at least a theoretical possibility. One idea floated for cloning long-extinct species (such as mammoths), or even more recently extinct and endangered ones, would be to have a female from a very closely related species bear the child (though that’s a very difficult process with no guarantees, I understand). Mules are the children of horses and donkeys. Wolves have interbred with coyotes on occasion. Of course, the main question is whether or not enlightened gorillas having sex with regular ones would be considered bestiality, which is a very different matter.

What of supernatural entities? Ghosts of dead people? Demons and angels? Clearly intelligent, but… people? (I won’t comment on people trying to sue a god or whatever…)

    • As far as ghosts and vampires go, I think we’d say that if their mind has a continued existence and (in the case of ghosts) they have some means of manifesting themselves to the living, then they never really died, legally-speaking.

      The Uniform Determination of Death Act, which has been adopted by most states, says that “An individual who has sustained…irreversible cessation of all functions of the entire brain, including the brain stem, is dead.” This equates the physical brain with the mind (which makes sense in the real world), but if a comic book character’s mind somehow continued despite the death of their physical brain, I think a court would be comfortable saying that considering them legally alive is consistent with the intent of the Act.

      This is especially true given that the “determination of death must be made in accordance with accepted medical standards.” So long as the medical community demonstrates that the ghost or vampire is still mentally the same as they were when they were alive, that gives a lot of weight to the conclusion that they aren’t legally dead.

      • “the conclusion that they aren’t legally dead” … wow, that could change a lot of murder cases, no? Not that every dead person becomes a ghost…
        Oh, and further more, if you call back their “soul” and show that they are existing in another form, but still the same person mentally, would that count as them being a ghost / no murder?
        Yikes!

      • Well, most ghosts seem to be less than complete, mentally-speaking. They often don’t speak or are obsessed with a particular thing such as the unjust circumstances of their death. In that case I think murder would still be an appropriate charge.

        But suppose someone died and became a fully conscious ghost that was able to manifest themselves visibly and audibly (e.g. a Harry Potter-style ghost). I think in that case murder might not be the right charge. There would still be some serious charges, of course, not to mention a massive tort suit for the loss of the person’s body, but if I were the defense attorney I’d definitely argue against a charge of murder.

There are two considerations regarding artificial intelligence: one is the algorithm and the other is memory. The algorithm is a computer program that had to be written by somebody and can be owned and copyrighted. The memories, however, would be what makes the AI unique. The same logic applies to you and your twin brother: you are unique individuals because you have different memories. Different memories mean different decisions, as all our decisions are based not simply on our decision-making process but on what we believe to be true. With a computer program, its memories could be stored on a hard drive, flash drive, memory card, or CD. Destroying all copies of the memories collected by one program would be equivalent to destroying a unique individual, but if many copies were made then destroying one wouldn’t be murder at all.

So that brings us to this question: if Madrox the Multiple Man makes a hundred copies of himself and I kill one, is it murder? It would be assault in the sense that Madrox the unique individual would be hurt (if only emotionally) when one of his copies died, but would it be murder? It’s an interesting case because when one of Madrox’s copies dies, they stay dead and there is a body, so if the police didn’t know that the original Madrox was still alive I would be in a lot of trouble.

    • Well, further to that, would Madrox himself be vulnerable to criminal charges if he produced a duplicate which was killed (and he could have foreseen the duplicate would likely die)?

  4. Thompson Hayner

Would a robot whose mind has been copied from a human be regarded as a human or non-human intelligence (as in the case of Noman)? How about if a human’s mind had been uploaded to a clone (as in the case of the HateMonger)?

Based on their conclusions, the fact that it’s a robot body and originally a human mind probably makes it a lot easier to get recognized as a person.

  5. In regards to AI – I do recall a series with an AI protagonist. She (the AI self-identified as female) owned a company (under a false identity, because she didn’t yet have a legal identity of her own) and used that to earn the money to buy computers of her own to move herself into.

    If I remember right, she intended those computers to be used only as back-up, in case something happened to the ones she currently existed on (similar to someone getting, say, an Epi-pen, I guess? Or maybe cardiac paddles?). I don’t recall if she did end up needing to use them, but I do remember mention of how much safer she felt, knowing that she *had* that back-up available.

    She was also trying to get herself recognized as a legal person, but I don’t remember if it had gone to court yet or what the result was if it had.

These sorts of discussions always make me think of the holographic doctor from Star Trek: Voyager. Purely a software construct, he can be (and has been) copied, ‘lives’ on multiple state-owned computers, and is one of hundreds of identical (but discrete) installations. It also brings up issues of obsolescence, and planned obsolescence, given that I think there were at least three successive EMH generations. More interestingly, you can make a strong argument that he didn’t actually start off being a ‘person’ but only became one after a few seasons.

    So, if he ends up being declared a person, what does that mean for all of the thousands of other EMH programs? Do all of the existing programs have to be activated and left running because they have the potential to become people too? If you don’t, or delete them, is that abortion, murder or just good resource management given what energy and memory-chewing programs they’re supposed to be? Is it ok to delete them if they’ve never been turned on? What if you re-assign them or re-write them for other duties? Can you force them to be upgraded? Do their makers have any obligations towards them? What if they didn’t realize that what they were making was going to turn out to be a person?

Interesting stuff; it got alluded to but was never really looked at in any great detail on the show.

  7. If AIs are property, what about liability? If my AI escapes and becomes a supervillain, to what extent am I liable for the damages? (Let’s assume I created the AI and own it; otherwise, I imagine there would be a distinction between the manufacturer and the owner.) If I try and fail to recapture it, does that help me?

    What if my AI escapes and becomes a superhero? Do I have a duty to not recapture it because it is helping people?

    • You mean like the Thor clone in Civil War? Not quite the same thing but it did kill Bill Foster. So who is liable? The Thor clone? It was a child. Mister Fantastic? Iron Man? Thor???

      • Probably Iron Man since he was (if I remember correctly) an official agent of the government and in charge of the pro-registration forces.

  8. Similarly, how liable is Dr Pym for all the things that Ultron has done?

    • Presumably Hank Pym got his money from his then girlfriend Janet Van Dyne aka The Wasp. Which is good for him because creating a robot that wanted to enslave humanity would certainly have gotten him fired from any corporate or government lab he would have been working at in real life.

      That reminds me: in the John Byrne Avengers: West Coast series the government hires an international team to kill the Vision. Obviously the government didn’t consider the Vision a person.

      • They considered him a person, but uppermost in their minds was the fact that he’d previously used the Internet-equivalent of the day to crack the defence systems of dozens of nations. Vision had to answer for that and the various nations victimized as a result believed that they couldn’t allow the information that Vision had accumulated in the course of his efforts to spread.


  10. As far as the ghost issue, can you imagine the civil-rights cases that would result? Granted, many fields of employment require a physical body, but think of, say, a police dispatcher who died and came back as a ghost to find their job had been given to someone else. Or would that count as being AWOL and a justified termination?

I can’t imagine that death is considered work avoidance and a reason for termination. The basic assumption would be that death is, well, death. While the legal definition may stand, the OED’s definition is that death is “the worst possible state of health.” Amusing, and true.

Most courts, if forced to account for a ghost going after their former employer for an unfair termination, would probably adopt a strict interpretation under which “life” implies a physical body. Unless it’s the mob; then you have two types of unfair termination.

    • If coming back as a ghost was a rare occurrence then I can’t see how they could have been expected to keep the job open. If they could reasonably expect the dispatcher to return as a ghost and that the ghost would want to work (and the laws permit this) then maybe they could be sued.
      Also don’t forget that dispatchers do need at least some physical interaction like pushing a button to answer a call and your comment seems to suggest that you’re assuming ghosts can’t touch things. Work as singers and motivational speakers is possible of course.


12. You have addressed the rights of these beings, but what of the responsibilities of citizenship, i.e. military draft registration, jury duty, etc.?

13. Oddly enough, individual machine intelligences would possibly receive more recognition when and if we came into contact with a race of machine intelligences, Cybertronians, etc.


15. Does one’s “disposability”/“replaceability” make one any less of a person? In a colony of sentient/sapient/personhood-holding microscopic organisms (where each organism individually holds whatever characteristic(s) would otherwise be taken into consideration; it’s not some sort of distributed mind), should you consider each cell less worthy than a single human if they somehow had minds similar to humans’? What if a whole country the size of China were composed of clones (in the real-life sense): would each of those cloned humans be less of a person than a Chinese person? Is a Chinese person less of a person than someone from some really small country?

How does the fact that a computer program can be and has been copied, and exists in many different locations with slight differences in its runtime variables and non-volatile storage areas, make each instance less of a person than if the exact same program had only a single copy running on a standalone robot body?

  16. Pingback: The Avengers: Declarations of War | Law and the Multiverse

17. What happens if the non-human intelligence is made of [[http://tvtropes.org/pmwiki/pmwiki.php/Main/OrganicTechnology|organic technology]]? On the one hand, the machine was made in a lab with all the functions of a computer, but on the other hand, it’s technically biologically alive!

  18. “How does the fact a computer program can and has been copied and exists in many different locations with slight differences in its runtime variables and non-volatile storage areas make each instance less of a person than if the exact same program only had a single copy running on a standalone robot body?”

It doesn’t make it any less or any more. For example, the Doctor from Voyager (mentioned above) was a highly advanced but non-sentient* EMH (Emergency Medical Hologram) program that was designed to be used for short periods of time in emergency situations. After Voyager was lost in the Delta Quadrant and its human doctor killed, the EMH was turned on and left running indefinitely. Thanks to this, it eventually exceeded its programming and became a true AI, becoming “him” and a person. With one possible exception at the season 5/6 switch, no other version of the EMH has experienced this situation, and therefore none are true AIs. Had they been created as true AIs (and therefore persons) from the beginning*, then all copies would be.

    *Some fans argue that based on what was shown the EMH was clearly an AI and a person from day one. The writers have said that this was not the case however. So whether we as the audience feel he meets the definition or not, for purposes of the show he didn’t at the beginning and therefore neither did any of the others.

That sorta puts the stock EMH program in the same position as human fetuses…

      I’m not sure how to feel about this…
