Non-Human Intelligences I: Introduction

One of the most common questions we get is how the law would treat a genuinely non-human intelligence. Such characters appear with regularity in most of the major comics universes. The DC universe has Superman and various other Kryptonians as well as Gorilla Grodd, etc. Marvel has described entire galactic empires, including the Shi’ar and Skrull. Both universes include intelligent machines of various kinds.

This is a big subject, and as there is currently no law on the books which would directly answer this question, finding an answer is going to involve at least as much philosophy and history as it will law. But it is an important question, so we will consider it here.

This is likely to be the first in a series of posts. Most of the consideration of actual examples will be in later posts; this one attempts to set the stage for such questions by examining the reasons for and history of human beings’ rather unique status in the legal system.

I: Imago Dei

The American legal system is a common law jurisdiction. Technically, this means that we use precedent, i.e. judicial decisions, to create law in addition to the more traditional law-making powers, the executive and the legislature. Common law legal systems are based on tradition as much as anything else, and the tradition in question is the English legal system. Early American judges made liberal use of English legal materials published prior to 1776 and treated decisions by English courts before that time as binding precedent. There are practically no situations in which an American court would need to reference law created before 1776, as just about every conceivable issue has been addressed by case law or statute since then, but the opinions of English judges prior to Independence–or at minimum decisions prior to the arrival of English settlers in North America–are still technically valid law. Generally speaking, state statutory codes will have a section explicitly recognizing this.

The reason we bring this up is because, for good or ill, the English legal system was created in a time when Christianity was the official state religion, and when having a state church meant a lot more than it does in many European countries that still have state churches. Christianity was so much a part of the culture that not only were major Christian beliefs like the existence of God and the nature of man pretty uncontroversial, but they actually played an active role in the legal system. Trial by ordeal was originally conceived as a way to permit God to reveal guilt or innocence. Really. People took that very seriously. It took centuries for the problems of this to become undeniable, and it was not until the Roman Catholic Church decreed at the Fourth Lateran Council that priests were no longer permitted to take part in such ordeals that the legal system had to scramble around for a new decision-making process. They eventually came up with trial by jury, but it took almost 500 years before it resembled modern practice.

All of that is by way of saying that until only a few centuries ago, everyone in England believed that humans were created in the image of God, the “imago Dei.” The doctrine continues to do an incredible amount of work in Christian theology, but it contributed to the legal system by giving an obvious justification for treating humans as different from animals. The very word “homicide” comes from the Latin for “the killing of a human being.” Murder is homicide with a particular intent, but killing a cow or dog can, by definition, never be murder, because there is no underlying homicide.

The doctrine itself does show up in American jurisprudence from time to time. A Texas court referenced it while upholding a Sunday law. McLeod v. State, 180 S.W. 117, 119 (Tex. Crim. App. 1915). It shows up in a discussion of the origin of natural rights in nineteenth-century Michigan. People v. Gallagher, 4 Mich. 244, 278 (1856) (Pratt, P. J., dissenting). It regularly shows up in cases discussing criminal sentencing. See, e.g., U.S. v. Carvajal, 2005 U.S. Dist. LEXIS 3076 (S.D.N.Y. 2005).

But these days it is almost always seen in the context of First Amendment establishment cases, which may lead one to ask what the point of this discussion is. Other cultures have come up with reasons for outlawing murder without Christianity, and even before it. Murder doesn’t seem to have been any less criminal in ancient Egypt or Babylon than in Jerusalem. Fair enough. But their justifications were just as religious. Heck, it is illegal to kill cows in most of India; not that cows are people as such, but it’s definitely a religion-based prohibition. Be all of that as it may, Christianity is undeniably the heritage of the West, particularly in the development of the legal system, and its effects are still with us. The fact of a distinction between humans and non-humans is well established, even if the justification is gone.

Because that justification is largely gone now, the doctrine of the imago Dei no longer does any real work in the legal system. Organized Christianity has been receding from its central position in Western culture for centuries, and the nineteenth century saw the birth and growth of legal positivism, which is the view that the law is simply what humans have made of it and does not appeal to any deeper, metaphysical truths. But the result of the theological doctrine remains: the law treats human beings as a different kind of thing from inanimate objects and other living creatures. This happens to line up pretty well with most people’s basic ethical intuitions, so it hasn’t received a lot of scrutiny. But if we were to discover a genuinely intelligent non-human entity, you can bet that the justification for limiting personhood and its attendant civil rights to human beings would immediately get a rehearing.

II: Coming Up With a Definition

Why not simply punt and give anything with intelligence full status as human? A lot of comic book and science fiction readers would probably prefer such a result. But things are not so simple as that. The entire discipline of anthropology is basically an attempt to answer the question “What makes us human?” Here, we are asking what it takes for a given being to be considered a person, i.e. to receive the same protections of the law as human people.

The sci-fi fan’s desire to grant personhood to all intelligences raises the question of just how intelligent a being has to be before it receives protection. I mean, dolphins are pretty smart, right? Why aren’t they people? And is it not conceivable that a smart dolphin or gorilla might actually be smarter than a developmentally disabled human? If the standard we use to decide that someone is a person is intelligence, there does not appear to be a good reason why the human should be considered a person and the dolphin not.

And that, right there, is not only a major problem of anthropology but a huge issue for the legal system. Courts really do not like engaging in case-by-case analyses, as in addition to being an enormous pain in the neck, it is also immensely expensive and time-consuming. Courts are much better at applying bright-line rules, i.e. figuring out how such a rule applies to particular sets of facts. Coming up with a bright-line rule for personhood means coming up with a definite bar above which you’re a person and below which you aren’t.

But even assuming we could come up with a rigorous, cross-species quantitative measurement of intelligence (which no one seriously suggests we can), we’re still going to have problems. As soon as we embark on the project of limiting personhood to a defined set of measurable criteria, it is inevitable that people will try to use those criteria to exclude people they don’t like. Dehumanization is a big enough problem as it is without providing a handbook for it. Also, what about children and the elderly? There are plenty of criteria which would exclude both of them if we aren’t careful. So a list of criteria, particularly if those criteria are ranges rather than binary decisions, has the potential to be quite problematic.

This would be true of any criterion, not just intelligence. Tool use? Chimps do it. Language? Koko the gorilla is claimed to have a vocabulary of about 2,000 words. Religion? There are plenty of bizarre animal behaviors that, if interpreted as having religious significance, make about as much sense as a lot of human religious rituals would if you didn’t understand their content. Civilization? Ants and termites are certainly organized, even if their purpose seems pretty undirected. Culture? Various non-human primates pass on non-genetically-determined behaviors (such as tool use) within their social groups.

As it turns out, the doctrine of the imago Dei is actually pretty useful here, because it does provide a bright-line rule: if it’s biologically human, it’s a person. End of story. Doesn’t matter how disfigured, disabled, or hated a person is, they’re still a person. Heck, if we did discover a genuinely intelligent non-human, theologians the world over would be staying up late trying to figure out how to work them into existing doctrinal systems. This is actually a pretty common trope in science fiction stories.

But since religion isn’t really something we can use to come up with these kinds of rules anymore–the First Amendment would seem to prohibit it even if there were broad cultural consensus, which there isn’t–we need to find something else.

III: A Complex Problem

Here, we close for the moment. We do not conclude anything yet, but we have set the stage for an attempt to come up with a legal definition of personhood which might include non-human intelligences but not dogs or radishes, without excluding the very young, very old, or developmentally disabled.

More to come. The next post will be about what existing law has to say on the subject, so save questions about that for next time!

34 responses to “Non-Human Intelligences I: Introduction”

  1. The basic rule of thumb I would use is that if it can understand (or at least demonstrate understanding of) the concept of human rights and can ask for them, it gets human rights, although this is a very case-by-case measure. I suppose to generalize it you could say that if a representative member of a species can understand the concept of human rights and ask for them, then all members of that species get human rights, with the usual exceptions carved out for children and others who may not be fully able to look out for themselves. This also assumes that these intelligences are capable of communicating with us in a way we understand.

    • That’s probably where we’re headed, but the analysis is a big part of what we do here. If a judge were to be presented with such a case he’d have to cite some legal authority for his decision. The next post will look at that authority.

    • I think the problem with that kind of approach is you are assuming that the non-human intelligence can be categorized as belonging to a “species.” This would be true for biological non-human intelligences, but not necessarily for non-biological ones. The problem being addressed in the post above is most likely to first arise in the context of artificial intelligences, whose “intelligence” may fall along a spectrum from automaton to smarter-than-any-human-ever. So, you need a test that can apply to each individual. You need to draw a line that will leave some artificial “intelligences” without any rights while granting rights to other intelligences.

      Personally, I would say we should use your proposed test for biological intelligences. Then, we should use something like the Turing test to decide which artificial intelligences qualify.

      • Alas it’s not even necessarily applicable in the biological case. Consider an animal that has been mutated, enhanced, empowered, or otherwise altered to be sapient. If the change is one that can’t be passed on genetically or if the animal is essentially unique, is that a representative member of its species? What if it’s still genetically identical to non-intelligent examples of the same species?

        And what, exactly, is a representative member? Consider a species that has a caste system like bees or ants. Suppose that the leader caste is apparently intelligent but the drones are not. Do the drones also get rights based on the intelligence of the leaders? What is the representative member? The leaders? The vastly more numerous drones? Some sort of hypothetical average between them? Tricky stuff.

      • @James

        So what if the being is unique or part of a species? Such a petition isn’t going to be vetted by the courts, and an organization can afford to keep track of these things.

        Any being that requests human rights gets them. If they are part of a species, their species does too. For hive minds just assume that they are all important. More precise definitions can be put together as the dialogue continues.

        Also, any other method (expanding the definition of human for example) is going to come up short once more than a few extra-terrestrial life forms show up in common society. If the law is any being capable of requesting human rights or stating that they have them, then the civilians know that they have such rights and don’t need to look them up in a database or memorize the legal definition.

        This means that for a while after contact conversations with non-human intelligences are going to start with a declaration of rights, but that is going to fade out after a few generations.

        The only catch is if such a being shows up before figuring this out, and gets killed. There’d need to be some laws against killing extra-terrestrials (not gonna use non-human here). Two phase laws: misdemeanor at first, but if a subsequent member of that species claims human rights that offense transfers to homicide (obv. once phase 1 is tried phase 2 cannot, but, eh). In return for the weirdness there the government stands between the extra-terrestrial’s government and its killer, so no alien criminal justice.

    • But what if the aliens, while physiologically having the capacity to understand and ask for human rights, are culturally incapable of understanding the concept? For example, the aliens might be anarchists who don’t get the whole concept of courts, codified laws, or enforcement of the same.

      Or the aliens might live in a Hobbesian “war of all against all” where the only thing stopping a person from killing you is your fist in your attacker’s face. Such aliens would not understand what human rights are at all and would be incapable of asking for them.

      • The first kind is no problem. They’ll understand a gun in the face, which can be provided by local law enforcement or individual citizens. At that point we can explain how we do things. We could likely live as neighbours, but it would be unlikely that there would be any meaningful integration.

        The second would be djur–or, being generous, varelse–going by Orson Scott Card’s hierarchy of exclusion*. We would have to defend ourselves rigorously against such a civilization, lest they destroy us. Rights would not even enter into it.

        http://en.wikipedia.org/wiki/Concepts_in_the_Ender's_Game_series

      • I agree with Artanis, if an alien species is truly incapable of comprehending or respecting the basic rights of others, then we would have much more important things to worry about than whether they deserve such rights. Important things like trying to prevent our species’ annihilation at their hands.

      • EsonLinji’s proposed rule was that human rights should be extended to aliens who can understand and ask for those rights, so I asked about aliens who are culturally incapable of understanding, which I don’t believe your replies are addressing. While perfectly sapient, the aliens can’t ask for human rights at all. According to the proposed rule, do they still deserve to have human rights?

        Anyway, why would it be necessary to enforce human rights on others? Simply because a society has no concept of human rights doesn’t mean that it’s fundamentally aggressive or violent.

        For instance, traditional hunter-gatherers can best be described as living in an anarchist political system, where they have no leaders per se. If one member of such a society were to wrong another, they would not speak of punishment in terms of laws or rights, but would instead discuss the matter in terms of custom, morality, or patron-client obligations (as an example). The whole thing would probably take lots of talking and punishment would be decided on a case-by-case basis.

        Second, you are both assuming that the aliens would be more powerful than humans, which would not necessarily be the case. What if they got here by falling through an inconvenient wormhole? Or what if we’re the aliens who are tromping around on their planet? The question of human rights would still apply.

      • @Sarapen I think I did address it. I touched on it somewhat in my last paragraph of my Jan 12 post. Basically, we should be prepared to protect other species from ourselves by default. The request thing is basically asking to be treated as we would treat our own. Also, I think it’s been implied mostly in this discussion, but here it is explicitly: this other species must be willing to reciprocate.

        I think my Jan 14 response to the anarchist example may have been a bit too rough, so here’s another take on it:
        Basically, our ‘rights’ are like this, though grossly simplified: life, liberty and property, plus some limits to governmental action upon them. The ‘anarchist’ civilization is going to understand those very well because where we have governments, laws and courts to protect those rights, they will likely do so themselves, either individually or in ad hoc groups. The trick is to defend your own rights long enough to get some mutual understanding going. Whether that’s by force or by discussion depends on the disposition of the civilization. In the end, I suspect they would understand that our government (assuming representative) corresponds to some group they form, and is merely larger and more permanent.

  2. This so opens the doors for you guys to “re-try” a case from Star Trek versus the American legal system:

    http://en.wikipedia.org/wiki/The_Measure_of_a_Man_%28Star_Trek:_The_Next_Generation%29

    Data > Ultron, Vision, Red Tornado, etc. If, for example, Apple builds a Data, and it becomes sentient, do they “own” him, under IP and the like? Can you legally own a sentient life form, or is that slavery? What if Genetics Megacorp builds a Data, but he is biological?

    • I’d argue “ownership = slavery” under those circumstances. Parentage would more likely be a group project in those circumstances, which could lead to its own paperwork complications if such a being were to properly apply for a Social Insurance/Social Security number (depending on whether it happened in Canada or the States).

      • Robots and other artificial creations (including biologically transformed animals) raise another question: It’s illegal to take a person and give him drugs to make him into a slave who wants to just obey your every command. If you create an AI, and the AI is recognized as a person–you can’t just force the robot to be a slave. Presumably you could not, for instance, forcibly reprogram the AI to want to serve.

        So assuming that robots are people, are we permitted to create a robot whose initial programming is to want to serve, even though we clearly cannot create a robot and forcibly reprogram it to want to serve later?

        Also, are we allowed to create a “stupid” AI that can talk but whose intelligence level is intentionally lower than we can create? Or would that be the equivalent of intentionally having a mentally handicapped child?

        But we are getting a little far afield of superheroes here. Superhero universes are not generic sci-fi. Most superhero nonhuman intelligences fall into two categories: species with the same social structures as humans and the same or greater intelligence, and one-off, nonreproducible AIs or uplifted creatures which are at least as intelligent as humans and which cannot be reprogrammed on a basic personality level (though sometimes they can be reprogrammed to make villains into heroes). Many of the issues we’re discussing would never come up.

      • We’ll get to robots and AIs, I promise!

  3. “Uplift” scenarios – if you’ll forgive my cribbing terminology from David Brin – could provide an interesting twist on any such tests and issues. But I’d think it still comes down to simply demonstrating the ability to understand, whether the being can genetically pass on the “uplift” effect to their offspring or not.

  4. I would argue that there is more to sentience than intelligence. The more important element is self-awareness–does the entity recognize itself as being separate and apart from others?

    • Well what about “collective intelligences” — if the collective recognizes itself as being separate and apart from other collectives, but the individuals making up that collective are not self-aware? E.g., would the Borg collective from Star Trek be a single “person” that spans billions or trillions of individuals, or would each individual be granted personhood?

    • “The more important element is self-awareness–does the entity recognize itself as being separate and apart from others?”

      I’m not sure what you mean by that. Most animals seem to have a concept of self (e.g., they can tell where they end and the rest of the world begins and they understand when something threatens them as opposed to another). If you mean something a bit more in-depth than that, several non-human species of animal can do things like recognize themselves in a mirror (e.g., dolphins, elephants, many primates, European magpies).

    • All living multicellular organisms do this; my cats can do this, they both know they are not each other, myself, or my fiancee. That said, they are not sapient, which is a different thing entirely. Several of the defining properties of human intellect include the ability to associate a current event with past or future events, to plan beyond immediate needs, and to create objects that have no intrinsic utilitarian value.

  5. Just for what it’s worth…the DC Universe fairly abounds with non-human intelligences, far beyond the cited Kryptonians and occasional super-gorilla. Just within the Green Lantern Corps, we have an intelligent bee, an intelligent smallpox virus, and even an intelligent and self-aware mathematical concept serving as interstellar policemen. It doesn’t get much more non-human than that.

  6. There is no III… (I- Imago Dei; II- Coming up; IV- Complex). Missing something?

    Rule Three: There is no rule three!

  7. Realizing that I’m far afield of any rational understanding of how the law works, would sanity be a reasonable analogy? If you can conceivably function in society, you’re accepted, but do have to verify your capacity if it’s called into question.
    In this case, if a robot asks to be free, then the court considers it a person until someone (presumably the owners wanting him back) challenges the “human” capacity of the robot. Then we pull some magical expert on personhood out of our hats and he flips his coin.
    Then again, it seems to me that there is already a body of law that might function, here: Corporations are granted a kind of limited personhood, as I understand it, so the concept isn’t entirely burdened by Imago Dei…though some might suggest that wasn’t a particularly good idea. Someone, somewhere must have tried to get a corporation to testify in court or claim that their company was murdered, no?

  8. TimothyAWiseman

    Indeed an interesting and complex topic. For anyone interested in it, it is worth looking at The Positronic Man by Asimov. The entire book addresses this topic in a narrative fictional form and much of the action takes place between lawyers, though it is filled with precisely the type of speculative laws that this blog generally eschews.

    Though I am interested to see how this blog plays out, I suspect the final answer is that there can be no bright line rule, and that it will have to be answered on something closer to a group-by-group basis.

    For instance, we may very well never want to recognize rights for AIs that are created by humans, and it is entirely possible that they would never have any desire to ask for them, that in fact they could be constructed so that it would be impossible for them to have such desires while still having a human level of intelligence along with emotions that are at least broadly analogous to human emotions.

    In dealing with aliens, they will likely have to be considered on a nearly species-by-species basis (or even group-by-group within a species, for situations such as the hives you have described where there are morphologically distinct groups within a species). Some may be clearly equivalent to humans in their morality and ability to reason, but others clearly not so, and many others may be a difficult borderline.

    The courts at least have the option of punting the question to the legislature. They would likely be fully justified, at least from a legal standpoint, in claiming that until the legislature recognizes a group as possessing human rights, it is to be treated as an animal.

    That of course does not resolve the underlying question, but it gives the courts a bright line rule: creatures are treated as animals until the legislature says differently. The legislature, on the other hand, has the ability and indeed the duty to invoke very different reasoning processes than does a judge, and they may be readily swayed by things as fickle as public opinion and intuition, with no need to attempt to justify their decision beyond that at all if they choose to go that route.

    • I’m troubled by a rule that treats the absence of rights as the default position. If you treat a species as animals until they’re proven sentient, what happens if they’re treated as fair game for lethal experimentation or hunting or eating, and then it turns out that they’ve been sapient beings all along and you’re guilty of torture, murder, and cannibalism?

      It seems more sensible that if there’s any uncertainty, you err on the side of caution and at least forbid killing or imprisoning members of a new species until you resolve the question of their sapience.

  9. Actually, similar issues are already being discussed in various parts of the world. According to the Facebook page ‘Equal Rights for Animals’:
    “On June 25, 2008, a committee of Spain’s national legislature became the first to vote for a resolution to extend limited rights to non-human primates. The parliamentary Environment Committee recommended giving chimpanzees, bonobos, gorillas, and orangutans the right not to be used in medical experiments or in circuses, and recommended making it illegal to kill apes, except in self-defense, based upon Peter Singer’s Great Ape Project (GAP)”. While not explicitly stated, this would effectively grant other primates non-human person status.

    There have also been calls by scientists to grant dolphins non-human person status under law. If people are already calling for this for terrestrial species, I feel sure that biological non-human intelligences of other origin would experience little opposition. The only problems I would anticipate would be in the case of intelligent machines, where people’s perceptions may still be that they are mere tools.

  11. Emanuele Vulcano

    Truly interesting topic indeed! And how this plays out may actually affect us in the real world sooner than we may believe (although most likely not in this or the next decade) — for example: would it be illegal to turn off a computer running an AI?

  12. I think that the uplift scenarios are particularly complicated, since a few minor modifications to dolphins or chimps and a lesson in civics could, under the guidelines proposed above, grant “human rights” to the entire species.

    Further, what if a unique individual appeared in a population by mutation? Since this species has the potential to spontaneously produce intelligences, does the species now get protection? What if such protection was just to prevent humans from limiting the species in some way so that it no longer produced intelligences?

    What about a hypothetical species that had an intelligent gender and an animalistic gender? Do the non-intelligent “breeders” get protections? Or can they be owned by the intelligent members of their species?

    Currently humans are covered by a range of rights that protect them when they are not capable of exerting them as individuals. Babies, severely handicapped and very elderly individuals, as humans, have both legal rights and ethical protections where people know that certain treatments are wrong. Would these ethical and legal rights, as a bundle, transfer to newly discovered intelligent species/groups, or would they go piecemeal? Surely not every single right of a living human being applies to a computer AI, but how will we find a doctrine that explains this to our ethics and laws?

  13. I don’t know if someone has already mentioned this (there are a lot of posts to dig through), but what about Venom’s symbiote? It has, on many occasions, demonstrated a level of intelligence and understanding that matches any human. In the older Marvel storylines it is a member of a larger species (though a very strange one); in the Ultimate storylines it is a government creation. Would that factor into its eligibility for human rights? Would such a being be considered property? Would the fact that it is part of another living being that is eligible for such rights factor in at all? I like that you guys look at these issues, it makes for very interesting reading, keep up the good work.

  14. Oliver Neukum

    It seems to me that this is a diplomatic question rather than legal. We’d grant the recognition to members of any group that is willing to return the favor.

  17. I’m all for the definition of “human” being altered to include artificial human-level intelligences, and animals that can demonstrate reasoning and awareness of self.
    Dolphins may appear intelligent, but in fact African Grey parrots (Alex being a good example) are substantially more intelligent than a two-year-old human child if suitably “educated”.
