This Week in Law

Ryan and I were on This Week in Law today to discuss a variety of current legal topics. If you missed the live show, you can watch the video online. Thanks to Evan Brown for a great show!

2 responses to “This Week in Law”

  1. When I was teaching in the Philippines, my colleagues and I researched possible applications of neural networks. Given that programs that use neural networks (example: chess programs) actually demonstrate a capacity to learn, wouldn’t this be a line that could be drawn between them and traditional programs that use IF… THEN logic? The problem is, of course, that the programs are not confined to a single computer. (I don’t even think that we have physical neural networks: I think they are all programs.) They could be copied and placed on another computer. Of course, for legal purposes, if the program was sophisticated enough to be considered sentient, the obvious thing to do if you copy the program onto another computer is to not consider it a separate individual. There may be some leeway if the program takes up more than 500 GB of memory or if it required some sort of interface with the outside world other than a screen and a keyboard (i.e., if the program accepted audio and visual input and if the program operated a robot that could speak and move around). In such a case, a copy of the program stored on a laptop might not be considered a person, but the robot might be. It would probably be a good idea to wait 18 years before the first of these robots could vote in elections.

    Anyway, I thought it was funny when the interviewer thought you guys were avoiding the question when you were in fact building your case: it isn’t immediately obvious that a robot would need to have rights before you can charge it with a crime, but, yeah, if you actually find a robot guilty, then what happens? Do you put the robot in jail? I guess you could turn the robot off, which isn’t much of a punishment, not unless you destroy the robot altogether. Would a robot fear death? Is this a deterrent for the robot? I suppose so: it would be natural to program a robot with an instinct for self-preservation so that it would not hurt itself, in which case it would be less likely to commit a crime knowing it could be punished. Of course, it would also be more likely to resist arrest knowing it could be punished, so that could just make things worse. We could program robots to be obedient, but if they have a neural net then they could simply learn NOT to be obedient. (If a child told a robot to jump off a roof and the robot survived the fall, then the robot could learn not to obey human commands.)

    Anyway, thanks for the side mention about human-animal hybrids. I do think this is an area that could be explored more, however. If human-animal hybrids are produced, then there may have to be some sort of standard other than our own intuition that tells us which of them would be considered humans with animal DNA and which of them would be considered animals with human DNA.

    Martin

  2. Oh! I just got an idea! Let’s say that a robot had a memory of 500 GB. It wouldn’t be so hard to back up that memory: if the robot committed a crime, we could do the same thing to that robot that I do with my laptop when it gets a nasty virus, that is to say, I have the option of installing an earlier version of the laptop memory that didn’t have the problem. If there are hundreds of robots not killing people and one robot that is, then it might not be the original programming that is at fault, so just installing “the last saved version that worked properly” might be an option short of destroying the robot. The robot would then be “fixed” and would not “fear” getting captured. Of course, it might also not “fear” committing a crime in the first place. I suppose we might have to take that route: a robot that did harm to humans was “broken” and would need to be “fixed,” and would be no more responsible for its actions than someone who is mentally ill. The fact that you could just replace the robot’s programming with a program that works better has big implications, I think, over whether or not the robot can be considered responsible in the first place.
