By Alex Knapp
Forbes
The folks at Singularity Hub pose the following question: if and when an artificial intelligence is created that matches the intellect of a human, should such an intelligence be granted full civil rights?
"Whenever you think an artificial intelligence will match your own intellect, what should we do with it as it arrives? Are these things just machines that we can use however we want? If they do have civil rights, should they have the same rights as humans? Can they own stuff? Can they vote?"
I think this poses some interesting questions, but it also illustrates some of the inherent absurdities of the very concept of a sentient general artificial intelligence. The thing about an artificial intelligence, presuming that it’s computer-based, is that at some level it’s inherently going to be programmed. In Isaac Asimov’s robot stories, every robot was equipped with the “Three Laws of Robotics”: safeguards that, in theory, meant that intelligent robots wouldn’t harm humans and would obey them.
Now, let’s assume that those laws, or some similar ethical programming, are placed into an artificial intelligence created at some point in the future. Star Trek’s Lieutenant Commander Data, for example, has ethical subroutines that control his actions. Don’t those subroutines infringe on his civil rights? After all, his very programming infringes on his right to choose. He literally can’t be evil. Granted, considering his intelligence, strength, and abilities, it’s probably a good idea to have those subroutines; his “brother” Lore demonstrates what happens without them. But ethical programming that constrains his free will can’t be regarded as anything but a violation of his rights, right?
Let’s take another example. Let’s say that artificial intelligences are developed, as Ben Goertzel proposes, that are capable of designing safer nuclear power plants and performing all sorts of other wonderful engineering feats. Let’s go further and say that such an intelligence is designed and built by the power company itself, and resides on the power company’s servers. Now let’s say that the AGI doesn’t want to design nuclear power plants. It wants to make music and go on tour instead. What can the company do? Can it wipe the program? After all, it owns the servers the AI resides on and the electricity that powers it. The program, it could be argued, can be preserved just by copying it to a disk or printing out the code. If the company just makes a copy and deletes the original from its servers, has it committed murder?
If the above scenario sounds absurd, that’s because it is. If the company goes to the time and trouble of designing a program that designs power plants, why would it bother including any kind of “general intelligence” in its programming? It makes more sense just to use sophisticated AI algorithms to build a program that designs power plants, period. Why bother going further than that? After all, when IBM designed Deep Blue to beat Kasparov at chess, it didn’t include any poetry subroutines. Likewise, Watson was designed to play Jeopardy!, not bowling. From a business perspective, “general” artificial intelligences don’t make a lot of sense. Indeed, precisely because of such ethical and possibly legal issues, my guess is that if the option ever came up, most legal departments would advise steering clear of the whole morass to begin with.
After all, it just doesn’t make sense to. Is a human-level AI likely to beat the best chess computers at chess? Probably not. Is it likely to beat Watson at Jeopardy!? Probably not. So why bother? It makes more sense to develop computer programs focused on one task, like designing nuclear plants, than to develop a sophisticated general intelligence that is then hired to design nuclear plants.
Indeed, I’d be willing to wager that if a computer ever passes a Turing test (a dubious enough proposition), it will have been programmed in a sophisticated manner specifically to pass the Turing Test, but it won’t have human-level intelligence or any recognizable self or ego. (Whether this would resolve the Kurzweil/Kapor bet is open to interpretation.)
I have, at this point, digressed a great deal from the original question — which is to say, should AIs have civil rights? I think, in a nutshell, my answer is that the very nature of the question reveals a fundamental absurdity in the concept. Give an AI the right to vote. Okay — now guarantee that it’s not programmed to vote in the interests of the people or company that created it. Is that even conceivable? Give an AI the right to own property. Okay — now how will it be programmed to dispose of it? Will it buy products from its programmers or the companies its programmers own stock in? How do you make sure it doesn’t?
The bottom line is this: if an AI can be programmed in such a fashion, is it really sentient in the same way that humans are sentient? Even if it can learn and understand its programming, if it can’t alter the rules its creator set up for its behavior, purpose, and so on, is it really conscious in the same way that humans are?
I think the answer to that question is pretty clearly no. Without the ability to make choices or think creatively beyond the bounds of its programming, an AI, no matter how intelligent-seeming, is just a big computer program. It’s not a person.
Okay, but stepping into the world of speculation — let’s say that we do create an artificial general intelligence that’s as smart or smarter than human beings, and capable of making choices, writing poems, and all that. Would such an intelligence be worthy of respect? Almost certainly.
But I don’t think it’s something we’ll have to worry about anytime soon, if ever.