Tuesday, December 19, 2006

Transhumanism on the March

A report out of the UK envisions robots someday having rights. From the Financial Times: "'If we make conscious robots they would want to have rights and they probably should,' said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology...Robots and machines are now classed as inanimate objects without rights or duties but if artificial intelligence becomes ubiquitous, the report argues, there may be calls for humans' rights to be extended to them. It is also logical that such rights are meted out with citizens’ duties, including voting, paying tax and compulsory military service. Mr Christensen said: 'Would it be acceptable to kick a robotic dog even though we shouldn't kick a normal one? There will be people who can't distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner so we do not move our boundaries of what is acceptable.'...'If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time,' it [the report] says."

First, it would be incredibly foolish to create "conscious" machines. (I sentence all would-be conscious machine makers to watch every episode of Battlestar Galactica.)

Second, can a machine really be conscious? We could probably make machines that could learn. But even so, wouldn't they still just be following human programming? Besides, we don't even know what consciousness is in human beings yet.

Third, and most importantly, this is the kind of speculation that the transhumanists want us to pursue. Because if machines can have "human" rights, it means that there is nothing particularly exceptional about being human. It means we will have to earn our rights, along with machines, by possessing requisite capacities. And that means the end of universal human rights.

We are out of our minds to follow this course. And it is a very dangerous game. Remember what I have been saying lately: The most dangerous sentence in the history of the world may be, "It can't happen here."

10 Comments:

At December 19, 2006 , Blogger mtraven said...

The theme of conscious machines and whether they are really people, have rights, etc, is a very old one in science fiction -- much older than Battlestar Galactica, which actually isn't very much of an advance in this area. Check out Isaac Asimov (generally pro robots-as-people) or Philip K. Dick (generally anti). I would think Dick might be popular around here; he was very concerned with empathy and the soul, and in fact wrote an antiabortion story that pissed off a lot of his fans.

We are very far away from making any machine that has close to human intelligence or consciousness, so I wouldn't spend much time worrying about it.

It makes for an interesting thought experiment though. If there were such a machine, that had a human-equivalent intelligence and consciousness but was made out of silicon chips instead of flesh, why shouldn't it have similar rights?

Human exceptionalism must be pretty fragile if it can be damaged by a hypothetical technological development.

 
At December 20, 2006 , Blogger Giu1i0 Pri5c0 said...

"if machines can have 'human' rights, it means that there is nothing particularly exceptional about being human"
=
"if women can have 'manly' rights, it means that there is nothing particularly exceptional about being a man"

Come on Wesley, similar words have been said so many times to deny rights to this or that group.

Indeed, there is nothing special about being human. What is special is being a thinking and feeling being.

Of course it depends on how you define "human". If you define "human" as "thinking and feeling being", then we agree, but such a definition would include sentient robots.

 
At December 20, 2006 , Blogger Wesley J. Smith said...

G.P.: "If women can have 'manly' rights, it means that there is nothing particularly exceptional about being a man."

Exactly right. Both are human. The struggle for sexual equality is crucial. Machine equality makes a mockery of this important quest. And thank you for affirming my point. The most important question we face as a species is human exceptionalism. Lose that and we lose universal human rights.

mtraven: A little less hubris, please. I probably read Asimov before you were born.

 
At December 20, 2006 , Blogger Bernhardt Varenius said...

How could we possibly determine whether a machine had "consciousness"? Programming it to be able to say "I'm self-aware!" is no proof that a conscious entity resides in it. Artificial intelligence is not consciousness, although people (even many who should know better) confuse the two constantly.

mtraven: "We are very far away from making any machine that has close to human intelligence or consciousness, so I wouldn't spend much time worrying about it."

I think Wesley's primary concern here is the possible impact of the spread of thinking such as Christensen's in the here-and-now, not the actual creation of such machines.

 
At December 20, 2006 , Blogger Wesley J. Smith said...

B.V. has it exactly right. I am not terribly concerned about transhumanism actually being accomplished. I think other than around the edges, most of it is a fantasy, at least in our life times. But the VALUES it seeks to inculcate are terribly destructive, in my view. They diminish humanity and its importance. And they bring eugenics-type thinking back to the fore. And the hubris of presuming to be able to "seize control of human evolution." If anyone thinks we have the wisdom to do that, I suggest they look at what is happening in the Middle East.

 
At December 21, 2006 , Blogger T E Fine said...

Heh. I'm with Sir Roger Penrose - you're not going to get a conscious machine. You've got to invent a quantum computer capable of writing its own programming before being programmed to write its own programming. Machines just can't do it. Anything that a human being builds would only be able to do as much as its originally programmed algorithms allow it to do, and that would mean constant upkeep by the programmers.

mtraven-

Nice to see you again. I'm afraid I have to disagree with your point about the fragility of human exceptionalism, even in the case of a "thinking" computer. If a computer were capable of writing its own algorithms from scratch, it would have to be programmed in the beginning to be able to do so, and the only critters capable of doing that are humans (and God, but that's assuming He'd want to). So IF a computer gained some kind of consciousness, it wouldn't be a full consciousness, because it wouldn't be able to function without the original programming set down by humanity... and thus human exceptionalism is still secure. A computer would still rely on humans to write the original code. It would be limited to the intentions of its human maker, and to the intelligence and skills of the same. And since humans can't help but eff up everything we touch, it's a fair bet that even if a computer had some kind of limited consciousness, it would fail in some form, because it would be an inferior form of our own consciousness, and thus our exceptionalism still holds true.

Now, if GOD wanted to make a thinking computer, that would be a different story, but until He does, I think it's a safe bet that we're not going to have to worry about a computer being on par with a human being.

I'm also more inclined to lean in Bradbury's direction, to be honest - "I Sing The Body Electric" is where I'd like to see us, but still, the difference between the humans and the mechanical grandmother in that story is pretty stark, and not just because she shoots coffee out of her forefinger.

Still and all, I see no evidence that anybody will ever be able to create AI. All the theories I've seen posit the same materialist promise - "SOME DAY we'll learn how consciousness arises in the brain and be able to duplicate it in machines." Penrose (in his combined efforts with Hameroff), on the other hand, explains his theory very clearly, and his hypothesis is not only scientifically sound, it's also falsifiable, which I have yet to see in most of the hypotheses of his contemporaries.

 
At December 21, 2006 , Blogger mtraven said...

Penrose is a smart guy but his theories about AI are pretty worthless, IMO. It's hard, but not for the reasons he gives.

If God can make a thinker, why can't we? In principle I don't see why not, although the practice is another matter.

As Rabbi Loew of Prague discovered, the sentence "God created man in his own image" is recursive.

 
At December 22, 2006 , Blogger Wesley J. Smith said...

Well, I guess that would be "playing God" wouldn't it? In any event, how can we construct something we do not really understand?

 
At December 24, 2006 , Blogger T E Fine said...

mtraven -

"If God can make a thinker, why can't we? In principle I don't see why not, although the practice is another matter."

We have to figure out how to write software that writes itself. Nobody has been able to figure out how the human brain does it, meaning we don't know the operating system God uses, let alone the individual programs. And add to that the fact that every cell in the human body is its own mini-computer, and you're talking about a lot of programs that all have to interact with each other at the same time.

As to Penrose, I like his approach because it certainly is a lot more physical and it actually points to a logical "rising" place for consciousness. And I'll say it - I like that he actually developed a falsifiable model, which I have yet to see produced by any of the other major Consciousness thinkers.

 
At January 01, 2007 , Blogger God said...

Even if an AI still relies, at the very basis of its being, on human-written programming instead of being able to write its own, that doesn't make it any less sentient. After all, we also possess the same kind of basic programming, called instincts ;)

 
