Just another Reality-based bubble in the foam of the multiverse.

Tuesday, May 19, 2009

What's wrong with the Three Laws?

MSNBC via Cryptogon:

Smart missiles, rolling robots, and flying drones currently controlled by humans are being used on the battlefield more every day. But what happens when humans are taken out of the loop and robots are left to make decisions, like who to kill or what to bomb, on their own?

Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an “ethical governor,” a package of software and hardware that tells robots when and what to fire. His book on the subject, “Governing Lethal Behavior in Autonomous Robots,” comes out this month.

He argues that not only can robots be programmed to behave more ethically on the battlefield, but they may actually respond better than human soldiers.

“Ultimately these systems could have more information to make wiser decisions than a human could make,” said Arkin. “Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives.”


How about these:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
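
For what it's worth, the priority ordering in those three laws is the easy part; it fits in a few lines of Python. Here's a toy sketch, nothing like Arkin's actual governor, with every name below (Action, the violation flags) invented for illustration. It scores each candidate action by which laws it would break, highest law first, and picks the lexicographic minimum, so a lower law can never trump a higher one.

from dataclasses import dataclass

@dataclass
class Action:
    # All fields are hypothetical stand-ins for judgments a real
    # system would have to make about the world.
    description: str
    injures_human: bool     # First Law: action injures a human
    allows_harm: bool       # First Law: inaction lets a human come to harm
    disobeys_order: bool    # Second Law: action defies a human order
    destroys_self: bool     # Third Law: action sacrifices the robot

def law_violations(a: Action) -> tuple:
    # Score an action as (First, Second, Third) Law violations.
    # Python compares tuples lexicographically, which is exactly
    # Asimov's priority ordering: a Second Law violation only matters
    # between actions tied on the First Law, and so on down.
    return (a.injures_human or a.allows_harm,
            a.disobeys_order,
            a.destroys_self)

def choose(candidates: list[Action]) -> Action:
    # The "most lawful" action is the lexicographic minimum.
    return min(candidates, key=law_violations)

# Example: ordered to fire on a village, the lawful choice is refusal,
# even though refusal violates the Second Law.
fire = Action("fire missile", injures_human=True, allows_harm=False,
              disobeys_order=False, destroys_self=False)
refuse = Action("refuse order", injures_human=False, allows_harm=False,
                disobeys_order=True, destroys_self=False)
print(choose([fire, refuse]).description)   # -> "refuse order"

The hard part was never the ordering. It's deciding whether injures_human is true for a particular missile aimed at a particular village, which is exactly the judgment the press accounts below can't even agree on.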

U.S. drones attacked the Pakistani village of Mirali on Saturday. According to the American press, a pair of missiles from the unmanned aircraft killed “at least 25 militants.” In the local media, the dead were simply described as “29 tribesmen present there...”


Oh, that's right. Dr. Asimov never knew the first real robots would be built by the D.o.D.

Building robots more ethical than human beings isn't very hard, depending on which so-called humans you use for comparison.
