Just another Reality-based bubble in the foam of the multiverse.

Wednesday, February 18, 2009

Except for the Company 'Bots



Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.


Obviously, someone's been watching the right sci-fi.



...The stark warning – which includes discussion of a Terminator-style scenario in which robots turn on their human masters – is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.

The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers...

...Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned...

...A simple ethical code along the lines of the “Three Laws of Robotics” postulated in 1950 by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.

“We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”
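Asimov's laws are usually stated as a strict priority ordering, which makes Dr Lin's objection easy to see concretely. Here is a toy sketch (illustrative only; the function and flags are invented for this post, not taken from the Navy report) of the Three Laws as an ordered rule check:

```python
# Toy illustration of Asimov's Three Laws as a priority-ordered rule check.
# All names and parameters here are invented for illustration.

def permitted(action, harms_human, disobeys_order, endangers_self):
    """Return True if an action passes the Three Laws, checked in priority order."""
    if harms_human:        # First Law: a robot may not injure a human being
        return False
    if disobeys_order:     # Second Law: obey humans, unless that conflicts with the First
        return False
    if endangers_self:     # Third Law: protect its own existence, unless that
        return False       # conflicts with the First or Second
    return True

# A combat drone's core mission fails the very first check:
print(permitted("engage target", harms_human=True,
                disobeys_order=False, endangers_self=False))  # → False
```

A pure Three-Laws machine could never fire at all, which is exactly why the report argues a different "warrior code" would be needed.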




Samurai 'bots will not stand a chance against the Company's Ninja and Yakuza models. Neither will the rest of us. Sadly for them, that includes the Company, but evidence suggests those jokers are hardly people anyway. Putting robots in charge of the robber barons might significantly humanize them, in fact.

4 comments:

Cosa Nostradamus said...

Don't these guys get the Sci-Fi channel?

spocko said...

Terry Gross had on a guy talking about "robot ethics" for military robots. It was the first time I heard someone say just how crazy the "three laws" are when it comes to military robots.
"Of course they won't protect human life! They are designed to kill humans!"

It was interesting just how naive the three laws looked once you considered the people funding these machines.

The whole Skynet deal with a Terminator didn't seem so strange once you figure that the robots didn't rise out of some kind of helpful household servant robot, but out of a battlefield drone whose job it was to kill humans.

Listen to the show, I think you will like it.

http://www.npr.org/templates/story/story.php?storyId=99663723

kelley b. said...

If they want to avoid a Butlerian Jihad, they had pretty well better hold with Asimov's three laws.

If they don't, expect big trouble.

The whole point is military bots are intrinsically unsafe, especially the self-aware AIs that DARPA is so hot to build.