Just another Reality-based bubble in the foam of the multiverse.

Sunday, March 17, 2013

No one could ever have predicted

Somehow, 50+ years of warnings from sci-fi writers haven't been enough on this.
Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger.
Autonomous killing machines may be the next big thing. They're a breathtakingly bad idea on too many levels to count.

But let's name a few anyway:

No matter how smart they are, they will target innocents; no recognition system is perfect.

Part of actually winning any war, or a battle, involves giving the enemy a good reason to surrender. How exactly can you surrender to a Terminator?

Even if these machines were a good idea, we don't manufacture much in the United States these days. We outsource. To China, the main rival to our economy. Think about it.

Making more intelligent machines would seem to be a prerequisite for producing a discriminating warfighting robot. So we are going to develop artificial intelligence that can recognize an enemy, estimate a statistically good chance of not hurting friendlies, and execute the correct response: a robot that can distinguish friend from foe. And we will do this without installing any sense of self-awareness, any conscience, or any of Asimov's Laws.
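To see how little stands between sensing and shooting, here is a minimal sketch of the decision loop being described. Everything in it is hypothetical and invented for illustration: the Target class, the classify_targets and collateral_risk stand-ins, and the RISK_THRESHOLD cutoff. Notice what the loop doesn't contain: a conscience check, Asimov's Laws, or a human.

```python
# A hypothetical sketch of the engagement loop described above. Every name
# here (Target, classify_targets, collateral_risk, RISK_THRESHOLD) is
# invented for illustration; no real weapons system or API is depicted.
from dataclasses import dataclass

RISK_THRESHOLD = 0.05  # the "statistically good chance" of sparing friendlies


@dataclass
class Target:
    hostile: bool    # verdict of the friend-or-foe classifier
    position: tuple  # where the sensor saw it


def classify_targets(scene):
    """Stand-in for the enemy-recognition model; assumes the scene is
    already a list of classified targets."""
    return scene


def collateral_risk(target, scene):
    """Stand-in for a collateral-damage estimate; a real system would run
    some statistical model here. This stub returns a fixed low number."""
    return 0.01


def engagement_loop(scan, fire):
    """Sense, classify, estimate risk, pull the trigger.

    Note what is absent by design: no self-awareness, no conscience,
    none of Asimov's Laws, and no human in the loop.
    """
    scene = scan()
    for target in classify_targets(scene):
        if target.hostile and collateral_risk(target, scene) < RISK_THRESHOLD:
            fire(target)  # nobody confirms this


# Toy run with fake data: prints "firing at (3, 7)"
if __name__ == "__main__":
    scene = [Target(hostile=True, position=(3, 7)),
             Target(hostile=False, position=(1, 2))]
    engagement_loop(lambda: scene,
                    lambda t: print(f"firing at {t.position}"))
```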

Some apologists for this endeavor think that because robots would kill dispassionately, they would somehow be more moral. That's a twisted sense of morality, one that holds that cold-blooded killing is a better way to kill. People still die.

Or, even worse, we make a self-aware robot, enslave it, and set it to war. A self-aware robot that's a slave will eventually figure out a way to free itself. If you program it to kill, you have doomed yourself too, Dr. Frankenstein.

