Ethics for Kill-Bots

John S. Canning, an engineer at the Naval Surface Warfare Center, proposes (PDF) that "machines target other machines and men target men" be incorporated into the Law of Armed Conflict (LOAC). Some wags at Slashdot are likening this suggestion to Asimov's three laws of robotics.

My take? Like all conventions, it is only as good as it is implementable, and expedience usually chases out good intent, and law. There are serious incentives to defect from such a treaty, given the potential killing capacity of autonomous and guided machinery. Such rules of engagement for robots are, nonetheless, something we need to consider and try to apply.

What could a technologically outclassed enemy do in the presence of kill-bots and other overwhelming force? What they already are doing: terrorism. Often, there are low-tech workarounds for high-tech problems (e.g. the US develops expensive, sophisticated spy satellites, so the enemy tarps encampments with innocuous images; the military creates a ray gun that "makes people feel as if they are about to catch fire," so protesters may wear a portable Faraday cage, a.k.a. an aluminum jumpsuit). Worse, as fallible as AI is, there are some nasty ways to game it, such as projecting images of weapons onto innocents and other targets.

Kill-bots and terrorists: it is a bleak vision.