5 Mar, 2016 01:01

Robotopia or Robocalypse? Study warns against fully automated weapons

One of the top US experts on automated weapons systems is urging against their development, arguing that a human element will always be necessary to avoid catastrophic accidents, fatal errors and ethical issues.

Autonomous weapons “pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces. This could be because of hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors,” warned Paul Scharre, senior fellow at the Center for a New American Security (CNAS).

Scharre’s study, titled Autonomous Weapons and Operational Risk, examines the challenges of employing such weapons systems on the battlefields of tomorrow, since today’s militaries have yet to field robo-weapons in any significant numbers.


Scharre is a former Army Ranger and a member of the Council on Foreign Relations, and worked at the Pentagon between 2008 and 2013 as one of its leading theorists on unmanned and autonomous systems.

Even when working as intended, automated weapons systems lack the ability to step outside their instructions and apply common sense, as humans would. And that is assuming they have not been hacked and suborned by the enemy, Scharre warns.

“Autonomous systems will do precisely what they are programmed to do,” Scharre wrote, “and it is this quality that makes them both reliable and maddening, depending on whether what they were programmed to do was the right thing at that point in time.”

Human operators sometimes strike unintended targets, but they can also divert their weapons at the last second, the CNAS expert noted. Autonomous systems lack that last-second check, so given the kind of weapons they are designed to control, targeting errors could result in far more catastrophic consequences.

“The result could be fratricide, civilian casualties, or unintended escalation in a crisis,” Scharre wrote.

One example of such an escalation was the 1983 incident in which the Soviet Union’s early warning satellites erroneously reported the launch of five US intercontinental ballistic missiles. Lt. Colonel Stanislav Petrov correctly interpreted the alert as a computer error, refusing to pass the information on to headquarters and averting nuclear war.


Scharre also pointed to the cascading failures behind the 1979 accident at the Three Mile Island nuclear power plant as evidence that complex systems will inevitably encounter errors over a long enough time horizon.

“One of the major advantages of humans over automation is the ability of humans to adapt to unanticipated problems and arrive at novel solutions,” Scharre wrote.

Developing autonomous weapons systems, even ones guided by artificial intelligence, is more likely to result in a “robocalypse” than a robotopia, he concluded, urging instead the development of semi-autonomous weapons in which humans remain involved as essential operators, moral agents and the ultimate fail-safe.
