13 Nov, 2017 11:59

Rise of the machines: Is it time to take killer robots seriously? (VIDEOS, POLL)

As a United Nations weapons control group meets to discuss the prospect of killer robots, RT.com looks at how new technology is bringing autonomous weaponry out of the movies and into real life.

The superiority of nations has always hinged on the output of their finest minds. Nuclear fission created the atomic bombs dropped on Hiroshima and Nagasaki. Liquid propulsion produced the long-range missiles that world powers today spend millions trying to deflect.

Now, with the rise of AI and fully autonomous weapons, the leading lights of the science world are discovering new depths to mankind’s fascination with inflicting cruel and unusual punishment on its enemies. However, there are others trying to keep a tight rein on this potentially deadly technology.

READ MORE: Killer robot flight: Video of UK’s autonomous drone released

The debate over whether tech will enhance mankind or completely eradicate it is well covered. For every bot ready to relieve overburdened medical staff, there is also something like the Super Aegis II, a remote-controlled weapon system which can track and take down a human target in total darkness.

‘AI could destroy us’

At the vanguard of a push for ethical AI development is Stephen Hawking. The physicist was one of 116 experts, among them SpaceX founder Elon Musk and DeepMind’s Mustafa Suleyman, to warn against the “Pandora’s box” of autonomous weapons.

Hawking opened a keynote speech at this year’s Web Summit in Lisbon with yet another stark warning: we must be prepared for bad AI. “We cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it,” Hawking told delegates in the Portuguese capital.

“Unless we learn how to prepare for and avoid the potential risks, AI could be the worst event in the history of our civilization. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. AI could develop a will of its own – a will that is in conflict with our own – and which could destroy us,” he said. 

Famous innovators have brought talk of killer robots out of the movie theatres and into the mainstream. But what’s being done behind the scenes by governments to stop the advent of Terminator-like machines, or weapons that are perhaps less evocative but equally terrifying?

READ MORE: Russia developing kamikaze drone that hits target & detonates - report

Machines that can kill without human control are a topic for discussion at a UN meeting in Geneva on Monday. The talks are part of the Convention on Certain Conventional Weapons (CCW) – an arm of the UN which restricts the use of weapons deemed to be indiscriminate or “excessively injurious”.

The agenda will see academics and representatives from more than 80 countries discuss the prospect of passing control of these weapons from humans to computers. In the past, the UN authority placed restrictions on landmines and, in 1995, banned the use of laser weapons designed to cause permanent blindness.

Stop Killer Robots

Mary Wareham, an activist with Human Rights Watch, is part of the Campaign to Stop Killer Robots, a group seeking a preemptive ban on kinetic fully autonomous weapons. She explained to RT.com that the secrecy surrounding state weaponry research means an autonomous arms race is a distinct possibility, with nations like the US, UK, Russia and Israel already operating semi-autonomous armaments and drones.

“We’re seeing talk of arms races,” she said. “That’s the natural extension of what happens. At the moment a handful of states are developing these kinds of weapons systems. Not the fully autonomous weapons, they are at pains to say, but it’s not going to stay like that for long.”

Wareham warned that without new laws this kind of war machinery will likely lead humanity down a dark path.

“[Fully-autonomous weapons] will proliferate. They will end up in the hands of abusive governments and regimes. They will end up being used by non-state armed groups,” she added.

A UN ‘Food for Thought’ paper released in September reveals the topics up for discussion at the Geneva meeting, including establishing a definition of AI and who, or what, is legally accountable for robots.

After watching three years of informal discussions at the UN, Wareham believes it is time for countries to finally act: “It’s unclear what the objectives of the talks are at the moment, besides looking at questions raised by emerging technologies. They’ve been looking at these questions for the last three years,” she said.

“We pin our hopes on the precedent provided by the Blinding Lasers Protocol IV back in 1995. But that was more than 20 years ago. It’s been more than two decades since they’ve managed to produce a useful protocol at the Convention on Conventional Weapons.”

For Wareham, the biggest danger to the public is complacency in thinking that their governments are dealing with the issue when they are not. “Yes they are talking about it but they are not doing anything. We see the technology advancing rapidly now and we see diplomacy at a snail’s pace,” she said.

RT.com has contacted the UN CCW for comment.

By Luke Holohan, RT
