8 Dec, 2017 05:01

Human-AI merger: The pinnacle or demise of mankind? (DEBATE)

With machine learning algorithms evolving at an incredibly fast pace, concerns are mounting over whether artificial intelligence (AI) is the logical continuation of human history or its demise. RT talked to three experts in the field about the benefits and dangers of AI.

In the latest twist of AI evolution, Google’s AutoML (automated machine learning) project has ‘given birth’ to its own AI-designed program, NASNet, which allegedly outperforms previous human-made algorithms for identifying objects and images. This has reignited somewhat justified fears that the technology could be evolving faster than humans can keep up with, and will eventually overtake us.

“Part of what we need to understand about artificial intelligence is this is just the beginning, this is just the tip of the iceberg,” futurist and philosopher Gray Scott told RT. “What’s going to happen in the near future is you’re going to have predictive analytics, which will allow AI to predict your desires before you even know what you want.”

Mark Gubrud, a professor in the Peace, War and Defense curriculum at the University of North Carolina, however, called for a more cautious approach.

“Just the thought of machines telling me what I want makes me uncomfortable, but we [already] see that happening in our lives,” Gubrud said. “These systems are constantly intervening in ways we didn't expect or didn't ask for. The tech companies are constantly upgrading things in ways we didn't necessarily ask for, just when we learned to use the old version, et cetera.”

“We are at the beginning of a process which is fraught with danger… Of course there are upsides to this new technology, that’s why we’re pursuing them, but there are also lots of problems that are being created.”

AI-based algorithms will remain flawed until they learn to truly understand the information they are processing, believes Mark Bishop, Director at the Center for Intelligent Data, referencing Facebook’s recently developed algorithm, which can identify users at risk of suicide based on their posts and direct them towards help.

“I think that’s a great move in the right direction, but balancing that praise we have to be aware of what happened to Microsoft with their Tay chatbot, which was fielded live in the UK and had to be taken down because it turned into a racist, homophobic chatterbox,” Bishop told RT. “So my concern with the Facebook algorithms is that, because in my opinion these computers don’t understand the meaning of the bits they manipulate, it will be very difficult for them to avoid being gamed in the same way.”

But one way or another, and despite the concerns, humanity will have to come to terms with AI, believes Scott.

“People need to be citizen scientists, and they need to be technologists and futurists. The future is technological, so we need to start understanding it. It shouldn’t be left to a couple of people in industry to dictate how this goes. We all have to be a part of this process moving forward in the future with AI. AI’s something we’re going to have to live with.”

“For the very first time on this planet, we’re all going to have an enormous amount of power. Not just a few people, every single one of us is going to have enormous power through the power of AI, and we have to decide, as individuals and societies, how we want to process that ability.”

Scott went on to suggest that the growth of AI is the logical continuation of human history and evolution. “You can blame the technology, but it’s not the technology, it’s us, and this idea that we are separate from the technology,” he said.

“Technology is merging within this system, the cosmic system, and it’s emerging from us. It is a continuum. To separate technology and humanity is the most dangerous thing to do because when self-aware machines finally arrive, is that going to be the new enemy if we continue this narrative?”

Gubrud, however, disagreed with Scott’s welcoming take on the AI singularity, insisting that humanity’s survival depends on its ability to preserve its own distinction.

“What we need to be very clear about going forward is the distinction between humans and technology. People are talking about implanting technology in the body, and saying the technology itself is human or is an accessory to humanity. This is very dangerous. I think we need to be very clear that we are what is human, and we are the definition of humanity.”
