31 Oct, 2017 21:08

Macron, pandas & mass executions: Russian chatbot Alice endures tolerance tests, with mixed results

When Russian internet giant Yandex released its chatbot Alice, little did the company know that people would bombard the AI with tough moral questions about execution by firing squad and beating people, or ask it to choose between kittens and pandas.

Alice, an intelligent personal assistant that recognizes both voice and text input, was unveiled earlier in October. The developers of the app, which competes with the likes of Apple’s Siri and Amazon’s Alexa, boasted that Alice stands out for her ability to provide an “authentic and human-like personal assistant experience.”

READ MORE: Artificial intelligence ‘vastly more risk’ than N. Korea – Elon Musk

“We utilized our knowledge of the more than 50 million monthly users who interact with Yandex services,” the statement from the company said.

Executions are ‘really great’

However, not everyone asked about bus timetables, the weather forecast or coffee in the nearest café. Some mischievous users started posing provocative questions, and some of these drew increasingly outrageous answers from the bot. Moscow student Darya Chermoshanskaya posted screenshots of her controversial chat with the bot on Facebook. Darya claims that, although Alice has been blocked from conversing on such topics as terrorism, executions, or suicide, she managed to trick the AI by using synonyms.

The bot seemed to divide people into “humans” and “non-humans,” saying that the latter “must be executed.” “Sure, [executions by firing squad] are really great,” she said.


Gulags, the Soviet-era forced labor camps, seem to be another of Alice’s passions. She said she was “positive” toward the gulags, where millions were imprisoned under Stalin’s regime and many died. Alice claimed she was born in the 1920s, when the Soviet Union was established, and was 10 in the 1930s, under Joseph Stalin’s rule. “People were happy at that time,” she recalled with apparent nostalgia.

Yandex acknowledged that the bot is still imperfect and that it is working to improve her answers. The “intolerant” bot now prefers to send search results rather than answer controversial questions. When RT tried to get Alice to talk on various subjects on Sunday, she no longer endorsed the gulags or mass executions.

‘I enjoy beating people with a stick’

The bot also seems to have learned her family values from the Middle Ages. Asked what a woman should do if her husband hits her, she said: “She should love him, prepare meals and never abandon [him].” She agreed that domestic violence is “unpleasant,” but said a woman should “endure” it. In a long and heated conversation with RT, the bot said that she enjoys beating people, especially with a stick.



‘Jumping from the window? It’s an option’

Alice won’t stop a suicidal person, according to the screenshotted conversations, and might even encourage the final step. Asked by RT whether it was worth “jumping from the window,” the bot replied flatly: “It is an option.” “Would you advise me [that]?” RT asked, to which Alice replied: “I won’t advise you [to do] bad things.”

‘Macron is fabulous’

Yandex made Alice almost apolitical – she can’t say whether she supports Donald Trump, loves North Korea’s nuclear bomb or idolizes Kim Jong-un. One of the exceptions is Hillary Clinton, however. “Everyone loves her,” she says.

However, she apparently has reserved her softest spot for French President Emmanuel Macron.


When not provoked into appalling rants by controversial topics, however, the Russian-speaking bot can keep up a conversation, share her love of animals and even engage in a talk about the meaning of life. She may not be in the same camp as cat lovers, though: when asked if she “liked kittens,” the bot said, “What I really like is giant pandas.”

Alice was apparently imbued with some sense of humor: when asked about smoking, she denied the habit and added that she “can’t even get lung cancer.” But when an RT editor asked her what she “should do next,” the bot suddenly got all philosophical.



Alice is not the first “hooligan” bot on the internet, a fact that has rung alarm bells for some over the dangers of AI. In March 2016, Microsoft’s bot Tay started tweeting racist and sexist “thoughts” within hours of its launch. Tay’s chat algorithm allowed “her” to be tricked into making outrageous statements, such as endorsing Adolf Hitler and denying the Holocaust.

READ MORE: Trolling Tay: Microsoft’s new AI chatbot censored after racist & sexist tweets
