24 Oct, 2019 15:38

Mock UK shooting staged by Met Police to help train Facebook AI to detect live streams of terror attacks


Police have begun training exercises as part of efforts to design technology capable of detecting and automatically removing live streams of mass shootings from Facebook as they begin.

The first in a series of exercises was carried out in Kent on Wednesday, where a counterterrorism officer played the role of the shooter. The ‘shooting’ was filmed from the officer’s point of view and captured on a body camera. The footage was then sent to Facebook and will be used to train artificial intelligence systems.

In the video, which mimics a mass shooting, screams can be heard as the ‘gunman’ walks around the building searching for kids. The footage also captures police carrying shields and weapons, and searching for the fake attacker before catching him.


Facebook contacted the police in the aftermath of the Christchurch mosque massacre in March, in which a gunman motivated by religious hate killed 51 people. Footage of the attack was shared widely online across various platforms in the days that followed.

In a statement, Commander Richard Smith, head of the Met’s Counter Terrorism Command, said live-streaming terror attacks is an “incredibly distressing method of spreading toxic propaganda.”

Stopping such material from being streamed and shared in the aftermath of attacks could potentially “prevent the radicalisation of some vulnerable adults and children,” he said. The technology could also help Facebook notify police of an attack early.

Police said the footage will also be provided to the UK Home Office, which will share it with other tech companies that want to develop similar warning systems.

Facebook has removed 26 million pieces of content from global terrorist groups in the last two years alone, according to the company’s counter-terrorism policy manager, Erin Saltman. The footage from the collaboration with the Met Police will be crucial in the fight to keep terrorist content off Facebook and other platforms, she said.


The recent synagogue shooting in Halle, Germany, was watched live by five people on the Amazon-owned streaming service Twitch, and the video was then viewed by 2,200 people in the 30 minutes before it was flagged and removed.

In 2017, three men were arrested after live-streaming the gang rape of a Swedish woman on Facebook. Also in 2017, one of four perpetrators in Chicago streamed the physical abuse of a mentally disabled man.

Facebook drew criticism in New Zealand following the Christchurch attack for its lack of a swift response. The country’s privacy commissioner lambasted the tech giant in an email, saying its silence was “an insult to our grief.” Facebook said 200 people watched the brutal Christchurch video live, while 4,000 had managed to view it before the company removed it.

