20 Feb, 2014 17:41

Lie detector to weed out truth from social media

Five European universities are working on a social media lie detector in an attempt to verify online rumors. The technology, developed in the wake of the London riots, is set to help not only journalists and the private sector, but also governments.

Researchers, led by Sheffield University in England, are cooperating on the system, which could automatically ascertain whether a rumor can be verified and whether it originates from a reliable source. It will attempt to filter reliable factual information from social media sites like Twitter and Facebook.

The project, called PHEME, is being funded by the European Union and is scheduled to run for three years. It is named after Pheme, the character in Greek mythology who was famed for spreading rumors.

The system will classify online rumors into four different types, the group said in its press release (a short illustrative sketch follows the list):

- Speculation, such as whether interest rates might rise

- Controversy, as over the possible dangers of the MMR vaccine

- Misinformation, where a rumor is spread unwittingly

- Disinformation, where there is malicious intent to deceive
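
The press release does not describe how this taxonomy is implemented; purely as an illustration, and with every name and the decision rule below being assumptions rather than anything PHEME has published, the four categories could be modeled like this:

```python
from enum import Enum

class RumorType(Enum):
    """The four rumor categories named in the PHEME press release."""
    SPECULATION = "speculation"        # e.g. whether interest rates might rise
    CONTROVERSY = "controversy"        # e.g. the MMR vaccine debate
    MISINFORMATION = "misinformation"  # untrue content spread unwittingly
    DISINFORMATION = "disinformation"  # untrue content spread deliberately

def label_rumor(is_false: bool, deliberate: bool, disputed: bool) -> RumorType:
    """Toy decision rule mapping simple flags onto the four categories.

    The project's real classifiers are not public; this only shows how
    the taxonomy partitions rumors.
    """
    if is_false:
        return RumorType.DISINFORMATION if deliberate else RumorType.MISINFORMATION
    return RumorType.CONTROVERSY if disputed else RumorType.SPECULATION
```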

The project originated from research about the use of social media in the London riots of 2011.

“There was a suggestion after the 2011 riots that social networks should have been shut down, to prevent the rioters using them to organize,” said Dr. Kalina Bontcheva, the lead researcher on the project at the University of Sheffield.

“The problem is that it all happens so fast and we can’t quickly sort truth from lies. This makes it difficult to respond to rumors, for example, for the emergency services to quash a lie in order to keep a situation calm. Our system aims to help with that, by tracking and verifying information in real time,” she added.


The system will use three factors to establish the accuracy of a piece of information: the information itself (its lexical, syntactic and semantic properties), cross-references against trustworthy data sources, and the way the information spreads.

It will also attempt to examine the background and history of a social media account to see whether the account has been set up just to spread rumors.
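
The article does not say how the three factors would be weighed against one another. One simple possibility, sketched below with an assumed linear weighting (the function, its inputs and the weights are all hypothetical, not the project's published method), is a combined score:

```python
def veracity_score(content: float,
                   source_agreement: float,
                   diffusion: float,
                   weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Combine three PHEME-style signals into a single score in [0, 1].

    content          -- lexical/syntactic/semantic analysis of the text itself
    source_agreement -- how well the claim matches trustworthy data sources
    diffusion        -- how the rumor spreads, including the posting
                        account's background and history

    Each input is assumed to be normalized to [0, 1]; the weighting
    scheme is an assumption, as the project has published none.
    """
    w_content, w_source, w_diffusion = weights
    return (w_content * content
            + w_source * source_agreement
            + w_diffusion * diffusion)

# Example: plausible text, weak source support, bot-like spread pattern.
print(veracity_score(content=0.8, source_agreement=0.3, diffusion=0.2))  # ≈ 0.47
```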

Images, however, are not going to be analyzed.

The results will then be displayed to the software’s user on the screen of whatever device they are using.

“We can already handle many of the challenges involved, such as the sheer volume of information in social networks, the speed at which it appears and the variety of forms it takes, from tweets to videos, pictures and blog posts. But it’s currently not possible to automatically analyze, in real time, whether a piece of information is true or false and this is what we’ve now set out to achieve,” said Bontcheva.

The first set of results from the project is expected to be ready in 18 months and will be tested among journalists and healthcare professionals.

“We’ve got to see what works and what doesn’t, and see if we’ve got the balance right between automation and human analysis,” she said.

However, concerns have already been voiced over the program, as one of its target consumers may be governments. In an opinion piece for the Guardian, Steven Poole points out that neither the system nor the idea behind it is foolproof.

“Authoritative news outlets have sometimes been complicit in spreading disinformation (see the New York Times’ sorry record with the pre-Iraq war weapons of mass destruction claims),” writes Poole.

“If such automated systems of truth grading are taken seriously by powerful institutions or the state itself, then the people designing the algorithms will essentially be an unelected cadre of cyber thought police,” he concludes.
