11 Nov, 2016 17:16

Lip-reading robot developed by British scientists… but it’s no use to spies (VIDEO)


A lip-reading program developed by UK scientists can now outperform human experts at deciphering words. The technology cannot be used for surveillance, however, as it requires well-lit, head-on footage in which the speaker’s tongue is visible.

University of Oxford researchers developed a computer program called LipNet which can read lips more accurately than human specialists. Its readings have proven correct over 90 percent of the time, while most experienced human lip-readers only hit the mark 52 percent of the time.

“Machine lip-readers have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces, covert conversations, speech recognition in noisy environments, biometric identification, and silent-movie processing,” researcher and LipNet co-creator Yannis Assael wrote in a paper about the program.

Speaking to the Daily Mail, the researchers said: “LipNet has no application in the world of surveillance, simply because lip-reading requires you to see the subject’s tongue - meaning that the video has to be straight on and well-lit to get a good result.”

LipNet was developed by feeding it thousands of video clips of two men and two women uttering a series of cryptic commands such as “set blue by A four please.” The system mapped their mouth movements, building a database of video frames and associated words, which it then compares against the lips of the person being read.
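The frame-matching idea described above can be sketched as a toy nearest-neighbour lookup. This is purely illustrative and not LipNet’s actual method (the real system is an end-to-end neural network); the word templates and feature values below are invented for the example:

```python
# Toy sketch of the frame/word database idea described in the article:
# each known word has a template feature vector (e.g. summarized
# mouth-shape measurements), and observed frames are matched to the
# closest template. All values here are illustrative assumptions.
import math

templates = {
    "set":    [0.2, 0.8, 0.1],
    "blue":   [0.7, 0.3, 0.5],
    "please": [0.4, 0.6, 0.9],
}

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def read_lips(frame_features):
    """Match each observed frame's features to the nearest template word."""
    return [min(templates, key=lambda w: euclidean(templates[w], f))
            for f in frame_features]

# Two hypothetical observed frames, close to "set" and "blue" respectively.
observed = [[0.21, 0.79, 0.12], [0.68, 0.31, 0.52]]
print(read_lips(observed))  # → ['set', 'blue']
```

A real system works on sequences rather than isolated frames, which is why LipNet’s accuracy depends on seeing continuous, well-lit footage of the mouth.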

The project relied on funding from a number of tech companies, including Google’s DeepMind.
