
The Future of Fake News: Don’t Believe Everything You Read, See or Hear

From The Guardian:

The University of Washington’s Synthesizing Obama project took audio from one of Obama’s speeches and used it to animate his face in an entirely different video.

There’s a new breed of video and audio manipulation tools, made possible by advances in artificial intelligence and computer graphics, that will allow for the creation of realistic-looking footage of public figures appearing to say, well, anything. Trump declaring his proclivity for water sports. Hillary Clinton describing the stolen children she keeps locked in her wine cellar. Tom Cruise finally admitting what we suspected all along … that he’s a Brony.

This is the future of fake news. We’ve long been told not to believe everything we read, but soon we’ll have to question everything we see and hear as well.

For now, several research teams are working on capturing and synthesizing different visual and audio elements of human behavior.

Software developed at Stanford University is able to manipulate video footage of public figures to allow a second person to put words in their mouth – in real time. Face2Face captures the second person’s facial expressions as they talk into a webcam and then morphs those movements directly onto the face of the person in the original video. The research team demonstrated their technology by puppeteering videos of George W Bush, Vladimir Putin and Donald Trump.

Face2Face lets you puppeteer celebrities and politicians, literally putting words in their mouths.
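
To make the trick a little more concrete, here is a minimal sketch of the general landmark-based idea behind facial reenactment. It is not Face2Face’s actual method – the real system fits a dense 3D face model and re-renders the target’s mouth photorealistically, frame by frame – and the image paths are placeholders for illustration. The dlib landmark model file is a standard download from the dlib project.

```python
# Toy facial-reenactment sketch: copy the mouth region from a "source"
# face (the puppeteer) onto a "target" face (the person in the video).
# Illustrates the landmark-based idea only; Face2Face itself uses dense
# 3D face model fitting, not this simple 2D warp-and-blend.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumed model file; available from the dlib project if you try this.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def mouth_points(image):
    """Return the 20 mouth landmarks (indices 48-67) of the first face found."""
    faces = detector(image, 1)
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(image, faces[0])
    return np.array([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)],
                    dtype=np.float32)

source = cv2.imread("actor_frame.jpg")    # placeholder: person supplying the expression
target = cv2.imread("subject_frame.jpg")  # placeholder: person being puppeteered

src_pts, dst_pts = mouth_points(source), mouth_points(target)

# Estimate a similarity transform mapping the source mouth onto the target mouth.
matrix, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
warped = cv2.warpAffine(source, matrix, (target.shape[1], target.shape[0]))

# Blend the warped mouth into the target frame with Poisson seamless cloning.
hull = cv2.convexHull(dst_pts.astype(np.int32))
mask = np.zeros(target.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, hull, 255)
center = tuple(int(v) for v in dst_pts.mean(axis=0))
output = cv2.seamlessClone(warped, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("reenacted_frame.jpg", output)
```

Run per frame over a video, this crude version already conveys why the technique is unsettling: the hard research problem is not moving the mouth, but making the result photorealistic in real time, which is exactly what the Stanford system demonstrated.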

On its own, Face2Face is a fun plaything for creating memes and entertaining late night talk show hosts. However, with the addition of a synthesized voice, it becomes more convincing – not only does the digital puppet look like the politician, but it can also sound like the politician.

A research team at the University of Alabama at Birmingham has been working on voice impersonation. With 3-5 minutes of audio of a victim’s voice – taken live or from YouTube videos or radio shows – an attacker can create a synthesized voice that can fool both humans and voice biometric security systems used by some banks and smartphones. The attacker can then talk into a microphone and the software will convert it so that the words sound like they are being spoken by the victim – whether that’s over the phone or on a radio show.
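
The real systems are far more sophisticated, but one crude ingredient of voice conversion can be sketched in a few lines: estimate the victim’s typical pitch from a short sample, then shift the attacker’s recording toward it. This is a toy illustration only, assuming the librosa and soundfile Python libraries; the file names are placeholders, and actual impersonation tools also model timbre, prosody and speaking style, which is what lets them fool biometric systems.

```python
# Toy voice-conversion step: shift the attacker's pitch toward the victim's.
# Real impersonation systems model much more than pitch (spectral envelope,
# prosody, speaking style); this only shows the basic idea of mapping one
# voice's statistics onto another. File names are placeholders.
import librosa
import numpy as np
import soundfile as sf

def median_pitch_hz(path):
    """Estimate the median fundamental frequency (Hz) of a speech recording."""
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    return float(np.nanmedian(f0)), y, sr  # pyin marks unvoiced frames as NaN

victim_f0, _, _ = median_pitch_hz("victim_sample.wav")  # e.g. ripped from YouTube
attacker_f0, attacker_audio, sr = median_pitch_hz("attacker.wav")

# Distance between the two voices in semitones: 12 * log2(target / source).
n_steps = 12.0 * np.log2(victim_f0 / attacker_f0)
shifted = librosa.effects.pitch_shift(attacker_audio, sr=sr, n_steps=n_steps)
sf.write("crude_impersonation.wav", shifted, sr)
```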

For the rest of this piece, read the full article at The Guardian.

