
Creepy and lifelike Deepfake videos could be commonplace ‘within six months’, claims expert


  • Videos could be created which show people doing things they didn’t do in reality
  • The University of Southern California’s Dr Hao Li made the worrying prediction
  • Obvious signs that a video is fake could soon disappear with new technology
  • People will be unable to tell real and fake footage apart, he warned

Deepfake videos could be commonplace and found across the media and online platforms within six months, according to a leading expert. 

The videos are designed to look completely real and to show people doing things they never did. 

They are created using complex computing and artificial intelligence, and have recently caused outrage. 

 


Dr Hao Li, a computer scientist at the University of Southern California, revealed the videos could soon be commonplace.   

Deepfakes combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN). 

They are used to produce or alter video content so that it presents something that didn’t, in fact, occur.
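The adversarial idea behind a GAN can be shown in a minimal sketch: a 'generator' produces fake samples and a 'discriminator' tries to tell them from real ones, each trained against the other. The toy one-dimensional networks and numbers below are purely illustrative, not any real deepfake system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples drawn from a normal distribution around 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Tiny one-layer generator and discriminator (illustrative weights only).
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generator(z):
    # Maps random noise to fake samples.
    return z @ g_w + g_b

def discriminator(x):
    # Outputs an estimated probability that a sample is real.
    return 1 / (1 + np.exp(-(x @ d_w + d_b)))

# One adversarial step: the discriminator scores real vs. generated samples.
# In a full implementation, both networks are updated by gradient descent
# on opposing objectives until fakes become hard to distinguish.
z = rng.normal(size=(8, 1))
fake = generator(z)
real = real_batch(8)
d_loss = -np.mean(np.log(discriminator(real) + 1e-8)
                  + np.log(1 - discriminator(fake) + 1e-8))
```

Real deepfake systems use deep convolutional networks and far more training, but the generator-versus-discriminator structure is the same.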

Most fake video can be easily spotted, but Dr Li believes the obvious giveaways will soon disappear. 

‘It’s still very easy, you can tell from the naked eye most of the deepfakes,’ Dr Li said in an interview with CNBC.

‘But there also are examples that are really, really convincing.’ He added that those require ‘sufficient effort’ to create.

‘Soon, it’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions.’ 

The key issue, Dr Li claimed, is learning how to flag up clips intended to manipulate their audience.

‘The real question is how can we detect videos where the intention is something that is used to deceive people or something that has a harmful consequence,’ he said.

The videos began in porn – there is a thriving online market for celebrity faces superimposed on porn actors’ bodies – but so-called revenge porn (the malicious sharing of explicit photos or videos of a person) is also a massive problem.

The video that kicked off the concern last month was a doctored video of Nancy Pelosi, the speaker of the US House of Representatives.

It had simply been slowed down to about 75 per cent speed to make her appear drunk or slurring her words.
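Unlike a true deepfake, a slowdown like this needs no AI at all; it is simple timing arithmetic applied to the video's frames. As a rough sketch (the 10-second clip length here is illustrative, not the actual video's):

```python
# Slowing a clip to 75 per cent speed stretches every frame's display time.
speed = 0.75                  # playback rate relative to the original
original_duration_s = 10.0    # illustrative clip length
slowed_duration_s = original_duration_s / speed

# Equivalently, every presentation timestamp is scaled by the same factor,
# so a frame originally shown at t = 3.0 s now appears at t = 4.0 s.
scale = 1 / speed
```

Because pitch drops along with speed, the audio is what makes the subject sound as if they are slurring.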

The footage was shared millions of times across every platform, including by Rudy Giuliani – Donald Trump’s lawyer and the former mayor of New York. 

The danger is that making a person appear to say or do something they did not has the potential to take the war of disinformation to a whole new level. 

The threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters.

This leaves the companies that run those platforms, and governments, unsure how to tackle the issue. 



Read more at DailyMail.co.uk

