Deepfake AI has the potential to undermine national security, a cybersecurity expert has warned.
Dr Tim Stevens, director of the Cyber Security Research Group at King’s College London, said deepfake AI – which can create hyper-realistic images and videos of people – had potential to undermine democratic institutions and national security.
Dr Stevens said the widespread availability of these tools could be exploited by states like Russia to ‘troll’ target populations in a bid to achieve foreign policy objectives and ‘undermine’ the national security of countries.
He added: ‘The potential is there for AIs and deepfakes to affect national security.
‘Not at the high level of defence and interstate warfare but in the general undermining of trust in democratic institutions and the media.
‘They could be exploited by autocracies like Russia to decrease the level of trust in those institutions and organisations.’
Here, MailOnline has put together a deepfake test as well as everything you need to know about deepfakes. What are they? How do they work? What risks do they pose? Can you tell the difference between the real thing and AI?
What is a deepfake and how are they made?
If you’ve seen Tom Cruise playing guitar on TikTok, Barack Obama calling Donald Trump a ‘total and complete dipshit’, or Mark Zuckerberg bragging about having control of ‘billions of people’s stolen data’, you have probably seen a deepfake before.
A ‘deepfake’ is a form of artificial intelligence which uses ‘deep learning’ to manipulate audio, images and video, creating hyper-realistic media content.
The term ‘deepfake’ was coined in 2017 when a Reddit user posted manipulated porn videos to the forum. The videos swapped the faces of celebrities like Gal Gadot, Taylor Swift and Scarlett Johansson onto the bodies of porn stars.
A deepfake uses a subset of artificial intelligence (AI) called deep learning to construct the manipulated media. The most common method uses ‘deep neural networks’ and ‘encoder algorithms’, and requires a base video into which the target’s face will be inserted, as well as a large collection of videos of the target.
The deep learning AI studies footage of both subjects in various conditions and finds the features they share, before mapping the target’s face onto the person in the base video.
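The shared-encoder idea above can be sketched in a few lines of Python. This is an illustrative toy, not a working deepfake: the weights are random rather than trained, and all the names and dimensions are hypothetical. What it shows is the structure — one encoder shared by both identities, one decoder per identity, and a swap performed by routing one person’s encoded frame through the other person’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 64x64 grayscale face flattened to 4096 pixels,
# compressed to a 128-dimensional latent code.
PIXELS, LATENT = 64 * 64, 128

# One encoder is shared across both identities, so it learns features
# common to all faces (pose, lighting, expression).
W_encode = rng.standard_normal((LATENT, PIXELS)) * 0.01

# Each identity gets its own decoder, which learns to paint that specific
# person's face back onto the shared features. (Random here, trained in reality.)
W_decode_a = rng.standard_normal((PIXELS, LATENT)) * 0.01  # person A's decoder
W_decode_b = rng.standard_normal((PIXELS, LATENT)) * 0.01  # person B's decoder

def encode(face):
    return W_encode @ face

def decode(latent, W_decode):
    return W_decode @ latent

# The swap: encode a frame of person A, but decode it with B's decoder,
# producing B's face in A's pose.
frame_of_a = rng.standard_normal(PIXELS)
swapped = decode(encode(frame_of_a), W_decode_b)
print(swapped.shape)  # (4096,)
```

In a real system the two decoders are trained to reconstruct their own person from the shared code, which is why the swapped output inherits the base video’s pose and expression.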
Generative Adversarial Networks (GANs) are another way to make deepfakes. A GAN pits two machine learning (ML) algorithms against each other: the first creates the forgeries, while the second tries to detect them. The process completes when the second model can no longer find inconsistencies.
The accuracy of a GAN depends on the volume of training data. That’s why there are so many deepfakes of politicians, celebrities and adult film stars: there is usually a large amount of footage of those people available to train the machine learning algorithms.
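The adversarial loop described above can be sketched on toy data. This is a hypothetical, minimal illustration rather than a real deepfake generator: the ‘faces’ are just numbers drawn from a Gaussian, both models are single-layer linear/logistic functions, and the gradients are written out by hand. The point is the alternation — the detector learns to separate real from fake, then the forger updates to fool the detector.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy "real data": samples from a 1-D Gaussian around 4.0. In an actual
# deepfake pipeline this would be images of the target person.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator (forger): maps random noise z to a sample, G(z) = a*z + b.
# Discriminator (detector): logistic classifier D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    x_real, x_fake = real_batch(32), a * z + b

    # --- discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator step: push D(fake) -> 1 (fool the discriminator) ---
    d_fake = sigmoid(w * x_fake + c)
    g_x = -(1 - d_fake) * w          # d(-log D(x_fake)) / d(x_fake)
    a -= lr * np.mean(g_x * z)
    b -= lr * np.mean(g_x)

# The generator's offset b drifts toward the real data's mean of 4.0.
print(f"{b:.2f}")
```

With more data and deeper networks in place of these two scalar models, the same tug-of-war is what makes GAN-generated faces so convincing.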
Successes and failures of deepfakes
A notorious example of a deepfake – or ‘cheapfake’ – was a crude impersonation of Volodymyr Zelensky appearing to surrender to Russia, in a video widely circulated on Russian social media in March 2022.
The clip shows the Ukrainian president speaking from his lectern as he calls on his troops to lay down their weapons and acquiesce to Putin’s invading forces.
Savvy internet users immediately flagged the discrepancies between the colour of Zelensky’s neck and face, the strange accent, and the pixelation around his head.
Mounir Ibrahim, who works for Truepic, a company which roots out online deepfakes, told the Daily Beast: ‘The fact that it’s so poorly done is a bit of a head-scratcher.
‘You can clearly see the difference — this is not the best deepfake we’ve seen, not even close.’
One of the most convincing deepfakes on social media at the moment is TikTok parody account ‘deeptomcruise’.
The account was created in February 2021 and has over 18.1million likes and five million followers.
It posts hyper-realistic parody versions of the Hollywood star doing things from magic tricks, playing golf, reminiscing about the time he met the former President of the Soviet Union and posing with model Paris Hilton.
In one clip, Cruise can be seen cuddling Paris Hilton as they pretend to be a couple.
He tells the model ‘You’re so absolutely beautiful’, to which Hilton blushes and thanks him.
While looking in the mirror, Hilton tells the actor: ‘Looking very smart Mr Cruise’.
Another video shared to the account shows deepfake Cruise wearing a festive Hawaiian shirt while kneeling in front of the camera.
He shows a coin and in an instant makes it disappear – like magic.
‘I want to show you some magic,’ the imposter says, holding the coin.
Do deepfakes pose a threat?
Despite the entertainment value of deepfakes, some experts have warned against the dangers they might pose.
King’s College London’s Cyber Security Research Group director Dr Tim Stevens has warned about the potential deepfakes have in being used to spread fake news and undermine national security.
Dr Stevens said the technology could be exploited by autocracies like Russia to undermine democracies, as well as bolstering legitimacy for foreign policy aims like going to war.
He said the Zelensky deepfake was ‘very worrying’ because there were people who ‘did believe it’ as there are people who ‘want to believe it’.
Theresa Payton, CEO of cybersecurity company Fortalice, said deepfake AI also had potential to combine real data to create ‘franken-frauds’ which could infiltrate companies and steal information.
She said the ‘age of increased remote working’ was the perfect environment for these types of ‘AI people’ to flourish.
Miss Payton told the Sun: ‘As companies automate their resume scanning processes and conduct remote interviews, fraudsters and scammers will leverage cutting-edge deepfake AI technology to create “clone” workers backed up with synthetic identities.
‘The digital walk into a natural person’s identity will be nearly impossible to deter, detect and recover.’
Dr Stevens added: ‘What kind of society do we want? What do we want the use of AI to look like? Because at the moment the brakes are off and we’re heading into a space that’s pretty messy.
‘If it looks bad now, it’s going to be worse in future. We need a conversation about what these tools are for and what they could be for, as well as what our society will look like for the rest of the 21st century.
‘This isn’t going away. They’re very powerful tools and they can be used for good or for ill.’
How to spot a deepfake
1. Unnatural eye movement. Eye movements that do not look natural — or a lack of eye movement, such as an absence of blinking — are huge red flags. It’s challenging to replicate the act of blinking in a way that looks natural. It’s also challenging to replicate a real person’s eye movements. That’s because someone’s eyes usually follow the person they’re talking to.
2. Unnatural facial expressions. When something doesn’t look right about a face, it could signal facial morphing. This occurs when one image has been stitched over another.
3. Awkward facial-feature positioning. If someone’s face is pointing one way and their nose is pointing another way, you should be skeptical about the video’s authenticity.
4. A lack of emotion. You also can spot what is known as “facial morphing” or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying.
5. Awkward-looking body or posture. Another sign is if a person’s body shape doesn’t look natural, or there is awkward or inconsistent positioning of head and body. This may be one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.
6. Unnatural body movement or body shape. If someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.
7. Unnatural coloring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.
8. Hair that doesn’t look real. You won’t see frizzy or flyaway hair. Why? Fake images won’t be able to generate these individual characteristics.
9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.
10. Blurring or misalignment. If the edges of images are blurry or visuals are misaligned — for example, where someone’s face and neck meet their body — you’ll know that something is amiss.
11. Inconsistent noise or audio. Deepfake creators usually spend more time on the video images than the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.
12. Images that look unnatural when slowed down. If you watch a video on a screen that’s larger than your smartphone or have video-editing software that can slow down a video’s playback, you can zoom in and examine images more closely. Zooming in on lips, for example, will help you see if they’re really talking or if it’s bad lip-syncing.
13. Hash discrepancies. There’s a cryptographic algorithm that helps video creators show that their videos are authentic. It inserts hashes at certain places throughout a video; if those hashes change, you should suspect video manipulation.
14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. While not foolproof, this blockchain-based verification can help establish a video’s authenticity. Here’s how it works. When a video is created, the content is registered to a ledger that can’t be changed. This technology can help prove the authenticity of a video.
15. Reverse image searches. A search for the original image — or a reverse image search — can unearth similar videos online and help determine whether an image, audio clip or video has been altered in any way. While reverse video search technology is not yet publicly available, such a tool could be helpful.
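The digital-fingerprint idea in point 14 can be sketched with ordinary cryptographic hashing. This is a simplified, hypothetical illustration — a single in-memory append-only ledger rather than a distributed blockchain, with class and function names invented for the example — but it shows why even a one-byte edit to a registered video is detectable.

```python
import hashlib
import json

def fingerprint(video_bytes: bytes) -> str:
    """SHA-256 digest of the raw video file: any edit changes it completely."""
    return hashlib.sha256(video_bytes).hexdigest()

class Ledger:
    """Toy append-only log. Each entry's hash covers the previous entry,
    so past records cannot be quietly rewritten."""
    def __init__(self):
        self.entries = []

    def register(self, video_bytes: bytes) -> str:
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        record = {"fingerprint": fingerprint(video_bytes), "prev": prev}
        # Hash the record itself, chaining it to everything before it.
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record["fingerprint"]

    def is_authentic(self, video_bytes: bytes) -> bool:
        return any(e["fingerprint"] == fingerprint(video_bytes)
                   for e in self.entries)

ledger = Ledger()
original = b"...raw video bytes..."
ledger.register(original)
print(ledger.is_authentic(original))          # True
print(ledger.is_authentic(original + b"x"))   # False: one-byte edit detected
```

Real systems add timestamps, signatures and a distributed network of ledger copies, but the core guarantee — a registered fingerprint that tampering cannot silently match — is the same.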