Misinformation has been spread for as long as media has existed, but in recent years a new and ominous threat has emerged, one that quite literally puts words into people’s mouths via online videos. Dubbed “deepfakes,” these manipulated videos have roots in academic face-synthesis research from the late 1990s, but now, thanks to consumer hardware and simple apps, almost anyone can make a fake video that seems all too real.
Deepfakes are little different in concept from the effects that digitally inserted Tom Hanks into archival footage in the 1994 film Forrest Gump, but back then it took a team of artists working for weeks on a single scene. Now artificial intelligence (AI) and machine learning have made the process so simple that deepfakes have gone viral.
Examples and Evolution of Deepfakes
Examples include President Obama describing President Trump with an expletive, and Mark Zuckerberg “caught on camera” admitting that Facebook wants to manipulate and exploit its users. While something about these clips still isn’t quite right, and a hint of manipulation remains, the results are far more convincing than what appeared in an Oscar-winning film 25 years ago.
In most cases, deepfakes involve a public figure, someone who has appeared extensively on video. This is an obvious concern for candidates for office, celebrities, and CEOs who are frequently in the media. However, as more and more people post and share videos of themselves on social media, it wouldn’t take much for someone to take that content and morph it into something far more sinister.
This could take “revenge porn” to a new and terrifying level, but it could also be used to discredit almost anyone.
The number of deepfakes is on the rise. According to a report from the research firm Deeptrace, there were fewer than 8,000 deepfake videos online at the beginning of 2019; just nine months later, that number had almost doubled.
The “good news” is that convincing deepfakes still require some knowledge and expertise, along with a reasonably powerful computer.
For those who are regularly in the public eye, it may be wise to maintain a time-stamped archive of all original videos. If a manipulated version shows up online, it is then possible to respond with the original, unaltered footage.
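One low-tech way to build such an archive is to record a cryptographic hash alongside a timestamp for each original file, so a circulating copy can later be proven bit-identical to what you archived. Below is a minimal Python sketch of that idea; the folder name, manifest path, and .mp4 filter are illustrative assumptions, not a prescribed workflow:

```python
import hashlib
import json
import time
from pathlib import Path

ARCHIVE_DIR = Path("my_videos")         # hypothetical folder of original videos
MANIFEST = Path("video_manifest.json")  # where hash/timestamp records are kept

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def update_manifest() -> None:
    """Record a hash and UTC timestamp for every video not yet in the manifest."""
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    for video in sorted(ARCHIVE_DIR.glob("*.mp4")):
        if video.name not in manifest:
            manifest[video.name] = {
                "sha256": sha256_of(video),
                "archived_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            }
    MANIFEST.write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    update_manifest()
```

Note that the manifest alone only shows when you claim the hashes were recorded; emailing it to a third party or using an external timestamping service strengthens the time claim.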
Clearance Issue
The mere appearance that someone said something, even if they never actually said it, can be enough to create an issue. A simple denial may not be good enough when it comes to a security clearance investigation. Even if the video is fake, it could still be seen as evidence that the person has been compromised.
“This is most certainly a concern, and could have a devastating impact on someone’s security clearance,” said attorney Mark Zaid, who handles security clearance-related matters.
“I have thought about this concern for years as a possible threat to someone, but, fortunately, I have never experienced or even heard of an actual case yet,” Zaid told ClearanceJobs.
“I can say that, so far, I’ve yet to see a clearance case based on a deepfake but who knows what we’ll see in the upcoming years as the technology becomes more commonplace,” warned Zaid’s colleague, attorney Bradley P. Moss.
“That said, given today’s technology and the ease at which videos and photographs can be realistically manipulated, I would always encourage individuals to routinely run online checks on themselves to ensure nothing false or fake is out there that could cause them problems,” Zaid suggested.
“As a general matter, anyone expecting to undergo a background check of any real seriousness should be taking efforts to keep an eye on their Internet presence,” added Moss. “That’s just basic due diligence. I find it unlikely that a mere deepfake video, without some type of corroborating independent evidence, would be viewed as credible or authentic by an agency.”
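For readers who want to automate the routine online checks Zaid and Moss describe, one option is a periodically scheduled scripted search for your own name. The sketch below uses Google’s Custom Search JSON API; the API key, search engine ID, and the name being searched are placeholders you would supply, and any search service with an API would work just as well:

```python
import requests

# Placeholders: a real API key and Programmable Search Engine ID are required.
API_KEY = "YOUR_API_KEY"
SEARCH_ENGINE_ID = "YOUR_CX_ID"
QUERIES = ['"Jane Q. Clearanceholder" video', '"Jane Q. Clearanceholder" deepfake']

def self_check(query: str) -> None:
    """Print the top results for a query so new or unexpected hits stand out."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": SEARCH_ENGINE_ID, "q": query},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("items", []):
        print(f"{item['title']}\n  {item['link']}")

for q in QUERIES:
    self_check(q)
```

Running a script like this weekly, and diffing the results against the previous run, makes a new upload featuring your name hard to miss.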
What Do You Do If You Come Across a Deepfake?
It’s understandable to feel unprepared for what to do if you ever become a victim of this technology.
Technology industry analyst Rob Enderle of the Enderle Group told ClearanceJobs, “Contact a reputation protection firm and have them move to have the video removed and, if possible, go after the perpetrator. The problem right now is I’m not aware of a reputation scanning service – like LifeLock – that handles unstructured data like audio, videos or photos. If it is adequately indexed, however, they may still work.”
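Enderle’s caveat about “unstructured data” points at a real technical gap: plain text is easy to index, while images and video are not. One technique that can help close it is perceptual hashing, which gives visually similar images similar fingerprints even after re-encoding. Here is a minimal sketch using the third-party Pillow and imagehash Python libraries; the frame file names and the distance threshold are illustrative:

```python
import imagehash
from PIL import Image

# Illustrative file names: frames exported from an archived original video
# and from a suspect clip found online (e.g. extracted with ffmpeg).
original = imagehash.phash(Image.open("original_frame.png"))
suspect = imagehash.phash(Image.open("suspect_frame.png"))

# Perceptual hashes of visually similar images differ in only a few bits,
# so a small Hamming distance suggests the suspect frame derives from the
# original even after re-encoding or light editing.
distance = original - suspect
print(f"Hamming distance: {distance}")
if distance <= 10:  # illustrative threshold for a 64-bit hash
    print("Frames are likely derived from the same source.")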
It is becoming a brave new world, and a frightening one.