Secret squirrels are always encouraged to operate on social media with caution. That’s because foreign governments and U.S. adversaries use these public-facing platforms as a way to potentially infiltrate the U.S. national security bubble. LinkedIn can be a battleground for nation-state espionage operations.

Any advancement in technology, of course, brings new threats and issues to be mindful of. Artificial intelligence has delivered some amazing gains, like detecting threats in public places through video footage or allowing us to move more quickly in multiple industries. Biometric data collection has allowed us to track and report on terrorists who have killed the innocent. But with all of these positives come negatives: convincing fake facial features have become possible in part because technology has gotten better at identifying real ones in the first place.


AI is defined as the field that studies “intelligent agents,” or intelligence demonstrated by machines. Though founded as an academic discipline in the mid-1950s, it has since experienced several phases of innovation.

With the creation of the computer naturally came software bots. It started with chatbots: algorithms designed to converse with a human. Social media environments with millions of users give threat actors a strong motivation to use these algorithms that exhibit human-like behavior as a way to pull information for nefarious purposes.

But it’s not just social media anymore. Most websites have some type of networking or transactional capabilities, and secret squirrels need to tread lightly with any sites that have users.


Now, AI-generated faces are making these operations much more difficult to detect, looking incredibly real at first glance.

These computer-generated faces showed up on the internet a few years ago, but they still present a very real threat. They are used as masks by real people with malicious intent: foreign actors who present a genuine-looking face but whose goal is to infiltrate the intelligence community.

You’ve probably seen them on platforms like Facebook or Twitter, on sites that allow product reviews, or even dating profiles on romantic sites.

There are even businesses dedicated to creating these fake people. On Generated.Photos, you can buy a fake person for $2.99. If you need a few fake people to make your organization appear more diverse, you can get photos for free from – and even adjust their age or ethnicity.


Another class of machine learning that has made this possible is the generative adversarial network (GAN). Introduced in 2014, this approach pits two networks against each other: given photos of real people, a generator network learns to produce new photos of fake people, while a second network, the discriminator, attempts to distinguish which photos are fake.
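To make the adversarial idea concrete, here is a minimal toy sketch of a GAN, not an image model: the “real data” are just numbers drawn from a normal distribution around 4, the generator is a one-parameter-pair linear function, and the discriminator is a logistic classifier. All names and parameters here are illustrative assumptions; real face-generating GANs use deep neural networks trained on millions of photos, but the competitive training loop has the same shape.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp the input so math.exp never overflows on extreme values.
    return 1.0 / (1.0 + math.exp(-max(min(x, 30.0), -30.0)))

# Toy 1-D GAN. "Real" samples come from N(4, 1).
# Generator:     g(z) = a*z + b, with noise z ~ N(0, 1)
# Discriminator: d(x) = sigmoid(w*x + c), the probability x is real
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.01         # learning rate for both players

for step in range(5000):
    x_real = random.gauss(4, 1)
    z = random.gauss(0, 1)
    x_fake = a * z + b

    # Discriminator step: ascend the gradient of
    # log d(x_real) + log(1 - d(x_fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend the gradient of log d(x_fake),
    # i.e. try to fool the updated discriminator.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w   # derivative of log d(x) w.r.t. x
    a += lr * grad_x * z
    b += lr * grad_x

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
# After training, the generator's output should drift toward the
# real distribution's mean of 4.
print(f"mean of generated samples: {fake_mean:.2f}")
```

The same tug-of-war, scaled up to convolutional networks and photo datasets, is what lets these systems emit faces the discriminator (and a casual human viewer) can no longer flag as fake.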

So, while you’re living through a pandemic and may be hungry for more connection to people, be cautious of social engineering tactics and these fake profiles online. It’s easy for our adversaries to feed a program photos of real people and have it generate new photos of fake people – don’t make it easy for them to gain pertinent information.


Katie Keller is a marketing fanatic who enjoys anything digital, communications, promotions & events. She has 7+ years in the DoD supporting multiple contractors with recruitment strategy, staffing augmentation, marketing, & communications. Favorite type of beer: IPA. Fave hike: the Grouse Grind, Vancouver, BC. Fave social platform: ClearanceJobs! 🇺🇸