In the wake of Jack Teixeira’s leak of classified documents on the war in Ukraine, many have criticized the security clearance process. Personnel vetting has gone through major overhauls in the last few years to manage the cleared candidate pool and the backlog highs of the late 2010s, but the process is not perfect, nor will it likely ever be, given changing norms, advances in technology, and our topic today: social media.
Companies have long used social media as part of the hiring process. Hiring managers check up on prospective candidates’ digital personas, and some applicant tracking systems automatically link social media profiles to candidates. More recently, the UK began requiring all applicants for publicly funded K-12 teaching positions to undergo social media screening, and California signed into law a similar requirement for police officers.
Today I’m joined by Darrin Lipscomb, CEO of Ferretly, an AI-powered social media background screening platform. If private companies or individuals can collect this information online, the government clearly has the capability to build a more complete whole-person picture and to check for red flags when determining your trustworthiness to protect national security. Listen in on our discussion of this topic.
SOCIAL MEDIA AND YOUR SECURITY CLEARANCE
When the government signed SEAD 5, it created a framework for federal agencies to use as they developed their own social media monitoring programs. Even though the policy hasn’t moved beyond pilots, we see similar use cases for social media monitoring elsewhere in government and in the private sector. U.S. Immigration and Customs Enforcement awarded SRA International a $100 million blanket purchase agreement to vet visa applicants’ social media profiles through a human-only approach. Lipscomb notes that a human-only approach is time consuming and costly. “AI should do the heavy lifting,” he says.
In today’s background investigation process, social media can come up, but it is still examined through human review rather than AI-driven mass monitoring.
We’ve seen that AI in HR has been subject to discrepancies and discrimination in the hiring process. More recently, an eating disorder support group replaced its help line with an AI tool, and the bot was taken offline after it began offering weight-loss advice. Lipscomb says these issues wouldn’t necessarily arise if AI social media screening were implemented in the personnel vetting process. He argues it would introduce less bias than a human-only approach could.
The security clearance process currently operates under the 13 adjudicative guidelines, and social media intersects with them through the whole-person concept. Under Continuous Vetting (CV), social media may also be a factor. So anything publicly available is fair game: social networks (Facebook, LinkedIn), microblogging sites (Twitter), blogging platforms and forums (WordPress, Tumblr), picture and video apps (Flickr, YouTube), music sharing (Spotify), and much more.
WOULD AI-POWERED SOCIAL MEDIA SCREENING PREVENT INSIDER THREATS?
Time published an article on digital blind spots in the security clearance process. Would cases like Jack Teixeira, Reality Winner, or Aaron Alexis have been prevented if we employed an AI screening approach as part of the personnel vetting process? Maybe, maybe not. Hindsight is 20/20, and the key term here is “publicly available information.” If troublesome statements are posted under aliases, even a far-reaching social vetting system may never find them. The cost-benefit case is clear, but could there be other implications if applicants believed they were wrongfully denied? We will only find out if the government decides to let AI do the heavy lifting alongside the human approach to detecting future insider threats.