AI is taking over every industry – but what about security? How could artificial intelligence be used in the security clearance process? Sean Bigley and Lindy Kyzer of ClearanceJobs discuss the use of AI in the job search and application process, and how AI is already being discussed as a force multiplier to improve personnel vetting.

Sean Bigley (00:31):

Welcome back. This is Sean Bigley and Lindy Kyzer of ClearanceJobs.com. We’re talking this segment about artificial intelligence, or AI, and whether it has a place in the security clearance process. Lindy, I don’t know about you, but I feel like every time I have opened the news during the past year, I have heard or seen something about AI. Before a year ago, I’m embarrassed to admit, I didn’t even know what the term meant and had never heard anything about it. Clearly there are plenty of people in the tech space who have been paying attention to this for years, and maybe I’m an anomaly, but in terms of general public awareness, this seems like a really new thing. Am I the only one, or have you been paying attention to this longer?

Lindy Kyzer (01:16):

I think the pervasiveness of it is significant, and how quickly we’ve seen the shift. If we talked to a lot of folks in the tech space, they would not be surprised by what the past year has brought, because we’ve been building up to this. Now, honestly, it’s hard to look at something that isn’t AI powered, right? It’s harder to find something where you don’t have an AI tool or application. We even see it in the job search process. Within six months, we went from having a ton of different ways to build your resume and cover letter with AI, to submitting job applications with AI, to tools that will apply to jobs for you. So I think now we see this pretty extensively. Like all things, the technologies can be created, but whether the government can figure out how to use them for its own purposes and good remains to be seen.

(02:04)
But I think it’s a conversation worth having when it comes to the security clearance process, because we are in the midst of a shift in the personnel vetting process. We do have a new technological system powering the personnel security process through NBIS. We could see some lanes where AI is applicable to personnel vetting and could supplement or assist what’s happening, even from the investigative and adjudicative standpoint. Could we automate some of those things that right now are very much a boots-on-the-ground process? I think you could make a strong argument for an AI investigator coming to a clearance near you. I don’t know how far away we are from that, but you could make the argument, and I’m always about making the argument: is there an opportunity to innovate? Is there something here? On the flip side, there are a ton of concerns around DEIA issues (diversity, equity, inclusion, and accessibility), because humans are building these tools. So there are two sides. People think that a robot can make your interview and investigation less biased, but there’s somebody behind that robot who could bake in a whole host of biases that don’t necessarily make it better. So I’m curious: are there things that you wish an AI tool or solution had been offered and available to improve this process?

Sean Bigley (03:22):

The bias piece of it is interesting, and I want to come back to that in a second. But before I do, random side note here: I recently had an eye-opening experience with AI. My wife is a high school teacher, and she came home one day talking about how AI was being used to generate lessons and even reference letters. I was going, huh? She goes, yeah, check this out. She pulls this thing up, types in a couple of biographical details, and within half a second it spits out a pre-drafted, pretty professional-looking reference letter that she just then had to go in and make a couple of little tweaks to. She gets dozens and dozens of reference letter requests every year from high school students who are applying to college, and it’s always been a big hassle. And she’s going, well, at some point this may just become the future, where reference letters are obsolete because the assumption is they’re just written by a machine.

(04:15)
So what value is this really adding to the process? It made me stop and think, because obviously in the security clearance process, most of the time we use, or at least we think about, live references being checked, but in many cases that’s not what happens. In some of the lower-tier investigations, the government is still sending out written questionnaires to references and verifiers to be completed. So the question there becomes the same: is this being completed by a human, or is it being completed by a machine? And if it’s being completed by a machine, what’s really the value in doing it? So that’s an interesting side note about some potential utility or ramifications of AI in unexpected spaces. But coming back to the bias piece of it, you’re right: one of the things that has been getting some attention legally when it comes to AI is that, yes, these outputs are machine generated, but they initially require inputs from a human.

(05:17)
New York in particular has been kind of leading on this issue, passing the New York Automated Employment Decision Tool law earlier this year. I’ve written about this at ClearanceJobs. It is a fairly onerous requirement for employers, because now any employer that’s doing business in New York, or that’s potentially hiring people from New York, not only has to certify their compliance with this law annually, but they actually have to do audits. They have to hire outside entities to come in on an annual basis and audit their hiring systems, if they’re using AI, to prove that they aren’t screening out applicants on some illegal basis, a protected characteristic or something like that. There’s been a lot of criticism from employers saying, look, this is really no different than if we have a human sitting there reading applications. They’re going to have their biases. They’re going to have the potential candidates they’re looking for. So why now, all of a sudden, if we’re having a machine do the same work?

(06:13)
Do we have to pay to have somebody audit the system, when you couldn’t really require the same thing of a human? I think that’s valid criticism. The flip side is that when you’re doing this at scale, and we’re talking about large employers getting thousands and thousands of applications every month, I think it is legitimate to ask what inputs are being used to screen, and whether there is some bias baked into the system that might screen out people who were otherwise qualified for the job. I think we’re starting to see this a little bit in the continuous evaluation space. Obviously that has kind of an AI component to it, in a sense, though it ultimately has to be checked by a human. At the end of the day, from what we’ve heard, there are a lot of false alerts coming through, a lot of similar-sounding names and things like that. So you still have a human level in the process to ferret those things out. But I’m curious, I know you go to a lot of industry conferences and things like that. Are you hearing any chatter or interest in the industry or from the government about a desire to, as you said, replace investigators with an AI tool, or use some other tools to supplement the traditional boots-on-the-ground process?

Lindy Kyzer (07:29):

Have you been around the security industry recently, Sean? We’re not at the cutting edge. I work a ton with the IC and the defense industry. They’re all over this stuff, but security is a very risk-averse industry, and for good reason. I think that’s why, with things you see now like the Trusted Workforce 2.0 reform effort, there is some clashing that sometimes happens when it comes to reforming stuff, just because getting government, and especially government security, to change the traditional way it’s done things is tough. We joke all the time about how we’re working on a 1947 security policy framework, but that is very much the case. Our security framework is very much built on what we saw post-Manhattan Project, and the process of changing what that looks like is slow. And again, the good news is I actually think that framework is fairly good.

(08:27)
People talk all the time about, ‘Hey, we need to fix things by changing the adjudicative guidelines.’ I’m always like, I don’t think so. I think they look pretty solid. What we could see is process improvements. For me, it all comes down to process, and that’s where I think AI has definite advantages. There are definitely things the government could do. I would love to see some large data models around the adjudication process, because we talk about bias here, but how much do we actually know about the process? If there is bias baked into it now, AI could analyze a lot of that in a way that we are afraid to today, because we try to keep the process very anonymous, in the sense that we’re not asking applicants to turn over a lot of demographic data in the security process, even though they do to some extent in the federal hiring process.

(09:14)
But I think that’s where AI has an advantage. You could do some data analytics that would keep folks’ privacy intact, dig into some of this, surface some interesting information, and potentially automate some things, even in the adjudication process. And because we have an appeals process, as you know as an attorney, I think that makes a good use case for it. For the side of the process that does have an appeals process, at least, you have a human element, a judge who can look back and identify issues, but there is already automation in the adjudication process. So I think there are opportunities now to say, hey, how could we automate more, put more AI into both the investigation and the adjudication process, while making sure we still have those due process rights across the entire applicant pool? If we do that at large scale across a cleared workforce, I think there could be advantages. I don’t think people are talking about this yet, because it’s security, but hopefully somewhere people are, and I’m happy to talk about it.

Sean Bigley (10:10):

To your point about the lag time, that is a hundred percent accurate. Industry has always led government in the tech space, and I think that’s going to continue to be the case for the foreseeable future. But because a lot of the background investigation process is outsourced to industry, to contractors, there is some question, at least in my mind, about whether there’s room for efficiencies that AI could help with in the investigation process. Maybe not so much on the adjudication side, but certainly on the investigative side. Obvious places that could potentially benefit are things like records checks. That’s something you don’t necessarily need a human to do. Traditionally, that’s been the way it’s done: you’ve had an investigator sent out to physically look at an HR file, or a questionnaire sent out to an employer to fill out.

(11:02)
And those sorts of things, I think, lend themselves, or could lend themselves, to AI. The question would just be: where is the information coming from? You’d have to develop some sort of pipeline where defense industry employers submit their HR files, for example, to some sort of government-run repository that can be scanned by AI. That obviously raises some privacy issues and some other questions as well. But from a big-picture standpoint, I think that may be a discussion worth having. Is it worth the trade-off when we look at things like reducing processing times? I think a lot of people in industry would be receptive to that, so I’ll be curious to see where it goes. It’s definitely something we’re seeing a lot more of in the news, and I think it’s only a matter of time before this conversation reaches our airwaves.

Lindy Kyzer is the director of content at ClearanceJobs.com. Have a conference, tip, or story idea to share? Email lindy.kyzer@clearancejobs.com. Interested in writing for ClearanceJobs.com? Learn more here. @LindyKyzer