It’s never a good feeling. You think you’re a perfect fit for a job opening, so you go through all the right steps to tweak your resume and submit an application…only to be greeted with a wall of silence or a vague rejection from a prospective employer. The age of automation sounds great until it weeds you out before you ever get a chance to make your case.

So, what’s happening on the backend in this process? Hiring tools, referred to as automated decision systems, include algorithms that screen resumes for terms or patterns. These tools can prioritize applications containing certain keywords, use chatbots to “interview” applicants and check for requirements, and even analyze facial expressions and speech patterns in virtual interviews.
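As a rough illustration of the keyword-prioritization step described above (this is a minimal sketch, not any vendor’s actual implementation; the keywords, weights, and threshold are all invented for the example):

```python
# Minimal sketch of keyword-based resume screening.
# Keywords, weights, and the cutoff are invented for illustration;
# commercial tools are far more complex.
import re

JOB_KEYWORDS = {"python": 3, "security": 2, "leadership": 2, "cloud": 1}

def score_resume(text: str) -> int:
    """Score a resume by summing weights of matched keywords."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(JOB_KEYWORDS.get(w, 0) for w in words)

def screen(resumes: dict[str, str], threshold: int = 4) -> list[str]:
    """Return only the applicants whose score meets the cutoff."""
    return [name for name, text in resumes.items()
            if score_resume(text) >= threshold]
```

Anyone who knows the keyword list can game a screener like this, which is exactly the “cheat code” problem discussed below.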

Bias and the Law

The idea sounds great in theory: it eliminates multiple layers of human interaction, which takes time and capital. However, two questions have generated much discussion recently. First, do these tools really capture the best applicants for the position? Like any AI-driven system, cheat codes inevitably seem to follow. Common tactics include stuffing your resume with words that match the job description, studying the software used in virtual interviews to find out where the indicators lie, and using very specific words or phrases when texting with a chatbot. In other words, studying the test writer seems to be as important as the answers themselves. The second question concerns the legality of AI hiring, especially when considering factors like bias against protected classes and people with disabilities. Some cities (such as New York City) and states (Illinois) have already enacted laws limiting AI-based interviews and screening. In November of last year, the U.S. Equal Employment Opportunity Commission (EEOC) launched an initiative examining what’s wrong, what’s right, and what needs to be explored in AI-based hiring. Taking it one step further, just last week the Department of Justice (DOJ) and the EEOC jointly announced separate technical guidance on AI-based hiring as it relates to people with protected disabilities.

AI-Hiring Bias Examples

It’s all well and good to think about potential bias, but has it really happened? Here are some of the more well-known instances:

  • In 2018, Amazon scrapped an AI recruitment program because it had been trained on 10 years of resumes submitted mostly by men, and it learned the patterns in that data.
  • AI-powered tools that track a job candidate’s eye movement, intended to measure engagement, don’t account for disabilities that affect vision and eye contact. AI learns from vast numbers of example expressions and behaviors, but it is only as good as the data it is fed, and it rarely qualifies the outliers.
  • Significant gaps in employment history may automatically disqualify someone, even though the gap could be due to a serious disability or illness.
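The Amazon example above can be sketched in miniature: a screener trained on skewed historical data simply reproduces that skew. The “historical hires” and terms below are invented for illustration, and real recruitment models are far more sophisticated, but the failure mode is the same.

```python
# Sketch of how skewed training data yields biased screening.
# The historical resumes and terms are invented for illustration.
from collections import Counter

historical_hires = [
    "captain chess club software engineer",
    "software engineer chess captain",
    "software developer captain",
]

# "Learn" which terms past hires used. The model absorbs whatever
# skew the historical data contains, with no notion of fairness.
term_freq = Counter(w for r in historical_hires for w in r.split())

def score(resume: str) -> int:
    """Favor resumes that resemble past hires, skew and all."""
    return sum(term_freq[w] for w in resume.split())
```

A candidate whose resume uses terms absent from the skewed history scores zero, however qualified they are; without deliberately qualifying the outliers, the model never sees the problem.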

While it will be hard to reverse course and stop using AI-based hiring programs, the guidance does make clear that how, and to what extent, a company uses these tools should be explained thoroughly and upfront to candidates, giving them a chance to raise their concerns. And if you find yourself rejected the first time, it might be worth making a few tweaks and trying again.

Joe Jabara, JD, is the Director of the Hub for Cyber Education and Awareness at Wichita State University. He also serves as adjunct faculty at two other universities, teaching Intelligence and Cyber Law. Prior to his current job, he served 30 years in the Air Force, Air Force Reserve, and Kansas Air National Guard. His last ten years were spent in command/leadership positions, the bulk of which were at the 184th Intelligence Wing as Vice Commander.