In the last few years, the hiring process has been undergoing a dramatic transformation as artificial intelligence becomes increasingly embedded in recruitment strategies. From screening résumés to conducting video interviews, AI promises to streamline operations, save costs, and reduce human error. However, beneath these advantages lies a more complex reality.

To err is human, but that doesn’t stop AI

AI, while efficient, can unintentionally amplify biases, obscure decision-making, and sideline qualified candidates. As companies lean into automation, it’s crucial to examine the potential pitfalls of AI in hiring and to ensure that the technology enhances, rather than undermines, fairness and inclusivity in recruitment. We must also stay clear-eyed about how these same tools are being used to subvert honest hiring practices: AI is being used to ‘deepfake’ job applicants and interviews, to defeat testing and verification processes, and even to create fake job announcements that scam job seekers, eroding trust in the hiring process.

Doomsaying aside, AI can bring significant benefits to the hiring process, such as speeding up candidate screening and reducing costs, but it can also introduce several problems, including:

1. Bias and Discrimination

AI systems often inherit biases present in the data used to train them. If historical hiring data reflects bias (e.g., favoring certain genders, ethnicities, or educational backgrounds), the AI can perpetuate or even exacerbate it. For example, a hiring algorithm might favor male candidates if the training data consists primarily of successful male applicants. Companies have seen this before and been able to learn and adapt; Amazon famously scrapped an experimental recruiting tool in 2018 after finding it penalized résumés that mentioned the word “women’s.” But the cost of discovery is high when it means losing prospective candidates.
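
To make the mechanism concrete, here is a minimal sketch in Python with made-up numbers: a naive screener that scores candidates by their group’s historical hire rate simply replays whatever skew the records contain.

```python
# Hypothetical historical hiring records as (group, hired) pairs.
# Group labels and counts are illustrative, not real data.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def selection_rate(records, group):
    """Fraction of applicants in `group` who were marked as hired."""
    hired = sum(1 for g, h in records if g == group and h)
    total = sum(1 for g, _ in records if g == group)
    return hired / total

# A screener that scores candidates by their group's historical hire
# rate reproduces the skew baked into the data.
for group in ("A", "B"):
    print(group, selection_rate(history, group))  # A 0.8, B 0.4
```

Any model trained to reproduce those labels inherits the same 80/40 split, no matter how neutral the algorithm itself is.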

2. Lack of Transparency

Many AI tools function as “black boxes,” making it difficult to understand how decisions are made. This lack of transparency can cause mistrust and make it hard to challenge potentially unfair decisions. Any brand that cares about its customers and employees understands the importance of transparency, putting concerns to rest by providing clear insight to stakeholders. An example of a lack of transparency is when a candidate is rejected based on AI scoring, but neither the recruiter nor the candidate understands why. This is even harder to detect because traditionally few candidates learn why they are no longer considered, or competitive, for a position, even when a human is doing the hiring.
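
Transparency does not have to be elaborate. As an illustration only, assuming a simple linear scoring model with hypothetical feature names and weights, a system can return each feature’s contribution alongside the total, giving both recruiter and candidate something concrete to question:

```python
# A minimal sketch of score transparency for a simple linear
# resume-scoring model. Feature names and weights are hypothetical.
WEIGHTS = {"years_experience": 0.5, "degree_match": 2.0, "keyword_hits": 0.3}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    """Return the total score plus each feature's contribution,
    so a recruiter can see *why* a candidate ranked as they did."""
    contributions = {
        name: WEIGHTS.get(name, 0.0) * value
        for name, value in features.items()
    }
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 6, "degree_match": 0, "keyword_hits": 4}
)
print(total)  # 4.2
print(why)    # {'years_experience': 3.0, 'degree_match': 0.0, 'keyword_hits': 1.2}
```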

3. Over-Reliance on Automation

Over-reliance on AI may lead to the exclusion of qualified candidates who don’t fit a strict algorithmic profile. Although this may sound similar to the way AI absorbs its programmers’ biases, the difference here is the lack of accountability. Uploading information or setting guidelines and requirements for an AI model is not a submit-and-forget exercise: the automated process needs to be monitored, tested, troubleshot, and verified. A candidate with unconventional experience or skills may be overlooked because their résumé doesn’t match predefined templates; once that interaction surfaces in the system, a programmer should be able to adjust it to accept those additional parameters.
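
Here is a minimal sketch of that monitor-and-adjust loop, with hypothetical screening rules rather than any vendor’s actual system: the filter’s parameters live in plain configuration so a reviewer can widen them when a qualified but unconventional candidate is wrongly screened out.

```python
# Hypothetical screening rules kept in plain configuration so a
# human reviewer can adjust them after spotting a false negative.
config = {"accepted_degrees": {"BS Computer Science"}, "min_years": 5}

def passes_screen(candidate: dict, cfg: dict) -> bool:
    return (candidate["degree"] in cfg["accepted_degrees"]
            and candidate["years"] >= cfg["min_years"])

candidate = {"degree": "BS Mathematics", "years": 8}
print(passes_screen(candidate, config))   # False: degree not in the template

# After human review flags the qualified candidate, widen the rule:
config["accepted_degrees"].add("BS Mathematics")
print(passes_screen(candidate, config))   # True
```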

4. Data Privacy Concerns

AI systems process large amounts of personal data, raising concerns about how that data is stored, used, and shared. With cyberattacks, ransomware, and user-data leaks on the rise, any company storing personally identifiable information should be prepared with strong IT defenses. This data is lucrative to hackers, who steal and sell it when companies dive into AI ill-prepared. Another risk is that a company’s AI tools, in analyzing candidates’ social media profiles, may inadvertently access or misuse sensitive information.

5. Inaccurate Assessments

AI might misinterpret or overemphasize certain data points, leading to false positives or negatives in candidate evaluations. A model can only draw on the information it was trained on and interpret it against the goals of its prompting; whatever it is asked to create, define, or assess is grounded entirely in that data. If something in the data is inaccurate or incomplete, the model tends to ‘create’ the rest, which is why internet-facing models now carry warning labels telling users to verify the output. A candidate’s facial expressions during a video interview, for instance, may be incorrectly interpreted as disengagement or dishonesty.

6. Limited Scope of Evaluation

AI often focuses on measurable, surface-level factors such as keywords in a résumé or tone in a video interview, and may overlook deeper qualities like creativity, adaptability, or cultural fit. Human bodies, voices, and features are still being translated into usable signals in AI systems (hence the weird hands and teeth in AI illustrations). Tone, facial expressions, and other factors can ‘confuse’ a model and produce an inaccurate reading of how a person is responding to a question, scenario, or exam. A candidate might have excellent problem-solving skills but be rejected because their résumé lacks certain buzzwords, or because they were nervous during an exam, test, or interview.

7. Potential for ‘Gaming the System’

Savvy applicants might tailor their résumés and applications to “game” AI systems, potentially gaining an unfair advantage over more qualified but less tech-savvy candidates. There have been instances of people using ‘deepfake’ technology to impersonate candidates during the interview process. Another tactic is ‘fluffing’ a résumé with unnecessary repetition in case the system looks for nothing beyond the prescribed keywords; the point of overusing keywords is to overload the scoring and rank higher in applicant tracking systems (ATS).
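
To see why keyword stuffing pays off against a simplistic ranker, consider this sketch of a purely count-based ATS score (the keyword list is illustrative; real systems are more sophisticated, but the incentive is the same):

```python
# A naive ATS score that just counts keyword occurrences. Repeating
# the terms inflates the score, which is exactly what stuffing exploits.
KEYWORDS = {"python", "leadership", "agile"}

def naive_ats_score(resume_text: str) -> int:
    words = resume_text.lower().split()
    return sum(1 for w in words if w in KEYWORDS)

honest = "Led a small team building Python services in an agile shop"
stuffed = "python python python agile agile leadership leadership python"

print(naive_ats_score(honest))   # 2  (matches "python" and "agile")
print(naive_ats_score(stuffed))  # 8  (pure repetition outranks substance)
```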

8. Unintended Legal Risks

The use of AI in hiring can lead to lawsuits if candidates feel they have been discriminated against or treated unfairly, and a lack of transparency raises the business’s side of the risk-versus-reward calculation. A business that is over-reliant on AI and indifferent to the risks involved will not be able to show its work when asked why a candidate was not hired; should an applicant allege discrimination, the AI model itself could become a factor, and evidence, in an investigation. AI screening tools that disproportionately reject candidates of a certain demographic could violate anti-discrimination laws.

9. Failure to Adapt to Nuance

AI struggles with nuanced decisions that require understanding context, such as evaluating a career gap or diverse professional experiences. Questions that would normally come up during an interview, and that often have legitimate answers, can become disqualifying factors when an automated system screens candidates. These nuances are another reason to keep humans involved at multiple levels of the hiring process, so that quality candidates are not disqualified because the ‘system’ cannot weigh life-impacting decisions and the consequences of real scenarios and events. A résumé gap due to caregiving responsibilities might be flagged by AI even though it is unrelated to job performance; a single clarifying question could have led to the hiring of a well-qualified candidate.
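
A blunt rule produces exactly this failure. The sketch below, with a hypothetical six-month threshold, flags every long gap identically because nothing in the data can express why the gap exists:

```python
from datetime import date

# A naive rule flags any employment gap longer than six months,
# with no way to capture context such as caregiving. Dates are
# illustrative.
GAP_LIMIT_DAYS = 180

def flag_gaps(jobs: list) -> list:
    """Return indices of gaps between consecutive (start, end) jobs
    that exceed the limit."""
    flags = []
    for i in range(len(jobs) - 1):
        gap_days = (jobs[i + 1][0] - jobs[i][1]).days
        if gap_days > GAP_LIMIT_DAYS:
            flags.append(i)
    return flags

jobs = [(date(2018, 1, 1), date(2020, 6, 30)),   # role 1
        (date(2022, 1, 1), date(2024, 1, 1))]    # role 2, after caregiving
print(flag_gaps(jobs))  # [0] -- flagged, regardless of the reason
```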

Addressing these challenges

These nine challenges can all be mitigated with the right processes in place. Programming errors can be reduced by keeping humans involved in the process. Data privacy concerns can be addressed by a strong cybersecurity team, which should be commonplace in the current climate of cyberattacks but oddly is not. Whatever the specific mitigations, any organization using an AI system should have qualified teams ready to address these potential issues.

Here’s what’s required of companies:

  • Audit AI tools regularly for bias and fairness (a minimal audit sketch follows this list).
  • Maintain human oversight in decision-making processes.
  • Use diverse, high-quality training datasets.
  • Ensure transparency in AI systems and their outcomes.
  • Comply with data privacy and anti-discrimination laws.
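
On the first point, one widely used starting point for a bias audit is the EEOC’s “four-fifths” rule of thumb: the selection rate for any group should be at least 80 percent of the highest group’s rate. A minimal sketch, with hypothetical group labels and counts:

```python
# Four-fifths rule check: flag groups whose selection rate falls
# below 80% of the best-performing group's rate. Counts are made up.
def adverse_impact(selected: dict, applied: dict, threshold: float = 0.8) -> dict:
    """Return {group: impact_ratio} for groups below the threshold."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

applied = {"group_a": 200, "group_b": 180}
selected = {"group_a": 60, "group_b": 27}
print(adverse_impact(selected, applied))  # {'group_b': 0.5} -- investigate
```

Running a check like this on every model release, and keeping the results, is far cheaper than discovering disparate impact during litigation.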

While AI continues to reshape the hiring landscape, it’s not without its flaws. Bias, lack of transparency, and overreliance on automation are just some of the challenges that can hinder its effectiveness and fairness. These can be balanced out by companies that also employ recruiters to act as the human points of contact and liaisons during the application process. Organizations must remain vigilant, combining the efficiency of AI with the nuanced judgment of human oversight. By addressing these issues proactively, companies can harness the power of AI while fostering an equitable and transparent hiring process. The key lies in striking a balance: leveraging technology to complement, not replace, the human touch in recruitment.

Aaron Knowles has been writing news for more than 10 years, mostly working for the U.S. Military. He has traveled the world writing sports, gaming, technology and politics. Now a retired U.S. Service Member, he continues to serve the Military Community through his non-profit work.