Facial recognition technology has become a powerful tool for U.S. law enforcement agencies to identify and track suspects, fugitives, terrorists, and foreign agents. However, this technology also poses significant risks to national security, as adversaries could interfere with or influence its use for malicious purposes.

How Adversaries Could Interfere with or Influence Facial Recognition Technology

Facial recognition technology relies on algorithms that compare facial features across different sources of data, such as video footage, photographs, biometric databases, and social media platforms. However, these algorithms are not infallible, and adversaries could manipulate or compromise them in several ways, described in the sections below.
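To see where those weaknesses sit, consider a minimal sketch of the basic matching step, written under the assumption of a hypothetical embedding model (the embed_face function below is a placeholder, not any agency's actual system): each face is reduced to a numeric vector, and two faces are declared a match when the vectors are sufficiently similar.

```python
# Minimal sketch of a face-matching decision. embed_face() is a hypothetical
# stand-in for a trained model that turns a face image into a feature vector.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Placeholder: a real system would run a neural network here and return
    a fixed-length embedding for the detected face."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe_image, gallery_embedding: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when the probe face is close enough to the stored one.
    The threshold is the pressure point: spoofed inputs, tampered gallery
    records, or adversarial perturbations all aim to push the similarity
    score across this line in the wrong direction."""
    return cosine_similarity(embed_face(probe_image), gallery_embedding) >= threshold
```

Each of the attack methods below targets a different part of this pipeline: spoofing manipulates the probe image, hacking manipulates the stored data or the code, and training-time attacks manipulate the model itself.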

Spoofing

Adversaries could use masks, makeup, prosthetics, or other techniques to alter their appearance and evade facial recognition systems; in one widely reported case, a Brazilian gang leader tried to escape from prison by wearing a silicone mask and a wig to impersonate his teenage daughter. Alternatively, they could use photos or videos of other people to impersonate them and gain access to restricted areas or information.

Hacking

Adversaries could hack into facial recognition systems or databases and tamper with the data or the algorithms. For example, they could delete, modify, or insert facial data to create false matches or non-matches, or introduce malware or backdoors to compromise the security or functionality of the systems. In 2020, researchers from McAfee demonstrated that they could fool a facial recognition system similar to those used at airports by using 3D renders of faces based on publicly available photos. In 2021, scammers tricked facial recognition software used by the identity-verification company ID.me into verifying fake driver's licenses as part of a $2.5 million unemployment fraud scheme.

Training

One way that adversaries could manipulate the training data or the parameters of facial recognition algorithms is by using adversarial examples. These are specially crafted inputs that cause the algorithms to make incorrect predictions or classifications. For example, researchers from the University of North Carolina showed that they could trick facial recognition login systems with photos gathered from Facebook and other online sources. They used a 3D rendering technique to create realistic face models that matched the pose and expression of the target person, then added subtle perturbations to the face images that were imperceptible to the human eye but could fool the facial recognition systems.
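For illustration, the short sketch below shows the general adversarial-example idea using the fast gradient sign method (FGSM). This is not the North Carolina team's 3D-rendering technique; it assumes only that a differentiable face classifier (here a generic PyTorch model) is available.

```python
# Generic adversarial-example illustration (FGSM), not the specific attack
# described above. `model` is assumed to be any differentiable face classifier
# that maps an image tensor to identity logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                 true_label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` shifted by at most `epsilon` per pixel in the
    direction that most increases the model's loss for the correct identity."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel by +/- epsilon along the sign of the loss gradient,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a small enough epsilon the perturbation is invisible to a person, yet it can be enough to change which identity the classifier reports.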

These methods could have serious consequences for U.S. national security, as adversaries could undermine the effectiveness and credibility of facial recognition technology for criminal investigations, border security, or counterterrorism operations. They could also pose threats to the privacy and civil liberties of U.S. citizens or officials who are subject to facial recognition surveillance.

What Oversight and Accountability Mechanisms Are Needed

To address these risks, U.S. law enforcement agencies need to implement robust oversight and accountability mechanisms for the use of facial recognition technology. These mechanisms should include:

Standards

Adopt and follow common standards and best practices for the development, testing, deployment, and maintenance of facial recognition systems and databases. These standards should ensure the accuracy, reliability, security, and transparency of the systems and the data.

Regulations

Comply with existing laws and regulations that govern the collection, retention, and sharing of facial data. These laws and regulations should protect the privacy and civil liberties of individuals who are subject to facial recognition surveillance.

Audits

Conduct regular audits and reviews of their facial recognition systems and databases. These audits and reviews should monitor the performance, usage, and impact of the systems and the data.
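As one concrete example of what such an audit might measure, the sketch below computes the two standard biometric error rates, the false match rate and the false non-match rate, assuming the agency logs each match decision together with a later ground-truth determination (the logging format here is hypothetical).

```python
# Hypothetical audit metric: false match rate (FMR) and false non-match rate
# (FNMR) over a set of logged decisions with known ground truth.
from dataclasses import dataclass

@dataclass
class LoggedDecision:
    predicted_match: bool  # what the facial recognition system decided
    actual_match: bool     # what later investigation established

def audit_error_rates(decisions: list[LoggedDecision]) -> tuple[float, float]:
    """Return (FMR, FNMR) for the audit period."""
    impostor = [d for d in decisions if not d.actual_match]
    genuine = [d for d in decisions if d.actual_match]
    fmr = sum(d.predicted_match for d in impostor) / max(len(impostor), 1)
    fnmr = sum(not d.predicted_match for d in genuine) / max(len(genuine), 1)
    return fmr, fnmr
```

A sudden shift in either rate can signal degraded accuracy or tampering with the system or its data.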

Oversight

Establish independent oversight bodies or mechanisms that can oversee and regulate the use of facial recognition technology. These bodies should have the authority and expertise to investigate complaints, enforce compliance, impose sanctions, and recommend reforms.

Accountability

U.S. law enforcement agencies should be accountable for their use of facial recognition technology. They should disclose their policies and procedures for using such technology to the public and to relevant stakeholders. They should also report any errors, breaches, or abuses of such technology to the appropriate authorities.

These mechanisms could help prevent or mitigate the potential harms of facial recognition technology to U.S. national security interests. They could also strengthen the trust and confidence of the public and the courts in the legitimacy and legality of such technology.

Facial recognition technology is a double-edged sword for U.S. national security. While it can provide valuable benefits for U.S. law enforcement agencies in identifying and tracking adversaries, it can also pose significant risks if adversaries interfere with or influence its use for malicious purposes. U.S. law enforcement agencies therefore need to implement robust oversight and accountability mechanisms to ensure that the technology is used properly and lawfully.

Shane McNeil has a diverse career in the US Intelligence Community, serving in various roles in the military, as a contractor, and as a government civilian. His background includes several combat deployments and service in the Defense Intelligence Agency (DIA), where he applied his skills in assignments such as Counterintelligence Agent, Analyst, and a senior instructor for the Joint Counterintelligence Training Activity. He is a Pat Roberts Intelligence Scholar and has a Master of Arts in Forensic Psychology from the University of North Dakota. He is currently pursuing a Doctor of Philosophy degree in National Security Policy at Liberty University, studying the transformative impacts of ubiquitous technology on national defense. All articles written by Mr. McNeil are done in his personal capacity. The opinions expressed in this article are the author’s own and do not reflect the view of the Department of Defense, the Defense Intelligence Agency, or the United States government.