Many of the leading voices in the tech world have expressed concern over the development of autonomous weapons and other systems that employ artificial intelligence (AI) and machine learning. However, the U.S. Department of Defense (DoD) already has policies in place that incorporate the Pentagon’s vision for ethical AI and that require additional reviews for such systems.

Last month, the Pentagon announced an update to DoD Directive 3000.09, “Autonomy in Weapon Systems,” saying the update reflects its strong and continuing commitment to being a transparent global leader in establishing responsible policies regarding military use of autonomous systems and AI.

The update is also meant to reflect changes within the Pentagon over the past decade, as well as changes in the wider world. DoD rules require directives to be reissued or updated at set intervals.

“DoD is committed to developing and employing all weapon systems, including those with autonomous features and functions, in a responsible and lawful manner,” said Deputy Secretary of Defense Dr. Kathleen Hicks. “Given the dramatic advances in technology happening all around us, the update to our Autonomy in Weapon Systems directive will help ensure we remain the global leader of not only developing and deploying new systems but also safety.”

Addressing Unintended Engagements

The directive was initially established to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements, the Pentagon noted.

Among the requirements established in the directive: autonomous and semi-autonomous weapon systems must be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. In addition, persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.

The directive also calls for assurance that a weapon system has demonstrated appropriate performance, capability, reliability, effectiveness, and suitability under realistic conditions. Moreover, it requires that the design, development, deployment, and use of systems incorporating AI capabilities remain consistent with the DoD AI Ethical Principles and the DoD Responsible AI (RAI) Strategy and Implementation Pathway.

DoD’s AI is on the Right Track

Experts have praised the DoD’s efforts to ensure the ethical use of AI, but note that more could still be done.

“There is specific language in section three that calls for the system to be tested for resiliency in contested cyberspace and that weapons employing AI are tested to verify it is robust,” explained David Maynor, senior director of threat intelligence at cybersecurity training firm Cybrary.

“To be completely honest these two requirements are very hard to solve individually. Trying to test for both conditions is nearly impossible with today’s technology,” Maynor told ClearanceJobs. “Could you imagine being a developer or QA engineer on these projects? Most people deal with bugs that seem life threatening but these are life-ending potential.”

The DoD will also have to ensure that its policies keep up with the latest advances in AI development.

“Policy changes are inevitable with the adoption of any new technology,” said Tim Morris, chief security advisor at cybersecurity firm Tanium.

“As AI matures, as we’ve seen with tools such as ChatGPT, the notion of free-thinking sentient technology with a potential to do harm creates a significant moral conundrum,” Morris also told ClearanceJobs. “We can expect to see ongoing ethical debates and discourse as policymakers try to keep pace with the weaponization of AI.”

Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.