At last month’s Advantage DoD 2024: Defense Data and AI Symposium, hosted by the Chief Digital and Artificial Intelligence Office in Washington, D.C., the Pentagon laid out the goals needed to support its “DoD AI Hierarchy of Needs,” a strategy focused on quality data, governance, insightful analytics and metrics, assurance, and responsible AI.

“Imagine a world where combatant commanders can see everything they need to see to make strategic decisions,” explained Craig Martell, the Department of Defense’s (DoD’s) chief digital and artificial intelligence officer. “Imagine a world where those combatant commanders aren’t getting that information via PowerPoint or via emails from across the [organization] — the turnaround time for situational awareness shrinks from a day or two to 10 minutes.”

Martell’s remarks came just days before the Pentagon announced that it had deployed machine learning algorithms to identify targets in more than 85 air strikes in Iraq and Syria this year. The DoD first launched Project Maven, which sought suppliers capable of developing object recognition software for footage captured by drones, in 2017.

The United States Central Command (CENTCOM), which oversees operations in the Middle East, Central Asia, and parts of South Asia, employed those AI algorithms in strikes across seven locations in Iraq and Syria.

Understanding the DoD AI Hierarchy of Needs

The Pentagon further laid out its strategy, which prescribes an agile approach to AI development and application, emphasizing speed of delivery and adoption at scale. The strategy identifies five specific decision advantage outcomes: superior battlespace awareness and understanding; adaptive force planning and application; fast, precise, and resilient kill chains; resilient sustainment support; and efficient enterprise business operations. It also serves as a blueprint that trains the department’s focus on several data, analytics, and AI-related goals.

To accomplish these goals, the DoD will need to invest in interoperable, federated infrastructure; advance the data, analytics and AI ecosystem; expand digital talent management; improve foundational data management; deliver capabilities for the enterprise business and joint warfighting impact; and strengthen governance and remove policy barriers.

“Winning for us is when everyone else thinks ‘I launched AI; I solved this data problem. I quickly leveraged data to build an analytical solution that solved my commander’s problem right away, and I have the tools, I have the infrastructure, I have the policies and I have the contract vehicles to deliver it,'” Martell added. “That’s winning.”

New Opportunities

The DoD’s AI Hierarchy of Needs could provide new opportunities for those with certain skill sets, namely in AI and machine learning.

“Knowing the general approach of DoD, we can expect that security, high fidelity and not being run in a cloud environment will be requirements for AI,” explained Melissa Ruzzi, director of Artificial Intelligence at AppOmni, a SaaS security provider.

“Based on this, a couple of new jobs related to AI may be needed,” Ruzzi told ClearanceJobs. “The most interesting [aspect] of this initiative is that these jobs will eventually be needed for a lot of companies using AI, but DoD may speed up the need, definition, and awareness around them. We see movement in this area of new jobs from the companies that create models, like Microsoft and Google. Next, we will see this change in the companies that use AI.”

It is likely that there will be a need for cybersecurity AI experts who can work not only on the security of the architecture and code used in AI implementations, but also on the security of AI’s interactions with users. The need to protect sensitive government data will be taken to another level with the use of generative AI.
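To make that interaction-layer security concrete, one plausible control is a filter wrapped around every model call, screening both what the user sends and what the model returns. The sketch below is purely illustrative: the patterns and the redact/guarded_query helpers are invented placeholders, not any real DoD data-handling rule set.

```python
import re

# Illustrative patterns only; a real deployment would enforce classified
# data-handling rules, not regexes invented for this sketch.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "coordinates": re.compile(r"\b\d{1,2}\.\d{4,},\s*-?\d{1,3}\.\d{4,}\b"),
}

def redact(text: str) -> str:
    """Mask anything matching a sensitive pattern."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def guarded_query(prompt: str, model) -> str:
    """Screen the user's prompt before the model sees it, and the reply after."""
    return redact(model(redact(prompt)))

# Example with a stand-in model that just echoes its input:
reply = guarded_query(
    "Soldier SSN 123-45-6789 reported from 34.5123, 69.1800.",
    model=lambda p: f"Echo: {p}",
)
print(reply)
```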

“In AI engineering, we can expect new jobs [requiring] expertise in developing and applying local models,” Ruzzi added. “This is an extra challenge, as a lot of the work that is done to keep models updated and tuned passes from the cloud providers to DoD itself. Another AI engineering job will be related to development of tools to apply advanced RAG (Retrieval Augmented Generation), as it is expected that the DoD will most likely not want to use third-party libraries that are still under development and have not been widely tested in production environments. They may want to go their own route and develop this in house.”
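For readers unfamiliar with the technique, RAG pairs a language model with a searchable document index: relevant passages are retrieved first and prepended to the prompt, which keeps answers grounded in local data. Below is a minimal sketch of that pipeline; the embed() and generate() functions are hypothetical stand-ins for the kind of in-house embedding model and locally hosted LLM Ruzzi describes, not any real DoD system.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Return a unit vector for `text` (placeholder for an in-house embedding model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(384)
    return vec / np.linalg.norm(vec)

def generate(prompt: str) -> str:
    """Call a locally hosted model (placeholder implementation)."""
    return f"[model response to {len(prompt)}-char prompt]"

class SimpleRAG:
    """Minimal retrieval-augmented generation pipeline."""

    def __init__(self, documents: list[str]):
        self.documents = documents
        # Pre-compute one embedding per document for retrieval.
        self.index = np.stack([embed(d) for d in documents])

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Cosine similarity reduces to a dot product on unit vectors.
        scores = self.index @ embed(query)
        top = np.argsort(scores)[::-1][:k]
        return [self.documents[i] for i in top]

    def answer(self, query: str) -> str:
        # Ground the model's answer in the retrieved local documents.
        context = "\n".join(self.retrieve(query))
        prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        return generate(prompt)

rag = SimpleRAG(["Report A ...", "Report B ...", "Logistics memo ..."])
print(rag.answer("What does Report A say about sustainment?"))
```

Building this in house, rather than adopting an immature third-party library, trades development effort for control over exactly where the documents, the index, and the model run.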

Ruzzi further suggested that, on the high-fidelity front, there could be a need to hire people to work on reinforcement learning from human feedback (RLHF), training models for the specific tasks the DoD will require, work that could entail running a great deal of manual testing.
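The labor-intensive part Ruzzi is pointing at is the feedback-collection step that RLHF builds on: human reviewers compare candidate model outputs, and the resulting preference pairs later train a reward model. A minimal, hypothetical sketch of that collection loop follows; sample_responses and the annotator are placeholders, not a real pipeline.

```python
import json
import random

# Hypothetical local model interface; RLHF assumes we can sample
# multiple candidate responses per prompt.
def sample_responses(prompt: str, n: int = 2) -> list[str]:
    return [f"[candidate {i} for: {prompt}]" for i in range(n)]

def collect_preferences(prompts, annotate):
    """Gather human preference pairs, the raw material for a reward model.

    `annotate(prompt, a, b)` returns 0 if response `a` is preferred,
    1 if `b` is -- in practice, a human reviewer behind a UI.
    """
    records = []
    for prompt in prompts:
        a, b = sample_responses(prompt)
        choice = annotate(prompt, a, b)
        records.append({
            "prompt": prompt,
            "chosen": a if choice == 0 else b,
            "rejected": b if choice == 0 else a,
        })
    return records

# Stand-in annotator that picks at random; a real pipeline replaces this
# with vetted human reviewers, which is where the manual effort comes in.
prefs = collect_preferences(
    ["Summarize the logistics report."],
    annotate=lambda p, a, b: random.randint(0, 1),
)
print(json.dumps(prefs, indent=2))
```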

“Given the level of impact that the answers of GenAI can have, we can expect that heavy manual testing will be needed to run on top of automatic testing,” Ruzzi continued. “Another area is AI training: there will be a need for technical writers and training content creators who are experts in AI so they can properly prepare people on how to use AI, safety guidelines, proper FAQs and more. And, with the assumption that the AI will not run in the cloud but in their local network, network engineers who understand AI data traffic will be needed to set correct configurations to respect the security needs and correct access to AI.”

The Missing Component

The Pentagon sought $1.8 billion for AI in fiscal 2024, and the department is now juggling hundreds of AI-related projects, including some associated with major weapons systems.

Yet what may be missing from the DoD’s hierarchy of needs is any notion of AI defense, warned technology industry analyst Rob Enderle of the Enderle Group.

“What I mean is the processes and methods they will use to protect the AI from hostile interference or manipulation,” Enderle told ClearanceJobs. “This is important for any AI implementation, but when you are talking about using AI for weapons systems and strategic planning, the AI becomes a target for both proactive and reactive attacks. Should the AI be compromised, what it controls or provides, in the form of information, could be corrupted to a degree where it effectively turns against its owner. That is certainly problematic in a civilian situation, but potentially deadly when we are talking about a defense implementation.”

Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.