The top artificial intelligence officer at the United States Space Force called for the sixth and newest branch of the United States military to increase its adoption of AI, including in the daily work of service members.
“My two top priorities for the United States Space Force are accelerating adoption of artificial intelligence and data capabilities, and my second is putting tools and capabilities into the hands of warfighters,” Chandra Donelson, the service’s chief data and artificial intelligence officer, said while speaking at the National Defense Industrial Association’s emerging technologies conference last week.
The U.S. Space Force released its strategic action plan in March, calling for the service to become more data-driven and AI-enabled. The plan is part of a wider U.S. Department of Defense effort to integrate AI into nearly all aspects of military operations.
However, even as the Pentagon has claimed that AI is necessary to aid in the collection and analysis of the vast quantities of data produced daily, concerns remain that, unchecked, AI poses a significant threat.
The Rise of the Machines Is Becoming Real
What was once just a plotline from movies like The Terminator and The Matrix, where the machines rose against their human masters, is now becoming a real threat, warn experts from numerous fields.
“We’re entering a new world of artificial intelligence and emerging technologies influencing our daily life, but also influencing the nuclear world we live in,” Scott Sagan, a Stanford professor and expert on nuclear disarmament, said during a meeting of Nobel laureates at the University of Chicago in July, Wired magazine reported.
One great danger is that AI models tend to escalate aggressively in simulations and war games.
“The AI is always playing Curtis LeMay,” Jacquelyn Schneider told Politico, referencing the U.S. Air Force general known for his hawkish embrace of nuclear escalation during the Cold War. “It’s almost like the AI understands escalation, but not de-escalation. We don’t really know why that is.”
It can seem as though the AI’s goal is to win or die trying. Anyone who has played computer-based strategy games knows that the computer opponent often acts aggressively, even when it cannot win. AI is a more advanced version of those computer opponents, but it follows a similar logic.
Humans Know When to Fold ’em
Computer versions of strategy games such as chess are built around finding the moves most likely to lead to victory and reacting to whatever the human player does. Their difficulty can be scaled to give humans a chance, even a good chance, of winning.
A military AI, on the other hand, may seek only to win. It cannot tell when a human opponent is bluffing; it simply calculates the odds. If those odds suggest a strong chance the human is bluffing, it will press on and go all in.
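To see why a purely odds-driven agent keeps pressing, consider a minimal, hypothetical sketch. It is not drawn from any fielded military system; the function name, probabilities, and payoff values are illustrative assumptions. The agent simply compares expected payoffs, with no separate notion of de-escalation.

```python
# Minimal illustration of a purely odds-driven "press on or back down" decision.
# All names and numbers are hypothetical, chosen only to show the logic.

def choose_action(p_bluff: float,
                  payoff_win: float = 10.0,
                  payoff_loss: float = -100.0,
                  payoff_fold: float = -1.0) -> str:
    """Pick whichever action has the higher expected payoff.

    p_bluff     -- estimated probability the opponent is bluffing
    payoff_win  -- payoff if we press on and the opponent was bluffing
    payoff_loss -- payoff if we press on and the opponent was not bluffing
    payoff_fold -- small certain cost of backing down
    """
    expected_press = p_bluff * payoff_win + (1.0 - p_bluff) * payoff_loss
    return "press on" if expected_press > payoff_fold else "back down"

# Even with a catastrophic downside, a high enough bluff estimate tips the
# calculation toward escalation -- nothing in the rule rewards de-escalation.
for p in (0.5, 0.9, 0.95):
    print(p, choose_action(p))
```

Under these illustrative numbers, the agent backs down at a 50 percent bluff estimate but presses on at 95 percent, even though the downside of being wrong is catastrophic. The point is that an odds-only rule has no concept of stepping back for its own sake.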
A February 2025 commentary from the Brookings Institution noted three cases in which the world came close to nuclear war: the 1962 Cuban Missile Crisis, the September 1983 false-alarm crisis, and the October 1983 Able Archer exercise. In all three cases, cooler heads prevailed and the world avoided destruction.
The 1962 crisis turned on each side judging whether the other was truly willing to go to war.
In the September 1983 false-alarm crisis, a Soviet watch officer correctly judged that the sensors warning of an incoming attack might be wrong. Stanislav Petrov reasoned that the five intercontinental ballistic missiles (ICBMs) the sensors reported were far too few for a genuine U.S. first strike. Just a month later, the Soviets feared that NATO’s Able Archer exercise might be cover for a real attack, and Soviet forces escalated in response.
It was U.S. de-escalation that defused that situation. As the Brookings commentary warned, it is unclear whether an AI would make the same decisions. An AI might have concluded that the other side was bluffing in 1962, and in both 1983 cases that a real attack was underway.
The AI Escalation
No one is looking to give AI programs like ChatGPT or X’s Grok access to nuclear codes. However, AI is already being slowly adopted as a force multiplier. That includes the Collaborative Combat Aircraft (CCA) program, whose drones will fly as “loyal wingmen” alongside the U.S. Air Force’s fifth- and sixth-generation manned fighters. Those aircraft could, in the future, support a manned fighter by engaging the enemy.
For now, that may require a human pilot to order the AI-powered unmanned aerial systems (UAS) to engage, but at some point, will the AI be trusted to identify threats on its own? If so, it could inadvertently escalate an already tense situation.
One misstep could then trigger a chain of events, some of them AI-initiated, each step escalating the crisis further.