I’ve heard it said that we are playing football while our international adversaries are playing chess. That means they plan several moves ahead, while we hope to bull our way through. We believe we can push our way to victory, even if it requires gunfire. Better said, we expect victory to demand gunfire. We have more of it, so we’ll win!

Is this true? And why should cleared personnel care about the international scene? We should, rather, ask ourselves why we even need to pose this question. Most of our classified projects bear directly on war and peace.

Artificial Intelligence (AI) is an increasingly significant factor in today’s international diplomacy. As in sports, business, or warfare, we hope to stay a few steps ahead of our adversaries. We hope to out-think them, so that we don’t have to outshoot them. That falls in line with the Chinese philosopher Sun Tzu, who said the greatest victory is one where the winner doesn’t need to fight. He wins because he has created circumstances in which an adversary knows he cannot win, and thus seeks negotiations for peace, or surrender.

Chess players understand this. It is the difference between ‘check’ and ‘checkmate’. AI is a technological system which, it is hoped, will suggest possible future strategic moves. We hope AI will present us with the best options, options an untutored or uninformed adversary cannot see. Those options might be the key to victory. This bears study.

A recent Economist magazine article studied the massive Chinese investment in AI. AI rests on a form of machine learning. We hope to create smart machines which will respond appropriately to requirements, understand them faster than humans can, and offer options for decisive moves on the international scene. In our own lives we see this coming to light mostly in driverless ‘smart cars’, or in an IBM computer which plays chess. Most major countries of the world seek an advantage by increasing their abilities in this dimension. We don’t want just a robot, but a robot that can adjust to new requirements as it ‘learns’. Some hope to apply this to weaponry and arms negotiations. Some hope to have a means of countering another country’s weaponry on the battlefield. AI, it is hoped, will offer new options based on ‘learned’ experience.

In the field of diplomacy and negotiations, AI is being developed to store data that can then be applied to potential future moves. In short, it will show how to play the next round of international treaty negotiations. When a new move is made, stored intelligence will help generate an even better recommendation for the next counterproposal. AI might be deployed to counter incoming weapons whose target is unknown, or to recommend which supplier can better meet your company’s future manufacturing requirements in a new location. After the fiasco we all witnessed when a ship became stuck in the Suez Canal, what would we not pay to have AI anticipate, and guide us around, something similar in the future?

What AI Can’t Do

What AI can’t do also has a name: ‘Theory of Mind’. This is where a human brain is needed to assess an opponent’s intentions. The CIA spends billions learning how to count foreign missiles, tanks, and naval ships. We can identify whether a nation is developing a deep-water navy, but that data can’t say why. We still rely upon a human being to take these facts and analyze how to use them. Humans can anticipate threats, or invest in areas to better counter them.

The cleared project you work on could be part of one such consideration. Think about that. War and peace hang in the balance, so we need this to be right.

War was averted some years ago when Soviet ‘missiles’ were ‘seen’ in flight, approaching the US. Everyone short of the President was notified as the threat loomed. Finally, the error was identified: a ‘war game scenario’ had been run in error and appeared to be real. Fail-safe measures flagged the anomaly, and at the last minute a person recognized the potentially disastrous error, heading off what could have led to real-life counterfire.

Machines without human oversight are dangerous, and we need to learn from such events. AI has changed the course of security – but that doesn’t mean there isn’t still a game of human chess to play.


John William Davis was commissioned an artillery officer and served as a counterintelligence officer and linguist. Thereafter he was counterintelligence officer for Space and Missile Defense Command, instructing the threat portion of the Department of the Army's Operations Security Course. Upon retirement, he wrote of his experiences in Rainy Street Stories.