Hawaiian False Alarm and Cyberspoofing the Nuclear Decision Making Process

Intelligence

Saturday morning in Hawaii was an unsettling time for some and a terrifying time for others, and the nuclear false alarm broadcast statewide should be a cause for concern for everyone. But we might be asking the wrong questions.

Many wonder, “what if this had been the real thing?” It’s a fair question, as anecdotal evidence from Hawaii shows that most people have absolutely no idea what to do when a nuclear weapon is inbound. This is largely because the nation’s approach to nuclear preparedness since the 1980s has been “put your head between your legs and kiss your a** goodbye.” It’s a catchy bumper sticker, but like most bumper stickers, it makes for bad policy.

For those who want to know what to do in the highly unlikely event of a nuclear attack, it turns out “duck and cover” wasn’t just a propaganda exercise designed to calm nervous children. It’s actually a pretty good survival strategy. You can read more at Ready.gov if you’re that worried. Regardless, this is not the right question to be asking.

Who in Hawaii will pay the price?

It is also fair to ask how it was possible that, if you believe the Hawaii state government, an emergency management employee “pushed the wrong button” during a morning shift-change procedure. Certainly that’s what FCC Chairman Ajit Pai, who oversees the technical side of the Emergency Alert System, wants to know.

Someone should be held accountable both for the initial mistake (Hawaii’s emergency management chief took responsibility on camera Saturday afternoon) and for the fact that, while Congresswoman Tulsi Gabbard was able to tweet that it was a false alarm almost immediately, official channels took 38 minutes to do the same. Hawaii needs to take serious steps to prevent a repeat of this incident.

But although the false alarm caused some to panic, and went uncorrected for far too long, it is not an altogether bad thing that it happened.

On February 20, 1971, an errant teletype message containing the codeword HATEFULNESS was sent to broadcasters across the country. It ordered them to cease regular broadcasts and to activate the Emergency Broadcast System, forerunner of today’s EAS. Most stations believed it was a mistake and never broadcast the alert. Would you rather have to recover from a false alarm, or get caught in an actual emergency because your local station manager was a cynic? Discretion is a good thing, but in cases like these, the decision to disregard a warning is far above a station manager’s pay grade.

The real question to be asking is: what do false alarms, accidental or deliberate, mean for the nuclear decision-making process?

Cyberspoofs could dangerously cloud future decisions

Just last Thursday, researchers Beyza Unal and Patricia Lewis of Chatham House, the Royal Institute of International Affairs, a London think tank, published “Cybersecurity of Nuclear Weapons Systems: Threats, Vulnerabilities and Consequences.” While the authors argue that all aspects of the nuclear chain are vulnerable, from the manufacturing supply chain through to command-and-control systems, they raised one particularly sobering thought.

“In particular,” they warned, “successful cyber spoofing could hijack decision-making with potentially devastating consequences.”

As anyone who has served in the military or worked in an emergency response role will tell you, “first reports are always wrong.” You take decisive and irreversible action on a first report at great peril. The fear here is that a coordinated and sustained cyberattack could give decision makers, from the President on down, a false impression of actual events, causing them to make rash, ultimately incorrect, and irreversible decisions.

This is the kind of thing that keeps policy wonks awake at night.

One can only hope the U.S. continues to maintain a “wait and see” policy, under which we would not launch a nuclear counterattack on warning, but would wait until after the initial attack has actually hit. This is, after all, the purpose of maintaining a nuclear triad. “First use” and “first strike” capabilities are not synonymous: a first-strike capability means possessing enough nuclear weapons of the right type, aimed at the right targets, to prevent the enemy from retaliating.

When a nation is confident that at least some of its nuclear capability will survive an initial attack, there is no need to “use them or lose them.” This is, of course, subject to the temperament and inclinations of the people in charge.

There are lots of lessons to be learned from Saturday’s false alarm. They’re all good lessons, but hopefully the nation’s leadership is prioritizing them correctly.

Tom McCuin is a strategic communication consultant and retired Army Reserve Civil Affairs and Public Affairs officer whose career includes serving with the Malaysian Battle Group in Bosnia, two tours in Afghanistan, and three years in the Office of the Chief of Public Affairs in the Pentagon. When he’s not devouring political news, he enjoys sailboat racing and umpiring Little League games (except the ones his son plays in) in Alexandria, Va. Follow him on Twitter at @tommccuin
