At The Cipher Brief Conference, Katrina Mulligan, Director of National Security Partnerships at OpenAI, offered a candid and compelling look at how artificial intelligence is reshaping both technology and the national security landscape, and how fast it’s happening.
“It’s just a really different cultural experience,” Mulligan said. “Roughly half of our research team aren’t U.S. persons. They come from around the world. And while they share certain democratic values, there’s no single worldview. That’s part of what makes this work both powerful and complicated.”
Her comments underscored the reality that AI isn’t a national effort; it’s a global one. That creates opportunities for collaboration, but also new layers of complexity and risk.
From GPT-3.5 to GPT-4: A Rapid Evolution
When Mulligan joined OpenAI, ChatGPT had just emerged and was barely six months old. “ChatGPT 3.5 felt like a junior-high student,” she said. “Then GPT-4 arrived, and suddenly it was passing the bar exam.”
That leap from conversational chatbot to near-expert reasoning happened in under two years. A pivotal breakthrough came when engineers taught the model to think before responding. “That changed everything,” Mulligan said. “The model went from reacting instantly to actually reasoning. That single shift moved us from college-level performance to superhuman outputs on many benchmarks.”
Now, OpenAI’s models can perform complex research that once required teams of analysts and days of work in under 30 minutes. “That kind of cognitive scaling is something humanity has never experienced before,” she said.
AI and the National Security Imperative
For Mulligan, the implications are profound. “We intentionally started our national security partnerships with the national labs,” she explained. “They were ready, and they understood what was coming.”
Within months, OpenAI went from contract signing to running a full-scale model on supercomputers. “That speed is unheard of in government,” she said. “But it’s exactly the pace this moment demands.”
These collaborations aren’t just about access to cutting-edge models; they’re about closing the knowledge gap. “We’re creating custom training for the national security community, meeting them where they are, helping them see both the risks and the opportunities firsthand,” Mulligan said. “You can’t get smart about AI if you’re not using it.”
AI Governance: A Race to Understand
That learning curve extends well beyond the intelligence community. A recurring theme at the Cipher Brief panel on AI and the world of cyber threats was how far Washington still has to go.
Glenn Gerstell, former general counsel of the National Security Agency, noted the gap bluntly: “No surprise that in Congress we don’t have the technical fluency to deal with these problems. The average age of members is 58, and just 10 have engineering backgrounds. If we fail to act, we do so at our peril.”
That lack of fluency, several panelists warned, feeds a sense of fatalism. “Fatalism feeds into inaction,” Gerstell said. “And political dysfunction doesn’t help.”
Meanwhile, Cynthia Kaiser, a senior vice president at Halcyon and a former senior FBI cyber executive, cautioned that “the way we assessed [threats] before isn’t the way we need to look at them now.” From cognitive warfare to algorithmic manipulation, the rules are shifting.
As Jennifer Ewbank, former CIA deputy director for digital innovation, summarized: “Platforms are becoming policy. Algorithmic cognitive warfare is already here.”
Private industry, argued Jon Darby, retired director of operations at the NSA, can’t wait for government to lead. “We’ve waited too long for the government to take the lead,” he said. “There are things the private sector can and must do together.”
Humility and Urgency
Mulligan echoed the panel’s warnings with a note of humility. “We’re talking about world-changing technology, and we don’t have all the answers,” she said. “But the worst thing we could do is pretend we do.”
That humility also shapes OpenAI’s relationship with policymakers. “This is the first technological revolution where the U.S. government hasn’t had a seat at the table from the start,” she said. “That makes it harder for policymakers to understand the moment and how fast they need to move.”
She’s not alone in that concern. Many in the national security community fear that without faster public-private collaboration, America’s AI advantage could erode. “We’d be wise not to underestimate China,” Mulligan said. “Export controls matter—but we have to innovate faster. Our advantage has always been creativity, openness, and speed.”
Looking ahead, Mulligan sees 2026 as the year of “enterprise transformation” in government. “Right now, chat interfaces are the tip of the spear,” she said. “But real transformation will come from integrating AI into workflows, finding the three to five processes that define your mission and reinventing them with AI.”
Each agency, she said, will need to determine what that looks like. “It’s about transforming the way acquisition, intelligence, and decision-making happen. That’s the next frontier.”
Mulligan’s message was clear: the U.S. can’t afford to sit on the sidelines. “AI isn’t waiting for us to understand it,” she said. “We have to engage—curiously, humbly, and fast.”