The most interesting moments in the Intelligence and National Security Alliance’s “Coffee & Conversation,” sponsored by Deloitte, weren’t about shiny demos or breathless AI hype. They were about the unglamorous, hard-to-fake work of turning AI into mission advantage inside a classified enterprise.

Two CIA leaders, Larry Taxson, Digital Capabilities Delivery Executive, and Israel Soong, Deputy Chief AI Officer, offered a rare, operationally grounded view into how the Agency is thinking about cloud, data, and AI as part of a single, integrated stack.

What emerged was a clear message: the future isn’t “AI as a tool.” It’s AI as a workflow. And the pace is accelerating.

Digital advantage means workflow transformation, not faster search

Asked what “digital advantage” looks like over the next 18 to 24 months, Taxson and Soong didn’t describe incremental productivity gains. They described a reshaping of how intelligence work happens.

Taxson framed the north star as human-machine teaming: AI handles triage, sorting, and pattern surfacing so officers can spend more time doing what humans do best, judgment and critical thinking. The vision: an analyst arrives in the morning and the system has already sifted and prioritized relevant intelligence for their accounts. Less scavenger hunt, more synthesis.

Soong pushed that even further: real advantage isn’t just speeding up existing tasks. It’s multiplying effectiveness by redesigning mission workflows, moving from “can AI help analysts search faster?” to “can agentic systems help identify intelligence gaps and propose collection strategies?” That shift creates space for analysts to think creatively.

“The biggest barriers aren’t technical. They’re culture and talent”

If you’ve been around federal modernization conversations long enough, you expect “culture” to show up. But Soong didn’t treat it as a throwaway line. He treated it as the central challenge.

His blunt assessment: the greatest barriers to AI adoption at CIA are cultural and talent-related, not technological. Transforming workflows requires decomposing work into what truly needs human judgment, what can be automated, and, critically, how you build trust in AI outputs.

On talent, the theme wasn’t simply “we need more people.” It was “we need mission experts who understand AI well enough to apply it correctly.” The Agency’s response includes internal literacy programs, such as an AI learning badge and training, plus a pitch to prospective hires: government may not match private-sector salaries, but it can offer mission impact money can’t buy.

Governance isn’t optional when the mission is “no fail”

As the conversation turned toward safeguards, covering manipulated data, trust, bias, and accountability, Soong emphasized that scaling AI means scaling governance.

He highlighted the need for systems that are robust, reliable, secure, and fair, and described a push toward formal governance mechanisms, including an AI governance board and an AI risk roadmap. The goal is straightforward: people won’t adopt AI they don’t understand or don’t trust, especially when decisions can have real-world consequences.

And for all the discussion of autonomy, one principle came through clearly: keep a human in the loop at the most critical nodes. Autonomy may grow, but accountability doesn’t get outsourced.
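
In software terms, that principle often shows up as an approval gate at designated decision points. The sketch below is purely illustrative, with invented action names; it describes a common pattern, not any actual Agency system:

```python
from typing import Callable

# Hypothetical set of actions treated as "critical nodes" that an
# autonomous workflow may never execute without human sign-off.
CRITICAL_ACTIONS = {"task_collection", "disseminate_report"}

def execute(action: str, payload: str,
            approve: Callable[[str, str], bool]) -> str:
    """Run an action, but block designated critical nodes on human approval."""
    if action in CRITICAL_ACTIONS and not approve(action, payload):
        return f"{action}: blocked pending human review"
    return f"{action}: executed on {payload}"

# Stand-in reviewer; in a real system this would route to a person.
human_reviewer = lambda action, payload: True

print(execute("summarize", "daily brief", human_reviewer))        # autonomous
print(execute("disseminate_report", "brief-42", human_reviewer))  # gated
```

The point of the pattern is that autonomy can expand freely everywhere except the gated nodes, where accountability stays with a named human.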

“Lab to factory” is the battle for scale and cost

One of the most important exchanges was around what Soong described as a “lab to factory” approach: empower innovation close to mission, “at the edge,” then scale successful solutions through enterprise platforms, “at the core.”

It’s the right model and also the messy one.

Taxson described the practical tension: you don’t want to stifle mission innovation, but you also can’t afford a thousand bespoke solutions doing the same thing slightly differently. That isn’t just a governance issue. It’s a cost and efficiency issue. Running AI workflows end-to-end across enterprise pipelines and multiple models can be expensive, and duplicative “blooms” across mission spaces can undermine economies of scale.

In other words: AI success isn’t just about capability. It’s about repeatability.

The foundation matters: data pipelines, RAG, and “plumbing”

For a community that loves to talk about models, Taxson kept returning to the basics: the AI stack rides on the IT stack. If the plumbing doesn’t work, nothing else matters.

His description of technical priorities sounded less like a moonshot and more like an architecture brief:

  • reliable data transport, “pipes,” from dispersed locations
  • secure enterprise data pipelines across structured and unstructured data
  • data enrichment and processing at scale
  • vectorization and retrieval approaches
  • reusable, containerized services and “models as a service”
  • UI layers that stop building new “apps for every dataset” and instead compose reusable services

Soong underscored the human side of that same point: tools like Retrieval Augmented Generation (RAG) and sound prompting practices don’t implement themselves. Officers need the training to take full advantage of what’s being delivered.
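
For readers outside the plumbing conversation, the retrieval step in RAG is conceptually simple: embed the query, find the most similar enterprise documents, and hand them to the model as context. Here is a minimal, self-contained sketch; the embedding function and toy documents are stand-ins for illustration, not anything the speakers described:

```python
import numpy as np

# Hypothetical embedding function; a real pipeline would call an
# embedding model. This stand-in just makes the sketch runnable.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(8)
    return v / np.linalg.norm(v)

# A toy "vector store": documents indexed by their embeddings.
documents = [
    "Report A: shipping activity near port X increased last week.",
    "Report B: new fiber infrastructure observed at facility Y.",
    "Report C: open-source chatter about supply disruptions.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: -float(q @ pair[1]))
    return [doc for doc, _ in ranked[:k]]

# Retrieved passages go into the prompt, grounding the model's answer
# in enterprise data rather than its training set.
context = retrieve("What changed at facility Y?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```

The hard part at enterprise scale isn’t this loop; it’s the pipelines, enrichment, and vectorization feeding it, which is exactly the “plumbing” Taxson kept returning to.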

Cloud: CIA’s early lead and the multi-cloud reality check

Taxson also offered a candid reflection on the cloud journey: the Agency’s long-running cloud partnership, dating back to early commercial cloud adoption, has enabled major migration of workloads. But the next chapter is multi-cloud integration, not just “multiple clouds.”

The practical reason is mission: different providers and partnerships can bring different strengths, including model ecosystems. The strategic reason is optionality: when the AI field changes weekly, you want the ability to swap in better tools fast without rebuilding everything from scratch.
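
Architecturally, that optionality usually comes from a thin abstraction layer between mission tools and model providers, so changing backends is a configuration switch rather than a rewrite. A minimal sketch, with hypothetical provider classes and no relation to CIA’s actual stack:

```python
from typing import Protocol

class TextModel(Protocol):
    """Any model backend the enterprise can route requests to."""
    def generate(self, prompt: str) -> str: ...

# Hypothetical backends; each would wrap a different cloud
# provider's model API behind the same interface.
class ProviderA:
    def generate(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"

class ProviderB:
    def generate(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"

BACKENDS: dict[str, TextModel] = {"a": ProviderA(), "b": ProviderB()}

def generate(prompt: str, backend: str = "a") -> str:
    # Swapping providers is a one-line config change, not a rebuild.
    return BACKENDS[backend].generate(prompt)

print(generate("Summarize today's reporting.", backend="b"))
```

Mission tools call `generate()`; which cloud answers is a routing decision that can change as fast as the model landscape does.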

What industry keeps getting wrong about the “high side”

This is where the conversation got particularly useful for vendors.

The message from both speakers: don’t assume “it works on the low side” translates to “it ports to the high side.” The first question to answer is whether it will run in air-gapped, classified environments and meet security requirements. Begin there.

And don’t just show up with a generic pitch. Taxson’s advice was simple: demonstrate mission-relevant use cases, prove it runs in constrained environments, and be ready to show why it’s meaningfully better than what’s already in place, whether more efficient, more cost-effective, or delivering superior mission impact.

The takeaway: CIA is moving faster, but deliberately

The throughline of this discussion was acceleration with intention. Soong described a shift from multi-year deliberations to rapid experimentation, using mechanisms like internal tech days to gather feedback quickly and integrate commercial models into the tools officers already use.

But the acceleration is bounded by realities that don’t care about hype: trust, governance, cost, and the basic physics of networks and data movement.

That’s the real signal from this panel: the AI future in intelligence won’t be won by who has the best model. It will be won by who can operationalize responsibly, at mission speed, inside the constraints of government.

Lindy Kyzer is the director of content at ClearanceJobs.com. Have a conference, tip, or story idea to share? Email lindy.kyzer@clearancejobs.com. Interested in writing for ClearanceJobs.com? Learn more here. @LindyKyzer