What happens when truth is automated, deception is scalable, and your data source might be a chatbot trained to manipulate? That’s the future of Open Source Intelligence (OSINT), and it took the stage at the recent AI Expo in Washington, D.C., where national security leaders, data visionaries, and intelligence veterans wrestled with how to counter AI-driven deception and keep pace in an era where information moves faster than facts.
The panel, “The Future of OSINT: Countering AI-Driven Deception and Emerging Threats,” hosted by the Intelligence and National Security Alliance, brought together key voices: Brian Drake, Chief Technology Officer at Accrete AI; Ellen McCarthy, former Assistant Secretary of State for Intelligence and Research (INR) and current chairwoman and CEO of the Trust in Media Cooperative; Kevin Carlson, executive director of the Open Source Enterprise at the Central Intelligence Agency; and Linda Weissgold, former CIA Deputy Director for Analysis. The conversation confronted the role of AI in national security and how it demands more than algorithms; it demands accountability.
Open Source Intelligence Is Not Cheap—And It’s Not Easy
“There’s this belief that because it’s publicly available, OSINT should be cheap. It’s not,” said Weissgold. “Data is expensive. Acquisition, storage, curation—all of that takes resources. And if we want analysts to focus on insight, someone else needs to focus on sourcing.”
McCarthy echoed that sentiment, describing OSINT as more than an input—it’s a discipline that needs professional handling. “You need to know what’s in the information—its provenance, credibility, and timeliness,” she said. “We’re not telling people what to believe—we’re giving them the agency to make that decision with transparency.”
AI Isn’t a Shortcut. It’s a Scalpel—If You Know How to Use It
“We are not in the business of validating if every data point is true. We’re in the business of characterizing what we collect,” said Carlson. “Is this synthetic? Is it biased? What’s the source’s track record? We treat open sources like any other intelligence collection—carefully.”
But the scale of AI-generated data is a double-edged sword. While adversaries embrace risk and speed, the U.S. is hamstrung by governance and values. “Our adversaries are putting AI on targetable platforms with minimal oversight,” Drake pointed out. “Are we falling behind because we insist on ethics?”
“Would you jump off a cliff just because your adversaries did?” Carlson shot back. “We have to care about doing the right thing—even if it means admitting when we get it wrong.”
Weissgold added a sobering anecdote: “I’ve had to sit in front of a President and explain when we got it wrong. That moment makes you appreciate every layer of caution.”
Who Owns OSINT? And Who Teaches It?
The panel raised a key structural question: the IC was built for strategic surprise and secrecy. But what about tactical threats—human trafficking, cybercrime—facing law enforcement and civil society? They need intel, too. Is the current model too slow for the moment?
Weissgold suggested OSINT acquisition be centralized to avoid duplication and improve access: “We shouldn’t be buying the same commercial dataset five times. And it’s not just about the data—it’s the tools and access.”
Teaching those tools is another challenge entirely. “I teach graduate students, and they are not nearly as AI-native as we think,” said Weissgold. “They don’t know how to prompt engineer. They’re not used to using AI critically. And here’s the scariest stat: 54% of U.S. adults read below a sixth-grade level. How do we teach digital literacy when basic literacy is lacking?”
Carlson emphasized that digital readiness must be layered into analyst training. “We don’t need everyone to be an AI expert, but they need to understand how AI works, how to use it, and how to recognize its limits.”
From Content Consumers to Content Providers
McCarthy argued the Intelligence Community needs a mindset shift: “Instead of secrets, we should be measured on getting content to decision-makers—fast, accurately, at the right moment. If we were built like a content delivery service, the incentives, training, and tools would follow.”
It’s a radical notion—less need-to-know, more need-to-deliver.
Policy, Politics, and the Pressure to Be First
Information isn’t just about facts anymore—it’s about timing. As Weissgold noted, “There’s a pressure for policymakers to comment instantly. When Assad fell, reporters were asking President Biden for a response within an hour. That speed isn’t compatible with responsible analysis.”
It’s not just a media problem—it’s a cultural shift, and not a new one. As McCarthy recalled from her time during Desert Storm, “My job was to monitor CNN and report what Wolf Blitzer was saying—so our intelligence briefers didn’t repeat it.”
What’s Next for OSINT?
From AI hallucinations to fake summer reading lists, the panelists stressed that our ability to distinguish real from synthetic is only as strong as the human behind the machine. “Until I can explain why the AI made a choice,” said Weissgold, “I’m not ready to trust it.”
In the end, OSINT isn’t just a tool for countering disinformation. It’s a battlefield where values, speed, and truth collide. The future isn’t about replacing analysts with AI—it’s about training analysts to ask better questions, and giving them the tools—and the time—to find real answers.
Because as Carlson reminded the audience, “The worst-case scenario isn’t being slow. It’s being wrong.”