As federal agencies explore how generative AI can support their missions, leaders are increasingly navigating a narrow path between innovation and the strict data-protection requirements that govern government systems. Many agencies still limit or prohibit the use of public AI tools such as ChatGPT, even as demand for these capabilities continues to rise.

In one recent example cited by Politico, a senior official at the Cybersecurity and Infrastructure Security Agency (CISA) requested permission to experiment with a public AI platform. In the course of that use, sensitive but unclassified contracting information was flagged by existing security controls.

The issue was surfaced through standard auditing and monitoring processes, leading the agency to conduct an internal review to evaluate any potential impact and reinforce existing safeguards.

Commenting on the situation, Dr. Jim Purtilo, an associate professor of computer science at the University of Maryland, said that, based on publicly available information, the use of a commercial AI platform raises legitimate concerns about data handling. He emphasized that information entered into public AI tools should be treated as non-private, while also pointing out that existing safeguards did flag the material, creating an opportunity to address the issue and reinforce training.

While the documents involved were not classified, experts caution that unclassified does not mean non-sensitive. Ensar Seker, chief information security officer at threat intelligence firm SOCRadar, explained that “for official use only” materials—particularly contracting documents—are sensitive by design and can expose internal processes, vendors, pricing structures, or operational dependencies if improperly handled.

Seker told ClearanceJobs that uploading such data into a public AI service creates an uncontrolled dissemination point where retention, reuse, or downstream exposure cannot be fully verified, even if no malicious intent exists.

Security and AI

Although ChatGPT isn’t approved for use by Department of Homeland Security (DHS) employees, they can use the agency’s self-built AI-powered chatbot, DHSChat. As Politico noted, most of the AI tools used by DHS are configured to prevent input documents from leaving federal networks.

Strong security is built on both robust technical controls and informed, empowered users across the network.

“This is a concern because the proper handling of classified and FOUO (now CUI) data enables the United States to protect information from the watchful eyes of our adversaries,” explained Lt. Gen. Ross Coffman (U.S. Army, Ret.), president of Forward Edge-AI.

“Once this information is placed in commercial Large Language Models, it becomes accessible to all,” Coffman told ClearanceJobs. “There are instances when combining multiple CUI/FOUO data sources can increase the security requirements to Secret classification or higher. LLMs enable this consolidation in an unclassified medium.”

The incident serves as a reminder that cybersecurity effectiveness reflects the combined impact of technology, policy, and human decision-making.

“What makes this case notable is that it wasn’t a lack of tools or awareness, but an exception granted,” said Seker. “Cybersecurity maturity isn’t defined by technology alone; it’s defined by consistent behavior, especially from leadership.”

Finally, a reminder that proper cybersecurity training remains crucial.

“Humans are our first line of defense,” Coffman continued. “Through proper training and education, we can become the strongest, not the weakest link.”

Learning, Safeguards, and Responsible AI Adoption

The episode highlights the value of layered safeguards and continuous oversight. Automated alerts, routine audits, and internal reviews functioned as designed, surfacing potential issues early and creating an opportunity to strengthen guidance, training, and governance. As agencies continue to adapt to rapidly evolving AI technologies, moments like these—when systems flag concerns and organizations respond—can ultimately reinforce security culture and support more responsible innovation going forward.


Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.