Less than a decade ago, “artificial intelligence” was something most Americans heard only in the context of science fiction. In 2017, just 6% of companies reported using AI; by 2024, that figure had climbed to 78%, up from 55% the year before.

According to recent studies from Stanford and MIT, private investment in AI has also been surging, with the United States leading globally at $109.1 billion in 2024. The technology is now used across marketing, sales, service operations, and IT.

“The advances and potential use cases for adopting artificial intelligence (AI) technologies bring both new opportunities and new cybersecurity risks. While modern AI systems are predominantly software, they introduce different security challenges and risks than traditional software. The security of AI systems is closely intertwined with the security of the IT infrastructure on which they run and operate,” explained the National Institute of Standards and Technology (NIST) in its recently published white paper proposing a framework of control overlays for securing AI systems.

That framework is built on NIST’s Special Publication (SP) 800-53, which has seen widespread adoption for managing cybersecurity risks.

“I’m pleased to see NIST advancing cybersecurity guidance explicitly for AI systems through the proposed SP 800‑53 Control Overlays,” said Ensar Seker, CISO at cybersecurity provider SOCRadar.

Seker told ClearanceJobs that by tailoring well-established controls to scenarios like generative AI, predictive engines, and autonomous agents, NIST is giving implementers a practical bridge between risk frameworks and real-world AI use cases, whether in development or deployment.

“Importantly, the integration with existing documents like SP 800‑218A and the AI Risk Management Framework shows a thoughtful, layered approach that organizations can adopt without starting from scratch,” Seker added. “Equally valuable is the launch of the NIST Overlays Securing AI Slack channel.”

James McQuiggan, security awareness advocate at KnowBe4, also told ClearanceJobs that building on NIST SP 800-53 is a smart and effective move, as most organizations already know those controls.

“Processes and training are already in place, and users understand the language,” McQuiggan explained. “Now, NIST is integrating specific AI modifications to controls already in use, which is a bold step forward to securing programs involving artificial intelligence.”

That Slack channel is intended to create a collaborative, real-time forum where cybersecurity practitioners, AI developers, and policy experts can shape the overlays alongside NIST.

“This kind of transparent, peer-driven refinement elevates the chances the final guidance will be relevant, actionable, and responsive to emerging threats,” Seker suggested. “I strongly encourage stakeholders, especially those operating AI in sensitive environments, to provide input and help accelerate the maturity of AI-specific cybersecurity controls.”

More Could Be Done

However, some cybersecurity researchers have warned that more could still be done, given the increased adoption of AI by businesses of all sizes.

“The main goal of the use cases focusing on safeguarding the confidentiality, integrity, and availability of associated information seems based on security principles that intend to address many of the current concerns around AI,” Melissa Ruzzi, director of AI at AppOmni, told ClearanceJobs in an email.

Ruzzi suggested that the use cases be more clearly “defined,” notably in the AI approaches and algorithms that apply to them.

“For example, prediction could be built from both supervised or unsupervised algorithms, or even from a mix of both,” said Ruzzi. “False positive and negative rates are not directly quantifiable in unsupervised learning as they are in supervised learning, so the use case should define better details about the application before establishing metrics for it. There will always be additional areas and use cases to consider, given how fast AI is evolving, especially where different AI approaches are mixed.”
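
Her distinction is easy to see in practice. The sketch below, which assumes a toy dataset and scikit-learn models chosen purely for illustration (none of it comes from the NIST overlays), shows why false positive and negative rates fall straight out of a supervised classifier's confusion matrix but remain undefined for an unsupervised clusterer without extra labeling assumptions:

```python
# A minimal, illustrative sketch of Ruzzi's point (assumed dataset and
# model choices; nothing here comes from the NIST overlays themselves).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # ground-truth labels exist

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: labeled test data makes false positive/negative rates
# directly measurable from the confusion matrix.
clf = LogisticRegression().fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"supervised: FPR={fp / (fp + tn):.2f}, FNR={fn / (fn + tp):.2f}")

# Unsupervised: clustering yields arbitrary cluster IDs with no ground
# truth, so a "false positive" is undefined until extra assumptions are
# made (e.g., manually mapping each cluster to a class after the fact).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised: cluster sizes =", np.bincount(clusters))
```

In the mixed approaches Ruzzi describes, an overlay would need to specify which of these two regimes each metric applies to before meaningful targets could be set.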

Moreover, even though NIST has provided five detailed use cases for implementing and using AI within an organization, it may be overlooking a common weak link.

“What seems to be missing and usually is bolted on afterward is the human element,” added McQuiggan. “AI security isn’t just about protecting models and data; it’s also about the insider threat, Shadow AI. It’s about educating people not to paste sensitive information into public AI tools, and to be aware of and understand hallucinations and biases. To trust and verify. Controls are crucial to address user behavior.”

Finally, he told ClearanceJobs that NIST will need to present this information in a way that can be widely understood, even by a less tech-savvy audience.

“NIST should provide overlays to include guidance for organizations that aren’t cybersecurity mature, like SMBs,” he continued. “Otherwise, we’re only helping the companies that already have their act together.”

Peter Suciu is a freelance writer who covers business technology and cybersecurity. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.