The Cybersecurity and Infrastructure Security Agency (CISA) joined the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) in publishing new guidance on four key principles to help critical operational technology (OT) owners and operators understand the risks of integrating artificial intelligence (AI) into OT environments. The guide, “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” was designed around the Purdue Model framework, which describes the hierarchical relationships between OT and IT devices and the networks that connect them.

“AI holds tremendous promise for enhancing the performance and resilience of operational technology environments – but that promise must be matched with vigilance,” said CISA Acting Director Madhu Gottumukkala. “OT systems are the backbone of our nation’s critical infrastructure, and integrating AI into these environments demands a thoughtful, risk-informed approach. This guidance equips organizations with actionable principles to ensure AI adoption strengthens, not compromises, the safety, security, and reliability of essential services.”

The joint guide was further developed with collaboration from the National Security Agency’s Artificial Intelligence Security Center, the FBI, the Canadian Centre for Cyber Security, the German Federal Office for Information Security, the Netherlands National Cyber Security Centre, the New Zealand National Cyber Security Centre, and the United Kingdom National Cyber Security Centre.

The Four Principles

CISA and the ACSC outlined four key principles:

  • Understanding the unique risks and potential impacts of AI, and ensuring secure development across its lifecycle.
  • Assessing AI use in OT, including evaluating business cases, managing OT data security risks, and addressing immediate and long-term integration.
  • Establishing AI governance that implements governance frameworks, continuously tests AI models, and ensures regulatory compliance (see the sketch after this list).
  • Embedding safety and security, which includes maintaining oversight, ensuring transparency, and integrating AI into incident response plans.
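
The guide itself stops short of prescribing tooling, but the “continuously tests AI models” element of the third principle can be illustrated with a short sketch. Everything below – the model interface, the baseline scenarios, and the tolerance – is an illustrative assumption, not anything mandated by CISA or the ACSC:

    # Minimal sketch of continuous model validation in an OT setting.
    # Hypothetical: the model interface, scenario fixtures, and the
    # tolerance are illustrative, not drawn from the joint guidance.

    def validate_model(model, scenarios, tolerance=0.02):
        """Re-run a model against known-safe reference scenarios and
        flag any output that deviates beyond an engineered tolerance."""
        failures = []
        for name, inputs, expected in scenarios:
            actual = model(inputs)
            if abs(actual - expected) > tolerance:
                failures.append((name, expected, actual))
        return failures

    # Example: a setpoint model checked against commissioning baselines.
    baseline_scenarios = [
        ("normal_load", {"flow": 120.0, "temp": 55.0}, 0.62),
        ("peak_load",   {"flow": 180.0, "temp": 71.0}, 0.88),
    ]

    if __name__ == "__main__":
        def demo_model(x):
            return 0.005 * x["flow"] + 0.0004 * x["temp"]
        for name, exp, act in validate_model(demo_model, baseline_scenarios):
            print(f"ALERT: {name} expected {exp:.2f}, got {act:.2f}")

Run on a schedule, a check like this turns “continuously tests AI models” from a governance phrase into an operational alarm.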

“The integration of AI into critical infrastructure brings both opportunity and risk,” said Nick Andersen, executive assistant director for cybersecurity at CISA. “While AI can enhance the performance of OT systems that power vital public services, it also introduces new avenues for adversarial threats.”

Andersen added that CISA, working “in close coordination with” U.S. and international partners, remains committed to providing clear, actionable guidance.

“We strongly encourage OT owners and operators to apply the principles in this joint guide to ensure AI is implemented safely, securely, and responsibly,” added Andersen.

OT Dangers Remain Real

Cybersecurity experts were quick to weigh in on the announcement. Damon Small, a board member at cybersecurity provider Xcape, Inc., told ClearanceJobs in an email that the dangers outlined in the guide are very real in OT, because AI systems can fail or be manipulated in ways that have physical repercussions.

“This represents the continuing evolution of the Purdue Model since its adoption by industrial operators in the 1990s,” Small explained.

He noted that data drift can gradually undermine control decisions, corrupted sensor data can force models into unsafe states, and adversarial attacks or tampering with the model supply chain can create unforeseen vulnerabilities that bypass traditional safety measures.
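
A rough sketch of how an operator might watch for the failure modes Small describes appears below; the rolling window, z-score threshold, and plausibility range are assumptions for illustration only, not values from the guidance:

    # Minimal sketch of input-drift and corrupted-sensor detection.
    # The window size, z-score threshold, and plausible range are
    # hypothetical; real values come from process engineering.
    from collections import deque
    from statistics import mean, stdev

    class SensorMonitor:
        def __init__(self, window=500, z_threshold=4.0, plausible=(0.0, 200.0)):
            self.baseline = deque(maxlen=window)
            self.z_threshold = z_threshold
            self.low, self.high = plausible

        def check(self, reading):
            """Return None if a reading looks sane, else a reason string."""
            if not (self.low <= reading <= self.high):
                return "implausible value (possible corrupted sensor)"
            if len(self.baseline) >= 30:
                mu, sigma = mean(self.baseline), stdev(self.baseline)
                if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                    return "statistical outlier (possible drift or tampering)"
            self.baseline.append(reading)
            return None

Flagged readings would feed an alarm or a fallback path rather than the model, so a poisoned input never reaches a control decision unchallenged.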

“The crucial difference from IT is the high cost of error; even subtle AI malfunctions can lead to outages, equipment damage, or public safety issues, thus requiring a much higher standard of assurance,” Small warned.
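
In practice, that higher standard of assurance often takes the form of an independent safety envelope between the model and the process. A minimal sketch follows, with hypothetical limits; in a real plant the bounds come from process engineering, not from the model:

    # Minimal sketch of a safety envelope around an AI-suggested setpoint.
    # The bounds, rate cap, and fallback value are hypothetical.

    SAFE_MIN, SAFE_MAX = 0.0, 1.0   # hard engineered bounds
    MAX_STEP = 0.05                 # max change per control cycle
    FALLBACK = 0.50                 # conservative known-safe setpoint

    def apply_setpoint(ai_value, last_value):
        """Clamp an AI recommendation to engineered limits, rate-limit the
        change, and fall back to a known-safe value on garbage input."""
        if not isinstance(ai_value, (int, float)) or ai_value != ai_value:
            return FALLBACK                      # NaN or non-numeric: fail safe
        bounded = min(max(ai_value, SAFE_MIN), SAFE_MAX)
        step = max(-MAX_STEP, min(MAX_STEP, bounded - last_value))
        return last_value + step                 # bounded, rate-limited move

The design point is that the model can suggest but never command: even a fully compromised model cannot push the process outside its engineered limits.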

Although the guidance from CISA and its international partners won’t create any new laws, it could still significantly influence the regulatory landscape for critical infrastructure.

“Utilities and industrial operators use them to inform their architectural decisions, vendor requirements, and audit processes, and regulators often adopt similar language,” Small continued. “Even abstract recommendations become concrete through procurement practices – demanding AI transparency, inventories, and fail-safes – and through insurers and boards assessing operators’ adherence to CISA/NIST-style guidance.”
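
The inventories Small mentions need not be elaborate; a structured record of where each model runs and who owns it is enough to anchor procurement and audit questions. A sketch with hypothetical fields – the guidance calls for inventories and transparency but does not mandate any particular schema:

    # Minimal sketch of an AI asset inventory entry for audit/procurement.
    # The field names and example values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AIAssetRecord:
        name: str                      # e.g., "pump-station-optimizer"
        vendor: str                    # supplier, or "internal"
        purdue_level: int              # where it sits in the Purdue Model
        data_sources: list = field(default_factory=list)
        failsafe_documented: bool = False
        last_validated: str = ""       # ISO date of the last model re-test

    inventory = [
        AIAssetRecord(
            name="pump-station-optimizer",
            vendor="ExampleVendor Inc.",
            purdue_level=3,
            data_sources=["SCADA historian", "flow sensors"],
            failsafe_documented=True,
            last_validated="2025-11-01",
        ),
    ]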

A Critical Inflection Point

In recent years, AI has gone from a buzzword to a technology used by millions of Americans, and perhaps billions more, in nations around the world. The technology is being adopted, adapted, and integrated into OT at breakneck speed.

CISA is now calling for greater consideration of how the technology might be used.

“This guidance arrives at a critical inflection point,” said Denis Calderone, CRO & COO at Suzu Labs. “We’re seeing organizations rush AI deployments into operational environments with various rationales, but often without the security rigor these systems demand. The consequences of getting this wrong aren’t arbitrary or abstract. We’re talking about critical areas like water treatment, power grids, and manufacturing safety systems.”

The guidance could help companies ensure they are prepared to address unexpected outcomes of AI adoption and integration.

“What I appreciate about this framework is the focus on ‘AI drift,’” Calderone told ClearanceJobs via email. “We have seen evidence where AI models can degrade or behave unexpectedly over time, particularly in OT environments where the consequences of bad decisions can cause physical material outcomes. A miscalibrated algorithm in a financial system costs money, while a miscalibrated algorithm controlling industrial processes can cost lives.”

However, the greater challenge will be adoption.

“OT environments are notorious for ‘if it ain’t broke, don’t fix it’ cultures, and frankly, they’re not typically built for agility either,” suggested Calderone. “Change management in these environments moves deliberately, often for good reason. Meanwhile, bespoke AI solutions are being stood up at breakneck speed by vendors and internal teams racing to capture efficiency gains. That mismatch is a recipe for trouble. Organizations that treat this guidance as a checkbox exercise will miss the point entirely.”

AI will continue to evolve, and companies need to be prepared.

“In the world of critical infrastructure, a failure in AI governance is no longer a data loss event,” said Small. “It is a physical safety disaster.”

Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.