President Donald Trump announced he would issue an executive order limiting state regulations on artificial intelligence (AI). The White House has said that a single national AI standard is necessary to allow the United States to maintain its lead in the emerging technology.
“There must be only One Rulebook if we are going to continue to lead in AI,” President Trump wrote in a post on the Truth Social platform. He added, “You can’t expect a company to get 50 Approvals every time they want to do something. THAT WILL NEVER WORK.”
Politico reported that a draft of the executive order was leaked last month, prompting debate over the federal government’s role in regulating AI.
Trump is not the only one who believes AI standards should be set at the national level.
OpenAI, Google, Meta, and the venture capital firm Andreessen Horowitz have called for national AI standards, warning that multiple state laws will stifle innovation and that the U.S. will fall behind China in AI development if states are allowed to regulate the technology.
The AI Revolution Is Here
The adoption of AI continues to outpace efforts to establish governance over the technology. As with other emerging technologies, lawmakers are often left to play catch-up.
“We’re witnessing an industrial revolution, and the U.S. is currently leading,” Michael Bell, founder & CEO of secure AI solutions provider Suzu Labs, told ClearanceJobs.
However, that lead isn’t guaranteed, warned Bell.
“China is investing aggressively in AI infrastructure, and the EU is building comprehensive regulatory frameworks that could set global standards,” Bell explained in an email. “In that context, reducing domestic regulatory friction makes strategic sense. You can’t win a technology race while fighting 50 simultaneous compliance battles.”
Others argue, however, that an executive order limiting how states can regulate AI doesn’t go far enough.
“We’ve reached the point where national alignment on AI policy is necessary, but not sufficient. A single rulebook means nothing unless it addresses the baseline problem behind every AI failure: a lack of governance over the data structures that feed these models,” suggested Ryan McCurdy, VP of marketing at secure database developer Liquibase.
McCurdy told ClearanceJobs that model-level rules won’t protect the public if the underlying data is inconsistent, drifting, or untraceable.
“The real question is whether the national standard will demand evidence,” McCurdy added. “Evidence of how models are trained, evidence of how data evolves, evidence of how organizations prevent unapproved or risky changes. That’s the difference between actual oversight and a press release.”
A Coming War for AI Regulations
In the short term, it isn’t even clear what the executive order will accomplish. Fifty different rules won’t benefit AI development, but blocking all rules could create problems of its own.
“The Trump administration’s move to preempt state AI laws is fundamentally about velocity, getting American AI companies to market faster with fewer obstacles. That’s a legitimate competitive priority,” Bell noted.
“The bipartisan pushback, including Florida Governor Ron DeSantis calling it ‘federal government overreach,’ reflects real concerns about what gets lost in the rush: state-level experimentation, consumer protections, and the ability to respond quickly to emerging harms,” Bell added.
At this point, there appears to be no consensus on how AI should be governed. The absence of any standard could thus prove worse than 50 competing ones, yet that is the direction the U.S. is currently headed.
“From a security perspective, the key question isn’t whether we regulate AI, but how,” Bell continued.
He further explained that supply chain integrity, model provenance, adversarial manipulation, and data handling practices determine whether AI systems are secure, not just competitive.
“A streamlined national framework that establishes clear security baselines while cutting redundant compliance burden is the best of both worlds,” said Bell. “A framework that prioritizes speed over security just creates a different kind of competitive disadvantage: systems that adversaries can weaponize.”
At the national level, AI standards should include minimum security and transparency requirements while still allowing states to respond to emerging risks.
“The worst outcome is regulatory paralysis, either 50 conflicting state rules that strangle innovation, or federal preemption that creates a compliance vacuum,” warned Bell. “Right now, the administration is betting that unified national policy enables faster, better AI development. Whether that bet pays off depends entirely on what the actual rules require.”
Getting everyone on the same page may be difficult, but it will be only the first step toward maintaining the lead in AI.
“If the United States wants to lead in AI, it needs more than a unified rulebook,” said McCurdy. “It needs a standard that forces AI systems to be explainable, governable, and accountable from the ground up.”