Last week, OpenAI, the maker of ChatGPT, announced a new initiative to bring its artificial intelligence (AI) tools to the government sector. The tech company said that its established collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury will be brought under OpenAI for Government, which will provide the ChatGPT Gov product to U.S. federal, state, and local governments.
The software developer was also awarded a $200 million contract to provide the Pentagon with new AI tools.
“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Department of Defense (DoD) announced.
The Vanguard of OpenAI for Government
In a blog post last week, OpenAI explained that the DoD contract will be the first OpenAI for Government initiative, one that will bring the company’s “industry-leading expertise” to the Pentagon.
“[The technology] helps the Defense Department identify and prototype how frontier AI can transform its administrative operations, from improving how service members and their families get health care, to streamlining how they look at program and acquisition data, to supporting proactive cyber defense,” the company said in the post. “All use cases must be consistent with OpenAI’s usage policies and guidelines.”
The DoD award specified that the Chief Digital & AI Officer (CDAO) would be the “contracting activity.” That office was only created in 2022 to consolidate existing AI efforts. It is currently overseen by the Office of the Secretary of Defense and serves as a central hub providing expertise, services, and supporting “infrastructure” for AI projects across the armed services and defense agencies, Breaking Defense explained.
Overcoming Barriers to Entry
Although OpenAI for Government is designed to foster greater collaboration, it will still need to navigate a myriad of obstacles at all levels of government.
“As OpenAI says, their initiative is just getting started, and the federal government writ large imposes many adoption barriers, so I don’t see this having any particular near-term effect,” suggested Dr. Jim Purtilo, a computer science professor at the University of Maryland.
“Many other tech firms will quickly emerge to compete for fed business,” Purtilo told ClearanceJobs. “As one of the world’s largest bureaucracies, the feds absolutely should assess AI’s potential savings, but today would be the wrong time to lock in expensive contracts.”
He warned that even the tech world doesn’t yet know all the consequences of shifting to these tools, nor the best practices for a federated migration to their use.
“You can’t just flip a switch and declare victory, ‘yeah, we’re using AI!’” Purtilo said.
Pilot Programs Coming?
It is unclear how quickly AI might be adopted at different levels of government, but given the pace of advances in the technology, keeping up may prove impossible. At the same time, the potential repercussions still aren’t fully understood.
“What I’d really like is for early adopters in the government to be selected strategically for pilot projects, with extra consideration given to explicit study of adoption challenges and costs,” Purtilo added. “We should give managers a risk-reduction road map. If we are not measuring efficacy in the use of new technologies, then we don’t know whether stakeholders are being given the best value.”
Moreover, there are still significant differences between the government’s adoption of AI and that of the private sector.
“Deployment practices typical of the federal government could well dampen much of the value,” Dr. Purtilo noted. “AI’s big impacts likely come from streamlining end-to-end business processes, but current federal rules (and regulations) would require that many discrete operations be preserved. This severely restricts the use of AI. The win might be in fully eliminating bureaucratic steps in ossified administrative silos. Still, unlike their [private]-sector counterparts, federal managers may be constrained to follow stuffy old-millennium workflow.”
That could take leadership a while to work out, and there may be even more specific considerations for AI adoption within the DoD.
“It is tempting to think we can win big savings by looking at ‘ordinary’ business processes, but poor choices in AI adoption can still pose indirect risks to warfighters,” said Purtilo. “A bursar who makes injudicious use of AI in handling HR matters for military personnel could inadvertently put them in harm’s way by exposing data to a bad actor. A little planning will go a long way.”