The United States government continues to invest in and adopt the latest artificial intelligence (AI) platforms across federal agencies. This aligns with the 2025 AI Action Plan, which called for greater adoption of AI tools to increase efficiency, enhance national security, and modernize legacy systems.
“AI is a potential game changer for government agencies, which are generally understaffed and underfunded,” explained technology industry analyst Rob Enderle of the Enderle Group.
“Many are often unable to perform their missions as intended due to these staffing and funding shortfalls,” Enderle told ClearanceJobs. “AI can review massive amounts of data and form much more informed decisions; they can better handle the massive number of forms and compliance tasks that often bog down both government agencies and those that deal with them.”
The adoption of AI could further streamline workflows within the federal government.
“AI can remove much of the friction in both doing government work and working with governmental bureaucracies,” Enderle added. “If they function as intended, AI will reduce the frustration of dealing with government as well as the frustrations of working in government.”
The Adoption of Generative AI and Large Language Models
Several platforms are now seeing adoption across the government, including Google's Gemini for Government, which offers enterprise search, video and image generation, and NotebookLM, and has already been rolled out for agency use.
Claude for Government is now offered at low, token-based prices for pilot programs aimed at improving employee productivity. A specialized and highly secure version of Anthropic's AI assistant, it was designed to meet the strict security, compliance, and operational needs of U.S. federal agencies.
Another new AI tool is GASi, a custom generative AI chatbot that was developed by the General Services Administration and introduced last March. The internal generative AI tool assists federal employees with daily, non-sensitive tasks and increases productivity by enabling users to automate routine functions.
Last year, ChatGPT Gov, a tailored version of the popular AI chatbot, was introduced to give government agencies access to OpenAI's frontier models and to make government more efficient.
Although ChatGPT and other consumer AI platforms aren't approved for use by Department of Homeland Security (DHS) employees, those employees are approved to use the agency's self-built AI-powered chatbot, DHSChat.
AI in Defense and the IC
It has been two and a half years since the National Security Agency (NSA) launched its Artificial Intelligence Center, which is now a focal point for developing best practices, evaluation methodology, and risk frameworks with the aim of promoting the secure adoption of new AI capabilities across the national security enterprise and the defense industrial base.
There are now several defense, security, and intelligence tools that utilize AI. Among these is the Pentagon-led Palantir Maven Smart System (MSS), an AI-enabled defense platform designed to analyze massive datasets, including drone video, satellite imagery, and radar data, and to provide real-time battlefield intelligence and targeting. It is a key component of the Pentagon’s Project Maven, integrating AI and machine learning (ML) to enhance decision-making for military commanders.
BigBear.ai is also used by the Department of Defense (DoD) to analyze foreign media trends and support Joint Chiefs of Staff force management with open-source intelligence (OSINT).
D.C. is Embracing AI
Although the government has been slow to keep pace with technological development and is often cited as being behind the curve, that isn't the case with AI, ML, and large language models (LLMs).
“AI adoption has exploded in the government just like in most sectors,” said Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.
Purtilo told ClearanceJobs that it is difficult to gauge exactly how widespread AI usage is, as there is no single clearinghouse for authoritatively tracking such things. Even so, AI is being widely embraced across the federal government.
“There isn’t an obvious federal shop to advise agencies on AI tools based on the experiences of early adopters,” Purtilo added. “The pace of innovation is exciting as offices try different solutions on for size – that’s what breakneck progress looks like!”
What is happening within the government very much mirrors the private sector, where the strategy has been to “Build fast, fail fast, learn fast, repeat fast.”
However, Purtilo noted that because the government fails to track what works and what doesn't, many offices risk reinventing expensive wheels along the way.
“I guarantee you not all the AI tools are working out well for stakeholders,” he warned. “No one AI product serves all needs, so the variety of tools available to be tried necessarily increases.”
AI Remains a Catch-all
It is also hard to track AI adoption, specifically which tools are being used. AI has become a catch-all term, but in 2026, AI tools span a wide range of functions.
ChatGPT, GPT-3, and Claude lead in text generation, reasoning, and coding, while productivity and creative tools are more focused on research and video generation. There are even specialized tools that can execute complex, multi-step workflows.
“Generative AI gets the most attention of late,” said Purtilo. “These are the chatbots and assistants found in tools from web browsers to custom apps.”
The pace of development is accelerating, so current AI tools are far more advanced than they were just last summer.
“A product for which an administrator submits a purchase request today could be ‘old news’ before the order is approved and filled next month,” warned Purtilo. “It really is moving that fast. But the government is also adopting AI in specialized sectors like healthcare, finance, and data analytics, where machine learning techniques are more on point than LLMs. And none of this accounts for AI use in DoW applications, cybersecurity, and other areas, which we lightheartedly hope are not tracked on the outside.”
An even bigger gap than tracking AI tool adoption is measuring efficacy.
“The research on this lags far behind for no other reason than there is more reward for making ‘new stuff’ than in studying hard how well the ‘existing stuff’ works,” Purtilo continued. “The metrics for testing functionality, safety, and security just aren’t there. This adds to the overhead of AI adoption. Agencies that should be focusing on their specific missions must split their efforts on figuring out how to tell whether the AI tools they bought are right for the job.”