Last month, the United States Department of Defense (DoD) announced that it had selected Scale AI to help test and evaluate generative artificial intelligence (AI) for military applications. Generative AI made headlines last year for its ability to create text, images and other data by utilizing algorithmic models in response to a user’s prompt.

This has led to concerns about what it means for content creators, as well as whether photos and videos can still be trusted as "real" and not AI-generated. However, generative AI also offers the potential to streamline workflows and review troves of information within seconds – an important capability as the world becomes increasingly data-driven.

The power of a large language model (LLM)

This includes harnessing the power of so-called "large language models" (LLMs), which can review gargantuan troves of information within seconds and distill them into a few key points.

"Any forward-thinking company will be studying the potential for LLMs to improve workflow and productivity in ordinary tasks," explained Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.

LLMs could certainly have a place in the government's vast network of federal contractors, where they could aid in the development of new systems and platforms. Data is power, but time is money.

Impact of LLMs on national security

The ability to condense data so quickly could benefit any company, as it could provide teams with nearly instantaneous pointers – and perhaps even limit the flow of information to a need-to-know basis.

“Defense contractors would be no different, but they more than most companies must do so with a careful eye on security,” Purtilo told ClearanceJobs. “LLMs might help with, say, understanding an RFP or drafting a proposal, but doing so involves disclosing substantial information so the model has something to work with. Any prompt after that is a new opportunity for the model to volunteer your secrets.”

This is why any use of LLMs involving classified information will need to be carefully managed.

“Because defense contractors deal with government sensitive data and have to follow strict government guidelines and requirements, most of them are, and may be for a while, in the exploratory phase with generative AI,” said Melissa Ruzzi, director of artificial intelligence at SaaS security provider AppOmni.

Defense contractors will therefore tread carefully when it comes to LLM adoption.

“The area of marketing and communications are the first areas used to explore its use, as they see that so many other companies are already making use of generative AI in these areas,” Ruzzi told ClearanceJobs. “Small task force teams, and the creation of rules and guardrails of dos and don’ts around generative AI seem to be in focus to assure proper utilization with safety and cybersecurity in mind, which can in the future be used to generate policies that would enable a wider adoption and further generative AI exploration in other areas.”

Focus is on security

The use of LLMs will certainly require a greater focus on security – ensuring that information can't be improperly accessed or disseminated.

“Inappropriate disclosure of intellectual property can be a problem for any company, but in the defense industry it could mean spilling important classified information,” added Purtilo. “They almost certainly would consider building their own models in-house, or at least procure outside services with extreme care. That’s expensive, but not as expensive as compromising national security.”


Peter Suciu is a freelance writer who covers business technology and cybersecurity. He currently lives in Michigan. You can follow him on Twitter: @PeterSuciu.