Facebook parent Meta has made its open-source Llama models available to U.S. government agencies and contractors working on national security applications. The tech giant says the responsible use of open-source AI models promotes global security and helps position the U.S. in the global race for AI leadership.

“We are pleased to confirm that we are also making Llama available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work,” wrote Nick Clegg, Meta’s president of global affairs, in a blog post.

The company highlighted its partnerships with Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies.

“As open source models become more capable and more widely adopted, a global open source standard for AI models is likely to emerge, as it has with technologies like Linux and Android,” Clegg added. “This will happen whether the United States engages or not. This standard will form the foundation for AI development around the world and become embedded in technology, infrastructure and manufacturing, and global finance and e-commerce.”

Securing AI Dominance

Meta is pushing AI as a tool that needs to be embraced, albeit cautiously, to aid in reviewing and processing the ever-increasing amount of data generated daily.

“The government has access to an enormous amount of data,” said technology industry analyst Roger Entner of Recon Analytics.

“Gen AI can help with processing more of the data even faster,” Entner told ClearanceJobs. “Signals intelligence is one of the areas where AI should become the most useful.”

Yet the key word should still be caution.

“The company made the right decision, especially in light of the People’s Republic of China military’s unauthorized use of an earlier version of Llama to enhance drone performance,” explained Charles King, principal technology analyst at Pund-IT. “A potential shortcoming of open-sourcing technologies, such as Llama, is that it assumes users will abide by the policies laid down by the tech’s creators. The PRC has shown time and again that it doesn’t feel constrained by such rules or by more formal regulations, like U.S. patent and copyright laws.”

Meta’s partnerships can help ensure that Llama is used responsibly.

“Its decision to help U.S. defense agencies and contractors stay ahead of adversaries by providing access to the latest versions of Llama seems both practical and sensible,” King also told ClearanceJobs.

What Cybersecurity Experts Think

Those on the front lines of the fight against cyber threats have also called this a good decision on Meta’s part, though one that needs to be monitored and controlled. Llama is simply another tool in an ever-growing arsenal of AI-based applications.

“This is an instance of how AI can be enabled for good, legitimate work. To be clear, for sure, most AI will be used for good, legitimate things. In the near future, nearly everything will be assisted by AI in some way,” suggested Roger Grimes, data-driven defense evangelist at KnowBe4. “Everything you and everyone else posts will be AI-assisted in some way.”

However, it is worth remembering that AI can also be used for nefarious purposes.

“Yes, bad guys will be using AI to do bad things more easily and better, but the good guys invented AI, have spent more time developing AI, and will use AI far more than the bad guys,” Grimes added. “It’s already happening and this is just one example of it happening even more. The larger question is whether something being AI-assisted is something we need to ‘detect’ and be aware of.”

Yet because AI could be so widely employed, such detection may not matter.

Grimes told ClearanceJobs, “It will all be AI-enabled.”

It is also important to note that free and open AI could be most useful for jobs that don’t require a lot of creativity or innovation.

“A lot of cybersecurity is just basic blocking and tackling, and it requires diligence,” said Jeff Williams, founder and CTO at Contrast Security.

“So AI can definitely help,” Williams told ClearanceJobs. “But the danger is over-reliance on AI for tasks that do require creativity and innovation. AI hallucinates, makes mistakes, and is easily confused by novel requests. Until we have better transparency and explainability, it’s hard to imagine trusting our national security to AI.”

Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.