The United States Space Force has banned the use of web-based generative artificial intelligence (AI) tools, including ChatGPT, for its workforce over security concerns. The sixth and newest branch of the United States military issued a memo at the end of last month calling on Guardians to stop using such AI tools, including large language models, on government computers until they receive formal approval from the force’s Chief Technology and Innovation Office.

Though reportedly temporary, the ban was implemented “due to data aggregation risks,” the memo stated. An Air Force spokesperson confirmed the ban, which was first reported by Bloomberg.

“A strategic pause on the use of Generative AI and Large Language Models within the U.S. Space Force has been implemented as we determine the best path forward to integrate these capabilities into Guardians’ roles and the USSF mission,” Air Force spokesperson Tanya Downsworth said in a statement. “This is a temporary measure to protect the data of our service and Guardians.”

However, Dr. Lisa Costa, the U.S. Space Force’s chief technology and innovation officer, noted in the memo that the technology could serve a purpose in the future, adding that generative AI “will undoubtedly revolutionize our workforce and enhance Guardians’ ability to operate at speed.”

AI and the DoD

Earlier this year, the Government Accountability Office (GAO) warned that while the private sector has employed AI for years, the Department of Defense (DoD) hasn’t issued department-wide AI acquisitions guidance needed to ensure consistency. The watchdog recommended that the Pentagon develop such guidance and consider private company practices as appropriate.

However, efforts are already moving forward.

In August, the DoD announced the establishment of a generative AI task force, an initiative that reflects the Pentagon’s commitment to harnessing the power of artificial intelligence in a responsible and strategic manner. The DoD recognized that AI has emerged as a transformative technology with the potential to revolutionize various sectors, including defense.

By leveraging generative AI models, which can use vast datasets to train algorithms and generate products efficiently, the DoD aimed to enhance its operations in areas such as warfighting, business affairs, health, readiness, and policy.

Yet, as noted by the Space Force’s pause on its use, concerns remain.

“AI is a new technology, and the risks aren’t well understood; for instance, how is the information being used to train the model assured and protected?” explained technology industry analyst Rob Enderle of the Enderle Group.

Not a Fear of AI

It would be wrong to suggest that the Space Force fears the technology itself, nor is the pause simply a matter of security.

“This isn’t so much a fear of AI but the common fear of someone who doesn’t understand the risks implementing a technology that is either unsecured or installed improperly resulting in the potential for a breach,” Enderle told ClearanceJobs. “It isn’t just security either that this kind of policy tries to address because back when Windows 95 was released there were massive outages that occurred because that technology wasn’t properly vetted before implementation. AI’s potential to do harm is far greater than Windows 95, or anything like it ever was, thus a measured approach with the proper approvals and vetting process is required as this policy attempts to assure. I’m actually more surprised this didn’t happen faster than that it happened at all.”

With its ban, the Space Force is simply being cautious about a new technology that has the potential to do as much harm as good.


Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.