You’ve no doubt heard about ChatGPT, the new software bot that can write like a human. Businesses, content creators, and Web developers across a swath of sectors are either speculating about how they might use it or fretting that it could take their jobs. Guess who else is checking the new bot out? The CIA.
Lakshmi Raman, the CIA’s artificial-intelligence (AI) director, told the Potomac Officers’ Club during its annual AI summit that ChatGPT and other generative AI apps could be useful for CIA operations. She said that her agency will explore “in a disciplined way” how its officers could use this new technology in their everyday work, including the work of gathering and analyzing intelligence.
“We’ve seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies,” Raman said.
Raman said that this is all part of a larger effort to integrate AI into CIA operations and systems, and to keep up with adversaries like China, Russia, and Iran, all of which may be developing AI to use against us: “A lot of work is underway to ensure the CIA’s success in becoming a mature and AI-driven organization, as well as expanding our understanding of adversaries’ use of AI and [machine learning] capabilities,” she said.
Raman isn’t the only official who thinks generative AI has national-security potential. Stephen Wallace, chief technology officer of the Defense Information Systems Agency (DISA), said at an event in January that DISA is looking at how generative AI could change the agency’s mission and what it could do for the Department of Defense in years to come.
AI-Enabled Analysis
“Generative AI” is a broad term for new AI systems that can create original content, including text, videos, images, and audio. ChatGPT isn’t the first generative AI, but it has taken the technology mainstream in a huge way, with its highly publicized ability to write eerily articulate answers to just about any question a human asks it. The bot draws on the vast trove of text it was trained on, much of it scraped from the Web, and weaves what it has learned into complete, coherent paragraphs and essays.
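For a sense of how developers are already tapping this technology, here is a minimal sketch of how an application might send a prompt to a generative model through OpenAI’s public API. The model name, prompt, and summarization task are purely illustrative assumptions, not anything the CIA or DISA has described.

```python
# Minimal sketch: sending a prompt to a generative model via OpenAI's Python SDK (v1.x).
# Assumes the `openai` package is installed and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise analyst."},
        {"role": "user", "content": "Summarize the key claims in the following report: ..."},
    ],
)

# The generated text comes back as ordinary prose the application can display or store.
print(response.choices[0].message.content)
```

The point of the sketch is simply that a few lines of code can turn a pile of raw text into readable prose, which is exactly what makes the technology attractive to analysts and propagandists alike.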
The CIA’s and DISA’s interest in this technology is inevitable. Long before ChatGPT, U.S. defense leaders had been seeking AI software that can read volumes of data and spot patterns and trends within it, which human operators could turn into actionable reports. Back in 2017, the Air Force Research Laboratory (AFRL) and IBM collaborated on TrueNorth, a line of “neuromorphic” computer chips whose architecture is modeled on the neurons of the human brain. Defense systems outfitted with the brain-inspired chips identified military and civilian vehicles in radar-generated imagery with less than one-twentieth of the energy consumption of conventional computer systems.
To defense officials now, ChatGPT may look like a promising next-generation AI, able to help churn out even more useful intel in less time. They may envision chatbots sifting through satellite data and identifying evidence that an enemy nation is preparing to launch ballistic missiles or clandestinely building chemical weapons. Tasked with scouring foreign media and online message boards, such a bot might help reveal signs that an extremist group is recruiting new members, or that a country is about to experience civil unrest.
The Fake News Threat
It’s not just about what we might do with generative AI, however. It’s about what our enemies might do with it. Chatbots that can instantly craft engaging written text would be frighteningly useful in the hands of a government like Russia’s, which thrives on state-sponsored propaganda. The Kremlin currently pays human bloggers to post pro-Kremlin content across social media sites, much of it chock-full of misinformation and lies. Just imagine the volumes of falsehoods Moscow could flood the Web with if it had smart bots writing at its behest around the clock.
There’s no stopping bots from posting persuasive fake news. But we can get smart about how the bots create the news, how they post it, and how to spot their handiwork in time to get it flagged as fake. And U.S. intelligence experts are no doubt looking into what our own generative AI tools can teach us about all of this.
Generative AI’s Limits
Unfortunately, ChatGPT cannot distinguish credible information from misinformation. It assembles plausible-sounding prose from patterns in its training data, with no regard for whether the underlying claims are true, so its finished articles can be riddled with falsehoods. This is a problem for U.S. intelligence officers who want to gather information: the chatbot could easily feed them junk, forcing its human operators to sort the junk from the honest intel.
That task is all the harder because of another ChatGPT shortcoming: it doesn’t cite its sources. The bot could write a coherent brief about labor strikes in Turkey, but it’s not going to tell you whether it gleaned the information from eyewitness accounts, Turkish government reports, or outside bloggers who have never set foot in the country and have never spoken to a Turkish worker in their lives.
Data reliability is paramount in intelligence work. For ChatGPT or an app like it to be useful, analysts need to be able to trust its output. Lives depend on getting the information right, and as of now, no one can stake a life on these bots getting anything right.
Final Analysis: Proceed With Caution
With every new AI breakthrough, professionals wonder, or worry, about the new technology doing their jobs for them. It’s clear that intelligence and defense officials foresee practical uses for generative AI. But the technology is bound to create some problems as well. Whether those problems are solvable, and whether the pros will outweigh the cons, will be for the agencies to figure out.