The Central Intelligence Agency routinely publishes an in-house journal titled “Studies in Intelligence.” These classified documents address a wide variety of topics in the intelligence community but do not represent “official” CIA opinions. Business Insider reported in September 2014 that some 249 articles from across the decades had been declassified by the Agency as part of an ongoing lawsuit seeking the declassification of 419 documents in total.
The recently released documents are a treasure trove of interesting insights, ranging from reading lists to experimental ideas, on topics from the economics of overthrow to the history of espionage.
Experimenting with Interrogation
Back in 1983, the CIA was experimenting with artificial intelligence. In an article titled “Interrogation of an Alleged CIA Agent,” the agency describes an experimental interrogation of a CIA officer by a computer configured to learn from his responses.
The study calls the program “Analiza.” Using a variety of algorithms, it looked for key words and phrases in the subject’s responses and used them to formulate new questions, drawing as well on a master list of prepared questions. Because it recorded every response, over time it built a tighter and tighter line of questioning, zeroing in on topics the subject seemed to avoid or dwelled on at length.
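Analiza’s actual code has never been released, so the following Python sketch is only an illustration of the kind of keyword-driven, adaptive questioning loop the article describes. All names here (the topics, the question list, the scoring functions) are invented for the example, not drawn from the declassified study.

```python
import re
from collections import defaultdict

# Hypothetical master list of questions, keyed by topic. The real question
# set has never been published; these are placeholders for illustration.
MASTER_QUESTIONS = {
    "travel":   ["Where were you stationed in 1981?", "Who arranged your travel?"],
    "contacts": ["Who was your point of contact?", "How did you communicate?"],
    "finances": ["Who paid for the apartment?", "How were you compensated?"],
}

# Keywords that suggest a response touches a given topic.
TOPIC_KEYWORDS = {
    "travel":   {"flight", "border", "visa", "city", "hotel"},
    "contacts": {"friend", "colleague", "handler", "meeting", "phone"},
    "finances": {"money", "cash", "paid", "account", "salary"},
}

def score_response(text):
    """Count keyword hits per topic in a single response."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}

def choose_next_topic(history):
    """Zero in on topics the subject either dwells on (many keyword hits)
    or avoids entirely (no hits across any answer so far)."""
    totals = defaultdict(int)
    for scores in history:
        for topic, hits in scores.items():
            totals[topic] += hits
    avoided = [t for t in MASTER_QUESTIONS if totals[t] == 0]
    dwelled = max(totals, key=totals.get)
    return avoided[0] if avoided else dwelled

def interrogate(get_answer, max_questions=6):
    """Run a tireless questioning loop, tightening the focus as it records answers."""
    history = []
    topic = "travel"  # arbitrary opening topic
    for _ in range(max_questions):
        questions = MASTER_QUESTIONS[topic]
        question = questions[len(history) % len(questions)]
        answer = get_answer(question)
        history.append(score_response(answer))
        topic = choose_next_topic(history)
    return history

if __name__ == "__main__":
    # Stand-in for a live subject: answers typed at the console.
    interrogate(lambda q: input(q + "\n> "))
```

Even a toy loop like this shows the basic idea: every answer feeds back into the choice of the next question, so the line of questioning narrows automatically the longer the session runs.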
The advantage of using a computer for interrogation is the relentless nature of the questioning: the computer does not tire and never needs to eat, drink, or use the restroom. Properly constructed, an interrogation AI could provide the basis for follow-on human interrogation at a more advanced level of effort.
Understandably, there is not a great deal of material available on the Internet about the use of AI in interrogation. It is unclear how much further the CIA pursued the AI interrogator program, but it would be foolish to assume that it did not.
A similar type of effort, the Turing Test, is well known, and the concept passed a milestone in June 2014. First proposed by Alan Turing in 1950, the test asks whether a computer can ever be mistaken for a human through a process of dialog and questioning. The criterion for success was set at 30 percent of the judges; in a series of five-minute conversations, the first AI to pass convinced ten of thirty judges that it was human.
Thirty years ago the CIA was looking at using artificial intelligence in place of humans. This year, the Turing Test was passed by a computer program that one third of the judges mistook for a human. It remains unclear how far away we are from Skynet and the events of the Terminator movies crossing over from fiction to reality.