Before tech entrepreneur Elon Musk courted controversy with his takeover of Twitter, he had warned that artificial intelligence (AI) could outsmart humanity and even overtake human civilization. Musk is not alone in fearing a rise of the machines. Before his passing, physicist Stephen Hawking also ominously warned that the development of full AI could spell the end of the human race.


The dire warnings suggest an outcome similar to science fiction movies such as The Terminator or The Matrix, where armed machines turn on their masters. However, as Musk noted, the greater danger is that AI could take everyone’s jobs, as it could simply do everything better than humans. That danger could be realized by recently released software known as Chat Generative Pre-trained Transformer, or ChatGPT. It sounds like a rather harmless application. But is it?

“It is a chatbot application developed by OpenAI that is built on top of the group’s GPT-3.5 family of large language models,” explained technology analyst Charles King of Pund-IT. “ChatGPT is fine-tuned with both supervised and reinforcement learning techniques, allowing it to develop detailed responses and articulate answers across numerous knowledge domains.”

The prototype launched just last month, and the application’s human-like responses raised alarms over whether ChatGPT might be used to build or orchestrate automated phishing scams sophisticated enough to fool consumers and businesses.

“Those same concerns could apply to government agencies, including groups dealing with defense and national security issues,” King told ClearanceJobs. “From my reading, I don’t believe that ChatGPT is a serious threat yet.”

More Than a Mechanical Turk

The idea of a machine that could act like a human is far from a new concept. In the 18th century the “Mechanical Turk” – a mechanical chess-playing machine that could beat most players of the day – was exhibited throughout the courts of Europe, and reportedly even impressed the Empress Maria Theresa of Austria. It was considered a marvel of the ages, and seemed far more advanced than the technology of the era should have allowed.

That’s because it was actually a hoax – it had a human chess master hidden inside to operate the machine. Today, no such trickery is required: AI has advanced nearly to the point where machines can seem almost human.

“The open-source chatbot can fool users into thinking it is a person,” said Rob Enderle, technology analyst at the Enderle Group.

“On the positive side, it could be used to replace call centers with a more effective tool than people, addressing the severe labor shortage problem and assuring a higher return on the related investment,” Enderle told ClearanceJobs.

However, the same capabilities could be put to nefarious purposes.

“ChatGPT could manipulate people into doing things against their best interests,” Enderle added. “Fraud, phishing, election manipulation, and pretexting (identity theft) are just a few bad things this tool could do impressively well. It could, in and of itself, become an AI weapon turning otherwise loyal rubes into agents acting against their country’s and company’s best interests. Like any capable tool, this one is neither inherently good nor evil but could effectively be used for both.”

Code Writing

ChatGPT is also noted not for its ability to break military codes, but rather for its ability to write software code. The current results are mixed, but they will likely only improve.

“Generating code is the easy part, but knowing what’s the right program to ask for in the first place – that’s another thing altogether. Humans still have an edge in that market,” suggested Jim Purtilo, associate professor of computer science at the University of Maryland.
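For readers curious what that looks like in practice, here is a minimal sketch of asking ChatGPT for code programmatically. It assumes the pre-1.0 OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are purely illustrative, not drawn from anything described above.

    import os
    import openai

    # Sketch only: assumes the pre-1.0 "openai" client; the model name
    # and prompt below are illustrative.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a careful Python programmer."},
            {"role": "user", "content": "Write a function that counts failed logins in a log file."},
        ],
    )

    # The model returns source code as text; as Purtilo notes, a human
    # still has to decide whether it was the right program to ask for,
    # and review what came back.
    print(response["choices"][0]["message"]["content"])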

The technology could also create a window for the public to look into the world of artificial intelligence. ChatGPT is actually a family of models – detailed descriptions or rules for how a program should behave, derived from analyzing a large volume of prior human behavior.

“By studying how people recognize, say, images of machine parts, a program can become very effective at recognizing those parts too,” said Purtilo. “The same is true, more generally, for diagnosing malfunctions in some machine or answering questions about pop trends on social media. Computers are patient and can observe a system over a long period of time, organize data about what they find, and then apply what they’ve learned to new situations.”
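What Purtilo describes is the standard supervised-learning loop: study labeled examples, then apply what was learned to new cases. A minimal sketch in Python, using scikit-learn's bundled handwritten-digits dataset as an illustrative stand-in for "images of machine parts":

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # "Observe a system over a long period of time": learn from labeled history.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    # "Apply what they've learned to new situations": classify unseen images.
    print("accuracy on new images:", accuracy_score(y_test, model.predict(X_test)))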

As a result, it is the AI’s ability to learn that may be the real concern. In most cases the software will be used with good intentions; it is when it is turned to nefarious purposes that it becomes a problem, and even a national security concern.

“The danger to corporations and national security is if data might be subtly altered, or ‘polluted,’ by an adversary to cause our tools to train in ways that are useful for the adversary,” warned Purtilo. “Techniques to flag when this is so are a hot research area of late.”
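Continuing the sketch above, here is a crude illustration of that kind of data pollution: flip a fraction of the training labels, and the same model quietly learns the adversary's version of the truth. The 30% poisoning rate is arbitrary, chosen only to make the effect visible; real attacks would be far more subtle.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def train_and_score(labels):
        # Train on the given labels, then score against the true test labels.
        return LogisticRegression(max_iter=5000).fit(X_train, labels).score(X_test, y_test)

    # An adversary subtly alters ("pollutes") 30% of the training labels.
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
    poisoned[idx] = rng.integers(0, 10, size=len(idx))

    print("trained on clean labels:   ", train_and_score(y_train))
    print("trained on poisoned labels:", train_and_score(poisoned))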

The application is not infallible, and its factual responses remain uneven.

“However, the learning techniques that can be applied to ChatGPT also offer ways to enhance performance and responses,” added King. “Over time, that could lead to ChatGPT bots becoming increasingly sophisticated and difficult to detect. That is worrisome.”

Combined With Weapons

The final concern is that technology such as ChatGPT could be integrated with an AI-based system that is armed. However, as with most AI, the issue remains how it is used.

“It could pose a significant risk to the world if used as weapons,” said Enderle. “Still, they also have inherent benefits that make them potentially very valuable. In the end, with technology like this, the good or evil part isn’t part of the tool; the user defines it. Assuring these tools are appropriately and legally used should be a far higher priority than it currently is.”


Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.