If you spend hours, days, weeks, or even months authoring something you have poured every bit of your creativity and brainpower into, it is safe to say you would be more than a little angry when your work showed up in another article without being cited. With all of the plagiarism filters, online sleuths, and otherwise nosy fact checkers out there, it should be tough to get away with plagiarism. However, there is a new breed of cheater in town, one who does not care where the words came from, only that they are easy to find and remotely accurate. This cheater has, in the past, emphasized quantity over quality, but lately has improved enough that its work is being accepted by many, including those who believe a real person wrote it. That new cheater is known as Artificial Intelligence, or AI for short.

Recently, programs like ChatGPT have upped the game, rapidly creating written products that do not necessarily read as if a computer wrote them, unlike most of the programs I have encountered over the past few years. Granted, there are wonderful uses for AI like ChatGPT, such as data organization and analysis, but let’s face it: most of its output is based on what others research and write. Nick Vincent and Hamlin Li recently wrote a great article in Wired, “ChatGPT Stole Your Work,” on this very subject. They laid out four courses of action for preventing AI from stealing your work: reconfiguring web crawler filters to protect specific pages or works; having collaborative sites such as Wikipedia block certain IP traffic and programming interface access; taking advantage of opt-out measures if the AI company offers them to artists and authors; and, my personal favorite, doing everything in your power to get the law changed to offer writers and artists some protection.

Some lawsuits have already been filed after code used to create AI programs was taken from open source repositories without compensation to the original authors. (This does not even include all the malicious code ChatGPT can write that no one wants to claim as his or hers.) Output on very generic topics, such as a paper on the history of the Cherokee Nation in Oklahoma, would always be hard to restrict, and as long as it was tagged as written by an AI program, it could be useful and time saving. However, if your work focused on a specific family of Cherokee people who settled in a particular part of Oklahoma and detailed their personal stories, and it was taken with no credit or compensation given to you as the author, then a huge inequity would have occurred. AI detectors are being developed, but as with deepfake detectors, it will be a cat-and-mouse game that may have no end.

If Congress and the relevant administrative agencies are going to change the law and require AI companies to license and credit copyrighted content, the staffers who make up their brain trusts will have a steep curve in educating their bosses, given past examples such as the Facebook hearings. While some state lawmakers may be more tech savvy, AI and copyright infringement do not seem to make their lists of issues, probably because they are unaware of, or unconcerned about, how good the technology has become. It is up to those who write, critique writing, and do original research to bring the unfairness of this use of their materials to lawmakers, their staffers, and local representatives immediately.

 

Joe Jabara, JD, is the Director of the Hub for Cyber Education and Awareness at Wichita State University. He also serves as adjunct faculty at two other universities, teaching Intelligence and Cyber Law. Prior to his current job, he served 30 years in the Air Force, Air Force Reserve, and Kansas Air National Guard. His last ten years were spent in command and leadership positions, the bulk of which were at the 184th Intelligence Wing as Vice Commander.