“AI is a lot like a mediocre grad student.”

I appreciated the sentiment, if not the metaphor. I was already seeing artificial intelligence making its way into coursework in my classes, and the results were mixed. For students on the lower end of the traditional bell curve, AI helped them just enough to push them a little higher. For students on the other end of the same curve, however, it dropped them by as much as a full grade.

While I didn’t specifically forbid the use of AI, I set boundaries. Hard boundaries. There are limits to the use of AI, and every model has its own peculiarities, shaped by how it was trained and by the algorithms underlying the platform.

But it’s not a mediocre graduate student. A graduate student should, in a perfect world, have some capacity for critical thought. Some degree of human judgment. Maybe even some practical wisdom. An AI possesses none of that, any more than my table saw is a creative genius. And therein lies the risk: the more we anthropomorphize AI, the higher the risk that we cede our thinking to it.

The Good of AI

When it comes to using AI, rarely does a day pass when I’m not putting it to use for something. Curate a playlist of the best synth-pop hits of the 80s (which is currently playing as I write this). Conduct a pattern analysis for phrasing similarities across a hundred case studies. Summarize the mentions of the Kobayashi Maru scenario in Star Trek canon.

In each case, I’m very specific about what I ask of the particular model I’m using. I set the parameters, I evaluate the results, and I iterate until I’m satisfied with the output. Then I fine-tune that output to my unique needs. AI is good, but it didn’t include Modern English on my playlist, and “I Melt with You” is arguably one of the greatest synth-pop songs of that era.
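That loop is simple enough to sketch in code. What follows is a minimal illustration, assuming the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment; the prompt, the model alias, and the “Modern English” check are illustrative stand-ins, not my actual workflow.

import anthropic

client = anthropic.Anthropic()

prompt = (
    "Curate a playlist of the 20 best synth-pop hits of the 1980s. "
    "List one song per line as 'Artist - Title'."
)

playlist = ""
for attempt in range(3):
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model alias
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    playlist = response.content[0].text

    # The human stays in the loop: evaluate the output, then iterate.
    if "Modern English" in playlist:
        break
    prompt += "\nRevise the list to include Modern English's 'I Melt with You'."

print(playlist)

The point isn’t the particular library; it’s that the evaluation step, the judgment, stays with me.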

For me, AI is not all that different from my table saw. It’s an incredibly powerful tool that can make me far more efficient when building a piece of furniture. But a blade spinning at 3,000 RPM can do a lot of damage if you use it unsafely. And the adage “measure twice, cut once” is as applicable to AI as it is to woodworking. You don’t have the luxury of blaming the tool when you don’t check your work. I’ve had a few wobbly table legs over the years that serve as unpleasant reminders.

The Bad of AI

Not everyone will agree with the metaphor, but thinking of AI as a shop tool has helped me get the most from the experience. It’s also why, after 40 years of woodworking, I still have all of my fingers. I respect those safety features: the guardrails put in place to prevent potentially catastrophic mistakes.

That doesn’t hold true for everyone, unfortunately. A recent RAND study suggested that students increasingly use AI as a substitute for their own critical thinking. They do. But they’re hardly alone.

A Department of Justice lawyer found himself out of a job recently after “he filed a legal brief with fake quotations and legal citations.” A day after the judge in the case threatened sanctions against the attorney, the Eastern District of North Carolina warned against generating legal filings with AI: “AI may hallucinate… Always personally verify each quote or proposition with your eyes in an actual case or law or other valid source.”

In 2025, a prominent University of Hong Kong professor resigned his associate dean role after the school found that at least 20 of the 61 references used in a research paper cited non-existent publications. The lead author of the paper, on which the professor was a co-author, “had used AI to assist with referencing but failed to verify the citations.”

Also last year, a reporter for the Wisconsin State Journal filed a story that proved to include AI-generated quotes and sources, as well as a number of factual errors. The executive editor withdrew the story immediately and fired the reporter the same day. When it comes to AI, accountability is increasingly part of the landscape.

The Ugly: AI Without Guardrails

Then there’s Project Maven.

The Maven Smart System was envisioned in 2017 as a tool that could “help military analysts sort through the firehose” of data produced by various intelligence collection platforms and systems. As a war planner and strategist, I appreciated the revolutionary shift in capability the algorithm promised, as long as the necessary infrastructure and guardrails remained in place. It isn’t as simple as keeping a human in the kill chain; it’s ensuring that humans remain in the loop to provide ethical and legal oversight, applying a contemporary version of Asimov’s Laws of Robotics to the use of AI.

When it comes to the kill chain, the stakes are a lot higher than simply hallucinating quotes and references. Anthropic CEO Dario Amodei, whose Claude is the beating heart of Maven, expressed concerns that ultimately resulted in the Pentagon declaring the company a national security risk: “Anyone who’s worked with AI models understands that there’s a basic unpredictability to them that in a purely technical way, we have not solved.”

On the first day of the war with Iran, a Tomahawk cruise missile struck the Shajarah Tayyebeh girls’ elementary school; at least 165 children, teachers, and parents were killed. Congress was quick to ask whether Maven was the source of the strike, an acknowledgment of the AI’s role in developing the war’s initial target packages.

As the Guardian subsequently reported, the school – adjacent to a now-defunct Islamic Revolutionary Guard Corps naval compound – had been mistakenly classified as a military facility in a Defense Intelligence Agency database that had not been updated since at least 2016. “A chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal.”

According to the Washington Post, Maven generated the coordinates that allowed U.S. forces to strike more than 1,000 targets in the first 24 hours of the war. As CNN reported, “Central Command created target coordinates for the strike using outdated information.” Which leaves a single, vital question to be answered: “If so, did a human verify the accuracy of this target?”

Amodei warned of the risks, only to be blacklisted by the Pentagon.

AI isn’t a mediocre grad student. It’s not a human being; it’s an algorithm. A tool. And when you remove the safety features and the end result goes horribly wrong, you don’t get to blame the tool.

Steve Leonard is a former senior military strategist and the creative force behind the defense microblog, Doctrine Man!!. A career writer and speaker with a passion for developing and mentoring the next generation of thought leaders, he is a co-founder and emeritus board member of the Military Writers Guild; the co-founder of the national security blog, Divergent Options; a member of the editorial review board of the Arthur D. Simons Center’s Interagency Journal; a member of the editorial advisory panel of Military Strategy Magazine; and an emeritus senior fellow at the Modern War Institute at West Point. He is the author, co-author, or editor of several books and is a prolific military cartoonist.