In the latest warning sign that AI is no longer a theoretical threat vector, OpenAI’s June 2025 report detailed 10 distinct campaigns in which adversaries, including China, Russia, North Korea, and Iran, used its tools for coordinated cyber, espionage, and influence operations. These cases are no longer “what if” scenarios. They are active operations that leverage generative AI to scale deception, obfuscate origins, and manipulate digital ecosystems in real time.
From fake resumes aimed at breaching enterprise infrastructure to TikTok botnets flooding the algorithm with coordinated propaganda, OpenAI’s findings underscore a stark new reality: when artificial intelligence meets ubiquitous technology, the threat landscape doesn’t just evolve, it accelerates.

Ubiquity Means Vulnerability

The digital ecosystem that professionals operate within, including smartphones, cloud services, and biometric systems, keeps them constantly connected. That ubiquity offers convenience and speed, but it also gives adversaries a wide surface area for influence and intrusion. When combined with generative AI, even marginal actors can scale complex campaigns across devices, apps, and identities in seconds.

What used to require time, expertise, and significant infrastructure can now be done with a chatbot and a script. Worse, these tactics are designed to blend in, using tone-matched language, culturally fluent personas, and algorithmically boosted content to pass as authentic, human-origin behavior.

Operation Sneer Review, ScopeCreep, and Resume Espionage

OpenAI’s report outlined three standout operations:

  • “Operation Sneer Review” involved Chinese-linked actors using AI to generate convincing fake personas across TikTok and X (formerly Twitter), amplifying pro-Beijing narratives. These bot networks appeared multinational but were centrally controlled, a classic tactic of information laundering made easier by AI’s ability to simulate diverse perspectives rapidly.
  • “ScopeCreep,” a Russian-attributed campaign, used ChatGPT to aid in developing and troubleshooting Windows-based malware, including fine-tuning a Telegram alert function. Notably, AI aided in both building and operating the malicious code in real time.
  • A North Korean-linked campaign used generative AI to mass-produce fake resumes targeting remote tech jobs. The objective? Gain access to sensitive corporate systems through employer-issued hardware and virtual environments, a strategy that merges classic insider threat tactics with digital-first infiltration.

Each campaign represents a different facet of AI-accelerated tradecraft: propaganda, intrusion, and identity subversion. For the national security community, these operations extend beyond digital threats. They are strategic probes into the fabric of U.S. trust systems, from job screening to social media norms.

What This Means for Cleared Professionals

The message for the cleared workforce is clear: AI isn’t just a tool, it’s a threat surface. The systems you trust, the resumes you review, and the content you scroll past may all be shaped, or manipulated, by foreign adversaries.
Three takeaways:

1. Verify Beyond the Resume

Vetting processes must now assume that AI could generate any professional profile. This affects security clearances, contractor vetting, and even personnel onboarding.

2. Rethink Endpoint Assumptions

AI-assisted infiltration methods compromise systems through credentialed access, not brute force. Counterintelligence (CI) teams must treat corporate-issued devices and remote access tools as potential attack vectors from Day One.

3. Report Unusual Digital Behavior

Disinformation operations now use AI to “soften” narratives through repetition and manufactured consensus. If something feels off, especially coordinated messaging that doesn’t originate from real people, report it.

A Call for Proactive CI and Tech Literacy

OpenAI’s report offers a rare public glimpse into what national security professionals have long suspected: our adversaries are adapting faster than our institutions. It’s not enough to focus on traditional cyber hygiene. We must integrate CI awareness, tech literacy, and operational vigilance into a unified approach.
This means cleared professionals should treat every aspect of the digital environment as a potential CI or influence vector. Ubiquitous technology is not neutral. It reflects the intentions of its users, many of whom are operating under the direction of foreign intelligence services.
The next breach may not start with a USB drive or a stolen credential. It may begin with an AI-generated resume, a misleading post, or a friendly chatbot that asks too many questions.
The Bottom Line: The AI arms race is already underway, and the battlefield is your inbox, your algorithm, and your identity.


Shane McNeil is a doctoral student at the Institute of World Politics, specializing in statesmanship and national security. As the Counterintelligence Policy Advisor on the Joint Staff, Mr. McNeil brings a wealth of expertise to the forefront of national defense strategies. In addition to his advisory role, Mr. McNeil is a prolific freelance and academic writer, contributing insightful articles on data privacy, national security, and creative counterintelligence. He also shares his knowledge as a guest lecturer at the University of Maryland, focusing on data privacy and secure communications. Mr. McNeil is also the founding director of the Sentinel Research Society (SRS), a university think tank dedicated to developing creative, unconventional, and non-governmental solutions to counterintelligence challenges. At SRS, Mr. McNeil hosts the Common Ground podcast and serves as the Editor-in-Chief of the Sentinel Journal. All articles written by Mr. McNeil are done in his personal capacity. The opinions expressed in this article are the author’s own and do not reflect the view of the Department of Defense, the Defense Intelligence Agency, or the United States government.