The terms “disinformation” and “misinformation” are often used interchangeably, but for cleared professionals operating in national security, understanding the distinction is critical. These two concepts play distinct but overlapping roles in the information operations of adversarial states like China and Russia. Both nations leverage social media to erode trust, sow division, and advance their strategic objectives in the United States, but the tools they use—and the intent behind them—vary significantly.

Cleared professionals who understand the difference between disinformation and misinformation, as well as the tactics used by these adversaries, are better equipped to identify, mitigate, and counteract these threats in their own professional spaces and personal online activity.

Disinformation: Intentional Manipulation

Disinformation refers to false or misleading information that is deliberately created and disseminated to achieve a specific objective. It is not merely about spreading lies—it’s about weaponizing falsehoods to influence public opinion, policy decisions, or institutional trust.

Russia is perhaps the most prolific practitioner of disinformation, with decades of experience dating back to Soviet-era “active measures.” Today, Russia’s disinformation campaigns on U.S. social media typically aim to:

1. Inflame Social Divisions

Russian operatives create and amplify false narratives on polarizing issues such as race, immigration, gun control, and public health. These campaigns are designed to exacerbate societal fractures and distract from broader strategic threats.

2. Undermine Trust in Institutions

By spreading false claims about election integrity, vaccine efficacy, or government transparency, Russia erodes public trust in democratic institutions and processes.

3. Manipulate International Perception

Disinformation campaigns also seek to shape global narratives, casting Russia as a victim of Western aggression or promoting its foreign policy interests in regions like Eastern Europe and the Middle East.

Notable examples of Russian disinformation include fabricated stories about U.S.-funded bioweapons labs in Ukraine and coordinated efforts to spread conspiracy theories during the COVID-19 pandemic. These operations often rely on networks of fake accounts, bots, and state-affiliated media to amplify their reach.

Misinformation: Unintentional Spread

Misinformation, by contrast, is false or misleading information shared without malicious intent. While it lacks the deliberate design of disinformation, misinformation can be just as damaging, particularly when it originates from disinformation campaigns.

China, while also engaging in disinformation, has proven adept at fueling the spread of misinformation on U.S. social media. Beijing often plants seeds of false narratives and relies on unwitting social media users to propagate them organically.

Key tactics used by China include:

1. Amplifying Domestic Propaganda Abroad

Chinese state media and affiliated accounts promote narratives about the superiority of China’s governance model, often framing it as more stable and effective than Western democracies. These narratives frequently enter U.S. social media through echo chambers and misinformation spreaders.

2. Exploiting Crisis Events

During the COVID-19 pandemic, Chinese actors promoted unverified theories, such as the claim that the virus originated in a U.S. lab. These narratives spread rapidly as misinformation, with unwitting users amplifying them in the absence of verified information.

3. Flooding the Zone

China’s approach often involves overwhelming the information space with large volumes of content, making it difficult for users to distinguish between credible sources and misleading narratives. This tactic relies heavily on misinformation, as confused users are more likely to share incomplete or inaccurate information.

How These Strategies Impact U.S. Social Media

While disinformation and misinformation differ in intent, their effects are complementary. Disinformation primes the pump with fabricated narratives, while misinformation ensures these narratives gain traction and evolve organically. Together, they create a chaotic information environment where facts are difficult to discern and trust in institutions erodes.

For cleared professionals, this chaos has real-world implications. Adversaries target not just the general public but also key industries, including defense, intelligence, and critical infrastructure. Social media platforms are rife with content designed to manipulate decision-makers, influence public opinion, and gather information on individuals with access to sensitive information.

How to Mitigate the Threat

Professionals in national security must remain vigilant to the tactics of adversarial information operations. Key steps include:

1. Understand the Source

Before engaging with or sharing content online, verify the source. If a narrative seems inflammatory or highly partisan, consider whether it may originate from a disinformation campaign.

2. Identify Red Flags

Look for hallmarks of disinformation, such as coordinated messaging across multiple accounts, heavy reliance on emotionally charged language, or sudden spikes in engagement around obscure topics.

3. Educate Networks

Share best practices with colleagues, friends, and family to help prevent the spread of misinformation. The more informed people are about these tactics, the harder it is for adversaries to succeed.

4. Engage with Alternative Platforms

Alternatives such as Bluesky and other decentralized platforms may offer more resilience against bot infiltration and malign influence campaigns.
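The red flags in step 2 lend themselves to simple automated heuristics. Below is a minimal sketch, not an operational tool: it flags near-identical posts shared across different accounts (a hallmark of coordinated messaging) and sudden engagement spikes. The account names, post text, and thresholds are all hypothetical assumptions for illustration.

```python
from difflib import SequenceMatcher
from statistics import mean, stdev

# Hypothetical sample: (account, post text) pairs collected from a feed.
posts = [
    ("acct_a", "BREAKING: officials hide the truth about the blackout!!!"),
    ("acct_b", "BREAKING: officials hide the truth about the blackout!"),
    ("acct_c", "Local farmers market opens Saturday morning."),
    ("acct_d", "breaking: Officials hide the TRUTH about the blackout!!!"),
]

def coordinated_pairs(posts, threshold=0.9):
    """Flag pairs of near-identical posts from *different* accounts,
    a common hallmark of coordinated messaging campaigns."""
    flagged = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            (acct_i, text_i), (acct_j, text_j) = posts[i], posts[j]
            if acct_i == acct_j:
                continue
            # Case-insensitive similarity ratio between the two texts.
            ratio = SequenceMatcher(None, text_i.lower(), text_j.lower()).ratio()
            if ratio >= threshold:
                flagged.append((acct_i, acct_j, round(ratio, 2)))
    return flagged

def engagement_spikes(daily_counts, z=2.5):
    """Flag days whose engagement sits more than `z` standard deviations
    above the mean -- a crude proxy for an anomalous spike in attention."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    return [day for day, count in enumerate(daily_counts)
            if sigma > 0 and (count - mu) / sigma > z]
```

Real platforms apply far more sophisticated network analysis, but even these crude checks capture the intuition behind the red flags: identical wording from unrelated accounts, and engagement that jumps far outside a topic's baseline.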

A Contested Information Space

Disinformation and misinformation are critical components of adversarial strategies to undermine U.S. national security. While Russia often leads with disinformation, spreading intentional falsehoods to destabilize and distract, China amplifies misinformation to shape perceptions and muddy the waters. For cleared professionals, understanding these tactics is essential to safeguarding both personal and professional integrity in an increasingly contested information space. By staying informed and vigilant, we can counter these threats and preserve trust in the institutions and values we serve to protect.


Shane McNeil is a doctoral student at the Institute of World Politics, specializing in statesmanship and national security. As the Counterintelligence Policy Advisor on the Joint Staff, Mr. McNeil brings a wealth of expertise to the forefront of national defense strategies. In addition to his advisory role, Mr. McNeil is a prolific freelance and academic writer, contributing insightful articles on data privacy, national security, and creative counterintelligence. He also shares his knowledge as a guest lecturer at the University of Maryland, focusing on data privacy and secure communications. Mr. McNeil is also the founding director of the Sentinel Research Society (SRS) - a university think tank dedicated to developing creative, unconventional, and non-governmental solutions to counterintelligence challenges. At SRS, Mr. McNeil hosts the Common Ground podcast and serves as the Editor-in-Chief of the Sentinel Journal. All articles written by Mr. McNeil are done in his personal capacity. The opinions expressed in this article are the author’s own and do not reflect the view of the Department of Defense, the Defense Intelligence Agency, or the United States government.