The more I learn about Russian disinformation campaigns on social media, the more convinced I am that they can control public opinion in this country with relatively little effort and cost. X (formerly Twitter) is a breeding ground for opinions that take shape around trending topics which are, at the very least, amplified by Russian trolls and bots and, quite often, started by them from scratch.

Bots, Bots, Bots

The Justice Department recently announced the seizure of two domain names and the search of 968 social media accounts tied to an AI-enhanced social media bot farm. The farm relied on AI to craft fictitious social media profiles, almost all of them impersonating U.S. citizens, which were used to spread disinformation promoting Russian objectives in the United States and abroad. Bots are usually fake people with fake accounts; trolls are real people with fake accounts. Trolls are far more labor intensive from an information operations standpoint: to be viewed as credible, they must “age” their accounts and actively, personally interact with others to influence opinion. With advances in AI, however, bots are starting to take on the persona of troll accounts, actively starting and responding to conversations instead of simply retweeting or liking. (X recently removed users’ ability to see who liked each other’s posts unless the likes came from their own followers.) These responses are not particularly original and are often out of context with the line of discussion, but even so, they are effective enough to move trends and inflame public polarization.

A drop in the bucket

Almost 1,000 bots removed sounds like a nice coup for the DOJ (and X) until you realize it is tantamount to removing a dinghy from a cruise ship and expecting improved fuel economy. According to a SocialMediaToday.com article last week on Russian bots, X has 250 million daily users. Before Elon Musk took over Twitter, he claimed that 30% of those users were bots; the previous Twitter ownership said 5% was the more accurate number. Even splitting the difference at, say, 17%, and without really knowing the current numbers, that still means 42.5 million users are not human. Granted, many of these bots are benign (if you want to describe Tommy Chong’s edible ads littering your feed as benign), but that still leaves a tremendous number of weapons spreading disinformation (not to mention the troll accounts that often start the flames bots spread). In addition, most of the bots taken down were promoting pro-Russia, anti-Ukraine messages. Compared to the damage that could be done by spreading disinformation about current U.S. social, political, and domestic security events, the removed bots’ potential for inciting anger and panic was relatively low.
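The back-of-the-envelope arithmetic above can be sketched out directly. The figures here are the estimates quoted in the text (250 million daily users, bot shares of 30% and 5%), not verified counts:

```python
# Rough estimate of non-human accounts on X, using the figures
# cited above: 250M daily users, a 30% bot-share claim from Musk,
# and a 5% claim from the prior Twitter ownership.
daily_users = 250_000_000

def estimated_bots(share: float) -> int:
    """Implied number of bot accounts for a given bot share."""
    return round(daily_users * share)

low = estimated_bots(0.05)   # prior ownership's estimate
high = estimated_bots(0.30)  # Musk's estimate
mid = estimated_bots(0.17)   # roughly splitting the difference

print(f"{low:,} to {high:,}; midpoint {mid:,}")
# 12,500,000 to 75,000,000; midpoint 42,500,000
```

Even the lowest estimate dwarfs the 968 accounts in the DOJ takedown by four orders of magnitude.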

The rising tide of disinformation

The threat of foreign disinformation is evolving so rapidly that stopping it will, in my opinion, take a complete change in policy, funding, and talent recruitment methods by the United States and its allies. If even part of that change takes place, intelligence employment in both the public and private sectors should rise dramatically.

Joe Jabara, JD, is the Director of the Hub for Cyber Education and Awareness at Wichita State University. He also serves as adjunct faculty at two other universities, teaching intelligence and cyber law. Prior to his current job, he served 30 years in the Air Force, Air Force Reserve, and Kansas Air National Guard. His last ten years were spent in command and leadership positions, the bulk of them at the 184th Intelligence Wing as Vice Commander.