Let me make this clear before I pat myself on the back: of every 100 ideas I have, 50 have been thought of before and 49 are just bad. However, several years ago, roughly around the 2016 elections, I identified two hot topics as "cybersecurity issues" that should be part of any academic program: disinformation and the enormous and growing role of social media in people's lives. I know that psychology, journalism, strategic planning, and marketing also touch those topics, but from a liability standpoint, given the effects social media would have on cybersecurity and the online behavior of users, it was easy for me to see the train wreck that was coming. When I moved over to teaching, I immediately incorporated both topics into my curriculum with the same weight as web exploitation or network defense operations. As for my 99 bad thoughts and ideas…no comment.

Social Media Companies Under Legal Scrutiny

Social media has been the target of multiple lawsuits over the past few years. First, plaintiffs came after companies like Meta and Google for violations of privacy laws related to data sharing and confusing opt-out guidance. Next, we started to see cases where social media providers were sued for failure to act by not taking down harmful or fake content, or the converse, for taking down content that was protected free speech. This past year marked a huge increase in "addictive algorithm" cases, in which multiple school districts sued companies for getting kids hooked on content that was detrimental to their mental health, delivered by algorithms based on their likes and interests. Two weeks ago, the families of victims of the Buffalo, NY mass shooting filed suit in New York Supreme Court against multiple social media companies, alleging their algorithms promoted racist behavior and violent extremism. The companies named in the lawsuit include Meta (which owns both Facebook and Instagram), Reddit, Amazon (which owns Twitch), Google, YouTube, Discord, and 4chan.

Complex Legal Landscape

Some interesting language in the complaint declared that the gunman, 20-year-old Payton Gendron, was not radicalized or even racist until he became addicted to social media and was lured into the "vortex" of racism and extremist propaganda. The lawsuit also alleges Gendron was drawn into isolation by social media, which immersed him in the harmful behaviors of others online. Last month, the Supreme Court ruled in favor of social media in Twitter v. Taamneh, finding that Twitter's algorithms did not make it responsible for ISIS activity under the Anti-Terrorism Act, and the Court indicated the same reasoning would apply under Section 230 of the Communications Decency Act.

Implications on Humanity and Cybersecurity

Given the above, the road to success may be a difficult one for the plaintiffs, but the case still represents yet another legal avenue pursued against social media companies, as the court of public opinion continues to weigh whether the merits of these platforms outweigh their negative consequences. To reiterate: cybersecurity does not only include theft of property or destruction of data, but also the online content we consume and the strategic and everyday implications that content has on humanity.

Joe Jabara, JD, is the Director of the Hub for Cyber Education and Awareness at Wichita State University. He also serves as adjunct faculty at two other universities, teaching Intelligence and Cyber Law. Prior to his current job, he served 30 years in the Air Force, Air Force Reserve, and Kansas Air National Guard. His last ten years were spent in command/leadership positions, the bulk of which were at the 184th Intelligence Wing as Vice Commander.