As 2023 comes to a close, it is safe to say that it was the year that the threat from artificial intelligence (AI) truly came into focus. It will likely remain a concern into 2024 and beyond, but it is just one of several cybersecurity trends that we can expect in the New Year.

AI-Driven Threats Not Going Away

Experts are warning that next year, we could see an exponential increase in the use of AI across the board, transforming both offense and defense in the cybersecurity landscape.

“In 2024, this will include the development of even more powerful and focused AI tools that generate deceptive content and sophisticated threats, such as deepfakes and spear phishing attacks, faster and on a much broader scale,” suggested Bob Rudis, vice president of data science at cybersecurity provider GreyNoise Intelligence.

“Attackers and defenders will both race to weaponize AI, ushering in a new era of sophisticated threats and defenses powered by machine learning,” Rudis told ClearanceJobs.

Deepfakes could increasingly be used as part of phishing and social engineering attacks, which are already seen as the weakest link in a cybersecurity chain.

“We’re just starting to scratch the surface on what will happen and how audio and video deepfakes will be used not just to sway or mislead public opinion but as new and powerful tools to penetrate the enterprise,” added David Ratner, CEO of cybersecurity platform maker HYAS.

“Despite awareness and continual training, social engineering attacks still prevail,” Ratner explained.

He told ClearanceJobs that the use of AI to create credible and impressive video and audio deep fakes has the potential to supercharge social engineering and phishing attacks.

“Employees are well trained to ignore the obviously fake email purportedly from the CEO,” Ratner further noted. “When there is a near-perfect digital copy of the CEO in the wild, utilizing both natural voice and video, identifying fact from fiction becomes increasingly difficult and incredibly complex.”

2024 is an Election Year!

Among the greatest threats could be election interference, especially as AI can be employed not only to manipulate videos but also to spread misinformation and disinformation.

“For those already fatigued by the AI hype cycle of 2023 – brace yourselves. With the upcoming U.S. presidential election, nation-state cyber activity is expected to surge, targeting election infrastructure and influencing the outcome through disinformation campaigns and other tactics. In addition, there will likely be a wave of espionage and information theft,” said Rudis.

Bad Use of AI

Ratner further told ClearanceJobs that it isn’t just bad actors using AI who pose an imminent threat. Improper use of AI can cause problems of its own.

“A story that emerged at BlackHat this year demonstrates once again that employees can inadvertently be their organization’s biggest threat,” Ratner continued. “Apparently, an employee at ‘Company A’ used an LLM (large language model) to help complete a white paper and asked the AI to write an executive summary and a conclusion.”

In doing so, the employee had to feed the entire paper into the LLM, which absorbed the unpublished work and spit it back out in summarized form, raising privacy and NDA concerns before the author had even published it.

“Everyone who conducts workforce security training needs to start warning about this concern, along with their phishing and social-engineering training,” said Ratner.
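Alongside that training, organizations can add simple technical guardrails. As a purely hypothetical sketch (no specific product or policy is described in the article), a pre-submission check could flag documents carrying confidentiality markers before an employee pastes them into an external LLM:

```python
import re

# Hypothetical markers; a real deployment would use a proper DLP policy,
# not a hand-rolled keyword list.
SENSITIVE_PATTERNS = [
    r"\bCONFIDENTIAL\b",
    r"\bNDA\b",
    r"\bINTERNAL USE ONLY\b",
    r"\bDO NOT DISTRIBUTE\b",
]

def is_safe_for_external_llm(text: str) -> bool:
    """Return False if the text carries any confidentiality marker."""
    return not any(re.search(p, text, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

draft = "CONFIDENTIAL - Company A white paper draft ..."
if not is_safe_for_external_llm(draft):
    print("Blocked: document appears confidential; do not send to an external LLM.")
```

A check like this catches only the obvious cases, which is why Ratner's point stands: awareness training remains the primary defense.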

The Positives of AI in 2024

It isn’t all doom and gloom when it comes to AI, however. The technology could have some positives, said Russell Sherman, CTO and co-founder of third-party cyber risk management platform VISO TRUST.

“The democratization of Generative AI (GenAI) is set to transform our workplaces, breaking down barriers and making collective knowledge accessible across roles,” Sherman offered. “It’s not just a trend; it’s an evolution in how we work and learn.”

GenAI’s democratization could also mean more than just boosted productivity and efficiency; as Sherman suggested, it could empower everyone, regardless of technical expertise, to contribute and innovate.

“However, as we embrace this transformation, it’s crucial to acknowledge the concerns it brings, especially around security. Our journey toward progress should be both inclusive and secure,” Sherman told ClearanceJobs. “As we democratize GenAI, fostering a workplace where everyone can thrive, security becomes paramount. It’s not just about data; it’s about trust and responsibility.”

The Insider Threat Will Remain

Even without AI, some of the greatest cybersecurity threats will come from within an organization.

“Over 90% of the world’s organizations are completely unprepared for the risks imposed by insiders. Furthermore, these threats are growing in frequency by nearly 50% each year, and the scope of the damage from a single event is growing as well. Insiders already have access to an organization’s most valuable assets, including customer information, intellectual property, trade secrets, etc. Insiders inherently know what is valuable. Their theft or leakage can even become an ‘extinction event’ for an organization,” warned Troy Batterberry, CEO and founder of security provider EchoMark.

One concern is that while many technologies attempt to monitor or even block end-user behavior to guard against threats, such systems can block entirely legitimate activity and frustrate the people in the organization who are simply trying to do their jobs.

Moreover, these tools can be noisy with “false positives” and can even flag some of an organization’s best and hardest-working employees, creating real morale issues.

“A different approach is required,” Batterberry told ClearanceJobs. “By making each person’s copy of private information securely watermarked and tied to their identity, organizations can dramatically raise the stewardship and accountability of private information without further impeding the ability of everyone to get their job done. The mere presence of watermarks will reduce leaks, and should one still happen, organizations can easily and quickly find the source.”
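EchoMark's actual technique isn't detailed in the article, but the general idea Batterberry describes can be illustrated with a toy sketch: embed a recipient-specific identifier invisibly in each person's copy of a document, so any leaked copy reveals who it was issued to. Everything below (the zero-width-character encoding, the 16-bit ID) is an assumption made for illustration:

```python
# Illustrative only: encode a recipient ID as zero-width characters
# appended to a text document. Real watermarking schemes are far more
# robust (surviving copy/paste, reformatting, screenshots, etc.).
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def watermark(text: str, recipient_id: int) -> str:
    """Embed recipient_id (as 16 bits of zero-width chars) into the text."""
    bits = format(recipient_id, "016b")
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_id(text: str) -> int:
    """Recover the recipient ID from a watermarked copy."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return int(bits, 2)

copy_for_alice = watermark("Q3 acquisition memo ...", recipient_id=42)
assert extract_id(copy_for_alice) == 42  # a leaked copy identifies its recipient
```

The watermarked copy looks identical to the original on screen, which is the point: as Batterberry notes, the mere presence of watermarks deters leaks, and a leaked copy can be traced to its source.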

New Buzz Term: Deception Engineering

Finally, 2024 could be the year that “Deception Engineering” more widely enters the cybersecurity lexicon. It is the process of building, testing, and implementing deception-based defenses within the enterprise to disrupt adversary operations and playbooks.
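One common deception-engineering building block is the "honeytoken": a decoy credential or record planted where no legitimate process should ever touch it, so any use of it becomes a high-confidence alert. The sketch below is a hypothetical minimal example, not a description of any vendor's product:

```python
import secrets

# Plant decoy credentials (honeytokens) in places attackers look:
# config files, code repos, environment variables. No legitimate
# system ever uses them, so any use signals an intrusion.
HONEYTOKENS = {
    "AKIA" + secrets.token_hex(8).upper(): "decoy AWS key planted in repo",
}

def check_credential_use(credential: str) -> bool:
    """Return True (and raise an alert) if a planted decoy was used."""
    if credential in HONEYTOKENS:
        print(f"ALERT: honeytoken triggered ({HONEYTOKENS[credential]})")
        return True
    return False
```

Because a honeytoken has no legitimate use, it produces essentially no false positives, in contrast to the noisy behavioral monitoring discussed above, and it tells defenders exactly which planted asset the adversary found.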

“As cybercriminals continue to evolve their strategies, enterprises will increasingly adopt deception engineering techniques to better understand their security vulnerabilities and protect their assets,” said Rudis.

“CISOs will undertake a delicate balancing act, racing to enable AI innovation while ensuring robust protections are built-in by design,” added Rudis. “AI security will emerge as a top priority, much like mobile security during the BYOD era. The cat-and-mouse game will intensify, but with careful planning and responsible AI adoption, cyber defenders can gain an edge over attackers in 2024. Enterprises will have an increasing interest in deception technology in the coming year because it illustrates how secure assets are most easily exploited and who wants them.”


Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan and can be reached at petersuciu@gmail.com. You can follow him on Twitter: @PeterSuciu.