by Val LeTellier and David Niccolini

The rise of Generative Artificial Intelligence (GenAI) has the potential to radically change insider risk management in the United States. This article examines four intersection points between GenAI and insider risk management. The first (#1) is the immense value of emerging GenAI intellectual property (IP) to our adversaries, and the assumption that they will use all means necessary, including insiders, to steal that IP. The remaining three lie in the use of the technology itself: malicious actors using GenAI to penetrate current insider threat countermeasures (#2), insider threat practitioners using it to strengthen their defenses (#3), and unaware employees unintentionally leaking sensitive information through commercially available AI tools (#4).

Intersection point #1: GenAI is highly valuable IP sought by adversaries.

Arguably, GenAI is the most critical technology in modern nation-state and commercial competition, not least because of its value to innovative weaponry. In fact, we believe it is now directly tied to the worldwide balance of power. Vladimir Putin has said that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has indicated that it aims to become the global leader in AI by 2030.

As such, it is a primary target not only for foreign intelligence services but also for corporate competitors, criminal groups, and ideological hackers. Moreover, as is common in the rush to be first to market with an emerging technology, GenAI IP protection and insider threat countermeasures are lagging at the leading firms, leaving them open to potential insider theft.

Intersection point #2: It enables more sophisticated, refined, and creative malicious attacks.

GenAI is a powerful tool for malicious actors. It can generate human-like text, clone voices, and even produce video deepfakes that fool the human eye. These advances significantly increase the efficiency and effectiveness of traditional phishing, spear-phishing, and smishing attacks. If that wasn’t enough, GenAI can quickly find code vulnerabilities, create tailored malware, and write software exploits. There is even a subscription-based malicious generative AI tool that creates tailored deceptive content for the most novice cyberattacker. This one illicit product combines existing automated tools, exploits, and publicly available datasets to instantly develop customized attacks, delivered in a SaaS model. Put simply, a robust, tailored, and sophisticated attack plan is available to anyone with a little money, a little know-how, and a motive.

A telling example of the depth of this threat is criminals’ use of GenAI to clone voices and create video deepfakes that convinced a finance employee at a multinational firm in Hong Kong to make a series of transfers, netting the attackers $25 million.[1]

Intersection point #3: It enables a stronger defensive insider risk posture.

Now for some good news. GenAI is already being used by the “good guys” to enhance behavioral assessments of network users and identify indicators of potential insider activity earlier. These advances show real promise and are likely just the beginning; we believe that much deeper, as yet undefined, applications of AI to traditional insider risk management are coming. An AI ‘arms race’ is clearly coming, and we hope we are up to the challenge posed by the multitude of malicious actors on the other side. We are already beginning to reap the benefits of greater information transparency and decentralized decision-making, the cornerstones of effective insider risk management. The emergence of the next generation of GenAI insider threat countermeasures will be exciting to witness.
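To make the behavioral-analytics idea above concrete, the minimal sketch below shows one common approach: scoring per-user activity with an unsupervised anomaly detector and flagging outliers for analyst review. The feature names, synthetic data, and thresholds are illustrative assumptions, not a description of any specific program or vendor product.

```python
# Minimal sketch: flagging anomalous user activity with an unsupervised model.
# Feature names, synthetic data, and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical daily activity features per user-day:
# [logins_after_hours, files_downloaded, external_emails_sent, usb_write_events]
normal_activity = rng.poisson(lam=[1, 20, 5, 0], size=(500, 4))
suspicious_activity = np.array([[7, 400, 60, 12]])  # bulk download, odd hours, USB use

X = np.vstack([normal_activity, suspicious_activity])

# Isolation Forest isolates outliers without needing labeled insider-threat data.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(X)

scores = model.decision_function(X)  # lower score = more anomalous
flags = model.predict(X)             # -1 = flagged for analyst review

for i in np.where(flags == -1)[0]:
    print(f"User-day {i} flagged for review (anomaly score {scores[i]:.3f})")
```

In practice, scores like these would feed an analyst workflow alongside contextual and human indicators rather than trigger action on their own.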

Intersection point #4: It creates unintentional insider activity.

GenAI tools are now available to individuals through low-cost B2C offerings. As more and more people get comfortable using these tools, companies and agencies are finding that well-meaning employees are unintentionally leaking sensitive information into AI large language models (LLMs). There is a growing trend of blending the personal and the professional, and younger generations are not as aware of, or frankly, as concerned about, the IP, proprietary, and sensitive data of their companies or agencies. As this data leaks into LLMs, malicious actors will use it to maximum benefit. This newly recognized risk of GenAI adoption will require serious security governance, policy, and procedures.
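As one hedged illustration of what such governance might look like in practice, the sketch below screens outbound prompts for sensitive patterns before they reach an external model. The patterns, the "PROJECT-" marking convention, and the blocking behavior are hypothetical examples, not a prescribed policy.

```python
# Minimal sketch: screening outbound GenAI prompts for sensitive content
# before they leave the corporate boundary. Patterns and markings are
# illustrative assumptions, not a real data-classification scheme.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "internal_marking": re.compile(r"\b(?:PROPRIETARY|PROJECT-[A-Z]{4,})\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_to_llm(prompt: str) -> str:
    hits = screen_prompt(prompt)
    if hits:
        # Block (or route for review) instead of sending the data to an external model.
        return f"Blocked: prompt contains sensitive content ({', '.join(hits)})."
    return "Prompt cleared for submission."  # placeholder for the actual LLM call

if __name__ == "__main__":
    print(submit_to_llm("Summarize the PROPRIETARY design notes for PROJECT-FALCON."))
    print(submit_to_llm("Draft a polite out-of-office reply."))
```

Real deployments would pair this kind of screening with approved enterprise AI tools, logging, and user training rather than relying on pattern matching alone.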

Be Brave in the GenAI World

We believe that GenAI is more substantially intertwined with insider risk management than any other technology, and it will continue to take us into murky and uncharted territory. For the moment, the U.S. and her allies seem to have the advantage in GenAI development, but without stronger insider risk management, that position is at great risk. To borrow from Shakespeare and Huxley, it is a brave new world. We must be as brave as the world around us.


Val LeTellier and David Niccolini are the co-founders of 4thGen, an insider risk consultancy focused on protecting the intellectual property (IP) of its clients. Val ran security, intelligence, and counterintelligence operations as a State Department Diplomatic Security Special Agent and CIA Operations Officer. Following his service, he co-founded a cybersecurity joint venture and developed insider threat programs for leading private and public sector organizations. He continues to serve the insider risk management domain in various commercial and non-profit capacities. David is a serial entrepreneur in the enterprise risk management space. During his 25 years of professional experience, his work has spanned six continents and dozens of industry verticals, and has included engagements with over 120 multinational corporations, including 20 percent of the Fortune 100.

[1] https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
