The Dark Side of AI: Navigating the Risks of Shadow AI
The Unseen Dangers of Unsanctioned AI Use in the Workplace
With AI now an integral part of business operations, shadow AI has become the next frontier in information security. Here’s what that means for managing risk.
For many organizations, 2023 was the breakout year for generative AI. Large language models (LLMs) like ChatGPT have since become household names, and in the business world they're already deeply ingrained in numerous workflows, whether leadership knows it or not. According to a report by Deloitte, over 60% of employees now use generative AI tools in their day-to-day routines.
Opinions on this shift run the gamut, from those who see generative AI as an existential business opportunity to those who see it mainly as a source of risk. The truth, for now at least, lies somewhere between these opposing viewpoints. On one hand, any business that tries to sit out the generative AI revolution risks becoming irrelevant. On the other, those that aggressively pursue implementation with little regard for the security and privacy issues it presents risk falling foul of legislation like the EU's AI Act.
Either way, generative AI is here to stay. With that realization comes the risk of unsanctioned or inadequately governed use of AI in the workplace. Enter the next frontier of information security: shadow AI.
Shadow AI: The New Threat on the Block
Security leaders are already familiar with the better-known concept of shadow IT: the use of any IT resource outside the purview or consent of the IT department. Shadow IT first became a major risk factor when companies migrated to the cloud, and even more so during the shift to remote and hybrid work models. Most IT departments have since brought the problem largely under control, but there's a new threat to think about: shadow AI.
Shadow AI borrows from the same core concept, driven this time by the frenzied rush to adopt AI tools, especially generative AI, in the workplace. On the ground, workers are already using popular LLMs like ChatGPT to assist with everything from writing corporate emails to answering customer support queries. It becomes shadow AI when those tools or use cases are adopted without looping in the IT department.
The Lack of Governance and Strategic Alignment in AI Adoption
Almost every business already uses generative AI or plans to do so in the next few years, but, according to one recent report, just one in 25 companies has fully integrated AI across its organization. Clearly, while adoption rates have soared, governance has lagged far behind. Without that governance and strategic alignment, there's a lack of guidance and visibility, and shadow AI flourishes in the gap.
It’s Time to Get a Handle on AI Governance
All too often, disruptive new technologies lead to knee-jerk responses. That's especially the case with generative AI in cash-strapped organizations, which often view it primarily as a way to cut costs and lay off workers. The potential costs of shadow AI, however, can be orders of magnitude greater. These include generating false information, shipping code with AI-introduced bugs, and exposing sensitive information to models that train on "private" chats, as ChatGPT does by default.
We've already seen some major blunders at the hands of shadow AI, and we'll likely see many more in the years ahead. In one case, a law firm was fined $5,000 for submitting fictitious legal research generated by ChatGPT in an aviation injury claim. Last year, Samsung banned the popular LLM after employees leaked sensitive source code by pasting it into the chatbot.
Shadow AI: An Unseen Data Exfiltration Channel
As employees — with or without the knowledge of their IT departments — input more and more information into LLMs, generative AI has become one of the biggest data exfiltration channels of all. Naturally, that’s a major internal security and compliance threat, and one that doesn’t necessarily have anything to do with external threat actors. Imagine, for example, an employee copying and pasting sensitive research and development material into a third-party AI tool or potentially breaking privacy laws like GDPR by uploading personally identifiable information.
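To make the risk concrete, here is a minimal sketch, in Python, of the kind of outbound-prompt check a data loss prevention (DLP) gateway might run before text ever reaches a third-party LLM. The patterns, names, and sample prompt are illustrative assumptions, not a production detection ruleset.

import re

# Hypothetical patterns a DLP gate might flag before text leaves the network
# for a third-party LLM. Real deployments use far richer detection, such as
# named-entity recognition and document fingerprinting.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_outbound_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: customer jane.doe@example.com, SSN 123-45-6789."
findings = scan_outbound_prompt(prompt)
if findings:
    # Block, redact, or route for human review instead of forwarding to the LLM.
    print(f"Blocked: prompt contains {', '.join(findings)}")
else:
    print("Prompt clear to send.")

Even a crude gate like this catches the most obvious leaks; the harder cases, such as proprietary source code or unlabeled research data, need policy and training as much as tooling.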
Shore Up Cyber Defenses against Shadow AI
Because of these risks, it's crucial that all AI tools fall under the same level of governance and scrutiny as any other business communications platform. Training and awareness also play a central role, especially given the widespread assumption that publicly available models like ChatGPT, Claude, and Copilot are safe. They're not a safe place for sensitive information, particularly under default settings that can use conversations for model training.
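As one illustration of what that governance can look like in practice, here is a small, hypothetical Python sketch of an allow-list check that a web proxy or cloud access security broker (CASB) could apply to outbound AI traffic, treating LLM endpoints like any other SaaS service. The host names are placeholders chosen for the example, not vendor recommendations.

from urllib.parse import urlparse

# Hypothetical allow-list of AI services the organization has vetted.
# Anything else gets treated like any other unsanctioned SaaS endpoint.
SANCTIONED_AI_HOSTS = {
    "api.openai.com",             # example: approved under an enterprise agreement
    "internal-llm.corp.example",  # example: a self-hosted model
}

def is_sanctioned_ai_request(url: str) -> bool:
    """Check an outbound request URL against the sanctioned-AI allow-list."""
    host = urlparse(url).hostname or ""
    return host in SANCTIONED_AI_HOSTS

# A proxy or CASB rule could consult this before permitting the request.
print(is_sanctioned_ai_request("https://api.openai.com/v1/chat/completions"))  # True
print(is_sanctioned_ai_request("https://random-ai-tool.example/chat"))         # False

A gate like this only enforces which tools are reachable; pairing it with clear guidance on what may be shared keeps sanctioned tools from becoming sanctioned leaks.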
Above all, leaders must understand that using AI responsibly is a business problem, not just a technical challenge. Generative AI democratizes advanced technology in the workplace to the extent that any knowledge worker can get value from it. But it also means that, as employees rush to make their lives easier, unsanctioned AI use can quickly spiral out of control. No matter where you stand in the great debate around AI, if you're a business leader, it's essential to extend your governance policies to cover the use of all internal and external AI tools.