Shadow AI

Across industries, AI is already woven into everyday operations — often without approval, awareness, or oversight. Employees are turning on AI-enabled tools inside their workflows, from document editing to data analysis, without realizing they’ve just changed the organization’s risk profile. This quiet adoption is called Shadow AI — and it’s emerging as one of the biggest compliance and governance challenges of 2025.

Shadow AI – whether you've heard the term before or not, it's becoming a massive buzzword in the AI space. This blog covers what Shadow AI is, what the legal landscape looks like for those leveraging it, and what you need to do if you want to "flip the switch" on Shadow AI.

Shadow AI – sounds mysterious, Nick…what is it?

Imagine you've been collecting data within your company for months, years, or even decades. You have data coming out of your ears – data from HR applications, financial performance, user interactions on your webpage, purchase histories, contract analytics…the list goes on. Now, with all the handy-dandy AI tools that come in neatly packaged boxes, a team member just "flipped on" an AI feature – and your security team, governance team, legal team…all the people who can basically get fired for unauthorized AI tool usage…have no idea this employee just started using AI. That is Shadow AI.
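By the way – if you're wondering how teams even find this stuff, one common first step is scanning egress/proxy logs for traffic to known AI endpoints. Here's a minimal sketch, assuming a CSV proxy log with user, dest_domain, and timestamp columns; the domain watchlist is illustrative, not exhaustive.

```python
# Minimal sketch: surface potential Shadow AI by flagging outbound
# traffic to well-known AI API endpoints in an egress proxy log.
# The log format and the domain watchlist are illustrative assumptions --
# adapt both to your own environment.
import csv

AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_path: str) -> list[dict]:
    """Return proxy-log rows whose destination matches the AI watchlist."""
    hits = []
    with open(log_path, newline="") as f:
        # Assumes columns: user, dest_domain, timestamp
        for row in csv.DictReader(f):
            if row["dest_domain"] in AI_DOMAINS:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in find_shadow_ai("proxy_log.csv"):
        print(f"{hit['timestamp']} {hit['user']} -> {hit['dest_domain']}")
```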

Then there are the spinoffs of Shadow AI – I've seen terms such as Data Archaeology, Dark Data, and AI Archaeology, to name a few. These are strategies companies use to repurpose, explore, or analyze data they've already collected in order to find new AI use cases. Those use cases could be external, client-facing services sold as their own SaaS solutions, or internal ones that optimize workflows and processes. Let me underline this: it is a massive risk if you do not have a framework in place. I can't overstate the risk you take on as an organization if you go down this path with nothing in place to mitigate risk and ensure compliance with regulations. Even now, there are troves of copyright lawsuits stemming from Shadow AI. We'll talk about this more next.
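If your teams are doing this kind of data archaeology, one cheap early control is flagging sensitive-looking columns before anyone builds a use case on top of them. A rough sketch, assuming you can dump (table, column) pairs from your warehouse's information schema – the keyword match is a naive illustration, not a real PII classifier.

```python
# Rough sketch: flag columns whose names suggest personal data during a
# data-archaeology sweep. The keyword match is a naive illustration --
# a real deployment would use a proper PII classifier.
PII_HINTS = ("email", "ssn", "birth", "phone", "address", "name", "ip_")

def flag_sensitive_columns(columns: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """columns is a list of (table, column) pairs, e.g. pulled from
    your warehouse's INFORMATION_SCHEMA.COLUMNS."""
    return [
        (table, col)
        for table, col in columns
        if any(hint in col.lower() for hint in PII_HINTS)
    ]

sample = [
    ("purchases", "user_email"),
    ("purchases", "order_total"),
    ("hr_records", "date_of_birth"),
]
print(flag_sensitive_columns(sample))
# -> [('purchases', 'user_email'), ('hr_records', 'date_of_birth')]
```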

Intrigued? Let’s talk legal considerations.

First – let me start this section off with the usual disclaimer…in no way should what I’m about to discuss be interpreted as legal advice. This is for educational purposes only – if you require legal advice…hire a lawyer.

Great – that part's out of the way. Let's start with a simple example, and I'll show you how, without a framework in place, you're most likely going to end up in a world of hurt.

Example: Your team just finished their Data Archaeology review of your Snowflake DB and found billions of rows of data on users who have purchased products from you. They've created a new AI use case that your marketing team will use to target the demographics with the greatest propensity to turn a click into a sale, driving your revenue up 20% this quarter. The data includes users in America and Europe.

If you do not have a framework in place that ensures this AI use case satisfies regulatory requirements…buckle up for a visit from:

  • EU AI Act – even if this isn’t a High-Risk AI system, does it satisfy the transparency requirements?

  • CCPA – does it abide by the requirements of the California Consumer Privacy Act?

  • COPPA – Oops, some of those users are children under 13. Do you have, for example, verifiable parental consent to use their data?

  • GDPR – Oops, that data set includes people in the EU…do you have, for example, consent from every single user scoped to this use case? How's your data deletion process? How accurate is your data?

  • Copyright – Oops, that data you want to use was generated by an LLM that was itself trained on copyrighted materials.

Five legal regimes right there – and if you don't have proper evidence and controls in place, you're going to spend a few months, maybe a few years…surrounded by mahogany walls and padded chairs, with a pissed-off judge with an itchy gavel-hand sitting across from you.
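To make that concrete, here's what even a crude pre-flight gate on the marketing example might look like. This sketch assumes your user records already carry jurisdiction, age, and consent flags (they often don't – which is exactly the problem), and it illustrates the shape of the check, not any regulation's actual legal test.

```python
from dataclasses import dataclass

# Illustrative subset of EU country codes -- not a complete list.
EU_COUNTRIES = {"DE", "FR", "IE", "NL"}

@dataclass
class UserRecord:
    user_id: str
    jurisdiction: str        # hypothetical encoding, e.g. "US-CA", "DE"
    age: int
    marketing_consent: bool  # consent scoped to this specific use case

def eligible_for_targeting(u: UserRecord) -> bool:
    """Crude pre-flight gate -- shows the shape of the check only."""
    if u.age < 18:
        return False                 # exclude minors outright
    country = u.jurisdiction.split("-")[0]
    if country in EU_COUNTRIES and not u.marketing_consent:
        return False                 # GDPR posture: no scoped consent, no processing
    if u.jurisdiction == "US-CA" and not u.marketing_consent:
        return False                 # respect CCPA-style opt-outs
    return True

users = [
    UserRecord("u1", "US-CA", 34, True),
    UserRecord("u2", "DE", 29, False),
    UserRecord("u3", "US-TX", 15, True),
]
print([u.user_id for u in users if eligible_for_targeting(u)])  # ['u1']
```

If a record can't pass a gate like this, that's your signal the use case goes back through review – not into production.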

GRC Frameworks – the best friend you never knew you wanted.

Honestly, a GRC Framework is your only hope of navigating Shadow AI successfully. GRC stands for Governance, Risk, and Compliance – and a GRC function typically comprises legal, security, and audit teams. Each team obviously has its own role – but a workflow often looks like this:

All AI use cases are submitted to an Audit team, which reviews the use case, vets the controls, and reviews all evidence to ensure the controls are satisfied. Controls are the key to ensuring regulations are being followed. For example, the EU AI Act requires a post-market monitoring plan – so a control would require that a PMM plan is adequately defined, managed, and in place.
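Before we get to the other two teams, here's a rough sketch of what that intake record and approval gate could look like in tooling terms. The control IDs, requirement text, and statuses are hypothetical – this only shows the shape of the thing the Audit team works through.

```python
from dataclasses import dataclass, field
from enum import Enum

class EvidenceStatus(Enum):
    MISSING = "missing"
    SUBMITTED = "submitted"
    APPROVED = "approved"

@dataclass
class Control:
    control_id: str      # hypothetical ID scheme
    requirement: str     # the regulation/clause this control maps to
    evidence: EvidenceStatus = EvidenceStatus.MISSING

@dataclass
class AIUseCase:
    name: str
    owner: str
    controls: list[Control] = field(default_factory=list)

    def ready_for_approval(self) -> bool:
        """The gate: every mapped control needs approved evidence."""
        return bool(self.controls) and all(
            c.evidence is EvidenceStatus.APPROVED for c in self.controls
        )

uc = AIUseCase("marketing-propensity", "growth-team", [
    Control("EUAI-PMM-01", "EU AI Act: post-market monitoring plan in place"),
    Control("GDPR-CONS-03", "GDPR: consent scoped to this use case"),
])
print(uc.ready_for_approval())  # False until the Audit team approves evidence
```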

The legal team is extensively intertwined in the process – ensuring that the Audit team has applied the correct controls and that the evidence the Audit team has reviewed and approved is satisfactory.

The security team covers all the technical and security requirements of controls. These are your API heroes, your certification and framework heroes (SOC 2, NIST, etc.), and your data protection heroes.
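And one flavor of check the security team might automate – a quick sketch that turns a control ("AI endpoints must reject unauthenticated calls") into a repeatable test. The endpoint URL is hypothetical, and the 401/403 expectation is an assumption about how your gateway signals an unauthorized request.

```python
# Sketch: turn a security control ("AI endpoints must reject
# unauthenticated calls") into a repeatable automated test.
# The URL is a hypothetical internal endpoint, and the 401/403
# expectation is an assumption about your gateway's behavior.
import requests

def check_requires_auth(url: str) -> bool:
    """Control test: an unauthenticated request must be rejected."""
    resp = requests.get(url, timeout=5)  # deliberately sends no credentials
    return resp.status_code in (401, 403)

if __name__ == "__main__":
    ok = check_requires_auth("https://ai-gateway.internal.example/v1/predict")
    print("CONTROL PASS" if ok else "CONTROL FAIL: endpoint responded without auth")
```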

This is a very basic framework – but you get the gist. A GRC Framework requires an investment – it needs buy-in from the top down, and it is not something you silo to a single person. Not convinced? Check out the numbers below...

In 2025 alone:

  • Robinhood - $45M for securities law violations

  • Apple - $570M EU Digital Markets Act violation

  • Meta - $230M EU Digital Markets Act violation

  • Gucci, Chloe, Loewe - $170M EU competition rule breach

At the end of the day – if you're going to use Shadow AI, you'd better have a GRC team and a framework in place to vet AI use cases. This is not the time to skimp, penny-pinch, or budget-cut. This is the time to invest. Scale your AI responsibly and avoid others' mistakes.
