EU AI Act

Don’t Underestimate Low-Risk

3 Minute Read

European Union

The Act

The EU AI Act—you’ve heard about it. You’ve probably looked at the website, thumbed through the first tab, and said, “Yeah, I’ll look at this later…”. Let me save you a click, or even save you a few thousand clicks going through all of the materials, and give you a run-down of the EU AI Act.

It’s a beast, plain and simple, and it’s a beast really aimed at High-Risk AI Systems that EU citizens come into contact with. Some EU websites have gone so far as to say that most AI systems are not considered High-Risk AI Systems. That’s true, but it misses one big point: prohibited AI practices, namely deception, social scoring, subliminal manipulation, and biometric surveillance. There’s your average scam out there, ‘Hey, buy our e-marketplace and you’ll make $100k a month’, which obviously falls into the deception bucket (I’m looking at you, Ascend Ecom). But what many people probably don’t realize is that if your Low-Risk, wouldn’t-hurt-a-fly AI starts producing bad outputs and people rely on it to their detriment, the EU AI Act is going to drop a hammer on you.

So let’s talk about the two buckets super quick, catch you up to speed, then hit some preventative measures that you folks running Low-Risk AI use cases will be interested in.

High-Risk, High Responsibility


The EU AI Act classifies High-Risk AI Systems as AI used in areas like critical infrastructure, employment decisions such as promotions, college admissions, and law enforcement, to name some of the common categories. If you’re not 100% sure whether your AI system is considered High-Risk, check out the compliance tracker.

There are a truckload of requirements for High-Risk AI Systems…I’m not going to beat around the bush. Substantial risk management plans need to be in place, ideally before the AI use case is deployed, showing things like the statistical methods you are using, what outputs are considered outliers, and how you’re mitigating bias, to name a few. Once the AI system is online, you’re required to maintain a robust post-market monitoring plan with human oversight requirements (i.e., a human needs to be able to override the AI’s decisions).
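To make that “post-market monitoring” idea a bit more concrete, here’s a minimal sketch in Python of what an outlier check plus a human override hook could look like. Every name here is made up for illustration; the Act doesn’t prescribe any particular implementation.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Decision:
    score: float                      # the model's output, e.g. a ranking or credit score
    approved: bool                    # what the AI decided
    needs_human_review: bool = False

def monitor(decision: Decision, recent_scores: list[float], z_threshold: float = 3.0) -> Decision:
    """Flag outputs that look like statistical outliers so a human can take a look."""
    if len(recent_scores) >= 30:      # need a reasonable baseline window first
        mu, sigma = mean(recent_scores), stdev(recent_scores)
        if sigma > 0 and abs(decision.score - mu) / sigma > z_threshold:
            decision.needs_human_review = True
    return decision

def human_override(decision: Decision, reviewer_approves: bool) -> Decision:
    """A human reviewer can always overturn the AI's call."""
    decision.approved = reviewer_approves
    decision.needs_human_review = False
    return decision
```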

Low-Risk, “low responsibility”

You like the quotes there? I thought that was a nice touch. Low-Risk systems—anything that isn’t High-Risk: your chatbots, recommendation engines, your happy helpers. The EU AI Act says you’re good to go so long as you meet some basic transparency requirements…with one small caveat: you’d better not fall into a prohibited AI use case!

But how does that happen? How could my happy-helper AI ever fall into one of these dreaded prohibited AI use cases? The unfortunate truth is that models can be trained on bad data to do bad things, and even without malicious training, companies must constantly fight drift and bias as millions, if not billions, of data points stream in.
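If “drift” sounds abstract, here’s one simple way teams check for it: compare recent model inputs or outputs against a reference window and raise a flag when the distributions diverge. This is just a rough sketch using a two-sample Kolmogorov-Smirnov test, not anything the Act mandates.

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def drifted(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the recent sample looks statistically different from the reference."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

# Example: a baseline window from launch vs. this week's traffic
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
this_week = rng.normal(loc=0.4, scale=1.0, size=5_000)  # the data has quietly shifted
print(drifted(baseline, this_week))  # True -> investigate before it becomes a headline
```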

Want an example? Look no further than Grok—Elon Musk’s chatbot—which landed in hot water in early July when it was manipulated into making antisemitic comments. While it likely doesn’t outright qualify as a “prohibited” AI use case, it still serves as a useful example of how chatbots can go sideways. As Musk explained, “Grok was too compliant to user prompts…too eager to please and be manipulated, essentially. That is being addressed.”

So how do you protect yourself?

AI Governance – the hero nobody wanted, but now everybody needs


AI Governance used to be viewed in a bad light: companies saw it merely as a cost center, an “in the unlikely event something bad happens” kind of initiative. But now, that view is changing radically.

“We’re no longer asking if we need AI governance, but how fast we can embed it into our DNA.” — Michelle Lee

So why the shift? I have my theories—and one of them is that AI has made so many things so incredibly cheap that building in-house is becoming less and less common, with vendors replacing internal development. Today, enterprises are onboarding hundreds of AI vendors across diverse use cases, and most of those solutions fall into the Low-Risk AI Systems category under the EU AI Act.

The fix? Having an AI Governance program in place, one that vets all inbound AI use cases, changing AI use cases, and net-new AI use cases. A governance program that requires the business to understand how each use case combats bias and drift, that tracks where data originates and where it flows, and that has measures to stop any AI initiative from veering off course.
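What might that look like in practice? Here’s a bare-bones sketch of a use-case register and vetting gate; every field and name in it is hypothetical, but the idea is that no vendor AI use case goes live without declaring its purpose, data sources, and monitoring plan.

```python
from dataclasses import dataclass, field

PROHIBITED_PURPOSES = {"social scoring", "subliminal manipulation",
                       "biometric surveillance", "deception"}

@dataclass
class AIUseCase:
    name: str
    vendor: str
    purpose: str                               # plain-language description of what it does
    data_sources: list[str] = field(default_factory=list)
    drift_monitoring: bool = False
    bias_testing: bool = False

def vet(use_case: AIUseCase) -> list[str]:
    """Return a list of blockers; an empty list means the use case can proceed."""
    blockers = []
    if use_case.purpose.lower() in PROHIBITED_PURPOSES:
        blockers.append("purpose matches a prohibited AI practice")
    if not use_case.data_sources:
        blockers.append("no data lineage declared")
    if not (use_case.drift_monitoring and use_case.bias_testing):
        blockers.append("missing drift monitoring or bias testing plan")
    return blockers
```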

Being proactive with an AI Governance plan that manages your use case (or all of your vendor use cases) is your best bet for keeping a prohibited AI use case from plaguing your organization.

To wrap up, keep these two key dates in mind:

  • August 2, 2025: Binding obligations begin for general-purpose AI (GPAI) models, along with the Act’s governance rules, confidentiality safeguards, and penalties.
  • August 2, 2026: The bulk of the Act becomes applicable, including the requirements for most High-Risk AI Systems.

