AI Governance Framework
AI governance has become an integral part of how organizations manage and deploy AI across the enterprise. This post looks at how governance frameworks operate in practice, the balance teams must strike between automation and oversight, and how different risk dimensions influence governance decisions. It offers a practical view of the considerations organizations face as AI use continues to expand.
Let’s talk AI governance frameworks – I’ve mentioned them in previous posts and stressed how important they are when it comes to things such as Shadow AI…but let’s dive a little deeper into what they actually are, common pitfalls I’ve seen teams fall into, the ongoing balancing act of autonomous workflows versus risk, and finally a comparison between model risk and use-case risk.
Team Structures & Swim Lanes – Common Themes:
First, let’s chat about what a governance framework actually is. A governance framework is a system of processes and workflows that allows a team to govern the use of AI within their company. The team, from the top down, will look something like the below:
Executives:
Chief Risk Officer / Chief Compliance Officer / Chief Privacy Officer / Chief Information Officer
Senior Leadership:
Head / Director of Governance
Compliance Director
Boots-On-The-Ground:
Risk / Governance Analyst
Policy Manager
Data Governance Analyst
Third-Party Risk Manager
Bear in mind – this isn’t a hard and fast team structure. Some companies have deep technical engineers on the team who assist with APIs, ETLs, and integrations…while other teams have a heavier presence of internal auditors. All in all, the common theme in team structure looks like the above.
With that said – what does the framework actually look like? This is where the complexity really comes in – and we could spend days discussing all the nuances…but we’re going to focus, again, on the common themes. The first of which is the use of “Swim Lanes”. I love this analogy – think of when you go to the YMCA: those swim lanes are separated by a buoy string and a whistle-happy lifeguard who yells at any swimmer who even appears to lean on one of those lines. The swim lanes are supposed to be separate from each other…one not crossing into the other. Likewise, in a governance framework, you too want swim lanes when vetting your AI use cases.
The low-risk swim lane – this is your swim lane dedicated to low-risk AI use cases. These are use cases that pose very little, if any, risk to the organization – and thus do not require a full review of each and every part of the use case. This swim lane is intended to move use cases through their life cycle quickly – so that the governance team is freed up to do a deeper review in the high-risk swim lane. These low-risk lanes are really where you want to focus on leveraging autonomy…ask yourself: what can safely be automated to move these low-risk use cases through the process quickly? One of the most common pieces of autonomy I see applied is risk classification. For example, an organization might want to know if PII ever leaves their org…if the answer is “No”, then the use case is automatically flagged as “low risk”.
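To make that concrete, here’s a rough sketch in Python of what that kind of automated classification rule might look like. The intake fields and the triggers are hypothetical – every org will have its own questionnaire – but the shape of the logic is the point:

```python
from dataclasses import dataclass

# Hypothetical intake record – field names are illustrative, not from any specific tool.
@dataclass
class UseCaseIntake:
    name: str
    pii_leaves_org: bool       # "Does PII ever leave the organization?"
    uses_external_model: bool  # Is the model hosted outside the org?

def classify_risk(intake: UseCaseIntake) -> str:
    """Auto-assign a risk classification from simple intake answers.

    Anything that trips a high-risk trigger falls out of the fast lane;
    everything else is flagged low risk and moves through quickly.
    """
    if intake.pii_leaves_org or intake.uses_external_model:
        return "high"
    return "low"

print(classify_risk(UseCaseIntake("Meeting notes summarizer",
                                  pii_leaves_org=False,
                                  uses_external_model=False)))
# -> low
```

Note the design choice: the automation only decides which lane a use case enters – it never approves anything on its own.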
Now let’s talk about the high-risk swim lane – let’s say for a moment that the answer to the “PII” question above is “Yes”…now we not only automatically flag that use case as “high risk”…but we also want other team members to review it. We need a security team to vet how that data is moved out of the organization and what security measures are in place to protect the data in transit (e.g. encryption). The legal team is going to want to know who the data actually belongs to (e.g. an EU citizen?), and what agreements and contracts are in place to protect the data with whoever the recipient is. Finally, you may have a privacy team who needs to review the use case – to find out exactly what kind of information is leaving the organization.
Imagine now, putting a legal team review, a privacy team review, a security team review…on every…single…AI use case. This is why one of the most common themes in an AI governance framework is swim lanes.
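Here’s an equally rough sketch of how that fan-out to reviewer teams might be automated for the high-risk lane. Again, the team names and trigger questions are assumptions for illustration, not a standard:

```python
# Illustrative only – team names and trigger questions are assumptions.
HIGH_RISK_TRIGGERS = {
    "pii_leaves_org":      ["security", "legal", "privacy"],
    "uses_external_model": ["third_party_risk"],
    "customer_facing":     ["legal"],
}

def assign_reviewers(answers: dict[str, bool]) -> list[str]:
    """Fan a high-risk use case out to every reviewer team whose trigger applies."""
    reviewers = {"governance"}  # every use case gets at least a baseline governance check
    for question, teams in HIGH_RISK_TRIGGERS.items():
        if answers.get(question, False):
            reviewers.update(teams)
    return sorted(reviewers)

print(assign_reviewers({"pii_leaves_org": True, "uses_external_model": True}))
# -> ['governance', 'legal', 'privacy', 'security', 'third_party_risk']
```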
Common Pitfalls:
The most common pitfalls I see are below – I pose these more for reflection on your current processes than to introduce additional discussion or dialogue:
- Lack of swim lanes. Governance teams are drowning in AI use cases; they can’t tackle their backlog and they can’t keep up with net new use cases coming in.
- Lack of autonomy. Now, I want to be clear that there are regulations out there (e.g. the EU AI Act) which require a human-in-the-loop for certain AI use cases, so when I say a lack of “autonomy” is a common pitfall…I am in no way saying that an organization should be putting every decision through an agentic AI service to automate as much as it can. What I mean is that there are ways to automate reviewer assignments, risk classifications, even control assignments…which take significant strain off a governance team (see the sketch after this list).
- No investment in the governance team. I’ve seen first-hand the direct impact on a governance team that does not have the support of the executive suite or the organization as a whole. The cascading result is far too much automation, because the team cannot keep up with the use cases – opening the organization up to a crushing level of risk and putting it on the path to a regulatory violation the moment that automation makes the wrong call.
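As promised, here’s a small sketch of what automated control assignment could look like. The control IDs and trigger attributes are invented for illustration, and a human reviewer would still confirm the final set:

```python
# Hypothetical control catalog – control IDs and trigger attributes are made up.
CONTROL_RULES = {
    "pii_leaves_org":  ["ENC-01 encryption in transit", "DPA-02 data processing agreement"],
    "model_retrains":  ["MON-03 drift monitoring", "VAL-01 periodic revalidation"],
    "customer_facing": ["HUM-01 human review of outputs"],
}

def assign_controls(attributes: dict[str, bool]) -> list[str]:
    """Pre-populate the controls a use case must evidence, based on intake answers.

    This only removes the manual lookup work – a human reviewer still confirms
    the final control set, and nothing here approves a use case.
    """
    controls: list[str] = []
    for attribute, required in CONTROL_RULES.items():
        if attributes.get(attribute, False):
            controls.extend(required)
    return controls

print(assign_controls({"pii_leaves_org": True, "customer_facing": True}))
# -> ['ENC-01 encryption in transit', 'DPA-02 data processing agreement', 'HUM-01 human review of outputs']
```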
Autonomous Workflows v. Risk Tolerance:
This may be an obvious statement – but it’s a good place to start. The more autonomy you put into a governance framework, the more risk you take on. Think about it – even if you only automate setting the risk classification on a use case and everything else is done manually, there is still a minute risk that the use case is classified incorrectly. Think of the below non-exhaustive list of items that go into a use case review:
- Risk classification
- Risk scenarios
- Risk controls
- Regulations
- Regulation controls
- Assigning reviewer teams
- Completing use case questionnaires
- Providing evidence to controls
- Approving use cases
The more you automate – the greater the chance that the automation gets something wrong. So if you’re in charge of a governance team and you’re looking at automating processes to keep up with demand…you need to figure out what your risk tolerance is, and what the needs of the organization are. In large, multi-billion-dollar enterprises, it’s unrealistic to manually govern every AI use case…the team required for that would be significant. Likewise, automating every decision in a use case’s life cycle takes on too much risk at the organization level…so you need to land somewhere in between. But where should you land? That’s the million-dollar question. My recommendation goes back to one of my favorite questions…how do you eat an elephant? One bite at a time. If you find yourself in a situation where your governance framework has far too many manual inputs and touch points…start automating small at first. For example, if you decide to automate risk classification – start there, then measure the delta over the next few days, weeks, even months. Was there a significant delta between the number of low-risk and high-risk use cases? If so – do a root cause analysis…manually review the use cases that were automatically assigned high risk and ascertain whether or not the automation is working as intended. Iterate and refine – until you are superbly confident that the autonomy accurately reflects what a human would decide, then look at automating something else. And take care not to automate anything where an applicable regulation requires a human in the loop.
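For what it’s worth, the “measure the delta” step doesn’t need to be fancy. Here’s a toy sketch of comparing automated classifications against a manual spot-check of the same use cases – the sample data and structure are made up purely to show the idea:

```python
from collections import Counter

# Toy sample: each record is (automated_classification, manual_spot_check).
# In practice you'd pull this from your intake tool over a few days or weeks.
SAMPLE = [
    ("low", "low"), ("low", "low"), ("high", "high"),
    ("high", "low"),   # automation over-classified – annoying, but safe
    ("low", "high"),   # automation under-classified – the dangerous direction
]

def classification_delta(sample):
    """Summarize how the automated classifications split, and where they disagree with humans."""
    auto_split = Counter(auto for auto, _ in sample)
    disagreements = [(auto, manual) for auto, manual in sample if auto != manual]
    return dict(auto_split), disagreements

split, disagreements = classification_delta(SAMPLE)
print("Automated split:", split)                       # {'low': 3, 'high': 2}
print("Disagreements to root-cause:", disagreements)   # review these by hand before automating more
```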
Model Risk v. Use-Case Risk:
One distinction that often gets overlooked in AI governance frameworks is the difference between model risk and use-case risk. They are related — but they are not the same thing, and treating them as interchangeable can create blind spots.
Model risk is centered on the AI system itself. This includes things like how the model was trained, the quality and provenance of the training data, how frequently the model changes or retrains, its level of autonomy, and whether its outputs are explainable. A model that continuously learns, pulls from external data sources, or operates as a black box inherently carries more risk than a static, well-understood model — regardless of where it’s used.
Use-case risk, on the other hand, is about how that model is applied. The exact same model can be low risk in one context and high risk in another. A generative model used to summarize internal meeting notes is fundamentally different from that same model being used to influence credit decisions, hiring outcomes, or customer communications. The impact to individuals, the potential for harm, and the regulatory exposure all live at the use-case level.
This is where governance teams can get into trouble. If you only assess model risk, you miss the downstream consequences of how the model is actually being used. If you only assess use-case risk, you may underestimate the underlying technical behavior of the model itself. Mature governance frameworks evaluate both dimensions together — and often route them through different swim lanes — before landing on an overall risk posture for the AI system.
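If you want to picture how those two dimensions might feed one overall posture, here’s one possible sketch – the factors, weights, and thresholds are purely illustrative, not a published methodology:

```python
# Purely illustrative scoring – factors, weights, and thresholds are assumptions.
MODEL_RISK_FACTORS    = {"continuously_retrains": 2, "black_box": 2, "external_data_sources": 1}
USE_CASE_RISK_FACTORS = {"affects_credit_or_hiring": 3, "customer_facing": 2, "pii_involved": 2}

def risk_posture(model_flags: set[str], use_case_flags: set[str]) -> str:
    """Score model risk and use-case risk separately, then combine them.

    Taking the worse of the two dimensions keeps a high-risk use of a benign model,
    or a benign use of a high-risk model, from slipping through on the other score.
    """
    model_score    = sum(MODEL_RISK_FACTORS.get(f, 0) for f in model_flags)
    use_case_score = sum(USE_CASE_RISK_FACTORS.get(f, 0) for f in use_case_flags)
    worst = max(model_score, use_case_score)
    return "high" if worst >= 3 else "medium" if worst >= 1 else "low"

# The exact same model, two very different uses:
print(risk_posture({"black_box"}, {"customer_facing", "pii_involved"}))  # -> high
print(risk_posture({"black_box"}, set()))                                # -> medium
```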
Ignoring either side of this equation doesn’t just create inefficiency…it creates risk.
Hope this all helped! Good luck out there.

