AI governance: constraint and the circle

Stanford HAI published the 2026 AI Index Report last week. In past years, the report's governance chapters read as prescriptions for where AI governance should go. This year the conversation has shifted to what is actually being done, and how well.

Two angles from Chapter 3 and Chapter 8 are worth pulling out.

1. Responsible AI is institutionalizing, but the foundation is not keeping up

A striking contrast runs through the numbers.

On the institutionalizing side:

- ISO/IEC 42001 citations jumped from zero to 36% in a single year, and a wave of signal certifications landed in 2025.
- AI governance roles in the Second Line grew 17% year over year.

On the not-keeping-up side:

- 59% of organizations report a knowledge gap around what their new AI governance roles actually own.
- The Transparency Index dropped back to 40, cutting visibility into vendor training data and post-deployment behavior.
- AI incident counts continue to climb.

Put these together and you get a familiar enterprise scenario: compliance has finally assigned someone to AI governance, policy documents are being drafted, but when a business line asks how to run a compliance review on their Copilot pilot, no one has a clean answer. Incidents are growing, external standards are shifting, internal baselines are still under construction.

Three practical takeaways

ISO/IEC 42001 is worth getting familiar with early. From zero to 36% citation in a single year is the kind of trajectory that tends to produce a new compliance anchor, similar to what ISO 27001 became for information security. A set of signal certifications landed in 2025: Anthropic (January 2025), Microsoft extending scope to Microsoft 365 Copilot, KPMG International (December 2025, the first Big Four international entity). KPMG is worth a closer look. Its international certification came after member firms in Australia, Spain, India, and the US had already been certified. The full cycle ran roughly one to two years, which is a useful planning anchor.

Put AI governance in the Second Line of Defense. A 17% increase in roles sounds substantial, but the 59% knowledge gap is the more telling number. Most organizations are hiring people with “AI” in their title without a clear view of what the role owns or how it divides responsibility with the CISO, DPO, and Internal Audit. AI risk cuts across data privacy, operational risk, model risk, and third-party risk. A position placed alongside existing Risk & Compliance tends to coordinate across these categories better than one placed under IT or Legal.

Plan for the transparency regression. The Transparency Index dropping back to 40 means less visibility into vendor training data and post-deployment behavior. But the reverse is also happening: vendors are starting to use certification as a trust signal. In its ISO 42001 documentation, Microsoft explicitly states that customers can use Microsoft’s certification in their own compliance assessment. Translation: “we take some of the third-party risk assessment burden off your plate.” The next revision of your Vendor Risk Assessment template should include a dedicated AI vendor section covering model cards, evaluation reports, incident disclosure practices, and ISO 42001 certification status.
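
To make that concrete, the AI vendor section can start as a structured record that procurement fills in per vendor. A minimal sketch in Python; the class and field names are illustrative, not drawn from the report or any template standard:

```python
from dataclasses import dataclass


@dataclass
class AIVendorAssessment:
    """One AI-specific section of a vendor risk assessment.

    Field names are illustrative; adapt them to your own VRA template.
    """
    vendor: str
    model_card_provided: bool = False         # public or under-NDA model card
    eval_reports_provided: bool = False       # benchmark / red-team evaluation reports
    incident_disclosure_policy: bool = False  # documented AI incident disclosure process
    iso_42001_certified: bool = False         # certification status, scope verified

    def open_items(self) -> list[str]:
        """Return the evidence items still missing for this vendor."""
        checks = {
            "model card": self.model_card_provided,
            "evaluation reports": self.eval_reports_provided,
            "incident disclosure policy": self.incident_disclosure_policy,
            "ISO/IEC 42001 certification": self.iso_42001_certified,
        }
        return [name for name, done in checks.items() if not done]


# Example: a vendor that has shipped a model card but nothing else.
print(AIVendorAssessment("ExampleVendor", model_card_provided=True).open_items())
```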

2. Data sovereignty is forking, and cross-border templates may need a rebuild

The numbers from Chapter 8 worth paying attention to:

Data localization measures adopted through 2024, by region:

- East Asia and the Pacific: 77
- Sub-Saharan Africa: 71
- Europe and Central Asia: 66
- North America: 3

Seventy-seven versus three is not regulatory convergence. It is systematic divergence.

Layer on the infrastructure data: between 2018 and 2025, Europe and Central Asia expanded state-backed AI supercomputing clusters from 3 to 44. South Asia, Latin America, and the Middle East and North Africa remain in single digits. AI sovereignty is a term the report returns to repeatedly.

Translated into enterprise terms: the cross-border compliance templates built over the last five years, where data follows the business, may need reworking for AI systems. AI involves not just where data moves, but where models are trained, where inference runs, and whether weights can be deployed across borders. Existing tools like GDPR SCCs or China’s data export security assessment don’t fully cover these new control points.
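
One way to operationalize those new control points is to extend the AI system inventory with fields the data-transfer templates never asked about. A sketch under the assumption that systems are tracked in a register; all names here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """Cross-border control points for one AI system.

    Classic transfer templates track only where data moves; these
    fields capture the AI-specific dimensions named above.
    """
    name: str
    data_residency: str         # where the underlying data is stored, e.g. "EU"
    training_location: str      # where the model is or was trained
    inference_location: str     # where inference actually runs
    weights_cross_border: bool  # may model weights be deployed across borders?


def transfer_review_needed(s: AISystemRecord) -> bool:
    """Flag systems where a control point leaves the data's home region
    and therefore falls outside a data-only transfer assessment."""
    return (
        s.training_location != s.data_residency
        or s.inference_location != s.data_residency
        or s.weights_cross_border
    )
```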

There is now an AI-specific layer to add on top of each regional framework.

The nearest-term deadline is the EU one. The AI Act’s high-risk AI system obligations (covering hiring, credit decisions, education, law enforcement) become enforceable on 2 August 2026. An appliedAI study of 106 enterprise AI systems found that 40% had unclear risk classifications. The European Commission’s proposed Digital Omnibus could push the deadline to December 2027, but a prudent compliance posture does not bet on the extension.
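
A first pass over an AI system portfolio can mechanically flag the four use cases named above. A rough sketch, with the caveat that Annex III is longer than this list and the tag values are invented for illustration:

```python
# First-pass triage against the high-risk categories named above
# (hiring, credit, education, law enforcement). Annex III of the
# AI Act covers more; this is only a starting filter.
HIGH_RISK_TAGS = {"hiring", "credit_decisions", "education", "law_enforcement"}


def triage(system_tags: set[str]) -> str:
    """Classify a system as 'high-risk', 'unclear', or 'review-later'."""
    if system_tags & HIGH_RISK_TAGS:
        return "high-risk"    # full obligations from 2 August 2026
    if not system_tags:
        return "unclear"      # the 40% bucket: classify before building controls
    return "review-later"


print(triage({"hiring", "internal_tooling"}))  # -> high-risk
```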

A practical near-term step: add AI-specific clauses to new and renewing Data Processing Agreements (DPAs). Specify whether the counterparty will use your data for model training, whether user data can be used for model improvement (and the opt-out mechanism), and what audit rights you hold over cross-border inference. Negotiating this now is easier than renegotiating once the AI Act is fully in force.
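
Tracked across renewals, those three clauses reduce to a per-agreement checklist. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass


@dataclass
class DPAAIClauses:
    """AI-specific clause coverage for one Data Processing Agreement."""
    counterparty: str
    no_training_on_customer_data: bool = False  # training will not use your data
    improvement_opt_out: bool = False           # opt-out from model-improvement use
    cross_border_inference_audit: bool = False  # audit rights over cross-border inference

    def renewal_ready(self) -> bool:
        """True once all three AI clauses are in place."""
        return all((
            self.no_training_on_customer_data,
            self.improvement_opt_out,
            self.cross_border_inference_audit,
        ))
```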

Closing thought

One finding from this year’s report: training techniques aimed at improving one responsible AI dimension (such as safety) consistently degrade others (such as accuracy). The tradeoffs are systematic and not yet well understood.

This conclusion is not new. Stronger AML controls degrade customer experience. Strict GDPR enforcement limits product functionality. This has always been the nature of compliance work. AI governance isn’t a new problem. It’s an old problem in new packaging.

Three things worth doing now:

  1. Read ISO 42001 and NIST AI RMF. Run a gap analysis against current controls (a starting sketch follows this list).
  2. Update the Vendor Risk Assessment template with a dedicated AI vendor section.
  3. Add AI-specific clauses to the next revision of the DPA template.
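
For the first item, the gap analysis can start as a simple mapping from framework themes to the controls already in place. A sketch; the themes below paraphrase ISO/IEC 42001 Annex A control areas and should be verified against the standard itself:

```python
# None marks a control area with nothing in place yet;
# PARTIAL still counts as started for this rough cut.
current_controls = {
    "AI policy": "DRAFTED",
    "Roles and responsibilities": "ASSIGNED",
    "AI impact assessment": None,              # gap
    "AI system life cycle management": None,   # gap
    "Data quality and provenance": "PARTIAL",
    "Third-party / supplier management": "PARTIAL",
}

gaps = [theme for theme, status in current_controls.items() if status is None]
coverage = 1 - len(gaps) / len(current_controls)
print(f"coverage: {coverage:.0%}, gaps: {gaps}")
```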

None of these require waiting on headquarters, regulators, or legal opinions.


The full AI Index 2026 report is available from Stanford HAI.