3.3.2 Regulatory Cooperation and Conflicts
Dr. María Santos works at the European Commission in Brussels, where she has spent the last three years helping implement the EU AI Act, the world's first comprehensive legal framework for artificial intelligence. The Act entered into force in August 2024, and its core obligations became fully applicable in August 2026. It is ambitious, detailed, and legally binding across all 27 EU member states, classifying AI systems by risk level, banning certain uses, requiring extensive documentation for high-risk applications, and imposing fines of up to 7% of global annual revenue for violations.
María believes in the Act. She believes AI needs governance and that markets alone won't ensure safety, fairness, or accountability. But in July 2025, she watched the United States unveil its AI Action Plan, a framework that signals deregulation and global competitiveness and ties federal funding to states that adopt less restrictive AI laws. The message was stark: Europe regulates. America innovates. And the two approaches are incompatible.
Her worry is that fragmentation will create a race to the bottom—companies relocating to jurisdictions with lighter regulation, countries competing to attract AI investment by loosening rules, and the EU's standards becoming isolated rather than global. The dream was that the EU AI Act would set a global standard, the way GDPR did for privacy. The reality, as of 2026, is that the world is splitting into incompatible regulatory regimes, and no one knows how to reconcile them.
Three Rulebooks, One Technology
The three largest AI markets—the European Union, the United States, and China—have developed fundamentally different regulatory philosophies, reflecting deeper divergences in political values, economic systems, and views on the relationship between technology and the state.
The EU has constructed a rights- and risk-based model. AI systems are classified into prohibited uses (social scoring, real-time remote biometric identification in public spaces), high-risk applications (healthcare, transportation, education, law enforcement), and lower-risk systems subject to lighter requirements. High-risk systems require extensive documentation, risk assessments, human oversight, and transparency. Non-compliance carries heavy fines. The AI Act is explicitly designed to protect individual rights and ensure that AI systems affecting people's lives are fair, explainable, and accountable.
The United States favors voluntary standards and sector-specific oversight. There is no comprehensive federal AI law—instead, a patchwork of state laws, agency guidelines, and executive orders governs the field. The AI Action Plan of 2025 emphasizes innovation, competitiveness, and security, encouraging rapid deployment while resisting regulatory burdens that might slow it. The underlying philosophy trusts markets, self-regulation, and existing legal frameworks—product liability, anti-discrimination law—to address harms as they emerge.
China prioritizes state control and data sovereignty above other concerns. China's amended Cybersecurity Law, enforceable from January 1, 2026, explicitly references AI for the first time and requires AI systems to comply with government standards, submit to audits, and ensure that outputs align with state ideology. Privacy protections exist but are explicitly subordinate to state security and social stability.
These differences are not mere regulatory details. They represent incompatible visions of what AI governance is fundamentally for: protecting individual rights, enabling commercial innovation, or consolidating state power.
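To make the EU's tiered model concrete before comparing the three regimes side by side, its classification and penalty logic can be sketched in a few lines of code. This is an illustrative simplification, not legal advice: the category sets below are abbreviated examples rather than the Act's full annexes, and the fine formula reflects the published ceiling for the most serious violations (the greater of €35 million or 7% of worldwide annual turnover).

```python
# Illustrative sketch of the EU AI Act's risk tiers (simplified; not legal advice).
# Category sets are abbreviated examples, not the Act's full annexes.

PROHIBITED = {"social_scoring", "realtime_public_biometric_id"}
HIGH_RISK = {"healthcare", "transportation", "education", "law_enforcement"}

def classify(use_case: str) -> str:
    """Map a use case onto the Act's three broad buckets."""
    if use_case in PROHIBITED:
        return "prohibited"   # banned outright
    if use_case in HIGH_RISK:
        return "high_risk"    # documentation, risk assessment, human oversight
    return "lower_risk"       # lighter transparency requirements

def max_fine_eur(global_annual_revenue: float) -> float:
    """Penalty ceiling for the most serious violations: the greater of
    EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_annual_revenue)

print(classify("healthcare"))                 # high_risk
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 for EUR 2B revenue
```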
| Dimension | European Union | United States | China |
|---|---|---|---|
| Core philosophy | Rights- and risk-based | Innovation-first, market-led | State control, data sovereignty |
| Legal basis | Binding EU legislation | Sector-specific rules, executive orders | Cybersecurity law, government standards |
| Pre-deployment requirements | Extensive for high-risk systems | Minimal | Government audit and certification |
| Enforcement | Fines up to 7% of global revenue | Existing legal liability | State oversight, mandatory compliance |
| Transparency requirements | Mandatory for high-risk systems | Voluntary or sector-specific | Selective—opaque for state security purposes |
| Primary value protected | Individual rights | Market dynamism | State interests |
The Compliance Burden
For companies developing AI systems for global deployment, operating across all three jurisdictions is an exercise in navigating irreconcilable demands. A company must simultaneously satisfy EU risk classification requirements—including documentation, testing, and human oversight mandates—comply with U.S. sector-specific regulations from agencies such as the FDA for medical AI or the FAA for autonomous systems, and adapt its systems to meet Chinese content controls, data localization requirements, and government audits.
These requirements frequently conflict rather than merely overlap. The EU demands transparency: users must understand how decisions are made. China, for state security reasons, requires opacity on certain model details. The U.S. allows broad deployment with minimal pre-market requirements, while the EU mandates extensive pre-deployment testing and certification before systems can reach consumers.
The practical result is that companies either build separate versions for each market—expensive, inefficient, and technically complex—or design for the strictest standard and accept reduced competitiveness in markets with lighter regulation. For most small companies and startups, neither option is affordable. They typically focus on a single market, usually the United States, where barriers to entry are lowest, and forgo global expansion entirely. This fragmentation of the AI ecosystem reduces competition, raises barriers for smaller players, and concentrates advantage among large companies with the resources to maintain dedicated compliance teams across multiple jurisdictions.
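The "design for the strictest standard" strategy amounts to satisfying the union of every target market's obligations while checking that no two obligations are mutually exclusive. A minimal sketch, with hypothetical requirement labels standing in for the real obligations:

```python
# Hypothetical requirement labels per jurisdiction (illustrative only).
REQUIREMENTS = {
    "EU": {"risk_classification", "pre_deployment_testing",
           "human_oversight", "decision_transparency"},
    "US": {"sector_rules_fda_faa"},
    "CN": {"government_audit", "data_localization",
           "content_controls", "state_security_opacity"},
}

# Pairs of obligations that cannot coexist in one build, e.g. EU-mandated
# transparency vs. China-mandated opacity on certain model details.
CONFLICTS = [("decision_transparency", "state_security_opacity")]

def strictest_build(markets: list[str]) -> set[str]:
    """A single global build must satisfy the union of all target markets'
    obligations -- the 'strictest standard' strategy."""
    return set().union(*(REQUIREMENTS[m] for m in markets))

def is_feasible(build: set[str]) -> bool:
    """False if the build contains two mutually exclusive obligations."""
    return not any(a in build and b in build for a, b in CONFLICTS)

build = strictest_build(["EU", "US", "CN"])
print(is_feasible(build))  # False: one artifact cannot serve all three markets
```

When the union is infeasible, as here, the only remaining options are the ones described above: separate per-market builds, or retreat to a single market.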
Competing AI Governance Stacks
The regulatory divergence between major AI powers has evolved into something more structural: a competition over the core architecture of AI infrastructure itself—what analysts call the battle of the "AI stacks."
The U.S. stack is built on open markets, private sector leadership, and global platforms. Companies like Google, Microsoft, Amazon, and OpenAI set de facto technical standards through market dominance, and American AI systems operate internationally with relatively few regulatory constraints. The EU stack rests on regulated markets, rights-based frameworks, and governance mechanisms designed to ensure fairness and accountability for systems operating within European borders. The China stack is characterized by state-controlled markets, indigenous platforms such as Alibaba, Tencent, and Baidu, government oversight, content controls, and the deep integration of AI into social governance systems.
Each stack is being actively exported. American platforms spread U.S. technical standards and governance assumptions globally through sheer market reach. The EU relies on the "Brussels Effect"—using its market size to compel foreign companies to adopt European standards as the price of market access, replicating the pattern established by GDPR. China exports surveillance technology and governance models primarily to authoritarian and developing-world partners, normalizing state control of AI as a legitimate governance choice.
| Stack Feature | United States | European Union | China |
|---|---|---|---|
| Market structure | Open, private-sector dominant | Regulated, rights-based | State-controlled |
| Standard-setting mechanism | Market dominance | Regulatory mandate | Government direction |
| Primary export mechanism | Platform and model adoption | Compliance requirements | Technology partnerships, investment |
| Primary alignment | Commercial interests | Individual rights | State interests |
| Content governance | Minimal | Risk-based requirements | Ideological alignment required |
The competition is therefore not merely about which AI systems are technically superior. It is about which governance model becomes the organizing principle for AI development globally. Unlike technical standards, which can often be harmonized through negotiation, governance models reflect incompatible political values and cannot be reconciled through purely technical means.
International Coordination Efforts
Recognizing the risk that fragmentation could harden into permanent structural division, the international community has made several attempts at coordination—though with limited success.
In 2025, the United Nations launched two dedicated AI governance bodies: the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI. For the first time, all 193 UN member states have a formal mechanism through which to participate in shaping international AI governance, with the initiative aiming to develop shared principles, facilitate information exchange, and prevent regulatory fragmentation from becoming irreversible.
Progress has been slow, and the structural reasons are not difficult to identify. Reaching consensus among 193 countries with divergent interests, values, economic capacities, and political systems is exceptionally difficult. The bodies can issue recommendations but carry no enforcement authority. Binding international treaties require ratification, and major powers are reluctant to accept constraints that adversaries might find ways to circumvent.
The G7's Hiroshima AI Process offers a complementary coordination mechanism, focusing on voluntary principles for trustworthy AI among technologically advanced democracies. It has produced substantive discussions and some convergence around shared values—safety, transparency, accountability—but like the UN initiatives, it lacks enforcement mechanisms and cannot bridge the fundamental governance divide between democratic and authoritarian models.
The pattern across both efforts is consistent: shared principles are achievable; binding rules are not. The realistic best case for international coordination is not global harmonization but managed coexistence—mutual recognition agreements, technical interoperability standards, and frameworks to prevent jurisdictional conflicts from escalating into broader confrontations.
The Race to the Bottom
The deepest concern raised by regulatory fragmentation is not compliance complexity but competitive pressure on standards themselves. If the EU imposes strict requirements while the United States does not, companies face an incentive to develop primarily for the U.S. market and either avoid Europe or invest resources in lobbying for weaker enforcement. If jurisdictions compete to attract AI investment by lightening their regulatory touch, the long-run equilibrium may be a global floor of minimal governance rather than a rising tide of shared standards.
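The equilibrium claim can be illustrated with a toy best-response model. Everything here is an assumption made for the sketch: stringency is an abstract 0-10 scale, mobile AI investment flows to the lighter-touch jurisdiction, and each side therefore undercuts the other by a fixed step until it hits a minimal-governance floor.

```python
# Toy race-to-the-bottom dynamic (all numbers are illustrative assumptions).
FLOOR = 1.0  # hypothetical minimal-governance floor
STEP = 0.5   # hypothetical undercutting increment

def best_response(rival_stringency: float) -> float:
    """Undercut the rival's regulatory stringency, but never below the floor."""
    return max(FLOOR, rival_stringency - STEP)

eu, us = 8.0, 5.0  # hypothetical starting stringency levels
for year in range(2026, 2032):
    eu = best_response(us)  # EU loosens in response to the U.S.
    us = best_response(eu)  # the U.S. undercuts in turn
    print(year, eu, us)
# Stringency falls each round until both sides sit at the floor:
# the long-run equilibrium is minimal governance, not a rising tide.
```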
There is concrete evidence this dynamic is already operating. The U.S. AI Action Plan explicitly links federal funding to states' adoption of less restrictive AI laws, creating a domestic incentive structure that rewards deregulation. Some EU member states are quietly advocating for lighter enforcement of the AI Act, citing competitiveness concerns. China is subsidizing domestic AI firms at a scale that makes it difficult for foreign competitors—operating under stricter governance frameworks—to match their pricing.
The counter-argument is the Brussels Effect. The EU is large and wealthy enough that access to its market is too valuable for most companies to forgo. Rather than abandoning European customers, major AI developers comply with EU standards, and because compliance often requires systemic changes to how models are designed and deployed, those changes propagate globally. GDPR became a de facto international data standard despite being a unilateral EU regulation, and proponents of the AI Act expect the same dynamic to repeat.
The comparison has limits, however. GDPR governs data handling—a compliance domain that can be addressed primarily through policy and access controls. AI regulation touches system architecture, training data, model behavior, deployment contexts, and ongoing monitoring. The technical and organizational costs of AI Act compliance are substantially higher, and the problem of retrofitting existing systems is more acute. There is a credible scenario in which companies comply with the letter of the AI Act while evading its substance, or in which some categories of AI development migrate to less demanding jurisdictions rather than absorb the full cost of compliance.
Fragmentation as the New Normal
Export controls on advanced chips, restrictions on cross-border data flows, divergent regulatory regimes, and competing standards for trustworthy AI are no longer isolated policy decisions—they are hardening into structural divides that will shape AI development for years to come.
AI governance has become a sovereignty issue. Governments want control over AI systems operating within their borders for reasons of national security, economic competitiveness, and alignment with political values. But AI technology does not respect borders: models trained in one jurisdiction are deployed globally, data flows across national boundaries, and platforms operate internationally. The result is a persistent tension between the inherently global character of AI technology and the inherently territorial character of governance.
By 2026, the major frameworks are operational and the divides are entrenched. Just seven countries—all from the developed world—participate in all the current significant global AI governance initiatives, illustrating how far international coordination remains from universality. For smaller and developing countries, fragmentation imposes a forced alignment choice: adopt EU-style regulation, follow the U.S. model, or partner with China, with each option carrying distinct economic, political, and strategic consequences. There is no neutral ground.
Whether fragmentation proves sustainable or degrades into escalating conflicts, regulatory arbitrage, and the breakdown of international cooperation will depend on choices not yet made. The mechanisms for managed coexistence—mutual recognition frameworks, technical interoperability standards, diplomatic channels for resolving jurisdictional disputes—remain underdeveloped. Building them before the divides become irreconcilable is among the most pressing challenges in AI governance today.
Key Takeaways
- The three largest AI markets—the EU, United States, and China—operate under fundamentally incompatible regulatory frameworks reflecting deep differences in political values: rights protection, market innovation, and state control, respectively.
- Multi-jurisdictional compliance imposes heavy burdens on AI developers, particularly smaller firms that cannot afford separate compliance paths, reinforcing the competitive advantage of large incumbents.
- The regulatory divide has evolved into a contest between incompatible "AI stacks"—distinct architectures of infrastructure, standards, and governance—each being actively exported through different mechanisms: market dominance, regulatory mandate, and technology partnerships.
- International coordination efforts through the UN and the G7's Hiroshima AI Process have produced shared principles but lack enforcement authority; binding global harmonization appears unlikely in the near term.
- The race-to-the-bottom risk is real and shows early evidence of operating, but the Brussels Effect provides a partial counterweight by making compliance with EU standards a practical necessity for companies seeking access to the European market.
- Regulatory fragmentation is now structural rather than transitional. The most realistic near-term objective is managed coexistence through mutual recognition agreements and interoperability standards—not convergence on a single global model.