1.3.2 Monopolization and Competition
Imagine you are the founder of a promising AI startup in 2024. You have a talented team, a compelling product, and early traction. When it is time to scale, you make a series of individually reasonable decisions: you build on GPT-4's API because it is the most capable model available; you host your infrastructure on Azure because Microsoft is offering generous credits; you take investment from a fund with deep ties to one of the major cloud providers. Each choice makes sense on its own. Together, they add up to something you may not have anticipated: you are now deeply embedded in an ecosystem controlled by one company, dependent on its pricing, its policies, and its continued goodwill. If that company launches a competing product, you have no easy exit.
This is not a hypothetical. It is the operating reality for hundreds of AI startups in the mid-2020s, and it sits at the heart of a growing antitrust debate. The question is not whether AI companies are innovating—they clearly are. The question is who controls the infrastructure that makes that innovation possible, and whether that control is being used to foreclose competition.
In January 2024, the Federal Trade Commission issued compulsory orders under Section 6(b) of the FTC Act to Microsoft, Google, Amazon, OpenAI, and Anthropic. The agency wanted to know about their partnerships, their agreements, and their entanglements. A year later, the FTC published its findings: the relationships between cloud giants and AI startups raise "potential competitive issues." That is regulatory language for a simple idea. Something here might be a problem.
The Partnerships Under Scrutiny
The specifics are worth examining closely. Microsoft has invested over $13 billion in OpenAI—a company that began as a nonprofit AI research lab. In exchange, Microsoft obtained exclusive cloud hosting rights (OpenAI's workloads run on Azure), preferential access to new models, and integration rights across its product suite, including Windows, Office, Bing, and GitHub. Google invested $3 billion in Anthropic, taking a 14% stake carefully structured as non-voting shares to avoid antitrust red flags. Amazon made its own multi-billion dollar investment in Anthropic, also through non-voting shares, with both tech giants providing cloud infrastructure through their respective platforms.
On paper, these look like ordinary venture investments. Startups need capital and computing resources; Big Tech has both. But Senators Elizabeth Warren and Ron Wyden put the concern more directly in their investigation letters: these partnerships "discourage competition, circumvent our antitrust laws, and result in fewer choices and higher prices" for businesses and consumers. The FTC's report identified three specific risks: the arrangements could affect access to computing resources and engineering talent; they could increase switching costs for companies working with AI developers; and they might give cloud providers unique access to sensitive information about their partners' businesses.
In short, these are not arm's-length business deals. They are arrangements that create structural dependencies—situations where an AI startup's investor, infrastructure provider, and largest customer are the same company. That kind of entanglement does not look like a free market. It looks like vertical integration wearing a different suit.
The Three Chokepoints
Regulators have been explicit about where they see concentration crystallizing. In 2024, the FTC, Department of Justice, UK Competition and Markets Authority, and European Commission issued a joint statement identifying three main areas of concern.
The first is concentrated control over key inputs. Training and running large AI models requires three things in abundance: specialized chips, massive computing power, and data storage. The chip market is dominated by Nvidia, which controls over 90% of AI accelerator sales. Computing power is controlled primarily by Amazon, Microsoft, and Google. These companies are not merely suppliers—they are the foundation on which every large AI system is built, and their pricing and access policies shape what is economically viable for anyone else in the industry. A startup that cannot access competitive compute at competitive prices cannot build competitive AI, regardless of how talented its team or how novel its approach.
The second concern is the ability of large incumbent firms to use their existing market power to entrench their position in AI. Microsoft, Google, and Amazon already dominate search, cloud infrastructure, advertising, and e-commerce. Each of those dominant positions provides structural advantages in AI: vast proprietary data, existing customer relationships, opportunities to bundle AI features with established products, and the financial resources to sustain losses that would bankrupt smaller competitors. If these firms use their dominance in adjacent markets to favor their own AI products—or disadvantage rivals through exclusionary pricing or preferential placement—the AI market risks inheriting the same concentrated structure as the markets they already control.
The third concern is the competitive effect of the partnership arrangements themselves. The Microsoft-OpenAI relationship, the Google-Anthropic deal, and the Amazon-Anthropic investment are not illegal on their face, but they create structural incentives that may suppress genuine competition. When your investor is also your cloud provider, your largest customer, and the company whose products you power, the conditions for independent competition become extraordinarily difficult to maintain.
The Lock-In Problem
The senators' letters to Google and Microsoft focused particular attention on what competition lawyers call lock-in: the ways structural arrangements make switching from one provider to another prohibitively costly in practice, even when it is technically permitted.
Exclusivity contracts can bind an AI developer to a single provider even as its needs evolve. Technical integrations—building applications around a specific cloud provider's tools, APIs, and services—create migration costs that grow with every month of additional development. Financial arrangements intensify this effect considerably. Microsoft reportedly provides OpenAI with billions of dollars in Azure computing credits. That is not cash; it is pre-paid time on Microsoft's infrastructure. If OpenAI wanted to move workloads to Google Cloud or AWS, it would be walking away from resources already paid for. Economically, that choice is nearly impossible to make.
The same dynamic plays out across the industry at smaller scales. Cloud providers routinely offer startups substantial credits and discounts to attract them onto their platforms during early development. Once a company has built its stack on a particular provider's infrastructure—using proprietary tools, optimized configurations, and platform-specific services—migrating becomes an exercise in rebuilding from scratch while keeping the existing product running. It is technically possible, but the cost and disruption are severe enough that most companies simply do not attempt it. The FTC report also flagged a talent dimension to this lock-in: deep partnerships between cloud providers and AI labs may create closed talent ecosystems, where the best engineers circulate within integrated networks rather than diffusing into independent companies that could compete with them.
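The economics of credit-based lock-in can be made concrete with a toy calculation. Every figure below is invented for illustration; only the structure of the incentive matters.

```python
def effective_monthly_cost(list_price, credits_remaining, migration_cost=0.0):
    """Effective out-of-pocket cost of one month of compute.

    Credits already granted cover spend on the incumbent provider,
    so the marginal cash cost there is zero until they run out.
    """
    covered = min(list_price, credits_remaining)
    return list_price - covered + migration_cost

# Hypothetical startup: a $500k/month compute bill, $6M in unused
# credits on the incumbent provider, and a one-off $2M engineering
# cost to migrate to a rival cloud.
stay = effective_monthly_cost(500_000, credits_remaining=6_000_000)
switch = effective_monthly_cost(500_000, credits_remaining=0,
                                migration_cost=2_000_000)
print(stay)    # staying costs nothing in cash for roughly 12 more months
print(switch)  # switching means full list price plus the migration bill
```

The asymmetry is the point: as long as credits remain, every month on the incumbent is free in cash terms, while the first month elsewhere costs the full bill plus migration. A rational founder stays put, which is precisely the outcome the credits are designed to produce.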
The Policy Response
Governments have begun responding, though not in unison. California's Assembly Bill 325, which took effect in January 2026, explicitly prohibits the use or distribution of "common pricing algorithms" that facilitate anticompetitive practices. The law targets a specific concern: AI-enabled price coordination that achieves the effects of collusion without any explicit agreement. If multiple companies in the same industry all rely on the same AI pricing system, and that system recommends similar prices to each of them in real time, no backroom deal has been struck—but the outcome for consumers may be indistinguishable from cartel behavior. California concluded that this constitutes a violation worthy of heightened civil and criminal penalties, sending a clear signal that algorithmic coordination does not escape antitrust scrutiny simply because the coordination is automated.
The federal response has been more ambivalent. The White House's AI Action Plan, released in July 2025, contained 90 policy recommendations but was primarily oriented toward removing regulatory barriers to AI infrastructure investment and ensuring that enforcement does not "unduly burden AI innovation." The implicit message was that the administration prefers a lighter regulatory touch and is skeptical of aggressive antitrust intervention in AI markets.
Europe is moving in the opposite direction. The European Union is considering expanding the Digital Markets Act to classify major AI businesses as "gatekeepers," a designation that could mandate interoperability between AI systems and restrict how dominant platforms favor their own AI products over rivals. The result is regulatory fragmentation: what is permitted in the United States may be restricted in Europe, and companies operating globally must navigate an increasingly incoherent patchwork of national rules. This divergence also creates regulatory arbitrage opportunities—incentives to structure operations in jurisdictions with the most permissive frameworks, which in turn pressures other regulators to loosen their own standards to avoid ceding industry to competitors.
The Trials Ahead
While regulators debate policy, courts are moving forward with enforcement cases that will define the legal boundaries of technology market power in the AI era. Alphabet began implementing court-mandated changes to its search business in January 2026, following an antitrust ruling against Google. The specific remedies matter less than what the ruling establishes: a federal court found that the world's dominant search engine violated antitrust law and ordered structural changes in response. If courts are willing to impose such remedies on Google's search monopoly, the same legal logic could apply to AI platforms that exercise comparable control over market access.
Apple faces a significant class-action trial in early 2026 over its App Store practices. Plaintiffs allege that Apple's 30% commission on app sales and its restrictions on alternative payment systems constitute illegal monopolization of the mobile app market. The case tests a principle with direct relevance to AI: whether control over a platform, and the leverage that control gives over everyone who builds on it, crosses the line from competitive advantage into antitrust violation. Cloud platforms in AI occupy an analogous position to Apple's App Store, and a finding against Apple would reinforce the argument that infrastructure gatekeepers face genuine legal limits on how they can use that position.
Amazon faces an FTC trial in October 2026 centered on "Project Nessie," a pricing algorithm the agency alleges was specifically designed to manipulate market prices in Amazon's favor. This case carries the most direct implications for AI competition, because it examines whether algorithmic systems can be instruments of illegal market manipulation—and if so, what the legal standard for that determination looks like. A finding against Amazon would strengthen the regulatory hand considerably in future cases involving AI-driven pricing and market behavior.
Taken together, these proceedings signal that the era of treating technology platforms as effectively beyond antitrust scrutiny is ending. Whether AI becomes the next front in that legal reckoning, or whether the existing market structure becomes entrenched before enforcement catches up, depends partly on how these cases are resolved and how quickly their logic is applied to AI-specific conduct.
The Open Source Question
One potential counterweight to consolidation is open source. Meta released LLaMA, an open-weight language model available for anyone to download, modify, and deploy. Stability AI released Stable Diffusion. A growing ecosystem of smaller labs contributes open models that enable independent researchers, hobbyists, and small companies to build AI applications that would have been impossible in a closed ecosystem. The ability to fine-tune, adapt, and deploy open models across languages, cultures, and use cases distributes access in ways that proprietary systems cannot match.
But open source has limits as a remedy for market concentration, and critics argue those limits are fundamental rather than incidental. Meta can release LLaMA without threatening its core business, because Meta's competitive moat is social networking and advertising—not AI models. Releasing the model wins goodwill, accelerates ecosystem adoption, and positions Meta favorably in regulatory discussions, all without surrendering the advantages that actually matter to its revenue. Open source, from this perspective, is strategic generosity rather than genuine democratization.
More fundamentally, open source at the model layer does not address concentration at the hardware and cloud layers where the real leverage lies. Downloading LLaMA is free. Fine-tuning it on proprietary data, deploying it at production scale, and maintaining it for real users requires substantial compute resources—resources controlled by the same handful of companies under antitrust investigation. As one analysis summarized the problem: "Open-source models will not address the unregulated AI oligopoly at the hardware or cloud layers, and because model-layer enterprises are dependent on these lower layers, concentration means that oligopolists in these layers can leverage their power downstream." Making the software free does not change who owns the infrastructure on which that software runs.
What Competition Actually Requires
If open source is insufficient and regulatory enforcement is slow, what would genuine competition in AI actually require? The answer involves structural changes that are technically feasible but politically and economically difficult.
Competitive access to compute is the most fundamental need. Training large AI models currently requires renting infrastructure from Amazon, Microsoft, or Google, because no independent provider operates at sufficient scale. New entrants like CoreWeave and Lambda Labs are working to compete, but they remain small relative to the hyperscalers and face the same capital intensity that makes the market difficult to enter in the first place. Public investment in compute infrastructure—analogous to government investment in roads, electrical grids, and telecommunications networks—could create a more level foundation. This idea has historical precedent, but it remains largely theoretical in current AI policy discussions.
Competition in chip manufacturing presents a related challenge. Nvidia's control of over 90% of the AI accelerator market gives it pricing power that few industries tolerate without intervention. More competition in chip design and manufacturing would reduce a critical bottleneck, but building a competitive semiconductor industry requires years of sustained development and capital investment that private markets alone are unlikely to supply at the necessary pace. Government industrial policy of the kind deployed in semiconductor manufacturing through the CHIPS Act could accelerate this, but AI-specific chip policy has not received comparable attention or funding.
Interoperability and data portability would reduce lock-in without requiring companies to abandon their existing investments. If AI workloads could move between cloud providers without prohibitive migration costs, if models could integrate across platforms through open technical standards, and if businesses could carry their data with them when switching providers, the structural grip of incumbent platforms would loosen considerably. Some of this requires technical standards development; some requires regulatory mandates. None of it emerges naturally from market incentives, because companies benefit directly from the switching costs that their proprietary architecture creates and have little reason to voluntarily dismantle them.
Finally, transparency in partnership arrangements and contracts would enable more effective regulatory oversight. The precise terms of the Microsoft-OpenAI deal, the specific provisions of the Google-Anthropic and Amazon-Anthropic investments—if these contain exclusivity clauses, pricing arrangements, or penalties that restrict competitive behavior, regulators and the public should have access to that information. Effective antitrust enforcement requires knowing what is actually being agreed to, and much of that currently remains opaque.
The Uncomfortable Parallel
The current situation invites comparison to a moment in tech history that ended ambiguously. In the late 1990s, Microsoft dominated personal computing. It bundled Internet Explorer with Windows, used its operating system monopoly to favor its own products, and methodically suppressed competitors like Netscape. The Department of Justice sued. The case dragged through the courts for years, resulted in a finding that Microsoft had violated antitrust law, and then—crucially—ended without the structural remedy originally proposed. The company was not broken up.
What happened next was instructive. Microsoft remained dominant in operating systems but missed the transitions that mattered. Google captured search. Apple captured mobile. Amazon captured cloud. The antitrust case did not break Microsoft's monopoly; the market shifted to terrain where Microsoft was not positioned to compete, and the dominance of the moment turned out to be less durable than it appeared.
The companies that won those subsequent waves—Google, Amazon, and Microsoft itself—are now positioning to dominate AI. They are deploying recognizable tactics: leveraging existing platform dominance, locking in partners and users through infrastructure dependencies, investing in promising startups before they can become genuine rivals, and using scale to undercut independent competitors on price. Whether regulators move before the market consolidates, or whether enforcement arrives after the fact and proves similarly inconclusive, is the central question of this moment.
The early indicators favor consolidation. The partnership structures are already in place. The infrastructure is already concentrated. The capital is already deployed. And the regulatory response, at least at the federal level in the United States, is moving slowly through contested political terrain. Competition in AI is not dead—there are genuine contests being fought, new companies emerging, and meaningful innovation occurring outside the major platforms. But the playing field is not level, structural incentives favor incumbents, and the window for effective intervention narrows with each year the existing arrangements deepen. History is not deterministic. But it does tend to rhyme.
Summary
Monopolization in AI is not a hypothetical future risk; it is a dynamic already visible in the structure of the industry. A small number of large technology companies—primarily Microsoft, Google, and Amazon—exercise concentrated control over the three inputs that define competitive position in AI: specialized chips, cloud computing infrastructure, and capital. Their partnership investments in leading AI labs have created interdependencies that function like vertical integration while remaining nominally independent, and these arrangements raise serious questions about whether genuine competition can persist.
Regulators across multiple jurisdictions have identified the problem clearly. The FTC and DOJ, working with international counterparts, named concentrated control of inputs, incumbent entrenchment, and restrictive partnership arrangements as the three primary areas of concern. Ongoing legal proceedings against Google, Apple, and Amazon are establishing precedents that will shape how AI antitrust questions are resolved. Europe is moving toward stronger structural intervention through the Digital Markets Act, while the United States has shifted toward a lighter regulatory posture.
Open source AI offers partial relief by democratizing access to model weights, but it does not address concentration at the infrastructure layers where competitive leverage is actually exercised. Genuine competition would require more open access to compute and chips, mandated interoperability and data portability, and greater transparency in partnership contracts—none of which currently exists at meaningful scale.
The parallel to Microsoft's operating system dominance in the 1990s is instructive but imperfect. In that case, enforcement came but structural remedies were abandoned, and the market eventually shifted on its own to areas beyond Microsoft's reach. Whether AI markets will shift similarly, or whether infrastructure bottlenecks prove durable enough to sustain concentration across multiple technology generations, remains the defining uncertainty. What is not uncertain is that the decisions being made now—by regulators, courts, policymakers, and companies—will shape the competitive landscape of AI for decades to come.
Key Takeaways
- Three companies — Microsoft, Google, and Amazon — control the cloud infrastructure that AI development depends on, creating a structural bottleneck where even nominally independent AI companies rely on potential rivals for compute, capital, and distribution.
- The Microsoft-OpenAI, Google-Anthropic, and Amazon-Anthropic arrangements function as vertical integration in all but name: the same entity is often investor, infrastructure provider, and largest customer simultaneously — conditions that make genuine competitive independence nearly impossible.
- Lock-in operates through multiple channels: exclusivity contracts, technical migration costs, financial arrangements (compute credits effectively pre-paid to one provider), and talent ecosystems that circulate within integrated networks rather than diffusing into independent competitors.
- Regulators in the U.S., EU, and UK have jointly identified three specific chokepoints: concentrated control over chips and compute, incumbent entrenchment through adjacent market dominance, and restrictive partnership structures that suppress genuine competition.
- Open-source AI at the model layer does not address concentration at the hardware and cloud layers where competitive leverage is actually exercised — making software free does not change who owns the infrastructure on which it runs.
- Ongoing antitrust proceedings against Google (search), Apple (App Store), and Amazon (pricing algorithms) are establishing legal precedents that will shape how AI-specific market power questions are resolved in the coming years.
- The Microsoft 1990s parallel is instructive: enforcement arrived but structural remedies were abandoned, and the market eventually shifted to terrain Microsoft couldn't dominate. Whether AI infrastructure bottlenecks prove similarly impermanent — or more durable across technology generations — is the central uncertainty.
Sources:
- AI Antitrust Landscape 2025 | Greenberg Traurig
- Eyes on AI: Potential AI Antitrust Enforcement | White & Case
- Reviewing European Antitrust Activity in 2025 | TechPolicy.Press
- The Great Tech Reckoning: Why 2026 Is the Year Regulation Finally Bites
- FTC, DOJ, and International Enforcers Issue Joint Statement on AI Competition | FTC
- Competition and Antitrust Concerns Related to Generative AI | Congress.gov
- US DOJ and FTC Join G7 Competition Authorities | Mayer Brown
- FTC Launches Inquiry into Generative AI Investments and Partnerships | FTC
- Senators Probe Google-Anthropic, Microsoft-OpenAI Deals | Computerworld
- Partnerships Between Cloud Service Providers and AI Developers | FTC Report
- FTC Says Partnerships Like Microsoft-OpenAI Raise Antitrust Concerns | TechCrunch
- Warren, Wyden Launch Investigation | Senator Warren Press Release
- Policymakers Overlook How Open Source AI Is Reshaping Global Power | TechPolicy.Press
- Measuring the Openness of AI Foundation Models | Taylor & Francis Online
- Competition Between AI Foundation Models | Oxford Academic
- An Antimonopoly Approach to Governing Artificial Intelligence | Yale Law & Policy Review
- Open Source Is Having a Moment in AI Regulation | ProMarket
- Is Data Really a Barrier to Entry? | Mercatus Center
Last updated: 2026-02-25