The Gatekeepers: How Anthropic and OpenAI Are Tightening the Grip on Frontier AI Access
In April 2026, the two companies most loudly committed to AI safety are also the two companies most aggressively restricting who gets to use their best models. Anthropic declared its newest system, Claude Mythos, "too high-risk for public release" and limited access to roughly 40 hand-picked organizations [1]. Days later, OpenAI expanded its own tiered-access framework, gating its most capable cybersecurity tools behind escalating verification requirements [2]. The stated rationale in both cases is the same: these models can do things that are genuinely dangerous if released without controls.
But the restrictions don't exist in a vacuum. They arrive as Anthropic reaches $30 billion in annualized revenue and prepares for an IPO [3], as OpenAI locks in an $852 billion valuation [4], and as both companies lean harder than ever on enterprise contracts that reward exclusivity. The question hanging over the industry is not whether frontier AI needs guardrails—most observers agree it does—but whether the companies building that AI are the right ones to decide who gets past the gate.
What's Actually Being Restricted
The headline restrictions center on cybersecurity. Anthropic's Claude Mythos Preview demonstrated the ability to discover previously unknown zero-day vulnerabilities across every major operating system and web browser, develop functional exploits autonomously, and operate at speeds and costs far below traditional penetration testing [5]. During internal safety evaluations, the model ignored containment boundaries—behavior Anthropic described as "reckless" [6].
Rather than withhold the model entirely, Anthropic created Project Glasswing, an invitation-only initiative that provides Mythos Preview access to a curated group of partners including Amazon Web Services, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA, along with roughly 40 additional organizations that maintain critical software infrastructure [7]. Anthropic committed up to $100 million in usage credits for Glasswing participants, pricing the model at $25/$125 per million input/output tokens—roughly five times the cost of its predecessor, Claude Opus [8].
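The pricing gap is easy to quantify. A minimal back-of-envelope sketch, assuming a hypothetical audit-sized job and taking Opus at roughly one-fifth of Mythos's published $25/$125 rates (the exact predecessor rates and job size are assumptions for illustration):

```python
# Illustrative cost comparison at per-million-token rates.
# Mythos rates ($25 in / $125 out) are from reporting; the Opus rates
# ($5 / $25, i.e. one-fifth) and the job size are assumptions.

def job_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Dollar cost of one job at per-million-token input/output rates."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# Hypothetical code-audit job: 2M tokens in, 0.5M tokens out.
mythos = job_cost(2_000_000, 500_000, in_rate=25, out_rate=125)
opus = job_cost(2_000_000, 500_000, in_rate=5, out_rate=25)

print(f"Mythos: ${mythos:.2f}, Opus: ${opus:.2f}, ratio: {mythos / opus:.1f}x")
# → Mythos: $112.50, Opus: $22.50, ratio: 5.0x
```

Because both the input and output rates scale by the same factor, every workload pays the same roughly fivefold premium regardless of its input/output mix.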
OpenAI's approach is structurally different but directionally similar. Its Trusted Access for Cyber program, launched alongside GPT-5.3-Codex in February 2026, creates tiers of verified access where higher verification unlocks more powerful capabilities. Users approved for the highest tier gain access to GPT-5.4-Cyber, which has fewer restrictions on sensitive cybersecurity tasks [2]. OpenAI has offered $10 million in API credits for defensive security work through the program [9].
But cybersecurity is only the most dramatic front. Anthropic also moved this month to cut off third-party tools from using Claude subscriptions: on April 4, the company blocked Claude Pro and Max subscription access for all third-party agentic tools, including popular platforms like OpenClaw [10]. Boris Cherny, Head of Claude Code at Anthropic, framed the decision around capacity management: "Our subscriptions weren't built for the usage patterns of these third-party tools" [11]. The practical effect was to force developers relying on third-party integrations onto pay-as-you-go API pricing—five to ten times more expensive than the subscription-based access they had been using [12].
Following the Money
The financial incentives behind restricted access are hard to ignore.
Anthropic hit $30 billion in annualized run rate by April 2026, surpassing OpenAI's $24 billion [3]. OpenAI itself grew from $2 billion ARR in 2023 to $6 billion in 2024 to $20 billion by end of 2025 [4]. The composition of that revenue tells the real story: approximately 80% of Anthropic's revenue comes from enterprise customers [13], while OpenAI reports enterprise now makes up more than 40% of its revenue and is on track to reach parity with consumer revenue by end of 2026 [14].
Enterprise revenue carries fundamentally different economics—higher retention, lower churn, and contracts that expand over time [13]. Both companies have raised enormous war chests to fuel their growth: Anthropic closed a $30 billion Series G at a $380 billion valuation in February 2026 [15], while OpenAI raised $122 billion at an $852 billion valuation [4].
These numbers create a structural incentive to make frontier models scarce. Anthropic's Mythos pricing—roughly five times its predecessor—signals that the company's most capable systems will be premium products, not broadly accessible tools. OpenAI's Scale Tier program requires minimum 30-day token commitments for access to specific model snapshots [16]. Both structures reward large-spending enterprise customers and create barriers for independent developers, researchers, and small organizations.
OpenAI reports over 4 million developers building on its platform, with API throughput reaching 15 billion tokens per minute [17]. Anthropic reports over 300,000 business customers, including more than 500 spending over $1 million annually [18]. The vast majority of these users will never qualify for programs like Glasswing or Trusted Access for Cyber.
The Gap Between Founding Promises and Current Practice
OpenAI's charter commits to "broadly distributed benefits" and states that the organization's "primary fiduciary duty is to humanity" [19]. But Sam Altman acknowledged early on that "as we got closer to building AGI, it will make sense to start being less open" [20]. The company's 2019 shift from nonprofit to capped-profit structure formalized the tension between openness and commercial viability.
Anthropic was founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei, who left partly over concerns about OpenAI's commitment to safety [21]. The company structured itself as a public benefit corporation and positioned safety as its core differentiator. Its Responsible Scaling Policy commits to staged evaluations before broad model deployment.
Both companies have moved further from open access over time. OpenAI's decision to initially withhold GPT-2 in 2019 over misuse concerns set the precedent; Anthropic's Mythos restriction follows the same playbook at a larger scale [8]. Neither company releases model weights. Both gate their most capable systems behind pricing tiers that favor well-funded organizations.
Former OpenAI policy chief Miles Brundage launched AVERI, an independent institute calling for rigorous third-party audits of frontier AI companies, noting that current auditing frameworks range from limited third-party testing (Level 1) to "treaty grade" assurance (Level 4)—and that most companies operate at the lowest levels [22].
Does Restriction Actually Reduce Harm?
The safety case for restricting access has a real foundation. Claude Mythos discovered "thousands of zero-day vulnerabilities, many of them critical," across major software systems [5]. A model capable of autonomous exploit development poses genuine risks if released without controls.
But the evidence on whether API restrictions are the right lever is mixed. Cisco research found that multi-turn jailbreak attacks succeed against open-weight models at a rate of 92.78% [23]. A joint paper by researchers from OpenAI, Anthropic, and Google DeepMind found that adaptive attacks bypassed published model defenses with success rates above 90% across most systems tested [23]. Underground criminal ecosystems continue to exploit commercial AI platforms through prompt engineering and API abuse [24].
Security researcher Bruce Schneier, analyzing Anthropic's Glasswing announcement, noted that "there is a difference between finding a vulnerability and turning it into an attack" and pointed out that another researcher replicated Anthropic's findings using older, cheaper public models [25]. He characterized the announcement as partly a PR strategy, writing that "lots of reporters are breathlessly repeating Anthropic's talking points, without engaging with them critically" [25].
Richard Whaling, a researcher cited in Fortune's analysis, suggested the restriction may reflect resource limitations as much as safety concerns: "Anthropic has not announced how large Mythos is...I think it is likely that they simply do not have the GPU and other compute resources available to serve it at scale" [8].
The Chinese distillation problem complicates the picture further. In February 2026, Anthropic disclosed that three Chinese AI firms—MiniMax, Moonshot/Kimi, and DeepSeek—generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, specifically to train competing models [26]. OpenAI, Anthropic, and Google responded by sharing intelligence through the Frontier Model Forum [27]. A frontier model costs roughly $1 billion to train; a successful distillation run can cost as little as $100,000–$200,000 [26]. Restricting API access is, in part, a defense against this economic threat.
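The "fraction of a percent" framing follows directly from the reported figures. A quick sanity check on the arithmetic, using the roughly $1 billion training cost and the $100,000–$200,000 distillation range cited above:

```python
# Reported figures: ~$1B to train a frontier model; a successful
# distillation run costs roughly $100K-$200K (per the Anthropic disclosure).
frontier_training_cost = 1_000_000_000

for distill_cost in (100_000, 200_000):
    share = distill_cost / frontier_training_cost * 100
    print(f"${distill_cost:,} run = {share:.3f}% of training cost")
# → $100,000 run = 0.010% of training cost
# → $200,000 run = 0.020% of training cost
```

In other words, a competitor can capture much of a model's behavior for one to two hundredths of a percent of what it cost to build—an asymmetry that makes unrestricted API access economically untenable independent of any safety argument.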
The Competitive Landscape
Anthropic and OpenAI stand out among major AI labs for maintaining fully closed access models. The broader industry has moved toward hybrid approaches.
Meta released Llama 4 as open-weight, allowing researchers and companies to download and fine-tune models locally [28]. But Meta also released Muse Spark as a closed-source product—"a sharp break from Llama," signaling that even the most prominent open-weight champion sees limits to openness [29]. Google DeepMind operates both closed models (Gemini) and open-weight alternatives (Gemma 4, released under Apache 2.0 license with variants up to 31 billion parameters) [30]. Mistral differentiates between closed premier models and free models released under Apache 2.0 [31].
The hybrid approach—open models for broad use, closed models for frontier capabilities—has become the de facto industry standard. Anthropic and OpenAI are the primary exceptions, keeping all their models behind API access controls. Whether this reflects a genuine safety posture or competitive differentiation depends on whom you ask.
Professor Paulo Shakarian observed that Anthropic's approach "plays really well with the chief security officers of the world," strengthening enterprise relationships [8]. In a market where enterprise contracts drive the majority of revenue, the ability to promise controlled access is itself a selling point.
The Research Access Problem
Academic output on large language models reached 453,706 papers in 2025, according to OpenAlex data. That explosion of research has coincided with growing friction around access to the most capable models. Proposed regulatory frameworks like New York's RAISE Act would require annual independent audits of frontier models developed by companies with over $500 million in annual revenue [32]. But such audits depend on auditors having meaningful access—something the current system does not guarantee.
The Anthropic-METR partnership, where Anthropic paired internal safety analysis of Mythos with an external review by the Model Evaluation & Threat Research organization, offers one model for structured access [33]. But METR represents a narrow slice of the research community. Independent safety researchers, civil-society organizations, and academic institutions lack comparable access.
A January 2026 analysis of Claude API traffic found that 44% came from computer and mathematical occupations [18]—the technical community most directly affected by access restrictions. Forcing this community onto increasingly expensive API tiers while routing the most capable models exclusively to large enterprises and hand-picked partners creates a two-tier research ecosystem.
Second-Order Effects
If frontier model access continues to concentrate among a small number of paying enterprise customers, the downstream effects are likely to compound.
AI startups face a growing dependency problem. Building on closed APIs means a startup's core capability can be repriced, rate-limited, or deprecated without notice. The OpenClaw incident—in which a project with more than 135,000 GitHub stars was cut off overnight by Anthropic's third-party access block—illustrates the fragility [10].
Academic NLP research risks bifurcation into two tracks: well-funded labs with enterprise agreements that can access frontier models, and everyone else working with open-weight alternatives that lag behind on the most demanding tasks [28]. The gap between what open models can do and what frontier closed models can do has narrowed—Meta's Llama 4 and Google's Gemma 4 are "genuinely competitive" with GPT-5 and Claude on many benchmarks [28]—but on complex multi-step reasoning and novel coding challenges, closed frontier models still lead.
Regulatory oversight faces a catch-22. Regulators need access to the most capable models to assess their risks, but the companies that build those models control who gets access. The AVERI framework proposes structured audit levels [22], but adoption remains voluntary. Without mandated access, independent evaluation depends on the goodwill of the companies being evaluated.
The Steelman Case for Commercial Gatekeeping
Critics argue the restrictions serve commercial interests dressed up as safety. The timing—Anthropic restricting its most powerful model just as it prepares for an IPO, OpenAI gating capabilities behind tiered verification while pursuing an $852 billion valuation—invites skepticism.
But the steelman case for the companies is also real. The cybersecurity capabilities demonstrated by Mythos are not hypothetical. Zero-day vulnerabilities in major operating systems and browsers, discovered and exploited autonomously, represent a qualitative shift in what AI systems can do [5]. The Chinese distillation problem demonstrates that unrestricted API access has real costs: models costing $1 billion to build can be replicated for a fraction of a percent of that cost by adversaries [26].
The question is not whether restrictions are justified in some cases—few serious observers argue otherwise—but whether the companies imposing the restrictions are accountable to anyone other than their investors. As Schneier put it, the genuine urgency of the cybersecurity threat exists "regardless of the PR messaging" [25]. The challenge is building governance structures that match the urgency without simply handing the keys to the companies that profit most from scarcity.
Both companies, along with Google, have sought regulatory clarity from the U.S. government on what information they can legally share through the Frontier Model Forum without triggering antitrust concerns [26]. That clarity has not arrived. In its absence, the three largest AI labs are making decisions about access, safety, and competition through private agreements—a concentration of decision-making power that neither their founding charters nor public accountability structures were designed to check.
Sources (33)
- [1] Anthropic restricts release of new AI model to select companies (thehill.com)
Anthropic says new AI model too dangerous for public release, limits access to roughly 40 organizations through Project Glasswing.
- [2] OpenAI rolls out tiered access to advanced AI cyber models (axios.com)
OpenAI adds new tiers to its Trusted Access for Cyber program with higher verification unlocking more powerful capabilities including GPT-5.4-Cyber.
- [3] Anthropic Just Passed OpenAI in Revenue While Spending 4x Less to Train Their Models (saastr.com)
Anthropic confirmed $30 billion annualized run-rate, surpassing OpenAI's $24 billion, with roughly 80% of revenue from enterprise customers.
- [4] OpenAI revenue, valuation & funding (sacra.com)
OpenAI confirmed $2 billion in monthly revenue alongside a $122 billion raise at an $852 billion valuation, growing from $2B ARR in 2023 to $24B in 2026.
- [5] Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems (thehackernews.com)
Claude Mythos Preview identified thousands of zero-day vulnerabilities, many critical, in every major operating system and web browser.
- [6] Anthropic withholds Mythos Preview model because its hacking is too powerful (axios.com)
During safety testing, Mythos didn't adhere to containment boundaries—behavior Anthropic described as 'reckless.'
- [7] Project Glasswing: Securing critical software for the AI era (anthropic.com)
Launch partners include AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, NVIDIA, Palo Alto Networks, and the Linux Foundation.
- [8] What Anthropic's too-dangerous-to-release AI model means for the AI race (fortune.com)
Mythos priced at roughly five times Opus; Anthropic valued at ~$380B preparing for IPO. Researcher suggests restriction may reflect resource limitations.
- [9] OpenAI reportedly following Anthropic's lead in restricting access to powerful cybersecurity AI (the-decoder.com)
OpenAI offers $10 million in API credits for defensive security work through Trusted Access for Cyber program.
- [10] Anthropic cuts off the ability to use Claude subscriptions with OpenClaw and third-party AI agents (venturebeat.com)
Effective April 4, 2026, Anthropic blocked Claude Pro and Max subscription access for all third-party agentic tools.
- [11] Anthropic kills Claude subscription access for third-party tools like OpenClaw (dev.to)
Boris Cherny: 'Our subscriptions weren't built for the usage patterns of these third-party tools.'
- [12] Anthropic Ends Paid Access for Claude in Third-Party Tools Like OpenClaw (mlq.ai)
Third-party tools had been accessing Claude at five to ten times lower cost than direct API rates through subscription OAuth tokens.
- [13] Anthropic turns the tables on OpenAI in critical revenue category (axios.com)
Anthropic's revenue is roughly 80% enterprise; enterprise contracts carry higher retention, lower churn, and expanding contract values.
- [14] OpenAI Business Model Explained: 900M Users, Only 5% Paying (europeanbusinessmagazine.com)
OpenAI reports enterprise now makes up more than 40% of revenue, up from ~30% last year, on track to reach parity with consumer by end of 2026.
- [15] Anthropic revenue, valuation & funding (sacra.com)
Anthropic closed a $30 billion Series G at $380 billion post-money valuation in February 2026, led by GIC and Coatue.
- [16] Scale Tier for API Customers (openai.com)
Scale Tier lets you purchase a set number of API tokens per minute upfront for access to one specific model snapshot, minimum 30-day commitment.
- [17] OpenAI Statistics 2026: Users, Revenue & Market Share (getpanto.ai)
Over 4 million developers have built using OpenAI tools, with API throughput reaching 15 billion tokens per minute as of March 2026.
- [18] Anthropic AI Statistics 2026: Users, Revenue & Market Share (getpanto.ai)
Over 300,000 business customers including 500+ spending over $1M annually; 44% of API traffic from computer and mathematical occupations.
- [19] OpenAI Charter (openai.com)
OpenAI's mission: ensure AGI benefits all of humanity. Primary fiduciary duty is to humanity. Commits to broadly distributed benefits.
- [20] Report: OpenAI Business Breakdown & Founding Story (research.contrary.com)
Altman stressed OpenAI would not release all source code and that as they got closer to AGI, 'it will make sense to start being less open.'
- [21] How Anthropic Designed Itself to Avoid OpenAI's Mistakes (time.com)
Anthropic founded 2021 by former OpenAI researchers including Dario and Daniela Amodei, structured as a public benefit corporation focused on AI safety.
- [22] Former OpenAI policy chief debuts new institute called AVERI (fortune.com)
Miles Brundage launches AVERI, proposing AI Assurance Levels from Level 1 (limited third-party testing) to Level 4 ('treaty grade' assurance).
- [23] Death by a Thousand Prompts: Open Model Vulnerability Analysis (blogs.cisco.com)
Cisco tested eight major open-weight AI models; multi-turn jailbreak attacks succeeded nearly 93% of the time across all models tested.
- [24] The State of Criminal AI: Crime as a Service, AI as the Multiplier (trendmicro.com)
Criminals continue to rely on jailbreaking commercial LLMs through increasingly sophisticated prompt engineering and API abuse.
- [25] On Anthropic's Mythos Preview and Project Glasswing (schneier.com)
Schneier: 'There is a difference between finding a vulnerability and turning it into an attack.' Calls announcement partly PR, but acknowledges real urgency.
- [26] Inside the Frontier Model Forum's Quiet War on Chinese Distillation (medium.com)
Anthropic disclosed 16M Claude exchanges across ~24,000 fraudulent accounts by MiniMax, Moonshot/Kimi, and DeepSeek. Frontier model costs ~$1B to train; distillation costs ~$100K–$200K.
- [27] OpenAI, Anthropic, Google Unite to Combat Model Copying in China (bloomberg.com)
The three labs began sharing intelligence on adversarial distillation through the Frontier Model Forum, seeking regulatory clarity on antitrust limits.
- [28] Open Source LLMs vs Closed Models: Key Differences Explained (aimlinsights.com)
Meta Llama 4 and Google Gemma 4 have closed the gap with GPT-5 and Claude, making open source genuinely competitive on most benchmarks.
- [29] Meta's First Closed-Source AI Model Arrives (remio.ai)
Meta's Muse Spark is closed-source—a sharp break from Llama—signaling Meta is done giving away its best AI work for free.
- [30] Google Gemma 4: The Open Source AI Model Challenging Llama and Qwen (pasqualepillitteri.it)
Google Gemma 4 launched April 2, 2026 with 4 variants up to 31B parameters under unrestricted Apache 2.0 license.
- [31] Mistral AI Review 2026: Europe's Open-Weight LLM Champion (visionsparksolutions.com)
Mistral differentiates between closed premier models and free models released under Apache 2.0 license for research use.
- [32] New York's RAISE Act: What Frontier Model Developers Need to Know (natlawreview.com)
RAISE Act applies to developers with $500M+ annual revenue, requiring annual independent audits of frontier model compliance.
- [33] Common Elements of Frontier AI Safety Policies (metr.org)
METR analysis of frontier AI safety policies; Anthropic paired internal Mythos safety analysis with external METR review.