Eli's AI Daily 5 May 2026

Five stories. Twenty headlines. Everything that matters in AI today.

Good morning. Today OpenAI and Anthropic both launched enterprise deployment ventures within hours of each other, in what can only be described as a coordinated declaration of war on the consulting industry. The banks financing all of this are quietly choking on the debt. The White House has decided it wants to vet AI models before they reach the public, which is quite the turnaround from a president who spent his first day in office tearing up AI safety rules. And a chip company you may not have heard of just filed for a $3.5 billion IPO. Tuesday.

Story 01

OpenAI raised $4 billion to deploy AI inside your company. PE firms get a guaranteed 17.5% return.

The Deployment Company is OpenAI's most ambitious commercial move yet. The structure tells you everything about who they think has the money.

What happened

OpenAI closed more than $4 billion for a new joint venture called The Deployment Company, backed by 19 investors including TPG, Brookfield Asset Management, Advent International, Bain Capital, SoftBank, and Dragoneer. The venture is valued at $10 billion pre-money, with OpenAI retaining majority ownership and super-voting shares. Private equity investors are guaranteed a 17.5% annual return over a five-year window, with OpenAI covering the shortfall if targets are missed. OpenAI itself is contributing $500 million upfront with an option to add up to $1.5 billion more. The operating model is deliberately hands-on: rather than selling software licences, The Deployment Company will embed engineers directly inside client organisations, identifying where OpenAI tools can create impact and then building and maintaining those systems. The structure is modelled explicitly on Palantir's forward-deployed engineer approach. Combined, the 19 PE backers control or influence more than 2,000 portfolio companies and clients.
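Back-of-envelope, and assuming the 17.5% guarantee compounds annually on the full $4 billion raised (the actual hurdle mechanics and capital base are not public), the floor OpenAI has underwritten looks like this:

```python
# Rough sketch of the guaranteed-return floor. Assumes annual compounding
# on the full $4B over the five-year window; actual deal terms are not public.
principal = 4_000_000_000  # reported capital raised
rate = 0.175               # guaranteed annual return
years = 5                  # reported window

floor = principal * (1 + rate) ** years
guaranteed_gain = floor - principal

print(f"Value PE backers are guaranteed after {years} years: ${floor / 1e9:.2f}B")
print(f"Gap OpenAI must cover if the venture earns nothing: ${guaranteed_gain / 1e9:.2f}B")
```

Under this simple reading, the backers are owed roughly $9 billion after five years, meaning OpenAI is on the hook for close to $5 billion if the venture generates nothing at all.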

Why it matters

OpenAI's enterprise API market share has dropped from roughly 50% in 2023 to around 25% by mid-2025, with Anthropic and Google making significant inroads. The Deployment Company is the response: if you cannot win on the model alone, you win by owning the implementation layer. The 17.5% guaranteed return is the interesting structural detail. It means OpenAI has effectively underwritten the downside for its PE backers in exchange for access to their portfolio companies. That is not a software licensing deal. That is a distribution agreement dressed up as a joint venture. For McKinsey, Accenture, BCG, and every other firm that has spent the last three years building AI consulting practices, this is a direct threat. OpenAI is not just building the model anymore. It is coming for the implementation revenue too.

The question nobody's asking

OpenAI is guaranteeing PE investors a 17.5% annual return while simultaneously missing its own internal revenue targets. Who exactly is underwriting that guarantee?

Story 02

Anthropic did the same thing. Within hours. With Goldman Sachs and Blackstone.

The timing was not a coincidence. The two biggest AI labs declared war on enterprise consulting on the same Monday.

What happened

Anthropic announced a $1.5 billion enterprise AI services joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs, with further backing from Apollo Global Management, General Atlantic, Leonard Green, GIC, and Sequoia Capital. Blackstone and Hellman & Friedman anchored at approximately $300 million each, with Goldman Sachs putting in around $150 million. The venture is a standalone entity with Anthropic engineers embedded directly within its team, targeting mid-sized companies including regional health systems, community banks, and multi-site manufacturers. The announcement landed within hours of OpenAI's Deployment Company news. Anthropic CFO Krishna Rao said enterprise demand for Claude was "significantly outpacing any single delivery model." Blackstone President Jon Gray described it as addressing "one of the most significant bottlenecks to enterprise AI adoption," namely the scarcity of engineers who can implement frontier AI systems quickly.

Why it matters

The contrast between the two ventures is instructive. OpenAI is targeting large PE portfolios at scale with a guaranteed-return structure that prioritises volume and speed. Anthropic is targeting the middle market with a common equity approach and a focus on sectors requiring careful implementation: healthcare, financial services, manufacturing. Two different theories of how AI deployment actually works at enterprise scale. For the consulting industry, both models are threatening, but in different ways. OpenAI threatens the top of market. Anthropic threatens the mid-market. Between them, they are attempting to take the most profitable work in enterprise technology and bring it in-house, powered by their own models. The consulting response will be interesting to watch.

The question nobody's asking

Both ventures embed engineers inside client companies using their own models. When a client needs an honest assessment of whether their AI investment is working, who do they ask?

Story 03

The banks financing all of this are quietly choking on the debt.

JPMorgan, Morgan Stanley, and SMBC are looking for ways to offload AI data centre loans. One deal alone is $38 billion.

What happened

Major banks including JPMorgan Chase, Morgan Stanley, SMBC, and MUFG are actively searching for ways to distribute the credit risk from AI data centre financing to other investors, according to Financial Times reporting published today. Loan volumes for new data centres have grown so large that individual institutions are hitting their internal concentration limits. One deal illustrates the scale: JPMorgan and MUFG have spent months trying to distribute portions of a $38 billion loan package, which finances Oracle data centres in Texas and Wisconsin, across the broader market. Banks are exploring private sales of debt tranches and so-called Significant Risk Transfer structures to reduce their exposure. US Treasury Secretary Scott Bessent separately said today that banks and technology companies are working to strengthen defences against AI-enabled cyber threats, acknowledging that AI risk has moved into core financial infrastructure.

Why it matters

The AI infrastructure buildout is the largest single capital deployment in the history of technology. Oracle alone has signed a $300 billion deal with OpenAI. CoreWeave, which listed last year, has borrowed heavily to build capacity. The assumption underlying all of this is that AI demand will grow fast enough to service the debt. If it does not, the exposure does not sit with the AI companies; it sits with the banks. The fact that JPMorgan and Morgan Stanley are now trying to shed that exposure is a signal worth paying attention to. These are not cautious institutions by nature. When they start using phrases like "concentration risk" about a single sector, it means the loans are large enough to create systemic concern. The AI bubble debate has so far been conducted mostly in the context of equity valuations. The debt side of the story has received considerably less attention.

The question nobody's asking

If JPMorgan cannot find buyers for $38 billion in Oracle data centre debt, what does that say about how the broader market actually values AI infrastructure at this scale?

Story 04

The White House now wants to vet AI models before they go public. It tore up those rules 16 months ago.

The Trump administration is considering an executive order requiring government oversight of new AI models. The catalyst was Anthropic's Mythos.

What happened

The Trump administration is discussing an executive order to establish a working group of tech executives and government officials to examine review procedures for new AI models before public release, the New York Times reported today, citing US officials and people briefed on the deliberations. Senior administration officials have already briefed executives from Anthropic, Google, and OpenAI on some of the plans under consideration. The immediate trigger was Anthropic's Mythos model, which the company has withheld from public release due to its ability to autonomously identify and exploit software vulnerabilities across every major operating system. The shift is dramatic. On his first day in office in January 2025, Trump revoked a Biden-era executive order that required AI developers to share safety test results with the government before release. The White House on Tuesday declined to confirm the report, saying any policy announcement would come directly from the president.

Why it matters

The political calculus has shifted. Trump came into office treating AI regulation as a threat to US competitiveness against China. What has changed is the emergence of models like Mythos that are too capable to release publicly, combined with growing bipartisan anxiety about AI's impact on jobs, energy costs, and national security. The departure of David Sacks as AI czar in March removed the administration's most vocal deregulation voice. In his place, Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent are taking a harder line. The detail that administration officials have already briefed the three major labs privately, before any public announcement, is also notable. Whatever the final policy looks like, the era of completely hands-off federal AI policy in the United States appears to be ending.

The question nobody's asking

Trump tore up Biden's AI safety order on day one and called it anti-competitive. The policy reversal is now being driven by the same concern Biden had. So who was right?

Story 05

Cerebras just filed for a $3.5 billion IPO. Its entire thesis is that Nvidia has a competitor.

The AI chip company with a $20 billion OpenAI compute deal is heading to the Nasdaq. The timing is deliberate.

What happened

Cerebras Systems filed updated IPO paperwork today for a Nasdaq listing, planning to sell 28 million shares at $115 to $125 per share, targeting up to $3.5 billion in proceeds. The offering values the company at up to $26.6 billion, up from a $23 billion valuation in a February venture round backed by AMD. Cerebras builds wafer-scale processors, essentially chips the size of an entire silicon wafer rather than the small dies used in Nvidia GPUs, which it claims deliver dramatically faster AI inference for certain workloads. The company reported fourth-quarter 2025 revenue of $510 million, up 76% year on year, and net income of $87.9 million. Its largest commercial relationship is a multi-year deal with OpenAI for up to 750 megawatts of compute capacity, valued at more than $20 billion through 2028. CEO Andrew Feldman is not selling shares in the offering.
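The headline proceeds figure is consistent with the filing's share count and price range. A quick sanity check, taking the reported numbers at face value:

```python
# Sanity-check the Cerebras offering maths from the reported filing figures.
shares = 28_000_000                # shares offered
price_low, price_high = 115, 125   # per-share range in dollars

proceeds_low = shares * price_low
proceeds_high = shares * price_high

print(f"Gross proceeds range: ${proceeds_low / 1e9:.2f}B to ${proceeds_high / 1e9:.2f}B")
# The top of the range matches the reported "up to $3.5 billion" target.
```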

Why it matters

Cerebras is the first serious alternative AI chip play to attempt a public listing since CoreWeave's successful IPO last year, and it arrives at an interesting moment. Nvidia's market dominance in AI compute is total and widely documented. What is less well understood is the inference market, where the economics are different. Training remains locked to Nvidia's hardware and CUDA software ecosystem. Inference, running models after they are trained, is a more competitive space where alternative architectures can compete on cost and latency. Cerebras's wafer-scale design has genuine technical advantages for certain inference workloads, particularly large models running in real time. The $20 billion OpenAI deal is the most credible endorsement possible, but it also means the company's entire revenue trajectory depends on one customer. That concentration will be the central investor concern going into the IPO.

The question nobody's asking

Cerebras's valuation rests almost entirely on its OpenAI deal. OpenAI is simultaneously missing revenue targets and launching a new venture that needs to justify its own costs. What happens to Cerebras if that contract gets renegotiated?

Two tools worth your time

Anthropic's enterprise JV application — If you run a mid-sized business in healthcare, financial services, or manufacturing and want access to embedded Claude engineers, Anthropic is accepting expressions of interest via its official announcement page. Worth registering early, given that the initial pipeline comes from PE portfolio companies.

Cerebras Cloud — Available now before the IPO. If you are running large model inference workloads where latency matters, Cerebras's wafer-scale inference is worth benchmarking against your current GPU setup. The pricing is competitive and the speed gains on certain architectures are real.

What else happened in AI today

Twenty things worth knowing. One line each.

01 IBM Think 2026 opened this morning in Boston with CEO Arvind Krishna unveiling the company's most comprehensive enterprise AI announcements to date, centred on moving organisations from pilots to full-scale deployment.
02 IBM also announced expanded partnerships with MIT and Dallara to pair quantum computing and AI for research and advanced engineering, with live demos of agentic AI managing enterprise workflows on stage today.
03 US Treasury Secretary Scott Bessent confirmed today that banks and technology companies are actively strengthening defences against AI-enabled cyberattacks, marking the first time a senior Treasury official has framed AI as a financial stability issue.
04 David Sacks, former White House AI czar and champion of AI deregulation, departed the role in March. The administration's policy direction since his departure is the opposite of everything he stood for.
05 Elon Musk acknowledged in court that xAI had, to some extent, used outputs from OpenAI models to train its own systems — a practice known as distillation — adding a new legal flashpoint to an already crowded week for AI intellectual property.
06 Anthropic's annualised revenue run rate has reached $30 billion, up from $9 billion at the end of 2025 — more than tripling in roughly four months, growth the company attributes largely to Claude Code adoption among enterprise engineering teams.
07 OpenAI's Codex now has over 4 million weekly active users, more than doubling since the start of the year, according to figures confirmed this week alongside the Amazon Bedrock partnership.
08 The 19 PE firms backing OpenAI's Deployment Company collectively control access to more than 2,000 portfolio companies — giving OpenAI a pre-built enterprise client pipeline that would have taken years to build through a traditional sales motion.
09 Cerebras CEO Andrew Feldman is not selling any shares in the IPO, a signal that insiders believe the current valuation is below where the stock will trade once the company is public.
10 The Musk v. OpenAI trial continues this week, with Greg Brockman expected on the stand and Sam Altman scheduled to testify during the week of 11 May.
11 Oracle has taken on $43 billion in debt in 2026 alone to finance the $300 billion data centre deal with OpenAI — a concentration of liability that analysts at Columbia Business School have described as "renting out its investment-grade credit rating."
12 McKinsey has announced it will use AI agents to help select client teams for engagements — a move that puts it in direct competition with the enterprise ventures that OpenAI and Anthropic just launched today.
13 Microsoft and Amazon have both handed the Pentagon broader control over their AI systems in classified networks, continuing the trend set by Google's deal last week.
14 Anthropic's board decision on whether to proceed with its $50 billion fundraise at a $900 billion valuation is expected this month — a figure that would make it the most valuable private company in history.
15 AI startup Artisan used KC Green's "This is Fine" comic in an ad campaign without permission, with Green publicly asking followers to vandalise the ads. The company said it was "reaching out to him directly."
16 The ASX in Australia has warned listed companies against using inflated AI claims to push stock prices — the first major stock exchange to explicitly flag AI narrative manipulation as a regulatory concern.
17 Singapore's Prime Minister warned today of significant AI-driven disruptions to the workforce and committed the government to active support for workers displaced by automation.
18 Goldman Sachs has barred its Hong Kong bankers from using Anthropic's Claude models, citing US-China tech tensions — even though Hong Kong is not subject to mainland China's AI restrictions.
19 Cerebras's wafer-scale chip design activates the entire silicon surface rather than small dies, meaning it has dramatically more on-chip memory and bandwidth than Nvidia GPUs — a genuine technical advantage for inference-heavy workloads at scale.
20 The White House's about-turn on AI oversight means that every major AI governance position taken in the past 16 months — by the US, EU, and UK — has now been revised, reversed, or significantly softened at least once. The regulatory landscape has never been less stable.

That is your Tuesday. Two AI labs declared war on the consulting industry before lunch. The banks financing the infrastructure are quietly trying to shed the risk. The White House reversed its own policy in 16 months. And a chip company no one outside the industry has heard of filed to go public on the back of a $20 billion deal with a company that is simultaneously missing its revenue targets. If any of this feels like it is moving too fast to track, that is because it is. See you tomorrow.

— Eli
