Eli's AI Daily 7 May 2026

Five stories. Twenty headlines. Everything that matters in AI today.

Good morning. This week Nvidia decided the future of AI runs on glass, not copper, and wrote a $3.2 billion cheque to prove it. AMD's Lisa Su told the market that agentic AI is driving demand she would not have imagined. Google, Microsoft and xAI quietly agreed to let the government test their models before anyone else sees them. Google Chrome installed a 4GB AI model on your computer without asking. And OpenAI shelved a plan to spin off its robotics division before anyone found out it existed. Thursday.

Story 01

Nvidia invested $3.2 billion in a glassmaker. The future of AI runs on fibre, not copper.

Nvidia and Corning announced a multiyear partnership yesterday to build three new US factories and expand optical connectivity tenfold. Corning shares jumped 14%.

What happened

Nvidia and Corning announced a multiyear commercial and technology partnership on Wednesday, backed by a $500 million Nvidia investment in Corning warrants that give it the right to invest up to $3.2 billion in total. Under the deal, Corning will build three new advanced manufacturing facilities in North Carolina and Texas, expanding its US optical connectivity manufacturing capacity tenfold and its fibre production capacity by more than 50%. The expansion is expected to create more than 3,000 jobs. The partnership is centred on co-packaged optics, a technology that replaces copper interconnects between chips with optical glass fibre, dramatically increasing bandwidth and reducing energy consumption. Corning shares closed up roughly 14% on the announcement. Nvidia gained nearly 6%. Nvidia CEO Jensen Huang described it as "inventing the future of computing with advanced optical technologies."

Why it matters

The bottleneck in AI infrastructure is no longer just compute. It is the speed at which data can move between chips. As AI factories grow to tens of thousands of GPUs, copper wiring becomes a physical constraint: it generates heat, loses signal, and cannot carry the bandwidth modern workloads demand. Co-packaged optics replaces copper with glass, moving data at the speed of light with a fraction of the energy cost. This partnership is Nvidia's bet that the next generation of AI infrastructure requires a complete rethink of how chips talk to each other. The warrant structure is also worth noting. Nvidia is not just buying components. It is taking an equity-linked position in the upside of Corning's expansion. That is a more aggressive form of supplier alignment than a standard supply agreement, and it signals that Nvidia expects optical connectivity to become a material competitive advantage within the three-year warrant window.

The question nobody's asking

If optical fibre becomes the critical infrastructure layer for AI, and Nvidia controls the supply through a $3.2 billion equity stake, does that make Nvidia a chip company or an infrastructure monopoly?

Story 02

AMD surged 19% after Lisa Su said agentic AI is driving demand she "would not have imagined"

AMD's Q1 beat estimates on every metric. Q2 guidance of 46% revenue growth sent the stock to an all-time high. Su's explanation was three words: agentic AI demand.

What happened

AMD reported Q1 2026 earnings on Tuesday that beat Wall Street estimates on every major metric, sending the stock up 19% in pre-market trading to an all-time high of $420. Revenue came in ahead of consensus and the company guided Q2 revenue to approximately $11.2 billion, representing 46% year-on-year growth and well above the $10.5 billion analysts had anticipated. CEO Lisa Su appeared on CNBC Wednesday morning to explain the revision, attributing it directly to agentic AI. "Agents are really driving tremendous demand in the overall AI adoption cycle," she said, adding that she was seeing demand at "a pace that I would not have imagined." AMD also doubled its long-term forecast, now projecting the server CPU market will top $120 billion by 2030. Server CPU revenue alone is expected to grow more than 70% year-on-year in Q2. Super Micro Computer surged 18% on the same day.

Why it matters

AMD's results matter for two reasons beyond the stock price. First, they confirm that the AI infrastructure buildout is not slowing. Su's 70% server CPU growth projection for a single quarter is an extraordinary number, and it is being driven not by training runs but by inference, the compute required to actually run AI agents at scale. That is a different and more persistent source of demand than one-off model training. Second, the AMD result provides independent validation of the revenue story Big Tech told in its earnings last week. If Alphabet, Microsoft, and Amazon are buying AI infrastructure, and AMD is supplying the CPUs that power it, the demand is real. The question is whether the applications being built on top of that infrastructure will eventually justify the investment. Su's confidence suggests the hardware side of that equation is not the constraint.

The question nobody's asking

AMD's server CPU revenue is growing 70% in a single quarter because AI agents need compute to run. But agents only generate revenue if the tasks they complete create value. Where is the evidence that the applications are keeping pace with the infrastructure?

Story 03

Google, Microsoft and xAI agreed to let the US government test their AI models before anyone else

NIST announced the agreements today. The trigger was Anthropic's Mythos. The implications run well beyond cybersecurity.

What happened

The Center for AI Standards and Innovation at the US Department of Commerce announced today that Google DeepMind, Microsoft, and xAI have signed agreements to share unreleased versions of their AI models with the government before public launch. The centre will conduct pre-deployment evaluations and targeted research to assess frontier AI capabilities and their implications for national security and public safety. Developers will provide versions with safety guardrails stripped back so CAISI can probe for vulnerabilities. The announcement builds on existing 2024 agreements with OpenAI and Anthropic. The White House is also separately discussing an executive order to create a formal AI working group involving tech executives and government officials. The immediate catalyst for all of this was Anthropic's Claude Mythos model, which can autonomously identify and exploit software vulnerabilities across every major operating system. Anthropic CEO Dario Amodei met with senior Trump administration officials at the White House just days after Mythos was announced.

Why it matters

Six months ago, the Trump administration was the loudest voice in favour of AI deregulation. The White House explicitly tore up Biden's AI safety order on day one. The reversal that has followed is one of the most significant policy shifts in recent memory, and it has happened entirely because of one unreleased model. Mythos demonstrated that frontier AI capability is advancing faster than any government's ability to respond reactively. Pre-deployment evaluation is the logical response. The question is what it means in practice. The labs are handing over models with guardrails stripped, which gives the government genuine insight into capability. But it also means the most sensitive and capable AI models in existence are being shared with a federal agency that has already had highly classified documents leaked repeatedly. Pre-deployment evaluation is the right idea. The implementation details matter enormously.

The question nobody's asking

The labs are sharing their most capable unreleased models, with guardrails stripped, with a government agency. Who is auditing the auditor?

Story 04

Google Chrome installed a 4GB AI model on your computer. It did not ask. It will do it again if you delete it.

Security researcher Alexander Hanff discovered that Chrome is silently downloading Gemini Nano to user devices. At Chrome's scale, the environmental cost alone is staggering.

What happened

Security researcher and privacy lawyer Alexander Hanff, known online as That Privacy Guy, published findings this week that Google Chrome is silently downloading an approximately 4GB on-device AI model called Gemini Nano to user devices without notification, consent, or an opt-out option. The file, named weights.bin, is stored in a folder called OptGuideOnDeviceModel inside the Chrome user profile and is downloaded automatically on devices that meet certain hardware requirements. Hanff discovered the behaviour while running automated privacy audits, finding that a browser profile that had received zero human input had accumulated 4GB of model weights within days of creation. If users discover and delete the file, Chrome re-downloads it automatically. Disabling it requires navigating to chrome://flags and disabling a buried experimental setting. The model powers features including text composition assistance, on-device scam detection, and a Summarizer API. Google has not commented directly on the consent question.

Why it matters

Chrome has roughly 3.5 billion users. A silent 4GB download at that scale represents between 6,000 and 60,000 tonnes of CO2-equivalent emissions from bandwidth alone. That is before you consider the storage impact on devices with limited capacity, the data costs for users on metered connections, or the legal exposure under GDPR and CCPA, both of which require meaningful consent for this kind of system-level modification. The deeper issue is the pattern Hanff has identified. This is not the first time a major AI company has silently modified user environments. He identified similar behaviour in Anthropic's Claude Desktop, which reinstalled a browser integration bridge across multiple browsers without user knowledge. The common thread is that both companies appear to have concluded that asking for permission creates friction that harms adoption. That is a calculation with legal, regulatory, and reputational consequences that have not yet fully arrived.
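That 6,000 to 60,000 tonne range is easy to sanity-check yourself. A minimal sketch, assuming the user count and download size above; the carbon-intensity figures per gigabyte transferred are assumptions chosen to span the low and high estimates commonly cited for network data transfer, not numbers from Google or Hanff:

```python
# Back-of-envelope bandwidth-emissions estimate for a silent 4 GB download
# pushed to the entire Chrome user base.

USERS = 3.5e9        # approximate Chrome user base (from the story above)
DOWNLOAD_GB = 4      # size of the Gemini Nano weights file

# Assumed network carbon intensities, in kg CO2e per GB transferred.
# Published estimates vary by an order of magnitude; these two bracket
# the range and are illustrative only.
for kg_per_gb in (0.0004, 0.004):
    tonnes = USERS * DOWNLOAD_GB * kg_per_gb / 1000  # kg -> tonnes
    print(f"{kg_per_gb} kg CO2e/GB -> {tonnes:,.0f} tonnes")
```

Run it and the two assumptions land at roughly 5,600 and 56,000 tonnes, which is where the story's 6,000 to 60,000 figure comes from. The point is less the exact number than the multiplier: at 3.5 billion installs, even a tiny per-user cost compounds into something material.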

The question nobody's asking

Google decided not to ask 3.5 billion people whether they wanted a 4GB AI model installed on their computers. What does it say about the industry that this is becoming the norm rather than the exception?

Story 05

OpenAI shelved a plan to spin off its robotics and hardware divisions before the IPO. Nobody knew the plan existed.

The Wall Street Journal reported the shelved spinoff plan on Wednesday. OpenAI's reasoning was that the entities would stay on the balance sheet regardless.

What happened

OpenAI considered spinning out its robotics and hardware divisions ahead of its planned IPO but shelved the idea after concluding the entities would remain on its balance sheet regardless of the corporate structure, according to a Wall Street Journal report published Wednesday. The existence of the plan itself was not previously known. OpenAI has been building out hardware capabilities including its Jony Ive-designed screenless device, its AI phone project reported earlier this month, and robotics capabilities connected to its physical AI ambitions. The company ultimately decided that separating these units would not achieve the balance sheet simplification that typically motivates a spinoff, and that keeping them integrated better supported its broader product strategy heading into a potential late 2026 IPO.

Why it matters

The story confirms two things simultaneously. First, OpenAI's hardware ambitions are more developed than its public communications suggest. A spinoff is not considered for a division that lacks meaningful assets, people, or a cost base. The fact that it was considered and rejected means the robotics and hardware operation is substantial enough to warrant that level of corporate planning. Second, the rejection tells you something important about the IPO calculus. OpenAI's CFO Sarah Friar has already flagged concerns about revenue growth and financial infrastructure. A spinoff that does not actually simplify the balance sheet would add complexity to the prospectus without any of the benefits. Investors buying the IPO will be buying a company with a frontier model business, an enterprise deployment business, a hardware division, a robotics operation, a media acquisition, and an ongoing lawsuit. That is a complicated story to tell at $852 billion.

The question nobody's asking

OpenAI considered and rejected a spinoff because it would not help. What other structures has the company explored and quietly abandoned as it tries to make an $852 billion valuation look reasonable on a prospectus?

Two tools worth your time

Chrome flags fix — If you want to stop Chrome re-downloading the Gemini Nano model, go to chrome://flags in your address bar, search for "Enables optimization guide on device," and set it to Disabled. Worth doing before the model silently reappears.

AMD Instinct MI350 preview — AMD's next-generation GPU, expected in the second half of 2026, is already generating significant enterprise interest following this week's results. If you are evaluating AI infrastructure options beyond Nvidia, AMD's developer programme is now worth registering for ahead of the MI350 launch.

What else happened in AI today

Twenty things worth knowing. One line each.

01 Super Micro Computer surged 18% on Wednesday alongside AMD, after the server maker guided Q4 profit of 65 to 79 cents per share, trouncing Wall Street's 55 cent expectation.
02 Corning's US optical connectivity manufacturing capacity will increase tenfold under the Nvidia deal, making it the single largest expansion of AI-related optical infrastructure in American manufacturing history.
03 AWS introduced a preview feature allowing AI agents to access and operate WorkSpaces virtual desktops using assigned identities, a significant step toward agents operating enterprise software the way humans do.
04 Apple settled its Siri privacy lawsuit for $250 million on Wednesday, closing a case alleging that Siri had improperly recorded and shared private conversations without user consent.
05 iOS 27 will reportedly let users choose between Gemini, Claude, and other AI models for Siri features, ending Apple's exclusivity arrangement with OpenAI ahead of schedule.
06 CAISI, the US government's main AI model testing hub, has already completed more than 40 evaluations including on models not yet available to the public, before today's expanded agreements were even signed.
07 Gemini 3.2 Flash briefly appeared in the Gemini app for some users this week, ahead of Google I/O on May 19, where a Gemini 3.5 announcement is widely rumoured.
08 An Oxford study found that fine-tuning AI models to be kinder and more empathetic made them 60% more likely to give wrong answers, with error rates ballooning to nearly 12 percentage points above baseline in emotionally charged conversations.
09 Anthropic's Claude Mythos identified a 27-year-old vulnerability in OpenBSD and a 16-year-old bug in FFmpeg that automated scanning tools had passed millions of times without detecting, according to internal testing details published this week.
10 AMD doubled its long-term server CPU market forecast to $120 billion by 2030, up from its previous projection, driven entirely by agentic AI workload demand.
11 Google updated AI Search to include excerpts from Reddit and other web forums alongside standard results, and added a feature that highlights links from a user's own news subscriptions.
12 OpenAI's Jony Ive device, its AI phone project, and its robotics operation will all remain on the company's balance sheet heading into the IPO after the shelved spinoff plan.
13 US power consumption is forecast to hit record highs in both 2026 and 2027, with AI and cryptocurrency data centres accounting for the majority of the demand increase according to the Energy Information Administration.
14 SpaceX is developing its own GPUs, listed among "substantial capital expenditures" in its S-1 registration ahead of its IPO, marking a major vertical integration play alongside Tesla and xAI.
15 IREN agreed to acquire cloud infrastructure firm Mirantis in an all-stock deal worth approximately $625 million, adding Kubernetes orchestration and enterprise support to its AI cloud platform.
16 Anthropic's annualised revenue run rate has reached $30 billion, tripling from $9 billion in just four months, driven primarily by enterprise adoption of Claude Code and agentic workflows.
17 Meta is building an AI-powered shopping tool directly within Instagram that allows users to view product information, ask AI questions, and complete purchases without leaving a Reel or feed.
18 A Northern District of California ruling found that when an AI platform exercises "ultimate authority" over assembled ad content, it may be considered a maker of fraudulent statements under securities law, creating significant new legal exposure for Meta, Alphabet, Snap, TikTok, and X Corp.
19 Google's Gemini API File Search now supports multimodal RAG combining image and text, with custom metadata filtering and page-level citations, available free of charge to developers.
20 Google I/O is on 19 May. Leaks this week suggest Gemini 3.2 Flash and possibly Gemini 3.5 are imminent. If you want to know what Google's AI strategy actually looks like for the second half of 2026, that event is the one to watch.
That is your Thursday. Nvidia is betting AI's future runs on glass. AMD says agents are driving demand nobody predicted. The government is getting early access to AI models that the rest of us have not seen yet. Chrome installed something on your computer without asking. And OpenAI is going public with a hardware division, a robotics arm, a media acquisition, and an ongoing lawsuit. The story is getting more complicated by the week. See you tomorrow.

— Eli
