AI in the news: week of March 29, 2026

Q1 closed loud. The Sora public API got sunset, the White House dropped an AI framework, MCP crossed 97M installs, a Mythos leak surfaced Anthropic's next model, and Oracle led the largest layoff round in its history. Q2 starts with momentum, not from rest.

What this week actually changed: Q1 closed with the loudest mix yet of the four things that will define Q2: vendor pricing reality (Sora API gone), federal policy finally arriving (three bills in one week), agent protocols crossing into infrastructure (MCP at 97M installs), and labor cuts hitting their highest absolute number of the cycle.

Quarter-close week. The post-GTC press cycle was still running, last week's Challenger numbers were still being chewed on, and a handful of stories landed that will set the Q2 agenda. Let me walk through what I think it means.

The hosted-frontier ceiling is real, and Sora hit it first

On March 24, OpenAI announced the Sora public API is being shut down with 30 days' notice. The reason was honest for a change: the per-minute compute cost on video generation never penciled out at the price the market would pay. The consumer Sora app stays. The developer tier (the one third parties built products on) goes away.

This is the most honest signal we've had from a frontier lab in a while. Text generation amortizes across millions of cheap calls. Video generation doesn't, and the consumer app subsidizes the cost in a way the API tier can't. The press framing is "OpenAI is repositioning." The real message is "we couldn't make the margin work."

The bigger read: the "everything will eventually become an API call" assumption that a lot of agent-platform planning is built on has a real ceiling at the high-compute modalities. Some things will stay as products, and pricing will reflect that. The third-party teams who built on Sora API in the last six months are the practical casualty. Thirty days is a sprint, not a migration. Same lesson as every quarter: taking a hard dependency on a single vendor's hosted modality is a strategic exposure, not a vendor-management one.
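The "strategic exposure" point has a concrete engineering shape. A minimal sketch, in Python, of the seam that makes a 30-day sunset survivable: product code calls one interface, never a vendor SDK directly. Every class and method name here is illustrative; no vendor ships this interface.

```python
from abc import ABC, abstractmethod


class VideoGenProvider(ABC):
    """Hypothetical seam between product code and any hosted video API.

    The point of the abstraction: when a vendor sunsets its API,
    swapping providers is a one-line config change, not a rewrite.
    """

    @abstractmethod
    def generate(self, prompt: str, seconds: int) -> bytes: ...


class SunsetProvider(VideoGenProvider):
    """Stands in for a deprecated hosted tier (e.g. a sunset public API)."""

    def generate(self, prompt: str, seconds: int) -> bytes:
        raise RuntimeError("hosted video API sunset; switch providers")


class LocalStubProvider(VideoGenProvider):
    """Test double: returns placeholder bytes instead of real frames."""

    def generate(self, prompt: str, seconds: int) -> bytes:
        return b"\x00" * seconds


def render_clip(provider: VideoGenProvider, prompt: str) -> bytes:
    # Product code only ever sees the interface.
    return provider.generate(prompt, seconds=4)
```

The teams who wired a vendor SDK straight into their product are the ones staring at a 30-day rewrite; the teams with this seam are editing a config file.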

The federal AI conversation finally arrived, incoherently

Three proposals dropped in the same week, pointing in three directions. On March 25, the White House released its National Policy Framework for Artificial Intelligence, with a push to preempt state AI laws, meaning the federal version, if Congress takes it up, would override SB 53 in California and the RAISE Act in New York. Same week, Rep. Beyer introduced the GUARDRAILS Act, designed to repeal the executive order behind that framework and block any preemption. And Senator Sanders and Rep. Ocasio-Cortez introduced the AI Data Center Moratorium Act, which would pause large-scale data center construction until national standards on energy, water, and worker protections pass.

The preemption fight is the one I'd watch. State laws have been where the real governance work has been getting done for 18 months. SB 53 set the transparency template, RAISE picked it up in New York, more states are queued. A federal bill that overrode all of that would be the wrong trade: federal law that doesn't go as far as the state laws it replaces is a step backward dressed up as harmonization.

The data center moratorium is the more interesting one to me, even though it has zero chance in this Congress. It's the first US proposal that puts the energy-and-water cost of AI infrastructure as the primary frame instead of a side note. The hyperscaler buildouts are reshaping regional grids. The political conversation is overdue. That the proposal got dismissed as fringe is itself the story. Same week, Hochul signed amendments to the RAISE Act on March 27, shifting NY toward the SB 53 model. The state-level template is hardening in real time.

MCP crossed into infrastructure

Anthropic's quarterly update on March 25 confirmed the Model Context Protocol crossed 97 million installs by end of Q1. (MCP is the open protocol agents use to talk to tools; if you've heard about it, this is the one.) A year ago this was a spec and a reference implementation. Now every major model vendor has shipped one and every major IDE supports it or has it on the roadmap.
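For readers who haven't looked under the hood: MCP is JSON-RPC 2.0 underneath, with methods like `tools/list` and `tools/call`. A toy sketch of that wire shape, with a stand-in dispatcher in place of a real server loop; the `get_weather` tool is hypothetical, and real servers speak this over stdio or HTTP rather than in-process.

```python
def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the message shape MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg


def handle(request, tools):
    """Toy dispatcher standing in for an MCP server's request loop."""
    if request["method"] == "tools/list":
        result = {"tools": [t["schema"] for t in tools.values()]}
    elif request["method"] == "tools/call":
        name = request["params"]["name"]
        args = request["params"].get("arguments", {})
        result = {"content": [{"type": "text", "text": tools[name]["fn"](**args)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}


# One hypothetical tool, described by a JSON Schema the client can read.
TOOLS = {
    "get_weather": {
        "schema": {
            "name": "get_weather",
            "description": "Current weather for a city",
            "inputSchema": {"type": "object",
                            "properties": {"city": {"type": "string"}}},
        },
        "fn": lambda city: f"Sunny in {city}",
    }
}

listing = handle(make_request(1, "tools/list"), TOOLS)
call = handle(make_request(2, "tools/call",
                           {"name": "get_weather",
                            "arguments": {"city": "Oslo"}}), TOOLS)
```

That's the whole trick: a tool is a JSON Schema plus a handler, and any client that speaks the protocol can discover and call it. The simplicity is a big part of why the install graph looks the way it does.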

I've been writing about MCP for a year now, and the same line keeps showing up: open protocols beat proprietary integrations on a long enough timeline. MCP is a year ahead of where I expected. The fact that it's Anthropic's spec rather than a neutral standards-body output is a footnote: the protocol behaves like an open standard, the reference implementations are usable, and the tooling treats it that way. That's the bar that matters.

The thing to flag: 97 million is install count, not active use. The number I want to see (and that nobody publishes) is daily-active MCP servers in production. My guess is the production number is much smaller than 97M implies, and that's fine. Infrastructure adoption always front-loads the install graph. Same week, Lucid shipped MCP-server upgrades and a Process Agent: mid-tier enterprise SaaS rolling MCP into the product, which is how install count actually translates to production use.

The Mythos leak, and what the frontier release cadence is doing

On March 26, Fortune published a story on a data leak revealing Anthropic is testing an internal model called Mythos, described as a "step change in capabilities." Anthropic confirmed it's in testing but declined to share specifics.

A frontier lab being tight-lipped about the next-tier model is not news. The leak itself is the more interesting story, as a security incident. Internal model documentation walking out the door is exactly what SB 53's transparency-and-incident-reporting requirements were written for. If Mythos is in scope under the 10²⁶ FLOPs threshold (and at "step change" framing, it likely is) then the leak is the kind of safety-relevant event that has to be disclosed under California law starting January. We don't have confirmation a reporting obligation triggered, and Anthropic's posture under the statute hasn't been publicly tested. This will be the test case the next time a leak is bigger.
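The scoping question is back-of-envelope arithmetic. A sketch using the common 6 × parameters × tokens approximation for transformer training compute; the numbers below are illustrative, not Mythos figures, which nobody outside Anthropic has.

```python
# SB 53-style compute threshold: 10^26 FLOPs of training compute.
THRESHOLD_FLOPS = 1e26


def training_flops(params: float, tokens: float) -> float:
    """Standard rough estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens


# Hypothetical frontier-scale run: 1T parameters, 30T training tokens.
run = training_flops(params=1.0e12, tokens=3.0e13)
in_scope = run >= THRESHOLD_FLOPS  # ~1.8e26 FLOPs, so yes
```

Anything with "step change" framing in 2026 almost certainly clears a bar that a 1T-parameter run already exceeds by nearly 2x, which is why the in-scope assumption above is a safe one.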

The harder thing to put cleanly: frontier release cadence has been compressing for two years and isn't slowing. GPT-5.4 on the 17th, Gemini 3.1 on the 20th, Grok 4.20 on the 22nd, Mythos rumors on the 26th. Benchmarks are saturating, capability deltas are getting harder to read, lab-to-lab gaps are shrinking. By the second half of 2026, the question won't be "which model is best." It will be "which lab's stack (model plus agent platform plus enterprise pricing) actually fits the customer's workflow." A different competition, and the labs are positioning for it now.

Labor: the quarter closed loud

Q1 closed with Oracle confirming up to 30,000 layoffs (18% of global headcount, the largest in its 49-year history) explicitly framed as freeing budget for the $50B AI capex buildout (CNBC). Atlassian 1,600 and WiseTech 2,000 on the same theme (Newsweek). Q1 total: 217,000+ announced cuts, AI-attributed share rising every month. The cleanest version of the pattern this quarter is "fund the capex with the headcount." Q2 earnings calls will tell us whether the markets keep rewarding it. My longer position is here.

Smaller items worth tracking

  • DeepSeek V4 rumors got louder. Multiple Chinese-tech outlets pointed at an early-Q2 release window for a coding-optimized V4. Chinese-lab cadence is quietly resetting expectations of what frontier-class open-weight ships look like.
  • Mistral's Voxtral TTS landed March 23, rounding out their post-GTC sprint: open-source speech generation in nine languages. Quieter than Small 4 or Forge, but it fills out the modality coverage.

What to watch next week

Q2 opens next week. The big watch items: whether any of the three federal proposals makes it to committee, whether the EU Council position from earlier in March shows up in the US framework as a reference point, what Q1 earnings calls say about the AI-capex-vs-headcount math the layoff announcements were positioning for, and whether DeepSeek V4 or Mythos benchmarks land in the first ten days. Q1 closed louder than it opened. Q2 starts with that momentum, not from rest.

Sources