AI in the news: week of April 12, 2026

The closed-frontier club tightened. The big three US labs are sharing intelligence on Chinese distillation. Broadcom-Google-Anthropic locked TPU production through 2031 with 3.5 GW behind Claude. Meta walked away from open weights with Muse Spark. Mythos Preview held the safety line.

I expected to lead with the Cloud Next aftermath. Three stories that hit on consecutive days displaced it: the Frontier Model Forum's coordinated intelligence-sharing against Chinese distillation, the Broadcom-Google-Anthropic TPU deal that puts 3.5 GW on Anthropic's roadmap through 2031, and Meta shipping Muse Spark, its first fully proprietary model. Underneath it, Anthropic confirmed Claude Mythos Preview but said no commercial release. Heavy week. Let me walk through it.

The three big US labs are now running joint counterintelligence

On April 6, Bloomberg reported that OpenAI, Anthropic, and Google have begun sharing intelligence through the Frontier Model Forum on what they're calling adversarial distillation: Chinese labs (DeepSeek, Moonshot, and MiniMax were named specifically) using fraudulent accounts to pull outputs from US frontier models and train student models on the responses. Anthropic documented 16 million such exchanges across roughly 24,000 fake accounts from three Chinese firms alone. US officials put the annual cost to American labs in the billions.
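The disclosed volumes imply a large per-account footprint, which is presumably what makes cross-lab detection tractable. A quick back-of-envelope (my arithmetic on the reported figures, not anything the labs published):

```python
# Figures as reported: 16M distillation exchanges across ~24,000 fake accounts.
# The per-account average below is my own arithmetic, purely illustrative.
exchanges = 16_000_000
accounts = 24_000

per_account = exchanges / accounts  # average exchanges per fraudulent account
print(f"~{per_account:.0f} exchanges per account on average")  # ~667
```

Several hundred high-volume extraction queries per account is a very different signature from ordinary usage, and pooled telemetry should surface that pattern faster than any single lab's abuse team working alone.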

This is the first coordinated intel-sharing operation between competing frontier labs that I'm aware of. The Forum has existed mostly as a policy-and-safety publication body until now. Pivoting it into an active counter-distillation operation is a meaningful institutional shift.

I have mixed feelings, and I want to be honest about why. The distillation problem is real: fraudulent accounts, terms-of-service violations, billions in extracted value. The labs are right to act. A cooperative-intelligence model is a better answer than each lab fighting alone, or each lab pushing for its own export-control carve-outs.

The thing I'd push back on is the framing. The official version is "stopping model copying." The structural effect is "the three US frontier labs now have a shared operational layer for deciding who gets access to their outputs." That's a governance capability with a much bigger surface than the immediate distillation problem. If the Forum is going to operate as an industry consortium with real enforcement teeth, the governance and transparency obligations on the Forum itself should rise to match. There's currently no public audit of what gets shared, what gets blocked, or what the dispute process looks like for a flagged account that turns out to be legitimate. State-level transparency laws like SB 53 hit the labs as developers. The Forum-as-consortium is outside that scope. It probably shouldn't be.

Compute through 2031 just got locked to three labs

Same Monday. Broadcom filed an 8-K confirming a long-term agreement to design and produce future Google TPU generations through 2031. Alongside it, Anthropic announced an expanded deal with Google and Broadcom for roughly 3.5 GW of next-generation TPU capacity coming online from 2027. Anthropic also disclosed that its revenue run rate has crossed $30B, up from $9B at the end of 2025. Over 1,000 customers now spend more than $1M/year on Claude; that number has doubled since February.
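The disclosed numbers are worth a quick sanity pass. This is my back-of-envelope on the reported figures, not Anthropic's accounting:

```python
# Reported: $30B run rate (April 2026), up from $9B at end of 2025;
# 1,000+ customers spending over $1M/year. The derived figures are mine.
run_rate_now = 30e9
run_rate_eoy25 = 9e9
growth_multiple = run_rate_now / run_rate_eoy25  # ~3.3x in roughly a quarter

big_customers = 1_000
cohort_floor = big_customers * 1e6          # at least $1B/yr from that cohort
cohort_share = cohort_floor / run_rate_now  # a hard floor of ~3% of run rate

print(f"growth: {growth_multiple:.1f}x; $1M+ cohort floor: {cohort_share:.0%} of run rate")
```

The interesting part is the second number: a thousand $1M+ accounts only guarantees a few percent of that run rate, so either the whales spend far more than the $1M threshold or the long tail is doing most of the work.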

Two things. First, the run-rate growth and the demand behind it are real. I'm one of those customers at a much smaller tier, and my daily Claude Code workload is load-bearing. Second, the 2031 commitment is a structural lock-in of the closed-frontier compute supply chain, and it's worth naming as such. The three labs that just stood up the Forum intelligence-sharing also collectively hold the next five years of premium-fab AI accelerator output. That's a tight cartel by any reasonable measure.

The principled-user reading: open-weight matters more, not less, as closed-frontier compute concentrates. The closed labs will keep widening the capability gap on the longest-context, hardest-reasoning workloads. But the gap on 80% of real workloads will keep narrowing, because Gemma 4, Llama 4, DeepSeek V4, and the rest run on much smaller compute and ship faster than the frontier extends its lead. Open weights keep getting more credible precisely because the closed tier is consolidating supply.

Meta shipped a proprietary model and walked away from being the open-weight sponsor

April 8. Meta unveiled Muse Spark, the first proprietary model out of Meta Superintelligence Labs under Alexandr Wang. The headline specs:

  • Multimodal, with a reasoning mode ("Contemplating")
  • 42.8 on HealthBench Hard, trained on data curated by 1,000 physicians
  • 52 on the Artificial Analysis Intelligence Index v4.0, fourth overall behind Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6 (Llama 4 Maverick scored 18 on the same index)
  • Powering the Meta AI app, web, WhatsApp, Instagram, Facebook, and Messenger
  • Private-preview API for select partners. No weights.

The capability jump is real. The strategic shift is bigger. Meta has been the de-facto sponsor of frontier-class open weights for two years, and Muse Spark is the explicit reversal: proprietary, API-only, the same posture as OpenAI and Anthropic. Wang's hire and the Superintelligence Labs reorg signaled this was coming. The April 8 launch confirms it.

I'll be honest: I'm disappointed. Llama-as-open-weight was the structural counterweight to closed-frontier consolidation, and Meta walking away removes the largest open-weight sponsor from the field. Llama 4 Scout and Maverick still exist, the weights are still out, but the next-frontier work is now closed at Meta the same way it's closed at the other three labs.

That said, the open-weight tier doesn't collapse because Meta closed up. Mistral shipped Codestral 2 under Apache 2.0 and Mistral Medium 3.5 in the same window. DeepSeek, Qwen, GLM, Nemotron, Gemma are all still on the open track. The bench depth is real. The headline sponsor changed; the field didn't.

Mythos Preview is the test case: the ASL framework actually held

April 7. Anthropic confirmed Claude Mythos Preview and announced it won't be commercially released. The capability disclosure is striking: Mythos Preview autonomously discovered a 17-year-old root-level RCE in FreeBSD's NFS path (now CVE-2026-4747), found vulns across every major OS and web browser, and contributed to Firefox 150 shipping fixes for 271 vulnerabilities it surfaced. Over 99% of the discovered vulns are reportedly unpatched. Instead of a commercial release, Anthropic launched Project Glasswing for defensive use with cleared partners.

The right call. Genuinely. A model that finds working exploits faster than the global vendor-patch loop can ship fixes is a model whose broad availability would be a net negative for security, and Anthropic seeing that and choosing not to ship is the kind of decision the ASL governance framework was designed to enable. The framework worked.

Two things worth tracking. First, the precedent matters more than the specific call. Anthropic just showed that a frontier lab will hold back a commercially viable model on safety grounds, which means the next call, by Anthropic or by OpenAI or Google at comparable capability, has a real reference point. Second, Project Glasswing's access model is the open question. Cleared partners, defensive use only, restricted distribution: fine in principle. But the operational details of who gets access, under what audit, with what oversight, will determine whether this is genuine restraint or a soft launch into the offensive-security customer base. Worth asking the question now rather than after the answer is locked in.

The labor question

No qualifying labor-news driver this week. The April Challenger numbers won't post until early May. My full take on the labor question is here. Short version: the displacement is real and accelerating faster than I expected, the pace is what's wrong, and the firms figuring out human+AI collaboration outperform the firms racing to cut. Back to this section when the April Challenger data lands.

Smaller items worth tracking

  • NVIDIA National Robotics Week highlighted surgical-robotics integrations including PeritasAI's multi-agent OR situational-awareness work. Physical-AI cadence keeps building.
  • Microsoft–Publicis expanded their 2021 partnership into AI-driven marketing solutions. Mostly positioning; the agent-layer integration is the interesting part of the contract.
  • Codestral 2 from Mistral shipped under Apache 2.0, a meaningful licensing shift for Mistral's code models. Benchmarks above GPT-4o on HumanEval and MBPP. The open-weight coding story keeps getting stronger.

What to watch next week

The closed-frontier club tightened. Forum intelligence-sharing, Broadcom-Google-Anthropic compute lock-in through 2031, and Meta closing up its frontier work all hit in the same five days. Three trends pointing the same direction: the closed-frontier layer is consolidating supply, distribution, and operational coordination. The principled-user response is the one I've been making: keep the open-weight option production-credible, keep the agent stack portable, keep the data flow controllable.

The governance framework worked once. Anthropic holding back Mythos Preview is the first time I can point to a frontier lab demonstrably not shipping a commercially viable model on safety grounds. SB 53's incident-reporting and framework-publication requirements come into effect over the same window. The combination of internal capability-gating and external transparency is starting to look like a working stack.

And the open-weight sponsor problem is now real. Meta walking away leaves a sponsor-shaped hole at the top of the open ecosystem. Mistral, DeepSeek, Qwen, GLM, Gemma still ship. The collective bench is deep. But none of them have Meta's distribution or capital to anchor the open frontier the way Llama did. The next sponsor is the question that matters, and right now I don't have a confident answer.

Next Sunday: April Challenger data if it lands midweek, more on Project Glasswing's access model, whatever Q1 earnings calls drop on AI-capex and headcount.

Sources