AI in the news: week of December 21, 2025

The week the model arms race went thermonuclear: GPT-5.2 lands as the answer to a 'code red' memo, Google ships Gemini 3 Flash three days later, NVIDIA drops Nemotron 3 as open weights. Meanwhile Trump signs an EO to preempt state AI laws and Accenture's AI-driven RIF crosses 11,000.

What this week actually changed: the frontier became unambiguously three-horse, federal AI policy reversed posture on state regulation, and the year-end labor number confirmed the pace-driven-by-incentives read I've held all year.

Week 12 of the Sunday roundup. The week before Christmas is usually the dead zone: labs shut down, analysts file early, editors take Friday off. This year it was the loudest week of Q4. Two frontier model launches inside seven days, a third major open-weights release in the middle, a White House executive order trying to preempt every state AI law on the books, and Challenger's year-end count of AI-cited layoffs landing at 50,000-plus. Strap in.

December 11 was the model-arms-race day. OpenAI shipped GPT-5.2 in three flavors (Instant, Thinking, Pro), pulled forward on the calendar after Sam Altman's internal "code red" memo about ChatGPT losing consumer share to Google. The same day, Google launched Gemini Deep Research 3, its deepest research-agent build yet. The press packaged it as a head-to-head, which is roughly accurate: the launches were timed to step on each other. Then on December 17-18 Google countered with Gemini 3 Flash, the cheap-and-fast variant that runs up to 3x faster than the previous Pro tier while keeping Gemini 3-class reasoning, and made it the default model in the Gemini app that day. Pricing landed aggressively, with output around $3 per million tokens. Axios called it "fast, cheap, and everywhere," which is the right summary. Gemini 3 has been topping LMArena across most non-coding benchmarks since November. GPT-5.2 was the response. Gemini 3 Flash was the counter-response, optimizing the price-performance corner OpenAI hadn't matched. Anthropic's Opus 4.5 still owns coding. The frontier is now genuinely three-horse, which is structurally different from six months ago, when the conversation was OpenAI-and-then-everyone-else.
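The $3-per-million-output figure makes the per-request economics easy to sanity-check. A minimal sketch of the arithmetic; the input price and token counts below are illustrative assumptions, not quoted rates, and only the $3/M output figure comes from the coverage above.

```python
# Back-of-envelope cost per request at per-million-token pricing.
# Only the $3/M output rate is from the roundup; the input rate and
# token counts are illustrative assumptions.

def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Example: a 2,000-token prompt producing an 800-token answer,
# assuming a hypothetical $0.50/M input rate and the cited $3/M output rate.
cost = request_cost(2_000, 800, 0.50, 3.00)
print(f"${cost:.4f} per request")  # $0.0034
```

At those assumed numbers, a million such requests run about $3,400, which is why the cheap-and-fast tier is the slot every lab is now fighting over.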

The cadence itself is the story. Two flagship launches in seven days, both pulled forward to land before Christmas, both clearly reactive. We're in the part of the cycle where labs are racing each other into the ground on benchmark deltas, and the deltas are getting smaller per release. GPT-5.2-Thinking beats 5.1 on most evals, but the gap is narrower than 5.0-to-5.1, which was narrower than 4-to-5. Capability is converging. The product and distribution frontier (who has the default-model slot in your phone, your IDE, your enterprise tenant) is where 2026 is actually going to be fought. The thing I push back on, again, is the framing that picking the "best" model matters as much as the press suggests. For most production workloads, a competently deployed small model running locally gets you 80% of the answer at 5% of the cost and with 100% of the data control.

NVIDIA dropped the open-weights counter on December 15: the Nemotron 3 family. Nano (30B total/3B active), Super (~100B/10B active), Ultra (~500B/50B active). Hybrid latent mixture-of-experts architecture, 1M context, trained for agentic workloads. All three are open weights. SiliconANGLE framed it as NVIDIA "doubling down on open source to cement GPU dominance," which is right. Two things to flag. First, the architecture is not a transformer in the strict sense. It's a hybrid latent MoE design, and the active-parameter ratios make Nano and Super genuinely deployable on local hardware that runs a 7-13B dense model today. The Nano variant is in the same operational class as Llama-7B for memory but plays in a meaningfully higher accuracy tier. That matters for the on-prem and edge story. Second, strategically, NVIDIA does not need to sell models to make money. Every Nemotron download trains a developer to think "the right way to build agents is on NVIDIA hardware running NVIDIA's model stack." It's a marketing expense that ships as open weights. The cynical read is correct, and the cynical read doesn't change that the weights are in the open and someone running a small-models-locally setup gets a meaningfully better tool. Take the gift. Just don't mistake it for charity. The Chinese open-weights cadence is the same shape from the other side: DeepSeek-V3.2 and V3.2-Speciale shipped December 1, Qwen kept extending, and the Chinese labs collectively held about 15% global model share by November.
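The "Nano is Llama-7B-class for memory" claim follows from how MoE resource math works: total parameters set the weight-memory floor (every expert has to be resident), while active parameters set per-token compute. A rough sketch under stated assumptions; the parameter counts are the roundup's rounded figures, and the bytes-per-parameter values are illustrative quantization choices, not anything NVIDIA has published.

```python
# MoE vs dense resource sketch. Weight memory scales with TOTAL params
# (all experts stored); per-token compute scales with ACTIVE params.
# Sizes are the roundup's rounded figures; quantization levels are
# illustrative assumptions.

def weight_memory_gb(total_params_b: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GB for a model of total_params_b billions."""
    return total_params_b * 1e9 * bytes_per_param / 1e9

# Nemotron 3 Nano: ~30B total, ~3B active. At 4-bit quantization
# (0.5 bytes/param), all 30B weights fit in roughly 15 GB...
nano_mem = weight_memory_gb(30, 0.5)

# ...which is the same ballpark as a dense 7B model at fp16 (2 bytes/param):
dense7b_mem = weight_memory_gb(7, 2.0)

print(f"Nano 30B @ 4-bit: {nano_mem:.0f} GB; dense 7B @ fp16: {dense7b_mem:.0f} GB")
# Per-token compute, meanwhile, tracks the ~3B active params, less than
# half the dense 7B's, which is the deployability argument in one line.
```

Same memory envelope, lower per-token compute, higher accuracy tier: that's the whole on-prem pitch for the active-parameter ratios.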

Also on December 11, the President signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence." The order identifies state AI laws as "onerous" or "obstructive" and directs federal action against them. The Sidley analysis walks through the mechanism, but the headline pieces are these: a new DOJ AI Litigation Task Force standing up within 30 days; Commerce Department evaluation of state AI laws; BEAD broadband funds conditioned on states not having "onerous" laws; FCC and FTC directives to publish preempting standards; and a legislative recommendation for uniform federal preemption. The carveouts are narrow: child safety, compute and datacenter infrastructure, and state procurement. Everything else is in scope. SB 53, the California frontier-AI safety law I covered in week one, is precisely the kind of law this EO is built to challenge.

This is a problem. The substance argument: there's no federal AI safety framework, and there hasn't been one for three administrations. State action (California, Colorado, New York, Texas) has been the only meaningful governance happening at any level. Preempting state law without a federal replacement doesn't move governance to the federal level; it removes governance entirely. The EO doesn't establish any new federal safety standard. It directs agencies to challenge the only existing ones. The procedure argument: an executive order can't actually preempt a state statute. Preemption requires Congress or constitutional grounds. The EO is mostly a litigation strategy and a federal-funding-leverage play. SB 53 takes effect January 1, 2026 regardless. The strategic argument: the federal posture this encodes (that AI safety regulation is an obstacle to AI dominance) is the opposite of where I think the principled position lands. Governance is the work, not the friction. SB 53's transparency-and-incident-reporting model is genuinely lightweight regulation. Calling it "onerous" is positioning, not analysis. What I'll watch in Q1: whether the DOJ task force actually files; whether SB 53 takes effect on schedule and the first published frameworks land; and whether Congress takes up the legislative-preemption proposal. The chilling effect is the realistic concern. Litigation is slow; the prospect of litigation can freeze a legislature for a session.

The labor story closed the year confirming the realistic view. On December 21, CNBC published the year-end Challenger, Gray & Christmas count: AI was cited as the driver behind more than 50,000 layoffs in 2025, out of 1.17 million total cuts. Accenture CEO Julie Sweet's now-infamous "those we cannot reskill will be exited" is the year's executive quote on the topic. 50K-plus is a structural number, not a vibe, and it confirms the pace-driven-by-incentives read I've been holding all year, covered at long form in the job-security piece.

Smaller items: the European Commission floated delaying the August 2026 high-risk-AI compliance deadline as part of the AI Act "simplification" package. If the delay sticks, the EU's regulatory first-mover position shrinks and the federal preemption play has more room. Beijing removed the full AI law from its 2025 agenda but is keeping the targeted-rules approach; incrementalism is now the global default. Stanford HAI's 2026 forecast piece and Understanding AI's predictions are the two I'd read; both converge on "the era of AI evangelism is giving way to the era of AI evaluation."

What to watch next: SB 53 going live on January 1, the first DOJ task force filings, and whatever the labs ship as they come back from holiday. The pattern that held this week: the frontier is genuinely three-horse and the deltas are shrinking, the federal AI policy frame just inverted and state-level governance work matters more, not less, and the labor story closed the year with the realistic read intact. Last roundup before Christmas. Next Sunday I'll do the year in review: what landed in 2025, what didn't, and what I'm watching for in Q1.