AI in the news: week of January 4, 2026
The bridge week. SB 53 went live on January 1, the year-end retrospectives all said roughly the same thing about 2025, the 2026 prediction pieces from Sequoia and a16z dropped with surprisingly compatible takes, and CES 2026 is sitting on Tuesday's runway. What I make of the turn of the year.
What this week actually changed: SB 53 went live as the first US frontier-AI law, and the year-ahead prediction industry converged on a platform-lock-in story that's a choice, not an inevitability.
The bridge week itself. December 29 to January 4 is always a strange stretch: the wires are quiet, the labs are off, and the only thing publishing at full volume is the year-end-and-year-ahead industrial complex. So that's mostly what this is: retrospectives, predictions, a governance milestone that landed on January 1, and CES opening Tuesday. I covered the 2025 wrap-up in its own piece on the 31st and the 2026 predictions in a companion post on the 5th, so this is the news-of-the-week version: who else was saying what, and what I make of the consensus that's forming.
The big concrete thing that actually happened. California's SB 53, the Transparency in Frontier Artificial Intelligence Act, took effect January 1, 2026. As of Thursday morning, every developer training models above the 10²⁶-FLOP threshold owes the state a published safety framework and a 15-day clock on serious-incident reporting. I covered the signing in the October 5 roundup and the legal architecture in the governance-frameworks piece, and my read hasn't changed: the precedent matters more than the specifics, the threshold is correctly drawn, and the next state law starts where this one ended. What I'm watching now is which lab publishes its framework first, and what shape it takes. Anthropic and OpenAI already publish responsible-scaling and preparedness documents, so the marginal cost is low for them. The interesting test is what xAI and Meta publish, and whether the documents converge on a recognizable format or each lab invents its own. Convergence is how the framework hardens; divergence is how it gets re-litigated.
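The 10²⁶-FLOP line is easy to sanity-check with the standard back-of-envelope for dense-transformer training compute, roughly 6 × parameters × tokens. A minimal sketch; note the heuristic is an approximation, not the statute's definition of covered compute:

```python
# Back-of-envelope: does a training run cross SB 53's 1e26 FLOP threshold?
# Uses the common ~6 * params * tokens heuristic for dense-transformer
# training compute (an approximation, not the statute's legal definition).

THRESHOLD_FLOPS = 1e26  # SB 53 frontier threshold

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= THRESHOLD_FLOPS

# A 70B-parameter model on 15T tokens: ~6.3e24 FLOPs, well below the line.
mid_size = training_flops(70e9, 15e12)
# A 1T-parameter model on 20T tokens: ~1.2e26 FLOPs, above it.
frontier = training_flops(1e12, 20e12)
```

Under this heuristic the threshold really does carve out only the largest handful of training runs, which is the sense in which it is "correctly drawn."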
The retrospectives all said roughly the same thing. CNN's year-end piece, the Stanford HAI index, Almost Timely's review, and roughly a dozen others told the same story with different emphasis. Capabilities kept climbing: Claude moved from 3.1% to nearly 29% on Humanity's Last Exam, Gemini Pro from 6.8% to 37.2%, and coding benchmarks effectively saturated. Documented AI incidents rose to 362. Model transparency scores dropped from 58 to 40. The capability curve and the responsibility curve diverged for the second year running. The framing I keep seeing is some version of "2025 was the year AI became serious." Stanford's James Landay was quoted to that effect: previous years were shiny-object years; 2025 was when use cases turned operational. Gary Marcus called it "peak bubble." Both frames are right and both are partial: operational adoption is real, the bubble in the financing layer is also real, and both can be true at the same time at this stage of an infrastructure cycle. The piece of the retrospective consensus I want to recalibrate is the labor framing. Multiple year-end pieces ran the "AI displaced 50,000+ jobs in 2025" headline as the whole story. The size is right; the why is what they're missing. Long version in the job-security piece.
The two biggest prediction pieces of the week were Sequoia's "Tale of Two AIs" and a16z's Big Ideas 2026. They disagree on plenty of specifics and agree on more than I expected. Sequoia's frame is supply-constrained capability with adoption running ahead: Ben Thompson's "TSMC Brake," AGI-style long-horizon agents landing while datacenter delays bite, and a "0-to-$1B revenue club" of 2026 startups growing faster than the previous cohort. a16z's frame is multimodality and agents as the foundation: everything becomes multimodal, multi-agent systems become how enterprises operate, and the OpenAI Apps SDK plus ChatGPT's 900M weekly users sets up a "once-in-a-decade gold rush in consumer tech." The convergence I'm tracking: both firms expect 2026 to be the year platform layers consolidate, Sequoia from the infrastructure-and-agent-foundation side, a16z from the consumer-and-app-distribution side. Both bets, distilled, are "the value accrues to the platform that locks in the agent stack and the user surface." That's the thesis I push back on in the vendor-lock-in piece. The principled counter-position: build agent stacks that run against local model endpoints, so the platform layer is optional infrastructure rather than structural lock-in. The VCs are calling the lock-in inevitable. It is not inevitable; it is a choice that gets made one architecture decision at a time. Geoffrey Hinton told CNN that 2026 will see "many, many" jobs displaced by AI, and on direction I think he's right.
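That counter-position is mostly an architecture habit, not a technology. A minimal sketch of the idea, assuming a hypothetical agent stack that reads its model endpoint from environment variables and defaults to a local OpenAI-compatible server; every name here is illustrative, not any real product's API:

```python
import os

# Hypothetical config loader: the provider choice lives in configuration,
# not in code. MODEL_BASE_URL / MODEL_NAME / MODEL_API_KEY are assumed
# environment variables; the defaults point at a local OpenAI-compatible
# server, so the hosted platform is an opt-in, not a structural dependency.

def endpoint_config() -> dict:
    return {
        "base_url": os.environ.get("MODEL_BASE_URL", "http://localhost:8080/v1"),
        "model": os.environ.get("MODEL_NAME", "local-70b"),
        "api_key": os.environ.get("MODEL_API_KEY", "not-needed-locally"),
    }

# Any OpenAI-compatible client can be constructed from this dict;
# swapping providers becomes a one-line environment change, not a rewrite.
```

The design point is that when the endpoint is a config value, the platform layer competes on price and quality instead of on switching costs.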
CES runs January 6-9 in Las Vegas. The previews pointed at Nvidia's official Rubin platform launch, Intel/AMD/Qualcomm NPU announcements aimed at on-device AI PCs, production-ready humanoids including the electric Boston Dynamics Atlas, a Lucid/Uber/Nuro robotaxi, Ford's sensor-integrated AI assistant, and Caterpillar's Cat AI Assistant. The theme that interests me most is the on-device-NPU push. PC vendors and chipmakers spent 2025 quietly building the foundation for local model execution, and 2026 is when the marketing catches up. That's the shift that makes the on-prem case for sensitive data the default rather than the fringe position: once a workstation runs a 70B-class model locally with reasonable latency, the gravitational pull of "send the data to a hosted endpoint" weakens significantly. Watch the actual benchmarks, not the keynote framing. I'll cover the actuals next Sunday.
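The "70B-class model locally" claim is mostly a memory question, and the arithmetic is simple. A rough sizing sketch, counting weights only and ignoring KV cache, activations, and runtime overhead:

```python
# Rough weight-memory sizing for running a model locally.
# Weights only; real deployments also need KV cache and runtime overhead.

def weight_gb(params: float, bits_per_weight: float) -> float:
    """Memory for model weights in decimal gigabytes."""
    return params * bits_per_weight / 8 / 1e9

fp16 = weight_gb(70e9, 16)  # 140.0 GB: out of reach for most workstations
q4   = weight_gb(70e9, 4)   # 35.0 GB: fits in high-end unified/GPU memory
```

The gap between those two numbers is why 4-bit quantization, not raw NPU TOPS, is what actually puts 70B-class models on a workstation.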
Smaller items: multiple year-end pieces cited a leaked OpenAI internal target of $30B in 2026 revenue against an estimated $13B for 2025, a roughly 2.3x growth assumption baked into internal planning, worth noting because the financing layer is making decisions on those numbers. NVIDIA's Nemotron 3 Super and Ultra variants are expected in H1. The AI Futures model's December update is worth reading on takeoff-timeline arguments. Tech-sector layoffs for 2025 totaled 244,000+ globally; the "AI caused this" framing keeps getting attached to numbers that include marketing-services and consulting cuts that have nothing to do with AI.
What to watch next week: CES actuals, the first SB 53 framework publications if any land early, and whatever the labs ship as they come back from the holiday. The pattern from this week: the governance framework is real now and the convergence-or-divergence question with the EU AI Act becomes practical as the August deadline approaches; the platform-consolidation prediction from both VC houses is the bet to push back on; and the retrospective consensus on capabilities is right while the consensus on labor is half-right.