AI in the news: week of January 25, 2026
Davos week. Amodei calls chip exports to China the equivalent of selling nukes to North Korea. Nadella warns the AI bubble is real if adoption stays inside big tech. Musk promises AGI by year-end. The labor displacement story hardens, and the pace is the part to watch.
What this week actually changed: Davos surfaced the executive-class framing on labor, chips, and bubble risk all at once, and the disagreement among the people actually building these systems became visible enough to plan against.
Davos week. The annual ritual of CEOs, finance ministers, and central bankers gathering in Switzerland to decide what the AI conversation will sound like for the next twelve months landed heavy this year. The four threads I keep pulling on (displacement, governance, distributed-vs-concentrated, and where sensitive data ends up) showed up in the Davos transcripts more directly than in any other week. The CEO statements wrote themselves into that framing. The week ending Sunday, January 25 is almost entirely the World Economic Forum (Jan 19-23), with the labor and energy narratives building through midweek and Amodei's chip-export comments dominating the geopolitical wire on Tuesday. It was quieter on the model-release side than recent weeks (most labs are saving their Q1 releases for February), which is why the policy talk has so much room to breathe.
Davos 2026 was the labor week. Per CNBC's wrap from Tuesday, the share of workers worried about AI-driven displacement jumped from 28% in 2024 to 40% in 2026. IMF managing director Kristalina Georgieva called the wave "like a tsunami" and said most countries and most businesses aren't prepared. Verizon CEO Dan Schulman and Microsoft president Brad Smith openly clashed on a panel: Schulman said widespread layoffs are inevitable; Smith said AI is fundamentally an upskilling tool. Both can be partially right; only one of those framings is what the executive class is actually acting on. Deutsche Bank's analysts flagged something I want to underline: by their count, AI contributed to roughly 55,000 US layoffs in 2025, and they expect the figure to grow materially in 2026.

The displacement is real and it's accelerating faster than I expected. The thing I keep coming back to is the pace. Short-term incentives are driving the rush: companies aren't cutting because the AI is ready, they're cutting because the AI narrative is convenient and the markets reward the cuts. Both halves matter. There's real productivity gain in narrow domains. There's also a wave of opportunistic cuts riding on top of it, and the gap between "92% increase in hiring for AI roles" and "the workers being laid off are not the workers being hired" tells you which one is louder right now. Longer version in the job security piece. I'm fine with AI in IT systems automation, that's been my career, and the displacement of repetitive systems work that should have been automated long ago is the appropriate kind. Human+AI collaboration is the sustainable working model. The firms that figure out the collaboration will outperform the firms that just cut. To be clear: the headcount still shrinks. The collaboration model just shrinks it less and shrinks it well.
Tuesday, January 20 was the geopolitical day. Anthropic CEO Dario Amodei told Bloomberg on the sidelines that the administration's decision to allow the sale of advanced AI chips (Nvidia H200s, AMD MI325Xs) to China is like "selling nuclear weapons to North Korea." Axios followed up with the policy detail: the Bureau of Industry and Security revised its licensing posture late last year, and the chips are now flowing. The framing is dramatic, and it is also roughly the position Amodei has held publicly for two years. Last year at Davos he warned about "1984 scenarios, or worse." This year he upgraded the metaphor. The argument is structurally the same: the US has a multi-year lead on training-grade compute, exporting closes the gap, and a closed gap means the regime the chips are sold to gets to set the terms of frontier AI deployment in its sphere. The metaphor is too hot and the underlying concern is correct. The "nuclear weapons to North Korea" comparison isn't useful; it overstates the immediate harm and ends conversations rather than starting them. Amodei knows that. The metaphor is for the press cycle, not the policy paper. The concern under it is one I'd take seriously. Frontier training compute is a strategic asset, and the companies that benefit most from being able to sell into China are also the ones whose competitive position is most threatened by the gap closing. The optimal policy from a US national-interest framing isn't the same as the optimal policy from an Nvidia revenue framing. Whatever you think of Anthropic, a frontier-lab CEO publicly arguing for tighter export controls against his own short-term commercial interest is worth noting.
Same Tuesday, Satya Nadella sat with BlackRock's Larry Fink on the Davos main stage and delivered the most pointed public bubble warning he's given. Quoted in Fortune: "A telltale sign of if it's a bubble would be if all we are talking about are the tech firms." Per CNBC he tied it to energy (energy costs will decide which countries win the AI race) and to the global adoption gap, comparing it to the early smartphone rollout. This is the most interesting Davos statement of the week from where I sit. Nadella is the CEO with the most to lose if the AI bubble pops, and he's the one publicly saying the bubble is real if the technology stays inside the Fortune 500 hyperscaler ecosystem. The strategic logic is clear: Microsoft's bet only pays out if AI deployment goes broad, into the mid-market, emerging markets, and the long tail of organizations that don't currently have a hyperscaler relationship. If it stays narrow, the capex doesn't pencil out, the multiple compresses, and the whole sector takes a hit. The "managers of infinite minds" line he delivered later is worth a separate note. It's a clean piece of executive framing for what AI-augmented work looks like, and it's also exactly the framing that lets the labor-displacement story be smoothed over. Everybody becomes a manager. Nobody mentions that the "people" being managed are software, that managing software requires a smaller team than managing people, and that "manager of infinite minds" describes maybe 10% of the actual jobs that exist. The metaphor is good. The implication is incomplete.
Wednesday, January 21, Musk told Davos that AI smarter than any human will arrive by the end of this year, no later than next, and that Tesla will start selling humanoid robots to the general public by end of 2027. Both claims are familiar; Musk has been making rolling versions of them for years. Noting for completeness. The AGI-by-year-end claim was wrong in 2024 and 2025 and will most likely be wrong in 2026, in the sense that the thing he calls AGI won't exist by December and the goalposts will move. The humanoid claim is more interesting because Tesla has shipped Optimus units for internal use, but "general public availability by end of 2027" is the kind of timeline that has historically slipped 18-24 months. The reason this matters is that Musk's framing influences public perception of where AI actually is, and that perception drives layoff, regulation, and investment decisions made by people who don't have time to verify the technical specifics. The gap between the actual capability frontier and the publicly believed capability frontier is one of the most important variables in this whole story, and Musk widens that gap on purpose.
Friday, January 23, the disagreement the public conversation should be tracking surfaced. Per Fortune, the late-week panel showed a clean split among the people who actually build these systems. Amodei: human-level intelligence soon. Hassabis: more distant. LeCun: more distant still, and current architectures don't get there. Three frontier-lab leaders, three different timelines, and the disagreement is genuine and substantive rather than performative. The public framing oscillates between "AGI imminent" and "AI is hype" because both are easy to write headlines about. The actual position of the people building this stuff is "it's somewhere on a five-to-twenty-year horizon, it depends on architectural questions that haven't been resolved, and reasonable experts disagree by an order of magnitude on the timeline." That's the honest read. Plan for the long tail. Don't make irreversible decisions on the assumption that the short tail is the central case.
Smaller items: Google released Gemma 3 in late January, an open-weights model family at 1B, 4B, 12B, and 27B sizes, with the 27B variant running on a single RTX 4090. Open-weights momentum continues; small models that fit on consumer hardware are eating the workloads that don't actually need frontier compute, which is most of them. The EU Digital Omnibus package is moving through Parliament, and the first August 2026 enforcement milestone is now genuinely in question. YouTube announced AI-likeness Shorts on January 23: creators can generate Shorts using AI versions of their own likenesses. It's the same biometric-data-into-hosted-AI pattern I flagged with the Sora app launch back in October. The frame is "creator empowerment." The transaction is "we get a permanent biometric of you." Mastercard and OpenAI/Microsoft announced agentic commerce integrations the same day: Mastercard Agent Pay into Copilot Checkout and ChatGPT Instant Checkout. The agent-driven shopping infrastructure is being built into the rails.
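The "27B on a single RTX 4090" claim is mostly quantization arithmetic, and it's worth seeing why it works. A back-of-envelope sketch (my own estimate, not Google's published numbers; the flat 2 GB overhead for KV cache and activations is an assumption):

```python
# Rough VRAM estimate for running a quantized open-weights model locally.
# Illustrative arithmetic only, not official Gemma 3 specifications.

def vram_gb(params_billion: float, bits_per_weight: float,
            overhead_gb: float = 2.0) -> float:
    """Weight footprint plus a flat allowance for KV cache and activations."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

RTX_4090_GB = 24

# 27B parameters at 4-bit quantization: ~13.5 GB of weights, which fits
# inside a 24 GB card even with runtime overhead.
print(vram_gb(27, 4), vram_gb(27, 4) < RTX_4090_GB)

# The same model at fp16 (~54 GB of weights) does not fit on one card,
# which is why the consumer-hardware story depends on quantization.
print(vram_gb(27, 16), vram_gb(27, 16) < RTX_4090_GB)
```

The general point: most of the "small models eat the non-frontier workloads" story reduces to this kind of arithmetic, and it's why the 27B size point, not the 1B one, is the interesting line in the release.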
What to watch next week: the post-Davos hangover, the first ChatGPT-ads incident reports, and whether any state passes a new AI law in the face of the DOJ task force. The pattern from the week: the governance ceiling moved. SB 53 is law, the EU's GPAI obligations are live, the transparency-and-reporting shift is now the operating model, and the framework I argued for in the spring is what most jurisdictions are converging on, faster than I expected. The hosted-frontier bet doubled down and the alternative got more credible at the same time: open-weights models from Llama 4, Mistral Large 3, Gemma 3, and the Chinese labs got good enough that running serious workloads on local infrastructure is now a real engineering choice. The vendor-lock-in question got more urgent and the alternative got more achievable, both at once. The labor displacement is real and the pace is the problem. And the bubble question is real and the timing is unknowable. Nadella's Davos line is the cleanest articulation of where the actual risk is.
The pattern that keeps holding week to week: the news cycle moves about three times faster than the underlying technology, and the executive framing of any given week is almost always 6-12 months ahead of the deployment reality. That gap is where most of the bad decisions get made: the layoffs, the capex commitments, the regulatory rushes. The four threads I keep pulling on (displacement, sensitive data in public AI, governance, distributed-vs-concentrated) aren't predictions. They're principles, and the news keeps giving me new reasons to hold them.