AI in the news: week of November 23, 2025

Microsoft Ignite reframes the stack around agents. Google ships Gemini 3 and Antigravity. The $45B Microsoft-Nvidia-Anthropic circular deal. Nvidia prints another record. Brussels proposes an AI Act delay. UPS cuts 48k jobs. A loud week.

What this week actually changed: centralization is now winning the architecture argument at every layer of the stack, and the policy machinery is starting to wobble in response. Microsoft Ignite reframed essentially the entire Microsoft stack around agents. Google shipped Gemini 3 and Antigravity the same day Ignite opened. Microsoft, Nvidia, and Anthropic announced a $45B circular partnership. Nvidia posted another blowout quarter. The European Commission proposed delaying parts of the AI Act. UPS announced 48,000 job cuts and waved at automation. Pick a thread, you can write a column on it.

I'm going to focus on what the week tells us about concentration, because that's the through-line. Two hyperscalers staking the agent layer at almost the same hour, the three biggest names in frontier AI fusing their balance sheets, the second-biggest logistics employer in the country citing AI as cover for the largest single layoff of the year.

Microsoft Ignite: the Frontier Firm reframe

Ignite ran November 18-21, and the volume of announcements was deliberate. Microsoft's framing for the conference was the "Frontier Firm": every employee paired with agents, every workflow re-platformed on Copilot, every system of record exposed to the agent layer. The product announcements that mattered:

  • Work IQ, Foundry IQ, Fabric IQ. New "intelligence layers" that give agents persistent context about the user, the data, and the business. Translation: Microsoft now indexes your work, your tenant, and your warehouse and sells it back to your agents as the foundation.
  • Office app agents in chat plus Agent Mode in the apps. Word, Excel, and PowerPoint now run agents both as conversational entities and as embedded in-document workers.
  • Agent 365. The control plane for agents: identity, governance, telemetry, and lifecycle for every agent in the tenant, regardless of whether it was built in Copilot Studio, Foundry, an open-source framework, or a third-party tool. Adobe, Manus, SAP, ServiceNow, and Workday are all enrolling.
  • Microsoft Agent Factory. Build in Foundry or Copilot Studio, deploy anywhere, single metered plan.
  • Microsoft 365 Copilot Business at $21/user/month for SMBs under 300 seats, December launch.
  • Windows Copilot updates with deeper OS integration of Agent Mode.

The Azure side of Ignite was a parallel set of announcements: Foundry expansion; the model router that picks between OpenAI, Anthropic, and Llama models based on cost and performance; and Anthropic-on-Azure availability (Sonnet 4.5, Opus 4.1, and Haiku 4.5 in Foundry).
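A cost/quality model router of the kind Foundry describes can be sketched in a few lines. Everything here is illustrative: the model names, prices, and quality scores are hypothetical stand-ins, not Microsoft's actual catalog or routing logic.

```python
# Hypothetical sketch of a cost/quality model router. Names, prices,
# and scores are illustrative, not a real catalog.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_mtok: float   # illustrative $ per million tokens
    quality: float         # illustrative 0-1 benchmark score

CATALOG = [
    Model("gpt-frontier", 10.0, 0.95),
    Model("claude-frontier", 9.0, 0.94),
    Model("llama-open", 1.0, 0.80),
]

def route(min_quality: float) -> Model:
    """Pick the cheapest model that clears the quality bar."""
    eligible = [m for m in CATALOG if m.quality >= min_quality]
    if not eligible:
        raise ValueError("no model clears the bar")
    return min(eligible, key=lambda m: m.cost_per_mtok)

# Easy tasks route to the cheap model; hard tasks to a frontier one.
print(route(0.75).name)  # llama-open
print(route(0.90).name)  # claude-frontier
```

The point of the sketch is that routing is a commodity mechanism; the leverage is in who owns the catalog, the prices, and the telemetry that feeds the quality scores.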

Here's the read I want to push. Ignite 2025 was Microsoft's most aggressive concentration play since Office 365. The pitch is "we'll be the foundation for every agent in your business," and the underlying architecture is one identity plane, one governance plane, one billing plane, one set of intelligence layers, all sitting on Azure with model choice as the only competitive surface left to the customer. That's not model choice in any meaningful sense, that's choice within the cage.

I'm not against agents. I'm against the assumption that the agent layer should belong to the same company that owns your identity, your documents, your CRM connector, your warehouse connector, and the agent control plane. That's not "AI for your business." That's vendor lock-in at a scale that makes 2010-era cloud lock-in look quaint. The Frontier Firm framing is doing a lot of work to obscure how much surface area Microsoft is asking enterprises to surrender in exchange for an agent UX they could build themselves with an MCP-shaped architecture (MCP is the Model Context Protocol) and a model endpoint of their choosing.

The Agent 365 announcement is the one I'd actually celebrate if it were positioned differently. A control plane for agents (identity, audit, lifecycle) is the right primitive. Every enterprise needs it. The problem is that Agent 365 is structurally a Microsoft product with Microsoft tenancy assumptions. The right shape is an open standard that runs the same regardless of where the agents live. We don't have that yet. Agent 365 will probably become the de facto version because Microsoft ships first, which is the same dynamic that made Active Directory inevitable in 2003 and which we spent the next twenty years working around.

Gemini 3 and Antigravity, same day

Google launched Gemini 3 on November 18, the same day Ignite opened, which was clearly intentional. Gemini 3 Pro is the new flagship: 1M-token context, multimodal in/out, Deep Think mode, 1501 Elo on LMArena (top of the leaderboard), 76.2% on SWE-bench Verified. Day-one availability across Search, the Gemini app, AI Studio, Vertex AI, and the Gemini CLI, the first time Google has shipped a flagship model into Search on launch day rather than over a six-week rollout.

Same announcement, same hour: Google Antigravity, an agent-first IDE that lets developers delegate end-to-end coding tasks to autonomous agents with direct access to the editor, terminal, and browser. The framing is "your agents work in parallel while you supervise." It's an agentic IDE built on Gemini 3.

Two reads. The model is genuinely competitive (top of LMArena, real SWE-bench numbers, the Deep Think mode is novel), and Google has clearly closed the model-quality gap with OpenAI and Anthropic at the frontier. The bigger story is the day-one Search integration. Gemini in Search at launch means the deployment surface for Gemini 3 isn't a developer API audience. It's the entire Google query stream. That's a competitive dynamic neither Anthropic nor OpenAI can match.

Antigravity is the more interesting strategic move. Google saw what GitHub Copilot and Cursor and Claude Code became and decided "we need our own end-to-end agentic IDE, not a plugin." It's a plain bet that the IDE is the foundation developers will live in for agentic work, not the chat window. That bet might be right (I spend most days in Claude Code rather than ChatGPT for exactly this reason) but the concentration concern is the same as Microsoft's. Antigravity wants to own the IDE, the model, the cloud, the agent runtime, and the deployment target. Same shape, different vendor. The principled position remains: agentic coding is the right direction, but the architecture should let you swap the model out without rewriting your workflow. MCP is doing the work of making that possible at the protocol layer. The IDEs that win in three years will be the ones that took the lock-in question seriously now.
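The "swap the model without rewriting your workflow" architecture argued for above is worth making concrete. The sketch below is a hypothetical minimal version: the provider classes and their outputs are invented stand-ins, not real SDK calls. In practice MCP plays the protocol role, and the provider behind the interface becomes configuration rather than code.

```python
# Minimal sketch of a vendor-neutral model interface. VendorA/VendorB
# are hypothetical stand-ins, not real provider SDKs.
from typing import Protocol

class ModelEndpoint(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def run_workflow(model: ModelEndpoint, task: str) -> str:
    # The workflow depends only on the interface, so swapping the
    # provider is a config change, not a rewrite.
    return model.complete(f"summarize: {task}")

print(run_workflow(VendorA(), "ignite recap"))  # [vendor-a] summarize: ignite recap
print(run_workflow(VendorB(), "ignite recap"))  # [vendor-b] summarize: ignite recap
```

The design choice being tested in the IDE wars is exactly whether `run_workflow` belongs to you or to the vendor.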

The $45B circular trade: Microsoft, Nvidia, Anthropic

This one deserves its own section. Announced at Ignite: Microsoft invests $5B in Anthropic, Nvidia invests $10B in Anthropic, and Anthropic commits to $30B of Azure compute powered by Nvidia hardware, with options for up to 1 GW of additional capacity. Net effect: $15B flows from Microsoft and Nvidia into Anthropic; $30B flows back from Anthropic to Microsoft (for Azure) and indirectly to Nvidia (for the chips Azure runs on).

This is a circular financing arrangement. The cash makes a loop. Microsoft gets a "neutral" frontier-model partner on Azure to balance OpenAI. Nvidia gets a $30B forward purchase commitment booked against its capacity. Anthropic gets a war chest and unlocked capacity at a moment when it would otherwise be GPU-constrained. Everybody wins, at the level of the participants. The question is whether the broader market is reading this as three independent companies making independent capital allocation decisions, or as a coordinated balance-sheet maneuver to lock in market position before the next correction.
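The loop is easy to net out from the headline numbers. One caveat on the arithmetic below: equity investments and multi-year compute commitments aren't the same kind of dollar, and the Nvidia leg is a simplifying assumption (the announcement implies, but doesn't itemize, that Anthropic's Azure spend flows through to Nvidia hardware).

```python
# Netting the announced headline flows of the deal ($B). Equity and
# compute commitments are treated as equivalent dollars here, which is
# a simplification for illustration.
flows = [
    ("microsoft", "anthropic", 5),    # $5B equity investment
    ("nvidia", "anthropic", 10),      # $10B equity investment
    ("anthropic", "microsoft", 30),   # $30B Azure compute commitment
]

net = {}
for src, dst, amount in flows:
    net[src] = net.get(src, 0) - amount
    net[dst] = net.get(dst, 0) + amount

print(net)  # {'microsoft': 25, 'anthropic': -15, 'nvidia': -10}
```

On this crude netting, Anthropic is a net payer over the commitment period; the "war chest" framing only works because the $15B lands now and the $30B is spent over years.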

I think it's mostly the second. The participants are increasingly coupled in a way that makes "Anthropic on Azure with Nvidia chips" structurally analogous to "OpenAI on Azure with Nvidia chips", same hardware, same hyperscaler, different model brand. The model-diversity story Microsoft is telling at Ignite ("OpenAI and Anthropic, you choose") is real at the API surface and increasingly fictional at the supply chain. If you're an enterprise picking between Claude Opus 4.5 and GPT-5 on Azure, your data goes to the same datacenters running on the same chips paid for by the same circular capital flows. That's not the model-choice diversity that the marketing implies. Worth flagging: Anthropic shipped Claude Opus 4.5 the day after this roundup window closes (Nov 24), and it's available in Microsoft Foundry on day one. I'll cover the model itself next week.

Nvidia prints another quarter

Nvidia reported Q3 FY26 earnings on November 19: $57.0B revenue, up 62% YoY. Data center revenue was $51B, up 66% YoY, with a Q4 guide of $65B. Jensen Huang on the call: "Blackwell sales are off the charts, and cloud GPUs are sold out." The forward-looking number that turned heads: $500B of visibility on advanced chip spending over the next 14 months, and a stated belief in $3-4 trillion of annual AI infrastructure spending by the end of the decade.

Take the trillions number with the appropriate amount of salt. It's a chief executive setting expectations for a CapEx supercycle, not a forecast in any defensible sense. The $500B number is more credible because it's based on contracted demand the company can actually see. Either way, the print is the print: Nvidia continues to compound at a rate that should not be possible at this scale. The data center segment alone is now larger than the entire company was 18 months ago.
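A quick back-of-envelope check on the growth figures in the print: a 62% YoY rise to $57.0B implies roughly $35B of total revenue a year ago, and 66% growth to $51B implies roughly $31B of data-center revenue a year ago.

```python
# Back out year-ago figures implied by the reported YoY growth rates ($B).
def year_ago(current: float, yoy_growth: float) -> float:
    return current / (1 + yoy_growth)

total_prior = year_ago(57.0, 0.62)   # ~35.2
dc_prior = year_ago(51.0, 0.66)      # ~30.7
print(round(total_prior, 1), round(dc_prior, 1))
```

In other words, today's data-center segment alone exceeds the implied total company of a year ago, which is the compounding claim made above in miniature.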

The thing I'd flag is the supply-chain implication. If Nvidia is sold out of Blackwell and the visible demand is $500B over 14 months, GPU access will continue to be the rate-limiter for everyone who isn't a hyperscaler. Which is exactly the dynamic that pushes mid-market and enterprise customers toward "let Microsoft or Google host the model for you," because they can get GPU capacity and you can't. The compute concentration story and the agent-platform concentration story are the same story viewed from different angles.

EU AI Act: Brussels proposes a delay

On November 19, the European Commission published the Digital Omnibus on AI, a proposal to simplify and partially delay implementation of the AI Act ahead of the August 2026 full-application deadline. The headline changes:

  • Align high-risk AI rules with the actual availability of technical standards (which are running late).
  • Centralize supervisory authority over general-purpose AI and very-large-platform-embedded AI under the EU AI Office.
  • Introduce an EU-level regulatory sandbox alongside the national ones.
  • Clarify the conformity-assessment interplay between the AI Act and existing product regulation.

The framing from the Commission is "reality check": the Act's compliance machinery isn't ready, the standards aren't published, and forcing a hard August 2026 cutover would create chaos. The framing from a chunk of the AI-policy community is "industry lobbying succeeded in pushing the deadlines back." Both framings are partly right. The standards genuinely aren't ready, and forcing compliance against unwritten standards would be bad policy. The lobbying genuinely happened, and the resulting proposal does soften some of the obligations the labs were most worried about. The honest read is that the EU AI Act is having its 2024 GDPR-implementation moment: the regulation is on the books, the enforcement machinery is years behind, and the politics of who gets supervised by whom is being relitigated.

I'm cautiously fine with this proposal. Linking compliance to actual standard availability is correct. Centralizing GPAI supervision under the EU AI Office (rather than 27 national regulators) is correct. The sandbox addition is correct. What I'd watch is whether the "simplification" creeps into substantive obligations: the foundation of good AI governance is auditability and reporting, and if the omnibus quietly weakens those, then it's not a delay, it's a retreat. The convergence question I flagged in week one is now urgent: the EU is re-opening its rulebook, California's SB 53 takes effect January 1, and the federal US conversation is incoherent. The window for international convergence on transparency-and-reporting standards is narrower than it looked six weeks ago.

The labor story keeps building

UPS announced job cuts of approximately 48,000 employees under its "Network of the Future" initiative, citing automation and AI-enabled logistics as the enabler. McKinsey laid off ~200 internal technology and support staff after automating non-client-facing work. IBM cut 1% globally (~3,000). HP signaled 4,000-6,000 cuts. Total US layoff announcements in November: 71,321, with AI cited as the driver for over 6,000 of them. That's a lower bound, because most cuts that are AI-rationalized internally don't carry that label in the press release.

UPS is the one I want to dwell on. 48,000 people. The largest single corporate layoff announcement of 2025. I want to be straight about my position. The displacement is real and it's accelerating faster than I expected. Logistics automation specifically is the kind of automation I've spent my career building variants of in IT and infrastructure: route optimization, sortation robotics, and AI-driven scheduling are doing real work, and they will continue to displace roles in this category. The honest read is not "AI didn't do it." The honest read is that AI and logistics automation are doing some of it, and the rest is a contraction (Amazon insourcing, post-pandemic normalization) that the AI framing is being layered on top of. Both are happening at once.

The thing I keep coming back to is the pace. Short-term incentives are driving the rush. UPS isn't cutting 48,000 people because the automation is fully ready to take on the work tomorrow; it's cutting because the markets reward the AI-and-automation narrative right now and there's a transition window where leadership can move that hard without immediate consequence. The pace is the issue, not the direction. There's a version of this transition where the same automation gets deployed over five years with retraining, attrition, and human+AI collaboration absorbing most of the displacement, and the work product is better at the end of it. The companies that figure out the collaboration shape will outperform the ones optimizing for the cut. But to be clear: the headcount still shrinks in the collaboration model. It just shrinks less and shrinks well.

The 2025 total is now north of 1.17 million job cuts announced, the highest since the 2020 pandemic. AI was cited in roughly 50,000 of them, and the number has been growing month over month. Logistics automation displacing routing and sortation roles is a category I'm fine with, even if the pace is the problem. The fuller version of where the lines sit is in the job-security piece; UPS isn't where that fight gets fought, but the news cycle keeps producing the cases where it should. I'd rather be wrong about the pace than be caught off guard by it.

Smaller items

xAI Grok 4.1 shipped earlier in November. Real capability bump, similar release tempo to the rest of the frontier labs. The Grok product remains the most ideologically loaded of the frontier offerings, which colors how it gets used in production. OpenAI had a quiet week by their standards, no major launches, but the Mixpanel breach hit on Nov 27 and exposed customer profile data for the API portal. The breach itself is a reminder of how the data-handling surface area expands with every integration, even when the AI vendor is doing everything right: an analytics partner gets popped and the data lands somewhere it shouldn't. Anthropic Claude Opus 4.5 drops November 24 (next week's roundup), with day-one availability in Microsoft Foundry, GitHub Copilot, and Microsoft 365 Copilot.

What this week tells me

Three takeaways. Concentration is winning the architecture argument right now. Microsoft's Ignite, Google's Gemini 3 + Antigravity, the $45B circular trade, the Nvidia visibility number, all four point at a market where the agent stack consolidates onto two or three hyperscaler-scale platforms with the model layer reduced to a SKU choice. The principled-distributed-AI position got harder to argue this week, not easier. Which means it matters more.

The model-diversity narrative is increasingly fictional at the infrastructure layer. Anthropic on Azure on Nvidia is OpenAI on Azure on Nvidia is Llama on Azure on Nvidia. The supply-chain stack collapsed even as the model selection menu expanded. If the data-residency, vendor-independence, and lock-in-avoidance reasons for picking a model mattered to you, the menu just got smaller in the ways that count.

And governance is in a reactive posture while the labor story accelerates. The EU is rewriting its own rulebook to keep up. The US has SB 53 and not much else. UPS, HP, IBM, McKinsey announced cuts in a single month, and the displacement is real, faster than I expected, driven harder by short-term incentives and market reward than by the underlying readiness of the automation. Both stories (governance lag and labor pace) are downstream of the concentration story. When the agent layer belongs to two companies, regulating it gets simpler in some ways and politically harder in others, and the pace of cuts gets set by what the market rewards rather than by what the workforce can absorb. Next Sunday: Claude Opus 4.5 actuals, the OpenAI breach fallout, whatever post-Ignite reality check lands midweek.

Sources