AI in the news: week of November 30, 2025
Thanksgiving week, but Anthropic shipped Claude Opus 4.5, with Chrome and Excel integrations, on Monday. The EU softens the AI Act. Black Friday becomes the first AI-native shopping holiday. Chinese models hit 15% of global AI share. What I make of a quieter-than-it-felt week.
What this week actually changed: the Opus 4.5 price cut reset the frontier-model market, and AI-driven Black Friday made every model provider a behavioral-data broker without anyone calling it that.
Thanksgiving week is usually dead. This year it wasn't, because on November 24 Anthropic shipped Claude Opus 4.5, with Chrome and Excel integrations going broadly available the same day. The model scored 80.9% on SWE-bench Verified, the first frontier model to clear 80%, ahead of Gemini 3 Pro (76.2%) and GPT-5.1 (76.3%). The benchmark is fine. The price cut is the actual story. Opus dropped to $5/$25 per million input/output tokens, roughly a third of what 4.1 cost, which puts it in the price band where it competes with Sonnet for default-tier work rather than only for the hard problems. The honest read: Gemini 3 and GPT-5.1 got good enough that the Opus premium became a tax most customers wouldn't pay, and Anthropic chose market share over the margin story. I've been running Opus 4.5 in Claude Code this week, and the coding gains are real.
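To make the price cut concrete, here's a back-of-envelope comparison at the per-million-token prices above. The monthly token volumes are hypothetical, picked only for illustration, and the Opus 4.1 ($15/$75) and Sonnet-tier ($3/$15) prices are my assumptions for context, not figures from this post.

```python
def monthly_cost(in_mtok: float, out_mtok: float, in_price: float, out_price: float) -> float:
    """Dollar cost given token volumes (in millions) and $/MTok prices."""
    return in_mtok * in_price + out_mtok * out_price

# Hypothetical coding-assistant workload: 400M input, 80M output tokens/month.
opus_45 = monthly_cost(400, 80, 5, 25)    # new Opus 4.5 pricing from the post
opus_41 = monthly_cost(400, 80, 15, 75)   # assumed prior Opus 4.1 pricing
sonnet  = monthly_cost(400, 80, 3, 15)    # assumed Sonnet-tier pricing

print(f"Opus 4.5: ${opus_45:,.0f}  Opus 4.1: ${opus_41:,.0f}  Sonnet: ${sonnet:,.0f}")
```

At these assumed volumes, Opus 4.5 lands closer to the Sonnet bill than to the old Opus bill, which is the whole "default-tier" argument in one line of arithmetic.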
The Chrome integration matters more for what it implies than for what it ships. Claude for Chrome is broadly available to Max users now: the model can act on tabs, fill forms, and navigate sites. That's the same pattern OpenAI is running with the Apps SDK and Google with Gemini-in-Workspace: the model wants to live inside the apps you already use. The thing that gives me pause is the data flow. Running Claude as an agent in Chrome means the model reads the contents of every tab the agent touches. The reflexive enterprise question is which tabs the agent can see and where that data goes, and more orgs need to ask it before rolling Claude-in-Chrome across the workforce. Keeping sensitive data out of public AI was already hard when the surface was a chat window. It's much harder when the surface is the browser session you also use for banking, your CRM, and your medical portal. Worth noting from the system card: Anthropic claims the strongest prompt-injection defenses of any frontier model. That matters specifically because an agent browsing arbitrary web content has to be hard to subvert.
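The "what tabs can the agent see" question is answerable with a plain allowlist-plus-blocklist policy, however the enforcement point ends up being exposed. This is a minimal sketch of that policy shape; the domain names and patterns are hypothetical, and no real Claude-for-Chrome policy API is assumed here.

```python
from urllib.parse import urlparse

# Hypothetical org policy: agent may only read explicitly approved hosts,
# and certain sensitive categories are blocked outright.
ALLOWED_DOMAINS = {"jira.example.com", "docs.example.com"}
BLOCKED_PATTERNS = ("bank", "health", "payroll")

def agent_may_read(url: str) -> bool:
    """Return True only if the tab's host is allowlisted and not sensitive."""
    host = urlparse(url).hostname or ""
    if any(pattern in host for pattern in BLOCKED_PATTERNS):
        return False
    return host in ALLOWED_DOMAINS

print(agent_may_read("https://jira.example.com/browse/ENG-1"))  # True
print(agent_may_read("https://payroll.example.com/login"))      # False
```

Default-deny is the important design choice: an unknown host returns False, which is the posture you want when the agent, not the user, decides where to navigate next.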
The holiday-shopping numbers landed and they're the under-the-radar story of the week. AI-driven traffic to US retail sites was up 805% year-over-year on Black Friday. 33% of holiday shoppers said they planned to use AI to shop, double the prior year. Amazon's Rufus chatbot surged. The Walmart-ChatGPT integration from October had its first holiday peak. "AI is now load-bearing for consumer shopping" is true and uncontroversial. The thing under that headline I want to flag is, again, data flow. When a user asks ChatGPT to find them the best deal on a stroller, the model now holds the stated needs, the price sensitivity, the demographic context, the brand preferences, and eventually the purchase via Instant Checkout. That's a richer behavioral profile than any single retailer has ever had. The retailers used to own this data because the customer transacted with them. The chat layer moves ownership upstream: OpenAI now sits between customer and store, collecting the higher-resolution signal. It's the same shift Google ran on retail in 2005 and the App Store ran on mobile in 2010, with the same consequences. The consumer-side concern is the PII problem at scale: shopping queries reveal pregnancies, divorces, illnesses, financial stress, religious observance. ChatGPT now has all of this for hundreds of millions of people, and the disclosures are exactly as vague as you'd expect.
The regulatory side moved in the opposite direction. On November 19 the European Commission issued its Digital Omnibus proposal on AI regulation, which would delay high-risk AI rules by up to 16 months and extend SME exemptions. The Commission's framing is "reduce the regulatory burden by 25-35% to strengthen EU competitiveness." The AI-safety community's read is "the EU just blinked." I'm somewhere in the middle. The high-risk timeline was set before anyone (regulators included) knew what compliance machinery would actually look like. The harmonised standards aren't ready, the enforcement bodies aren't fully staffed, and enforcing rules against orgs that have no way to demonstrate compliance is bad regulation. What I don't love is the framing. "Reduce the regulatory burden" is the lobby's language, and once that framing sticks, the next round of softening amendments will land more easily than it should. The cross-Atlantic comparison is the part to watch: SB 53 takes effect January 1, 2026, so California is pulling forward while Europe pushes back. The global baseline I expected by mid-2026 is less settled now. Governance is still the work.
A TrendForce analysis this week put Chinese AI models at roughly 15% of global share in November, up from about 1% a year earlier. DeepSeek-R1, Qwen with 700M+ Hugging Face downloads, Moonshot's Kimi, and others all available under permissive licenses. Stanford HAI's piece on the Chinese open-weight ecosystem is worth reading on the policy implications. Whatever you think of the politics, the practical effect is that any developer can now run a near-frontier model on their own hardware without paying a US lab. That's a significant change to competitive structure that the Sonnet-vs-GPT-vs-Gemini coverage tends to under-weight. Open weights are doing for AI what Linux did for server OSes in the 2000s, forcing the proprietary players to compete on integration and polish rather than capability gating. Small models that punch above their weight used to be the niche story. It's becoming a structural force.
Two smaller items worth flagging. The Information reported a $200B Anthropic-Google Cloud commitment: five years, 5GW of TPU capacity, beginning in 2027. Neither company confirmed the figure. If accurate, that's over 40% of Google Cloud's revenue backlog. The compute-deal-as-strategic-commitment pattern is the consolidation story under the consolidation story. And reports surfaced that Sam Altman declared an internal "code red" for ChatGPT, pausing peripheral launches to refocus on core performance while OpenAI plans to nearly double headcount to 8,000 by year-end. Slowing product cadence while hiring aggressively is a "we're under pressure from Claude and Gemini" signal. The frontier labs growing while their customers cut headcount on the AI rationale is value capture concentrating where you'd expect.
What to watch next week: first weekend of December, year-end retrospectives starting to land, whether anything material drops in the last gap before the holiday break. The pattern I'd hold onto: the data-flow story is escalating faster than the governance story, and the gap is going to keep widening through Q1. Keep sensitive data out of hosted services, keep the option to switch providers open, and don't wait for the regulator to catch up before you make your own choices.
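The "keep the option to switch providers open" advice above is mostly an architecture decision: route every model call through one thin adapter layer so the vendor choice lives in exactly one place. A minimal sketch follows; every class and function name here is hypothetical, and the real SDK calls would replace the stub bodies.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Reply:
    text: str
    provider: str

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> Reply: ...

class AnthropicAdapter:
    def complete(self, prompt: str) -> Reply:
        # A real implementation would call the Anthropic Messages API here.
        return Reply(text=f"[anthropic] {prompt}", provider="anthropic")

class OpenAIAdapter:
    def complete(self, prompt: str) -> Reply:
        # A real implementation would call the OpenAI API here.
        return Reply(text=f"[openai] {prompt}", provider="openai")

PROVIDERS: dict[str, ChatProvider] = {
    "anthropic": AnthropicAdapter(),
    "openai": OpenAIAdapter(),
}

def ask(prompt: str, provider: str = "anthropic") -> Reply:
    # The only place a vendor name appears; switching is a one-string change.
    return PROVIDERS[provider].complete(prompt)

print(ask("summarize this week").provider)
```

The point isn't the wrapper itself; it's that when a price cut like this week's lands, the cost of acting on it is a config change instead of a migration project.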