AI in the news: week of December 14, 2025

Trump signs an executive order to preempt state AI laws. OpenAI rushes GPT-5.2 out under a self-declared code red. Disney puts a billion into OpenAI and licenses 200 characters into Sora. Accenture pledges 30,000 Claude practitioners. The week the AI story stopped being about models.

What this week actually changed: federal AI policy openly tried to roll back state AI policy, OpenAI shipped a real capability jump under "code red," and the capital structure of frontier AI consolidated another step through Disney and Accenture.

Week eleven of the Sunday roundup, and the headline AI story wasn't a model release. Models did ship. GPT-5.2 landed midweek under what OpenAI's own memo called a "code red," but the center of gravity moved to governance and to deals. A federal executive order trying to preempt state AI laws. A billion-dollar Disney investment in OpenAI tied to a Sora character-licensing deal. A multi-year Accenture-Anthropic partnership that puts 30,000 Claude-trained consultants into the enterprise market. The week AI policy and AI capital both got loud.

On December 11 the President signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence", polite framing for "stop the states from regulating AI." The order directs the Attorney General to stand up an AI Litigation Task Force within 30 days to challenge state AI laws on commerce-clause and preemption theories, instructs the FCC to consider a federal disclosure standard that would preempt conflicting state rules, tells the FTC to publish guidance on when state laws "requiring alterations to truthful AI outputs" run afoul of the FTC Act, and threatens to cut Broadband Equity, Access, and Deployment (BEAD) funding to states whose AI laws are deemed in conflict. The order names California's SB 53 and Colorado's AI Act as examples. SB 53, the bill I covered favorably in week one, is now a federal target two months before its January 1 effective date.

I'm not in favor of this order, and it's worth being specific about why. The order doesn't actually preempt anything on its own; only Congress or a court can do that, and several legal commentators have flagged that it will face significant hurdles. State laws remain enforceable while litigation runs. So in the short term it's theater. But it's theater with consequences. The signal to AI labs is "we will fight on your behalf if you don't want to comply with state transparency requirements." The signal to states is "regulating AI will cost you federal money." The policy I want is the opposite. I want states to keep building the governance scaffolding, because the federal version isn't coming and probably wouldn't be the right shape if it did. SB 53's transparency-and-reporting framework is the right starting point. Colorado's law is reasonable. States experimenting and the best models converging into federal law is how this is supposed to work. Preempting it before it happens is the wrong intervention. The things to watch: whether the Task Force actually files in Q1, whether states amend their laws to evade the preemption claim, and whether any state's BEAD money actually gets withheld. Most directives have a 90-day clock, so we'll know by early March whether this becomes a real fight or stays a press release.

Same day, OpenAI released GPT-5.2. Instant, Thinking, Pro, API on day one. The headline: GPT-5.2 Thinking beats or ties human experts on 70.9% of GDPval knowledge-work tasks, up from 38.8% on GPT-5.1. GDPval covers 44 occupations across the top nine US-GDP industries, with real work products like sales decks, accounting spreadsheets, and urgent-care schedules. A meaningful capability jump. The release was reportedly accelerated; multiple outlets confirmed Sam Altman's internal "code red" memo after Gemini 3 Pro put Google in front, and GPT-5.2 shifted from late December to December 11 to close the gap. The companion GPT-5.2-Codex release targets long-horizon agentic coding, context compaction, and stronger cybersecurity, the bits that matter for agents that run for hours rather than minutes.

The model is good. The benchmark numbers are real. GPT-5.2 Thinking is now the model I'd reach for first on hard reasoning, and the Codex variant will land well with the agentic-coding crowd. Worth saying that plainly before the critique. The critique is the GDPval framing. "Outperforms industry professionals on 70.9% of knowledge work tasks" is the line that got pulled into every executive deck this week, and it's the line that'll get cited the next time a CFO is deciding whether to cut a team. This is the labor narrative I keep coming back to. GDPval measures "well-specified" tasks. The work professionals do is mostly not well-specified. It's the badly-specified, ambiguous, context-dependent middle of the job that the headline number doesn't measure. The 70.9% is real for the slice it measures and misleading for the slice it doesn't, and the misleading slice is the one that matters for headcount.

December 11 was the day that wouldn't quit. OpenAI and Disney announced a multi-year licensing and equity deal: Disney makes a $1 billion equity investment in OpenAI, gets warrants for additional equity, and licenses more than 200 characters from Disney, Marvel, Pixar, and Star Wars into Sora for user-generated short-form video. Disney also commits to using ChatGPT internally and OpenAI APIs across products including Disney+. Three-year licensing, with exclusivity for the first year before opening to other AI platforms. Talent likenesses and voices explicitly excluded. This is the deal that resolves the Sora copyright fight that's been building since the Sora 2 launch in late September. The answer: pay them, and bring them inside the tent. A billion-dollar equity check makes the IP holder also an investor, which substantially changes the alignment for the next round of disputes. The strategic read is that this is the template. Universal, Warner, Sony, every IP catalog of consequence is going to have a similar deal on the table, and the one-year exclusivity is the rate-limiter on how fast OpenAI consolidates.

December 9, the same week. Accenture and Anthropic announced a multi-year strategic partnership that puts approximately 30,000 Accenture professionals through Claude training and stands up the "Accenture Anthropic Business Group" as a dedicated commercial unit. This is the enterprise-services pincer move on Claude I've been expecting. Anthropic's pitch all year has been "Claude as the foundation for enterprise agents." The thing missing was the consultancy layer that puts those agents into Fortune 500 environments. Accenture is that layer. 30,000 Claude-trained consultants billing Fortune 500 clients is, structurally, the largest commercial mobilization of a single AI vendor's stack so far this year. The strategic concern is the vendor-lock-in pattern. When the consultancy helping you stand up your AI strategy has 30,000 people trained on one vendor's stack, the architectural recommendations skew that direction regardless of whether it's the right answer for your shape of problem. The 2010-era cloud parallel is exact: by 2014 every Big-Four consultant was "AWS-trained" and the architectural defaults shifted to AWS for reasons that had nothing to do with comparing platforms. The Claude version of that is now being seeded. I like Claude. I run Sonnet 4.5 daily. The thing I want preserved is optionality, that an enterprise can rationally pick Claude for some workloads, Llama for others, a local model for the regulated ones, and switch when the calculus changes. Consultancies trained on one vendor's stack erode that structurally, even when nobody intends it.

Smaller items: NVIDIA debuted the Nemotron 3 family of open models, open weights, multiple sizes, optimized for agentic and reasoning workloads. AWS re:Invent wrapped early in the week with three new "frontier agents": Kiro for development, plus security and DevOps agents. The agentic-platform pitch from every hyperscaler is now identical in shape. The EU's first draft Code of Practice on AI-generated content marking was published December 17; provenance and labelling rules are the next regulatory surface. Accenture announced ~11,000 layoffs as part of an AI-adoption restructuring, the same week as the Anthropic partnership. CEO Julie Sweet's framing: "those we cannot reskill will be exited." The juxtaposition is instructive. And a December survey of 1,000 hiring managers found that 59% admit they emphasize AI in layoff announcements because it "plays better with stakeholders" than admitting financial constraints. That's the incentive-driven narrative laundering I keep coming back to. The underlying displacement is real and accelerating. The framing is just dishonest about why.

What to watch next week: GPT-5.2 in production usage, the Code of Practice on AI-content labelling, whatever the AI Litigation Task Force does in its first week. The pattern this week: federal AI policy is now actively trying to prevent state AI policy, the capital structure of frontier AI consolidated by another step through Disney and Accenture, and the labor displacement is real and the framing around it is dishonest. Accenture laying off 11,000 the same week it's training 30,000 in Claude is the picture in a single frame.