AI in the news: week of January 11, 2026

CES 2026 week. NVIDIA unveils Rubin and the DGX Spark/Station desktop AI line, AMD ships Ryzen AI 400 with a 60-TOPS NPU, Qualcomm pushes Snapdragon X2 to 80 TOPS, Samsung wires Bixby and Alexa+ into every appliance, and Mercedes ships NVIDIA-powered L2 in the CLA. My take.

What this week actually changed: CES 2026 made local AI infrastructure the default sales pitch, and the same show dressed up consumer surveillance as "AI companions" with cameras in your kitchen.

CES 2026 week, Tuesday through Friday in Vegas, with AMD's keynote on Monday evening as the curtain-raiser and NVIDIA's special address Tuesday morning setting the tone. The whole show was AI-shaped this year: chip vendors, PC OEMs, automakers, and appliance makers all converged on the same pitch. The model runs locally, on a thing you buy, in your house or your car or your laptop. After three years of "the model lives in the cloud," that's a meaningful tilt. The principled-distributed-AI position I keep arguing for is, broadly, what every CES keynote was selling. The fine print is where it gets interesting.

The on-device-AI hardware story is here. On January 6, Jensen Huang's special address unveiled the Rubin platform, successor to Blackwell: six co-designed chips operating as a single AI supercomputer, roughly 5x training throughput over Blackwell, and a claimed ~10x reduction in inference token cost. That's the datacenter story, and it matters mostly for what it means about token prices in the back half of the year. The piece I care more about is the DGX Spark and DGX Station personal AI computers. Spark runs models up to ~200B parameters on 128GB of unified memory in a desktop form factor on the GB10 superchip. Station goes further: GB300 Grace Blackwell Ultra, ~784GB of coherent memory, trillion-parameter models on a desk. These are not toys. These are workstations priced for serious users that will run frontier-scale models without the cloud. This is the most strategically significant CES announcement in years, and the framing has been almost completely missed. NVIDIA has historically been the company that benefits most from concentrated, hosted AI; every hyperscaler buys their chips. Pivoting hard toward a desktop AI supercomputer line is NVIDIA hedging against the world where the model belongs near the user, not in someone else's datacenter. They're reading the same tea leaves I have. The Apple Silicon open-weights inflection point just got a much louder counterpart from the GPU vendor whose business model is supposedly cloud AI. What I'll watch: pricing, availability, and what software stack ships day one. A trillion-parameter local model that needs CUDA-only tooling is less interesting than one that integrates cleanly with the open inference stacks people already use.
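To make those memory claims concrete, here's the back-of-the-envelope arithmetic. This is a sketch under my own assumptions, not NVIDIA's math: weights-only footprint, with quantization doing the heavy lifting, ignoring the KV cache and runtime overhead that eat the remaining headroom.

```python
# Weights-only memory footprint for local inference.
# Assumption (mine, not a vendor spec): headline claims like "200B on
# 128GB" imply aggressive 4-bit quantization, not fp16, and exclude
# KV cache and runtime overhead.

def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """GB needed to hold the model weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (7, 70, 200, 1000):
    fp16 = weights_gb(params, 16)
    q4 = weights_gb(params, 4)
    print(f"{params}B params: {fp16:.0f} GB at fp16, {q4:.0f} GB at 4-bit")

# 7B params: 14 GB at fp16, 4 GB at 4-bit
# 70B params: 140 GB at fp16, 35 GB at 4-bit
# 200B params: 400 GB at fp16, 100 GB at 4-bit
# 1000B params: 2000 GB at fp16, 500 GB at 4-bit
```

Read against the hardware: a 200B model at 4 bits is ~100GB of weights, which is why 128GB of unified memory is the Spark's honest ceiling once the KV cache and runtime take their cut, and a trillion parameters at 4 bits is ~500GB, comfortably inside the Station's ~784GB.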

AMD had actually set the table the evening before. Lisa Su's opening keynote on January 5 led with the Ryzen AI 400 series: 60 TOPS NPU, full ROCm support, first systems shipping this month. The Ryzen AI Max+ 392 and 388 push to 128GB unified memory and claim 128B-parameter local inference. AMD also previewed the Instinct MI500 GPU for 2027. The AI-PC NPU race is now genuinely competitive: AMD at 60 TOPS, Qualcomm at 80 TOPS on the Snapdragon X2 Elite/Extreme, Intel Panther Lake at 50 TOPS shipping across 200+ Core Ultra Series 3 designs. I argued in "PCs trying to catch up to Apple's Neural Engines" that the gap was closing. It has now closed. By the end of Q1, the average new Windows laptop will have an NPU capable of running a 7B-parameter model with reasonable latency without ever talking to a vendor's API. This is good. Whatever you think of any individual chip vendor, the trend of "every laptop sold in 2026 ships with hardware sufficient for serious local inference" is the trend that breaks the hosted-AI lock-in pattern. The work for the rest of us is making sure the software catches up to the hardware fast enough. Worth noting separately: Lisa Su brought the White House OSTP director on stage for a Genesis Mission segment and announced $150M for AI in classrooms. The geopolitical-industrial framing of US AI is now standard keynote material.
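What "without ever talking to a vendor's API" looks like from the software side, as a minimal sketch: assume a llama.cpp-style server already running on your own machine (something like `llama-server -m model.gguf --port 8080`) exposing the de facto OpenAI-compatible chat endpoint. The model name and port here are placeholders, not a specific product.

```python
# One local-inference round trip: no vendor account, no cloud, no telemetry.
# Assumes an OpenAI-compatible server (llama.cpp's llama-server or similar)
# is running on localhost with a model loaded.
import json
import urllib.request

payload = {
    "model": "local-7b",  # placeholder; most local servers ignore or echo this
    "messages": [
        {"role": "user", "content": "Summarize the CES 2026 on-device AI story in one sentence."}
    ],
    "max_tokens": 128,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The request never leaves the machine: localhost in, localhost out.
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```

The software-catching-up work is exactly this seam: each NPU vendor ships its own runtime, and the job is getting all of them behind boring, interchangeable endpoints like this one.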

Then there's the smart-home pitch, same conference, very different shape. Samsung's First Look event pitched "Your Companion to AI Living": a Bespoke AI Refrigerator with a 32-inch Family Hub, an AI Laundry Combo, an AI Jet Bot Combo, all wired through SmartThings and Bixby, all equipped with cameras, microphones, and screens that "see, hear, and understand to proactively respond." Samsung also became the first third party to build Alexa+ directly into its TVs. LG's CLOiD home robot opens the fridge, loads the oven, folds laundry, and takes commands through Google Assistant. Here's the part that needs saying out loud. Every one of these devices is a microphone and a camera in your home, connected to a vendor's cloud, recording behavior to train a "companion" that gets smarter over time. The pitch is convenience. The transaction is a continuous high-fidelity stream of what you say, what you eat, what you wear, what time you go to bed, and increasingly what your face looks like while you do those things. With 430 million SmartThings users, Samsung now has an unprecedented behavioral dataset on the inside of homes. I've written before about how nobody actually wants to own the PII problem but everyone is happy to collect more PII. The AI smart home is the most aggressive PII-combination move I've seen the consumer-electronics industry attempt, and it's framed as a feature. The cameras and mics inside Samsung's appliances would have been the dystopian premise of a 2015 sci-fi short story. In 2026 they're the marketing. The principled position is the boring one: don't put a camera-and-microphone AI device in your home if the model runs in someone else's cloud. The chip vendors are shipping the hardware to do this locally (same NPU story above), and there's no good reason an AI fridge needs a constant connection to vendor servers other than that the vendor wants the data.

The car version is the same pitch wearing a different jacket. The Mercedes-Benz CLA starts shipping this quarter as the first production vehicle on the full NVIDIA DRIVE stack with the new Alpamayo open-source reasoning model, delivering Level 2 driver assistance on city streets. The MBUX Hyperscreen integrates AI from both Microsoft and Google in the same vehicle. BMW debuted a new Intelligent Personal Assistant on the iX3, powered by Amazon's Alexa+ architecture and rolling out to Germany and the US in H2. The Mercedes/NVIDIA partnership is the interesting one. Alpamayo being open-source is a meaningful choice. NVIDIA could have kept the AV reasoning models proprietary and made automakers lease the brain. They chose open weights instead, and that decision will accelerate the AV industry by years. The voice-assistant-in-the-dashboard story is the same smart-home pitch with a wrinkle: the car is on the road, recording, in places that aren't your house. Worth thinking about.

Smaller items: Qualcomm announced a humanoid robotics initiative alongside the Snapdragon X2 announcements; the chip vendors are all positioning for the humanoid-robotics wave that probably crests in 2027-28. Microsoft's Windows 11 tooling update at CES highlighted AI features across OEM partner laptops; Copilot+ keeps expanding. Pinterest layoffs hit: about 700 people, 15% of headcount, with AI cited as a contributing factor. That's the first sizable AI-cited cut of 2026. The displacement is real, and the pace is being driven by short-term incentives: markets reward "we're an AI company" cuts, so cuts get framed that way regardless of whether the AI is actually ready. I keep arguing the sustainable model is human + AI collaboration, where the headcount still shrinks but it shrinks well.

What to watch next week: the post-CES dust settling, model releases that got buried under the keynote noise, and whatever lands midweek. The pattern from CES: the on-device-AI infrastructure is here, the AI-companion pitch is the new privacy fight, and NVIDIA hedged. The DGX Spark/Station line is the most interesting thing NVIDIA shipped this week, because it tells you what the company that benefits most from hosted AI thinks the world looks like in 2027. They think it looks more local. Take the hint.