AI in the news: week of February 15, 2026
Delhi hosts the first Global South AI summit, Chinese labs race to ship before Lunar New Year, OpenAI quietly retires GPT-4o and GPT-5 from ChatGPT, the Microsoft-OpenAI-AWS triangle gets rewired, and Block crosses the half-the-workforce mark on AI-cited cuts.
What this week actually changed: the center of gravity in frontier AI moved away from US labs.
The week ending Sunday February 15 was the prelude to two big set-pieces: the India AI Impact Summit opening Monday February 16 in Delhi, and Lunar New Year on the same day. Both pulled news forward into the back half of the week. Chinese labs raced to ship before the holiday break, the summit drew pre-event positioning from every major lab, and the Microsoft-OpenAI-AWS reshuffle that's been rumored for a quarter finally surfaced in concrete form. Heavy week, mostly about positioning rather than capability.
Delhi opens the first Global South AI summit
The India AI Impact Summit 2026 opens Monday February 16 at Bharat Mandapam in New Delhi, running through the 21st. It's the follow-on to the Paris AI Action Summit (Feb 2025) and the Bletchley/Seoul track before that, and it's the first time the convening has happened in the Global South. That framing is doing a lot of the work this week.
The pre-event coverage focused on three things. The "People, Planet, Progress" framing India is pushing as the structural alternative to the safety-vs-acceleration binary the prior summits got stuck on. The attendance list, which includes Pichai, Altman, Amodei, Hassabis, and Ambani plus roughly 20 heads of state. And the Indian government's positioning that the summit's deliverable isn't a treaty but a set of practical, applied-AI commitments.
I think the Global South framing matters, and I think it's underweighted in the Western coverage. The frontier-AI conversation has been a US/UK/EU/China conversation for three years, and the governance frameworks that have emerged (SB 53, the EU AI Act, the UK AISI charter) reflect that. Adding a venue where the convening party is neither lab-headquarters nor superpower-regulator changes which questions get asked. Whether the summit produces anything binding is almost beside the point in year one, the precedent is that the next one won't be in Paris or San Francisco either.
What I'll watch next week: whether the summit communique includes anything on training-data sourcing from non-English corpora (where India has actual leverage), whether the labs commit to anything operational beyond the same voluntary frameworks they've already published, and whether the Global South framing survives contact with the actual delegations.
The cadence inside Anthropic
February 5, Anthropic released Claude Opus 4.6, a frontier coding model with a 1M-token context window in beta, "agent teams" inside Claude Code, and effort controls that let developers tune the intelligence/speed/cost trade per call. TechCrunch's coverage framed the launch around the agent-teams feature; GitHub made it generally available in Copilot the same day.
The benchmark numbers are real, top of Terminal-Bench 2.0, 144 Elo points ahead of GPT-5.2 on GDPval-AA. The 1M context is a meaningful capacity jump. The agent-teams feature inside Claude Code is the more interesting product move: it's the first mainstream-product surfacing of multi-agent orchestration as a UX, not a developer-side abstraction. I've been writing about this pattern for months (who composes the team, what the handoff looks like, how state gets shared) and Anthropic's bet is that the team-of-agents abstraction belongs in the IDE rather than buried in an SDK.
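To make the effort-controls idea concrete, here's a minimal sketch of what per-call tuning looks like from the caller's side. The `effort` parameter name, the tier values, and the model ID are my assumptions for illustration, not Anthropic's documented API; the point is the shape of the trade, not the names.

```python
# Hypothetical sketch: route each task to an effort tier, then set
# that tier per request. Parameter and model names are assumptions.

def pick_effort(task: str) -> str:
    """Choose an effort tier from a rough task-complexity heuristic."""
    lowered = task.lower()
    if any(m in lowered for m in ("rename", "format", "typo")):
        return "low"      # fast and cheap: mechanical edits
    if "refactor" in lowered or "design" in lowered:
        return "high"     # slow and expensive: cross-file reasoning
    return "medium"       # sensible default in the middle of the trade

def build_request(task: str, model: str = "claude-opus-4-6") -> dict:
    """Assemble a request payload with the effort knob set per call."""
    return {
        "model": model,
        "effort": pick_effort(task),   # hypothetical parameter name
        "messages": [{"role": "user", "content": task}],
    }
```

The design choice worth noticing is that the knob is per call, not per deployment: one agent loop can spend cheaply on mechanical steps and expensively on the steps that need real reasoning.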
The thing I want to flag is the cadence. Sonnet 4.5 in late September, Sonnet 4.6 in November, Opus 4.6 in early February. Anthropic is on a quarterly half-step rhythm now, and the half-step versioning is doing real work. Capability deltas at each step are bounded, the API contract holds, and the upgrade is mostly a drop-in for downstream code. That's how you build trust with the agent-stack builders, and I think it's why Anthropic is winning the SDK and IDE conversation even when individual benchmark wins swap back and forth.
OpenAI retires its old flagships with no marketing
February 13, OpenAI retired GPT-4o, GPT-4.1, GPT-4.1 mini, and GPT-5 (Instant and Thinking) from ChatGPT. Quiet announcement, no marketing. The models stay in the API; they're just gone from the consumer surface.
Worth dwelling on for a second. GPT-4o was the model that defined ChatGPT for a year. GPT-5 launched with a major event in summer 2025. Retiring both from the consumer product on a Friday in February with no event is a statement about what OpenAI thinks the consumer product is for now. It's a router to the current best-available model, not a catalog of options. The user shouldn't be picking between five model names. The product picks for them.
I think this is right product-wise, and worth flagging operationally. For anyone building against the OpenAI consumer surface (custom GPTs, browser plugins, agent harnesses that hit chat.openai.com), the model-selection assumption you made six months ago is now wrong. For anyone building against the API the impact is smaller (the API contract is more durable than the consumer surface) but the signal is the same: model-name-as-stable-identifier isn't a guarantee anyone is making. Vendor lock-in in the AI era keeps showing up in new shapes, and "the model you built around no longer exists" is one of them.
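The defensive pattern against "the model you built around no longer exists" is to resolve a semantic alias to the first still-available concrete model rather than hard-coding one name. A minimal sketch, with illustrative model IDs and a caller-supplied availability set:

```python
# Resolve a semantic alias ("chat-default") to the first model the
# provider still serves. Model IDs here are illustrative only.

PREFERENCES = {
    "chat-default": ["gpt-5.2", "gpt-5", "gpt-4o"],
}

def resolve_model(alias: str, available: set[str]) -> str:
    """Return the first preferred model that is still available."""
    for candidate in PREFERENCES.get(alias, []):
        if candidate in available:
            return candidate
    raise LookupError(f"no available model for alias {alias!r}")
```

In practice `available` would come from the provider's model-listing endpoint at startup; the point is that a retirement degrades you to the next preference instead of breaking you.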
China ships ahead of Lunar New Year
Lunar New Year landed Monday February 16. The Chinese AI labs treated the week before as a hard ship deadline, and the volume of releases reflects it.
Alibaba released Qwen 3.5 hours before the holiday started. Multimodal: text, photo, and video input up to two hours. Designed for agent tasks, and 60% cheaper than Qwen 2.5. The weights ship under the same permissive license as the prior Qwen generation. ByteDance, Zhipu, and several smaller labs also pushed releases in the same window. DeepSeek didn't. V4 is expected later, and the silence is itself a piece of news after the V3-then-R2 cadence the lab held last year.
Two things on Qwen 3.5 specifically. First, open-weights release at frontier-adjacent capability is the move that keeps the open-weights conversation honest. Llama 4 is open-weights at scale; Qwen 3.5 is open-weights at scale and on a faster ship cadence; Mistral 3 (released earlier this month) is open-weights at scale from a European lab. The "open-weights is one generation behind frontier" narrative is no longer true and hasn't been for a year. Second, the price, 60% cheaper than Qwen 2.5, keeps the China-driven price compression on the same trajectory it's been on since DeepSeek V3 in late 2024. The economics of running a hosted model are a moving target, and the moving direction is down.
I don't run Qwen as a daily driver. I run open-weights models locally for sensitive workloads, and the locally-runnable variants of Qwen 3.5 are now in the conversation in a way they weren't six months ago. Worth a serious look for anyone who's been waiting for the open-weights option to be unambiguously good enough.
The Microsoft-OpenAI-AWS triangle gets rewired
Mid-week, OpenAI and AWS announced a major partnership expansion. The terms: Amazon commits up to $50B of investment in OpenAI, OpenAI expands its existing $38B AWS commitment by an additional $100B over eight years, and AWS becomes the exclusive third-party cloud distribution partner for OpenAI's enterprise platform. The Microsoft side of the triangle was rewired in parallel; the practical upshot is that the Microsoft-as-exclusive-OpenAI-cloud era is over.
The substance here matters more than the headline. OpenAI's compute-and-distribution dependencies are now multi-cloud by design rather than de-facto. Enterprise customers who picked Azure-OpenAI specifically because it was the only OpenAI-on-cloud surface now have AWS as a parallel option with a different set of integrations and a different commercial structure. Microsoft's Copilot strategy, which had been built around the assumption that the OpenAI relationship was a moat, now has to compete on Copilot-as-product rather than Copilot-as-only-path.
The frame I keep coming back to is concentration. Two years ago the frontier-AI capital story was Microsoft-OpenAI as a single unit. The reshuffle splits it into Microsoft, OpenAI, AWS, plus the Google-Anthropic axis forming in parallel. The number of distinct frontier-capital relationships is going up, not down. That's a healthier shape than where we were, even if the absolute capital concentration is still extreme.
Block crosses the half-the-workforce mark
Block cut nearly half its workforce (over 4,000 jobs) with Jack Dorsey explicitly attributing it to AI (Programs.com). Salesforce added another 1,000 on top of last year's customer-service cuts, and Challenger pegged February AI-cited cuts at ~4,680. Block is the loudest version yet of "AI replaces workers" being load-bearing for an executive narrative, see the longer piece for where I land on the pace. Short version: the displacement is real, the pace is wrong, and the financial logic driving the rush is the part that needs naming.
A few smaller items worth flagging
- Mistral shipped Voxtral on February 4, open-weights audio model with state-of-the-art transcription, diarization, and real-time processing. The open-weights story isn't only language anymore; audio and video are catching up faster than I'd expected. The Mistral release sits inside the broader Mistral 3 push from earlier in the quarter.
- California's SB 53 is now 45 days into enforcement. First incident reports under the new transparency regime are due any week. I'll cover the actual filings when they land, the shape of the first reports is what tells us whether the law has teeth.
- Texas's Responsible AI Governance Act also went live January 1. Two-state convergence on AI transparency is the federal-floor question now. The next legislative session will tell us how many more states copy the pattern.
What to watch next week
The hosted-frontier-AI lock-in shape is changing, not loosening. Anthropic's agent-teams in Claude Code, OpenAI's consumer-surface model auto-routing, the AWS-as-OpenAI-distribution deal, all three deepen the platform-layer lock-in even as the model-name lock-in gets weaker. The principled-user response is the same: build the agent stack against open-weights endpoints so the lock-in is optional, not structural.
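What "build against open-weights endpoints so the lock-in is optional" looks like in practice is keeping the backend a config detail rather than a structural dependency. A minimal sketch; the URLs, profile names, and model IDs are illustrative, not any vendor's actual values:

```python
# Keep the serving backend swappable: the same client shape points at
# a hosted provider or a local open-weights server. Values illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    base_url: str
    model: str

ENDPOINTS = {
    "hosted": Endpoint("https://api.example-provider.com/v1", "frontier-large"),
    "local": Endpoint("http://localhost:8000/v1", "qwen3.5-local"),
}

def make_endpoint(profile: str) -> Endpoint:
    """Select a backend by profile name; switching is a config change."""
    return ENDPOINTS[profile]
```

Most open-weights servers expose an OpenAI-compatible HTTP surface, which is what makes this single-abstraction approach workable: the agent stack talks to one request shape regardless of which profile is live.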
The labor narrative consolidates further. Block crediting AI for the loss of nearly half its workforce is the clearest data point yet. The displacement is real. The pace is what's wrong, and the financial logic is the driver. The firms figuring out human+AI collaboration will outperform the firms racing to cut headcount before the next earnings call. Headcount shrinks either way; collaboration just manages the shrink better.
Next Sunday: the India summit communique, whatever DeepSeek does or doesn't ship after the holiday, and the first SB 53 incident reports if they land.
Sources
- India AI Impact Summit 2026, official site
- India to host AI Impact Summit in February 2026. Press Information Bureau, Government of India
- Introducing Claude Opus 4.6. Anthropic
- Anthropic releases Opus 4.6 with new 'agent teams'. TechCrunch
- Claude Opus 4.6 generally available for GitHub Copilot. GitHub Changelog
- AI Updates Today (model retirements). LLM Stats
- Alibaba unveils major AI model upgrade ahead of DeepSeek release. Bloomberg
- These are China's new AI models released ahead of Lunar New Year. Euronews
- Mistral AI releases new open-source models 2026
- OpenAI shakes up partnership with Microsoft. CNBC
- List of companies announcing AI-driven layoffs. Programs.com
- Challenger Report: AI leads reasons for cuts. Challenger, Gray & Christmas