Apple Intelligence one year later: what landed and what didn't

WWDC 2024 promised the most ambitious consumer AI rollout of the year. Most of what shipped is the easy part. The interesting part, the part that would have actually changed how iPhones feel, is still pending.

[Image: a smartphone face-up with a holographic neural-network diagram hovering above it]

Nine months after the WWDC 2024 keynote that introduced Apple Intelligence as the most ambitious consumer AI rollout of the year, the picture is mixed in a way that's worth being honest about. Some of the announced features shipped on schedule and work reasonably well. Some shipped late and feel underbaked. The most important promised features (the ones that would have actually changed how an iPhone feels day to day) are still pending. The gap between what was demoed and what's currently in the wild is a useful lens for thinking about Apple's AI strategy specifically and the deploy-AI-on-device problem generally.

What actually shipped

A few features genuinely arrived more or less as promised:

Writing Tools: proofreading, rewriting, summarization, and tone adjustment in any text field on the OS. This works. It's not as polished as a hosted frontier model, but for the in-line "fix this paragraph" use case it's faster than tabbing into a separate AI app, and the privacy story (most of it runs on-device) is genuine. This is the feature most users have actually adopted.
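
For a sense of what that adoption looks like from the developer side, here's a minimal sketch of how an app tunes its participation in Writing Tools on iOS 18 via UIKit's per-view behavior setting. The behavior enum is the real API surface; the view controller around it is illustrative, not from any real app.

```swift
import UIKit

// Sketch: system text views get Writing Tools by default on iOS 18;
// an app can dial the experience up or down per view.
final class NoteViewController: UIViewController {
    let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)

        // .complete = full inline rewriting experience;
        // .limited = overlay-panel suggestions only; .none = opt out.
        textView.writingToolsBehavior = .complete
    }
}
```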

Notification summarization: collapsing a stack of notifications from one app into a one-liner. Useful when it works. Famously embarrassing when it doesn't (the BBC News summarization complaints in late 2024 got real coverage and forced Apple to add disclaimers to summarized notifications). The capability is there; the calibration isn't.

Clean Up in Photos: the background-object removal tool. It works well for simple cases, struggles with complex ones, and mostly delivers what the demo promised.

Genmoji and Image Playground: these shipped. They're fine. The cultural footprint is small.

Reduce Interruptions Focus: uses on-device classification to decide which notifications are urgent enough to break through. Useful for the niche of users who configure Focus modes carefully.

What slipped

The personalized Siri overhaul is the headline slip. The WWDC 2024 demo showed Siri drawing on personal context across apps and carrying out multi-step tasks based on what you'd said, emailed, and noted; that work has been delayed from the original "in the coming months after iOS 18.1" timeline to a 2025 target with no specific date. The Siri features that did ship (more conversational responses, ChatGPT integration for general questions, type-to-Siri) are useful, but they are not the personalized Siri Apple announced. The capability gap between what was demoed and what shipped is large, and Apple has been quieter about the slip than competitors tend to be about delays of similar size.

The cross-app App Intents extensions that would have powered the personalized Siri demos are also pending broader rollout. Some apps expose hooks, but the ecosystem-wide adoption that would make "Siri, do X across these three apps" actually work hasn't happened.
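
To make that dependency concrete, here's a minimal sketch of what one of those hooks looks like in Apple's App Intents framework. The protocol, property wrapper, and result type are the real API surface; the intent itself, AddToReadingListIntent, is hypothetical.

```swift
import AppIntents
import Foundation

// A hypothetical app action exposed to the system. The personalized
// Siri vision depends on apps across the ecosystem publishing typed,
// discoverable intents like this so the assistant can chain them.
struct AddToReadingListIntent: AppIntent {
    static var title: LocalizedStringResource = "Add to Reading List"

    @Parameter(title: "URL")
    var url: URL

    func perform() async throws -> some IntentResult {
        // App-specific persistence would go here.
        return .result()
    }
}
```

The typed parameter is the point: the system, not the app, decides when to invoke the action, so "do X across these three apps" only works once enough apps publish enough of these.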

What this gap says

The simplest read is that the on-device model running on the iPhone (a roughly 3-billion-parameter foundation model, plus task-specific adapters) is genuinely capable for narrow tasks like Writing Tools and summarization, and not yet capable enough for the agentic personal-context tasks Apple actually wants Siri to do. Apple's bet is that the on-device model is the right foundation for a privacy-first AI assistant. That bet is real, but it's also why the harder use cases haven't shipped: agentic personal-context work is hard for any model, and harder still for a 3B-parameter on-device one.
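
For intuition on the "task-specific adapters" part, here's a rough, self-contained sketch of the low-rank adapter idea (LoRA-style) in plain Swift. The shapes and scheme are generic assumptions rather than Apple's published implementation; the point is that the 3B base stays frozen and each task ships only two small matrices.

```swift
typealias Matrix = [[Double]]

// Naive dense matrix multiply, purely for illustration.
func matmul(_ x: Matrix, _ y: Matrix) -> Matrix {
    let (n, k, m) = (x.count, y.count, y[0].count)
    var out = Matrix(repeating: [Double](repeating: 0, count: m), count: n)
    for i in 0..<n {
        for t in 0..<k {
            for j in 0..<m { out[i][j] += x[i][t] * y[t][j] }
        }
    }
    return out
}

// Effective weight for one task: W' = W + B·A, where B is n×r and
// A is r×m with rank r much smaller than n or m, so the per-task
// adapter is a tiny fraction of the base model's size.
func adaptedWeight(base w: Matrix, a: Matrix, b: Matrix) -> Matrix {
    let delta = matmul(b, a)
    var out = w
    for i in 0..<w.count {
        for j in 0..<w[i].count { out[i][j] += delta[i][j] }
    }
    return out
}

// Tiny demo: a 4×4 base nudged by a rank-1 adapter.
let base = Matrix(repeating: [1.0, 1, 1, 1], count: 4)
let a: Matrix = [[0.1, 0.2, 0.3, 0.4]]        // 1×4
let b: Matrix = [[1.0], [0.0], [0.0], [0.0]]  // 4×1
print(adaptedWeight(base: base, a: a, b: b))
```

Swapping tasks means swapping A and B, not the base weights, which is how one model inside a phone's memory budget can serve many features at once.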

The Private Cloud Compute architecture, Apple's mechanism for sending harder queries to attested Apple-controlled servers when the on-device model isn't enough, is technically interesting but rarely visible in the user experience. The hand-off is supposed to be seamless; in practice there's no clear way to know what ran where, and the features that would benefit most from PCC are precisely the ones that haven't shipped.

How this reads next to the alternatives

The contrast with what Google has been doing on Pixel devices, and with what's possible on a Mac for users willing to run open-weights models locally, is uncomfortable for the Apple Intelligence story. Pixel users have had Gemini Nano features running on-device for a while, with looser privacy guarantees but more visible capability. Mac users with M-series silicon and 64GB+ of unified memory can run 70B-class open models that are dramatically more capable than the 3B-parameter Apple Intelligence model, though doing so requires deliberate setup and isn't an out-of-box consumer experience.
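
The size gap is easy to see in the raw weight math. A back-of-envelope sketch, assuming roughly 4-bit quantized weights, about half a byte per parameter (real quantization schemes are mixed-precision, so treat these as order-of-magnitude figures):

```swift
// Why the phone runs a 3B model while a 64GB Mac can host 70B:
// weight storage alone, before any working memory for inference.
let bytesPerParam = 0.5                       // ~4-bit quantization
let onDeviceGB = 3e9 * bytesPerParam / 1e9    // ≈ 1.5 GB: fits phone RAM
let openModelGB = 70e9 * bytesPerParam / 1e9  // ≈ 35 GB: needs a big Mac
print("3B weights ≈ \(onDeviceGB) GB; 70B weights ≈ \(openModelGB) GB")
```

Inference needs working memory on top of the weights (context caches, the OS, everything else running), which is why 64GB, not 35GB, is the practical floor for the 70B class.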

Apple's positioning has been "the AI that respects your privacy by running on-device when possible and on attested infrastructure when not." That's a real value proposition, and for the narrow use cases that have shipped it's been delivered. The harder tier (the agentic, personal-context, multi-app Siri) is the one that would actually convert the strategic positioning into a daily user experience, and it's the one still missing nine months in.

What I'd watch next

Two things are worth tracking over the next couple of quarters:

The first is whether the personalized Siri features ship on iPhone in the iOS 18.4 or iOS 19 timeframe, and whether they ship across all supported devices or only the latest hardware. The original WWDC demos didn't gate features by chip generation, but the practical reality of running adapter-tuned on-device models at meaningful capability seems to push toward gating. Watch the device compatibility list when the features land.

The second is whether Apple opens Apple Intelligence up to third-party model providers beyond ChatGPT. The current architecture ostensibly supports plugging in alternative models (Gemini and Claude have both been rumored), and how that develops will say a lot about whether Apple Intelligence becomes the platform's AI layer or remains an Apple-only offering with ChatGPT bolted on.

The shipped features are useful. The unshipped features are the ones that would have made Apple Intelligence the story of mobile AI in 2024–25. Whether that story still gets to happen, or whether competitors have closed enough of the gap to make it moot, is what the next twelve months are going to settle.