Called my shot: what's happening with personal AI

[Image: a polished green billiards table viewed at a dramatic angle, a single white cue ball near the center, a wooden cue stick across the felt]

Two and a half years ago, in the middle of the early ChatGPT wave, I wrote a framing piece for this site arguing that the durable category from the AI moment would be personal AI rather than enterprise chatbots. The bet was specific: that the value of AI compounds with private data and personal context, and that the people who'd benefit most would be individuals running models against their own information rather than employees querying a vendor's hosted assistant.

Two and a half years in, the bet is partway right and wrong in interesting ways. Worth being clear about both because the second half of the decade is going to settle which version of the personal-AI story is the actual one.

What landed

The structural pieces I called are there:

Local inference on consumer hardware became real. Apple Silicon plus open-weights crossed the line where workhorse-tier capability runs on hardware an individual can buy. The "60 GB models on a Mac Studio" picture I sketched in 2023 happened, on roughly the timeline I expected, with the open-weights model line that I expected. The hardware to run personal AI exists.

Privacy-bound use cases are working. The always-on personal AI assistant that watches your local files, your local conversations, your local activity is a real category for individuals who care to set it up. The data stays local; the inference is local; the value compounds with the accumulated personal context. The pattern works exactly the way the encoding-a-person framing suggested it would.

The discipline emerged. Memory hygiene, scoped indexing, capability isolation: the practitioner discipline around personal AI is forming. The shops doing it well are getting outsized value. The shops doing it badly are accumulating quiet failures. Same shape as any good engineering practice.

The cloud frontier remained a complement, not a substitute. Hosted Claude / GPT / Gemini are still the right answer for the hardest problems, where capability is the binding constraint. The pattern that emerged is exactly the routing-per-workload one I expected: local for the privacy-bound, high-volume, latency-sensitive cases; cloud for the frontier-capability cases.
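The routing logic itself is simple enough to sketch. This is a hypothetical illustration of the split described above, not any real product's router; all the names (`Workload`, `route`) are made up:

```python
# Illustrative sketch of routing-per-workload between local and hosted
# inference. The names and the priority order are assumptions, not a
# real library or any specific product's policy.
from dataclasses import dataclass

@dataclass
class Workload:
    privacy_bound: bool        # data must not leave the machine
    high_volume: bool          # many calls, cost-sensitive
    latency_sensitive: bool    # interactive, needs a fast first token
    frontier_capability: bool  # needs the strongest available model

def route(w: Workload) -> str:
    """Return 'local' or 'hosted' for a workload, per the split above."""
    if w.privacy_bound:
        return "local"    # privacy is a hard constraint, overrides capability
    if w.frontier_capability:
        return "hosted"   # capability is the binding constraint
    if w.high_volume or w.latency_sensitive:
        return "local"    # cheap and fast on-device
    return "hosted"       # default to the stronger model

# A privacy-bound workload stays local even when it wants frontier capability:
print(route(Workload(privacy_bound=True, high_volume=False,
                     latency_sensitive=False, frontier_capability=True)))
# prints: local
```

The one design choice worth noting: privacy is checked first, because it's the constraint that can't be bought back later.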

That's a reasonable amount right.

What's wrong

Some places where the 2023 framing was off:

The consumer side moved slower than I expected. I thought by mid-2025 we'd see meaningful consumer products built around personal AI, apps that ran on the user's hardware, apps that respected the privacy boundary, apps that compounded value with personal context over time. The reality is closer to "individual practitioners build their own setups" plus "the major hosted assistants accumulate the same value with the privacy trade-off." The product layer for principled personal AI doesn't really exist as a consumer category yet.

Enterprise chatbots got more durable than I gave credit for. The Microsoft Copilot / Workspace AI / Salesforce Einstein / etc category I was bearish on turned out to be a meaningful category for actual enterprise productivity. Not the trillion-dollar one the marketing promised; not zero either. The "AI everywhere in enterprise software" pattern works for the things it works for and the bet against it was too aggressive.

The data-portability layer didn't materialize. I expected by now we'd have meaningful standards for moving personal-AI data between products, the equivalent of email's IMAP for AI assistants. The MCP standardization is getting partway there for tools, but the personal-data-and-memory portability is still missing. Without it, the "your AI knows you" value is locked to whichever vendor you committed to.

The price floor compressed faster than I predicted. I expected hosted-AI prices to come down maybe 5× over two years; they came down 10-20× depending on tier. The pricing pressure changed the local-vs-hosted calculation in ways that pushed more workloads back to hosted than I'd planned. Local still wins for the workloads where it wins; the cases for hosted got broader because hosted got cheaper faster than expected.
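The way a price compression like that moves the calculation is easy to see in back-of-envelope form. The numbers below are illustrative assumptions, not real hardware or vendor prices:

```python
# Back-of-envelope break-even for local vs hosted inference.
# All figures are illustrative assumptions, not real prices.

def breakeven_mtok(hardware_cost: float, hosted_price_per_mtok: float) -> float:
    """Millions of tokens at which local hardware pays for itself,
    ignoring power, depreciation, and the non-price value of privacy."""
    return hardware_cost / hosted_price_per_mtok

# A 20x drop in hosted price pushes the break-even volume out 20x:
before = breakeven_mtok(5000, 10.0)  # $5k machine vs $10 / Mtok hosted
after = breakeven_mtok(5000, 0.50)   # same machine vs $0.50 / Mtok hosted
print(before, after)
# prints: 500.0 10000.0
```

The point of the arithmetic: when hosted gets 10-20x cheaper, the volume you need to justify local hardware grows 10-20x, which is exactly why more workloads drifted back to hosted than I'd planned.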

The current shape, halfway through 2025

What the personal-AI category actually looks like in mid-2025, with the benefit of two-and-a-half years of evidence:

Two distinct populations. There's a small population of people who actually use this stuff (me, the people running home setups, the privacy-focused individuals) who've built deliberate personal-AI configurations. There's a much larger population of casual users who get personal-AI value through hosted assistants without the principled setup. The two populations have different value capture, different risk profiles, different capability ceilings.

The principled population is small and durable. It's not growing as fast as the casual population, but the people in it are deeply committed and the setups they're building are durable. The hands-on individuals account for a meaningful fraction of AI infrastructure decision-making; the shops they work for adopt patterns from their personal setups.

The casual population is broad and shallow. Hosted-AI assistants have meaningful penetration in consumer use; the value capture is real but the user has surrendered most of the data and most of the leverage to the vendor. The pattern is fine for users who don't care about the trade-off and uncomfortable for users who do.

The bridge layer is missing. What doesn't exist yet, and what I'd call the actual product gap of the category, is the bridge from "casual hosted user" to "principled local user" without requiring full home-lab construction. A consumer product that handles the discipline pieces (scoped indexing, capability isolation, redaction at ingest, memory hygiene) without requiring the user to be a platform engineer. The market for this exists; nobody has shipped it well.
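What those discipline pieces would look like inside a bridge product isn't mysterious. A toy sketch, with made-up scope names and a deliberately crude redaction pattern, of two of them: a scope allowlist for indexing and redaction at ingest:

```python
# Toy sketch of two "discipline pieces" a bridge product would automate:
# scoped indexing (an explicit allowlist of data sources) and redaction
# at ingest. Scope names and the secret pattern are illustrative.
import re

ALLOWED_SCOPES = {"notes", "mail", "calendar"}  # scoped indexing: explicit allowlist
SECRET = re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+")

def redact(text: str) -> str:
    """Redaction at ingest: strip obvious secrets before anything is indexed."""
    return SECRET.sub("[REDACTED]", text)

def ingest(scope: str, text: str, index: dict) -> None:
    """Index a document only if its source scope is on the allowlist."""
    if scope not in ALLOWED_SCOPES:
        raise ValueError(f"scope {scope!r} not in allowlist")
    index.setdefault(scope, []).append(redact(text))

index: dict = {}
ingest("notes", "deploy password: hunter2", index)
print(index["notes"][0])
# prints: deploy [REDACTED]
```

The point of the sketch is that none of this is hard engineering; what's missing is a consumer product that makes these defaults invisible, the way a password manager made credential hygiene invisible.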

What I'd predict from here

Three things I'd put the next bet on, for the second half of the decade:

Apple ships the bridge layer. The bridge product that takes casual users to principled personal AI is most likely to come from Apple. They have the hardware, the OS-level position, the privacy stance, and the install base. Whether they ship it well is the question; the strategic position to do it is theirs to lose.

Open-source consumer-friendly tooling matures. The current open-source personal-AI tools (Ollama, LM Studio, the various MCP servers) are practitioner-grade. The consumer-grade versions of these emerge over the next 12-18 months. Whoever builds these well captures the bridge that Apple might or might not ship.

The principled-user discipline becomes a recognizable consumer pattern. People will know what "I run my AI locally" means as a consumer concept the way "I use ad-blockers" became a recognizable consumer pattern. The principled population grows from "small and weird" to "small and visible," which changes the conversation about what default consumer AI should look like.

These are bets, not predictions. The second-half-of-decade space is more contested than the first-half was, and the personal-AI category specifically is the one where the strategic positioning matters most for the kind of internet that emerges.

What I'd write differently if starting today

The 2023 framing was right about the structural direction and wrong about the specific timing and product shapes. The version I'd write today would be:

  • Lean even harder on local hardware. The Apple Silicon trajectory is more durable than I gave credit for; the open-weights gap is closing faster than I predicted.
  • Be more humble about consumer-product timing. The category exists; the consumer-friendly products lag the practitioner setups by years, not months.
  • Be more pointed about the data-portability problem. The lock-in story matters more for personal AI than I emphasized; the vendor-lock-in cost in the AI era hits personal AI hardest because the personal context is the most expensive thing to recreate.
  • Be more specific about the principled-user / casual-user split. Two populations, two value-capture stories, two product-design conversations. Treating them as one population (which the 2023 framing did) misses the most important thing about how the category is actually evolving.

The shot landed mostly. Not exactly. The version of personal AI that's emerging is closer to my framing than to the alternatives, and the alternatives didn't fully fail either. The interesting work for the next two years is in the bridge layer between principled and casual. That's where I'd put the new bet.

The arc I started in 2023 isn't done. The category is forming. Worth coming back to in another two years to call it again.