The personal AI framing

Three years into writing about personal AI, and I want to come back to the foundational framing one more time. Not because the framing has changed (it hasn't, much) but because the conversation around it has. The framing held in the parts that mattered, bent in the parts I expected to bend, and surprised me in places I didn't see coming, and the version of the thesis I've been carrying in my head deserves to be restated cleanly. For me, and for anyone reading along.

I keep getting asked variants of the same question. "Is personal AI a real category yet?" "Did the imprint thesis hold?" "Is this still the bet you'd make in 2026?" The answers are scattered across forty-something essays. This piece is the consolidated version.

What the framing was

The core claim, the one I wrote in 2023 and have been writing around ever since: the durable AI category for individuals is the personal one, a model that holds enough of your context to be useful as a thinking partner over time, running on a foundation you control, with data that stays yours.

Four pieces, stated as plainly as I know how to state them:

Personal context is the moat. Not parameters, not benchmarks, not the latest model release. The thing that makes AI useful to a specific human is how much of that human's context, history, work, and way-of-thinking the model can reliably draw on. Everything else is a commodity over a long enough timeline.

The foundation has to be one the individual controls. Hosted assistants are useful entry points; they aren't a destination. The version of personal AI that survives the long run is one where the user owns the hardware, the weights, and the data store. Not out of ideological purity, but because the dependency profile of "we promise not to look" doesn't survive normal corporate decision-making at scale.

The work of encoding a person is mostly curation. Not training. Curation. Deciding what should be in the model's working memory of you, what shouldn't, what stays current, what gets pruned. The technical work of fine-tuning and retrieval is the smaller half. The harder half is the human work of figuring out what's worth keeping.

The cognitive foundation that produces a person's outputs belongs to that person. Their writing, sure. Their voice, sure. But also the underlying way of reasoning that makes the outputs theirs in the first place. That's the deeper layer, and it's the one I keep coming back to because nobody else seems to want to.

That's the framing. It's barely changed since I first wrote about encoding a person. The thing that's changed is the world around it.

What held

Of those four pieces, the first three have aged better than I had any right to expect.

Personal context as the moat is now obvious enough that the hosted-AI companies have all built memory features and personalization layers. They had to. The tools that became daily habits were the ones that accumulated context; the ones that didn't stayed in the occasional-query bucket. I said this in 2023 and the market said it back to me through 2025.

The control-the-foundation argument held in the parts of the population that care about it. The principled-user population, the people who run their own inference, who keep their data local, who treat their AI like infrastructure they own, is small but real. Larger than it was two years ago. Sustaining its own open-source tools and model release cadence. Not mass-market, and it doesn't need to be in order to be real.

The curation-not-training point held everywhere. Every mature personal-AI workflow I've watched develop spends most of its tuning time on what goes into the context window, not on what's in the weights. The teams that figured this out shipped useful things; the teams that kept chasing fine-tunes mostly didn't.
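If it helps to see what that division of labor looks like in practice, here is a minimal sketch. Everything in it is invented for illustration: the item schema, the priority field, the character budget. The point it makes is the shape of the work: the code that assembles a context window is trivial, and the judgment about what belongs in it is not.

```python
# A sketch of curation-over-training. All names and fields here are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class ContextItem:
    name: str                       # label for this piece of personal context
    text: str                       # the content itself
    priority: int                   # lower = more important to keep in the window
    expires: Optional[date] = None  # stale context gets pruned, not retrained away


def assemble_context(items: List[ContextItem], today: date, budget_chars: int) -> str:
    """Choose what goes into the model's working memory for this session."""
    current = [i for i in items if i.expires is None or i.expires >= today]
    current.sort(key=lambda i: i.priority)

    chosen, used = [], 0
    for item in current:
        if used + len(item.text) > budget_chars:
            continue  # over budget: this is where the curation decision bites
        chosen.append(item.text)
        used += len(item.text)
    return "\n\n".join(chosen)


if __name__ == "__main__":
    items = [
        ContextItem("how I write", "Short sentences. Concrete nouns.", priority=1),
        ContextItem("current project", "Restating the personal AI framing.", priority=2,
                    expires=date(2026, 6, 1)),
        ContextItem("old launch notes", "Notes from a 2023 launch.", priority=3,
                    expires=date(2024, 1, 1)),  # expired, so it gets pruned
    ]
    print(assemble_context(items, date(2026, 2, 1), budget_chars=2000))
```

The human decisions live in the list of items and the priorities and expiry dates attached to them; swap in any retrieval or memory system you like and those decisions stay the hard part.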

These are the parts of the framing I'd write the same way today.

What bent

A few places where the framing was directionally right but specifically off.

The consumer product layer didn't arrive on the timeline I sketched. I expected by 2025 to see real consumer products built around principled-personal-AI shapes. The reality is that people who actually use this stuff are still building their own setups, the casual-but-principled user category remains underbuilt, and Apple (the most likely vendor) has shipped the foundation without the bridge product. That mismatch is wider than I expected.

The hosted assistants matter more than I weighted them. The version of personal AI that ships in the daily lives of most people in 2026 is a hosted one, with the privacy compromises that implies. The principled-personal-AI population I write for is small relative to the hosted-AI population. Both categories are real; the framing weighted the principled side too heavily relative to where most users actually landed.

Data portability didn't arrive. I expected something like email's IMAP for personal AI by year two or three. We have MCP for tools, which is more than I expected on the tools side. Portability of personal data and memory is still vendor-specific. The lock-in is real, it's ongoing, and it's a worse outcome than the framing predicted.

These are timing and magnitude misses on a thesis that pointed in the right direction. Not pivots.

What I didn't see coming

The piece of the framing that's grown the most in the last two years, and the one that I think matters most going forward, is the cognitive-IP question.

In 2023 I wrote that the cognitive foundation that produces a person's outputs belongs to that person. I said it almost as an aside. Three years later it's the part of the framing I'd put first if I were writing it fresh.

The reason: the rest of the industry caught up to "personal context matters" faster than I expected, and the hosted-AI companies all built versions of it. Personalization isn't a contested frontier anymore. The contested frontier is who owns the underlying way-of-thinking that the personalization is built on top of. Whose cognitive patterns are getting captured. Whose process is getting reproduced. Whose engine (not whose outputs) is being copied.

The legal framework for this is essentially absent. Copyright covers outputs. Trademark covers marks. Rights of publicity cover likenesses. There is no body of law that protects the underlying cognitive architecture, the decision patterns, the way a person works that produces all of those visible artifacts. And that architecture is exactly what fine-tuning on a sufficient corpus of a person's work product can capture.

This is the next IP fight. It's not about song lyrics or visual style. It's about the cognitive process. And the framework for thinking about it doesn't exist yet.

I want to flag that I'm not writing this from any kind of certainty about the right legal answer. I'm writing it from confidence that the question matters and from concern that almost nobody else is treating it as a first-class problem in the personal-AI conversation. The framing is incomplete without it.

Why I'm still writing from inside the same position

A reasonable question after three years and forty-something essays: is the bet still the bet?

Yes. With sharper edges than it had in 2023, and with the cognitive-IP layer pulled forward to where I'd argue it should always have been, but yes.

The thing that made me write the original framing was the read that the durable value of AI for individuals was going to be in the personal direction, not the general-capability direction. The 2026 version of that read: more confident, not less. The general-capability layer is increasingly a commodity. The thing that's hard to replicate (the thing that compounds for a specific human over time) is the personal layer. That hasn't changed. If anything the commoditization on the capability side is sharpening the contrast.

What I'd add now that I wouldn't have written in 2023:

The personal-AI category is not going to be one product. It's going to be a foundation (hardware plus weights plus protocols plus data store plus curation discipline) and people will build their own from the pieces. The product layer that fits casual users will arrive eventually and won't be the most interesting part of the category by then. The interesting part will be what individuals do with the foundation when it's mature enough that the technical lift drops below the threshold most people are willing to clear.

And the cognitive-IP question is going to matter more, not less, as the foundation matures. Because once the technology can reliably capture a person's way-of-thinking, the question of whether it should, on whose authority, and for whose benefit, becomes the central one. Not a footnote.

The short version

The framing in three sentences, since this essay went long and I want the consolidated version to exist in one paragraph somewhere on this site:

Personal AI is the durable category because personal context is the moat. The foundation has to be one the individual controls, because hosted-only doesn't survive the dependency profile in the long run. And the underlying cognitive architecture that produces a person's outputs is theirs. That's the deepest layer of the framing and the one the rest of the conversation hasn't caught up to yet.

That's the position. It's the position I started writing from in 2023, the position I'm still writing from in 2026, and the position I expect to keep writing from for as long as the category keeps developing. The framing held. The parts of it the world hasn't reached yet are the parts I'm most interested in.

That's where the personal AI framing stands.