Two years on from the Imprint thesis: what changed, what didn't
Two years and change past the encoding-a-person framing, the original "imprint thesis" piece on this site. The framing was: the durable AI category would be the personal one, where a model holds enough of your context to be useful as a thinking partner over time, and the technology to make that work would mature faster than most of the industry expected. The companion piece last month gave the mid-2025 version of the scorecard. This one is the longer two-year retrospective, written deliberately as the "what survived, what didn't" version, because the anniversary deserves a more thorough accounting than the called-my-shot piece had room for.
The thesis held in the parts I expected and bent in the parts I didn't. Worth being honest about both halves: what survived contact with the actual technology, and what was just well-aged speculation.
What survived
A few things from the 2023 framing that look more right two years in than they did at the time:
The core claim that personal context is the durable moat. The 2023 piece argued that the long-run value of AI to individuals comes from the model holding enough about you to be useful in your specific context, and that everything else (better benchmark scores, more capable models in the abstract, faster inference) would matter less than the personal-context dimension. That bet has held. The AI tools that have become daily habits, for me and for the people I read who actually use this stuff, are uniformly the ones that have accumulated context. The ones that haven't accumulated context have stayed in the "occasional query" bucket regardless of how impressive the underlying model is.
The hardware-trajectory argument. The piece argued that the technical foundation for personal AI (running models against your own data on hardware you own) would be available faster than the industry conversation expected. The Apple Silicon plus open-weights inflection point arrived roughly on the timeline I sketched, slightly faster on the open-weights side, slightly slower on the consumer-product-polish side. Net: substantially right.
The privacy-as-architecture argument. The piece argued that "we promise not to look" wouldn't be enough as a privacy stance for personal AI, and that the architectures that put the user in control of their data would compound advantages over the architectures that assumed user trust. That's playing out. The hosted-AI privacy stories are more sophisticated than they were two years ago and still don't compete on the dimensions that matter for the principled-user population.
The encoding-as-curation framing. The original piece argued that the work of "encoding a person" is mostly the work of curating what should be encoded (selecting what to surface, what to ignore, what to keep current) and that this work would persist regardless of how capable the underlying technology became. That's held. The mature personal-AI workflows I've watched develop in public reports and in my own stack spend most of their tuning time on the curation layer, not on the model layer.
That's the survivor list. Reasonable hit rate.
What bent
A few places where the framing was directionally right but specifically off:
The pace of consumer-friendly product emergence. I expected by mid-2025 we'd see meaningful consumer products built around the principled-personal-AI shape. The reality is closer to "individual users build their own setups." The product layer for the casual-but-principled user remains underbuilt. The mismatch between the foundation maturity and the product maturity is wider than the framing anticipated.
The role of the hosted models. I underestimated how much the hosted-AI assistants would matter as the entry point for casual users. The version of personal AI that ships in the daily lives of most people in 2025 is a hosted assistant with the privacy compromises that implies. The principled-personal-AI population I write for is small relative to the hosted-AI population. Both categories matter; the framing weighted the principled side too heavily relative to where the user mass actually landed.
The data-portability prediction. I expected meaningful standards for personal-AI data portability (the equivalent of email's IMAP for AI assistants) to emerge by year two. They haven't. MCP is partway there for tools; the personal-data-and-memory layer is still vendor-specific. The "your AI knows you" value remains locked to whatever vendor you committed to. That's a worse outcome than I sketched.
The economics of cloud inference. I expected hosted-AI prices to come down meaningfully but underestimated the magnitude; the price floor compressed faster than I predicted. The unintended consequence: more workloads stayed hosted than the framing suggested, because hosted got cheaper faster than the operational case for local improved. Local won the categories I called; the size of those categories grew more slowly than I'd hoped.
The competitive space on the hardware side. Apple Silicon's lead is more durable than I gave it credit for; the AMD and NVIDIA alternatives in the unified-memory space took longer to mature than I expected. Net, the principled-personal-AI conversation has been more Apple-centric than the framing anticipated.
That's the "bent" list. Most of these are timing or magnitude misses on directionally-correct claims.
What I got plain wrong
A few specific predictions that didn't survive at all:
The "everyone will run their own model" prediction at the consumer scale. I argued in the original framing that within two years, running a model on your own hardware would be a routine consumer habit. It isn't. The barriers (technical setup, hardware cost, ongoing maintenance) are still substantial enough that this stays a small population's habit. Two years isn't enough for this to become routine; my estimate of how it would propagate was off.
The early-mover product opportunity for principled personal AI. I argued that the first consumer product to ship principled personal AI well would have a multi-year head start. No such product has shipped well; the head start window has effectively expired without anyone claiming it. The opportunity still exists; the prediction that it would be claimed within two years didn't hold.
The Apple-as-bridge prediction's timeline. I expected Apple to ship the bridge product (consumer-friendly principled personal AI) within the two-year window. They haven't. The foundation is right; the product hasn't materialized. The prediction directionally holds; the timeline was wrong.
These are real misses. Worth saying out loud.
What the next two years should focus on
If I were writing the framing piece today rather than two years ago, the things I'd point at as the most important questions:
The bridge-layer problem. Whoever solves "casual user gets principled personal AI without becoming a platform engineer" wins meaningful share of the consumer category. Apple is the most-likely vendor. The window for someone else to do it is narrowing but still open.
The data-portability standards. Without portability, the personal-context value is captive to whichever vendor you committed to. The standards work that needs to happen (MCP-for-memory equivalents, vendor-neutral conversation formats, exportable persona definitions) is the thing the next two years should produce; a rough sketch of what such an export might contain follows this list. Slower-moving than the model layer; more important for the long-run user position.
The economics of small-scale principled personal AI. The cost story is improving but still requires deliberate investment from the user. The bridge that takes the principled-personal-AI cost from "thousands of dollars and ongoing maintenance" to "a hundred dollars and zero maintenance" is the thing that lets this category grow beyond the small principled-user population.
The trust calibration for hosted alternatives. Even with all the local-first work, hosted-AI assistants will continue to dominate consumer use. The work to make those hosted assistants more genuinely respectful of user privacy (not just in marketing but in architecture) is the conversation that will shape what most people's AI experience actually looks like. Worth being engaged in even if it's not the thing the principled-user population most wants.
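To make the portability gap concrete, here is a minimal sketch of what a vendor-neutral personal-context export could look like. This is purely illustrative: the field names and the schema are my own invention for this post, not drawn from MCP or from any shipping vendor's export format.

```typescript
// Hypothetical sketch only: a vendor-neutral "personal context" export.
// Nothing like this exists as a cross-vendor standard today; the names
// below are invented to illustrate what portability would have to cover.

interface PortableConversation {
  id: string;
  startedAt: string; // ISO 8601 timestamp
  messages: { role: "user" | "assistant"; content: string; at: string }[];
}

interface PortablePersona {
  displayName: string;
  preferences: Record<string, string>; // e.g. tone, verbosity, topics to avoid
  pinnedFacts: string[];               // durable facts the assistant should retain
}

interface PersonalContextExport {
  schemaVersion: "0.1-hypothetical";
  exportedAt: string;
  persona: PortablePersona;
  conversations: PortableConversation[];
}
```

The point isn't this particular shape. The point is that nothing even this minimal exists as a cross-vendor standard, so the accumulated-context value stays locked to wherever it was accumulated.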
The honest summary
Two years in, the imprint thesis is more right than wrong, off on the timing of several specific things, and basically wrong about how broadly the principled-personal-AI pattern would spread by now. The category is real, the technology is mature enough, the consumer product layer hasn't caught up, the data-portability layer hasn't materialized.
The next two years probably resolve some of these. They might not. The framing that survived two years should survive another two; the specific predictions that bent should be replaced with sharper ones. Worth coming back to in 2027 to call this round again.
The arc that started in early 2023 is still running. Worth being honest about where it is at the midpoint.