Personal AI assistants: where we actually are
I’ve been writing about personal AI for almost three years, and the honest mid-2026 read is this: the gap between the demos and the daily-driver experience is narrower than it was, wider than the keynotes suggest, and the part that’s hardest is the part nobody at the big consumer companies has actually solved yet.
Worth taking stock. Where we actually are, not where the marketing says we are.
What “personal AI” was supposed to be
The pitch, from roughly 2023 onward, was that you’d have an assistant. Yours. It would know you, your calendar, your inbox, your projects, your preferences, the half-finished things you keep meaning to get back to, and it would do work on your behalf. Schedule the thing. Draft the reply. Pull up the doc you were looking at last Tuesday. Remember that you don’t like phone calls after 6pm. Notice when something on your plate is starting to slip.
That was the pitch. The pitch wasn’t crazy. The pitch is still the right framing. The question is how close any of the actual products are to delivering it.
What works in mid-2026
The honest answer is: more than worked a year ago, and less than the keynotes claim.
Conversational interfaces are excellent. The frontier models in 2026 are genuinely good at the back-and-forth of figuring out what you want, asking clarifying questions, drafting and iterating. The chat surface (the thing you sit down at and talk to) is largely a solved problem. Both Anthropic and OpenAI ship reliably useful chat assistants, and the consumer versions of both are competent enough that the median person can get real work out of them without much training.
Tool use is real. The models can call APIs, search the web, run code, browse, manipulate files, and string several of those together to complete a task. This was the demo in 2024 and the brittle reality of 2025; in 2026 it actually works, most of the time, for tasks of moderate complexity. The agentic workflows that felt aspirational two years ago are now the unremarkable middle of what these systems do.
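For concreteness, here’s the skeleton of that loop. This is a sketch, not any vendor’s API: `ChatClient`, the tool stubs, and the message shapes are all assumptions standing in for whatever model endpoint and connectors you actually use.

```python
# Sketch of the agentic tool-use loop: the model proposes a tool call, the
# runtime executes it, the result goes back into the conversation, repeat.
# ChatClient and the tools are hypothetical stand-ins, not a real vendor API.
from typing import Callable, Protocol

class ToolCall(Protocol):
    name: str
    arguments: dict

class Reply(Protocol):
    text: str
    tool_calls: list[ToolCall]

class ChatClient(Protocol):
    def chat(self, messages: list[dict], tools: list[str]) -> Reply: ...

TOOLS: dict[str, Callable[..., str]] = {
    "web_search": lambda query: f"(stub) results for {query!r}",
    "run_code": lambda source: "(stub) program output",
}

def run_task(client: ChatClient, task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = client.chat(messages, tools=list(TOOLS))
        if not reply.tool_calls:        # no more tools requested: we're done
            return reply.text
        for call in reply.tool_calls:   # execute each call, feed results back
            result = TOOLS[call.name](**call.arguments)
            messages.append({"role": "tool", "name": call.name, "content": result})
    raise RuntimeError("task did not converge within the step budget")
```

The loop is simple; what changed between 2024 and 2026 is how often each iteration succeeds.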
Voice is good. Latency is low enough for natural conversation. The voice modes from OpenAI and Google in particular are pleasant to use; they’re the surface where the assistant feeling is closest to what the original pitch implied.
Local-and-private setups have matured. The open-weights models you can run on a laptop are useful for a real share of the work: not the hardest reasoning, but the routine assistant tasks. The local-first second brain I built last year is more capable now than when I first stood it up, with the same hardware and the same architecture.
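To give a sense of how little glue “routine assistant tasks” requires now, here’s the whole integration, assuming an Ollama-style server on its default local port; the model tag is a placeholder for whatever you’ve actually pulled.

```python
# Calling a local open-weights model for a routine assistant task, assuming an
# Ollama server on its default port; the model tag is whatever you have pulled.
import requests

def ask_local(prompt: str, model: str = "llama3.1") -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # one JSON reply instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(ask_local("Summarize these meeting notes in three bullets: ..."))
```

Nothing leaves the machine, which is the entire point of the local-first posture.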
So the foundation is there. The pieces work.
What doesn’t work
The thing that doesn’t work is the thing that actually matters: the assistant being yours.
Personal AI requires personal context. It requires the assistant to know what’s on your calendar, what’s in your inbox, what’s in your documents, what your relationships are, what you’re working on, what you’ve decided, what you’ve ruled out. Without that, what you have is a smart chatbot, which is useful, but is not a personal assistant.
The big consumer companies have not solved the personal-context problem. Apple has the data and not the model. Google has the data and the model and not the willingness to actually wire them together in a way that crosses Workspace silos cleanly. OpenAI has the model and not the data. Anthropic has the model and explicitly not the data; they’ve staked out the position that the model should not be the data store, which I think is right, but it means the personal-context problem has to be solved by somebody else, and nobody at consumer scale is solving it.
Memory is the second unsolved piece. The chat assistants of 2026 do have memory features. They are not good enough yet to substitute for what an actual personal assistant would remember about you. I wrote about building an assistant that actually remembers, and the gap between what I have running for myself and what the consumer products offer is still substantial. The consumer memory is shallow and lossy. The thing you want is durable and queryable and yours.
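“Durable and queryable and yours” can be as simple as a local database you control. Here’s a minimal sketch using SQLite’s built-in full-text search; the schema is an assumption of mine, not any product’s memory feature, and a real setup would layer embeddings and provenance on top.

```python
# A durable, queryable, local memory store: one SQLite file with full-text
# search. The schema is a sketch, not any product's memory feature.
import sqlite3
import time

class MemoryStore:
    def __init__(self, path: str = "memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE VIRTUAL TABLE IF NOT EXISTS memories "
            "USING fts5(text, topic, created)"
        )

    def remember(self, text: str, topic: str = "general") -> None:
        self.db.execute(
            "INSERT INTO memories VALUES (?, ?, ?)",
            (text, topic, time.strftime("%Y-%m-%d")),
        )
        self.db.commit()

    def recall(self, query: str, limit: int = 5) -> list[str]:
        rows = self.db.execute(
            "SELECT text FROM memories WHERE memories MATCH ? LIMIT ?",
            (query, limit),
        )
        return [text for (text,) in rows]

store = MemoryStore()
store.remember("Doesn't like phone calls after 6pm", topic="preferences")
print(store.recall("phone calls"))
```

The file sits on your disk, survives every session, and answers arbitrary queries, which is the bar the consumer memory features haven’t cleared.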
The cross-app workflow is the third unsolved piece. The reason your assistant should be able to schedule a meeting is that it can see your calendar, see who you’re trying to meet with, draft the email, send it, watch for the reply, and update the calendar when the reply comes in. That whole loop, end-to-end, across the apps you actually use, working reliably without you babysitting it: that’s not a thing any consumer assistant does well in May 2026. Pieces of it exist. The whole loop does not.
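The control flow isn’t the hard part. Here’s the whole loop as a sketch, with every connector (`calendar`, `mail`) hypothetical; what nobody has shipped at consumer scale is versions of these connectors that work reliably across the apps people actually use.

```python
# The scheduling loop, end to end. All connectors here are hypothetical; the
# unsolved part is making each step reliable across real apps, not this logic.
from dataclasses import dataclass

@dataclass
class Slot:
    start: str
    end: str

def schedule_meeting(calendar, mail, attendee: str, topic: str) -> None:
    slots = calendar.free_slots(days=5)                    # see your calendar
    offer = ", ".join(f"{s.start}-{s.end}" for s in slots[:3])
    thread = mail.send(                                    # draft and send
        to=attendee,
        subject=f"Meeting: {topic}",
        body=f"Could we talk about {topic}? I'm free: {offer}",
    )
    reply = mail.wait_for_reply(thread, timeout_hours=48)  # watch for the reply
    chosen = next(s for s in slots if s.start in reply.body)
    calendar.create_event(                                 # close the loop
        topic, chosen.start, chosen.end, attendees=[attendee]
    )
```

Each line hides a failure mode: auth, rate limits, an ambiguous reply, a calendar that changed underneath you. That’s the babysitting.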
Where the big four actually are
Apple. Apple Intelligence in 2026 is better than it was at launch and still not what was promised. The on-device, privacy-preserving framing is the right framing (it’s the architecture I’d want), and the execution is a year-plus behind where it needed to be. The Siri rebuild keeps getting punted. The model quality is below the frontier and there’s no clear path to closing that gap with the on-device-first constraint. Apple is the company with the most personal context on the device and the least ability to make use of it. That’s the painful position.
Google. Gemini-the-model is genuinely competitive. Gemini-the-assistant inside the Google ecosystem is the closest thing the big four have to a working personal AI for normal users, because Google does have the data and is willing to use it. The problem is that Gemini is still optimized for Google’s products rather than for the user’s life, and the line between “your assistant” and “Google’s surface for showing you things” is not as clear as it needs to be. They’re closer than anyone else to the consumer-personal-AI position. They have the most to lose by getting the trust question wrong.
Anthropic. Claude is the strongest model for the kind of work I actually want a personal assistant to do: careful, accurate, willing to push back, capable of long-running tasks. It is explicitly not a personal-assistant product in the consumer sense. It’s a powerful chat assistant with strong tool use and an artifact/agent surface, plus Claude Code on the developer side. The bet is that the model is the product and the personal layer gets built on top, by users and by other companies, not by Anthropic. That’s the position I think ages best, but it’s a developer-and-power-user posture more than a consumer one.
OpenAI. ChatGPT remains the consumer brand. The memory and personalization features are the most developed of the chat assistants. The voice and the agentic features are strong. The trust position is the weakest of the four: the privacy posture is muddy, the training-data and IP situation is unresolved, and the “ChatGPT knows everything about you” pitch sits uneasily against any read of who’s actually running the company and what their incentives are. The product is excellent. The trust question is the part I keep coming back to.
The principled-user position
I write about this as someone who actually runs this stuff for himself. My day-to-day setup is a local-first second brain with assistant capabilities on top, plus the frontier chat assistants for the work that needs them, plus a careful posture about what data lives where and what’s allowed to leave the device. That setup is more work than a consumer product should require. It is also, currently, the only way to get the personal-AI experience that the keynotes have been promising since 2023.
The principled-user lens I’ve been writing from: the assistant should be yours. The data should be yours. The model can be borrowed; the context can’t. The provider relationship should be a tool relationship, not a custody relationship.
That position is going to get harder to hold as the big four push deeper into the personal layer, because the convenient thing is to let one provider hold all of it. The convenient thing is also the thing I won’t do, and the thing I’d encourage readers of this blog to think hard about before doing.
What I’d rather be wrong about
I’d rather be wrong about the consumer companies being this far behind. I’d rather Apple ship the Siri rebuild and have it be excellent. I’d rather Google sort out the trust line. I’d rather one of the four make the personal-context problem easy for normal users without forcing the custody trade I’m not willing to make.
The gap I keep coming back to is the gap between “the model works” and “the assistant is mine.” The model works. The assistant being mine is still mostly DIY in mid-2026, and the people who’ll have the best personal AI a year from now are the people who treat the personal layer as their own problem to solve, not the providers’.
That’s where we actually are. Closer than 2024. Further than the keynotes. The interesting work is still the work of making it yours.