What if your AI didn't know everything about you all at once?

The AI leak we keep worrying about isn't bad data. It's bad scoping. Here's the shape that fixes it.

Here's the failure shape that crystallized this for me. An AI assistant, given full access, tries to book a personal event into a Tuesday-morning slot that (as far as the calendar is concerned) is a work one-on-one. The AI isn't wrong about availability. It's wrong about which me it's helping.

That's the moment I keep coming back to. The assistant has access to everything: work calendar, personal calendar, household scheduling, notes from a dozen contexts, threads from every direction. It mashes it all into one bowl and serves back the most plausible answer it can find. The answer is nonsense. Not because the data is bad. Because nothing told the AI which slice of life it was supposed to be living in at that moment.

[Figure: "One context, one bowl." A single AI context holding everything, work calendar, family chat, client invoices, school portal, production DB, personal email, investor memo, voice notes, support tickets, credit-card txns, with the AI's reach spanning all of it. Every prompt sees everything; the leak isn't bad data, it's no walls.]
The current shape: every piece of context available to every prompt.

Here's the thing I want to say, and I'm going to spend the next thirteen Tuesdays saying it from different angles: the problem with AI today isn't that it knows too little about us. It knows plenty. The problem is that it knows all of it at once, with no walls between the rooms.

The leak is bad scoping, not bad data

Most of the AI horror stories I read in 2026 had the same shape under the hood. Somebody's customer-service AI replied in the founder's casual voice, complete with a Slack-style "lol." A productivity AI quoted text from a confidential investor memo when summarizing a public LinkedIn post. A household-planner AI cheerfully shared, during a casual chat thread, a sensitive detail the user had told it about a medical issue six weeks earlier.

None of that is a data quality problem. The AI had perfectly accurate information. It just had all of the information, available to every prompt, callable by every tool. There was no notion of "this context belongs to that room, not this one."

I want to give that failure mode a name, because once you see it you can't unsee it. Call it the one-big-bowl problem. Every memory, every connector, every document, every embedding, every credential, all of it dropped into the same bowl and stirred. Then the AI reaches in to answer your question and pulls out whatever's closest to the surface. Sometimes that's fine. Sometimes it pulls out something that should never have been in the same bowl.

If you've ever had your phone autocorrect your boss's name to your ex's, you already know how this feels. The system isn't broken. It's just unscoped.
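
If you want to see the bowl as code, here's a minimal sketch. Everything in it, the store, the toy relevance function, the field names, is mine for illustration, not any real product's internals:

```python
# One big bowl: every memory from every slice of life in one list,
# with nothing recording which room it came from.
memories = [
    {"text": "Q3 forecast shows revenue up 12%", "source": "investor memo"},
    {"text": "Soccer practice moved to Thursday", "source": "family chat"},
    {"text": "Client A's retainer renews in March", "source": "invoices"},
]

def relevant(text: str, prompt: str) -> bool:
    # Stand-in for embedding similarity: naive keyword overlap.
    return bool(set(text.lower().split()) & set(prompt.lower().split()))

def answer(prompt: str) -> list[dict]:
    # The AI reaches into the bowl: every memory is eligible for
    # every prompt, because there is nothing to filter on.
    return [m for m in memories if relevant(m["text"], prompt)]

print(answer("what's in the march forecast"))
# Pulls the investor memo AND the client retainer -- the data is
# accurate, there's just no notion of which room either belongs to.
```

Notice what's missing: no field says which room a memory belongs to, so even a perfectly well-behaved retrieval step has nothing to scope by.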

Three audiences, same anatomy

Personal. This is the wrong-event-in-the-wrong-slot version. There's a Personal me and a Work me, and they're not the same person in any way that matters operationally. The Personal me uses one voice. The Personal me has its own context, the running route, the books in the queue, the side projects, the everyday stuff. The Personal me does not know my quarterly forecast and does not need to. When the AI gets confused about which me it's serving, the failure mode is mild today (a misrouted reminder, a casual tone in a stiff context) and embarrassing tomorrow (a private joke landing in a board prep doc). Today's leakage is the friendly preview of the version that's going to hurt later.

Small Business. If you run a side business (let's say you're a freelance designer with three retainer clients) you already feel this. You don't want client A's mood board showing up in client B's invoice email. You don't want your business research mixing with your personal Pinterest queries. And you really don't want your AI's helpful "I remembered this from before" to be a window into the wrong client. Right now most people running small businesses solve this by using two browsers, two Notion workspaces, sometimes two laptops. That's not a fix. That's a duct-tape strap holding the bowls apart. The minute you forget which browser you're in (and you will) the wall is gone. I've watched friends accidentally autocomplete a client email with another client's pricing because their AI helpfully "remembered" what they'd been writing about earlier that day. The data wasn't wrong. The scope was.

Enterprise. At a real company, the one-big-bowl problem isn't just embarrassing. It's audit-failing. If you're operating under SOC 2 (a security and operational controls audit, if you want to look it up later) and your AI assistant can pull production database rows into the same context where it's drafting marketing copy, your auditors are going to have a long conversation with you about scope. The whole point of an audit boundary is that things on one side of the boundary aren't supposed to be visible from the other side. AI that flattens every context into one searchable blob deletes that boundary. Your AI vendor's marketing page might say "enterprise-ready." Your auditor is going to ask what enforces the room walls. If the answer is "the system prompt asks the model nicely," that's not an answer.
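
For a feel of what a real answer looks like, here's a sketch of enforcement at the tool-dispatch layer. The names (the registry, ScopeError) are invented for illustration; the point is that the check is code the prompt can't talk its way around:

```python
TOOLS = {
    "draft_marketing_copy": lambda brief: f"Draft: {brief}",
    "query_production_db": lambda sql: ...,  # exists in the system, but
                                             # not in every room's reach
}

class ScopeError(Exception):
    """A prompt tried to reach outside its persona's room."""

def call_tool(allowed: set[str], tool: str, *args):
    # Enforcement is code, not a sentence in the system prompt:
    # off-allowlist calls never happen, no matter what the prompt says.
    if tool not in allowed:
        raise ScopeError(f"{tool!r} is outside this persona's scope")
    return TOOLS[tool](*args)

marketing_tools = {"draft_marketing_copy"}
call_tool(marketing_tools, "draft_marketing_copy", "spring launch")  # fine
# call_tool(marketing_tools, "query_production_db", "SELECT ...")   # ScopeError
```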

The shape is the same in all three. Personal, small business, enterprise: same one-bowl mistake, different stakes.

The shape that fixes it

Here's the punchline early, and the next thirteen articles are going to unpack it: the missing primitive is the persona.

A persona is a container. It holds a slice of who you are when you're doing a certain kind of thing. My Personal persona. My Work persona. My Family persona. My Blogging persona. Each one is its own room. Each one has its own:

  • context (what's currently being discussed)
  • memory (what gets remembered between sessions)
  • tools (which connectors and apps the AI can reach for)
  • identity (who the AI is acting as, when it acts)
  • audit trail (what gets logged, for who, for how long)

When my AI is in the Family persona, it can see the family calendar. It cannot see my work calendar. When it's in the Work persona, that reverses. Not because we're asking the model to "remember to stay in character." Because the room itself doesn't have a door to the other room's stuff.

That last part is what people miss when they hear "persona" and think "system prompt that says be friendly." This isn't a tone setting. It's a fence. A door. A wall. The persona isn't what the AI sounds like. It's what the AI can see, what it can do, what it remembers, and who it says it is.
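
To make the container concrete, here's the smallest version I can write down as a data structure. The field names are assumptions of mine, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    # A container, not a costume: it scopes what the AI can see, do,
    # remember, and be, plus what gets logged. Field names are mine.
    name: str
    context: list[str] = field(default_factory=list)      # what's in play right now
    memory: dict[str, str] = field(default_factory=dict)  # what persists between sessions
    tools: set[str] = field(default_factory=set)          # connectors the AI can reach for
    identity: str = ""                                    # who the AI acts as
    audit_log: list[str] = field(default_factory=list)    # what gets recorded, and for whom

family = Persona(
    name="Family",
    tools={"family_calendar", "school_portal"},
    identity="family-assistant",
)

# The wall is structural: this room holds no handle to the work
# calendar, so no prompt asked inside it can reach one.
assert "work_calendar" not in family.tools
```

The assert at the end is the whole thesis in one line: the constraint holds because the reference doesn't exist, not because the model promised to behave.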

[Figure: same total information, walls between rooms. Three rooms, Personal, Work, and Family, each with its own context, memory, tools, identity, and audit; the AI occupies one active room. The AI is only ever in one room at a time. So are you.]
The persona is the room. The AI is only ever in one room at a time.
Want the longer definition? I dig into the container framing in "A persona is a container, not a costume."

What the next thirteen Tuesdays will cover

I'm going to take this apart in public, one piece a week, through April. Some of the pieces are about the shape itself. Some are about why the existing alternatives don't work. Some are about how this scales from a household up to a real company without changing the primitive.

A few of the ones I'm most excited to write:

  • The piece on first-class identity, where I argue your AI should have its own email address, its own Slack handle, its own login, treated the same way as an employee in your RBAC system (role-based access control, the thing that controls who can do what). Shared service accounts like [email protected] are an anti-pattern and I'll explain why.
  • The piece on the symmetry rule: when you switch personas, your AI switches with you. The user and the AI are always in the same room. Without that, you drift apart and you're right back to the wrong-bowl problem.
  • The piece on memory isolation, which is probably the single biggest unlock for AI being actually useful day-to-day instead of vaguely useful in flashes.
  • The piece on personas all the way down to the database, because if the room walls stop at the model layer and the database below it still sees everything, you didn't build walls. You built curtains. (There's a sketch of that difference right after this list.)
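
Here's the curtains-versus-walls difference at the storage layer, sketched with Python's stdlib sqlite3. The schema is mine, and a real system would more likely use row-level security or per-persona stores, but a WHERE clause is the minimal version of the idea:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (persona TEXT, text TEXT)")
db.executemany("INSERT INTO memories VALUES (?, ?)", [
    ("work", "Q3 forecast shows revenue up 12%"),
    ("family", "Soccer practice moved to Thursday"),
])

def recall(persona: str, term: str) -> list[str]:
    # The filter runs below the model. A prompt in the Family room
    # can't phrase its way past a WHERE clause it never sees.
    rows = db.execute(
        "SELECT text FROM memories WHERE persona = ? AND text LIKE ?",
        (persona, f"%{term}%"),
    )
    return [text for (text,) in rows]

print(recall("family", "Soccer"))    # ['Soccer practice moved to Thursday']
print(recall("family", "forecast"))  # [] -- the row exists, just not in this room
```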

The through-line is this: AI today is one big bowl, and the next generation of useful AI is going to be a house with rooms. The walls aren't a cage. They're what makes each room safe to actually live in.

What I'd ask first if you're building this

If you're a person: notice, this week, every time your AI surfaces something from the wrong slice of your life. Don't fix it yet. Just notice. The pattern is louder than people think once you start watching for it.

If you're running a small business: count how many of your AI workflows have access to all of your accounts, all of your clients, all of your documents, versus how many are scoped to one. My bet is the ratio is bad.

If you're at an enterprise: ask your AI vendor where the boundary is between contexts. Not the marketing answer. The technical one. If they say "we have role-based prompts," that's not a boundary. That's a request. A boundary is something the model literally cannot cross because the data isn't on its side of the wall.
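
If it's easier to see in code, here's the difference in miniature. The llm function is a stand-in, and the doc shape is invented:

```python
def llm(prompt: str) -> str:
    # Stand-in for the model call; what matters is what reaches it.
    return f"(model saw {len(prompt)} chars)"

docs = [{"room": "work", "text": "..."}, {"room": "personal", "text": "..."}]

def role_based_prompt(question: str) -> str:
    # A request: everything still crosses to the model's side, and the
    # instruction is just more text it may or may not honor.
    return llm(f"Use only work docs. Docs: {docs}. Q: {question}")

def real_boundary(question: str, room: str) -> str:
    # A wall: out-of-room data never reaches the model at all.
    visible = [d for d in docs if d["room"] == room]
    return llm(f"Docs: {visible}. Q: {question}")
```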

That's the series. The fix isn't a smarter model. The fix is a shape we forgot to build under the model. I'm calling that shape the persona, and starting next Tuesday I'm going to take it apart piece by piece.