Personas are first-class identities, not service accounts

Stop running your AI under a shared service account. Give the persona its own login, its own email, its own audit trail. Treat it like an employee, because that's what it is now.

I'm going to make the most opinionated argument of this whole series in this article, so I'll get it out of the way up top.

Your AI should have its own email address. Its own Slack handle. Its own login in your identity system. Its own role and permission set. Its own audit trail. Its own performance reviews. Its own termination process. Treat it exactly the same way you'd treat an employee, because at the level of "thing that takes action in your systems," that's what it is.

[Figure: the same six actions, two audit shapes. Left, a shared service account: ai-bot@co sent an email at 09:01, created a ticket at 09:14, ran a report at 09:31, posted a message at 10:02, closed a PR at 10:15, and paid an invoice at 10:48; the log cannot say who, why, or for whom. Right, first-class persona identities: marketing-persona@ sent the email (by P1), support-persona@ created the ticket (by P1), billing-persona@ ran the report (by P2), comms-persona@ posted the message (by P1), repo-persona@ closed the PR (by P3), and ap-persona@ paid the invoice (by P2); all three questions are answered on every row. The left log is unanswerable. The right tells a story per row.]
Same six actions. Two audit shapes.

The shared [email protected] service account that does everything for everyone? That's the anti-pattern. It's the load-bearing mistake under a huge amount of bad AI deployment in 2026 and it's going to be the cause of most of the AI incidents you read about in the next two years. I'll explain why, and I'll explain the shape that replaces it. The shape, as the title gives away, is the persona, but specifically the persona as a first-class identity, not as a label glued onto a shared account.

This is the core thesis piece of the series. Everything else in the next thirteen weeks assumes you buy this.

What a "first-class identity" actually means

In identity-management land, "first-class" means a thing is a real citizen of your system, not a second-class proxy for something else.

A first-class identity:

  • has a unique account ID in your directory (Active Directory, Okta, Google Workspace, whatever runs your org)
  • can be granted and revoked permissions the same way a human user can
  • is the actor in audit logs (not a stand-in for someone else)
  • has a lifecycle: created, modified, suspended, retired
  • can own things: an email address, a chat handle, files, calendar invites
  • can be a member of groups, included in policies, named in workflows
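
The checklist above can be sketched as a directory record. This is a minimal illustration only; the field names, lifecycle states, and methods are all hypothetical, not any real IAM schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Lifecycle(Enum):
    """The lifecycle a first-class identity moves through."""
    CREATED = "created"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    RETIRED = "retired"


@dataclass
class PersonaIdentity:
    """A persona as a real citizen of the directory, not a proxy."""
    account_id: str                                   # unique ID in the directory
    email: str                                        # owns its own inbox
    chat_handle: str                                  # its own Slack/Discord login
    permissions: set = field(default_factory=set)     # grantable/revocable like a human's
    groups: set = field(default_factory=set)          # policy and workflow membership
    state: Lifecycle = Lifecycle.CREATED

    def grant(self, perm: str) -> None:
        self.permissions.add(perm)

    def revoke(self, perm: str) -> None:
        self.permissions.discard(perm)

    def suspend(self) -> None:
        # Suspending one persona breaks one persona's workflows, not fifty.
        self.state = Lifecycle.SUSPENDED
```

The point of the sketch is that every property in the bullet list is an ordinary field or method, the same shape a human account has.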

A service account is a different beast. It's a generic account you create for "the system" or "the automation" to operate as. It's usually shared. It's usually permissioned broadly because it has to cover lots of cases. Its audit trail is "the bot did it." Suspending it breaks fifty workflows at once, so nobody suspends it. It accumulates access over time and nobody can ever quite say what it should and shouldn't have. You know what I'm describing because you've seen it. Every company has at least one of these accounts, and the older the company is, the more it has.

When you wire your AI to one of those, you're piling AI's actions onto the worst account in your org. Don't do that.

A persona, properly built, is a first-class identity for the AI itself. Not the human running the AI, not the team owning the AI: the AI itself. Each persona gets its own identity.

Why this matters: the audit trail problem

Let me make this concrete with the scenario I've seen play out at every company that runs AI through service accounts.

Something goes wrong. An email gets sent that shouldn't have, a record gets updated incorrectly, or a document gets shared with the wrong group. Somebody needs to figure out what happened.

You pull the audit log. The log says: [email protected] did the thing.

Now what?

Who initiated it? You don't know; the bot account doesn't carry the upstream identity. Was it a scheduled job? A user request? A different AI calling this one? The log doesn't say.

Who was the bot acting for? You don't know. The bot serves everyone.

What context was it in? You don't know. The bot has access to everything.

What permission did it use? It used the union of all its permissions, which is enormous, because it has to be in order to serve every use case.

You're stuck. The audit log can tell you that an action happened. It cannot tell you the story of why it happened, on whose behalf, in which scope. And that's the whole point of an audit log: the story, not the event.

Now imagine the same scenario with persona-as-identity. The log says [email protected] did the thing, initiated by [email protected], in the context of "Q3 product launch campaign," using the persona's marketing-tooling permission set. You can answer every question about that action. You can decide whether it was correct. You can scope the fix.
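
To make the difference concrete, here is a sketch of the two log shapes as records. The field names and addresses are hypothetical; the structural point is that the shared-account entry is a dead end, while the persona entry carries the answers in the row itself.

```python
# Shared service account: the actor field answers nothing.
shared_entry = {
    "actor": "ai-bot@example.com",
    "action": "sent_email",
    "ts": "09:01",
}

# First-class persona: the row carries the whole story.
persona_entry = {
    "actor": "marketing-persona@example.com",  # which persona acted
    "initiated_by": "jane@example.com",        # the human (or upstream agent) behind it
    "context": "Q3 product launch campaign",   # the scope the action happened in
    "permission_set": "marketing-tooling",     # the specific grant it used
    "action": "sent_email",
    "ts": "09:01",
}

# The forensic questions from above: who initiated it, in what
# context, using which permission?
FORENSIC_FIELDS = ("initiated_by", "context", "permission_set")


def answerable(entry: dict) -> bool:
    """Can this log entry answer who, for whom, and in what scope?"""
    return all(f in entry for f in FORENSIC_FIELDS)
```

Run `answerable` on each entry and you get the two audit shapes from the figure: the shared account fails every question, the persona passes all of them.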

Want to go deeper on what "user behind the persona" means structurally? Next week I'm writing about "The user and the AI share the same persona," which is the symmetry rule that makes the audit story work.

That's the difference. One log tells you nothing. The other tells you a story.

Treat the AI like an employee

The cleanest mental model I've found for this is: treat the persona like an employee. Not metaphorically. Operationally.

When you hire a new person, what happens?

  1. They get an account in your identity system.
  2. They get an email address.
  3. They get added to the chat tools. Slack, Discord, whatever.
  4. They get a role with specific permissions.
  5. They get a manager.
  6. They get a job description.
  7. Their actions get logged under their name.
  8. They get performance feedback.
  9. When they leave, their account gets suspended, their access revoked, their files transitioned, their records retained per policy.

When you stand up a new AI persona, what should happen?

Exactly the same nine things.
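
As a sketch, the nine onboarding steps map one-to-one onto a provisioning routine. Every call here is a hypothetical stand-in for whatever your directory, mail, and chat systems actually expose; the shape, not the API, is the point.

```python
def provision_persona(name, role, manager, job_description, directory):
    """Run the same nine onboarding steps you'd run for a human hire.

    `directory` is a hypothetical facade over your identity, mail,
    and chat systems; none of these method names are a real API.
    """
    account = directory.create_account(name)                # 1. directory account
    directory.assign_email(account, f"{name}@example.com")  # 2. its own inbox
    directory.create_chat_handle(account, f"@{name}")       # 3. its own chat login
    directory.grant_role(account, role)                     # 4. least-privilege role
    directory.set_manager(account, manager)                 # 5. an accountable human
    directory.set_description(account, job_description)     # 6. why it exists
    directory.enable_audit(account)                         # 7. logged under its own name
    directory.schedule_review(account)                      # 8. performance feedback
    directory.register_offboarding(account)                 # 9. documented retirement path
    return account
```

If your provisioning script has fewer than nine steps, one of the employee steps got skipped, and that is usually the one that bites you later.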

I'm not being cute about this. The mistake in current AI deployments is that we're treating the AI as a tool (like a database, or a script) when it's behaving as a worker. Tools don't need identity. Workers do. The AI takes action in the world: it sends messages, updates records, books meetings, writes code, and commits things. That's worker behavior. And workers need first-class identity, the same way human workers do.

So: my blogging persona has its own email address ([email protected]). When it replies to a comment, the audit trail shows the persona's identity, not mine. Its chat handle is its own. Its permission set is its own (it can post to the blog, it can search the blog's media library, it cannot touch my work calendar or my family chat). It has a manager (me) who reviews what it's been doing. When I want to change models or retire the persona, the lifecycle motion is the same one I'd run for a departing employee.

The reason this matters isn't because I think AI is sentient. I don't. It matters because the records AI generates need the same accountability shape as the records humans generate. And the only way you get that is by giving the AI the same identity primitive.

What changes at three scales

Personal. This sounds like overkill at the personal level, and a lot of it is. I'm not asking you to spin up Okta to manage your home AI. But the principle is the same even at small scale. My Blogging persona has its own email address because the day a comment-spam wave hits, I want to be able to mute that one address without muting everything. My Homelab persona has its own SSH key, separate from mine (an SSH key is a kind of digital identity for connecting to servers, if you want to look it up later), so if the homelab gets compromised, I rotate one key, not all of them. The principle is "one identity per persona," and even at personal scale it gives you the ability to scope down a problem without blowing up your life.

[Figure: "Standing up a persona." The same nine steps apply to a human hire and an AI persona: 1. directory account (AD entry / IAM role); 2. email address (owns its inbox); 3. chat handle (Slack / Discord login); 4. permissions (RBAC scope, least privilege); 5. manager (a human who is accountable); 6. job description (why it exists, what it does); 7. audit under its own name (the log shows the persona, not "the bot"); 8. performance review (outcomes evaluated); 9. suspension / retirement (documented offboarding). Anything you'd skip for an employee is the thing that bites you.]
Same nine steps for a person, same nine steps for a persona.

Small Business. This is where the cost-benefit of first-class identity shifts hard in your favor. If you run a side business through a single Gmail account and let your AI act through that account, every email it sends looks like it came from you personally, and every audit question about what happened becomes "did I do this or did the AI?" Give the business AI its own login. Its own inbox. When clients reply, the replies route correctly. When you eventually hire a human assistant, you can hand them the same identity to inherit, instead of trying to disentangle "your stuff" from "the AI's stuff" inside your own personal accounts. The persona was always meant to be a separate worker. Treating it as one makes the handoff to a human (or to a different AI, or to nobody, if you retire it) clean.
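
One way to see why the separate identity makes the handoff clean: the operator behind the identity changes, but the identity itself (inbox, handle, permissions) does not. A sketch with hypothetical fields and addresses:

```python
# The identity is stable; only who operates it changes.
assistant_identity = {
    "email": "assistant@example.com",     # clients keep replying here
    "chat_handle": "@assistant",
    "permissions": {"crm.read", "email.send"},
    "operated_by": "ai:persona-v1",       # today: an AI
}


def hand_off(identity: dict, new_operator: str) -> dict:
    """Swap the operator without touching the identity itself."""
    updated = dict(identity)              # copy; the old record stays intact
    updated["operated_by"] = new_operator
    return updated


# Tomorrow: a human assistant inherits the same identity untouched.
human_run = hand_off(assistant_identity, "human:sam")
```

Compare that with disentangling "your stuff" from "the AI's stuff" inside your own personal accounts: there is no single field to swap, because the identity was never separate to begin with.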

Enterprise. At an enterprise, this isn't a nice-to-have. It's the difference between an AI program you can defend in an audit and one you can't. Your security team (if you have one paying attention) is going to ask you the following questions in the next twelve months: How do you provision AI access? How do you revoke it? How do you audit AI actions independently of the human users who triggered them? How do you enforce least privilege on AI? How do you separate duties between AI agents and human approvers? Every one of those questions has a clean answer if your AI is built on first-class persona identities. Every one of those questions has a stammering answer if your AI is running through [email protected].

Specifically: if you're under SOC 2, ISO 27001, HIPAA, or any regulated regime where "who did what and when" matters, the service-account pattern is going to fail an audit eventually. Auditors are getting more sophisticated about AI. The grace period (where "the AI did it" was an acceptable answer) is closing. I'd rather get ahead of that than get caught by it.

The objection I hear most

The pushback I get when I talk about this goes something like: "Isn't this overhead? Standing up an identity per persona sounds like a lot of bureaucracy for what's essentially just a script."

Three answers.

It's only a script until it isn't. Today's "just a script" AI is going to be tomorrow's autonomous agent making decisions on your behalf at 2am. Better to set up the identity primitive while the stakes are low than to retrofit it while you're in the middle of an incident.

The overhead is one-time per persona, not per action. You set up the identity once, and after that every action flows through it automatically. The cost is in standing it up; the benefit compounds every day after.

The alternative isn't cheaper. It's just deferred. You'll pay the cost eventually, either when you have to forensic-trace what your bot account did, or when you have to disentangle four years of accumulated permissions on a single shared account, or when you have to explain to a customer or regulator what happened and you don't have the records. I've watched all three. None of them are cheap.

The one rule

If you take one rule from this whole article, take this:

Every persona has its own identity. No persona ever shares an account with another persona, or with a human, or with a generic service account.
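
The rule is mechanically checkable. Here is a sketch of a lint that flags any account held by more than one principal, whether persona, human, or service; the data and field shapes are made up for illustration.

```python
from collections import defaultdict


def shared_accounts(assignments: dict) -> dict:
    """Return accounts used by more than one principal.

    `assignments` maps principal -> account. The rule demands this
    mapping be one-to-one: any account with two or more principals
    behind it is a violation.
    """
    holders = defaultdict(set)
    for principal, account in assignments.items():
        holders[account].add(principal)
    return {acct: who for acct, who in holders.items() if len(who) > 1}
```

Run it against your directory export once a quarter and the shared-bot anti-pattern shows up as a non-empty result instead of a surprise during an incident.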

That's the rule. The next thirteen weeks of this series are basically about what follows once you accept it.

If your AI today shares an account with another AI, or with you, or with "the bot," that's the first thing I'd change. Pick one persona (start with the highest-risk one, or the noisiest one) and give it its own identity. Watch how the audit trail clears up. Watch how it becomes possible to talk about what that specific persona did this week. Watch how easy it becomes to scope down a problem or retire a use case.

Then do it for the next one.

Next Tuesday: the symmetry rule. The user and the AI are always in the same persona at the same time. When you walk into a room, the AI walks with you. When you leave, it leaves too. Without that, the rooms drift apart, and you're right back to the leak from week one.