Autonomous agents inherit the persona
When an agent runs unattended, it acts as a persona. The audit trail names the persona, not 'the AI'. The persona is the answerable party.
The question I keep getting about autonomous agents is the wrong question. People ask me, "who's responsible when an AI agent does something on its own?" The way they ask it, they're imagining the AI as this floating, ambient thing that exists somewhere out there, and the question is whether you can pin a decision on it.
The frame I'd offer instead: there is no ambient AI making the decision. There is a persona, and the persona made the decision, and the persona is the answerable party. The agent is the runtime the persona acted through. That's a meaningful shift in how you think about the whole thing, and I want to walk you through why I'm so insistent on it.
The shape of the claim
When I run an agent unattended (let it loose for an hour, a night, a week, on a defined task), the agent is not its own creature. It is logged in as one of my personas. My blogging persona, say, or my homelab persona, or, at work, the deal-ops persona somebody set up to handle the boring half of CRM hygiene. The agent isn't operating "on behalf of the AI." The AI is the engine. The persona is the driver of record.
This matters because the audit trail has to name somebody. When an action happens (an email sent, a file moved, a Jira ticket transitioned, a row updated) somebody's name goes on the line where it asks who did this. The instinct, in a lot of early agent systems I've looked at, is to put "AI agent" there. Or worse, the name of the human who started the run, three days earlier, who has now gone to sleep and isn't watching. Both of those are wrong, and they're wrong in the same way: they don't name an entity that you could go talk to, suspend, retrain, or fire.
The persona is the entity you can go talk to, suspend, retrain, or fire. Because the persona has its own identity (I wrote about that two months ago, see Personas are first-class identities, not service accounts), it has the same kind of credentials a human employee has. It has an email address. It has a Slack handle. It has RBAC. It has a record of what it can and can't do. When the agent acts as the persona, every one of those facts comes with it. The decision goes on the persona's ledger.
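To make that concrete, here's a minimal sketch of what a persona record might look like, in Python. The field names and the deal-ops example are mine, not a real schema; the point is that the persona is a first-class identity with its own credentials and grants, not a service account.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A named, first-class identity an agent can run as.
    Illustrative fields only, not a real schema."""
    persona_id: str                                       # stable identifier, e.g. "deal-ops-persona"
    email: str                                            # the persona's own mailbox, not the owner's
    slack_handle: str                                     # where you "go talk to" the persona
    roles: set[str] = field(default_factory=set)          # RBAC roles granted to the persona
    tool_grants: set[str] = field(default_factory=set)    # tools it may call
    owner: str = ""                                       # the human ultimately accountable for it
    suspended: bool = False                               # the lever you pull when something goes wrong

deal_ops = Persona(
    persona_id="deal-ops-persona",
    email="deal-ops@example.com",
    slack_handle="@deal-ops",
    roles={"crm.hygiene"},
    tool_grants={"crm.update_opportunity", "email.send_template"},
    owner="sales-ops-lead",
)
```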
Picture how that lands at each scale.
Personal. I let my blogging persona run overnight to draft replies to comments on the site. The next morning I look at the audit log. It says: blogging-persona@eotm posted three comment replies, drafted one and flagged it for review, archived two spam comments. It does not say "the AI" did those things. It says the persona did them. If one of the replies is dumb, the blogging persona is the one I retrain. I don't blame "my AI" in the abstract, because there is no my AI in the abstract, there's a persona, and that persona is the thing I'd take corrective action against.
Small business. Say you've got a deal-ops persona that handles the unloved end of your CRM. Overnight it cleans up stale opportunities, sends standard nudges, flags weird stuff for a human in the morning. When the team comes in, they see deal-ops-persona did X, Y, and Z. Not "the AI did some stuff." If deal-ops sent a nudge that shouldn't have gone out, you have a name to point at, a scope to adjust, and a record of exactly what credentials and tools it was operating with at the moment of the mistake. That's enormously better than "the AI made a mistake somewhere, we'll look into it."
Enterprise. This is where it stops being a nicety. If you run an agent against your production systems and the audit trail says "AI did this," your security team's life just got a lot worse. They can't tell that agent apart from any other agent. They can't suspend it without suspending all the others. They can't run a usage report on what this agent has been doing this quarter, because there isn't a this agent, there's just the abstract AI doing things. The persona model gives security a thing they recognize: a named identity, with credentials, with tool grants, with an audit trail. They already know how to handle that. They handle humans that way. They can handle a persona-shaped agent the same way.
"But the AI made the decision"
I want to address this head-on, because it's the part where the intuition pushes back hardest.
It is true that the actual choice (to send this email and not that one, to update this record and not that one) was produced by a model running inference. The model did the prediction. The model is "the AI" in the colloquial sense.
But the model is not the answerable party, in the same way that a junior employee's brain is not the answerable party for the decisions the junior employee makes. The brain produced the decision. The employee (a named, hireable, fireable, accountable person) is the one who owns it. The brain is the engine. The employee is the driver of record.
The persona is the same shape. The model produced the decision. The persona (a named, suspendable, retrainable, RBAC-scoped identity) is the one who owns it. The persona acts under your direction, with authority you delegated to it. But the decision goes on its line of the ledger, not on a generic "AI" line.
The reason I keep pulling this metaphor back to the junior-employee shape is that I think it's the only frame that produces the right behavior. If you treat the AI as ambient, you don't know who to talk to when something breaks. If you treat the AI as "just the user, automated," you blur every action back to the human in a way that makes the audit story useless. The persona-as-employee frame lands in the middle: the persona is a real party. It has authority delegated to it. It does its job. If it does it badly, you have somebody specific to deal with, and a specific set of grants you can pull back.
What this changes about how you design
A few practical things drop out of treating the agent as the persona it's running as, rather than as a separate floating thing.
The agent's credentials are the persona's credentials. Not a service account, not a shared bot token, not the human's personal API key from when they were testing. The persona owns the token. The token is provisioned to the persona. When the agent makes a tool call, that call hits the tool surface with the persona's identity attached. The tool sees who's asking. The tool's logs name the persona.
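A sketch of what that looks like at the call site, with an invented in-memory credential store standing in for whatever your stack actually uses:

```python
# Hypothetical sketch: the agent never calls a tool with a bare service token.
# Every call carries the persona's identity, so the tool's own logs name the persona.

CREDENTIALS = {"deal-ops-persona": "tok_deal_ops_123"}   # provisioned per persona, never shared

def call_tool(tool: str, args: dict, persona_id: str) -> dict:
    token = CREDENTIALS[persona_id]          # the persona owns the token
    request = {
        "tool": tool,
        "args": args,
        "actor": persona_id,                 # who the tool sees asking
        "authorization": f"Bearer {token}",
    }
    # send(request) would hit the real tool surface; here we just return it
    return request

print(call_tool("crm.update_opportunity", {"id": "OPP-42", "stage": "stale"}, "deal-ops-persona"))
```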
The agent's data scope is the persona's data scope. This is where it ties into everything else in this series: the agent reaching into memory hits the persona's memory, the agent reaching into the vector index hits the persona's index (see last week's piece, Embeddings need personas too), the agent reaching into the database hits rows scoped to the persona. The agent doesn't get a wider view than the persona it's running as. Whatever the persona can see, the agent can see. Whatever the persona can't see, the agent can't see, no matter how clever the agent gets.
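Here's the database half of that as a runnable sketch using sqlite3. The table and the persona names are made up; the memory and vector layers would apply the same filter in their own terms. What matters is that the scope is enforced below the agent, not left to the agent's good behavior.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (persona_id TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('blogging-persona', 'draft reply to comment 17')")
conn.execute("INSERT INTO notes VALUES ('homelab-persona', 'rotate backup disks')")

def notes_for(persona_id: str):
    # the persona scope is applied here, in the data layer, so the agent cannot widen it
    return conn.execute(
        "SELECT body FROM notes WHERE persona_id = ?", (persona_id,)
    ).fetchall()

print(notes_for("blogging-persona"))   # only the blogging persona's rows come back
```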
The agent's audit trail is the persona's audit trail. Every action goes on the persona's record. End of week, end of quarter, end of incident, you can pull the persona's log and read what happened in plain English. If a regulator asks who did the thing, you point at the persona. The persona is the party of record.
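A minimal sketch of what writing that record might look like, assuming a plain append-only JSON-lines log; the field names are illustrative. The important part is the actor field: it names the persona, never "the AI" and never the human who clicked "run" days earlier.

```python
import json
import datetime

def audit(actor_persona: str, action: str, target: str, log_path: str = "audit.log") -> None:
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor_persona,      # the party of record
        "action": action,
        "target": target,
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

audit("deal-ops-persona", "opportunity.closed_as_stale", "OPP-42")
```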
And (this is the one most people miss) the agent's scope of authority is the persona's scope of authority. If the persona doesn't have the right to approve invoices over $5,000, the agent running as that persona doesn't either. The model can't talk its way out of the persona's grants. The persona is a hard wall. The agent is bounded by it because the agent doesn't have an identity of its own that isn't the persona's.
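A sketch of that wall as a pre-flight check, with invented grant names and the $5,000 limit from above:

```python
# The requested action is checked against the persona's grants before anything runs.
# Grant names and limits are made up for illustration.

GRANTS = {
    "deal-ops-persona": {
        "invoice.approve": {"max_amount": 5_000},
        "crm.update_opportunity": {},
    }
}

def authorize(persona_id: str, action: str, amount: float = 0.0) -> bool:
    grant = GRANTS.get(persona_id, {}).get(action)
    if grant is None:
        return False                                   # no grant, no action
    limit = grant.get("max_amount")
    return limit is None or amount <= limit            # inside the persona's scope, or refused

assert authorize("deal-ops-persona", "crm.update_opportunity")
assert not authorize("deal-ops-persona", "invoice.approve", amount=12_000)  # over the wall
```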
Want the depth version of the audit-and-accountability side of this? The pattern I'm describing here lines up with the lifecycle piece coming in two weeks, Suspending, retiring, and delegating personas. If a persona-driven agent does something it shouldn't, the move you make is the same move you'd make against a human acting under delegated authority: pause the persona, pull the grants, review the trail, decide whether to bring it back. That's not a metaphor. That's the operational pattern.
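In code terms, the move might look something like this. The structures are toy stand-ins, but the sequence (suspend, revoke, keep the trail) is the point:

```python
PERSONAS = {"deal-ops-persona": {"suspended": False}}
GRANTS = {"deal-ops-persona": {"crm.update_opportunity", "email.send_template"}}
TRAIL = []   # stands in for the real audit log

def suspend_persona(persona_id: str, reason: str) -> set:
    PERSONAS[persona_id]["suspended"] = True      # agents running as this persona stop cold
    revoked = GRANTS.pop(persona_id, set())       # pull the grants
    TRAIL.append({"actor": "sales-ops-lead", "action": "persona.suspended",
                  "target": persona_id, "reason": reason})
    return revoked                                # held aside while you review the trail

suspend_persona("deal-ops-persona", "sent a nudge outside the approved template")
```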
The part I want to be honest about
I am not pretending this is a finished story. The hard part about autonomous agents is that they can chain. A persona runs an agent that calls a tool that calls another tool that delegates to another agent, and the chain has to keep carrying the persona through every step or the whole accountability story collapses at the first hop. If somewhere down the chain the call goes out as a generic service identity, you've lost the trail. The persona-as-identity discipline only works if it holds end-to-end. Halfway counts as zero.
I think that's solvable. I think it requires the tool layer to ask, on every call, "who is the persona behind this, really?" Not "what token am I seeing right now," but a real provenance chain back to a named persona. That's what makes the audit story bulletproof. Tools that don't ask that question are tools you can't safely give to an autonomous agent, because they break the chain.
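Here's roughly what that check could look like at the tool layer, assuming each hop forwards a provenance chain whose root must be a named persona. The chain format is invented for illustration:

```python
def verify_provenance(chain: list[dict], known_personas: set[str]) -> str:
    """Walk the call chain back to its root and require a named persona there.
    Refuses the call if the trail dead-ends in a generic service identity."""
    if not chain:
        raise PermissionError("empty provenance chain")
    root = chain[0]
    if root.get("persona") not in known_personas:
        raise PermissionError("chain does not terminate in a named persona")
    return root["persona"]

chain = [
    {"persona": "deal-ops-persona"},       # the persona the run started as
    {"hop": "planner-agent"},              # intermediate hops carry it forward
    {"hop": "crm-tool"},
]
print(verify_provenance(chain, {"deal-ops-persona", "blogging-persona"}))
```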
If you're building agents in your shop and you only do one thing on this front, do this: make sure every action your agent takes carries the persona it's running as, and make sure your audit log writes that persona's name, not "the AI" and not the human who clicked "run" three days ago. The rest of this (the credentials, the scopes, the suspension story) follows from that one habit. The audit log is where the truth lives. Put a name in it. Put the persona's name.
The agent's identity is the persona. The persona is the answerable party. If the agent makes a decision, the persona made it. That's the rule.