The OPA / Rego renaissance, courtesy of AI policy
Open Policy Agent and the Rego language had a quiet decade. They became the de-facto standard for Kubernetes admission control, infrastructure-as-code policy, and a handful of API-gateway use cases. Solid niche; not a category that drove the industry conversation. The CNCF graduated OPA, the user list grew steadily, the conferences had a track for it, and most engineers outside the platform-team subset didn't think about it.
That's changed in the last six months. The AI agent wave needs a policy layer, the layer that fits is approximately what OPA does, and a meaningful slice of new AI-platform work is reaching for OPA + Rego as the first answer. The renaissance is real, the reasons are structural, and it's worth being concrete about them.
What changed
Two shifts brought OPA into the AI-platform conversation:
The agent-everywhere pattern needs policy decisions at scale. Every agent action (call a tool, access a data source, perform a state-changing operation) is a policy decision. "Is this agent allowed to call this tool with these arguments in this context?" The decision needs to be fast, auditable, and decoupled from the agent code. That's the exact shape OPA was designed for.
The agentic-governance gap from Build / I/O is a policy gap. The contradiction in the agents-everywhere pitch mostly comes down to the lack of a coherent policy story. OPA is the most mature open-source answer to "policy decisions as a service." The fit isn't subtle.
The result: AI platform teams are finding OPA, and OPA's roadmap is bending toward AI-policy use cases. The renaissance is mid-stride.
The shape of the AI-policy use cases
Four patterns where OPA is showing up in AI-platform work:
Tool-call authorization. An agent wants to call a tool. The agent's request, plus the agent identity, plus the user identity, plus the tool's metadata, gets evaluated against a Rego policy that returns allow/deny plus reasoning. The policy lives separately from the agent code; engineers can update the policy without redeploying agents; auditors can inspect what's allowed without reading code.
The pattern fits OPA's strengths: structured input, structured policy, structured output, fast evaluation, decoupled from the calling system.
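As a minimal sketch of what such a policy might look like (the package name, input fields, and data documents here are illustrative, not a standard):

```rego
package agents.toolcall

import rego.v1

default allow := false

# Allow only when nothing objects; the reasons double as the audit trail.
allow if count(deny_reasons) == 0

# Each rule contributes a human-readable reason when its check fails.
deny_reasons contains "tool not granted to this agent" if {
    not input.tool.name in data.agent_tool_grants[input.agent.id]
}

deny_reasons contains "user lacks required role for tool" if {
    not data.tools[input.tool.name].required_role in input.user.roles
}
```

Returning a reasons set alongside the boolean is a common convention, not an OPA requirement; it gives the caller the "allow/deny plus reasoning" shape described above.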
Data-access scoping. When an agent retrieves from a data source, the per-document filtering is policy-driven. Rego evaluates "should this user, via this agent, see this document?" against the document metadata and user attributes. Same pattern as the scoped-indexing approach but enforced at the policy layer rather than baked into application code.
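A sketch of the per-document check, with hypothetical field names (tenant, classification, clearances):

```rego
package agents.dataaccess

import rego.v1

# "Should this user, via this agent, see this document?"
default allow := false

allow if {
    input.document.tenant == input.user.tenant              # tenant isolation
    input.document.classification in input.user.clearances  # per-doc label check
    not input.agent.id in data.quarantined_agents           # agent not blocked
}
```

The retrieval layer evaluates this per candidate document (or uses OPA's partial evaluation to compile it into a filter) rather than baking the logic into application code.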
Pre-execution checks for destructive actions. Before an agent performs a state-changing operation, OPA evaluates whether the action is allowed in the current context. Approval workflows, rate limits, change-control windows: all expressible as policy. The agent asks; the policy answers; the action proceeds or doesn't.
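A sketch combining a change window, a rate cap, and an approval requirement; all field names and limits are illustrative:

```rego
package agents.changes

import rego.v1

default allow := false

# Destructive actions only inside the change window, under the rate cap,
# and with a human approval recorded.
allow if {
    input.action.kind == "delete"
    within_change_window
    input.agent.actions_this_hour < data.limits.deletes_per_hour
    input.approval.ticket != ""
}

within_change_window if {
    now := time.clock([time.now_ns(), "UTC"])  # [hour, minute, second]
    now[0] >= 9   # hypothetical window: 09:00-17:00 UTC
    now[0] < 17
}
```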
Audit trail generation. Every OPA decision is naturally auditable: input, output, policy version, decision rationale. The agent-action audit trail the governance work needs is a side-effect of policy evaluation rather than a separate concern.
What Rego brings
Rego itself is having a moment because the language properties that make it good for cloud-policy work also fit the AI-policy shape:
- Declarative. Policies describe what's allowed, not how to check. AI policy benefits from the same shape: "this agent may call this tool under these conditions" reads naturally.
- Structured input/output. AI agent actions are naturally structured (action type, agent ID, user context, tool args). Rego operates on structured data without ceremony.
- Composable. Policies can be built from smaller policies. A complex AI policy can be broken down into "agent-identity policy", "tool-permissions policy", "data-access policy", each evaluatable independently.
- Fast evaluation. OPA's compiled Rego is fast enough to be inline in the agent decision loop without meaningful latency cost.
- Versioning and rollback story. Same reasons OPA worked for K8s admission: policy bundles can be versioned, deployed, rolled back, and audited the same way infra changes are.
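The composability point in particular maps cleanly onto Rego packages. A sketch of a top-level decision built from independently testable sub-policies (package names are illustrative):

```rego
package agents.main

import rego.v1

import data.agents.dataaccess
import data.agents.identity
import data.agents.tools

# The overall decision composes three sub-policies, each of which
# can be evaluated, tested, and versioned on its own.
default allow := false

allow if {
    identity.verified
    tools.allow
    dataaccess.allow
}
```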
The thing Rego doesn't bring naturally: it's not the easiest language to learn. Teams that have been using OPA for cloud policy for years pick up the AI use case quickly; teams new to OPA find the learning curve real.
What's specifically working in production
A few patterns I've seen working in production AI-platform setups:
OPA as a sidecar to the agent gateway. The gateway intercepts agent requests, calls OPA for the policy decision, allows or denies based on the result. The agent code doesn't change; the policy code lives separately and can be updated independently.
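In this setup the policy itself can stay small; a sketch of the decision package the gateway queries (paths and fields are illustrative):

```rego
package gateway.decision

import rego.v1

# The gateway POSTs {"input": <intercepted agent request>} to OPA's
# Data API (POST /v1/data/gateway/decision) and enforces the result.
default allow := false

allow if {
    input.request.method in {"GET", "POST"}    # read-ish methods only, illustrative
    input.agent.id in data.registered_agents   # agent must be registered
}
```

Because the gateway only depends on the response shape, the policy body can be rewritten and redeployed without touching gateway or agent code.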
OPA as an MCP server. Wrapping OPA as an MCP server lets agents query policy decisions natively through the same MCP surface they use for other tools. Pattern is clean; the agent doesn't need a separate policy SDK.
Per-tool policies stored alongside tool definitions. Each MCP server ships with a Rego policy that governs its use. Updates to the tool ship together with policy updates. The deployment unit is "tool plus policy" rather than two separate concerns.
Policy bundles for agent personas. Different agents (HR-bot, IT-bot, dev-bot) get different policy bundles. Rolling out a new agent persona is largely a policy-bundle deployment.
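A sketch of a persona-scoped policy, where each bundle ships a `data.persona` document declaring what that persona may touch (all names are illustrative):

```rego
package personas

import rego.v1

default allow := false

allow if {
    input.agent.persona == data.persona.name           # e.g. "hr-bot"
    input.tool.name in data.persona.allowed_tools      # persona-scoped tool list
    input.data_source in data.persona.allowed_sources  # persona-scoped data list
}
```

Rolling out a new persona then means shipping a new bundle with a different `data.persona` document, not new policy logic.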
These aren't speculative; they're showing up in real shops doing real work.
What's still missing
A few gaps that the renaissance has exposed:
Better Rego authoring for non-policy-engineers. Most AI-platform engineers aren't OPA specialists, and the current Rego authoring tooling assumes a policy-engineering background. The teams making this work have a policy specialist; the teams without one struggle. Better tooling (visual policy editors, Rego linters that understand AI-policy patterns, generators from natural-language descriptions) would meaningfully expand adoption.
Standardized AI-policy schemas. Every team building AI-policy is defining its own input shape (what an "agent action" looks like, what an "agent identity" looks like). A standardized schema would let policies port across platforms. None exists yet; the Cloud Native AI working group is the most plausible incubator.
Performance for very-high-volume cases. OPA evaluation is fast but not free. Agent platforms doing millions of policy decisions per minute need careful optimization or different evaluation strategies. The OPA team is shipping improvements; not all use cases are well-served yet.
Integration with the agent governance surfaces. OPA decisions are auditable; surfacing them in the agent-management UIs is mostly DIY. The platforms that bake OPA in well will earn the renaissance share; the ones that don't will see policy-as-code adoption stall.
What I'd recommend
For platform teams adopting AI internally:
- If you're not already using OPA, start now. The investment cost is real but bounded; the leverage is meaningful and growing.
- If you are, extend it to the AI surface. Don't reinvent policy for the AI use case; the existing OPA infrastructure mostly fits.
- Treat policy as a deployment unit alongside agents and tools. Policies should ship together with the things they govern; the cloud-deployment patterns apply.
- Hire or train at least one Rego specialist. The fluency matters; the tooling isn't yet good enough to hide the language.
The renaissance is real because the fit is real. OPA + Rego solves a piece of the AI agent problem that nothing else solves as cleanly. The teams that recognize this and adopt early get a few quarters' head start on a layer the rest of the industry will catch up to. The teams that wait will be doing the same adoption later under more pressure.
Worth being early.