vRA, vRO, and the LLMs they spawned
The workflow-orchestration tools of the early 2010s tried to solve a problem the LLM agents of the mid-2020s are now taking another swing at. Same problem, different substrate. It's worth being honest about which parts of the previous attempt to bring forward and which to leave behind.
The workflow-orchestration category had its peak in enterprise infrastructure roughly 2012 to 2017. VMware's vRealize Automation (vRA, formerly vCloud Automation Center, or vCAC) handled the catalog and request-and-approval side; vRealize Orchestrator (vRO) handled the workflow execution; together they were how most large VMware shops automated provisioning, configuration changes, decommissioning, and the long tail of operational work that didn't fit into a Puppet manifest. The DH archive has a fair amount of vRA and vRO writing from those years, and looking at it now in light of how LLM-driven agents are being deployed today is a useful exercise. The category these tools occupied is the one LLM agents are now moving into. The shape of the problem hasn't changed. The available foundation has.
What vRA and vRO actually were
For anyone who didn't live in the VMware automation world: vRA was a portal where operators or end-users requested resources from a service catalog. The catalog items mapped to "blueprints": declarative descriptions of what to provision and how. vRO was a workflow engine where you'd compose JavaScript-based actions into multi-step procedures, with approval gates, error handlers, retries, and integrations into whatever else the operations team needed to touch: Active Directory, ServiceNow, monitoring, IPAM, switch ports, the storage array.
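For flavor, here's roughly what one scriptable-task step inside a vRO workflow looked like. System.log, System.error, and System.getModule are real vRO scripting calls; the module path and action names below are invented for illustration, and the input variables would be bound in the workflow designer.

```javascript
// vRO scriptable task: one step of a hypothetical provisioning workflow.
// Inputs `vmName` and `owner` are bound as workflow attributes by the designer.
var provisioning = System.getModule("com.example.provisioning"); // hypothetical action module

System.log("Provisioning request for " + vmName + " (owner: " + owner + ")");

var ticket = provisioning.openChangeTicket(vmName, owner); // hypothetical ServiceNow action
try {
    provisioning.reserveIpAddress(vmName);        // hypothetical IPAM action
    provisioning.createAdComputerObject(vmName);  // hypothetical AD action
} catch (e) {
    System.error("Provisioning step failed: " + e);
    provisioning.closeTicket(ticket, "failed");   // hypothetical cleanup action
    throw e; // surface to the workflow's error handler / retry policy
}
```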
The selling point was not the workflow engine itself. The selling point was that operations work could be modeled, written down, versioned, audited, and executed by something other than a person typing commands at three in the morning. Once a procedure was in vRO and exposed via vRA, it ran the same way every time, with a record of who requested it, when, and what happened. That was a real change from "page the on-call when X needs to happen."
The category has always been about the same thing: take repeatable operational work that currently lives in someone's head and turn it into something a machine can execute reliably with the right permissions and auditability.
What the LLM-agent generation is doing
The LLM-agent stack of 2025 (terminal-hosted agents like Claude Code, browser-driving agents like Computer Use and Operator, tool-using integrations via MCP) is taking another swing at the same target. The pitch is the same: take operational work currently done manually and let a machine execute it. The foundation is different: instead of declarative workflows authored by an operator and run by an engine, we have a probabilistic model deciding what to do based on the prompt and the available tools.
That difference has consequences in both directions, and the comparison to vRA/vRO is the cleanest way to see them.
What carries over
The first thing the workflow-orchestration era got right and the LLM-agent era is rediscovering: integrations are most of the work. Building a vRO workflow that talked to ServiceNow and Active Directory and the storage array took ten times the effort of building the workflow logic itself. The MCP server explosion of late 2024 and early 2025 is the same dynamic: the model can decide what to do, but it can only act through whatever integrations someone built. The integration tax is real, and it's where most of the actual engineering effort lives.
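To make the integration tax concrete: here is roughly the smallest useful MCP server, sketched against the official TypeScript SDK (usable from plain JavaScript). The server name, tool name, and IPAM lookup are stand-ins; wiring up the real backend is where the effort actually goes.

```javascript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "ipam-bridge", version: "0.1.0" });

// Stand-in for the real IPAM client. In practice, this function and its
// error handling are most of the work; the MCP plumbing around it is not.
async function lookUpInIpam(hostname) {
  return "10.0.0.42"; // placeholder result
}

// One tool = one thing the model is allowed to do through this server.
server.tool("lookup_ip", { hostname: z.string() }, async ({ hostname }) => {
  const ip = await lookUpInIpam(hostname);
  return { content: [{ type: "text", text: `${hostname} -> ${ip}` }] };
});

await server.connect(new StdioServerTransport());
```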
The second is that auditability matters more than the demos suggest. vRA's value was as much about the audit log (who requested what, what got approved, what ran, what failed) as it was about the automation itself. The first wave of LLM-agent deployments has been notably quieter about audit trails than vRA users would have tolerated, and the postmortems after the first agentic-ops mishaps will mostly be about the missing log entries.
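Nothing here is standardized yet, so the shape below is a hypothetical sketch of what a vRA-grade audit record for a single agent action might carry; every field name is illustrative.

```javascript
// Hypothetical audit record for one agent tool call. vRA kept roughly this
// information per catalog request; agent stacks mostly don't, yet.
const auditRecord = {
  requestId: "req-8f3a2c",             // ties the action to a run
  requestedBy: "alice@example.com",    // who asked
  approvedBy: null,                    // who signed off, if a gate applied
  tool: "ipam-bridge/lookup_ip",       // which integration acted
  args: { hostname: "db-01" },         // exact inputs, for replay
  result: "db-01 -> 10.0.0.42",        // what came back
  startedAt: "2025-03-14T03:02:11Z",
  finishedAt: "2025-03-14T03:02:12Z",
  outcome: "success",                  // success | failure | rolled_back
};
```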
The third is that approval gates aren't optional for anything that touches production. vRA had an approval-flow concept built in for exactly this reason. LLM agents that can take destructive actions need the same affordance, and the field is going to converge on it the hard way over the next year.
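A minimal sketch of what that affordance could look like, building on the MCP server above; requestApproval is a hypothetical hook into whatever ticketing or chat system actually collects the human sign-off.

```javascript
// Hypothetical approval gate: a destructive tool handler can't run until a
// human approves this specific call.
async function requestApproval(toolName, args) {
  // e.g. open a ticket or post to a channel, then block until someone decides
  return false; // deny by default in this sketch
}

function withApproval(toolName, handler) {
  return async (args) => {
    if (!(await requestApproval(toolName, args))) {
      return { content: [{ type: "text", text: "Denied by approver" }], isError: true };
    }
    return handler(args);
  };
}

// Gate the destructive tool; leave read-only tools ungated.
server.tool("decommission_vm", { vmName: z.string() },
  withApproval("decommission_vm", async ({ vmName }) => {
    // ... call the real decommission procedure here ...
    return { content: [{ type: "text", text: `${vmName} decommissioned` }] };
  })
);
```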
What doesn't carry over
The biggest break with the workflow-orchestration era is that declarative authoring isn't the model anymore. The vRA/vRO value proposition assumed someone authored the workflow once, carefully, with all the edge cases handled, and then it ran the same way forever. LLM agents work the opposite way: the agent decides at runtime what to do, based on the situation and the available tools, and the same prompt won't necessarily produce the same execution path twice.
That's a feature for some workloads (the long tail of slightly-different operational tasks the workflow engine couldn't economically model) and a bug for others (the regulated, auditable, must-run-the-same-way-every-time workloads the workflow engine handled well). The current confusion in agentic-ops conversations is mostly a category error: people pitching LLM agents as a replacement for the workflow engine on workloads where the workflow engine's determinism was actually the value proposition.
The other break is around the unit of expertise. A vRO workflow encoded the experience of the engineer who wrote it. Once written, it was the asset; the engineer could leave and the workflow kept running. An LLM agent's judgment lives in the model weights and the prompt, which means the operational expertise is more diffuse and harder to extract. There isn't a clean equivalent of "the vRO workflow library that survives the engineer who wrote it." Some of the prompt-library and agent-recipe efforts in late 2024 and early 2025 are early attempts to fill that gap; none of them have the durability of a versioned workflow file yet.
The lesson the older tools have to offer
The most useful thing the vRA/vRO era can teach the LLM-agent era is that the hard part of automation is not the engine. It's the relationships, to the systems being automated, to the humans approving the work, to the audit trail, to the rollback story when something goes wrong. The LLM-agent stack has spent its first couple of years building impressive engines and undersized everything else. The next couple of years are going to be about catching up on the parts vRA had figured out a decade ago.
Two years ago I wrote that agents were coming and most of us weren't ready. That was before MCP, before Claude Code, before any of the production deployments that exist now. Most of that prediction has come true. The part that hasn't yet (the operational maturity layer underneath the agents) is the part the workflow-orchestration era already wrote a draft of. There's no need to rediscover what blueprints, approvals, audit trails, and rollback procedures look like. The vRA generation already did the work. The new foundation just has to figure out which of those patterns to keep and which to redesign for a probabilistic engine.
The thing I keep coming back to, looking at the DH archive entries on vRA and the agentic-ops conversations of 2025: the engineers who lived through the workflow-orchestration era have a head start they don't always realize they have. The patterns transfer. Most of what's hard about agentic operations has a recognizable shape if you spent a few years authoring vRO workflows. The foundation changed; the problem space mostly didn't.