Helm Values as Business Standards: Decisions as Code for Kubernetes
Helm values.yaml is where Decisions as Code lives in Kubernetes. Centralized business decisions, schema-validated, composed via library charts, projected into every workload. The approach is the contribution. The tool changed.
Decisions as Code (DaC) is the approach behind nearly every self-service and automation system I've designed: pull business decisions out of platform configuration into a small, curated layer, often five real decisions where the raw config exposed eighty-nine. The remaining choices get absorbed into templates and defaults the platform owns. (I called this Property Toolkit during my OneFuse days; the shape of the idea hasn't changed, only the foundation underneath.)
The foundations rotate. The approach persists. The problem DaC solves (duplicated business decisions across platforms, drift, the need to define-once-consume-many) didn't go anywhere when the world moved off vRA and onto Kubernetes. The platforms changed. The problem is still the problem.
This piece is about where DaC lives in Kubernetes. Specifically: Helm values.yaml + JSON Schema validation + library charts is the DaC values surface for the K8s era. Same shape. Different tool. The approach is the contribution. The tool changed.
The problem hasn't changed
Pick a large-ish org running Kubernetes in 2025. They have:
- Thirty teams, each with a few Helm charts.
- An environment promotion model (dev → staging → prod).
- Sizing tiers (small, medium, large, or some bespoke names).
- Resource limits that should be consistent within a tier.
- Labels for cost allocation, ownership, environment, application.
- Network policies that should be uniform per environment.
- Security contexts (non-root, drop capabilities, read-only root filesystem).
- A handful of cluster-wide affinities and tolerations.
These are business decisions. They belong to the organization, not to any individual chart. They should be defined once. They should be projected onto every consuming chart in a predictable way.
What actually happens? Each chart's values.yaml duplicates a slightly different version of the same standards. Team A's "small" is 0.5 CPU / 1 Gi memory. Team B's "small" is 1 CPU / 2 Gi. Team C didn't set requests at all. The labels are inconsistent. The security context is whatever the original chart's author copied from a tutorial. The network policy was added by a security review three years ago and never applied uniformly.
This is the same duplication problem DaC has always solved. The fix is the same.
The five primitives, in Helm
DaC has always required five primitives. Each one has a clean Helm expression.
Centralization. One authoritative source for the business decisions. In Helm, this is a small chart or a values file that lives in a standards/ repo. Other charts pull from it.
Platform-aware shapes. Earlier foundations expressed this with reserved namespaces, one per consuming platform. Each consumer pulled the slice that matched its primitives. In Helm, the platform-aware shape is the chart's own templates: the standards values are the standard input, and each chart's templates render them into platform-correct YAML: a Deployment's resources.requests, a HorizontalPodAutoscaler's metrics, a NetworkPolicy's selectors. Same input, platform-correct outputs.
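A minimal sketch of that projection, assuming the values-tree shape described under the discovery convention below and a hypothetical .Values.tier key set by the consuming chart:

```yaml
# Fragment of an application chart's deployment.yaml template (sketch).
# Standard sizing values in, Deployment-shaped YAML out.
# .Values.tier is a hypothetical key the consuming chart sets.
{{- $tier := index .Values.standards.sizing .Values.tier }}
resources:
  requests:
    cpu: {{ $tier.cpu | quote }}
    memory: {{ $tier.memory | quote }}
```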
Variable interpolation. Helm's template engine plus the tpl function give you the expressive power to reference a standard value and derive everything downstream. Change the standard, propagate.
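For instance, a derived value can reference a standard and be rendered through tpl at template time; the ingressHost key here is hypothetical:

```yaml
# values.yaml (sketch): a derived value that references a standard.
standards:
  env: prod
ingressHost: "api.{{ .Values.standards.env }}.example.com"
```

A template renders it with {{ tpl .Values.ingressHost . }}; change standards.env and the host, and everything else derived from it, follows.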
Composition / nesting. This is where Helm has gotten very good. Library charts, declared as chart dependencies with import-values, let one chart pull in standard primitives from another. Application charts depend on a standards library chart; the library chart's values become available under a known key; the application chart's templates reference them. Change the standard OS or sizing definition and every dependent application picks it up on the next render.
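Declaring that dependency looks roughly like this; the names, versions, and repository URL are placeholders, and the sketch assumes the library publishes its sizing block under the exports key so the exports form of import-values applies:

```yaml
# Chart.yaml of an application chart (sketch; names and URL are placeholders).
apiVersion: v2
name: inference-api
version: 0.1.0
dependencies:
  - name: standards
    version: "1.x.x"
    repository: "https://charts.example.com/platform"
    import-values:
      - sizing    # pulls the library's exports.sizing into this chart's values
```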
Discovery convention. A predictable naming scheme so adapters can find what they need. In Helm, the convention is the values-tree shape: standards.env, standards.sizing.small, standards.labels.cost, standards.security.default. Predictable, stable, documented. Adapters (your templates) find what they need by path.
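Concretely, the tree might look like this (keys follow the convention above; the numbers are illustrative, not a recommendation):

```yaml
# The standards values tree: the discovery convention (illustrative values).
standards:
  env: prod
  sizing:
    small:  { cpu: "500m", memory: "1Gi" }
    medium: { cpu: "1",    memory: "2Gi" }
    large:  { cpu: "2",    memory: "4Gi" }
  labels:
    cost: platform-engineering
  security:
    default: restricted
```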
What the standards chart looks like
In practice, the standards live in a small library chart. Something like:
standards/
  Chart.yaml           # type: library
  values.yaml          # the standard defaults
  values.schema.json   # JSON Schema for validation
  templates/
    _env.tpl           # named templates for env-specific projections
    _labels.tpl        # standard label set
    _security.tpl      # security context defaults
    _sizing.tpl        # t-shirt-sizing resource definitions
The values.yaml defines the standard structure. Sizing tiers, environment metadata, label taxonomies, security defaults. The values.schema.json validates that the standard structure is honored. JSON Schema is the OPA-light enforcement layer at the values surface, refusing to render a chart that violates the standard shape.
The named templates in _helpers.tpl and the underscore-prefixed partial files are the platform-aware adapters. standards.labels returns the standard label set, populated from the values tree. standards.resources "small" returns the resource block for the small tier. standards.securityContext "restricted" returns the standard restricted security context.
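A sketch of one of those adapters; since a named template takes a single context, this version passes the tier through a dict rather than as the bare argument the prose shorthand suggests:

```yaml
{{/* templates/_sizing.tpl in the standards library chart (sketch). */}}
{{/* Usage: include "standards.resources" (dict "ctx" . "tier" "small") */}}
{{- define "standards.resources" -}}
{{- $t := index .ctx.Values.standards.sizing .tier -}}
resources:
  requests:
    cpu: {{ $t.cpu | quote }}
    memory: {{ $t.memory | quote }}
  limits:
    cpu: {{ $t.cpu | quote }}
    memory: {{ $t.memory | quote }}
{{- end -}}
```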
Application charts list standards as a dependency in their Chart.yaml, set its values in their own values.yaml (or inherit the defaults), and call into the named templates from their workload manifests.
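Put together, the consuming side might look like this, assuming the hypothetical helpers sketched above:

```yaml
# templates/deployment.yaml in an application chart (sketch).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    {{- include "standards.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- include "standards.resources" (dict "ctx" . "tier" .Values.tier) | nindent 10 }}
```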
The result: one place to change the standard small-tier sizing. Every dependent chart picks it up on the next render. The drift problem dies because the standard values aren't copied, they're referenced.
JSON Schema as the enforcement complement
DaC has always needed an enforcement complement: somewhere to verify that consumers are honoring the standard structure. OPA is the dynamic-verification answer; in 2025 it shows up as the policy foundation for AI platforms too, and the pattern has only gotten cleaner.
In Helm, the static-time complement is values.schema.json. The schema declares the shape: standards.env is a string enum of dev | staging | prod. standards.sizing has exactly the tiers you allow. standards.labels.cost is required. The schema rejects renders that violate the standard structure. Helm runs the validation at install / upgrade time.
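A fragment of what that schema might look like, trimmed to exactly the claims above (draft-07):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["standards"],
  "properties": {
    "standards": {
      "type": "object",
      "required": ["env", "labels"],
      "properties": {
        "env": { "enum": ["dev", "staging", "prod"] },
        "sizing": {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "small":  { "$ref": "#/definitions/tier" },
            "medium": { "$ref": "#/definitions/tier" },
            "large":  { "$ref": "#/definitions/tier" }
          }
        },
        "labels": {
          "type": "object",
          "required": ["cost"]
        }
      }
    }
  },
  "definitions": {
    "tier": {
      "type": "object",
      "required": ["cpu", "memory"],
      "properties": {
        "cpu":    { "type": "string" },
        "memory": { "type": "string" }
      }
    }
  }
}
```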
This pairs with OPA / Gatekeeper / Kyverno running at admission time as the runtime gate. Two layers of enforcement, both anchored on the same standard decisions. The approach is two-layer:
- Specify the decisions once (values.yaml + values.schema.json).
- Verify at install (Helm schema validation) and at admission (OPA / Gatekeeper).
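For the admission half, a minimal sketch using Kyverno (named above alongside OPA and Gatekeeper); the cost-center label key is hypothetical:

```yaml
# Admission-time complement (Kyverno sketch; the label key is hypothetical).
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-cost-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-cost-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Workloads must carry the standard cost-allocation label."
        pattern:
          metadata:
            labels:
              cost-center: "?*"
```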
The K8s expression of DaC is Helm-with-schema at configuration time and OPA-at-admission at policy time. Different tools from the foundations I worked with a few years ago. Identical shape.
Library charts as composition
The piece I want to dwell on, because it's where the nesting model maps most cleanly onto Helm, is library charts.
The nesting trick: "an application's standards reference an OS set's standards. Change the OS set, every dependent application picks up the new template." That's the headline shape DaC has always needed.
Library charts are exactly that pattern in Helm. A library chart exposes named templates and values. Application charts depend on it. The application chart's templates call into the library's helpers. Change the library; every dependent chart re-renders with the new behavior on the next upgrade.
Real example shape:
- A standards library chart exposes standards.workload.deployment, which renders a fully-conformant Deployment with resources, labels, security context, affinities, and a PodDisruptionBudget sibling.
- An application chart sets its image, command, ports, and env vars, and calls {{ include "standards.workload.deployment" . }} (see the sketch after this list). Three lines of input. Fully-conformant output.
- Change the standard security context in the standards chart. Every application chart that uses the helper picks up the change on the next deploy. No PR storm. No drift.
That's the DaC nesting model, projected into 2025.
What the AI platform adds
The piece that's specific to AI workloads (and the reason this article is in this batch) is that AI services magnify the value of centralized standards. A typical org running AI on K8s has:
- Multiple model-serving frameworks (vLLM, KServe, BentoML, Triton).
- GPU and CPU node pools with very different scheduling primitives.
- Heterogeneous sizing: a 7B model fits on a small GPU node, a 70B doesn't.
- Tighter cost pressure because GPUs are expensive.
- Eval and CI workloads that need consistent labeling for cost attribution.
- Security contexts that are stricter because the workloads handle sensitive data.
Pushing those concerns into per-chart values.yaml files duplicates the divergence problem at higher cost. The standards chart absorbs them: standards.gpu.tolerations, standards.gpu.nodeSelector, standards.gpu.sizing.small (the 7B-class config), standards.gpu.sizing.large (the 70B-class config). Application charts reference; the standards chart owns. Same DaC approach, applied to a workload class where drift is more expensive than it used to be.
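A sketch of that subtree; the pool label, taint key, and sizing numbers are illustrative, not a recommendation:

```yaml
# Hypothetical standards.gpu subtree (illustrative keys and numbers).
standards:
  gpu:
    nodeSelector:
      gpu-pool: "true"          # placeholder node label
    tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
    sizing:
      small:                    # 7B-class serving
        gpus: 1
        cpu: "4"
        memory: "24Gi"
      large:                    # 70B-class serving
        gpus: 4
        cpu: "16"
        memory: "320Gi"
```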
What I keep coming back to
DaC is the technical contribution from my career that I'm most proud of. Not because of any single tool (the foundations change, products change hands, vendors come and go) but because the approach survives. Centralize the business decisions, project them onto every consumer through platform-aware adapters, validate at the boundary, enforce at admission. That's the whole shape.
In 2024 I wrote about it as Terraform locals modules and the centralized-standards pattern, the same shape projected onto Terraform. In 2025 it's Helm values + JSON Schema + library charts, projected onto Kubernetes. In a couple of years it'll be Crossplane compositions doing the same work, projected onto the next abstraction.
The tools rotate. The approach persists. If you take one thing from this whole AI-on-Kubernetes batch, take this: don't reinvent the wheel for AI workloads. Apply the same DaC discipline you should have been applying to non-AI workloads. The cost of skipping it just got higher because the workloads got more expensive. The fix is the same fix it's always been.
Define it once. Project it everywhere. Validate it at the boundary. Pair it with policy at admission. That's the whole shape. It worked in vRA. It worked in Terraform. It works in Helm. It will work in whatever the 2030 version is called.