Introduction
Running Kubernetes clusters across multiple cloud providers introduces a governance challenge: how do you ensure every cluster follows the same security and operational policies? Without centralized enforcement, teams end up with inconsistent configurations — one cluster allows privileged containers, another has no resource limits, and a third has namespaces missing required labels. These gaps create security risks and make audits painful.
Policy-as-code solves this problem by defining rules that Kubernetes enforces automatically at admission time. OPA Gatekeeper is the standard tool for this in the Kubernetes ecosystem. It uses a webhook-based admission controller that evaluates every API request against your policies before the request is persisted. When combined with Kubermatic Kubernetes Platform (KKP), you can manage policy enforcement across all your clusters from a single control plane.
In this tutorial, you will enable OPA Gatekeeper integration on a KKP Community Edition seed cluster, deploy Gatekeeper to a user cluster, and create two policies: one that requires specific labels on namespaces and another that blocks privileged containers. By the end, you will have a working governance setup that you can extend with additional policies for your own requirements.
Note: The OPA Gatekeeper integration described here is available in KKP Community Edition. It is configured through the Seed resource, which is fully accessible in KKP CE. No enterprise license is required.
What you will learn:
- How to enable OPA integration on a KKP CE seed cluster
- How to write ConstraintTemplates and Constraints for Gatekeeper
- How to test policy enforcement and audit existing resources
- How to build a library of governance policies for multi-cloud environments
Prerequisites
Before you begin, ensure you have:
- KKP Community Edition — A running KKP CE installation with at least one seed cluster configured
- A user cluster — At least one user cluster provisioned through KKP (any cloud provider)
- kubectl — Version 1.27 or later, configured with access to both the seed cluster and the user cluster
- Cluster admin access — You need permission to edit Seed resources and create CRDs on the user cluster
- Helm — Version 3.12 or later (for installing Gatekeeper)
Estimated time: 25 minutes
Environment used in this tutorial:
- KKP Community Edition: v2.25+
- Kubernetes (user cluster): 1.28+
- OPA Gatekeeper: v3.15+
- Helm: 3.12+
Step 1: Verify Your KKP CE Installation and Seed Cluster
Before enabling OPA integration, confirm that your KKP CE seed cluster is running and accessible. You will need the name of your seed to update its configuration in the next step.
Switch your kubectl context to the seed cluster:
# List available contexts
kubectl config get-contexts
# Switch to the seed cluster context
kubectl config use-context seed-cluster
Verify that the Seed resource exists:
kubectl get seeds -n kubermatic
Expected output:
NAME AGE
europe-west 45d
Note the seed name — you will use it in the next step. If you have multiple seeds, you can enable OPA integration on each one independently.
Tip: If you do not see any Seed resources, verify that you are connected to the correct cluster. Seed resources exist on the master/seed cluster, not on user clusters.
Step 2: Enable OPA Integration in the Seed Resource
KKP CE provides OPA Gatekeeper integration through the Seed resource configuration. Enabling this tells KKP to support Gatekeeper policy management for all user clusters managed by this seed.
Edit the Seed resource to enable OPA integration:
kubectl edit seed europe-west -n kubermatic
Add the opaIntegration section under spec:
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: europe-west
  namespace: kubermatic
spec:
  opaIntegration:
    enabled: true
    webhookTimeout: 10
  # ... rest of your existing seed configuration
The webhookTimeout field sets how many seconds the API server waits for the Gatekeeper admission webhook to respond; because Gatekeeper's webhook fails open by default, a request is allowed through if that limit is exceeded. A value of 10 seconds provides a reasonable balance between policy evaluation time and user experience.
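If you prefer a non-interactive change over kubectl edit (for example, in a bootstrap script), a merge patch is one way to set the same fields. This is a sketch assuming your seed is named europe-west:
kubectl patch seed europe-west -n kubermatic --type=merge \
  -p '{"spec":{"opaIntegration":{"enabled":true,"webhookTimeout":10}}}'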
Verify that the Seed resource updated successfully:
kubectl get seed europe-west -n kubermatic -o jsonpath='{.spec.opaIntegration}' | python3 -m json.tool
Expected output:
{
    "enabled": true,
    "webhookTimeout": 10
}
Step 3: Install Gatekeeper on the User Cluster
With OPA integration enabled at the seed level, you now install Gatekeeper on the user cluster where you want to enforce policies. Switch your kubectl context to the target user cluster.
kubectl config use-context user-cluster-01
Add the Gatekeeper Helm repository and install it:
# Add the Gatekeeper Helm repo
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
# Update the repo index
helm repo update
# Install Gatekeeper into the gatekeeper-system namespace
helm install gatekeeper gatekeeper/gatekeeper \
--namespace gatekeeper-system \
--create-namespace \
--set auditInterval=60 \
--set constraintViolationsLimit=50
The auditInterval setting tells Gatekeeper to scan existing resources every 60 seconds for policy violations. The constraintViolationsLimit controls how many violations are stored per constraint.
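Both values can be tuned later without reinstalling. For example, if a 60-second audit scan proves too frequent on a large cluster, a helm upgrade with --reuse-values changes only the settings you pass; this sketch assumes the release name gatekeeper used above:
helm upgrade gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system \
  --reuse-values \
  --set auditInterval=120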
Wait for the Gatekeeper pods to become ready:
kubectl -n gatekeeper-system wait --for=condition=ready pod -l control-plane=controller-manager --timeout=120s
Verify the installation:
kubectl get pods -n gatekeeper-system
Expected output:
NAME READY STATUS RESTARTS AGE
gatekeeper-audit-6b7d4b5f9c-xk2mn 1/1 Running 0 45s
gatekeeper-controller-manager-7c9f8d6b4d-2hj7p 1/1 Running 0 45s
gatekeeper-controller-manager-7c9f8d6b4d-9xt4k 1/1 Running 0 45s
gatekeeper-controller-manager-7c9f8d6b4d-rn3vw 1/1 Running 0 45s
You should see one audit pod and three controller-manager pods running.
Warning: Do not skip the wait step. If you apply ConstraintTemplates before the Gatekeeper webhook is ready, they may not register correctly.
Step 4: Create a ConstraintTemplate to Require Namespace Labels
Now you will create your first policy. A ConstraintTemplate defines the policy logic using Rego (the OPA policy language), and a Constraint applies that logic to specific resources.
This template requires that certain labels exist on namespaces. This is a common governance requirement — teams often need namespaces labeled with an owner, cost center, or environment designation.
Create a file called require-labels-template.yaml:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
  annotations:
    description: "Requires specified labels on resources"
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              description: "List of required label keys"
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Resource is missing required labels: %v", [missing])
        }
Apply the ConstraintTemplate:
kubectl apply -f require-labels-template.yaml
Verify that the template was created and the corresponding CRD is available:
kubectl get constrainttemplates
Expected output:
NAME AGE
k8srequiredlabels 10s
# Confirm the CRD was generated
kubectl get crd | grep requiredlabels
Expected output:
k8srequiredlabels.constraints.gatekeeper.sh 2026-03-17T10:00:00Z
Tip: The ConstraintTemplate generates a CRD dynamically. If the CRD does not appear within 30 seconds, check the Gatekeeper controller logs with kubectl logs -n gatekeeper-system -l control-plane=controller-manager.
Step 5: Create a Constraint to Enforce the Template
With the template in place, create a Constraint that enforces it. This Constraint will require the team and environment labels on all namespaces in the cluster.
First, create a production namespace that you will use for testing:
kubectl create namespace production
Now create a file called require-ns-labels-constraint.yaml:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-namespace-labels
spec:
  enforcementAction: deny
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
    excludedNamespaces:
      - kube-system
      - kube-public
      - kube-node-lease
      - default
      - gatekeeper-system
      - kubermatic
  parameters:
    labels:
      - "team"
      - "environment"
The excludedNamespaces list ensures that system namespaces are not blocked by this policy. Without these exclusions, you could accidentally prevent Kubernetes from creating namespaces it needs to operate.
Apply the Constraint:
kubectl apply -f require-ns-labels-constraint.yaml
Verify that the Constraint is active:
kubectl get k8srequiredlabels
Expected output:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
require-namespace-labels deny 1
The TOTAL-VIOLATIONS count may already show existing namespaces (like the production namespace you just created) that do not have the required labels.
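To see which namespaces are currently out of compliance, a label selector with the ! operator lists resources that lack a given label key:
# Namespaces missing the "team" label
kubectl get namespaces -l '!team'
# Namespaces missing the "environment" label
kubectl get namespaces -l '!environment'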
Step 6: Test Policy Enforcement
Now test that the policy is working. Try to create a namespace without the required labels. Gatekeeper should reject it.
kubectl create namespace test-no-labels
Expected output:
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [require-namespace-labels] Resource is missing required labels: {"environment", "team"}
The request was denied because the namespace is missing the team and environment labels.
Now create a namespace with the required labels:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: test-with-labels
  labels:
    team: "platform"
    environment: "staging"
EOF
Expected output:
namespace/test-with-labels created
The namespace was created successfully because it includes both required labels.
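If you want to exercise the policy without creating or deleting anything, a server-side dry run goes through the admission webhooks (including Gatekeeper) but never persists the object, so an unlabeled namespace should produce the same denial as before:
kubectl create namespace test-no-labels --dry-run=server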
Fix the production namespace that was created earlier without labels:
kubectl label namespace production team=backend environment=production
Verify the violation count has decreased. The count is updated by the audit scan, so allow up to one audit interval (60 seconds as configured in Step 3) for the status to refresh:
kubectl get k8srequiredlabels require-namespace-labels -o jsonpath='{.status.totalViolations}'
Expected output:
0
Clean up the test namespace:
kubectl delete namespace test-with-labels
Step 7: Add a Second Policy to Block Privileged Containers
A single policy is a good start, but real governance requires multiple layers. Add a second policy that prevents containers from running in privileged mode. Privileged containers have full access to the host, which is a significant security risk in any multi-tenant or production environment.
Create the ConstraintTemplate in a file called block-privileged-template.yaml:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockprivilegedcontainer
  annotations:
    description: "Blocks containers from running in privileged mode"
spec:
  crd:
    spec:
      names:
        kind: K8sBlockPrivilegedContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblockprivilegedcontainer

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged == true
          msg := sprintf("Container <%v> is not allowed to run as privileged", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.initContainers[_]
          container.securityContext.privileged == true
          msg := sprintf("Init container <%v> is not allowed to run as privileged", [container.name])
        }
This template checks both regular containers and init containers. Apply it:
kubectl apply -f block-privileged-template.yaml
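One gap worth noting: this template does not inspect ephemeral containers, such as those injected by kubectl debug. If that matters in your environment, a third violation rule along the same lines should cover them. This is a sketch to append inside the rego block, not part of the template above:
violation[{"msg": msg}] {
  container := input.review.object.spec.ephemeralContainers[_]
  container.securityContext.privileged == true
  msg := sprintf("Ephemeral container <%v> is not allowed to run as privileged", [container.name])
}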
Now create the Constraint in a file called block-privileged-constraint.yaml:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockPrivilegedContainer
metadata:
  name: block-privileged-containers
spec:
  enforcementAction: deny
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - production
This Constraint targets pods in the production namespace specifically. Apply it:
kubectl apply -f block-privileged-constraint.yaml
Test by attempting to create a privileged pod:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
  namespace: production
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        privileged: true
EOF
Expected output:
Error from server (Forbidden): error when creating "STDIN": admission webhook "validation.gatekeeper.sh" denied the request: [block-privileged-containers] Container <nginx> is not allowed to run as privileged
Now try a non-privileged pod to confirm legitimate workloads still deploy:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: safe-test
  namespace: production
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        privileged: false
EOF
Expected output:
pod/safe-test created
Clean up the test pod:
kubectl delete pod safe-test -n production
Step 8: Audit Existing Resources for Policy Violations
Gatekeeper does not only enforce policies on new resources — it also audits existing resources on a regular interval (configured by auditInterval during installation). This is valuable for understanding your current compliance posture across the cluster.
Check for violations across all constraints:
kubectl get k8srequiredlabels require-namespace-labels -o yaml | grep -A 50 'violations:'
You can also list all constraints and their violation counts in a single command:
kubectl get constraints
Expected output:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
block-privileged-containers deny 0
require-namespace-labels deny 0
For a detailed view of which resources are violating a specific constraint:
kubectl describe k8srequiredlabels require-namespace-labels
Look for the Status.Violations section in the output. Each entry lists the violating resource, its namespace, and the specific policy message.
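For scripting, the same details are exposed as structured data under .status.violations. A jsonpath sketch that pretty-prints them (assuming at least one violation exists, otherwise the output is empty):
kubectl get k8srequiredlabels require-namespace-labels \
  -o jsonpath='{.status.violations}' | python3 -m json.tool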
Tip: To get a cluster-wide compliance report, you can set enforcementAction: warn instead of deny on new constraints. This logs violations without blocking requests, which is useful during a rollout period where you want to assess impact before enforcing.
To switch an existing constraint to warn-only mode for auditing purposes:
kubectl patch k8srequiredlabels require-namespace-labels \
--type='merge' \
-p '{"spec":{"enforcementAction":"warn"}}'
Remember to set it back to deny once you have addressed the violations:
kubectl patch k8srequiredlabels require-namespace-labels \
--type='merge' \
-p '{"spec":{"enforcementAction":"deny"}}'
Troubleshooting
Gatekeeper webhook is not intercepting requests
Cause: The webhook configuration may not be registered, or Gatekeeper pods are not ready.
Solution:
# Check that the webhook exists
kubectl get validatingwebhookconfigurations | grep gatekeeper
# Verify pods are running
kubectl get pods -n gatekeeper-system
# Check controller logs for errors
kubectl logs -n gatekeeper-system -l control-plane=controller-manager --tail=50
ConstraintTemplate CRD is not created
Cause: The Rego code in the template may have syntax errors, preventing the CRD from being generated.
Solution:
# Check the ConstraintTemplate status for errors
kubectl get constrainttemplate k8srequiredlabels -o jsonpath='{.status}'
# Look for error events
kubectl describe constrainttemplate k8srequiredlabels
Review the Rego code for typos. Also make sure the template's metadata.name is the lowercased form of the kind it defines (k8srequiredlabels for K8sRequiredLabels); Gatekeeper rejects templates where the two do not match. By convention, the Rego package name matches the template name as well.
Constraint shows violations but does not block requests
Cause: The enforcementAction may be set to warn or dryrun instead of deny.
Solution:
# Check the enforcement action
kubectl get k8srequiredlabels require-namespace-labels -o jsonpath='{.spec.enforcementAction}'
If the output is warn or dryrun, update it to deny:
kubectl patch k8srequiredlabels require-namespace-labels \
--type='merge' \
-p '{"spec":{"enforcementAction":"deny"}}'
Audit violations are not updating
Cause: The audit controller runs on a schedule defined by auditInterval. There may also be resource limits preventing the audit pod from completing its scan.
Solution:
# Check the audit pod logs
kubectl logs -n gatekeeper-system -l control-plane=audit-controller --tail=50
# Restart the audit pod to trigger an immediate scan
kubectl delete pod -n gatekeeper-system -l control-plane=audit-controller
Next Steps
Now that you have a working governance setup, consider these directions:
- Gatekeeper Policy Library — Browse pre-built policies for common requirements like image registries, ingress restrictions, and resource quotas
- KKP User Cluster Management — Learn more about managing user clusters across multiple clouds with KKP
- Expand your policy library — Add policies for container image allowlists, ingress hostname restrictions, and required annotations for cost tracking
- Integrate with CI/CD — Use gator test (the Gatekeeper CLI) to validate policies in your pipeline before they reach the cluster; see the sketch below
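As a sketch of what that looks like, gator test evaluates manifests against templates and constraints locally; the template and constraint files below are the ones from this tutorial, while my-namespace.yaml stands in for whatever manifest your pipeline produces:
# Validate a manifest against the policies before it reaches the cluster
gator test \
  -f require-labels-template.yaml \
  -f require-ns-labels-constraint.yaml \
  -f my-namespace.yaml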
Summary
You enabled OPA Gatekeeper integration on a KKP Community Edition seed cluster and installed Gatekeeper on a user cluster. You created two policies: one requiring team and environment labels on namespaces, and another blocking privileged containers in the production namespace. You tested enforcement by verifying that non-compliant resources are rejected while compliant ones are accepted, and you used Gatekeeper’s audit capability to check existing resources against your policies. This same pattern extends to any number of clusters managed by KKP — define your policies once and deploy them across your entire multi-cloud fleet.
