The Kubermatic Virtualization Dashboard is the per-cluster operator UI introduced in v1.1.0. It gives you a VM-aware browser — start/stop/console controls, Nodes view, Data Volumes, Firewalls, Load Balancers — for the cluster it lives in. It is shipped and reconciled by the kubev installer itself, not as a separate Helm chart or add-on. Enabling it is one config block and a re-apply.
This tutorial covers turning it on, the three authentication modes, and the access pattern for both lab and production.
## Per-cluster, not multi-cluster
The dashboard scope matters: it shows the cluster it’s installed in, not a fleet. If you operate many clusters and want a single cross-cluster inventory, KKP stays the right tool — and the two complement each other rather than overlap. KKP answers “which cluster?”; the Kubermatic Virtualization Dashboard answers “what’s happening inside that cluster?”
If you only run one cluster, you might never need KKP. The dashboard plus kubectl is enough.
## What gets installed
Enabling the dashboard creates:
- A new namespace, `kubermatic-virtualization`
- Two Deployments: `kubev-api-server` (talks to the Kubernetes API on the user’s behalf) and `kubev-dashboard` (a React SPA served by nginx)
- A Service named `kubev-dashboard` on port 8080
- The image-pull secret needed to pull the dashboard images from quay.io
Both pods are small. There’s no separate database — the dashboard reads and writes through the same Kubernetes API your kubectl already uses, which means YAML stays the source of truth. The UI is a convenience layer; if anything looks off in it, kubectl get and kubectl describe remain the fall-back.
## Pull-secret pre-flight
The dashboard images live on quay.io and are gated. The installer pre-flights for either of:
- `KUBEV_USERNAME` and `KUBEV_PASSWORD` env vars (the same quay.io credentials used elsewhere in the install)
- An inline `imagePullSecret:` block in `cluster.yaml` containing a docker-config JSON
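If you take the inline route, the docker-config JSON has the standard `.dockerconfigjson` shape, keyed by registry host. A sketch of generating it (the credentials below are placeholders; substitute your quay.io ones):

```shell
# Placeholder credentials -- replace with your real quay.io username/token.
username=myuser
password=mytoken
# The "auth" field is base64 of "username:password".
auth=$(printf '%s:%s' "$username" "$password" | base64)
# Standard .dockerconfigjson layout, keyed by the registry host.
printf '{"auths":{"quay.io":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$username" "$password" "$auth"
```

The resulting JSON is what goes inside the `imagePullSecret:` block.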
Apply is rejected if neither is present. This is a known stumbling block — debugging “why aren’t my pods pulling?” after apply succeeds is more painful than failing fast on missing credentials, which is why the installer does the latter.
Confirm before you start:
```shell
echo "${KUBEV_USERNAME:-MISSING}"
test -n "$KUBEV_PASSWORD" && echo "set" || echo "MISSING"
```
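If you wrap the install in a script, the same fail-fast behavior can be mimicked up front. A sketch (illustrative, not the installer's actual code, and it only covers the env-var path — an inline `imagePullSecret:` in `cluster.yaml` also satisfies the installer):

```shell
# Illustrative pre-flight: refuse to continue when either credential is unset.
check_creds() {
  if [ -n "${KUBEV_USERNAME:-}" ] && [ -n "${KUBEV_PASSWORD:-}" ]; then
    echo "credentials present"
  else
    echo "missing KUBEV_USERNAME / KUBEV_PASSWORD" >&2
    return 1
  fi
}
```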
## Add the `dashboard:` block
Open your existing cluster.yaml and add a dashboard: block. The minimal lab version (no auth, port-forward access):
```yaml
dashboard:
  enabled: true
  auth:
    none: {}
```
The minimal basic-auth version:
```yaml
dashboard:
  enabled: true
  auth:
    basic: {} # uses KUBEV_USERNAME / KUBEV_PASSWORD
```
The OIDC version (production standard — works with Keycloak, Dex, Okta, Azure AD, Google, anything that speaks OIDC):
```yaml
dashboard:
  enabled: true
  dashboardURL: https://kubev.example.com
  auth:
    oidc:
      issuerURL: https://keycloak.example.com/realms/kubev
      clientID: kubev-dashboard
      clientSecret: <generated>
      redirectURL: https://kubev.example.com/oauth/callback
```
Note `dashboardURL:` — OIDC needs a stable, externally reachable redirect target registered with the issuer, so this field is not optional in production.
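In the example above, `redirectURL:` lives under `dashboardURL:`, which is what lets the OIDC callback land back on the dashboard. A quick shell sanity check for that relationship (values are the example ones; illustrative only):

```shell
# Example values from the cluster.yaml above.
dashboard_url="https://kubev.example.com"
redirect_url="https://kubev.example.com/oauth/callback"

# Warn if the callback would not land under the dashboard's external URL.
case "$redirect_url" in
  "$dashboard_url"/*) echo "redirectURL is under dashboardURL" ;;
  *) echo "warning: redirectURL does not sit under dashboardURL" >&2 ;;
esac
```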
## Apply
Re-run the installer:
```shell
kubermatic-virtualization apply -f cluster.yaml
```
The installer is declarative and self-healing — re-running it picks up the new dashboard: block, creates the namespace and Deployments, and reconciles them on every subsequent apply. There’s no separate dashboard install step.
Watch the rollout:
```shell
kubectl -n kubermatic-virtualization rollout status deploy/kubev-api-server
kubectl -n kubermatic-virtualization rollout status deploy/kubev-dashboard
```
## Lab access via port-forward
For an isolated lab cluster, port-forward straight to the Service:
```shell
kubectl port-forward svc/kubev-dashboard -n kubermatic-virtualization 8080:8080
```
Open http://localhost:8080. With `auth.none` you’ll land directly on the Dashboard view. With `auth.basic` you’ll be challenged for the same `KUBEV_USERNAME` / `KUBEV_PASSWORD` you set on the install.
The left sidebar groups the operational surface:
- Dashboard — resource overview tiles
- Compute — Virtual Machines, VM Pools, Auto Scaling
- Infrastructure — Nodes, Instance Types
- Storage — Data Volumes, Images
- Network — VPCs, Subnets, Load Balancers
- Security — Firewalls, SSH Keys
VM detail view has four tabs: Networking, Disks, Monitoring, Console. The Console tab is the same serial console you’ve been reaching with virtctl console, embedded in the browser.
## Production access
`kubectl port-forward` is fine for a lab. For production, front the `kubev-dashboard` Service with an Ingress (TLS-terminated) or a LoadBalancer Service, point `dashboardURL:` at the externally reachable URL, and use `auth.oidc` against your existing identity provider. Production exposure is out of scope here, but the Kubermatic Virtualization operator handbook walks through the full pattern.
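For orientation, the Ingress route can be sketched as below. This is illustrative only — the host, TLS secret name, and ingress class are assumptions for your environment; the Service name, namespace, and port come from what the installer creates:

```yaml
# Sketch: TLS-terminated Ingress in front of the dashboard Service.
# Host, secretName, and ingressClassName are assumptions -- adjust to your setup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubev-dashboard
  namespace: kubermatic-virtualization
spec:
  ingressClassName: nginx
  tls:
    - hosts: [kubev.example.com]
      secretName: kubev-dashboard-tls
  rules:
    - host: kubev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubev-dashboard
                port:
                  number: 8080
```

Whatever host you choose here is the one `dashboardURL:` should point at.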
## What’s next
The dashboard is the entry point most operators end up using day-to-day. Underneath it, every screen is a read/write against the same Kubernetes API — so kubectl stays as the diagnostic layer when something doesn’t look right in the UI. The next tutorial in this series covers GitOps installs, where kubev apply runs from a CI pipeline and the dashboard becomes a read-mostly window into a cluster that’s reconciled from version control.
