
What is KubeVirt? Running VMs on Kubernetes Explained

Abubakar Siddiq Ango, Senior Developer Advocate
Mar 17, 2026 · 8 min read · Beginner
Tags: getting-started, virtualization

Prerequisites

  • Basic understanding of Kubernetes concepts (pods, nodes, deployments)
  • Familiarity with virtual machines and hypervisors

Introduction

Most organizations are not 100% containerized, and they probably never will be. You have legacy applications that assume a full operating system. You have Windows workloads that cannot run in a container. You have specialized software — think telecom network functions, database appliances, or machine learning toolkits — that needs direct hardware access and a traditional VM environment.

For years, the answer was to maintain two separate stacks: a Kubernetes cluster for your containerized workloads, and a hypervisor platform (typically VMware vSphere) for your VMs. Two sets of tooling, two teams, two billing models.

KubeVirt changes that equation. It extends Kubernetes to run virtual machines alongside your containers, on the same cluster, managed with the same API and tooling you already know.

The timing matters too. Broadcom’s acquisition of VMware has reshaped the virtualization market. License costs have increased significantly, and many organizations are actively looking for alternatives to proprietary hypervisors. KubeVirt — a CNCF Incubating project backed by Red Hat, NVIDIA, Intel, and a large open-source community — has emerged as a serious contender.

This guide explains what KubeVirt is, how it works under the hood, and when it makes sense to use it.

What is KubeVirt?

KubeVirt is an open-source project that adds virtual machine management capabilities to Kubernetes. Rather than replacing Kubernetes or building a separate platform, KubeVirt works as a Kubernetes extension — it introduces new custom resources (CRDs) that let you define, start, stop, and manage VMs using kubectl and the Kubernetes API.

Here is what that means in practice:

  • It is a Kubernetes-native solution. You create a VirtualMachine YAML manifest the same way you create a Deployment. You apply it with kubectl apply. The VM shows up in your cluster alongside your pods.
  • It uses KVM and libvirt under the hood. These are the same battle-tested Linux virtualization technologies that power Google Cloud, AWS (Nitro is KVM-based), and most of the cloud industry. This is not experimental technology.
  • It is a CNCF Incubating project. KubeVirt was promoted from the CNCF Sandbox to the Incubating tier and is on the path toward graduation. It has broad industry backing and an active contributor community.
  • VMs and containers coexist. A VM running on KubeVirt can communicate with pods over the same cluster network. You can put a VM-based database backend and a containerized API frontend in the same namespace with a Kubernetes Service in front of them.
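To make the "Kubernetes-native" point concrete, here is a minimal sketch of a VirtualMachine manifest. The name, memory size, and disk image are illustrative placeholders, and exact fields can vary between KubeVirt versions:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                # illustrative name
spec:
  running: true                # start the VM as soon as it is created
  template:
    metadata:
      labels:
        kubevirt.io/domain: demo-vm
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi        # sized like any pod resource request
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example container disk image
```

You apply this with kubectl apply -f vm.yaml, exactly like a Deployment, and the VM appears in the cluster alongside your pods.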

KubeVirt does not replace your container workloads. It gives Kubernetes the ability to handle the workloads that containers cannot.

How KubeVirt Works

KubeVirt’s architecture follows standard Kubernetes patterns — controllers, DaemonSets, and CRDs. If you understand how Kubernetes operators work, the architecture will feel familiar.

Core Components

virt-controller runs as a Deployment at the cluster level. It watches for VirtualMachine and VirtualMachineInstance custom resources. When you create a VM, virt-controller is responsible for creating the corresponding virt-launcher pod on an appropriate node.

virt-handler runs as a DaemonSet on every node that should host VMs. It acts as the bridge between Kubernetes and the underlying virtualization layer. When a virt-launcher pod lands on a node, virt-handler configures the VM through libvirt and manages its lifecycle — start, stop, migrate, and monitor.

virt-launcher is a pod that wraps a single VM. Each running VM gets its own virt-launcher pod, which contains the QEMU process that actually executes the virtual machine. From Kubernetes’ perspective, a VM is just a pod with resource requests. The scheduler places it, networking connects it, and monitoring observes it — all through the standard Kubernetes machinery.

CDI (Containerized Data Importer) is a companion project that handles importing VM disk images into your cluster. It can pull QCOW2, ISO, VMDK, and raw disk images from HTTP endpoints, container registries, or S3-compatible storage, and write them into PersistentVolumes that your VMs use as disks.

Architecture Flow

Here is what happens when you create a virtual machine:

flowchart TD
    User["kubectl apply -f vm.yaml"]
    User --> Ctrl

    subgraph ControlPlane["Control plane"]
        Ctrl["virt-controller
(Deployment)"]
    end

    Ctrl -->|creates VMI| API[(Kubernetes API)]
    Ctrl -->|schedules virt-launcher pod| Sched[kube-scheduler]

    subgraph Node["Worker node"]
        Launcher["virt-launcher pod
(QEMU process)"]
        Handler["virt-handler
(DaemonSet)"]
        Libvirt[libvirt + KVM]
        Handler --> Libvirt
        Libvirt --> Launcher
    end

    Sched --> Launcher
    Handler -.->|reports status| API
    Launcher -.->|mounts| PV[(PersistentVolumes
= VM disks)]

The key insight: Kubernetes never needs to “know” about virtual machines in any special way. From the scheduler’s perspective, a virt-launcher pod is a pod that requests a certain amount of CPU, memory, and possibly devices (like a GPU). The virtualization happens inside the pod.

Tip: Because VMs run inside pods, all your existing Kubernetes tooling — monitoring with Prometheus, logging with Fluentd, network policies, RBAC — works with KubeVirt VMs out of the box.
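As a sketch of that reuse, a standard NetworkPolicy can target a VM's virt-launcher pod through its labels, since labels on the VM template propagate to the pod. The namespace-less names, the kubevirt.io/domain value, and the port below are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-vm-ingress             # illustrative name
spec:
  podSelector:
    matchLabels:
      kubevirt.io/domain: db-vm   # matches the virt-launcher pod wrapping the VM
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-frontend   # only the containerized frontend may connect
      ports:
        - protocol: TCP
          port: 5432              # example database port
```

Nothing here is KubeVirt-specific: it is ordinary Kubernetes policy machinery applied to the pod that happens to contain a VM.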

When Should You Use KubeVirt?

KubeVirt is not the right choice for every situation. Here are three scenarios where it makes strong sense.

1. Legacy Application Modernization

You have applications that cannot be containerized today — maybe they depend on specific kernel versions, require systemd, or run on Windows. Instead of maintaining a separate hypervisor cluster for these workloads, you run them as VMs on your existing Kubernetes infrastructure.

This lets you consolidate infrastructure immediately while giving your teams time to modernize those applications at their own pace. The VM workloads use the same CI/CD pipelines, the same monitoring stack, and the same network policies as everything else.

2. VMware Exit Strategy

Broadcom’s licensing changes have made VMware significantly more expensive for many organizations. If you are evaluating alternatives, KubeVirt offers a path that does not require buying into another proprietary hypervisor.

KubeVirt runs on commodity Linux servers with KVM support — no per-socket licensing, no enterprise agreements. Your operations team manages VMs through the Kubernetes API instead of learning yet another management plane. Tools like the Forklift project (another CNCF-backed project) can help migrate existing VMware VMs to KubeVirt.

Warning: Migrating from VMware to KubeVirt is not a weekend project. Production migrations require careful planning around storage, networking, and VM compatibility. Evaluate your workloads thoroughly before committing.

3. Mixed Workloads on a Single Platform

Some teams need VMs — data scientists running GPU-accelerated workloads with specific driver requirements, developers testing against multiple operating systems, or operations teams running network appliances that only ship as VM images. Other teams need containers.

KubeVirt gives you one platform for both. Your platform team manages one cluster, one set of infrastructure, and one set of policies. Teams that need VMs get VMs. Teams that need containers get containers. Everyone uses kubectl.

KubeVirt vs Other Approaches

| Approach | Pros | Cons | Best for |
| --- | --- | --- | --- |
| KubeVirt | Open source, Kubernetes-native, CNCF project, no license fees | Requires KVM-capable nodes, younger ecosystem | Teams already running Kubernetes |
| OpenShift Virtualization | Enterprise support, integrated with OpenShift | Requires OpenShift subscription | Red Hat shops needing vendor support |
| Proxmox | Mature, solid web UI, easy to set up | Not Kubernetes-native, separate management plane | Small teams, homelab, standalone virtualization |
| Traditional hypervisors (VMware, Hyper-V) | Proven at scale, feature-rich, deep ecosystem | Expensive licensing, separate tooling and teams | Enterprises locked into existing agreements |

If you are already running Kubernetes and want to bring VMs into that ecosystem, KubeVirt is the most direct path. If you need enterprise support and are in the Red Hat ecosystem, OpenShift Virtualization (which is built on KubeVirt) is worth evaluating. If you have no Kubernetes footprint and just need virtualization, Proxmox or a traditional hypervisor may be simpler.

Key Concepts

Before you start working with KubeVirt, you need to understand four core concepts — three custom resources and one CLI tool.

VirtualMachine (VM) is the persistent definition of your virtual machine. It includes the VM’s CPU, memory, disk, and network configuration. Think of it like a Deployment — it defines the desired state and survives restarts. When you set running: true on a VirtualMachine, KubeVirt creates the corresponding instance.

VirtualMachineInstance (VMI) is the running instance of a VM. It is analogous to a Pod — it represents the actual running workload and is ephemeral. When you stop a VirtualMachine, the VMI is deleted. When you start it again, a new VMI is created.

DataVolume manages the process of importing and preparing disk images for your VMs. It uses CDI under the hood. You point a DataVolume at a QCOW2 image URL, and it handles downloading the image, converting it, and writing it to a PersistentVolumeClaim that your VM can mount as a disk.
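The DataVolume flow described above can be sketched like this — the name, image URL, and storage size are placeholders, not values from this article:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-root-disk     # illustrative name
spec:
  source:
    http:
      url: https://example.com/images/fedora.qcow2  # placeholder QCOW2 image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi        # must be large enough for the uncompressed image
```

When this resource is created, CDI downloads the image, converts it as needed, and writes it into a PersistentVolumeClaim that a VM can then reference as a disk.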

virtctl is the KubeVirt CLI tool. While you can manage most VM lifecycle operations with kubectl, virtctl provides VM-specific commands that do not have a kubectl equivalent — things like opening a serial console (virtctl console), starting an SSH session (virtctl ssh), or live-migrating a VM between nodes (virtctl migrate).

Tip: You can install virtctl as a kubectl plugin via krew: kubectl krew install virt. This lets you use kubectl virt instead of virtctl.

Next Steps

Now that you understand what KubeVirt is and how it fits into the Kubernetes ecosystem, the next step is to set it up. Head to Installing KubeVirt on a Kubernetes Cluster to deploy KubeVirt on a test cluster and create your first virtual machine.

If you are specifically planning a migration away from VMware, keep an eye out for the VMware to KubeVirt migration planning guide later in this series.

Summary

KubeVirt extends Kubernetes to run virtual machines alongside containers, using the production-proven KVM/libvirt virtualization stack. It is a CNCF Incubating project with broad industry support and an active community. If you have workloads that cannot be containerized — legacy applications, Windows VMs, or specialized software — KubeVirt lets you manage them on the same platform, with the same tools, and through the same API as the rest of your infrastructure.