
Installing KubeOne: Your First Cluster in 15 Minutes

Abubakar Siddiq Ango, Senior Developer Advocate
Mar 17, 2026

Prerequisites

  • One server or VM with Ubuntu 22.04+ (minimum 2 vCPU, 4 GB RAM)
  • SSH key-based access to the server
  • kubectl installed on your local machine
  • Basic understanding of KubeOne — see What is KubeOne?

Introduction

In this tutorial, you will install KubeOne and use it to provision a single-node Kubernetes cluster on any server you can SSH into. This is the fastest way to get a working cluster with KubeOne — no cloud provider account required, no Terraform, no load balancer setup. Just a server, an SSH key, and a YAML file.

Production setups use three control plane nodes for high availability. Starting with one node gets you familiar with the workflow without the infrastructure overhead. Once you understand how KubeOne works, scaling to a full HA cluster is a matter of adding more hosts to the manifest.

By the end of this tutorial, you will have a running Kubernetes cluster, a deployed test workload, and a solid understanding of KubeOne’s apply-based workflow.

Step 1: Install KubeOne

KubeOne ships as a single static binary with no external dependencies. The fastest way to install it is with the official install script:

curl -sfL https://get.kubeone.io | sh

This downloads the latest stable release and places the kubeone binary in /usr/local/bin. Verify the installation:

kubeone version

You should see output showing the KubeOne version, the Go version it was compiled with, and your operating system. Any recent version (1.7+) will work for this tutorial.

If you prefer to install a specific version, download it directly from the GitHub releases page. Each release includes binaries for Linux (amd64, arm64), macOS, and Windows.

Tip: KubeOne is a single static binary with no dependencies. It works on Linux, macOS, and Windows (WSL). You do not need Docker, Helm, or any other tool installed locally — just KubeOne and kubectl.

Step 2: Prepare Your Server

You need one server that meets these requirements:

  • Operating system: Ubuntu 22.04 or later (Debian, CentOS, and other distributions also work, but this tutorial uses Ubuntu)
  • Resources: At least 2 vCPU and 4 GB RAM
  • Network: A public IP address reachable from your local machine
  • Access: SSH key-based authentication configured for a user with sudo privileges

This can be a cloud VM from any provider — DigitalOcean, Hetzner, AWS EC2, Google Compute Engine, Azure — or a local VM created with Multipass, Vagrant, or VirtualBox. The only requirement is that you can reach it over SSH.

Verify that your SSH connection works:

ssh -i ~/.ssh/your-key ubuntu@YOUR_SERVER_IP "hostname"

Replace ~/.ssh/your-key with the path to your private SSH key, ubuntu with the SSH user on your server, and YOUR_SERVER_IP with the actual IP address. If this command prints the server’s hostname, you are ready to proceed.

Warning: KubeOne requires passwordless SSH key authentication. Password-based SSH will not work. If you are using a cloud provider, most create VMs with SSH key access by default. For local VMs, generate a key pair with ssh-keygen and copy the public key to the server with ssh-copy-id.
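As a concrete sketch of that key setup (the key path is an example, not a value this tutorial depends on):

```shell
# Generate a dedicated ed25519 key pair for the cluster, with no passphrase.
# The path below is an example; adjust to taste.
KEY="$HOME/.ssh/kubeone-tutorial"
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$KEY" -N "" -q

# Then copy the public key to the server (replace the placeholders first):
# ssh-copy-id -i "$KEY.pub" ubuntu@YOUR_SERVER_IP
```

After the copy step, the `ssh ... "hostname"` check above should succeed without prompting for a password.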

Make sure the following ports are open on the server’s firewall or security group:

| Port        | Protocol | Purpose                        |
| ----------- | -------- | ------------------------------ |
| 22          | TCP      | SSH access for KubeOne         |
| 6443        | TCP      | Kubernetes API server          |
| 30000-32767 | TCP      | NodePort services (for testing)|

If your server is behind a cloud security group, add rules to allow inbound traffic on these ports from your local IP address.
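To sanity-check reachability before provisioning, a small helper using bash's built-in /dev/tcp pseudo-device can probe each port; this is a sketch, and YOUR_SERVER_IP is a placeholder:

```shell
# Returns success if a TCP connection to HOST:PORT succeeds within 3 seconds.
# Uses bash's /dev/tcp redirection, so it needs bash (not plain sh).
check_port() {
  timeout 3 bash -c "</dev/tcp/$1/$2" 2>/dev/null
}

# Probe the ports KubeOne and kubectl need (replace YOUR_SERVER_IP first):
# for p in 22 6443; do
#   check_port YOUR_SERVER_IP "$p" && echo "port $p open" || echo "port $p closed"
# done
```

Port 6443 only opens once the API server is running, so before provisioning, only port 22 is expected to answer.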

Step 3: Create the KubeOneCluster Manifest

The KubeOneCluster manifest is a YAML file that describes your desired cluster state. KubeOne reads this file and handles everything needed to make the cluster match what you declared.

Create a file called kubeone.yaml in a new directory on your local machine:

mkdir my-first-cluster && cd my-first-cluster

Then create the manifest:

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: my-first-cluster

versions:
  kubernetes: "v1.30.2"

cloudProvider:
  none: {}

controlPlane:
  hosts:
    - publicAddress: "YOUR_SERVER_IP"
      privateAddress: "YOUR_SERVER_IP"
      sshUser: "ubuntu"
      sshPrivateKeyFile: "~/.ssh/your-key"

Replace YOUR_SERVER_IP with your server’s actual IP address, ubuntu with your SSH user, and ~/.ssh/your-key with the path to your SSH private key.
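If you prefer not to hand-edit placeholders, the same manifest can be generated from shell variables with a heredoc; this is a sketch, and the IP below is a documentation placeholder:

```shell
# Example values; replace with your real IP, SSH user, and key path.
SERVER_IP="203.0.113.10"        # placeholder (TEST-NET-3 documentation range)
SSH_USER="ubuntu"
SSH_KEY="~/.ssh/your-key"

# Write kubeone.yaml with the variables substituted in.
cat > kubeone.yaml <<EOF
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: my-first-cluster

versions:
  kubernetes: "v1.30.2"

cloudProvider:
  none: {}

controlPlane:
  hosts:
    - publicAddress: "${SERVER_IP}"
      privateAddress: "${SERVER_IP}"
      sshUser: "${SSH_USER}"
      sshPrivateKeyFile: "${SSH_KEY}"
EOF
```

Either way, the resulting file is identical to the manifest above.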

Here is what each field does:

  • versions.kubernetes: The exact Kubernetes version to install. KubeOne supports all actively maintained Kubernetes releases. Check the compatibility matrix for the full list.
  • cloudProvider.none: Tells KubeOne there is no cloud provider integration. This is the right choice when you are working with bare metal servers, local VMs, or any infrastructure where you do not need cloud-specific features like automatic load balancers or persistent volume provisioning.
  • controlPlane.hosts: The list of servers that will run the Kubernetes control plane. For this tutorial, there is one host. The publicAddress is the IP that KubeOne connects to over SSH. The privateAddress is the IP that Kubernetes components use to communicate with each other. On a single server with one network interface, these are the same IP.

This manifest is your single source of truth for the cluster. Version control it alongside your other infrastructure files.
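For reference, scaling this manifest to the three-node HA control plane mentioned earlier is mostly a matter of listing more hosts. The fragment below is a sketch with placeholder IPs; an HA setup also needs a load balancer in front of the API server, which KubeOne is told about via apiEndpoint:

```yaml
# Sketch only: placeholder IPs and hostname. A real HA cluster needs a
# load balancer forwarding TCP 6443 to all control plane nodes.
apiEndpoint:
  host: "lb.example.com"
  port: 6443

controlPlane:
  hosts:
    - publicAddress: "203.0.113.11"
      privateAddress: "203.0.113.11"
      sshUser: "ubuntu"
      sshPrivateKeyFile: "~/.ssh/your-key"
    - publicAddress: "203.0.113.12"
      privateAddress: "203.0.113.12"
      sshUser: "ubuntu"
      sshPrivateKeyFile: "~/.ssh/your-key"
    - publicAddress: "203.0.113.13"
      privateAddress: "203.0.113.13"
      sshUser: "ubuntu"
      sshPrivateKeyFile: "~/.ssh/your-key"
```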

Step 4: Provision the Cluster

With the manifest ready, run:

kubeone apply --manifest kubeone.yaml

KubeOne will ask you to confirm the operation. Type yes to proceed. Here is what happens next:

  1. SSH connection — KubeOne connects to the server using the credentials in your manifest.
  2. Container runtime — It installs and configures containerd as the container runtime.
  3. Kubernetes components — It installs kubeadm, kubelet, and kubectl on the server.
  4. Cluster bootstrap — It runs kubeadm init to initialize the Kubernetes control plane, including etcd, the API server, controller manager, and scheduler.
  5. CNI installation — It deploys Canal (Calico + Flannel) as the Container Network Interface plugin, which handles pod-to-pod networking.
  6. Machine controller — It deploys the Kubermatic machine-controller, which manages worker nodes. For this single-node setup you will not use it, but it is available for future expansion.
  7. Kubeconfig generation — It creates a kubeconfig file for accessing the cluster from your local machine.

The entire process takes 3 to 5 minutes, depending on your server’s internet speed and resources. KubeOne prints detailed output as it works through each step, so you can follow along and see exactly what is happening.

When the process completes, KubeOne creates a file called my-first-cluster-kubeconfig in your current directory. This file contains the credentials and endpoint information needed to connect to your cluster.
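Step 6 of the list above mentioned future expansion. One way to grow this cluster later without a cloud provider is KubeOne's static worker support: declare the extra machine in the manifest and re-run apply. A sketch, with a placeholder IP:

```yaml
# Sketch only: a second server you can SSH into, declared as a static worker.
staticWorkers:
  hosts:
    - publicAddress: "203.0.113.20"
      privateAddress: "203.0.113.20"
      sshUser: "ubuntu"
      sshPrivateKeyFile: "~/.ssh/your-key"
```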

Step 5: Access Your Cluster

Set the KUBECONFIG environment variable to point to the generated kubeconfig:

export KUBECONFIG=$(pwd)/my-first-cluster-kubeconfig

Now verify that you can reach the cluster:

kubectl get nodes

You should see output similar to this:

NAME              STATUS   ROLES           AGE   VERSION
your-server       Ready    control-plane   2m    v1.30.2

The Ready status means the node is healthy and accepting workloads. The control-plane role confirms this is the control plane node.

Check that all system components are running:

kubectl get pods -A

You should see pods in the Running state for these components:

  • coredns — cluster DNS
  • canal (or calico/flannel) — pod networking
  • kube-proxy — service networking
  • kube-apiserver — the Kubernetes API
  • kube-controller-manager — reconciliation loops
  • kube-scheduler — pod scheduling
  • etcd — cluster state storage

If any pod is not in Running state, wait a minute and check again. Some components take a moment to pull their container images and start up.

Step 6: Deploy a Test Workload

With the cluster running, deploy a simple nginx web server to confirm everything works end to end:

kubectl create deployment nginx --image=nginx:latest --replicas=2

This creates a Deployment with two nginx pods. Check that the pods are running:

kubectl get pods

Both pods should reach Running status within 30 seconds. On a single-node cluster, both pods run on the same node — this is expected.

Expose the deployment as a NodePort service so you can access it from outside the cluster:

kubectl expose deployment nginx --port=80 --type=NodePort

Find the assigned NodePort:

kubectl get svc nginx

The output will show a port mapping like 80:31234/TCP. The number after the colon (31234 in this example) is the NodePort. Access the nginx welcome page using your server’s IP and this port:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://YOUR_SERVER_IP:${NODE_PORT}

You should see the default nginx welcome page HTML. This confirms that the cluster is running, pod networking works, and services are correctly routing traffic from the node to the pods.

Tip: In a production setup, you would use three control plane nodes for high availability. A single control plane node is fine for development and learning, but a failure of that node means the entire cluster is unavailable. See Provisioning Bare Metal Kubernetes with KubeOne and Terraform for a full HA setup.

Step 7: Understand the KubeOne Workflow

Now that you have a running cluster, it is worth understanding how KubeOne manages it going forward. This is where KubeOne differs from tools that only handle initial installation.

Idempotent Apply

KubeOne uses an idempotent apply model. You can run kubeone apply as many times as you want, and it will converge to the desired state described in your manifest. If the cluster already matches the manifest, KubeOne does nothing. If something has drifted — a different Kubernetes version, a missing component, a misconfigured setting — KubeOne fixes it.

This means your manifest is always the source of truth. Change the manifest, run apply, and KubeOne figures out what needs to happen.

Upgrading Kubernetes

To upgrade your cluster to a newer Kubernetes version, edit kubeone.yaml and change the version:

versions:
  kubernetes: "v1.31.0"

Then run:

kubeone apply --manifest kubeone.yaml

KubeOne handles the upgrade process: it drains the node, upgrades the control plane components, upgrades kubelet, and uncordons the node. For multi-node clusters, it performs this as a rolling upgrade — one node at a time — to maintain availability.

Checking Cluster Status

To see the current state of your cluster without making any changes:

kubeone status --manifest kubeone.yaml

This reports the Kubernetes version running on each node, node health, and whether the cluster matches the manifest.

Tearing Down the Cluster

If you want to remove the cluster entirely:

kubeone reset --manifest kubeone.yaml

Warning: kubeone reset destroys the cluster and removes all Kubernetes components from the server. All workloads, data, and configuration are lost. The server itself is not deleted — only the Kubernetes installation is removed. Use this with caution.

Troubleshooting Common Issues

SSH connection timeout

Verify that the server’s firewall allows inbound SSH on port 22. Double-check that the SSH key path in your manifest matches an authorized key on the server. Run the SSH command from Step 2 manually to isolate the issue.

Cluster provisioning fails at container runtime

KubeOne needs to pull container images from the internet during provisioning. If your server is behind a corporate proxy or has restricted outbound access, configure the proxy settings in the KubeOneCluster manifest under the proxy section:

proxy:
  http: "http://proxy.example.com:8080"
  https: "http://proxy.example.com:8080"
  noProxy: "localhost,127.0.0.1,10.0.0.0/8"

kubectl cannot connect after provisioning

Make sure the server’s firewall allows inbound traffic on port 6443, which is the Kubernetes API server port. If you are using a cloud provider, check the security group or firewall rules attached to the VM. Also verify that the publicAddress in your manifest is the correct IP.

Pods stuck in Pending state

On a single-node setup, the control plane node must also run regular workloads. KubeOne removes the default control plane taint for single-node clusters, but if pods remain in Pending state, inspect the node for issues:

kubectl describe node

Look for conditions like DiskPressure, MemoryPressure, or PIDPressure. These indicate the server does not have enough resources. For a single-node cluster running test workloads, 2 vCPU and 4 GB RAM is the minimum — more is better.

Clean Up

If you are done experimenting and want to clean up:

kubeone reset --manifest kubeone.yaml

This removes Kubernetes from the server. If you created a cloud VM specifically for this tutorial, delete it through your cloud provider’s console or CLI to stop incurring charges.

Next Steps

You now have the foundation to work with KubeOne. The natural next step is a highly available, multi-node setup; see Provisioning Bare Metal Kubernetes with KubeOne and Terraform for a full HA walkthrough, and revisit What is KubeOne? for a broader overview of where the tool fits.

Summary

You installed KubeOne, created a KubeOneCluster manifest targeting a single server, and provisioned a working Kubernetes cluster in minutes. KubeOne handled the container runtime installation, kubeadm bootstrapping, CNI configuration, and kubeconfig generation — all from one command.

The key takeaway is KubeOne’s workflow: describe the desired state in a YAML manifest, run kubeone apply, and KubeOne makes it happen. The same command handles initial installation, configuration changes, and Kubernetes version upgrades. This is the pattern you will use whether you are managing one node for development or fifty nodes across multiple data centers.