Building Your Own Home Kubernetes Cluster with k0s and Remote Access

Kubernetes is the powerhouse of modern container orchestration, but setting it up at home or on minimal infrastructure can feel daunting. In this blog, I will walk you through creating a lightweight, fully functional Kubernetes cluster using k0s, complete with a control plane and a worker node, and making it accessible remotely via a Pangolin tunnel.
By the end, you will have a cluster you can experiment on from anywhere.
Why k0s?
k0s is a lightweight, all-in-one Kubernetes distribution that simplifies the setup process:
- Single binary for both the control plane and the worker.
- Minimal resource usage: ideal for home servers or VMs.
- Easy to manage, yet fully compliant with the Kubernetes APIs.
- Perfect for learning, experimentation, or small production projects.
This makes it ideal for our goal: a home lab cluster with remote access.
Step 1: Setting Up the Control Plane
The control plane is the “brain” of the Kubernetes cluster: it manages nodes, schedules workloads, and exposes the API server.
Install k0s
sudo apt update && sudo apt install curl -y
curl -sSLf https://get.k0s.sh | sudo bash
k0s version
Next, install the controller and start it:
sudo k0s install controller
sudo k0s start
sudo k0s status
Output example:
Version: v1.34.1+k0s.1
Role: controller
Workloads: false
SingleNode: false
This confirms the control plane is running.
Step 2: Configure kubectl on the Control Plane
To interact with Kubernetes, we need kubectl, the CLI tool.
- Generate your kubeconfig:
mkdir -p ~/.kube
sudo k0s kubeconfig admin > ~/.kube/config
chmod 600 ~/.kube/config
- Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
- Verify the cluster is reachable:
kubectl get nodes
At this point, you have a single-node control plane ready.
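As an optional convenience before moving on, you can set up shell completion and a short alias for kubectl. A small sketch, assuming a bash shell (the lines are simply appended to your ~/.bashrc):

```shell
# Optional: enable bash completion and a short "k" alias for kubectl
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```

Open a new shell (or `source ~/.bashrc`) for the changes to take effect.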
Step 3: Add a Worker Node
The worker node is where your workloads (pods, deployments, services) will actually run.
- On the control plane, generate a token for the worker:
sudo k0s token create --role=worker
- On the worker machine:
sudo apt update && sudo apt install curl -y
curl -sSLf https://get.k0s.sh | sudo bash
nano tokenfile # paste the token from control plane
sudo k0s install worker --token-file tokenfile
sudo k0s start
sudo systemctl enable --now k0sworker
sudo journalctl -fu k0sworker
- Back on the control plane, verify the worker joined:
kubectl get nodes -o wide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu-workernode Ready <none> 20m v1.34.1+k0s 192.168.29.39 <none> Ubuntu 24.04.3 LTS 6.8.0-87-generic containerd://1.7.28
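To confirm the worker can actually run pods, you can deploy a throwaway workload. A minimal sketch (the name nginx-test and the image tag are illustrative):

```yaml
# nginx-test.yaml — a throwaway Deployment to verify scheduling on the worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80
```

Apply it with kubectl apply -f nginx-test.yaml, then kubectl get pods -o wide should show the pods running on the worker node. Clean up afterwards with kubectl delete -f nginx-test.yaml.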
Step 4: Accessing Your Cluster Remotely
One of the most exciting parts is accessing your cluster from outside your network. For this, we will use a Pangolin tunnel to expose the control plane.
You can follow my previous blog regarding Pangolin setup:
https://blog.nyzex.in/self-hosting-pangolin-newt-on-your-own-server
- Copy your kubeconfig to your remote machine:
# First, generate the kubeconfig file on the control-plane VM
sudo k0s kubeconfig admin > ~/.kube/config
chmod 600 ~/.kube/config
cat ~/.kube/config
- Update the server: field to your Pangolin hostname (after copying this config to your remote machine):
clusters:
- cluster:
server: https://tunnel.nyzex.in:6443
insecure-skip-tls-verify: true
name: local
Note: insecure-skip-tls-verify: true bypasses the TLS hostname check, since our certificate only covers internal names. This is fine for a personal lab, but not recommended for production.
- Set your kubeconfig and verify:
export KUBECONFIG=$(pwd)/kubeconfig # assumes the copied file is saved as "kubeconfig" in the current directory
kubectl get nodes -o wide
You should see both the control plane and worker node, now accessible remotely.
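If you rebuild the cluster often, rewriting the server: field by hand gets tedious. A small sketch using sed, assuming the copied file is named kubeconfig and using the tunnel.nyzex.in hostname from above (the internal IP is illustrative):

```shell
# Sample kubeconfig fragment to edit (illustrative values)
cat > kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://192.168.29.38:6443
  name: local
EOF

# Point the server at the Pangolin hostname
sed -i 's|server: https://.*|server: https://tunnel.nyzex.in:6443|' kubeconfig

grep 'server:' kubeconfig  # should now show the tunnel hostname
```

You would still add insecure-skip-tls-verify: true under the cluster: entry by hand, as shown in the snippet earlier.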
Step 5: How It Works
Here’s a simple view of the setup:

- Control Plane: API server and cluster management.
- Worker Node: runs workloads.
- Remote Machine: access via Pangolin tunnel.
What to do next?
- For production-grade security, generate a certificate that includes your external hostname instead of skipping TLS verification.
- Add more workers to scale your cluster.
- Deploy your first workloads and explore Kubernetes features.
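For the certificate route, k0s can include extra names in the API server certificate through its config file. A sketch of the relevant fragment, assuming the tunnel.nyzex.in hostname from above (check the k0s documentation for the full ClusterConfig schema), passed at install time with sudo k0s install controller --config k0s.yaml:

```yaml
# k0s.yaml (fragment) — add the external hostname to the API server certificate
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  api:
    externalAddress: tunnel.nyzex.in  # advertised to clients and added to the cert
    sans:
      - tunnel.nyzex.in               # extra Subject Alternative Names
```

With the external hostname in the certificate's SANs, you can drop insecure-skip-tls-verify: true from the remote kubeconfig.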
Conclusion
With a few steps, you now have a home Kubernetes lab:
- Control plane + worker node cluster.
- Remote kubectl access via Pangolin tunnel.
- Fully functional, ready to deploy workloads.
This setup is perfect for experimenting with Kubernetes, testing CI/CD pipelines, or just learning cluster management hands-on.