Running Talos Kubernetes Behind a Pangolin Tunnel

Most Kubernetes guides assume your cluster nodes are directly reachable: static public IP, clean network, wide-open ports, the usual. But sometimes life is not that simple:

- Your cluster is running at home, behind NAT
- Your home ISP gives you no public IP
- Your cluster needs to be reachable from the cloud (CI/CD, GitOps, remote access)
- You want secure, encrypted access without exposing your whole LAN
That was exactly my situation and that’s where Pangolin Tunnel + Talos proved to be the perfect combination!
The Setup

Why Talos?
Talos is a minimal, immutable, API-driven operating system for Kubernetes.
No SSH. No shell. No package manager.
Everything is configured via talosctl.
This makes it perfect for remote, locked-down nodes because:
- Managing security becomes ridiculously simple
- Configuration is declarative and version-controlled
- If something breaks, you replace, not repair!

After spinning up Talos in a VM, the node sits in maintenance mode until it is configured remotely with talosctl.
Why Pangolin?
Pangolin Tunnel creates secure, relay-assisted overlay connections between machines without needing to open your network to the internet or struggle with port forwarding.
Think of it like WireGuard with NAT traversal + automatic routing + hostname access.
In our setup:
| Port | Purpose | Where it goes |
| --- | --- | --- |
| 6443 | Kubernetes API server | Talos VM (internal) |
| 50000 | Talos machine API | Talos VM (internal) |
The internal IP never becomes public: Pangolin handles the transport.
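Before touching Talos, it helps to confirm the tunnel actually forwards both ports. Here is a minimal sketch; the `check_port` helper is my own convenience function, not part of Pangolin or Talos:

```shell
# check_port: test whether a TCP port is reachable through the tunnel.
# Hypothetical helper for this guide, not part of any tool's CLI.
check_port() {
  host=$1
  port=$2
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} on ${host} is reachable"
  else
    echo "port ${port} on ${host} is NOT reachable"
    return 1
  fi
}

# Usage (hostname from this guide):
#   check_port mytunnel.nyzex.in 6443    # Kubernetes API server
#   check_port mytunnel.nyzex.in 50000   # Talos machine API
```

If either port reports unreachable, fix the tunnel before going any further; every talosctl command below depends on it.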
Steps that I followed:
0. Install talosctl
brew install siderolabs/tap/talosctl
# or
curl -sL https://talos.dev/install | sh
1. Configure Talosctl to Use the Public Endpoint
talosctl config endpoint mytunnel.nyzex.in
talosctl config node mytunnel.nyzex.in
2. Generate the Cluster Configuration

talosctl gen config mycluster https://mytunnel.nyzex.in:6443
This creates:
controlplane.yaml
worker.yaml
talosconfig
3. Add Your Public Hostname to TLS SANs
Open controlplane.yaml:
machine:
  certSANs:
    - mytunnel.nyzex.in

We can preset the SANs at generation time as well, which eliminates the need to edit the file. For that I used the following:
talosctl gen config mycluster https://mytunnel.nyzex.in:6443 \
--output ./cluster \
--additional-sans mytunnel.nyzex.in \
--force
In this command I also pointed the output files at a specific directory (./cluster).
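For repeatability, steps 2 and 4 can be wrapped in one small script. This is just a sketch of my workflow, not an official tool; it defaults to printing the commands (DRY_RUN=1) so you can review them first, and executes them for real with DRY_RUN=0:

```shell
#!/usr/bin/env bash
# Sketch: generate the cluster config and apply it in one go
# (hostname and output path from this guide).
# DRY_RUN defaults to 1: commands are printed, not executed.
set -eu

HOST="mytunnel.nyzex.in"
OUT="./cluster"

# run: echo the command in dry-run mode, otherwise execute it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run talosctl gen config mycluster "https://${HOST}:6443" \
  --output "${OUT}" --additional-sans "${HOST}" --force
run talosctl apply-config --insecure --nodes "${HOST}" \
  --file "${OUT}/controlplane.yaml"
```

Run it once with the default dry run to sanity-check the commands, then again with DRY_RUN=0 to actually generate and apply the config.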
4. Apply the Config
talosctl apply-config --insecure --nodes mytunnel.nyzex.in --file controlplane.yaml
The node will reboot automatically.
5. Bootstrap the Cluster
talosctl bootstrap -n mytunnel.nyzex.in
This initializes etcd and the Kubernetes control plane.

After some time `talosctl health -n mytunnel.nyzex.in` reports everything healthy and we are good to go!

We also see the logs in the Talos VM become healthy now!

6. Merge Config and Verify
talosctl config merge talosconfig
talosctl config endpoint mytunnel.nyzex.in
talosctl config node mytunnel.nyzex.in
talosctl config info
If the certificate matches, you are good to go!

talosctl version -n mytunnel.nyzex.in
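To double-check that the SAN from step 3 actually made it into the serving certificate, you can inspect it through the tunnel with openssl. This is a generic TLS check, not Talos-specific, and it needs OpenSSL 1.1.1+ for the `-ext` flag:

```shell
# Print the subjectAltName extension of the certificate served on the
# Kubernetes API port; fall through with a message if the tunnel is down.
openssl s_client -connect mytunnel.nyzex.in:6443 \
  -servername mytunnel.nyzex.in </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName \
  || echo "could not fetch certificate (is the tunnel up?)"
```

You should see mytunnel.nyzex.in listed among the DNS entries; if it is missing, revisit the certSANs step before continuing.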

7. Get Your Kubeconfig
talosctl kubeconfig -n mytunnel.nyzex.in .
export KUBECONFIG=$(pwd)/kubeconfig
Verify:
kubectl get nodes
kubectl get pods -A
Output should look like:
NAME            STATUS   ROLES           AGE     VERSION
talos-1lh-tsx   Ready    control-plane   2m24s   v1.34.1

We are yet to add the worker nodes to this, perhaps we shall talk about it in a different blog!
Done! Secure Remote Kubernetes, No Public LAN Exposure
You now have:

- A Talos control plane
- Accessible remotely
- Over a secure encrypted overlay
- Without exposing your home network
- Without static IP requirements
- Without painful firewall rules
This setup is super homelab-friendly, with production-grade security.
