Understanding Kubernetes Networking, Load Balancers, Subnets, and MetalLB

Kubernetes networking often feels abstract when you first begin working with clusters, Services, and Ingress controllers. Many engineers are able to “make things work” without fully understanding how packets travel across machines, how load balancers actually assign IP addresses, or what role MetalLB plays in bare-metal deployments.
This guide removes the mystery. It begins with foundational networking concepts and gradually builds toward a complete, unified understanding of Kubernetes networking, external access, load balancers, ingress controllers, and MetalLB.
If you want a single resource that connects all of these concepts clearly, this article is designed to be your go-to reference.
The Fundamentals: What Is a Network?
A network is a group of connected machines that communicate by sending packets to each other. Every machine on the network must have a unique address so that packets know where to go.
That unique address is an IP address.
What Is an IP Address?
An IP address is a numerical identifier assigned to every device on a network. The most common format is IPv4, which looks like this:
192.168.1.50
Each IP address has two parts:
Network portion
Host portion
The network portion identifies which network the device belongs to.
The host portion identifies which specific device it is.
How do we know where one portion ends and the other begins?
That is decided by the subnet mask.
What Is a Subnet?
A subnet (sub-network) divides a large network into smaller logical networks.
Example:
192.168.1.0/24
Here:
/24 means the first 24 bits are the network portion.
The network contains:
192.168.1.0 (network address)
192.168.1.1 to 192.168.1.254 (usable host IPs)
192.168.1.255 (broadcast address)
This gives 254 usable IPs. So:
256 total addresses (0–255)
254 usable host addresses (1–254)
1 network address (.0)
1 broadcast address (.255)
A subnet tells devices where to send packets.
If an IP is inside the same subnet, communication is local.
If the IP is outside the subnet, traffic goes through a gateway.
Subnets define the boundaries within which Kubernetes nodes, pods, and services receive IP addresses.
Your home router typically gives IPs from this subnet to devices using DHCP.
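The subnet arithmetic above can be verified with Python's standard ipaddress module. A quick sketch using the example subnet:

```python
import ipaddress

# The example subnet from above
net = ipaddress.ip_network("192.168.1.0/24")

print(net.network_address)     # network address: 192.168.1.0
print(net.broadcast_address)   # broadcast address: 192.168.1.255
print(net.num_addresses)       # 256 total addresses
print(len(list(net.hosts())))  # 254 usable host addresses
```

Note that `hosts()` deliberately excludes the network and broadcast addresses, which is why it yields 254 rather than 256.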
How Are IPs Allocated in a Network?
There are two ways:
Static allocation
The IP is manually configured.
Example:
Your server is set to always use 192.168.1.200.
Dynamic allocation (DHCP)
Your router assigns IPs automatically.
Most home networks use DHCP for laptops, phones, TVs, etc.
Servers and Kubernetes nodes often use static IPs.
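On Linux nodes, a static IP is usually pinned in the node's network configuration rather than leased from the router. A hypothetical netplan snippet (Ubuntu) for the server above might look like this; the interface name eth0 and gateway address are assumptions:

```yaml
# Hypothetical netplan config pinning a static IP on a node
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.1.200/24        # static node IP, outside the DHCP range
      routes:
        - to: default
          via: 192.168.1.1        # the router / gateway
      nameservers:
        addresses: [192.168.1.1]
```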
How Kubernetes Uses IP Addresses
Kubernetes uses three layers of IP assignment:
1. Node IPs
These are normal IPs assigned by your network (your router or your cloud).
Examples:
192.168.1.10
192.168.1.11
2. Pod IPs
Assigned by Kubernetes CNI (Container Network Interface).
Pods must be reachable from any node.
They belong to the cluster's internal network, such as:
10.244.0.15
10.244.1.9
3. Service IPs
These are stable virtual IPs created by Kubernetes for Services.
Examples:
10.98.50.1
10.109.22.18
Pod IPs change as Pods are rescheduled.
Service IPs stay stable for the lifetime of the Service.
Services act as stable front doors that point to Pods.
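The three layers live in distinct CIDR ranges. A small sketch with Python's ipaddress module, using common defaults (flannel's 10.244.0.0/16 Pod CIDR and kubeadm's 10.96.0.0/12 Service CIDR — your cluster's ranges may differ):

```python
import ipaddress

# Hypothetical address spaces for the three layers
node_net = ipaddress.ip_network("192.168.1.0/24")  # LAN / node IPs
pod_net = ipaddress.ip_network("10.244.0.0/16")    # Pod CIDR (flannel default)
svc_net = ipaddress.ip_network("10.96.0.0/12")     # Service CIDR (kubeadm default)

print(ipaddress.ip_address("192.168.1.10") in node_net)  # True: a node IP
print(ipaddress.ip_address("10.244.0.15") in pod_net)    # True: a Pod IP
print(ipaddress.ip_address("10.98.50.1") in svc_net)     # True: a Service IP
```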
What Is a Kubernetes Service?
A Service groups Pods and exposes them in predictable ways.
Types of Services
ClusterIP (internal only)
NodePort (opens ports on each node)
LoadBalancer (gets a real external IP)
Headless Service (no virtual IP, direct Pod DNS)
For exposing applications outside the cluster, NodePort and LoadBalancer are relevant.
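As a minimal sketch, a LoadBalancer Service might look like the manifest below, assuming a hypothetical Deployment whose Pods carry the label app: app1 and listen on port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  type: LoadBalancer      # MetalLB or a cloud provider assigns the external IP
  selector:
    app: app1             # matches Pods labeled app: app1
  ports:
    - port: 80            # port exposed on the LoadBalancer IP
      targetPort: 8080    # container port inside the Pods
```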
What Is a Load Balancer?
A load balancer accepts traffic on one IP address and distributes it to backend servers (Pods).
There are two major kinds:
A. Cloud Load Balancers (AWS, Azure, GCP)
When you create:
type: LoadBalancer
The cloud provider:
Allocates a public IP
Creates a load balancer appliance
Forwards traffic to your Service
Example:
52.14.222.8 → Kubernetes Service → Pods
B. Bare-Metal Load Balancers (MetalLB)
On bare-metal or home labs you do not have a cloud provider.
Kubernetes cannot create an external LoadBalancer by itself.
This is where MetalLB comes in.
What Is MetalLB?
MetalLB implements LoadBalancer behavior for bare-metal clusters.
When you create:
type: LoadBalancer
MetalLB:
Picks an IP from a configured pool
Assigns it to the Service
Announces the IP on your local network using ARP or BGP
Your router and devices now believe that your cluster owns that IP.
How Does MetalLB Assign IPs?
You create an IPAddressPool, for example:
192.168.29.200 - 192.168.29.220
This is a range of 21 IP addresses.
MetalLB can therefore give external IPs to at most 21 LoadBalancer Services at the same time.
This has nothing to do with the number of nodes.
Nodes = compute
Pool = number of external Services
You could have:
1 node
100 nodes
500 nodes
The number of nodes does not change your available LoadBalancer IPs.
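In current MetalLB versions (v0.13+), the pool is declared with custom resources rather than a ConfigMap. A sketch matching the range above, assuming MetalLB is installed in the metallb-system namespace:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.29.200-192.168.29.220   # 21 external IPs
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default                          # announce this pool via ARP/NDP
```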
What MetalLB Actually Does
MetalLB is a load balancer implementation for bare-metal Kubernetes clusters.
MetalLB does not route traffic like a Layer 7 reverse proxy.
It does not do TLS termination or HTTP routing.
It simply:
assigns external IPs for LoadBalancer services
makes nodes answer ARP/NDP for those IPs
This allows devices on your LAN to send traffic to Kubernetes.
The Layer 2 Mode
Your configuration (shown here in the legacy ConfigMap format; MetalLB v0.13+ uses IPAddressPool and L2Advertisement custom resources instead):
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.29.200-192.168.29.220
MetalLB will:
take one IP from that range
respond on the network as if the node owns it
direct traffic toward the appropriate service
If you have 21 IPs in that range, you can have 21 LoadBalancer services, regardless of how many Kubernetes nodes exist.
What Happens When a LoadBalancer IP Is Assigned?
Example:
Your ingress-nginx Service gets:
EXTERNAL-IP: 192.168.29.200
MetalLB announces "192.168.29.200 is located here", either:
from the node where the nginx Pod is running (Layer 2/ARP mode), or
via BGP to your router (BGP mode)
When a client sends traffic to:
http://192.168.29.200
It reaches the correct node, then:
kube-proxy forwards it to the nginx Pod
nginx reads the host header
nginx forwards to the correct internal Service
Service load balances traffic to Pods
Pod responds back through the chain
The client receives the response
Why the Number of IPs Has No Relation to the Number of Nodes
Nodes have their own IPs from your LAN or DHCP server.
MetalLB uses completely separate IPs from the pool.
For example:
Your LAN might be 192.168.29.0/24
Your nodes may be 192.168.29.101, .102, .103
MetalLB pool might be 192.168.29.200-220
These ranges do not overlap and have different purposes.
Node IPs do not limit the number of LoadBalancer IPs.
LoadBalancer IPs do not limit the number of nodes.
They serve two different layers:
Nodes = physical cluster machines
External IPs = addresses for exposing services
In simpler terms,
Node count = how many machines run Pods
Pool size = how many external IPs are available for Services
You could have:
100 nodes
But only 5 LoadBalancer IPs in MetalLB
Then:
At most 5 Services can have external IPs
But 100 nodes can run thousands of Pods inside the cluster
They are independent resources.
What Is an Ingress Controller?
Ingress defines routing rules:
Example:
app1.example.com → service: app1
app2.example.com → service: app2
grafana.example.com → service: grafana
However, Ingress does not provide an IP.
It needs a Service to expose it.
This is why ingress-nginx uses:
kind: Service
type: LoadBalancer
MetalLB gives this Service an IP.
That single IP can route unlimited HTTP/HTTPS applications using hostnames.
This saves your IP pool.
Nginx Ingress Controller is essentially a Layer 7 HTTP/S reverse proxy inside your cluster.
You expose the Ingress Controller using a LoadBalancer service:
ingress-nginx-controller → MetalLB assigns 192.168.29.200
Your users hit 192.168.29.200, and Nginx:
receives HTTP/S traffic
routes it to services inside the cluster
handles hostnames, paths, TLS, rate limits, etc.
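A minimal sketch of an Ingress resource implementing the hostname routing from the example above, assuming the backend Services app1 and app2 exist and listen on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
spec:
  ingressClassName: nginx      # handled by the ingress-nginx controller
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1     # internal ClusterIP Service
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80
```

Both hostnames resolve to the single MetalLB IP; nginx tells them apart by the Host header.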
Full End-to-End Traffic Flow
Step 1: A client requests:
https://app1.example.com
Step 2: DNS resolves it to:
192.168.29.200
(This IP is provided by MetalLB)
Step 3: The packet reaches the node
MetalLB Speaker announced this IP via ARP or BGP.
Step 4: ingress-nginx Pod receives traffic
Its LoadBalancer Service forwards port 80 or 443 to nginx.
Step 5: nginx reads routing rules
Defined in Kubernetes Ingress resources.
Step 6: nginx forwards to the correct Service
Example:
app1-service → Pod(s)
Step 7: Pod responds
Traffic flows back through the same path to the client.
Why Ingress Saves IP Addresses
Without ingress:
10 Services exposed externally = 10 LoadBalancer IPs consumed
With ingress:
10 Services exposed externally = 1 LoadBalancer IP used
nginx handles routing internally
This is the reason most production setups use ingress controllers.
Practical Example IP Planning (Home Lab)
Suppose your subnet is:
192.168.29.0/24
Your router uses:
192.168.29.1
Your DHCP range:
192.168.29.50 - 192.168.29.150
You can choose:
192.168.29.200 - 192.168.29.220
This is:
outside the DHCP range
inside the same subnet
safe to use for MetalLB
This gives 21 external IPs.
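You can sanity-check a candidate pool before handing it to MetalLB. A small sketch that computes the pool size and confirms it does not collide with the DHCP range:

```python
import ipaddress

# Candidate MetalLB pool from the planning example
start = ipaddress.ip_address("192.168.29.200")
end = ipaddress.ip_address("192.168.29.220")

# DHCP range handed out by the router
dhcp_start = ipaddress.ip_address("192.168.29.50")
dhcp_end = ipaddress.ip_address("192.168.29.150")

pool_size = int(end) - int(start) + 1
overlaps_dhcp = not (int(end) < int(dhcp_start) or int(start) > int(dhcp_end))

print(pool_size)      # 21 external IPs
print(overlaps_dhcp)  # False: safe to use alongside DHCP
```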
Flow and Architecture
A. Network overview
            +----------------------+
            |     Your Router      |
            |     192.168.29.1     |
            +----------+-----------+
                       |
               Local Network (L2)
                       |
      -----------------------------------
      |                |                |
+-------------+  +-------------+  +-------------+
| Kubernetes  |  | Kubernetes  |  | Kubernetes  |
|   Node 1    |  |   Node 2    |  |   Node 3    |
|192.168.29.10|  |192.168.29.11|  |192.168.29.12|
+------+------+  +------+------+  +------+------+
       |                |                |
         MetalLB Speaker on all nodes
                       |
     Announces IPs such as 192.168.29.200
B. Ingress routing
Client → 192.168.29.200 → ingress-nginx → app1-service → app1 Pods
                                        ↳ app2-service → app2 Pods
                                        ↳ grafana-service → grafana Pods
Understanding Load Balancers in General
A load balancer distributes incoming network traffic across multiple targets.
There are two broad categories:
Layer 4 Load Balancers
These operate at the connection level (TCP/UDP).
They see ports and IP addresses only.
Examples:
MetalLB
AWS NLB
HAProxy in TCP mode
Layer 7 Load Balancers
These operate at the application layer (HTTP/S).
They understand paths, headers, cookies, hostnames.
Examples:
Nginx Ingress Controller
Traefik
AWS ALB
In Kubernetes, it is common to use both:
MetalLB for Layer 4 external IP allocation
Nginx for Layer 7 routing
How Everything Connects Together
Here is the conceptual hierarchy:
Level 0 – The Network
You have a subnet such as 192.168.29.0/24
The subnet defines the address space for your LAN
Nodes receive IPs from this range
Level 1 – Kubernetes
Pods get IPs from the Pod CIDR
Services get Cluster IPs
Nodes route traffic internally via CNI
Level 2 – MetalLB
Provides external IPs from a dedicated pool
These IPs map to LoadBalancer services
MetalLB advertises these IPs at Layer 2
Level 3 – Ingress
Receives HTTP/S traffic at the MetalLB IP
Routes requests to internal services and pods
Handles hostnames, TLS, etc.
Level 4 – Your Applications
Finally receive traffic that originated outside the cluster
This layered architecture is what makes Kubernetes networking powerful, scalable, and modular.
Summary Table
| Component | Purpose | IP Source | Layer |
| --- | --- | --- | --- |
| Node IP | Identify physical machines | LAN/DHCP | Layer 3 |
| Pod IP | Identify individual containers | Pod CIDR | Layer 3 |
| Service IP | Internal virtual service endpoints | Cluster CIDR | Layer 3 |
| MetalLB IP | External access IPs for services | MetalLB pool | Layer 2 |
| Ingress Controller | Routes HTTP/S traffic | Behind MetalLB | Layer 7 |
Conclusion
Kubernetes networking is far easier to understand when each layer is viewed separately and then combined into a complete model. Nodes receive IPs from your network. Pods receive internal IPs from Kubernetes. Services act as stable access points. Load balancers provide external connectivity. MetalLB brings cloud-style load balancers to bare-metal clusters. It does not limit how many nodes you can have. Ingress controllers consolidate routing so that many applications can share one external IP. Your IP pool only limits the number of LoadBalancer services you can expose, not the number of worker nodes, pods, or applications.
With these concepts understood together, you gain complete control over how your workloads are exposed and how your cluster interacts with the outside world.






