Understanding Pod Evictions: Why Kubernetes Removes Your Pods And How To Prevent It

Running applications on Kubernetes usually feels smooth and stable until one day a pod suddenly disappears. Kubernetes calls this process eviction, and it happens when the cluster decides that removing a pod is the safest way to protect the node or the rest of the workload. Pod evictions can feel mysterious at first, especially when they strike during heavy traffic or in the middle of important workloads. Fortunately, eviction patterns are predictable once you understand what triggers them and how to prevent them.
This guide explains why pod evictions happen, how to identify the root cause, and the practical steps you can follow to keep your pods running reliably. Everything is supported with real Kubernetes commands, YAML examples, and explainers that make the entire topic easy to apply in your own cluster.
What Exactly Is A Pod Eviction
A pod eviction happens when the kubelet removes a pod from its node because the node is under resource pressure (evictions can also be initiated through the API, for example during a node drain). This is not the same as a pod crashing. Eviction is a deliberate decision taken by Kubernetes to protect node stability.
When an eviction happens, you will usually see events such as:
The node had disk pressure
The node had memory pressure
The node had PID pressure
The node was unreachable
Evicting Pod due to node condition
Pods that are part of a Deployment will be recreated on another node, but single-node clusters or clusters with insufficient capacity often suffer downtime as a result.
The Main Reasons Why Pods Get Evicted
Although Kubernetes can report several types of pressure, these three are the most common causes.
Memory Pressure
This is the most frequent reason for evictions. When a node runs low on memory, the kubelet starts removing pods, beginning with those that use more memory than they requested and those with lower priority.
Common signs:
Eviction messages mentioning memory pressure
OOMKilled events
Node metrics showing high memory usage
Disk Pressure
This happens when the node runs low on either free disk space or free inodes. Logging, large ephemeral storage usage, container images, and runaway volumes often cause this.
Signs include:
Events mentioning disk pressure
Eviction messages citing low disk availability
PID Pressure
This occurs when a node runs out of process identifiers (PIDs). Too many running processes or badly designed sidecars that fork uncontrollably can trigger this.
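All three pressure signals are driven by kubelet eviction thresholds. As a rough sketch (the values below are illustrative, not recommendations), a KubeletConfiguration can set the hard-eviction limits for each signal:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"   # evict when free memory drops below this
  nodefs.available: "10%"     # free disk space on the node filesystem
  nodefs.inodesFree: "5%"     # free inodes on the node filesystem
  pid.available: "10%"        # free process identifiers
```

When any of these thresholds is crossed, the kubelet marks the corresponding node condition and begins evicting pods.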
How To Inspect Pod Evictions In Your Cluster
Checking Events
Events provide the first clue. Run:
kubectl get events --sort-by=.lastTimestamp
Evicted pods usually report messages like:
Evicted: The node had memory pressure
Evicted: The node had disk pressure
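To cut the noise down to evictions only, the event output can be filtered. The snippet below runs against a hypothetical sample of event output so it is reproducible anywhere; on a live cluster you would pipe `kubectl get events` into the same grep.

```shell
# Hypothetical excerpt of `kubectl get events --sort-by=.lastTimestamp` output.
events='2m   Warning   Evicted   pod/web-7f9c     The node had memory pressure
5m   Normal    Pulled    pod/api-6b2d     Container image already present
9m   Warning   Evicted   pod/job-runner   The node had disk pressure'

# Keep only eviction events; against a live cluster:
#   kubectl get events --sort-by=.lastTimestamp | grep Evicted
printf '%s\n' "$events" | grep Evicted
```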
Describing The Pod
Even after eviction, the history remains:
kubectl describe pod <pod-name> -n <namespace>
Look for the “Status” and “Last State” sections. They usually contain a reason such as:
Reason: Evicted
Message: The node had memory pressure
Checking Node Conditions
Nodes show their pressure state clearly:
kubectl describe node <node-name>
You might see conditions like:
MemoryPressure True
DiskPressure True
PIDPressure False
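The active conditions can also be pulled out mechanically. The snippet below works on a hypothetical sample of the Conditions section so it runs without a cluster; against a live cluster, feed it the output of `kubectl describe node <node-name>`.

```shell
# Hypothetical "Conditions" excerpt from `kubectl describe node` output.
conditions='MemoryPressure   True
DiskPressure     True
PIDPressure      False'

# Print only the pressure conditions that are currently active.
printf '%s\n' "$conditions" | awk '$2 == "True" {print $1}'
```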
How Kubernetes Decides Which Pods To Evict
Kubernetes follows a predictable order when choosing which pods to evict.
Pod Priority
Pods with higher priority are protected; lower-priority pods face eviction first.
Example of a priority class:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-service
value: 100000
globalDefault: false
description: "Critical system components"
Use it in a pod or deployment:
spec:
  priorityClassName: critical-service
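In a Deployment, the field sits under the pod template spec; a minimal sketch (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      priorityClassName: critical-service   # every pod from this template inherits the class
      containers:
      - name: api
        image: payments-api:1.0
```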
Resource Requests
Pods that request low memory but actually use much more are frequent eviction targets. Kubernetes tries to preserve pods that are within their declared requests.
How To Prevent Pod Evictions
Set Proper Resource Requests And Limits
One of the strongest defenses against eviction is correct sizing. A pod that consumes far more memory than it requests will be evicted during memory pressure.
Example:
resources:
requests:
cpu: "200m"
memory: "512Mi"
limits:
cpu: "500m"
memory: "1Gi"
Requests tell Kubernetes how much the pod needs to run reliably. If these numbers are too low, Kubernetes will treat the pod as a low priority candidate for eviction.
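Requests also determine the pod's Quality of Service class, which correlates with eviction ordering: BestEffort pods (no requests at all) tend to go first, Burstable pods next, and Guaranteed pods (requests equal to limits for every container) last. A sketch of a container spec that qualifies as Guaranteed:

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "500m"    # requests == limits for every resource => Guaranteed QoS
    memory: "1Gi"
```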
Enable Limit Ranges And Resource Quotas
In shared namespaces, this prevents runaway pods from disrupting others:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: 256Mi
      cpu: 100m
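A LimitRange sets per-container defaults; a ResourceQuota caps the namespace as a whole. A sketch (name and numbers are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.memory: 8Gi   # total memory the namespace may request
    limits.memory: 16Gi    # total memory limit across all pods
    pods: "20"             # cap on pod count in the namespace
```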
Use Pod Priority Classes
Critical workloads deserve higher protection, especially in clusters with limited nodes.
Avoid Overcommitting The Node
Overcommitting CPU is relatively safe because CPU is compressible: a starved pod is simply throttled. Memory is not compressible, so overcommitting it is dangerous. Pods will be removed during memory pressure, even if your deployment expects them to stay alive.
Manage Disk Usage Properly
Disk pressure is often caused by:
Large logging output
Growing emptyDir volumes
Too many container images on the node
Use log rotation or a logging agent with proper configuration.
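Ephemeral storage can be bounded the same way as memory, so one runaway pod cannot fill the node's disk. A sketch that caps an emptyDir volume and declares ephemeral-storage requests (values and names are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: my-app:1.0              # illustrative image name
    resources:
      requests:
        ephemeral-storage: "1Gi"
      limits:
        ephemeral-storage: "2Gi"   # exceeding this limit also triggers eviction
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi               # kubelet evicts the pod if the volume grows past this
```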
Use Node Autoscaling When Possible
In managed Kubernetes like AKS or EKS, enabling cluster autoscaling significantly reduces unwanted evictions because new nodes appear when pressure increases.
A Real Example: Diagnosing And Fixing A Memory Eviction
Imagine a cluster using a single Standard D3 class VM. Under heavy traffic, one of your application pods disappears. The event log shows:
Evicted: The node had memory pressure
Next steps:
Step 1: Check Node Memory
kubectl describe node <node-name>
Output:
MemoryPressure True
Step 2: Compare Consumption With Requests
You notice the pod requests only 128Mi but uses over 800Mi during traffic spikes. The node cannot allocate enough memory, so Kubernetes removes the pod.
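The gap is easier to reason about as a percentage. Using the hypothetical numbers from this scenario:

```shell
# Illustrative numbers: the pod declares 128Mi but peaks at 800Mi.
request_mi=128
peak_mi=800
overshoot=$(( peak_mi * 100 / request_mi ))
echo "peak usage is ${overshoot}% of the declared request"   # prints 625%
```

Any pod running at several times its declared request is a prime eviction candidate the moment the node comes under pressure.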
Step 3: Fix The Deployment
Update the deployment with realistic memory requests.
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"
Step 4: Add A PriorityClass If Necessary
Critical services can be protected with a higher priority.
Step 5: Monitor
Use Prometheus, Grafana, or Kubelet metrics to observe memory growth and make adjustments.
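If Prometheus scrapes cAdvisor and kube-state-metrics, an alert rule can flag pods whose working set drifts well past their request before the kubelet has to step in. A sketch (threshold and rule names are illustrative):

```yaml
groups:
- name: memory-pressure
  rules:
  - alert: PodMemoryOverRequest
    expr: |
      sum by (namespace, pod) (container_memory_working_set_bytes{container!=""})
        /
      sum by (namespace, pod) (kube_pod_container_resource_requests{resource="memory"})
        > 1.5
    for: 10m
    labels:
      severity: warning
```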
If you want to see why monitoring (and a minimal Kubernetes Loki setup) matters so much, read my previous blog:
https://blog.nyzex.in/why-observability-is-the-unsung-hero-in-modern-cloud-applications
After this fix, the pod no longer gets evicted during traffic spikes.
Conclusion
Pod evictions are not random events. They are Kubernetes actively protecting your cluster from resource exhaustion. Once you understand the signals that trigger these evictions and how Kubernetes chooses which pods to remove, you can build workloads that are far more resilient.
By setting correct resource requests, monitoring node pressure, controlling logging and ephemeral storage, tuning pod priorities, and sizing nodes properly, you ensure that your critical applications continue running without interruption.