What I Learned Migrating a Real App from Docker Compose to Kubernetes

For a long time, Docker Compose felt like the perfect solution. Simple YAML, fast local setup, predictable behavior. For a single service or even a small stack, it works beautifully.
But at some point, reality catches up.
As the application grew, traffic became less predictable, deployments needed to be safer, and uptime started to matter more than convenience. That was the point where migrating from Docker Compose to Kubernetes stopped being optional and became inevitable.
This post is not a Kubernetes tutorial. It’s a reflection on what actually changed, what broke, and what I wish I had understood earlier before making the move.
Docker Compose Worked Until It Didn’t

Docker Compose is excellent at answering one question:
“How do I run multiple containers together on one machine?”
The problems started when my needs shifted to different questions:
How do I scale only one service without touching others?
How do I deploy without downtime?
How do I expose HTTP and raw TCP services reliably?
How do I survive a node restart without manual intervention?
Compose can technically handle some of this, but only with scripts, conventions, and a lot of discipline. Over time, the setup became fragile. A single bad deploy could take everything down.
Kubernetes didn’t magically solve these problems, but it gave me primitives that were designed for them.
The Biggest Mental Shift: Stop Thinking in Containers

In Docker Compose, you think in containers.
In Kubernetes, you think in systems.
At first, this was uncomfortable. I kept asking questions like:
Where is my container?
Why did Kubernetes restart it?
Why are there three replicas when I only started one?
Eventually, I realized Kubernetes doesn’t care about my containers. It cares about desired state.
Once I stopped fighting that idea and started defining what I wanted instead of how to do it, things clicked.
I no longer deployed containers. I declared intentions.
Configuration Management Became a First-Class Concern

In Compose, environment variables often live in .env files or directly in YAML. That works until you have multiple environments, secrets, and rotating credentials.
Kubernetes forced me to clean this up.
ConfigMaps and Secrets felt verbose at first, but they created a clean separation:
Application code stopped knowing where configuration came from
Sensitive values were no longer mixed with runtime logic
Environment differences became explicit instead of accidental
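As a rough sketch of that separation (the names app-config, app-secrets, and the keys are placeholders, not from my actual setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # non-sensitive, environment-specific settings
data:
  LOG_LEVEL: "info"
  FEATURE_FLAGS: "beta-ui"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets         # sensitive values, stored separately
type: Opaque
stringData:
  DATABASE_PASSWORD: "change-me"   # placeholder value
```

The application container then consumes both with `envFrom`, so the code only ever sees environment variables and never knows which source they came from.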
This alone reduced production mistakes more than any CI rule I had before.
Networking Was Simpler and More Complicated at the Same Time

This was one of the most surprising parts.
Inside the cluster, service discovery is easier than in Docker Compose. Every service gets a stable DNS name. Containers can restart, reschedule, or scale without breaking internal communication.
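For example, a plain Service is all it takes to get that stable DNS name (the `api` name, label, and port here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api                 # reachable as "api" in the same namespace,
spec:                       # or api.<namespace>.svc.cluster.local anywhere
  selector:
    app: api                # matches pods carrying this label
  ports:
    - port: 8080
      targetPort: 8080
```

Pods behind the selector can come and go; the DNS name and the Service's virtual IP stay put.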
But ingress is where things got interesting.
Exposing HTTP traffic was straightforward once I adopted an ingress controller. TLS termination, routing, and host-based rules became declarative and repeatable.
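A minimal sketch of what that looks like with an NGINX ingress controller, assuming a hypothetical host, TLS secret, and backend service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls   # TLS cert stored as a Secret
  rules:
    - host: app.example.com         # host-based routing rule
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```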

Exposing raw TCP ports was harder. This is something Compose hides from you. In Kubernetes, you must understand:
Services
NodePorts
LoadBalancers
Ingress TCP mappings
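For the last item, ingress-nginx (the controller I used; other controllers differ) reads a dedicated ConfigMap that maps external ports to internal services. The postgres service and namespace here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # referenced by the controller's
  namespace: ingress-nginx  # --tcp-services-configmap flag
data:
  "5432": "default/postgres:5432"   # external port -> namespace/service:port
```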
Once configured correctly, it was more reliable than my old setup, but getting there required patience and a lot of log reading.
Scaling Is Not Just Replicas

In Compose, scaling usually means --scale and hoping nothing breaks.
In Kubernetes, scaling forced me to confront assumptions I didn’t know I had:
Is my app stateless?
What happens if two replicas process the same request?
Where does session data live?
What happens during rolling updates?
Horizontal Pod Autoscaling was powerful, but it exposed poor application design immediately. Anything relying on local state or filesystem assumptions broke fast.
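A basic HPA definition looks roughly like this (the target Deployment name, replica bounds, and CPU threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api               # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```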
This pain was useful. It forced architectural improvements that made the system more resilient overall.
Health Checks Are Not Optional Anymore
Docker Compose lets unhealthy services limp along. Kubernetes does not.
Once liveness and readiness probes were in place, I learned two important lessons:
A service can be running and still be unusable
Restarting a container is often better than keeping it alive
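The probes themselves are a small fragment of the container spec. A sketch, assuming hypothetical /healthz and /ready endpoints:

```yaml
# goes under spec.containers[] in a pod template
livenessProbe:              # failing this restarts the container
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:             # failing this removes the pod from Service traffic
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```

The distinction matters: liveness answers "is the process worth keeping alive?", readiness answers "should this pod receive traffic right now?".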
Bad health checks caused cascading failures. Good ones made deployments boring. Boring deployments are the goal.
CI/CD Became Cleaner and More Predictable

Before Kubernetes, deployments were procedural.
Run this script. Pull this image. Restart that service.
After Kubernetes, deployments became declarative:
Build image
Push image
Update manifest
Let the cluster reconcile
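In practice, the manifest update in step three is usually just changing an image tag in a Deployment like this sketch (registry, name, and tag are placeholders) and running `kubectl apply`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.2   # CI bumps this tag
          ports:
            - containerPort: 8080
```

The cluster then performs the rolling update itself; nothing procedural runs on my side.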
This reduced the surface area for human error. If something went wrong, Kubernetes told me what failed and why. Logs, events, and pod states became a reliable source of truth instead of guesswork.
Kubernetes Did Not Reduce Complexity, It Reorganized It
This is important to say clearly.
Kubernetes did not make the system simpler. It made the complexity explicit.
Things that were previously hidden inside scripts, assumptions, and tribal knowledge were now written down in YAML. That felt heavy at first, but it also meant:
New environments were reproducible
Failures were diagnosable
Scaling decisions were intentional
The complexity existed before. Kubernetes just stopped pretending it didn’t.
What I Would Do Differently Next Time
I would start smaller.
Instead of migrating everything at once, I would:
Move one stateless service first
Get ingress and TLS right early
Invest in logging and metrics from day one
Treat manifests as code, not configuration
Most importantly, I would spend more time understanding Kubernetes concepts before trying to bend them into old patterns.
Final Thoughts
Docker Compose is not bad. It’s just honest about what it is.
Kubernetes is not overkill when your system starts needing guarantees instead of convenience.
The migration was not smooth, but it was worth it. Not because Kubernetes is trendy, but because it forced better engineering decisions that I had been postponing.
If you are feeling friction with Docker Compose, that friction is a signal. Listen to it.