
Lessons Learned Building a CI Pipeline That Auto-Tags and Deploys Docker Images


DevOps & Cloud Engineer — building scalable, automated, and intelligent systems. Developer of sorts | Automator | Innovator

When I first automated Docker builds and deployments, I thought the hard part would be writing the YAML. It was not.

The real challenges were versioning, preventing accidental rollbacks, handling environment drift, and making deployments predictable. Over time, I built a CI pipeline that automatically tags Docker images, pushes them to a container registry, and deploys the latest version to a server without manual intervention.

This article walks through what worked, what broke, and what I learned while building a production-ready auto-tag and auto-deploy pipeline.

The Goal

The objective was simple:

  • Every merge to the main branch should build a Docker image

  • The image should get a unique incrementing version tag

  • The image should be pushed to a container registry

  • The deployment server should pull the new version and restart the service automatically

  • No manual SSH, no manual tagging, no human version bumps

The reality was more nuanced.

The High-Level Architecture

The system had four moving parts:

  • Source repository

  • CI workflow

  • Container registry

  • Deployment server

Here is the simplified flow.
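In text form, roughly:

```
source repo ──merge to main──▶ CI workflow ──build, tag, push──▶ container registry
                                                                        │
          deployment server ◀──pull new version, restart service────────┘
```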

At first glance, this looks trivial. The devil was in versioning and deployment consistency.


Problem: Manual Versioning Does Not Scale

Initially, I hardcoded the image tag like this:

myapp:latest

That worked until it didn’t.

Using latest creates ambiguity. If something breaks, you cannot easily roll back. You also do not know what code is actually running in production.

So I moved to semantic versioning:

0.0.1
0.0.2
0.0.3

But manually updating the version before each commit quickly became annoying and error-prone.

The fix was automatic version incrementing inside the CI pipeline.


Automatic Version Tagging Strategy

The pipeline logic became:

  • Fetch the latest tag from the registry

  • Parse the version

  • Increment the patch number

  • Tag the new image with the incremented version

  • Push it

Conceptually:
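A minimal shell sketch of the increment step (in the real pipeline the latest tag would come from the registry API or `git tag`; here it is hardcoded for illustration):

```shell
# Last published version — in CI, fetched from the registry or git tags.
latest="0.0.3"

# Split MAJOR.MINOR.PATCH and bump the patch number.
major=${latest%%.*}
rest=${latest#*.}
minor=${rest%%.*}
patch=${rest#*.}
next="${major}.${minor}.$((patch + 1))"

echo "$next"   # 0.0.4
```

The same shape works for minor or major bumps; the point is that no human edits a version file.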

This solved several problems:

  • Every image became uniquely identifiable

  • Rollbacks became trivial

  • Production state became transparent

One major lesson here was to avoid deriving version numbers from Git commit hashes for user-facing services. While hashes are unique, semantic versions are easier to reason about operationally.

CI Pipeline Flow

The CI pipeline was responsible for:

  • Checking out the code

  • Logging into the registry

  • Building the Docker image

  • Tagging with the new version

  • Pushing both version tag and latest

  • Triggering deployment

The full CI flow looked like this:
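A hedged sketch in GitHub Actions syntax (the article does not name the CI system; the registry, image name, secret names, and webhook are illustrative):

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login -u "${{ secrets.REGISTRY_USER }}" --password-stdin

      - name: Compute next version
        id: version
        run: echo "tag=0.0.4" >> "$GITHUB_OUTPUT"   # placeholder for the increment logic

      - name: Build, tag, and push
        run: |
          docker build -t myapp:${{ steps.version.outputs.tag }} .
          docker tag myapp:${{ steps.version.outputs.tag }} myapp:latest
          docker push myapp:${{ steps.version.outputs.tag }}
          docker push myapp:latest

      - name: Trigger deployment
        run: curl -fsS -X POST "$DEPLOY_WEBHOOK_URL"
        env:
          DEPLOY_WEBHOOK_URL: ${{ secrets.DEPLOY_WEBHOOK_URL }}
```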

One key insight was pushing both version and latest tags.

The version tag gives traceability. The latest tag simplifies pull logic on the server.

Deployment Automation

The deployment server had a simple responsibility:

  • Pull the newest image

  • Restart the container

At first, I used a naive approach:

docker pull myapp:latest
docker-compose up -d

This works, but only if you are disciplined.

The issue appears when the image digest does not change or when the server has cached layers in a strange state.

A more robust flow became:
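One way to sketch the digest check (the function name is mine, not from the article): compare the running image's digest with the freshly pulled one, and restart only when they differ.

```shell
# Restart only when the pulled image digest differs from the running one.
needs_restart() {
  old="$1"
  new="$2"
  [ -n "$new" ] && [ "$old" != "$new" ]
}

# In the real deploy script, the digests would come from:
#   old=$(docker inspect --format '{{index .RepoDigests 0}}' myapp:latest)
#   docker pull myapp:latest
#   new=$(docker inspect --format '{{index .RepoDigests 0}}' myapp:latest)
if needs_restart "sha256:aaa" "sha256:bbb"; then
  echo "restart"    # here: docker-compose up -d
else
  echo "skip"
fi
```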

This avoids unnecessary restarts and reduces downtime.

Avoiding Downtime

Restarting a container blindly can cause momentary service disruption.

Two improvements helped:

  • Health checks inside Docker

  • Graceful restart strategy

Instead of stopping first and then starting, the improved approach was:

  • Start new container

  • Verify health

  • Stop old container

This pattern mimics blue-green deployment at a smaller scale.
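The health-check half of this can live in the compose file; a hedged sketch (service name, port, and endpoint are illustrative):

```yaml
services:
  myapp:
    image: myapp:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 15s
```

With this in place, the deploy script can wait for the new container to report healthy before stopping the old one.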

This reduced deployment risk significantly.


Security Lessons

Several important security practices emerged:

  • Never store registry credentials in plain text

  • Use CI secrets properly

  • Use short-lived tokens when possible

  • Restrict server SSH access

Another key lesson was separating build and deploy permissions. The CI pipeline should not have unrestricted server access. Ideally, it triggers deployment via a webhook or controlled SSH user with limited privileges.
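For the controlled-SSH variant, a forced command in the deploy user's authorized_keys is one way to cap what the CI key can do (the script path and key are illustrative):

```
# ~/.ssh/authorized_keys on the deployment server:
# the CI key can only run the deploy script, nothing else.
command="/usr/local/bin/deploy.sh",no-port-forwarding,no-agent-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... ci-deploy
```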


Observability Matters

The first time a deployment silently failed, I realized logs were not optional.

You need:

  • CI logs that clearly show the generated version

  • Registry confirmation logs

  • Server deployment logs

  • Application startup logs

Without observability, automation becomes guesswork.


Rollback Strategy

One of the biggest advantages of version tagging is clean rollback.

If production breaks:

docker pull myapp:0.0.7
docker run myapp:0.0.7

No rebuild required.

Rollback becomes a configuration change rather than a panic-driven patch.


What I Would Do Differently

If building from scratch again:

  • Use immutable image references by digest in production

  • Introduce deployment locking to prevent concurrent runs

  • Add structured logging for CI

  • Add automated smoke tests after deployment
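The smoke-test point deserves a sketch. A hypothetical retry helper (the name and the health endpoint are mine); in a real pipeline the call would be something like `retry 5 curl -fsS "$HEALTH_URL"`:

```shell
# retry N CMD...: run CMD up to N times, succeeding on the first success.
retry() {
  attempts=$1
  shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

retry 3 true && echo "smoke test passed"
```

If the check never succeeds, the helper returns nonzero and the pipeline fails loudly instead of reporting a broken deploy as green.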

Automation is not about speed alone. It is about predictability.


Conclusion

Building an auto-tagging and auto-deploy CI pipeline sounds simple. It is not.

The complexity lies in:

  • Version consistency

  • Deployment safety

  • Rollback reliability

  • Security boundaries

  • Observability

Once implemented correctly, the workflow changes how you ship software. Deployments stop being events and start becoming routine.

If you are still manually tagging Docker images or SSHing into servers to deploy, start automating today. That shift in mindset is the real upgrade.