How I Automated GitHub Repository Deployments Into K3s Using SSH and Helm


Building a smooth deployment pipeline for Kubernetes is usually a challenge for small teams and individual developers. Many solutions require complex CI pipelines, container registries, secret management and multiple integration steps. My goal was to create something simpler. I wanted a system that could take any GitHub repository containing a Dockerfile, build the image on a remote server, and deploy it directly into my K3s cluster.

This article explains the complete architecture and the final workflow that made this possible. It uses SSH for secure communication and Helm for templated Kubernetes deployments. Everything happens automatically once a repository is selected.

I have kept this post short since it was a large proof of concept (PoC); we will revisit it in more detail some other day :)


The Overall Objective

The intention was to create a simple but powerful “Select a GitHub repository and deploy it to K3s” experience. Achieving this required solving the following problems:

  • Authenticating users through GitHub OAuth

  • Cloning the selected repository on the remote server

  • Building a Docker image using the repository’s Dockerfile

  • Extracting the application port from the Dockerfile

  • Pushing the image to a container registry when required

  • Generating Kubernetes manifests dynamically

  • Applying them to a K3s cluster using remote kubectl

  • Keeping the workflow secure without exposing SSH passwords

With these requirements in mind, I built the system around a Streamlit-based interface, a backend with SSH utilities, and a Helm-driven deployment mechanism.


Authentication Through GitHub OAuth

The first step was allowing users to log in with GitHub and access their repositories. During the OAuth callback, I stored the GitHub username and token in a database so that future interactions could occur without requesting credentials again.

Once authenticated, the application fetched all repositories that the user had access to. The user could then pick the repository that needed deployment. This made the process extremely convenient because there was no need for manual cloning or token entry.
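
Fetching the repository list comes down to a single call to the GitHub REST API with the stored token. A minimal sketch of that step from the shell; the GITHUB_TOKEN variable and the page size are assumptions, and the real application did this from its backend rather than the command line:

# List repositories the authenticated user can access (full names only)
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/user/repos?per_page=100" | jq -r '.[].full_name'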


Cloning the Repository Over SSH

After a repository was selected, the system connected to the remote host where K3s was installed. The connection used a PEM key, not a password. This removed the need to expose secrets and made the communication secure.

The repository was cloned into a temporary directory on the remote server:

git clone https://github.com/<user>/<repo>.git /tmp/deploy/<repo-name>

The entire build and deployment workflow was executed inside this directory.
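
A rough equivalent of what the backend executes can be reproduced with one SSH command. The key path, remote user and host below are placeholders, not the values used in the real setup:

# Clone (or re-clone) the selected repository on the remote K3s host
ssh -i ~/.ssh/deploy-key.pem ubuntu@<server-ip> \
  "rm -rf /tmp/deploy/<repo-name> && git clone https://github.com/<user>/<repo>.git /tmp/deploy/<repo-name>"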


Building the Docker Image Remotely

Instead of building the image locally and pushing it to a registry, I chose to build the image directly on the remote server. This avoided large uploads and made the pipeline significantly faster.

sudo docker build -t <repo-name>:latest .

Since K3s uses containerd by default, I installed Docker separately and configured the system so that images could be loaded into K3s. This was done with a Docker-to-containerd import step whenever necessary.
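
The import step itself is short: the image is exported from Docker and streamed into the containerd instance that ships with K3s. A sketch, assuming the image tag from the build above:

# Move the freshly built image from Docker into K3s's containerd
sudo docker save <repo-name>:latest | sudo k3s ctr images import -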

If AWS ECR integration was enabled, the image was tagged and pushed to the registry. That part was optional and only used in certain deployments.
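
When ECR was enabled, the push followed the standard AWS CLI login, tag and push flow. The account ID and region here are placeholders:

# Authenticate Docker against ECR, then tag and push the image
aws ecr get-login-password --region <region> \
  | sudo docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
sudo docker tag <repo-name>:latest <account-id>.dkr.ecr.<region>.amazonaws.com/<repo-name>:latest
sudo docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<repo-name>:latest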


Reading the Application Port From the Dockerfile

Many applications expose ports through the Dockerfile. To make deployments dynamic, I extracted the port by scanning for the EXPOSE instruction:

EXPOSE 8080

If this instruction was present, it became the container port in the Kubernetes Deployment manifest. If it was missing, the system used a default value that could be configured.

This simple extraction made deployments far more flexible. It removed the need for manual adjustments each time a repository changed its application port.
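
The extraction itself is a one-liner. A minimal sketch, with 8080 standing in for the configurable default:

# Take the first EXPOSE instruction from the Dockerfile, fall back to a default
PORT=$(grep -iE '^[[:space:]]*EXPOSE' Dockerfile | head -n1 | awk '{print $2}' | cut -d/ -f1)
PORT=${PORT:-8080}
echo "Container port: $PORT"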


Generating Kubernetes Manifests Automatically

Each repository received its own namespace. Namespaces were created if they did not already exist:

kubectl create namespace <repo-name>
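
Since a plain create fails when the namespace already exists, the check and the create can be collapsed into one idempotent line. One common way to do that, shown as a sketch:

# Create the namespace only if it is not already there
kubectl get namespace <repo-name> >/dev/null 2>&1 || kubectl create namespace <repo-name>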

I used Helm to generate the Deployment, Service and optional Ingress files. A lightweight chart template was created and values were injected programmatically. The values included:

  • image name

  • container port

  • replica count

  • environment variables

  • namespace

The resulting Helm command looked like this:

helm upgrade --install <repo-name> ./chart \
  --namespace <repo-name> \
  --set image.repository=<image> \
  --set image.tag=latest \
  --set containerPort=<port>

Helm provided a clean way to handle templating and versioning without writing YAML repeatedly.
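
Replica count and environment variables travel the same way. A hypothetical extension of the command above, assuming the chart exposes replicaCount and an env list in its values file:

helm upgrade --install <repo-name> ./chart \
  --namespace <repo-name> \
  --set image.repository=<image> \
  --set image.tag=latest \
  --set containerPort=<port> \
  --set replicaCount=2 \
  --set 'env[0].name=LOG_LEVEL' \
  --set 'env[0].value=debug'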


Applying the Manifests to K3s

Once Helm rendered the templates, everything was applied to the K3s cluster immediately. The Deployment became active with one replica, the Service exposed the container port, and if Ingress was configured, the application became reachable at a stable URL.

All of this ran within the remote environment using SSH commands executed from my Python application.
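
A quick way to confirm the result over the same SSH channel, assuming the chart names the Deployment after the release:

# Wait for the rollout to finish, then inspect what was created
kubectl -n <repo-name> rollout status deployment/<repo-name>
kubectl -n <repo-name> get deployment,service,ingress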


The Final Automated Workflow

After completing the entire pipeline, this became the final experience for the user:

  1. Log in with GitHub.

  2. Select a repository from the list.

  3. Click “Deploy”.

Behind the scenes, the system handled:

  • cloning

  • image building

  • port extraction

  • manifest creation

  • Helm deployment

The user did not need to interact with Docker commands, kubectl or YAML files. Everything was automatic.


Challenges and Solutions

1. Large repositories and slow builds
To solve this, I ensured that the remote server had cached layers whenever possible by reusing previous build directories.

2. Managing SSH timeouts
Increasing the SSH keep-alive interval and using a resilient execution wrapper helped prevent failures during long deployments.
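
On the plain SSH side this corresponds to the keep-alive options; for example, with the key path, user and host as placeholders:

# Send a keep-alive every 30 seconds and tolerate up to 10 missed replies
ssh -i ~/.ssh/deploy-key.pem \
  -o ServerAliveInterval=30 -o ServerAliveCountMax=10 \
  ubuntu@<server-ip> "sudo docker build -t <repo-name>:latest /tmp/deploy/<repo-name>"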

3. Ensuring reproducible deployments
Helm values files were logged to the database so that each deployment had a traceable configuration history.

4. Handling errors gracefully
Whenever Docker or kubectl produced errors, they were captured and displayed in the Streamlit UI so that the user could fix the repository or Dockerfile.


Benefits of This Approach

This approach is not designed to replace full CI systems. However, it solves a specific problem extremely well. It provides a fast method to deploy small to medium applications into K3s without the overhead of pipelines, registries or private runners.

Some benefits include:

  • Very quick setup for new applications

  • No complex infrastructure required

  • Secure SSH communication

  • Automatic image handling

  • Helm based templating

  • Namespaced isolation for each deployment

It is ideal for personal projects, prototypes, microservices or self-hosted internal tools.


Conclusion

Automating GitHub repository deployments into a K3s cluster became an enjoyable project. By combining SSH, Docker, Kubernetes and Helm into a single workflow, I created a flexible and dynamic deployment system. It saves time, reduces manual work and makes it possible to deploy new applications with a simple click.
