CI/CD: Automating Tests and Deployments the Fun Way

CI/CD Goals

Continuous Integration (CI) and Continuous Deployment (CD) are vital components of reliable software development. We want to seamlessly test and deploy code with consistency and fluidity - mitigating any disruptions to our workflow while simplifying security and other compliance checks.

On thoughtbot’s Platform Engineering team, we focus on making CI/CD work for each team’s needs. And rightfully so: code quality is essential, but just as important is the way we ship that code to its various environments. Long gone are the days of clunky manual deploys - forgetting to run tests locally, or bungling a production deploy because we neglected to recompile assets. We’ve all been there. We can do so much better.

With intentional CI/CD workflows, we can test and deploy our code with ease - confident in the reliability of our process.

A CI/CD Scenario

Justine just finished her Elasticsearch (ES) work. She’s psyched. It used to be a nightmare to find artwork on her little company’s website. Now a unified search box opens up all the amazing work they host. Oil painting of a snoozing kitten? We got you. Watercolor abstract of the Pacific Northwest coast? Boom, here ya go. Woodblock botanical print? Naturally.

She’s ready for review and wants to get this new hotness out to the world. She used to wince at this part… running tests and linters, rebasing, remembering to recompile assets; it would take way too long, and that was before things even got into staging. Sure, they scripted some of that and it helped, but even then deployments were tedious: ssh to the right server, check system health, reboot (because kernel updates), pull the code, check out the new git tag, rerun the same process that already happened locally, kick nginx and the puma systemd service…

But Justine took a deep breath, remembering how much work they’d put into setting up a CI/CD workflow. As soon as her ES PR was reviewed and merged, a beautiful chain of events was quickly set into motion (roughly sketched as a workflow after the list below):

  • A series of workflow jobs were triggered to run in parallel:
    • code linting - making sure it was in line with their style standards
    • front-end tests started cranking
    • unit tests for the Ruby back-end began ripping through as well
  • As soon as these completed successfully, the CD bits took the stage:
    • the Docker container image was built with the latest verified code
    • then that was pushed out to the container registry
    • which then triggered compilation of the Kubernetes (k8s) templates (since there were some tweaks to the configuration there)
    • finally the Kubernetes deploy kicked off for the staging environment; pods recreated, done!
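
For a sense of how this hangs together, here is a minimal sketch of such a workflow in GitHub Actions. The job names, commands, and the bin/deploy script are hypothetical stand-ins for whatever a given project actually uses:

```yaml
# .github/workflows/ci-cd.yml -- hypothetical job names, commands, and scripts
name: CI/CD

on:
  pull_request:
  push:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bin/lint            # hypothetical wrapper around the team's linters

  frontend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: yarn install --frozen-lockfile && yarn test

  backend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true    # installs and caches gems
      - run: bundle exec rspec

  deploy-staging:
    # The CD stage only runs once every CI job has passed, and only on main
    needs: [lint, frontend-tests, backend-tests]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bin/deploy staging  # hypothetical script: build image, push, apply k8s templates
```

The needs: line is what stitches CD onto CI: the deploy job waits for every test job to succeed before touching staging.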

The whole chain of events took just over 5 minutes (a window Justine seized to grab a cup of tea and pet her cherished Australian Shepherd rescue pup). The process made her extremely happy. It was as it should be.

Our Process

We have done a lot of exploring with various tools. Clients have come to us using CodePipeline, CircleCI, etc. Deploy targets have ranged from Heroku to EC2 to Kubernetes clusters. Many have involved complicated deployment rules that consider branch, environment, prerequisite jobs, and various hooks. With so many permutations across the many tools out there, we have to be flexible; but for our internal defaults, the team has settled on some pragmatic standards.

GitHub Actions

For the majority of internal and client projects, we work with GitHub for version control hosting, collaboration, and CI/CD. The Actions Marketplace features a myriad of ready-to-use functionality for building out testing and deployment pipelines. Compiling your Kubernetes Helm charts or Kustomize templates? There’s definitely a tool for that. Needing to build and push your Docker images to a container registry? Docker has you covered.
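
As a sketch of how little glue that takes, a build-and-push job can lean almost entirely on marketplace actions. The registry address, secret names, and image name below are placeholders:

```yaml
# A fragment from the jobs: section of a workflow file; registry address,
# secret names, and image name are placeholders.
build-and-push:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: docker/login-action@v3
      with:
        registry: registry.example.com
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}
    - uses: docker/build-push-action@v6
      with:
        push: true
        tags: registry.example.com/my-app:${{ github.sha }}
```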

If there’s a common use case, there’s a common library for it. No wheels need be reinvented here. And if your code lives in GitHub, GitHub Actions are the obvious choice for powerful CI/CD that is simple to activate with the push of a branch or a pull request to main. The documentation is easy to navigate and the YAML-based workflow files are straightforward. It is our default tool of choice for continuous integration / deployment pipelines, and we have been happy with the results.

Docker

Having come from more sysadmin / configuration management-centric backgrounds, some of our team members were skeptical of the industry push towards container-based deployments. But after seeing how seamlessly deployments can be managed by rebuilding a Docker image and pushing it out to a Kubernetes cluster, it is impossible not to appreciate the advantages. Instead of the slowness and tedium of managing cloud servers as conceptually separate and static machines, a workflow built around images and k8s allows for much easier updates, simpler scaling, and more efficient utilization of computing resources.

We can recall the days when Docker felt new and part of an uncertain trend. That is certainly no longer the case; the maturity of the project, the thorough documentation, and the tooling around it make it a solid foundation for building out applications.

Kubernetes

Kubernetes has emerged as the clear choice for dynamically managing containers. Its functionality for deployments, scaling, resource limits, and more has become a critical piece of modern infrastructure management. While the complexity certainly entails a learning curve, k8s solves genuinely complex problems. And hosted offerings lessen the burden of setting up a cluster from scratch (which is no small feat).
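
To make that concrete, here is a minimal Deployment manifest showing the pieces we lean on most: declarative replica counts and per-container resource requests and limits. The names, image, and numbers are purely illustrative:

```yaml
# deployment.yaml -- illustrative names, image, and resource numbers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # scale by changing this (or attach an autoscaler)
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/my-app:abc123
          ports:
            - containerPort: 3000
          resources:
            requests:            # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:              # the ceiling the container may not exceed
              cpu: 500m
              memory: 512Mi
```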

Which leads us to…

AWS

While there are copious options for cloud hosting, it is difficult to avoid AWS given its maturity, competitive pricing, stability, fine-grained security controls, and the plethora of services it offers.

In the context of continuous deployment, two of its services are particularly worth mentioning:

  1. EKS, or Elastic Kubernetes Service, is Amazon’s response to the need for simpler Kubernetes clusters. While AWS offers its own proprietary container service (ECS), it fortunately recognized that community demand for an open-source standard made a service like EKS unavoidable. The integration with EC2 instances, advanced networking with virtual private clouds, granular identity and access controls, a container registry system, and more make EKS a solid choice that leverages the rich AWS ecosystem.

  2. ECR, AWS’s Elastic Container Registry, is a simple but secure mechanism for storing and distributing container images for use with EKS. When building Docker images in CI, it is a cinch to push them out to ECR for later Kubernetes deployments (see the sketch below).
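
Here is a rough sketch of those two services showing up in a GitHub Actions deploy job. The role ARN, region, and image name are placeholders, and we assume the repository is set up for AWS’s OIDC integration (long-lived access keys work too):

```yaml
# Hypothetical steps inside a deploy job; placeholders throughout.
- uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/deploy   # placeholder role
    aws-region: us-east-1
- uses: aws-actions/amazon-ecr-login@v2
  id: ecr
- uses: docker/build-push-action@v6
  with:
    push: true
    tags: ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}
```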

We know what a critical (and sometimes difficult) decision it can be to choose a cloud hosting provider. There are a variety of options, and always pros and cons to weigh against your priorities, but AWS has been a dominant force in this space for reasons that are difficult to ignore. And this is why we have integrated it closely into thoughtbot’s flightdeck project for rapidly building out production-grade Kubernetes clusters.

Kustomize

While more feature-heavy tools exist, we have found Kustomize to be a simple, declarative, and effective way to manage the k8s manifests that configure various cluster resources.

Kustomize follows the “do just enough but not too much” philosophy of canonical tools, and its approach has led to its inclusion in kubectl (Kubernetes’ official command-line tool). It can also run in a standalone context, which is useful for debugging and the like.
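
A typical layout, sketched here with placeholder file and image names, keeps shared manifests in a base and per-environment tweaks in overlays:

```yaml
# base/kustomization.yaml -- placeholder names throughout
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml

# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: registry.example.com/my-app
    newTag: abc123        # bump this tag to roll out a newly built image
```

From there, kubectl apply -k overlays/staging (or a standalone kustomize build) renders and applies the combined manifests.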

With the complexity of managing production clusters, we certainly appreciate simple tools that get the job done for the vast majority of use cases.

Closing Thoughts

There are so many aspects to discuss with respect to CI/CD, but as we have settled into a stable process that works well for our internal and client projects, we have come to appreciate being deliberate and consistent about our tooling.

With the variety of platforms, tools and approaches, it can quickly become an overwhelming tree of decisions to accomplish something that we want to be easy. Automating test runs and deployments should free us up to focus on application code without getting bogged down worrying about its delivery.

If your team is looking for some expert help in this realm, please reach out to us. We thoroughly enjoy building out infrastructure and process that is the dependable foundation for future work.