Evolving the Yieldr Stack

A while has passed since Yieldr's early days, when software engineering was first established at the company. Here's how our stack has evolved, from 2012 until today.

The Early Days

Around the start of 2012, the very first lines of code were written by a tiny team consisting of just two software engineers, working on what is now our Yieldr Ads platform.

With this team, collaboration was extremely easy. Organization was a breeze, and we were able to develop and release incremental changes with minimal tooling. At that point, a release consisted of connecting to our remote server via SSH and running svn up. That server was provisioned by hand, mind you; no provisioning software was used!
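
For the curious, a release back then looked roughly like the sketch below; the host name and checkout path are placeholders, not our actual setup.

```sh
# A rough sketch of a release circa 2012; host and path are placeholders.
ssh deploy@ads-server.example.com
svn up /var/www/yieldr-ads    # pull the latest revision straight onto the box
```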

As the team grew in size and more products made their way into Yieldr's offering, this process could obviously not be sustained any more. More teammates meant we needed better coordination, reproducible development environments and more automation. More usage meant we needed more capable hardware and a better uptime guarantee.

Growing Up

Since then, we've invested significantly in our tooling to improve our workflow, the quality of our software and ultimately the well-being of our team.

In the first days of enlightenment, we set up automation using Puppet, and later on Ansible; builds were triggered by Atlassian Bamboo and releases and upgrades by Rundeck. Monitoring tools were up and running, and we were pretty proud of ourselves...
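
To give a flavour of that setup, here's a minimal Ansible playbook in the style we ran at the time; the host group, package and file paths are illustrative rather than our actual configuration.

```yaml
# playbook.yml -- an illustrative example of the automation style we used:
# install a package, push configuration, restart the service on change.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Deploy application configuration
      template:
        src: templates/app.conf.j2
        dest: /etc/nginx/conf.d/app.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```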

But it didn't take long until we hit another ceiling. The lack of parity between our development and production environments was a very common source of frustration, and making a change to the infrastructure was almost always a process that hurt developers' productivity. Onboarding new employees was also becoming increasingly difficult as our stack grew more diverse.

We knew we needed to fix this, and at the time Vagrant was a great way to manage our development environment from within the codebase itself, which brought us a huge improvement.
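
A Vagrantfile checked into each project described its development VM. The sketch below is illustrative (the box name and playbook path are made up), not our actual file.

```ruby
# Vagrantfile -- an illustrative sketch of a per-project development VM.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"            # base image for the VM
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/vagrant"      # share the project into the VM

  # Reuse the same Ansible playbooks that provisioned our servers.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
```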

However, parity with our production environment remained an issue, as production was still managed separately by a different team. The number of projects also grew, making it harder for development machines to cope with running multiple Vagrant virtual machines at once.

The Present

Containers

By early 2016, containers were making their way into the mainstream with the rise of Docker, which aimed to solve exactly these issues as a much lighter-weight alternative to virtual machines.

Among the benefits of containers, portability stood out: engineers could finally have control over the entire stack in both development and production. We knew this was the way to go!
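
As an illustration, a single Dockerfile is enough to produce an image that runs identically everywhere. The example below assumes a hypothetical Go service, not one of our actual ones.

```dockerfile
# An illustrative Dockerfile for a hypothetical Go service: the same image
# runs unchanged on a developer laptop and on a production node.
FROM golang:1.10
WORKDIR /go/src/github.com/yieldr/example-service
COPY . .
RUN go build -o /usr/local/bin/example-service .
EXPOSE 8080
CMD ["example-service"]
```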

We then set out to develop the next incarnation of our development and delivery process. We loved the philosophy of CoreOS and experimented with building our infrastructure around the operating system and its tooling (etcd, fleet, flannel), which was entirely centered around containers.

It promised automatic updates, among other compelling features, and its documentation was excellent!

Kubernetes

However, its orchestration tool, fleet, was competing with a juggernaut: Kubernetes. Kubernetes had all the bells and whistles: great orchestration capabilities, replication, health checking, networking, and a whole slew of features drawn from over a decade of Google's experience running containers in production with its internal system known as Borg.
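
To make the replication and health-checking parts concrete, here's a minimal Deployment manifest of the kind you'd write; the names, image and probe path are illustrative, not one of our actual workloads.

```yaml
# deployment.yaml -- an illustrative example: Kubernetes keeps three replicas
# running and restarts any container whose health check starts failing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: yieldr/example-service:1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```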

Around the summer of 2016, we had our first containerized workload in production, and it was fantastic! Combined with Wercker, a modern container-centric continuous integration service, building, testing, releasing and deploying new versions was a breeze!
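
Wercker pipelines are described in a wercker.yml file checked in next to the code. The sketch below shows the general shape; the box image and build steps are illustrative, not our actual pipeline.

```yaml
# wercker.yml -- an illustrative sketch: run tests and the build inside a
# container; separate deploy pipelines push the resulting artifact onwards.
box: golang:1.10
build:
  steps:
    - script:
        name: go test
        code: go test ./...
    - script:
        name: go build
        code: go build -o example-service .
```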

At last, our engineers had full control of the stack, able to make changes from development to production as our architecture evolved.

Naturally, with more power came more responsibility: the engineers building the software were now the same engineers running it. This increased accountability and ownership throughout the entire lifecycle.

Although deploying and maintaining Kubernetes – especially in the early days – was a challenge, the project matured immensely and our skills around it developed quite substantially as a consequence.

As big proponents of Infrastructure as Code (IaC), we provisioned our Kubernetes cluster (and all other resources) using Terraform. Terraform makes creating infrastructure easy and predictable.
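
To give a flavour of the workflow, here is a minimal, hypothetical Terraform configuration; the region, resource names and CIDR ranges are made up, not our actual setup. You describe the desired state in code, then terraform plan and terraform apply converge the real infrastructure towards it.

```hcl
# An illustrative Terraform configuration: declare resources in code, then
# `terraform plan` and `terraform apply` make reality match the description.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"
}
```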

By combining Terraform with Packer, we could make our infrastructure immutable, with minimal or no provisioning necessary at launch. This made creating and scaling infrastructure much quicker as well. We have even open-sourced a Terraform provider to manage our authentication infrastructure in the same way!
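
Packer does the baking: it boots a temporary instance, provisions it once, and snapshots the result into a machine image that servers then launch from unchanged. Below is a minimal, hypothetical template; the source AMI and installed packages are placeholders.

```json
{
  "description": "Illustrative base image; AMI ID and packages are placeholders.",
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "base-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y docker.io"
      ]
    }
  ]
}
```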

These days, Kubernetes is steadily becoming a commodity, with almost every major cloud provider offering a managed Kubernetes engine that abstracts away the hard parts of running and maintaining a cluster. As we're on AWS, we recently moved to Amazon Elastic Container Service for Kubernetes (Amazon EKS).
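
With EKS, the control plane itself becomes just another Terraform resource. A minimal, hypothetical example follows; the IAM role and subnets are assumed to be defined elsewhere in the configuration.

```hcl
# An illustrative EKS cluster definition: AWS runs and upgrades the control
# plane; worker nodes, the IAM role and networking are defined elsewhere.
resource "aws_eks_cluster" "main" {
  name     = "main"
  role_arn = "${aws_iam_role.eks.arn}"

  vpc_config {
    # EKS requires subnets in at least two availability zones.
    subnet_ids = ["${aws_subnet.private.id}", "${aws_subnet.public.id}"]
  }
}
```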

The Future

It's pretty clear by now that Kubernetes is at the center of everything we do. It's hard to predict how the ecosystem will evolve, but it has certainly helped us find solutions for our infrastructure and delivery process that we wouldn't have imagined otherwise!

As we're always looking for ways to improve, we are on the lookout for emerging technologies and practices that help keep us up-to-date with the industry. In addition to that, experimentation is very important to our culture, so we are constantly trying out new ideas in areas where we can improve.

Are you interested in infrastructure-as-code, containers and automation? We're hiring, so please check out our Careers page!

Alex Kalyvitis
