A few months ago, Vungle’s infrastructure was showing its age. As the company moved toward microservices and needed globally distributed infrastructure, our old approach of deploying a single app to a group of Ubuntu machines with Chef (either with autoscaling or manually) was becoming a bottleneck. We were also concerned that we were not utilizing our server resources well. We were already using Docker to streamline our development environments and CI systems, so moving production to a Docker-based system seemed like an obvious choice. After evaluating the options (Kubernetes, Mesos, Fleet, etc.), we decided to go with Kubernetes on CoreOS.
This talk will focus on the technical decisions we made about our Kubernetes infrastructure to allow us to scale all over the globe, some of the issues we faced and how we worked around them, and the benefits we have seen.
Some highlights:
- Setting up clusters in VPCs using CloudFormation
- Moving from legacy infrastructure
- Exposing services to the outside world
- Making complex HTTP routing easily configurable by services
- Communication between clusters
- Limitations in AWS support
- Integration into the deployment process
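As a taste of the "exposing services to the outside world" topic, here is a minimal sketch of a standard Kubernetes Service of type `LoadBalancer`, which on AWS provisions an ELB in front of the pods. The service name, labels, and port numbers are hypothetical, not Vungle's actual configuration:

```yaml
# Hypothetical example: expose a service externally via an AWS ELB.
apiVersion: v1
kind: Service
metadata:
  name: ad-server        # hypothetical service name
spec:
  type: LoadBalancer     # Kubernetes asks the cloud provider for an ELB
  selector:
    app: ad-server       # routes to pods carrying this label
  ports:
    - port: 80           # load-balancer-facing port
      targetPort: 8080   # container port (assumed)
```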
Vungle has benefited greatly from embracing containers as the basic unit for packaging services, and Kubernetes has allowed us to stay container-native all the way into production. It’s a lot of work to get right, but putting in the effort is really paying off.