This is the second in a two-part series about Denise Mauldin's experiences at CloudNativeCon.
Part One: Keynotes - Part Two: Day 1 Sessions
I mostly tried to attend the beginner sessions, though it was a little difficult to tell which ones those were supposed to be. I ended up in two ‘how we started using Kubernetes’ talks back-to-back, from Box and Buffer. Then came a session on Helm for managing charts (package descriptions), one on logging with Fluentd, and one from Cisco on monitoring application performance.
Box built their own scripted deployment system that looks very similar to what Helm does (because they started before Helm existed). They serve their images from Artifactory rather than Docker Hub because it gives them a private registry. They use Jsonnet templates with a Makefile that iterates over each JSON file and fills in the specifics for each environment. They then use Applier to synchronize the generated JSON files in GitHub with the desired pod config in Kubernetes, so that they don’t change the clusters when they don’t need to. They use Flannel to bridge the gap between one-IP-per-server host networking and Kubernetes’ one-IP-per-container model, though they’re interested in looking at Project Calico instead. They use Linkerd, backed by Kubernetes, for API discovery, and namespaces to run multiple virtual clusters in one physical cluster.
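To give a feel for the kind of per-environment templating Box described, here’s a minimal Jsonnet sketch. The file name, service name, and values are my own hypothetical placeholders, not Box’s actual config:

```jsonnet
// deployment.jsonnet — hypothetical template; a Makefile could
// iterate over environments and render one JSON file for each
local env = std.extVar("env");  // e.g. "staging" or "prod"

local replicasFor = {
  staging: 2,
  prod: 10,
};

{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "my-service-" + env },
  spec: {
    replicas: replicasFor[env],
    template: {
      spec: {
        containers: [{
          name: "my-service",
          // images pulled from a private Artifactory registry
          image: "artifactory.example.com/my-service:1.2.3",
        }],
      },
    },
  },
}
```

Rendering with `jsonnet --ext-str env=prod deployment.jsonnet` emits the prod-specific JSON, which a sync tool like Applier can then compare against the live cluster state.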
Buffer had a bunch of great wisdom about the different logging solutions they’d tried with Kubernetes. First they tried Elasticsearch with Fluentd, but they filled up their logging storage with all of the events being produced; since they were still in the prototype phase, they chose to try a different solution rather than dig into making the events more granular and rewriting their application to do stateful logging better. Second they tried sidecar logging containers feeding an AWS Kinesis Firehose, but that led to really complex pod configuration, difficulty managing secrets, and a lot of network overhead from the extra logging containers. They finally settled on a Kubernetes DaemonSet log collector that tails the container logs and ships the output over TCP to a centralized resource using Fluentd and AWS Kinesis. They use Prometheus for monitoring and Helm for deployment. One of their best improvements to developer productivity was adopting standard naming conventions for Kubernetes labels, secrets, logs, and services. They’re investigating Linkerd and adding TLS to make their containers more secure.
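A DaemonSet log collector like the one Buffer landed on can be sketched in a few lines of YAML. This is a generic illustration — the image, labels, and paths are my own placeholders, not Buffer’s configuration:

```yaml
# One Fluentd pod per node, tailing every container's log files
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logger
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd-logger
  template:
    metadata:
      labels:
        app: fluentd-logger
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:stable   # hypothetical image tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   # where the container runtime writes logs
```

Because this runs one collector pod per node instead of one sidecar per application pod, it avoids the per-pod configuration and network overhead that tripped Buffer up with the sidecar approach.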
Helm seems like a pretty straightforward package deployment manager. It has a templating engine and a CLI that lets you easily list, install, and upgrade the packages you want to run in your cluster. It deploys Tiller onto your cluster, and Tiller handles the particulars of the package you want to install. If you do an upgrade, it transfers the data from the container running the old version to the container running the new version (keeping the old database container?). Helm chart packages are a good way to learn Kubernetes best practices from the inside, and the project is looking for more people to contribute packages.
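As a rough sketch of the templating Helm provides: a chart pairs a values file with Go-templated manifests, and the `{{ ... }}` placeholders are filled in at install time. The chart contents and value names here are hypothetical:

```yaml
# values.yaml — per-install defaults, overridable at install time
replicaCount: 2
image: nginx:1.13

---
# templates/deployment.yaml — Helm substitutes the {{ ... }} fields
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: web
        image: {{ .Values.image }}
```

Installing and upgrading then become one-liners along the lines of `helm install ./mychart` and `helm upgrade myrelease ./mychart --set replicaCount=5`.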
I found the Fluentd and Cisco talks a little opaque. The Fluentd talk had some good initial information about logging patterns, but quickly veered into logging on embedded devices, Go examples, and requests for contributors. The Cisco talk didn’t seem to use any of the other Kubernetes ecosystem resources; it was about their own (but open-source) measurement bus and setting up events to feed that bus for application alerts. These were probably talks meant for a more intermediate audience, but they had some interesting thoughts for a newbie like myself.
Overall, a really interesting set of breakout sessions, and I’ve seen mention of several others that were really good. It’s always a great conference when there are multiple tracks you wish you could see at the same time. :)