Watch the DataStax Kubernetes Operator in Action
There’s a simple reason containers are becoming increasingly popular: they enable software teams to ship better products faster and more efficiently. But like any other technology, containers introduce problems of their own. To manage them, more and more development teams are turning to Kubernetes, an open source container orchestration platform, to deploy software in containers. Kubernetes helps teams ensure applications work as designed, regardless of the environment they run in.
Kubernetes is built around a collection of resources that declaratively define how an application should be deployed: controllers that determine how many instances of an app should be running at any given time, load balancers that are updated automatically, and so on. Of course, in a bit of an Inception-inspired conundrum, Kubernetes introduces complexities of its own. With all of these new components to describe, doesn’t that make things more difficult?
To reduce those complexities, we built the DataStax Kubernetes Operator. Operators replace the chore of describing many low-level Kubernetes components with a simpler, logical interface for describing an application. We created the operator because we found that deploying DataStax Enterprise (DSE) on premises or in containers was often difficult and daunting, so we automated the process by moving to Kubernetes and building the operator.
Instead of managing each individual Kubernetes resource for a cluster, administrators simply define a logical DSE Datacenter in a YAML file. This Datacenter object describes which version to deploy, the number of nodes, and any configuration changes that deviate from the defaults. Once the file is submitted to the Kubernetes cluster, our operator parses the fields and submits requests for the required resources on your behalf. Instead of keeping track of multiple files per DSE node (Kubernetes pod), a single file represents an entire fleet of instances. Deploying a configuration change doesn’t require intricate intervention on each node; instead, the operator detects the change in the datacenter-level YAML file and applies it to each node, one at a time, in a rolling fashion.
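To give a sense of what this looks like, here is a minimal sketch of a datacenter definition. The apiVersion, kind, and field names below are assumptions that may differ between operator releases, so treat it as an illustration rather than a verbatim manifest:

```yaml
# Hypothetical sketch of a logical DSE Datacenter definition.
# apiVersion, kind, and field names are illustrative and may vary
# between operator releases.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: dse
  serverVersion: "6.8.0"   # which DSE version to deploy
  size: 3                  # number of nodes (pods) in the datacenter
  config:                  # only settings that deviate from the defaults
    cassandra-yaml:
      num_tokens: 8
```

Submitting the file is a single `kubectl apply -f dc1.yaml`; from there the operator requests the underlying Kubernetes resources on your behalf.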
As a result, deployment is now trivial. Using the operator, admins can stop spinning up the nodes of individual clusters by hand and get back to focusing on what’s most important. To illustrate, changing the number of nodes in a cluster is completely automated; new nodes come online as quickly as possible without anyone having to watch logs or carefully time shell invocations.
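As a concrete sketch, scaling the cluster could be as simple as editing the node count in the same file and re-applying it. The `size` field name is carried over from the assumed manifest above:

```yaml
# Grow the datacenter from 3 to 6 nodes by changing a single field.
# (Field name assumed from the sketch above.)
spec:
  size: 6
```

After another `kubectl apply -f dc1.yaml`, the operator brings the additional nodes online without further intervention.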
What’s more, the operator also integrates well in a multi-operator environment. Clusters that use the monitoring tools Prometheus and Grafana can have their metrics picked up through those operators’ resources. As new nodes are added to the cluster, they are automatically added to monitoring! This ensures that no system slips through the cracks during the configuration, startup, or deployment stages.
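As a sketch of how that can work with the Prometheus Operator, a ServiceMonitor can select the services created for the DSE pods by label. The label selector and port name below are assumptions and need to match how the services in your cluster are actually labeled:

```yaml
# Hypothetical ServiceMonitor that scrapes DSE metrics via the Prometheus Operator.
# The matchLabels entry and the port name are assumptions; adjust them to the
# labels and ports on the services in your cluster.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dse-metrics
spec:
  selector:
    matchLabels:
      cassandra.datastax.com/cluster: cluster1   # assumed label
  endpoints:
    - port: metrics        # assumed port name
      interval: 30s
```

Because the ServiceMonitor matches on labels rather than on individual pods, nodes added later are scraped automatically.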
Add it all up, and the operator makes life easier while freeing engineers and developers to do their best work, which sure beats focusing on the tedium. To learn more about the operator and see it in action, check out this demo Christopher Bradford, Product Manager at DataStax, recently gave at KubeCon.
If you’re interested in finding out even more about the DataStax Kubernetes Operator, here are some resources you may want to check out:
- First look at the DSE Kubernetes Operator with Christopher Bradford | Ep. 128 Distributed Data Show (podcast)
- Simplifying DataStax Enterprise Deployments with Kubernetes for Containerized Workflows (blog)
- Optimizing Data Management in Containers with Kubernetes and DataStax (white paper)
- Kubernetes and Containers
- DataStax Labs
- GitHub repository