December 9, 2019

DataStax Accelerate 2019 Rewind: IBM (Watch Video)

Running a Globally Distributed Cassandra Cluster on Kubernetes in the Cloud

As enterprises increasingly build applications with microservices and containers, more and more of them are using container orchestration platforms like Kubernetes.

Very simply, Kubernetes lets developers deploy, scale, and manage containers across many machines, making it easier to ensure applications operate as they're supposed to, regardless of the environment they're running in.

From resource optimization and scalability to automated deployments and self-healing, Kubernetes delivers a number of transformative benefits to modern applications. As such, it comes as no surprise that more and more leading organizations—like IBM—are relying on Kubernetes every day.

Earlier this year at DataStax Accelerate, Mike Treadway, principal cloud architect at IBM, shared insights from his journey of moving from a simple Apache Cassandra® deployment on Kubernetes to an operationally viable, globally distributed cluster.

Running Cassandra on Kubernetes

IBM was building out a digital experience platform, and Treadway wanted to use Cassandra to support that initiative. 

While he had experience running Cassandra on virtual machines and virtual infrastructure at previous companies, IBM was hosting its applications, APIs, and services on its own Kubernetes service. So Treadway figured it would be cool if he and his team could do the same and run Cassandra there, too.

As he began his research to find out how he might do that, he tracked down a few basic examples online. But they fell short in several areas—including global replication, scaling out and in, operational tasks (e.g., maintenance), and monitoring.

“We needed to be able to support multiple Kubernetes clusters,” Treadway said. “We needed to be able to deploy Cassandra in multiple regions and multiple geos.” 

Further, IBM needed to be able to monitor Cassandra even though Kubernetes clusters don’t federate with one another. It was also essential that they be able to perform automated maintenance and have the ability to scale out easily.

To solve these problems, Treadway opted to create a Kubernetes service for each Cassandra node and assign each a private IP address.
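
As a rough sketch of that idea (illustrative names and ports, not IBM's actual manifests), a per-node Service pinned to a single StatefulSet pod might look like this:

```yaml
# Hypothetical per-node Service (placeholder names, not IBM's configuration).
# One Service per Cassandra pod gives each node a stable, privately routable
# address that nodes in other Kubernetes clusters can reach.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0
  namespace: cassandra
  annotations:
    # On IBM Cloud Kubernetes Service, an annotation like this requests a
    # private rather than public load balancer IP (assumed here).
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
spec:
  type: LoadBalancer
  selector:
    # Built-in label that matches exactly one pod of the StatefulSet
    statefulset.kubernetes.io/pod-name: cassandra-0
  ports:
    - name: intra-node
      port: 7000
      targetPort: 7000
    - name: cql
      port: 9042
      targetPort: 9042
```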

“When you deploy something like this, Helm will be your friend,” Treadway continued. “We use Helm to automate our deployments.”
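
As an illustration only (these keys are hypothetical, not IBM's chart), a values file for such a deployment might parameterize the per-region settings, with one release per Kubernetes cluster applied via a command like `helm upgrade --install`:

```yaml
# Hypothetical values.yaml sketch for a Cassandra Helm chart (keys are
# placeholders, not IBM's actual chart). Each release varies only the
# datacenter, seed list, and storage settings.
cluster:
  name: dxp-cassandra
  datacenter: us-south          # one release per region/geo
  replicaCount: 3
storage:
  className: ibmc-block-gold    # example IBM Cloud block-storage class
  size: 500Gi
network:
  privateServicePerNode: true   # render one private Service per node
seeds:
  # Private IPs of seed nodes in the other Kubernetes clusters (placeholders)
  - 10.0.0.11
  - 10.1.0.11
```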

For IBM, moving Cassandra to Kubernetes was an intricate process that involved a fair share of fine-tuning. At a very high level, IBM:

  • Defined each StatefulSet with persistent volumes, ordering storage from IBM Cloud (see the sketch after this list)
  • Configured Cassandra nodes to scale up and down by picking IP addresses out of a pool
  • Allowed nodes to communicate within Kubernetes clusters via a pod network
  • Allowed nodes to communicate over external networks via a private IP address
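
A much-abbreviated sketch of such a StatefulSet (hypothetical names, image tag, and storage settings, not IBM's manifest) might look like this:

```yaml
# Abbreviated StatefulSet sketch (placeholder values, not IBM's manifest).
# volumeClaimTemplates order block storage from IBM Cloud per pod; the pod
# network carries intra-cluster traffic, while the per-node Services above
# carry traffic between clusters.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: cassandra
spec:
  serviceName: cassandra          # headless Service used for pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11    # example image tag
          ports:
            - containerPort: 7000  # intra-node gossip
            - containerPort: 9042  # CQL clients
          volumeMounts:
            - name: data
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ibmc-block-gold   # example IBM Cloud block-storage class
        resources:
          requests:
            storage: 500Gi
```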

What about maintenance?

“As I was playing around with this and trying to get this to work, I had this problem where I needed to access the storage volumes—but I needed to do it in a way where Cassandra wasn’t running,” Treadway said. 

At first, he accomplished this by creating an on-the-fly container, finding the relevant persistent volume claim, and mounting it inside that container. 
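
That ad hoc approach might look roughly like the following throwaway pod (hypothetical names; the claim name is just an example of what a StatefulSet would generate):

```yaml
# Hypothetical one-off inspection pod (placeholder names, not IBM's manifest):
# mounts a single Cassandra data PVC into a throwaway container so the files
# can be examined while Cassandra is not running against that volume.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspect
  namespace: cassandra
spec:
  restartPolicy: Never
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]   # keep the pod alive for inspection
      volumeMounts:
        - name: data
          mountPath: /mnt/cassandra-data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-cassandra-0          # example PVC name from the StatefulSet
```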

But instead of doing that every time, Treadway created a deployment in Kubernetes that had every Cassandra volume and backup volume attached to it.

“That deployment sits in our Kubernetes environment with a replica of zero,” Treadway explained. “I don’t use it unless I really need it.”

When he does need it, he goes in and scales it up to one. All of a sudden, he has access to all Cassandra volumes in a single ops pod.
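
Put together, such an ops Deployment might look roughly like this (a hypothetical sketch with placeholder names, not IBM's manifest); scaling it up for a maintenance window is then a matter of running something like `kubectl scale deployment cassandra-ops --replicas=1`:

```yaml
# Hypothetical "ops" Deployment sketch (placeholder names, not IBM's manifest):
# attaches the Cassandra data and backup volumes and is parked at replicas: 0,
# so it consumes nothing until it is scaled to 1 for maintenance or backups.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cassandra-ops
  namespace: cassandra
spec:
  replicas: 0                 # scaled to 1 only when volume access is needed
  selector:
    matchLabels:
      app: cassandra-ops
  template:
    metadata:
      labels:
        app: cassandra-ops
    spec:
      containers:
        - name: ops
          image: busybox:1.36
          command: ["sh", "-c", "sleep 86400"]
          volumeMounts:
            - name: data-0
              mountPath: /volumes/cassandra-0
            - name: backup-0
              mountPath: /volumes/backup-0
            # ...one mount per Cassandra data and backup volume
      volumes:
        - name: data-0
          persistentVolumeClaim:
            claimName: data-cassandra-0      # example PVC names
        - name: backup-0
          persistentVolumeClaim:
            claimName: backup-cassandra-0
```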

“This gives me the ability to look at the file system or look for anomalies,” Treadway continued. “I also use it for backups.”

Want to learn more about Kubernetes and Cassandra?

For more information on how, specifically, IBM runs Cassandra on Kubernetes in the cloud—including how they approach automated tasks, backups, and maintenance—check out Mike’s Accelerate presentation here.

If you’re interested in learning more about how leading organizations across all industries are using DataStax and Cassandra to change the world, join us at DataStax Accelerate 2020, which will be held May 11–May 13, 2020 in San Diego! 

Be sure to save the date! We look forward to seeing you there. To learn more, visit https://www.datastax.com/accelerate.
