When you are designing and building a cloud-native application, you are probably thinking about deploying it with Kubernetes. What about the database? That choice gets more complicated as you weigh the elasticity, scale, and self-healing you need against the servers you would otherwise have to maintain and operate over the long term. Apache Cassandra™ ticks the first three boxes easily, but what about the operational burden of managing Cassandra? That’s where the cloud-native DataStax Astra service helps both operators and developers. But what makes it easy for deployments on Kubernetes to access the Astra managed service?
Today, we are releasing the DataStax Astra Service Broker, so you can seamlessly integrate Cassandra into your Kubernetes deployments and leave the operations to somebody else. In this article, we’ll show you exactly how easy it is to use Astra with Kubernetes, and make you wonder why anyone would do anything else.
For those unfamiliar with DataStax Astra, it is a Database-as-a-Service (DBaaS) platform that lets you use Cassandra without the operations overhead. From the web interface, you can fill out a few fields, click a button, and in a short time you will have a fully functioning database ready to scale on demand. When building cloud-native applications, combining services that scale in a similar fashion reduces tradeoffs up front and the dreaded technical debt later. Astra is cloud agnostic and gives you full portability between one or more cloud platforms. Just like how Kubernetes allows you to run where and how you want, Astra will be right there as a highly reliable data layer.
When you are building cloud-native applications with Kubernetes, choices still have to be made. When spinning up an instance of your application, how do you provision the data layer? How is the application made aware of the data connection’s information, including endpoints, security certificates, and credentials? The Open Service Broker API and the Service Catalog operator for Kubernetes define an interface for provisioning and binding services like DataStax Astra. For DataStax, this means standing up a Service Broker that translates requests from the Open Service Broker API specification to our Astra DevOps API. The Service Catalog monitors the Kubernetes API for lifecycle requests and forwards them to the Astra Service Broker. Once an instance is provisioned it may then be bound, at which point service information is retrieved and stored in a Kubernetes secret.
In a continuous delivery environment, placing these custom resources alongside your code makes it trivial to push them to Kubernetes with each deployment of your application. Let's take a look at how easy it is to integrate the Astra Service Broker.
You’ll need a few prerequisites to follow this walkthrough:
- A running Kubernetes cluster you can access via command line.
- Helm package manager for Kubernetes.
- Service Catalog command line interface.
- A DataStax Astra account with service account credentials created.
First, start by ensuring the Service Catalog operator is installed in your local cluster. This only needs to be done once per Kubernetes cluster.
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm repo update
helm install catalog svc-cat/catalog --namespace catalog --create-namespace
Next, create a Kubernetes secret with the service account information from Astra. For this, you will need to go to the service account area of Astra and copy the credentials (instructions). What you get is a small snippet of JSON with all the important info needed to create the secret in Kubernetes. It requires a little command-line fu, but rest assured, you only have to do this once. Just replace the part labeled <service_account_creds>:
kubectl create secret generic astra-creds --from-literal=username=unused --from-literal=password=`echo '<service_account_creds>' | base64`
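As a minimal sketch of what the backtick expression in that command does, here is the same base64 encoding applied to placeholder JSON (the real credentials come from the Astra service account page):

```shell
# Placeholder credentials; substitute the JSON copied from Astra.
creds='{"clientId":"abc","clientName":"me@example.com","clientSecret":"xyz"}'

# base64-encode the blob for storage in the Kubernetes secret
encoded=$(printf '%s' "$creds" | base64)

# Decoding round-trips back to the original JSON
printf '%s' "$encoded" | base64 --decode
```

Note that `echo`, as used in the kubectl command above, appends a trailing newline to the payload before encoding; `printf '%s'` avoids that.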
You then register the broker via a ServiceBroker custom resource. For brevity, we will leverage the helpful svcat command-line tool.
$ svcat register astra --url https://broker.astra.datastax.com/ --basic-secret astra-creds
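For reference, the svcat register command is shorthand for creating a namespaced ServiceBroker resource. A rough equivalent, as a sketch assuming the broker is registered in the same namespace as the astra-creds secret, would look like:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBroker
metadata:
  name: astra
spec:
  url: https://broker.astra.datastax.com/
  authInfo:
    basic:
      secretRef:
        name: astra-creds
```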
With this information, Service Catalog automatically queries for available services on Astra and displays all the plans or service tiers.
$ svcat marketplace
       CLASS          PLANS                  DESCRIPTION
+----------------+-----------+------------------------------------------------+
  astra-database   A10         DataStax Astra, built on the
                               best distribution of Apache
                               Cassandra™, provides the
                               ability to develop and deploy
                               data-driven applications
                               with a cloud-native service,
                               without the hassles of
                               database and infrastructure
                               administration.
                   A20
                   developer
$ svcat get plans
     NAME       NAMESPACE       CLASS                  DESCRIPTION
+-----------+-----------+----------------+------------------------------------+
  A10         default     astra-database   6 vCPU, 24GB DRAM, 20GB
                                           Storage
  A20         default     astra-database   12 vCPU, 48GB DRAM, 40GB
                                           Storage
  developer   default     astra-database   Free tier: Try Astra with
                                           no obligation. Get 5 GB of
                                           storage, free forever.
Note: the information here is a small subset of what is available. Listings have been reduced for space.
With this information you may now provision your database instance using svcat or kubectl:
$ svcat provision devdb --class astra-database --plan developer --params-json '{
"cloud_provider": "GCP",
"region": "us-east1",
"capacity_units": 1,
"keyspace": "sample_keyspace"
}'
You should see the following output:
  Name:        devdb
  Namespace:   default
  Status:
  Class:
  Plan:
  Parameters:
    capacity_units: 1
    cloud_provider: GCP
    keyspace: sample_keyspace
    region: us-east1
For kubectl, create a file called astra.yaml to describe the type of instance you need:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: devdb
  namespace: default
spec:
  parameters:
    capacity_units: 1
    cloud_provider: GCP
    keyspace: petclinic
    region: us-east1
  serviceClassExternalName: astra-database
  servicePlanExternalName: developer
kubectl apply -f astra.yaml
Service Catalog handles the provisioning and communication with Astra. After a couple of minutes, you can check the instance status with svcat or kubectl:
$ svcat get instances
NAME NAMESPACE CLASS PLAN STATUS
+-------+-----------+-------+------+--------+
devdb default Ready
$ kubectl get serviceinstances devdb
NAME CLASS PLAN STATUS AGE
devdb ServiceClass/26b3fbe6-0c18-5140-8ac6-87d03b5b4148 1c9bb5ac-6609-5af5-a747-ecf1d093cc7f Ready 3m20s
The process of retrieving service credentials is known as binding. Here’s how you bind the devdb instance:
$ svcat bind devdb
  Name:        devdb
  Namespace:   default
  Status:
  Secret:      devdb
  Instance:    devdb
  Parameters:
    No parameters defined
With kubectl, this may be described with a ServiceBinding resource such as astra-service-binding.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: devdb
spec:
  externalID: 9a412237-1c66-4d43-b5e6-cd92d7b61779
  instanceRef:
    name: devdb
  secretName: devdb
kubectl apply -f astra-service-binding.yaml
After receiving this request, Service Catalog retrieves the credentials from Astra and places them in a local Kubernetes secret with the same name as the binding. In this example, that is devdb:
$ kubectl get secrets devdb -o yaml
apiVersion: v1
data:
  cql_port: 9042
  external_endpoint: ...astra.datastax.com
  encoded_external_bundle: BASE64_ENCODED_CONNECT_BUNDLE_ZIP
  internal_endpoint: ...internal.astra.datastax.com
  encoded_internal_bundle: BASE64_ENCODED_CONNECT_BUNDLE_ZIP
  keyspace: sample_keyspace
  local_datacenter: dc-1
  password: REDACTED
  port: 1337
  tls_ca_cert: PEM_ENCODED_CA_CERT
  tls_cert: PEM_ENCODED_APPLICATION_CERT
  tls_key: PEM_ENCODED_APPLICATION_KEY
  username: REDACTED
kind: Secret
metadata:
  name: devdb
type: Opaque
This is all of the information required to configure the Cassandra driver for secure connectivity to Astra. Instead of manually spinning up nodes, wiring up monitoring, and sourcing infrastructure, Apache Cassandra is available on-demand through a simple GitOps interface. If you need to update the cluster to increase capacity, it is a simple YAML change that is checked into your repository and deployed with CD tools. The only "hard work" here is a call to kubectl apply. With a running database, head over to the Spring Reactive Pet Clinic for a reference Java application that is configured to use the Secret returned by the Astra Service Broker.
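To sketch how an application might consume that secret (the Deployment name and image below are illustrative, not part of the walkthrough), the binding's secret can be exposed to a pod as environment variables with envFrom:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: app
          image: example/petclinic:latest   # illustrative image name
          envFrom:
            - secretRef:
                name: devdb   # the secret created by the binding above
```

Each key in the secret (keyspace, username, password, and so on) then appears as an environment variable the driver configuration can read.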
For more learning, check out the Kubernetes and Cassandra and cloud-native data pages on our developer site, datastax.com/dev. If you are interested in receiving updates on our new Cassandra on Kubernetes certification program, or in becoming a Cassandra certified developer or administrator, please visit https://datastax.com/dev/certifications.