Metric Collection and Storage with Cassandra
One of the most often cited use cases for Cassandra is storing time series data. Time series data comes in many forms, but one of the most common is metrics. More specifically, you might be storing application/server data used for monitoring or capacity planning purposes. In fact, a perfect example of this use case in action is OpsCenter, our tool for managing and monitoring your Cassandra cluster. OpsCenter tracks metrics about your Cassandra nodes and stores this data inside Cassandra before presenting it as user-friendly graphs in the OpsCenter UI.
Since metric tracking/collection is such a common use case for Cassandra, we'll dive into some of the specifics of the data modeling and general practices that OpsCenter uses when collecting metrics about your cluster.
Note: I'll be using CQL to illustrate some of the ideas in this post. Specifically I'll be using Cassandra 1.1.0 and CQL 3.0. You can find more information about CQL 3.0 here.
The Basics
At the most basic level, your data model for metric storage in Cassandra will consist of two items: a metric id (the row key), and a collection of timestamp/value pairs (the columns in a row).
Metric Id
When storing metrics in Cassandra you want a way to uniquely identify each metric you want to track. In the basic case this consists of two parts: a metric name and a server id (IP address). Simply joining these two parts together generates the row key you will use to store collected data points in Cassandra. For example, a row key for storing Cassandra read request metrics in OpsCenter might be '127.0.0.1-getSPReadOperations'.
In addition to the basic case, you may often append additional components to your metric id for more dynamic metrics. For example, OpsCenter tracks metrics about the specific column families in the cluster. The keys for these metrics are generated dynamically based on the configuration of the Cassandra cluster being monitored. An example row key for this type of metric would be '127.0.0.1-Keyspace1-Counter1-getLiveSSTableCount'.
An important note about this data model is that it assumes you know all components of a metric id when retrieving it. It isn't possible to retrieve the read request metric for every node in a cluster without actually knowing the IP addresses of those nodes. Keep this in mind when designing the part of your application that will be reading metric data.
Data Points
The data points collected for a given metric become the columns that you store in your Cassandra row. The biggest thing to note here is that we are going to take advantage of Cassandra's ordering of columns within a row. By using a timestamp as the column name and the data point as the column value, data points are automatically sorted by time. This makes pulling out a time slice of data points, e.g. to graph them, extremely efficient.
A Brief Example
cqlsh> CREATE KEYSPACE stats WITH strategy_class = 'SimpleStrategy'
   ...   AND strategy_options:replication_factor = 1;
cqlsh> USE stats;
cqlsh:stats> CREATE TABLE metrics (
         ...   metric_id varchar,
         ...   ts timestamp,
         ...   value float,
         ...   PRIMARY KEY (metric_id, ts)
         ... );
cqlsh:stats> INSERT INTO metrics (metric_id, ts, value)
         ...   VALUES ('127.0.0.1-readlatency', '2012-05-25 08:30:29', 13.1);
cqlsh:stats> INSERT INTO metrics (metric_id, ts, value)
         ...   VALUES ('127.0.0.1-readlatency', '2012-05-25 08:31:31', 10.8);
cqlsh:stats> SELECT * FROM metrics;

 metric_id             | ts                       | value
-----------------------+--------------------------+-------
 127.0.0.1-readlatency | 2012-05-25 08:30:29-0500 |  13.1
 127.0.0.1-readlatency | 2012-05-25 08:31:31-0500 |  10.8
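Because the full metric id is the row key and timestamps order the columns, a graphing query is a single slice of one row. As a minimal sketch (the time window here is purely illustrative), you supply the complete metric id plus a range on ts, and the results come back already sorted by time:

cqlsh:stats> SELECT ts, value FROM metrics
         ...   WHERE metric_id = '127.0.0.1-readlatency'
         ...   AND ts >= '2012-05-25 08:30:00'
         ...   AND ts <= '2012-05-25 08:35:00';

Since the columns are stored in timestamp order, Cassandra reads this range sequentially off the row and no client-side sorting is needed.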
Aggregation
The basic model we've described above makes it easy to store and retrieve raw data points with Cassandra. While the raw data points contain the most information, they can be hard for an application to work with without any processing. For example, graphing a month's worth of read latency data sampled once per minute requires over 43,000 data points (60 samples × 24 hours × 30 days). Usually, we want to sacrifice some precision in our graph and aggregate those data points into a more manageable set.
The aggregation techniques OpsCenter uses are modeled after the open source project rrdtool. OpsCenter predefines multiple 'buckets' or 'rollups' that data points are aggregated into, one per column family in the OpsCenter keyspace: rollups60, rollups300, rollups7200, and rollups86400. In the rollups86400 column family, each data point in a row represents a value for a 24-hour span (86400 seconds). Now, if we want to graph read latency for the past month, we only need to pull 30 data points.
OpsCenter actually computes these aggregate values on the fly as it collects data. For each rollup size, OpsCenter calculates the average, min, and max of a metric during that window. One of the benefits of modeling after rrdtool is that these aggregates can be computed on the fly, while data collection is happening and without keeping every raw value around (see cumulative moving average). OpsCenter samples a metric every minute and updates an in-memory view of each rollup size. Since the rollups60 column family only aggregates data over a single minute, those data points can be written almost immediately. After 5 minutes (5 samples), we have already computed the min, max, and average of the metric during that 5-minute period, so we can write that aggregate out and reset the running values for the next 5-minute period.
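As a concrete sketch of the bookkeeping (not OpsCenter's exact implementation), the running aggregates for a window can be updated from each new sample x without storing any of the previous samples, where n is the number of samples already folded into the window:

avg_new = avg + (x - avg) / (n + 1)    (the cumulative moving average step)
min_new = minimum(min, x)
max_new = maximum(max, x)

When the window closes, the three values are written to the appropriate rollup column family and the running values are reset.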
You might have noticed that as part of the aggregation we are calculating three data points for a given time: min, max, and average. This doesn't fit the basic data model we described above, where a column is a single timestamp/value pair. To save disk space, OpsCenter actually packs all three of these values into a single byte array, which is stored as a single column value in Cassandra. The more flexible approach, however, is simply to add them as additional columns in your schema:
cqlsh:stats> CREATE TABLE rollups300 (
         ...   metric_id varchar,
         ...   ts timestamp,
         ...   min float,
         ...   max float,
         ...   avg float,
         ...   PRIMARY KEY (metric_id, ts)
         ... );
cqlsh:stats> INSERT INTO rollups300 (metric_id, ts, min, max, avg)
         ...   VALUES ('127.0.0.1-readlatency', '2012-05-25 08:35:00', 2.4, 24.8, 14.1);
cqlsh:stats> SELECT * FROM rollups300;

 metric_id             | ts                       | avg  | max  | min
-----------------------+--------------------------+------+------+-----
 127.0.0.1-readlatency | 2012-05-25 08:35:00-0500 | 14.1 | 24.8 | 2.4
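Reading aggregates for a graph then looks just like the raw case: supply the full metric id plus a time range, and each returned row carries the min/max/avg for one 5-minute window. For example, one day of 5-minute rollups comes back as at most 288 rows (the date range here is illustrative):

cqlsh:stats> SELECT ts, min, max, avg FROM rollups300
         ...   WHERE metric_id = '127.0.0.1-readlatency'
         ...   AND ts >= '2012-05-25 00:00:00'
         ...   AND ts < '2012-05-26 00:00:00';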
Considerations for Wide Rows
One of the reasons Cassandra is such a good fit for this use case is that its data model supports rows with a very large number of columns. Even so, there is a limit on the number of columns you can practically store in a single row. Depending on your sample rate, you can hit this limit fairly quickly. There are two general approaches to keeping rows from growing too large: expiring data, and row sharding.
The approach OpsCenter takes is to expire older metric data. This approach generally works well if your collection rate isn't too high and you are also aggregating your data points. Using Cassandra's built-in TTL support, OpsCenter expires columns in the rollups60 column family after 7 days, the rollups300 column family after 4 weeks, and the rollups7200 column family after 1 year; the data in the rollups86400 column family never expires. This prevents any row from growing too large while still letting you view aggregated metric data for arbitrary times in the past. OpsCenter also takes advantage of the fact that metric data is only ever written once and never updated. Because of this property, the metric column families can set gc_grace_seconds to 0, allowing expired columns to be removed immediately. For more info on why this is useful see this wiki page.
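In CQL, the expiration is set per insert with the TTL option, and gc_grace_seconds is an ordinary table property. Here's a sketch using the rollups300 table from above (2419200 seconds is the 4-week expiry; the expiry values are OpsCenter's, but the statements themselves are just illustrative):

cqlsh:stats> ALTER TABLE rollups300 WITH gc_grace_seconds = 0;
cqlsh:stats> INSERT INTO rollups300 (metric_id, ts, min, max, avg)
         ...   VALUES ('127.0.0.1-readlatency', '2012-05-25 08:40:00', 3.0, 19.5, 11.2)
         ...   USING TTL 2419200;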
The other approach to preventing rows from becoming too large is to shard each metric's data across multiple rows, often based on time. We have another blog post completely dedicated to solving this problem in Cassandra, so the details of this approach won't be described here.
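To give a flavor of it, though: the basic idea is to fold a time bucket into the row key, so no single row ever covers more than a bounded window. With the metrics table from earlier, a month-sharded key might look like this (the key format is just an illustration, not OpsCenter's scheme):

cqlsh:stats> INSERT INTO metrics (metric_id, ts, value)
         ...   VALUES ('127.0.0.1-readlatency-2012-05', '2012-05-25 08:30:29', 13.1);

The reading side then issues one slice query per month bucket that overlaps the requested time range.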
Conclusion
Hopefully this post has helped to give some insight into how OpsCenter handles metric storage for your cluster, as well as general approaches to storing metrics in Cassandra. Feel free to ask any questions you have about OpsCenter or metric collection in Cassandra in the comments section below.