April 17, 2019

New Intel® Hardware Pushes DSE Even Further Ahead of Apache Cassandra®

Kathryn Erickson, Strategy

In April 2018, we released DataStax Enterprise (DSE) 6 and revealed to the market that it was twice as fast as Cassandra or any previous version of DSE. The gains came from a new thread-per-core asynchronous architecture and major storage engine improvements.

We would never have imagined that 12 months later Intel would release hardware that more than doubles the performance gap between DSE and Cassandra. The secret lies within the second generation of Intel's Xeon® Scalable processors, which include optimizations specifically for the new Optane™ Persistent Memory Module.

When you combine our innovations with Intel’s, you get a database, CPU, and storage system that are optimized for working closely together.

Let's break it down:

  • DataStax's thread-per-core architecture assigns each token range to a thread, and each thread is pinned to a dedicated CPU core (see the sketch after this list).
  • To eliminate thread contention in the new design, DataStax made reads, writes, and other tasks asynchronous throughout the entire platform.
  • Intel recognized that more threads and more paths to memory were required for scale-out systems and offers the Xeon® Scalable series with two dies per CPU socket (meaning you’ll see two CPUs per socket).
  • Intel provided additional optimizations (such as single-hop routing) for accessing Optane™ storage, resulting in latency as low as 79 nanoseconds (yes, nanoseconds).
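
To make the thread-per-core idea concrete, here is a minimal Java sketch, assuming a hypothetical CoreShardingSketch class; the names, the token-to-core mapping, and the stand-in read/write bodies are illustrative only and are not DSE internals. Each core gets its own single-threaded executor, every token maps to exactly one owning core, and reads and writes are submitted asynchronously so callers never block.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of a thread-per-core layout: one single-threaded executor per
// core, with each token owned by exactly one core. All names here are
// illustrative, not DSE internals.
public class CoreShardingSketch {

    // One single-threaded executor per core; all work for a token range is
    // funneled to its owning core, so the hot path needs no locks.
    private final ExecutorService[] cores;

    public CoreShardingSketch(int coreCount) {
        cores = new ExecutorService[coreCount];
        for (int i = 0; i < coreCount; i++) {
            cores[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Map a partition token to the core that owns its token range.
    private int owningCore(long token) {
        return (int) Math.floorMod(token, (long) cores.length);
    }

    // Reads and writes are submitted asynchronously to the owning core and
    // return a future instead of blocking the caller.
    public CompletableFuture<String> read(long token, String key) {
        return CompletableFuture.supplyAsync(
                () -> "value-for-" + key,            // stand-in for a real storage read
                cores[owningCore(token)]);
    }

    public CompletableFuture<Void> write(long token, String key, String value) {
        return CompletableFuture.runAsync(
                () -> { /* stand-in for a real storage write */ },
                cores[owningCore(token)]);
    }
}
```

Because every request for a given token range lands on the same core, the design avoids cross-thread contention, which is exactly what the asynchronous reads and writes above are meant to preserve.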

Based on our benchmark findings, unmodified DSE performs very well on the new hardware, and the results further demonstrate the advantage that DSE Advanced Performance holds over Cassandra.

The benchmark used DSE 6.7 and DataStax Distribution of Apache Cassandra® (based on Cassandra 3.11) to compare the new Intel® Optane™ Persistent Memory Module’s performance against NVMe drives. All tests were done on a 4-node cluster packed with 40-core Cascade Lake Xeon® Scalable data center processors.
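
As a rough illustration of how a mixed read/write workload can be driven and its tail latency measured, here is a minimal sketch using the DataStax Java driver. The contact point, datacenter name, and demo.kv table are assumptions for illustration only; this is not the harness used to produce the numbers below.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.concurrent.CompletableFuture;

// Minimal mixed read/write latency probe. The contact point, datacenter, and
// demo.kv table are assumptions; errors are ignored for brevity.
public class MixedWorkloadProbe {

    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("10.0.0.1", 9042))
                .withLocalDatacenter("dc1")
                .build()) {

            int requests = 10_000;
            long[] latenciesNanos = new long[requests];
            CompletableFuture<?>[] inFlight = new CompletableFuture<?>[requests];

            for (int i = 0; i < requests; i++) {
                // Roughly a 50/50 read/write mix.
                SimpleStatement stmt = (i % 2 == 0)
                        ? SimpleStatement.newInstance(
                                "INSERT INTO demo.kv (k, v) VALUES (?, ?)", "key" + i, "value" + i)
                        : SimpleStatement.newInstance(
                                "SELECT v FROM demo.kv WHERE k = ?", "key" + i);

                long start = System.nanoTime();
                int slot = i;
                inFlight[i] = session.executeAsync(stmt)
                        .toCompletableFuture()
                        .whenComplete((rs, err) -> latenciesNanos[slot] = System.nanoTime() - start);
            }

            // Wait for all in-flight requests, then report the 99th percentile.
            CompletableFuture.allOf(inFlight).join();
            Arrays.sort(latenciesNanos);
            long p99 = latenciesNanos[(int) (requests * 0.99) - 1];
            System.out.printf("p99 latency: %.2f ms%n", p99 / 1_000_000.0);
        }
    }
}
```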

These were the results:

  • For write workloads, DSE 6.7 throughput is approximately 2x higher (better) on Optane™ than on the latest NVMe drives we tried, and 99th percentile latencies stay under 20ms.
  • For mixed and read-heavy workloads, DSE 6.7 throughput is up to 4x higher on Optane™ than NVMe.
  • DSE Advanced Performance features love this hardware. DSE 6.7 achieves 5x the throughput of Cassandra, while at the same time delivering an almost 4x reduction in p99 latency (lower is better).
  • Although Cassandra’s write-heavy workloads didn’t benefit as much from the new hardware, its evenly mixed and read-heavy workloads were up to 5x faster with Optane™.

[Graph: throughput comparison of DSE and DDAC on Optane™ vs. NVMe]

[Graph: p99 latency comparison of DSE and DDAC on Optane™ vs. NVMe]

The Takeaway

If you haven’t upgraded to DSE 6 or 6.7, you really should. If you have upgraded and want more performance without data model changes, you should check out Intel's latest product line.

It’s been incredible watching Intel innovate in this space. No other company is positioned to combine CPU and storage innovation in such a revolutionary way. We’re also working closely with Intel on software innovations they’re making around Cassandra.

We’ll be previewing these ideas at our inaugural user conference, DataStax Accelerate, in Maryland on May 21–23, 2019. Shoot our partner team an email and reference this blog for a conference discount code: techpartner@datastax.com.
