Technology | November 11, 2020

[Webcast] The 411 on Storage Attached Indexing in Apache Cassandra


Secondary indexing in Apache Cassandra® has evolved, but has never really solved the usability issues with creating indexes. A new indexing technology was just introduced to the Cassandra project called Storage Attached Indexing (SAI) and you’re going to love it!

SAI is a robust new indexing technology that eliminates many of the tradeoffs between scalability, ease of use, and operations that come with choosing NoSQL, making development and data modeling in Cassandra easier for everyone.

In this recorded webcast, operators and developers will get a deep dive into how SAI works, how it is different, and most importantly, how to use it. Learn more about SAI through this technical article and blog.

You can also give SAI a spin at no cost by signing up for Astra.

Introduction

Matt Kennedy (00:00): Good morning. And thank you for joining us for today's webinar, Everything You Need to Know About Storage Attached Indexing in Apache Cassandra. This is Matt Kennedy, Product Manager here at DataStax. And I'm joined today by Patrick McFadin, who I'm sure needs no introduction, but just in case you are new to our community, Patrick is a Cassandra contributor, evangelist, educator, and advocate.

Indexing In Distributed Systems

Matt Kennedy (00:22): Today, we're going to cover Storage Attached Indexing, also known as SAI. SAI is a new indexing technology that transforms the task of data modeling in Cassandra. I'm really excited about today's topic. So, let's jump into it. Patrick, why don't you take it away with our slides.

Patrick McFadin (00:40): Every time we talk about indexing with Cassandra, and new databases, or new indexing in Cassandra, it's always a big topic, and everyone gets excited. Primary indexes are well known in every database. So, when you create a table, and you create a primary key, that initial part of it, the ID, in this case, is what we call the primary index.

Patrick McFadin (01:05): So, an index that's maintained by the database means a fast lookup. It's the primary part of that primary key, a primary index, where I can say select ... from ... where and then use that primary key. And I'm guaranteed, or at least promised, a fast lookup, because that's how it's stored on the disk. And it's indexed on disk. And everything is optimized around a primary key.
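As a concrete sketch of the kind of primary-key lookup Patrick describes, here is a hypothetical table and query (the `users` table, its columns, and the id value are made up for illustration):

```cql
-- Hypothetical table: id is the primary key, so data on disk is
-- organized and indexed around it
CREATE TABLE users (
    id      uuid PRIMARY KEY,
    name    text,
    country text
);

-- Primary-index lookup: "select from, and where, and then use that
-- primary key" -- a fast, direct read
SELECT name, country FROM users
WHERE id = 123e4567-e89b-12d3-a456-426614174000;
```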

Patrick McFadin (01:34): Those primary indexes, they're not novel to most databases. Single-server databases, also known as relational databases, have had these since pretty much the beginning. But for distributed databases, they are also there, and Cassandra has primary indexes as well. As a matter of fact, this table and this query are pretty much Cassandra queries.

Patrick McFadin (01:58): They would work fine in both relational databases and in Cassandra. Now, secondary indexing is different, and that's a non-primary key query. When we create a table with columns, well, this is something that in relational databases, you just know that you can do a select and from the table, and then pick any column that you want.

Patrick McFadin (02:28): Now, it may not be the fastest query because it's not an indexed query, which makes it faster, but you can get it. So, if you say, for instance, use country, which is one of the fields inside of the table, you could say country equals France, for instance. And for relational database, or a single server database, totally fine. If you're using Cassandra, and if you have used Cassandra, you'll know that that will fall over.

Patrick McFadin (02:52): It'll give you an error and say, "Wait a minute, that's not a part of the primary key." So, what is that all about? And this point of view quite often is the why. It's frustrating for users, when you're like, "I have table data. Why can't I address any of the columns in that table?" And for relational databases, it's pretty clear. So, I've created this diagram to make a couple of points on how the differences between how we store data, and this is really key on how you index as well.
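For example, against a hypothetical `users` table with a `country` text column, the non-primary-key query Patrick mentions would be rejected (the exact error text varies by version):

```cql
-- country is not part of the primary key, so plain Cassandra refuses this
SELECT * FROM users WHERE country = 'France';

-- InvalidRequest: Cannot execute this query as it might involve data
-- filtering and thus may have unpredictable performance.
-- (ALLOW FILTERING would force it, at the cost of a scatter-gather scan.)
```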

Patrick McFadin (03:32): With relational systems, the tabular data stored in a data file is generally rows and columns. The way it indexes on disk is arranged around the table structure of a relational database. So, for instance, you are looking for a column that is organized on the file system in a way that's easy to find. And since it's a single system, even if it wasn't really easy to find, it still is a fast operation, because it's a single-server operation.

Patrick McFadin (04:01): So, the combination of the optimized file structure, and the fact that it's on a single system, make those indexed or non-indexed lookups really efficient, or at least fast enough. Now, when you want to make them faster in a relational database, you create an index, so it's a faster lookup, yay. So, whenever we talk about distributed systems, though, and in this case, I'm going to speak directly about Cassandra, there's some tie over to any distributed system, really.

Patrick McFadin (04:27): But for Cassandra, the data is arranged in more of a row-based access. And so, you could have those columns exist in different data files, and in different positions in those data files. So, it turns into a much harder problem. It's not a single pass on the disk, or an index on the disk. And so, let's say that you're looking for a country as text or as a field, well, it could be in a lot of places.

Patrick McFadin (04:54): So, you get into this really fast scatter gather problem, which is almost impossible to make efficient. And on top of that, we like to add a lot more servers to the system. So, it's not that we have just one server that you can get that data from. We're trying to gather it from a cluster of servers. So, there's all kinds of different challenges that are involved there.

Matt Kennedy (05:18): I think that's an important point to emphasize about the primary key structure in Cassandra is it's not just talking about the physical layout on disk, which you highlighted a couple of times. But also, it says where in the cluster we find it. That is why we have this separate part called the partition key.

Patrick McFadin (05:37): Right. And the partition key has been such a critical part of the Cassandra data model; it's really an important foundation, because it describes where your data is in the cluster. So, if you have a 1,000 node cluster, what node has your data? But it also gives you a fast pointer on disk. We have a key cache, which stores information about where the partition key is on disk. So, it turns into a really fast operation with the primary key, or the partition key, I'm sorry.

Matt Kennedy (06:11): Indeed.

Patrick McFadin (06:11): But once we start getting into column access, there's much more of a scatter gather issue that we have to manage. So, the way we manage that is what we call secondary indexing. But secondary indexing in Cassandra has some very specific Cassandra-esque things.

Secondary Indexing In Cassandra

Patrick McFadin (06:33): Secondary indexing does exist in other databases. And that's actually what a non-primary key index is called. But in Cassandra, it's nothing new. So, early days, beginning of Cassandra, we had secondary indexes. And because "secondary index" is boring, we gave it a cool kid name: 2i. If you see documentation that has the word 2i in it, that's what it means. Secondary indexes in Cassandra have been around since we were using Thrift for the table structure, before CQL.

Patrick McFadin (07:05): It gave you the ability to create a column, or define a schema with the column name, and then find that data. So, some of the great things about it was that now, I can do a query on an arbitrary column. And that made it a lot easier to use. And you could do inequalities, which also was very handy for times where you needed to do a range of data.
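A classic 2i looks like this, assuming the same kind of hypothetical `users` table with a `country` text column:

```cql
-- Old-style secondary index (2i) on an arbitrary column
CREATE INDEX users_country_2i ON users (country);

-- Now the non-primary-key query is accepted
SELECT * FROM users WHERE country = 'France';
```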

Patrick McFadin (07:29): And the disk space usage wasn't really that bad, it was fairly linear. But the way that it works, and you can see in this diagram, is that whenever data was written to the memtable, it actually triggered a reindex of that data if it was an indexed field. So, there is this reindex-on-write problem. And when you got to a certain size of cluster, the problem started getting bigger and bigger.

Patrick McFadin (07:58): Eventually, you got into trouble, this is where operators get really angry, where they have to manage a cluster that you added a ton of secondary indexes to, because now, they're dealing with all these write amplification issues. So, the guidance has been, use secondary indexes sparingly, or your operators will find a way to bury you. They will plan your disappearance. And there's also some other issues around repairs and things like that. So, it was a good first shot. It was like, would you call it that, Matt, this is a good first go?

Matt Kennedy (08:35): I think it showed us what the potential was there. But there were so many issues, or there are so many issues around it that are even hard to provide guidance around. So, there's the am I writing too much. And then, there's the am I querying something that has an appropriate cardinality for 2i.

Matt Kennedy (08:56): So, it was a very complex piece to try to really get anything out of. There's the one degenerate use case where I would still use it, where if you were specifying the partition key first, then it's okay to sub select rows of a large partition, for example.

Patrick McFadin (09:23): Yeah, you're indexing a single here.

Matt Kennedy (09:26): Yeah. Weird, esoteric cases like that.

Patrick McFadin (09:29): Yeah. And then, that's super hard. If you're new to Cassandra, and then you show up, and there's all these rules, I don't know about you, Matt, but I'm not a really good rules follower.

Matt Kennedy (09:40): Right. We noticed.

Patrick McFadin (09:42): No, yeah.

Matt Kennedy (09:43): Not as bad as you, but yeah.

Patrick McFadin (09:46): Yeah. We don't have felonies or anything, but database rules. I'm talking about database rules. Yeah. And then, as a community, thinking about indexing, along came the potential savior for us all called Storage Attached Secondary Indexes. Boy, Matt, engineers are terrible at naming things, by the way. Yeah. So, what do we call it?

Matt Kennedy (10:14): Repair.

Patrick McFadin (10:15): Yeah. We really suck at it. SASI, SASI sounds cool. But SASI was a fresh look at this. And this came from a team at Apple that was doing machine learning, and Cassandra work, and they needed a fast way to do lookups, basically on columns. And it had a lot of really good promise because it uses this tokenized text search, which was really cool. Range scanning was fast.

Patrick McFadin (10:44): And it changed up what secondary indexes were doing. Instead of reindexing on write, it was actually indexing in memory. And then, it would flush that index with the SSTable to the disk. So, it was saving a lot on CPU cycles and disk usage; it didn't do a lot of write amplification. It was a very promising first shot at this. Unfortunately, it had another problem, an issue that got overlooked: how big is the index file?

Patrick McFadin (11:19): And that turned into a long-term problem, where SASI just blew up your disk space. And especially if you did token searches off text, because you'd have to tokenize everything. Even if you took away case sensitivity, which really blew it up, it became a nightmare for some folks when it came to disk management. And then, the syntax was very specific for the use case it was built for, and wasn't generalizable enough for most users.

Patrick McFadin (11:49): And that turned out to be a problem too, because there's like, "What am I using this for?" Because there was different types of searches, and different types of indexes for those searches. It was a little user hostile. I'm going to say that. Because that team moved on to another thing, they were unable to continue that, and it didn't really go much further than some of the initial implementations, which is unfortunate, but it happens.

Patrick McFadin (12:15): This is open source. What was clearly seen at this point is, we need yet another one. Well, and meanwhile, we had DSE Search. DSE Search, in a parallel universe, was part of the proprietary DataStax Enterprise, where it was using Solr as an indexing scheme, and putting Lucene files on disk. Now, all of that is super-efficient. Lucene is very efficient at indexing, has all kinds of neat features.

Patrick McFadin (12:48): Solr is really good about those. So, you can do full text searching. And it was really efficient. It was in memory, just like SASI. But it required a full Solr engine attached in there with another JVM. And the integration with CQL was a little clunky at times. And so, really, when you deploy DSE Search with Cassandra, you're really deploying two servers, your Solr server with Lucene and Cassandra.

Patrick McFadin (13:19): And they just happen to have some glue between them, which made it easy to use. They're great use cases, a lot of cool stuff that was done with Solr and DSE Search, but it's a really heavy implementation. And so, that brings us to all of these things happened-

Matt Kennedy (13:37): But before you move on...

Patrick McFadin (13:38): What's that?

Matt Kennedy (13:38): Before you move on, I think it's important to observe that part of DSE Search's great influence on SAI was showing the realm of possibility once you did have effective indexing in a Cassandra database. I think 2i's lack of success led to an impression that perhaps it's not as important to have that indexing. But seeing what enterprises have done with DSE Search over the years.

Matt Kennedy (14:12): And in fact, using it as a fairly effective index replacement, the fact that they weren't going to use 2i for a lot of those use cases, and were able to use DSE Search as an effective database indexing engine, I think was important to inspire the development of SAI to do what it can do.

Patrick McFadin (14:36): Yeah. Understanding the fit-for-purpose use case: what does indexing do? What did we need to look for? When we say text searching, what does that really mean? Is it full document searches?

Matt Kennedy (14:48): Right.

Patrick McFadin (14:49): Maybe not. So, Matt, what do we come up with?

Storage Attached Indexes

Matt Kennedy (14:53): We came up with Storage Attached Indexes.

Patrick McFadin (14:56): Which is another terrible name, by the way, I'm just going to say that.

Matt Kennedy (14:59): Well, not only that, but there was a significant argument as to whether or not they should be storage attached indices. But we've gone...

Patrick McFadin (15:07): Oopsy, I love nerds. This is fantastic. Of course, you have that argument, please.

Matt Kennedy (15:16): No, no, no, I'm fine on team indexes. I'm on team indexes.

Patrick McFadin (15:19): Okay. So, let's talk about storage attached indexes. Because this is not a revolutionary feature, it's an evolutionary feature. And it's learning a lot from some of the past mistakes. And I don't want to call them mistakes, I want to say it's just engineering learning. We learn things as we go along, and we make them better. So, first and foremost, let me talk about the syntax. Create a custom index, it's just an indexing scheme.

Patrick McFadin (15:46): So, why don't we make it really easy. So, you create an index on a column name, with the USING clause, storage attached indexes, and then you have a few options, just a couple. And you're going to make it case sensitive or normalized. And in this case, we're thinking of simple options. If you used SASI, there were a lot of options there.

Patrick McFadin (16:11): And some of them are really confusing, and you had to learn a lot about those. This is very simple. Think about what you're going to index, do you need a case sensitive, do you need it normalized? Moving on. At this point, you're just doing selects.
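Putting that syntax together, an SAI index with its two options might look like this (the table and index names here are hypothetical):

```cql
-- Storage Attached Index on a non-primary-key column, with the two
-- simple text options Patrick mentions
CREATE CUSTOM INDEX users_country_sai ON users (country)
USING 'StorageAttachedIndex'
WITH OPTIONS = {'case_sensitive': 'false', 'normalize': 'true'};
```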

Matt Kennedy (16:23): Yeah. Just to jump in really quickly on what normalize means, because I do see a lot of questions on that. So, when we're working with things like international characters, and let's say we might be indexing a Spanish name that has an Ñ character in it. That's its own Unicode character, but it might not be something that people are coming into search on, and they are inclined to probably want to match an N to an Ñ.

Matt Kennedy (16:47): Ñ is just N with a tilde over it, and it dictates the sound that N makes. But you may not want to have those as requirements for matching. And so, if you normalize, you'll normalize those accented characters into their base character.

Correction: Normalization actually refers to Unicode normalization, which will treat characters that are nearly identical as the same character.

Patrick McFadin (17:07): Thank you. Actually, that's good information. Because case sensitivity, that's pretty self-explanatory. But normalizing is like, "Why be normal?"

Matt Kennedy (17:16): Indeed.

Patrick McFadin (17:20): So, this is really a developer enablement tool. And also, for those of you who operate Cassandra clusters, this is going to make it easier for you to say yes. But it's really for developers. So, we can do things that we haven't done very efficiently before. So, for instance, when we create tables that have multiple columns in there, and we create an index on top of it.

Patrick McFadin (17:46): Now, we can do a WHERE clause where one column and another column have equalities, or even inequalities, and it's going to be efficient and fast. That is so important. The code around that restrictive read path has been cleaned up quite a bit. The technical details below the line are: this is still using an indexing scheme, still storage attached.

Patrick McFadin (18:16): But it's much more efficient. And it's thoughtful about how people actually use it. So, it's doing the work of what 2i was originally meant to do, is make the queries easier. And then, using some of the work that SASI was doing, where it was making indexes, or the syntax usable. But it's blended those two, and made it so it's more efficient, and better on disk.
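As a sketch of the multi-column queries Patrick describes, assume a hypothetical `readings` table with a text `sensor_type` column and a numeric `temperature` column, each given an SAI index:

```cql
CREATE CUSTOM INDEX readings_type_sai ON readings (sensor_type)
USING 'StorageAttachedIndex';

CREATE CUSTOM INDEX readings_temp_sai ON readings (temperature)
USING 'StorageAttachedIndex';

-- Equality and inequality predicates on non-primary-key columns
-- can now be combined
SELECT * FROM readings
WHERE sensor_type = 'outdoor' AND temperature > 30;
```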

Patrick McFadin (18:43): So, we want it to be so that whenever developers are writing their queries, or creating a data model, that in a case where they need to index something like a column, that column is not a primary key, that it's easy, and there's no pain for that. So, waiting for search and things like that to be a part of it is not the case. This will be a part of Apache Cassandra.

Patrick McFadin (19:14): So, from an architect standpoint, and this is again enablement, you're not having to bolt on extra parts to the system. You now get the ability to create a single system without having to run a text search engine on the side, like Elasticsearch or something like that, for just simple text, like "here's a word." And the simplified architecture is really important, especially in cloud deployments.

Patrick McFadin (19:40): Yes, we love deploying lots of things in the cloud. But if you think, "Well, this one system should do the basics pretty simply and easily," that's one deployment you can work with. And so, now you can use open-source Cassandra to do this. You can do it on Astra. You can do it on DSE. And it makes it simpler when you deploy those things together.

Matt Kennedy (20:03): But the other massive TCO benefit is potential ridiculous reduction in the amount of storage that you need on disk. Because if you are now in a position with your data modeling challenge, where you can potentially use an index, instead of a denormalized table, or a materialized view, now instead of needing a whole extra copy of the data to support a query pattern, you can potentially get away with an index that is some percentage of the size of the original data. So, from a TCO standpoint, if you're talking about actually reducing the amount of data stored on disk, it's huge.
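The TCO point Matt makes can be sketched like this; the `users_by_country` query table and the base `users` table are hypothetical:

```cql
-- Before: a denormalized "query table" holding a whole second copy of
-- the data just to support one query pattern
-- CREATE TABLE users_by_country (
--     country text,
--     id      uuid,
--     name    text,
--     PRIMARY KEY (country, id)
-- );

-- After: one base table plus an SAI index, which is only a fraction of
-- the data size instead of a full copy
CREATE CUSTOM INDEX users_country_sai ON users (country)
USING 'StorageAttachedIndex';

SELECT * FROM users WHERE country = 'France';
```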

Patrick McFadin (20:47): Yeah. So, I see a lot of questions like, how is it working? And I think we could dig into this a little bit. We're not here to say it's magic. There's no magic. Now I realize I need a little diagram slide that shows how this works; we'll have that. But the way that SAI works is pretty simple: once data is written into Cassandra, like any other data that's written into Cassandra, if it's an indexed field, just like what SASI did, it'll index that in memory.

Patrick McFadin (21:25): And then, whenever the data is flushed from memory to disk, like it normally does with Cassandra, it will flush the index along with the data table. So, that was a lesson learned from SASI that worked really well. What that means is you get no write amplification on that. And it's actually a lot faster than using 2i on writes. So, it doesn't get in your way like 2i did. I think there's still-

Matt Kennedy (21:55): Hang on, I have a stat.

Patrick McFadin (21:58): ... about 20%. You have stats, you're just ruffling through your... I have paperwork here that shows stats. It's not as fast as a non-index field. Like I said, there's no magic, but it's a lot faster than 2i.

Matt Kennedy (22:14): It is a 43% throughput improvement over 2i and a 230% latency improvement over 2i.

Patrick McFadin (22:21): There you go. So, yeah, which isn't hard, because 2i had a lot of latency issues. That's probably one of the main reasons to not use 2i. And then, so what this does is for, and I'm going to go to the next slide here, this is for you, my DBA operator SRE folk is, this gives you a lot less to worry about. Like I said, it gives you the ability to say yes.

Patrick McFadin (22:51): Because first of all, doing backups is a lot easier. Because now, the data file and the index are together. They don't have to be reindexed again. So, these are both immutable files. The way the architecture works, the index flushes to disk as part of normal flushing, which keeps you from having to deal with the crazy disk spikes that happen with 2i.

Patrick McFadin (23:17): And I have a slide on this next, but it also greatly reduces the amount of disk space that it uses, even less than 2i, because of some very efficient trie data structures. If you want to know more about it, I have a link at the end here that will give you really, really deep dark details. It's just a crazy amount of detail. But essentially, there's a lot of thought put into how data is indexed on disk, because that's how fast it is.

Patrick McFadin (23:47): Now, what you will always still have is whenever you're looking for data in your cluster, it's still going to have to look at every node. No magic there. But just like how 2i had some efficiency algorithms around not actually touching every single node in your cluster, same applies. Matt, do you want to throw some numbers out on that one? Do you got anything in your papers you could ruffle around?

Matt Kennedy (24:23): So, as far as, are you asking about stats on the read path?

Patrick McFadin (24:30): Yeah, because I mean what I don't want to.

Matt Kennedy (24:31): Yeah. So, in that respect, SAI is pretty similar to 2i. They're doing similar amounts of work to find the data you're looking for. So, if you have a super, super latency sensitive query, there may be a reason to still go, and de-normalize that, and do a query table as we would call them. But you'll have a lot more use cases where it's SAI is completely appropriate, and returns in tens of milliseconds with what you're looking for, as opposed to being say, single digit millisecond read, like you might have with a fully denormalized table.

Patrick McFadin (25:16): And just like any other Cassandra data model, or even a relational data model, there are tricks to make it faster. For instance, adding a partition key, fast. That will make it really fast. Because at that point, you're now looking at index data in a file that now has a partition key. So, that combination is going to be really powerful.

Patrick McFadin (25:37): The worst-case scenario is, you just throw it a column name without a partition key. And that's the worst it can get. You can optimize that a lot of ways, and it'll be really fast. But the important thing is that for operators, and even for developers, you're going to use something that's not going to give you inconsistent, crazy performance problems.
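The "add a partition key" trick looks like this in practice; assume a hypothetical `readings` table partitioned by `station_id` with an SAI index on `temperature`:

```cql
-- The partition key restriction routes the query to one replica set
-- instead of fanning out across the whole cluster; the SAI index then
-- filters within that partition
SELECT * FROM readings
WHERE station_id = 'station-42'   -- partition key
  AND temperature > 30;           -- SAI-indexed column
```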

Patrick McFadin (26:01): All of a sudden, you start doing more write amplification, and the nodes start to get really latent. It's fixing that problem in a really elegant way. I think I pushed out enough. Here's a magic slide. I like this one, because this tells the tale. So, from a data density standpoint, and this is what I was trying to point out earlier, that red line on the top, that's SASI.

Patrick McFadin (26:29): And yeah, SASI had some really cool features, and some really neat stuff on disk amplification. But when it came to disk usage, it was crazy town because it just sucked up a lot of disk. It wasn't really optimized for disk. That was not the original use case. I think it was one of the biggest issues that people have with SASI. It just consumed disk at a really rapid pace.

Matt Kennedy (26:53): And I will point out, we did have a question that asked what kind of storage cost comes with SAI. I think this is the chart you want to look at that speaks to that. This is a relatively modest size requirement for additional indexing. So, you're looking at tens of percentage points over your data footprint.

Patrick McFadin (27:18): Right. So, yeah, looking at the X and the Y, you're looking at how much storage, but then the bottom numbers are how many indexes you have. And it's pretty reasonable to think you wouldn't have 10 indexes on a table. But if you have 10 in your cluster, yeah, maybe. You'll notice secondary indexes, the original 2i, were pretty linear.

Patrick McFadin (27:41): But if you look at SAI, it's really miserly with your disk in comparison. And the reason is the optimization of storage. The data structure that is being used is called a Trie index. Not tree as in a tree with leaves, but T-R-I-E, or tray, I think some people call them, but whatever. It's a very established part of computer science.

Patrick McFadin (28:06): And they're starting to show up a lot in Cassandra because they're just a really efficient data structure for finding complex data. And I have a link at the end if you want to go read all the really important details, like get into the code, everything. But this is a key component, I think, of what it is. So, at this point, where is it? Right now, it's in Astra, and it's in DSE, DataStax Enterprise.

Patrick McFadin (28:40): And there's a proposal out, it's called CEP-7, it's Cassandra Enhancement Proposal for Storage Attached Index to add that post-4.0. 4.0 is getting ready to ship pretty soon if you pay attention to the traffic there. And there's been a code freeze on 4.0 so it wouldn't be added to 4.0. So, as soon as 4.0 is shipped, this will be one of the first features that's added, like 4.x, 5.x, whatever the project agrees on.

Patrick McFadin (29:15): There're conversations going on right now on the dev channel. Just do a Google search for CEP-7, and you'll find all the documentation for it. It's all the gory details about how it works. And we invite you to comment. This is a work in progress. We will never sit still on indexing, right, Matt?

Matt Kennedy (29:35): Indeed. I guess I say that a lot.

Patrick McFadin (29:38): So, now...

Notebook Walkthrough 

Matt Kennedy (29:41): So, here we are with what in a previous week's webinar would be the demo portion, but we've decided to do something a little bit different this week. So, we've created a downloadable artifact in the form of a Studio notebook that you can use to run your own demonstration. So, if we go to the next slide: within Studio, you'll notice that we have a hamburger menu in the upper left.

Matt Kennedy (30:11): So, that's what those three stacked lines are called, because they look like hamburgers. And you can hit that and import a notebook. So, we have posted to the resources section on ON24 a Studio notebook that you can download and import here, and you will get this SAI tutorial. So, the SAI tutorial starts out with some basic information.

Matt Kennedy (30:35): You can check out our indexing skills page on our new /dev site. You can read up on the SAI docs, but then as you scroll down, you'll see that we have included some sample tables and sample data for you to get your hands dirty with. And you can do all of this on a free Astra account. So, if you haven't had a chance to play with Astra yet, or you haven't had a chance to play with SAI yet, and you're intrigued by this, it's a good opportunity to check out both.

Matt Kennedy (31:05): So, astra.datastax.com is where you can go, create a free tier database, and then launch Studio for that database, and then import this notebook, and you're off to the races. I will say that there is one minor usability note with Studio. There are a couple of other cells in the notebook that have a bunch of select statements in them.

Matt Kennedy (31:27): If you want to run one at a time, just highlight the select statement you want, and then hit that local quorum run button in the upper right. So, very much hoping that many of you will take advantage of trying this out as a way to do a self-guided demo of SAI. Very much looking forward to hearing your feedback, and seeing what you have to say, as far as suggestions for new tutorials go.

Matt Kennedy (32:00): There's a relatively basic set of indexes at the beginning, which tells a little story about me going to the dentist. But at the end, there's actually a real-world use case where we were able to use SAI to solve a problem that helps somebody avoid excessive tombstones, and basically reorient their data model, which in terms of Cassandra data models was well denormalized.

Matt Kennedy (32:32): But it was taking up a ton of disk space to do that. And so, we were able to revert to one table, and support the query patterns with SAI, and that was a major change for this operator. So, with all of that said, let me switch over into question mode.

Q&A

Patrick McFadin (32:57): Because there are questions. There are questions.

Matt Kennedy (33:01): There are questions. There are tons of questions. Let's start with this one.

Patrick McFadin (33:04): I see a lot of good friends out there. Yeah.

Matt Kennedy (33:06): Will there be geospatial support? You can read up a little bit more on that in the CEP. But yes, we do have plans to add full geospatial support. If you do need geospatial support now, you can achieve that with a little help from a geohashing library: just treat your geohash columns as text columns and index those. We covered already a little bit about the performance implications of SAI. But here's one, Patrick: how can someone learn more, or participate in the CEP process?

Patrick McFadin (33:47): Yeah. So, that's a multifaceted answer. First of all, I would suggest joining the ASF Slack. There's the Cassandra dev channel in the ASF Slack, which is where all the developers hang out. First and foremost, though, I think reading the CEP spec is important in the Cassandra wiki. Just, you could Google CEP-7 storage attached index, and it'll take you right there. I think that's the first link on there.

Patrick McFadin (34:16): But read through the spec, and then if you have questions, either ask on Slack, or sign up for the developer mailing list, dev at cassandra.apache.org; you can go to the Apache website and see how to do that. But that's where a lot of the conversations have been happening, like, "Hey, what about this? What about that?" We've also had a couple of Zoom meetings. If you go to the Planet Cassandra YouTube channel, there is one meeting that we had that was recorded there.

Patrick McFadin (34:47): It goes into a lot of questions from the developers. There's a group from Apple, there's a group from DataStax, and there's a smattering of other folks there that were digging into some of the details and asking the hard questions, especially around how the indexing is done. So, there's a lot of ways to get involved, and if you have an opinion, voice it. We love the opinions, especially around indexing schemes.

Matt Kennedy (35:14): Awesome answer, Patrick. Thank you. We have another question: is SAI appropriate for selecting data within a partition by an indexed column? And I think the answer there is yes. In fact, I would also add that one of the cool things you can do with SAI is add an index on one of the columns in your partition key, or composite primary key.

Matt Kennedy (35:38): Which can often help if you need to query by some of those columns in a composite partition key, but not all of them. So, that is actually an exercise in the studio notebook. So, if you have questions about when you would use those different kinds of indexes, do check that out.
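A minimal sketch of the pattern Matt describes, with hypothetical table and column names: the base table requires both partition-key columns in a query, but an SAI index on one of them lets you query by that column alone.

```sql
-- Composite partition key: (sensor_id, day)
CREATE TABLE IF NOT EXISTS readings (
    sensor_id uuid,
    day date,
    reading_time timestamp,
    value double,
    PRIMARY KEY ((sensor_id, day), reading_time)
);

-- Without an index, a SELECT must supply BOTH sensor_id and day.
-- Indexing one partition-key column lifts that restriction:
CREATE CUSTOM INDEX IF NOT EXISTS readings_sensor_idx
    ON readings (sensor_id) USING 'StorageAttachedIndex';

-- Now this query works even though day is not specified:
SELECT reading_time, value FROM readings
WHERE sensor_id = 123e4567-e89b-12d3-a456-426614174000;
```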

Patrick McFadin (35:56): Oh, here's an interesting question. Some guy named Mark, he's like, how do I become a DataStax employee? Great question. But this is totally out of scope for this right now. I'm sorry. That's a friend of mine in there. Hi, Mark. So, here's another one that I think is actually pretty important: how does this avoid a full cluster scan? And I think this is really one we have to reiterate a lot.

Patrick McFadin (36:23): So, I'll give you my short answer. The algorithm that's built into it avoids as much as it can. However, it still needs to look across all the nodes in your cluster for the data. Yes, there is no magic. But by including the partition key, like we've mentioned, avoiding the cluster scan is possible. Matt, do you want to expand on that?

Matt Kennedy (36:47): Well, so we've learned a lot through DSE Search. We've learned a lot about how to optimize the scatter-gather part of one of these queries. And we have made the queries that occur on the various nodes as efficient as possible. So, this is, I think, just a question of having spent a lot of time optimizing this problem. And that's why we're able to get the performance out of SAI that we do.

Patrick McFadin (37:24): Another question. What version of DSE is this included in?

Matt Kennedy (37:30): So, I believe DSE 6.8.3 was the version that declared SAI to be generally available.

Patrick McFadin (37:46): Okay, good. I got a question about storage attached; CEP-7 did not Google well. So, I'm going to find a better Google term. Probably adding storage attached index is the way to make that work.

Matt Kennedy (38:00): I think adding Cassandra, CEP-7 and Cassandra, will get you to the right page. And I think it is also linked from the resources on slide 24.

Patrick McFadin (38:12): Very good. There you go.

Matt Kennedy (38:14): We have another question. What trade-offs need to be considered when using these? So, I'll take an initial stab at that. And then, you can weigh in, Patrick. I would say that the first issue is really to take a long hard look at the latency requirements for the query. And I would say that, when there is an opportunity to potentially get rid of a query table in favor of an SAI index, the performance is such that you should consider that.

Matt Kennedy (38:48): It will be performant enough for many, many queries that I've seen our users perform in the wild. It is not the fastest possible read, though. The fastest possible read will still come from a partition lookup. So, if you are in a situation where you need the fastest possible read, then that's probably a place you want to de-normalize into a query table.

Matt Kennedy (39:12): But I think, especially during development, treating everything as an SAI index and working off a small number of core tables is a massive productivity boost. You can make the decision a little bit later on about what data needs to be completely denormalized for those fastest-possible single-partition queries. And that's also going to dramatically reduce your data footprint compared to more traditional Cassandra implementations. Anything to add there, Patrick?
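To make that trade-off concrete, here's a minimal sketch with a hypothetical schema: the traditional approach duplicates data into a query table, while the SAI approach keeps one core table and indexes the lookup column.

```sql
-- Traditional denormalization: a second table keyed by email, which
-- must be written and kept consistent alongside the base table.
CREATE TABLE IF NOT EXISTS users_by_email (
    email text PRIMARY KEY,
    user_id uuid,
    name text
);

-- SAI alternative: one core table plus an index. A direct partition
-- lookup on users_by_email is still the fastest possible read, but the
-- indexed query is often fast enough, with a much smaller footprint.
CREATE TABLE IF NOT EXISTS users (
    user_id uuid PRIMARY KEY,
    email text,
    name text
);

CREATE CUSTOM INDEX IF NOT EXISTS users_email_idx
    ON users (email) USING 'StorageAttachedIndex';

SELECT user_id, name FROM users WHERE email = 'mark@example.com';
```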

Patrick McFadin (39:57): No, I think you nailed it. Let's just keep moving along here so we can get a few more in. A good question here: when a record is committed, is the index created, or is index creation asynchronous? Easy answer: it's in memory. So, whenever the commit happens, it means it's in the memtable. There is an index in the memtable as well. That index resides in memory, and then both the index and the data are flushed to disk as a normal part of Cassandra operation. So, it's fairly instantaneous.

Matt Kennedy (40:31): Yeah. I suspect that question comes from those with experience with DSE Search, where there is a soft pause before you can query that again. So, you can insert, and then there is some number of milliseconds that transpire before you can query that. With SAI, when you write data, you can query it right away.

Patrick McFadin (40:55): Another question around DSE Search. Does this mean DSE Search will be deprecated?

Matt Kennedy (41:01): No, it does not. DSE Search is still a search technology with a definitely different set of capabilities than SAI. We do think that some DSE Search usage will shift to SAI for the more standard database indexing cases. But DSE Search users, especially those that are self-managing, like our DSE enterprises as opposed to our Astra enterprises, really benefit from having that integrated package that puts the IR, or information retrieval, search engine in the same packaging as the core database.

Patrick McFadin (41:50): Okay. Another really interesting question here, and I think there's multiple answers is, is there a max limit on the number of indexes on a table? Now, in Astra, we limit that.

Matt Kennedy (42:02): We do. I believe it's eight in Astra at the moment.

Patrick McFadin (42:06): But as it goes with open-source Cassandra, it's up to you. But what's a practical limit, do you think?

Matt Kennedy (42:16): That's a good question. We chose eight for a reason. But that is associated with the Astra guardrails feature. And there's certainly a little bit of conservatism to the guardrails feature, or to the guardrails numbers. But we would certainly like to hear feedback from those that need more than eight indexes, and let us know what that use case is. And that may be something that is possible to change in the future.

Patrick McFadin (42:51): Yeah. And as with anything, and this is something I've done with databases since the beginning of, at least, my time working with databases: the more gadgets you apply to your data model, the more you want to test it against your production workload. "I test everything in production." What do you mean? You want to put a production workload against it, because you now have a variable part of your workload that you need to think about. It could be that it's fine, or maybe it doesn't meet your SLA threshold. This is why those of us in the operations and SRE world will always be gainfully employed: we're the keepers of the SLA. Do we have time for one more question, or how are we doing on time here?

Matt Kennedy (43:38): Yeah. There is one question. Are there any stats versus DSE Search? So, looking at my stats sheet here, so on the write path-

Patrick McFadin (43:50): Magical papers.

Matt Kennedy (43:52): Indeed. On the write path, we see that throughput is 86% better than DSE Search, and latency is 670% better than DSE Search on the write path. And then, latency for reads is similar to DSE Search. And throughput is slightly better with SAI than DSE Search. And I believe it's also worth pointing out that SAI can handle higher per node densities of indexed data than DSE Search can. So, with that said, thank you again.

