October 14, 2019

5 More Reasons to Choose Apache Pulsar over Kafka

A while back, I wrote a post about the 7 Reasons We Choose Apache Pulsar over Apache Kafka. Since then, I have been working on a detailed report comparing Kafka and Pulsar, talking to users of the open-source Pulsar project, and talking to users of our managed Pulsar service, Kesque. What I’ve realized is that I missed some reasons in that first post. So, I thought I’d do a follow-up post that adds to the list.

Before diving into the new reasons, let’s quickly recap the seven reasons mentioned in the previous post:

  • Streaming and queuing together. Kafka and RabbitMQ in a single platform. It’s a two-for-one deal.
  • Partitions are optional. With Pulsar you don’t need to mess around with partitions if you don’t want to. (And I don’t want to.)
  • Distributed log. The Pulsar log is horizontally scalable because it is distributed. Do I hear music in my ears?
  • Stateless brokers. A cloud-native dream scenario. Where did I put my auto-scaler?
  • Native geo-replication. Anybody, and I mean anybody, can get geo-replication working.
  • It’s faster. Tests prove this.
  • All ASF open source. Nobody is going to pull the licensing rug out from under you.

Those are the first seven reasons. Of course, if you want more details on any of them, you can check out the full post. Those seem like plenty, but I have found some more. So let’s get into them.

1. Getting along with multi-tenancy

I really should have talked about multi-tenancy in the first post because it’s a big deal. Even if you aren’t planning on building a managed Pulsar service (and why would you since we’ve already built one for you?), unless you are a hermit, there are going to be multiple teams working on multiple projects using your messaging infrastructure. Having to spin up a cluster for each team or project is a pain. And it’s also expensive.

With Pulsar, you can have multiple tenants, and those tenants can have multiple namespaces to keep everything organized. Add to that access controls, quotas, and rate-limiting for each namespace, and you can imagine a future where we can all get along using just this one cluster. Not only can we imagine this future, but Kafka can imagine it, too. You can read about it in Kafka Improvement Proposal KIP-37. It’s been under discussion for a while now.
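If you want a feel for how those pieces fit together, here is a minimal sketch using the Pulsar Java admin client. The tenant, role, cluster, and namespace names are made up, the admin endpoint is assumed to be on localhost, and recent releases expose TenantInfo.builder() (older ones use a constructor instead):

```java
import java.util.Set;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.TenantInfo;

public class MultiTenancySketch {
    public static void main(String[] args) throws Exception {
        // Assumes a broker admin endpoint on localhost; adjust as needed.
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        // One tenant per team, with its own admin roles and allowed clusters.
        admin.tenants().createTenant("team-payments",          // hypothetical tenant
                TenantInfo.builder()
                        .adminRoles(Set.of("payments-admin"))   // hypothetical role
                        .allowedClusters(Set.of("standalone"))  // hypothetical cluster name
                        .build());

        // One namespace per project; its topics and policies live under it.
        admin.namespaces().createNamespace("team-payments/orders");

        admin.close();
    }
}
```

Namespace-level policies (quotas, dispatch rate limits, and so on) hang off the same admin.namespaces() interface, so each team's limits live right next to its topics.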

2. Have we got quorum yet? Replication

We’re getting into the weeds here, but bear with me. You want to make sure your messages never get lost, so you configure your messaging system to make two or three replicas of each message in case something goes wrong. 

Kafka does this using a follow-the-leader model. The leader stores the message and the followers make a copy of it. Once enough followers acknowledge they’ve got it, Kafka is happy. Pulsar uses a quorum model. It sends the message out to a bunch of nodes, and once enough of them acknowledge they’ve got it, Pulsar is happy.

Quorum replication is more democratic, with none of this leader-follower hierarchy: the majority always wins, and all votes are equal. Not that democracy counts for much in technology. What does matter is that quorum replication tends to give more consistent behavior over time, which probably explains why Pulsar delivers more consistent latency.
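If you want to see those quorum settings in the flesh, here is a minimal sketch using the Java admin client, with a made-up namespace and an assumed localhost admin endpoint. It tells BookKeeper to spread each message across an ensemble of three storage nodes (bookies), write three copies, and treat the write as durable once two of them acknowledge:

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistencePolicies;

public class QuorumSketch {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                .build();

        // ensemble = 3 bookies, write quorum = 3 copies, ack quorum = 2 acks,
        // and no throttling of mark-delete operations (the final 0.0).
        admin.namespaces().setPersistence("team-payments/orders",
                new PersistencePolicies(3, 3, 2, 0.0));

        admin.close();
    }
}
```

Because any two of the three bookies are enough to acknowledge a write, one slow node doesn't hold everything up, which is where the steadier tail latency comes from.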

If you want to get into the gory details of Kafka and Pulsar latency, check out this blog post I wrote. (It’s long. Don’t say I didn’t warn you.) Oh, and Kafka has been thinking about quorum replication for improving latency consistency, too. Check out KIP-250 for the discussion.

3. Tiered storage, event sourcing dreaming

One of the great things about a streaming system like Kafka is its ability to replay messages that have already been consumed. If you liked those messages the first time around, you can replay them to correct something or to build a whole new application around them.
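Pulsar exposes the same idea through its Reader interface. Here is a minimal sketch with the Java client, using a made-up topic and an assumed localhost broker, that rewinds to the earliest retained message and reads forward:

```java
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Reader;

public class ReplaySketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker URL
                .build();

        // A Reader starts wherever you point it -- here, the oldest message
        // still retained on the topic -- and replays everything forward.
        Reader<byte[]> reader = client.newReader()
                .topic("persistent://public/default/orders") // hypothetical topic
                .startMessageId(MessageId.earliest)
                .create();

        while (reader.hasMessageAvailable()) {
            Message<byte[]> msg = reader.readNext();
            System.out.println(new String(msg.getData()));
        }

        client.close();
    }
}
```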

What if you like those messages so much that you want to keep them around forever? Like, say, if you are doing event sourcing. It sounds like a great idea, but forever is a mighty long time, and storing messages forever can get expensive, especially if you are storing them on the high-performance SSDs that keep your messaging system humming.

Wouldn’t it make sense if you could move those old messages — the ones you need to keep around because you might need them someday — to a cheaper storage solution? And if you could use dirt cheap cloud storage like Amazon S3 buckets, wouldn’t that be great?

You can probably guess where I am going here. With Pulsar tiered storage, you can automatically push those dusty old messages into practically infinite, cheap cloud storage and retrieve them just like you do those newer, fresh-as-a-daisy messages. 
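Here is roughly what switching that on looks like with the Java admin client. It assumes the brokers already have an offload driver configured (an S3 bucket, for example), and the namespace name and threshold are made up:

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class TieredStorageSketch {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                .build();

        // Keep roughly the most recent 10 GB per topic on the bookies;
        // anything older gets offloaded to the configured long-term store.
        admin.namespaces().setOffloadThreshold("team-payments/orders",
                10L * 1024 * 1024 * 1024);

        admin.close();
    }
}
```

Consumers don't change at all; reads that reach back past the threshold are served from the offloaded ledgers transparently.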

I bet Kafka would like to have that feature. You guessed it, they would. It’s described in KIP-405.

4. End-to-end encryption and gobbledygook

Obviously, security is important and you want to keep your messages safe from prying eyes. Of course, you will use TLS between your client and the messaging system (encrypted in transit). 

When you do that, the messaging system has to decrypt the connection so it can figure out what the client is trying to say. It is then going to save the unencrypted message on disk. Of course, you will insist that the disk is encrypted so that if someone stole the disk your messages would be safe (encrypted at rest). But in both these cases, the messaging system has the keys to your data. If it didn’t, it would be dealing with unintelligible streams of gobbledygook.

In many cases, this level of encryption is good enough. But if you want to make absolutely sure nobody can peek at your messages, you need end-to-end encryption. The producer encrypts the message before it sends it using keys that are shared with the consumer that will receive the message. When the message gets saved on the disk of the messaging system, it’s encrypted and the messaging system doesn’t have the key. The messaging system can do its job. But your message is super-secure gobbledygook to the messaging system.

Pulsar can do end-to-end encryption in its Java client. Kafka has been talking about doing it in KIP-317.
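Here is a minimal sketch of end-to-end encryption with the Pulsar Java client. The key reader below just loads PEM files from disk, and the topic, key name, and file paths are all placeholders; in practice you would generate an RSA key pair, give producers the public key, and give consumers the private key:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.CryptoKeyReader;
import org.apache.pulsar.client.api.EncryptionKeyInfo;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class EndToEndEncryptionSketch {

    // Loads the key material that the clients (not the broker) use to
    // encrypt and decrypt message payloads.
    static class FileKeyReader implements CryptoKeyReader {
        private final String publicKeyPath, privateKeyPath;

        FileKeyReader(String publicKeyPath, String privateKeyPath) {
            this.publicKeyPath = publicKeyPath;
            this.privateKeyPath = privateKeyPath;
        }

        @Override
        public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> metadata) {
            return read(publicKeyPath);
        }

        @Override
        public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> metadata) {
            return read(privateKeyPath);
        }

        private EncryptionKeyInfo read(String path) {
            try {
                EncryptionKeyInfo info = new EncryptionKeyInfo();
                info.setKey(Files.readAllBytes(Paths.get(path)));
                return info;
            } catch (Exception e) {
                throw new RuntimeException("failed to read key " + path, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker URL
                .build();

        // Subscribe first so the message published below is retained for us.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://team-payments/orders/receipts") // hypothetical topic
                .subscriptionName("audit")
                .cryptoKeyReader(new FileKeyReader("public.pem", "private.pem"))
                .subscribe();

        // The producer encrypts the payload before sending; the broker only
        // ever stores ciphertext under the key name "receipts-key".
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://team-payments/orders/receipts")
                .addEncryptionKey("receipts-key") // hypothetical key name
                .cryptoKeyReader(new FileKeyReader("public.pem", "private.pem"))
                .create();
        producer.send("only the consumer can read this".getBytes());

        Message<byte[]> msg = consumer.receive();
        System.out.println(new String(msg.getData()));
        consumer.acknowledge(msg);

        client.close();
    }
}
```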

5. Broker balancing act

In my last post, I talked about Pulsar brokers being stateless, which is great. But there is actually more to the story. 

Stateless components are desirable because when one gets overloaded, you can just add another one to handle the load. When new clients connect, they can be directed to the new instance. But that doesn’t help the instance that was getting overloaded in the first place. You need to shift some of the work from the overloaded instance to the new, fresh one. 

In other words, you need to rebalance the load.

Pulsar does broker load balancing automatically for you. It monitors the CPU, memory, and network (not disk; did I mention brokers are stateless?) usage of brokers and will move the load around to maintain balance. This means that you don’t have to add that new broker until you use up the capacity of all the brokers — not because one of them is running hot.

You can do broker load balancing with Kafka, but you are going to have to install another package such as LinkedIn's Cruise Control. Or, if you like (eventually) paying for stuff, you can use Confluent's rebalancer tool.

Community and ecosystem

One of the criticisms of my last post was that I didn’t mention the size and richness of Kafka’s community and ecosystem. That’s a fair point. 

In the community and ecosystem category, Kafka has Pulsar beat hands down. Kafka has a five-year head start as an open-source project, so it only stands to reason that it will have a larger community, more related projects, and more answers on Stack Overflow.

All I can say is that the Pulsar community is growing, people are contributing new components and integrations regularly, and the folks on the community Slack channel are friendly and supportive. 

Actually, there is one more thing I can say: It’s clear that a lot of Pulsar was inspired and informed by Kafka and that Pulsar is standing on the shoulders of a giant. The Kafka project and community deserve a lot of credit and respect. I know that it may sometimes sound like I am disrespecting Kafka, but I’m really just excited about Pulsar.

Legit Kafka alternative

Between this post and the last one, I am up to a dozen reasons to choose Pulsar over Kafka. And the cool thing is that the deeper I dive into Pulsar, the more reasons I find. So, there may need to be a third blog post on this topic in the future. Stay tuned.

I think it’s pretty clear that Pulsar is a legit alternative to Kafka. Pulsar supports most of the same functionality as Kafka but has several (a dozen by my count) advantages and is gaining momentum as more people learn about it. 

If you are evaluating streaming and/or queuing systems, you owe it to yourself to check out Apache Pulsar. It’s that simple.

Want to try out Apache Pulsar? Sign up now for Astra Streaming, our fully managed Apache Pulsar service. We’ll give you access to its full capabilities entirely free through beta. See for yourself how easy it is to build modern data applications and let us know what you’d like to see to make your experience even better. 

(Editor's note: DataStax acquired Kesque in January 2021.)
