Company · January 8, 2024

The Top 5 DataStax Stories from 2023

DataStax is blessed with many talented writers, so it’s no surprise that we churned out hundreds of articles and blog posts over the past 12 months. Even we have a hard time keeping track sometimes; in case you do, too, we looked back over 2023 and highlighted some of the most popular and helpful pieces of the year. Enjoy!

Can I Ask You a Question? Building a Taylor Swift Chatbot with the Astra DB Vector Database

To celebrate Taylor Swift’s birthday, the cadre of “Swifties” at DataStax concocted a plan to build a chatbot that could answer questions about their favorite artist. Take a deep dive into how we used Next.js, LangChain.js, Cohere, OpenAI, and DataStax Astra DB to build SwiftieGPT.

Cassandra 5.0: What Do the Developers Who Built It Think?

A brilliant, distributed team that hails from the likes of Apple, Netflix, and DataStax works tirelessly toward a single goal: improving Apache Cassandra. Those who earn the trust of the Cassandra open source community and can make changes to the base code are called “committers.” Here’s what a handful of them had to say about the upcoming Cassandra 5.0 GA.

Why Your CEO Needs to Watch a Coding Video

Business leaders probably don’t need to know Python. But they do need to understand how simple it is to unlock the power of generative AI. Watching an instructional video by someone like developer and coding instructor Ania Kubow is a perfect way to absorb a key message: every developer has what they need, right now, to build GenAI apps. Read about it in CIO.com.

5 Hard Problems in Vector Search and How Cassandra Solves Them

Vector search is a database feature that’s critical for GenAI applications, but it comes with multiple architectural challenges, including scale-out, garbage collection, concurrency, effective use of disk, and composability. Here’s how DataStax approached solving these issues in Astra DB.
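At its simplest, vector search means scoring stored embeddings against a query embedding and returning the closest matches. This minimal pure-Python sketch illustrates the idea with toy 3-dimensional vectors and a brute-force scan; it is not Astra DB’s implementation, which tackles the scale-out and disk-efficiency challenges above that brute force cannot.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def vector_search(query, index, top_k=2):
    """Brute-force nearest-neighbor search: score every stored
    vector against the query and return the best matches."""
    scored = [(doc_id, cosine_similarity(query, vec))
              for doc_id, vec in index.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy "embeddings" standing in for real model output.
index = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}
results = vector_search([1.0, 0.0, 0.0], index, top_k=2)
```

A production system replaces the linear scan with an approximate nearest-neighbor index, which is exactly where the concurrency and garbage-collection problems described in the article arise.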

Fine Tuning Isn’t the Hammer for Every Generative AI Nail

There are two ways to get information into an LLM to help prevent “hallucinations” and ensure that the model provides accurate, relevant answers: you can train the model with the data, either when the model is built or by “fine tuning” it after the fact, or you can supply the data at runtime, via a process known as retrieval-augmented generation (RAG). In this article, we discuss how to choose between fine tuning and RAG.
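The runtime half of that choice is simple to picture: RAG retrieves relevant snippets and folds them into the prompt, so the model answers from supplied context rather than from weights alone. A minimal, hypothetical sketch of the prompt-assembly step (the function name and snippet are illustrative, not from any DataStax library):

```python
def build_rag_prompt(question, retrieved_snippets):
    """Assemble a prompt that grounds the model in retrieved text,
    which is the core move of retrieval-augmented generation."""
    context = "\n".join(f"- {snippet}" for snippet in retrieved_snippets)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

# Illustrative snippet, as a stand-in for real vector-search results.
snippets = ["Astra DB added vector search in 2023."]
prompt = build_rag_prompt("When did Astra DB add vector search?", snippets)
```

Fine tuning, by contrast, bakes the data into the model itself, so nothing like this runtime assembly step exists; that difference drives most of the trade-offs the article walks through.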

One-Stop Data API for Production GenAI

Astra DB gives developers a complete data API and out-of-the-box integrations that make it easier to build production RAG apps with high relevancy and low latency.