Company · August 24, 2023

Keeping the Momentum Rolling with Google Cloud


Businesses everywhere are looking to quickly infuse the power of generative AI into scalable, secure, production-ready applications. These applications require massive amounts of data — data that speaks the language of large language models (LLMs).

Enter vector search — arguably one of the most important new capabilities to come to the world of databases. A database that supports vector search can store data as "vector embeddings" — numeric representations of content that make similarity search possible — a key to delivering generative AI applications.
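To make the idea concrete, here is a minimal sketch of what vector search does under the hood: documents are stored as embedding vectors, and a query vector is ranked against them by similarity. The tiny three-dimensional vectors and document texts below are made up for illustration; real embeddings produced by an LLM embedding model have hundreds or thousands of dimensions, and a vector database like Astra DB performs this ranking at scale with approximate-nearest-neighbor indexes rather than a brute-force scan.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical "embeddings" keyed by document text (illustrative values only).
store = {
    "cats are small pets": [0.9, 0.1, 0.0],
    "dogs love long walks": [0.8, 0.3, 0.1],
    "quarterly revenue grew": [0.0, 0.2, 0.9],
}

def vector_search(query_vec, k=2):
    # Rank all stored documents by similarity to the query vector,
    # returning the k closest matches.
    ranked = sorted(
        store,
        key=lambda doc: cosine_similarity(store[doc], query_vec),
        reverse=True,
    )
    return ranked[:k]

# A query vector near the "pets" region of the space retrieves pet documents.
print(vector_search([0.85, 0.2, 0.05]))
```

In a production application, the query vector would come from embedding the user's question with the same model used to embed the documents, so that semantically related content lands close together in the vector space.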

To address the strong demand for a high-scale, production-ready vector database, we are extremely proud to offer vector search in Astra DB, our massively scalable database-as-a-service built on the open source Apache Cassandra® database. 

The industry enthusiasm for and adoption of Astra DB as a vector database has been remarkable, and Google Cloud has been an important AI partner to DataStax from the start. We initially unveiled our vector search capability earlier this summer with Google Cloud, and with the close collaboration and expert support of the Google Cloud team, we demonstrated a NoSQL assistant atop Vertex AI.

We’re very excited to see our collaboration with Google Cloud grow. We recently highlighted our AI partnership at the I Love AI digital event where Noel Kenehan, Google Cloud’s AI Center of Excellence Partner Engineering Lead, joined us for a discussion about enabling enterprise search with vector databases and Google Cloud.

DataStax’s Astra DB simplifies cloud-native GenAI application development on Google Cloud, with platform integrations with Dataflow, BigQuery, Marketplace, Cloud Functions, Compute Engine, and more. To get started with Astra DB on Google Cloud, sign up here.

