Technology | October 23, 2023

LlamaIndex & Astra DB: Building Petabyte-Scale GenAI Apps Just Got Easier

We’re excited to introduce a new integration with LlamaIndex that makes it easier than ever to build generative AI apps with DataStax Astra DB.

LlamaIndex is a very popular, simple, and flexible data framework for connecting custom data sources to large language models (LLMs). With LlamaIndex, you can build powerful, petabyte-scale, data-augmented chatbots and agents, or get answers from a variety of structured or unstructured data sources.

The Astra DB LlamaIndex connector lets your app store vector embeddings in your Astra DB vector database and query them in real time.

Creating a real-time GenAI app with LlamaIndex takes minutes. Below, we’ll walk through sample code that reads a PDF, generates embeddings, stores the content in Astra DB, and uses the ingested document to answer questions following the Retrieval-Augmented Generation (RAG) pattern.
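Before the real code, here’s a toy sketch of the retrieval step at the heart of RAG: rank stored chunks by cosine similarity to the query embedding, then hand the top matches to the LLM as context. The chunks and vectors below are made up for illustration; in the walkthrough that follows, LlamaIndex and Astra DB handle embedding, storage, and retrieval for you.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend each document chunk already has an embedding (toy 3-d vectors).
chunks = {
    "Galileo experimented with motion.": [0.9, 0.1, 0.0],
    "Rockets rely on Newton's third law.": [0.1, 0.9, 0.2],
}

def retrieve(query_embedding, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_embedding),
                    reverse=True)
    return ranked[:k]

# A query embedding close to the Galileo chunk retrieves that chunk; a real
# app would then pass it to the LLM as context alongside the question.
print(retrieve([0.8, 0.2, 0.1]))  # ['Galileo experimented with motion.']
```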

First, import the dependencies:

import os
from pathlib import Path
import cassio
from dotenv import load_dotenv
from llama_index import StorageContext, VectorStoreIndex, download_loader
from llama_index.vector_stores import CassandraVectorStore

Plug in your Astra DB credentials and connect to your database:

# Load credentials from a local .env file
load_dotenv()

ASTRA_DB_ID = os.environ["ASTRA_DB_ID"]
ASTRA_DB_APPLICATION_TOKEN = os.environ["ASTRA_DB_APPLICATION_TOKEN"]
ASTRA_DB_KEYSPACE = os.getenv("ASTRA_DB_KEYSPACE")

# Connect to your Astra DB database
cassio.init(
    database_id=ASTRA_DB_ID,
    token=ASTRA_DB_APPLICATION_TOKEN,
    keyspace=ASTRA_DB_KEYSPACE,
)

Next, set up the Cassandra vector store. The embedding dimension is set to 1536 to match the number of dimensions produced by the embedding model. In this example, you'll load a NASA rockets PDF that we’ve downloaded and stored in the local repo, then build an index over it for querying later:

cassandra_store = CassandraVectorStore(table="nasa", embedding_dimension=1536)

# Load the PDF into LlamaIndex documents
PDFReader = download_loader("PDFReader")
loader = PDFReader()
documents = loader.load_data(file=Path('./documents/nasa-rockets-guide.pdf'))

# Embed the documents and store them in Astra DB
storage_context = StorageContext.from_defaults(vector_store=cassandra_store)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)

And you’re ready to query your Astra DB vector database with LlamaIndex! Here’s an example of querying the NASA rockets guide content indexed in the previous step:

query_engine = index.as_query_engine()
response = query_engine.query('What sort of experiments did Galileo conduct?')
print(response)

The query returns the following response:

> Galileo conducted a wide range of experiments involving motion.

You can see the full code example in this repo.

This is just one of many integrations we’re cooking up to make it easier to build GenAI apps with Astra DB.


Join us on November 16 at 10am PT for a live webinar with LlamaIndex’s head of partnerships, Yi Ding, where we’ll discuss the challenges of bringing an LLM application to production.
