Technology | October 2, 2024

Better Vector Search with Graph RAG

Retrieval-augmented generation (RAG) is a technique that enhances output from large language models (LLMs) by providing them with real-time context when generating responses. Think of it as giving an AI assistant access to a vast library of up-to-date facts before it answers your questions.

To retrieve the latest facts, RAG uses vector search: a technique that converts text into numerical representations (vectors) and then finds the most similar documents to a user's query.

Graph RAG takes this concept further by organizing this information into an interconnected web, much like how our brains connect related ideas. Imagine you're planning a trip to Paris. Your brain doesn't just think about the Eiffel Tower; it connects that to French cuisine, the Louvre, and perhaps even the history of France. Graph RAG aims to give AI a similar ability to make these rich, multi-faceted connections.

This technique has shown promise to revolutionize how AI applications interact with unstructured data, offering a more nuanced and comprehensive understanding of complex information. By leveraging these interconnected relationships, graph RAG can provide more contextually relevant and insightful responses than traditional RAG methods.

In this post, we’ll show you the benefits of using graph RAG and how easy it is to get started. But first, let's look at the limitations of vector search and the complexity of current knowledge graphs.

The limitations of vector search

Traditional vector search methods, while powerful, often fall short in capturing all the important and implicit relationships within unstructured data.

For example, consider a tech blog as a data source:

  1. Document A - "The XYZ-100 smartphone has a 6.5-inch OLED display and 5G capability."
  2. Document B - "Supplier FastChip is experiencing production delays for their latest 5G chipsets."
  3. Document C - "Customer reviews praise the XYZ-100's battery life and camera quality."

A traditional vector search for "XYZ-100 availability" might return document A due to the direct mention of the product. However, it may miss the crucial context provided by document B about the supplier's production delays, which directly impacts the product's availability. It might also overlook the positive customer reviews in document C, which could explain high demand and potential stock shortages.

This limitation can lead to AI responses missing vital context or connections between closely related pieces of information, resulting in an incomplete understanding of the product's availability situation.
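To make the gap concrete, here is a toy sketch of the scenario above. It is not a real embedding model: a bag-of-words vector and cosine similarity stand in for learned embeddings, but the failure mode is the same one a semantic search exhibits.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: a bag-of-words vector (a stand-in for a learned model)."""
    return Counter(text.lower().replace(".", "").replace("'s", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "A": "The XYZ-100 smartphone has a 6.5-inch OLED display and 5G capability.",
    "B": "Supplier FastChip is experiencing production delays for their latest 5G chipsets.",
    "C": "Customer reviews praise the XYZ-100's battery life and camera quality.",
}

query = embed("XYZ-100 availability")
ranked = sorted(docs, key=lambda d: cosine(query, embed(docs[d])), reverse=True)
print(ranked)  # Documents mentioning "XYZ-100" rank first; B scores 0 and lands last
```

Documents A and C share the token "XYZ-100" with the query and rank highly, while document B, the crucial supply-chain context, scores zero because it shares no vocabulary with the query. A real embedding model softens this, but contextually linked yet semantically distant documents still rank poorly.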

Knowledge graphs: A powerful but complex solution

Knowledge graphs address many of these limitations of vector search. They explicitly model relationships between entities, allowing for a rich, contextual understanding of data. However, implementing a full knowledge graph comes with significant challenges:

  1. Modeling complexity - Creating a comprehensive knowledge graph requires extensive manual effort to define entities, relationships, and ontologies. This process is time-consuming and often requires domain expertise.

  2. Unstructured data challenges - Modeling relationships in unstructured data is particularly challenging. It involves identifying relevant entities, determining meaningful relationships between them, and ensuring consistency and accuracy in the extracted information.

  3. Maintenance overhead - As new information emerges, the knowledge graph needs constant updating to remain relevant. This ongoing maintenance can be resource-intensive.

While knowledge graphs offer powerful capabilities, the pain points associated with manually modeling relationships in unstructured data make them a challenging solution for many organizations. This is where LangChain-based graph RAG emerges as a more accessible alternative, offering many of the benefits of knowledge graphs without the same level of implementation complexity. 

To be clear, graph RAG is a form of knowledge graph, but a more accessible one: you simply augment the vector information you already extract with links between documents. There’s no special database or query language needed.

Here's how it enhances AI applications:

  1. Capturing non-semantic similarity - Graph RAG excels at identifying relationships between pieces of information that might not be semantically similar but are contextually related.
  2. Multimodal capabilities - The system can handle various types of data, including text, images, and metadata, creating a rich, interconnected knowledge base.
  3. Intelligent traversal - By combining vector similarity with graph structures, graph RAG can retrieve both strongly connected documents and nearby, weakly connected information, providing a more comprehensive context.
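The intelligent traversal pattern can be sketched in a few lines. This is an illustration of the retrieval idea, not LangChain's internals: the function names, the word-set "vectors," and the Jaccard similarity are all stand-ins chosen to keep the example self-contained.

```python
def graph_rag_retrieve(query_vec, docs, edges, similarity, k=2, depth=1):
    """docs: {id: vector}; edges: {id: [linked ids]} built by link extractors."""
    # Step 1: a standard vector search seeds the result set with the top-k docs
    seeds = sorted(docs, key=lambda d: similarity(query_vec, docs[d]), reverse=True)[:k]
    # Step 2: breadth-first traversal over the link graph, up to `depth` hops,
    # pulls in linked documents even when they are dissimilar to the query
    result, frontier = list(seeds), list(seeds)
    for _ in range(depth):
        frontier = [n for d in frontier for n in edges.get(d, []) if n not in result]
        result.extend(frontier)
    return result

docs = {"A": {1, 2}, "B": {3, 4}, "C": {1, 3}}   # toy "vectors" as word sets
edges = {"A": ["B"]}                              # A links to B (e.g. an href)
jaccard = lambda q, v: len(q & v) / len(q | v)
print(graph_rag_retrieve({1, 2}, docs, edges, jaccard, k=1))
# → ['A', 'B']: B is retrieved via the link even though it shares no words with the query
```

Note that document B shares nothing with the query vector, yet it is returned because it sits one hop away from the strongest match.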

Graph RAG shows promise in various domains:

  • Customer support - Enabling more accurate and contextually relevant responses to user queries in CRM (customer relationship management) systems.
  • Content recommendation - Improving suggestion algorithms by considering complex relationships between content pieces.
  • Research and analysis - Assisting researchers in discovering non-obvious connections within large datasets.

Getting started with graph RAG

Leveraging the power of graph RAG with LangChain couldn’t be simpler. The only changes required to take advantage of graph RAG are to add relevant metadata to your documents using data extractors as seen below:

# Imports from the langchain-community packages
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain_community.graph_vectorstores.extractors import (
    HtmlLinkExtractor,
    LinkExtractorTransformer,
)
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load and process documents
loader = AsyncHtmlLoader(urls)
documents = loader.load()

# Extract links as graph edges
transformer = LinkExtractorTransformer([
    HtmlLinkExtractor().as_document_extractor(),
    # KeybertLinkExtractor(),
    # ...
])
documents = transformer.transform_documents(documents)

# Extract page content / "clean" documents
bs4_transformer = BeautifulSoupTransformer()
documents = bs4_transformer.transform_documents(documents)

# Split documents into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1024,
    chunk_overlap=64,
)
documents = text_splitter.split_documents(documents)

# Add documents and their metadata to the graph vector store
store.add_documents(documents)

A demo example can be found here.

This alone is enough to get started with graph RAG and will enable LangChain to construct and use a knowledge graph when performing RAG. Now, you can use the CassandraGraphVectorStore LangChain component instead of a traditional VectorStore to perform graph RAG as seen here:

-from langchain_astradb import AstraDBVectorStore
+from langchain_community.graph_vectorstores import CassandraGraphVectorStore

-store = AstraDBVectorStore(embeddings)
+store = CassandraGraphVectorStore(embeddings)

Using the CassandraGraphVectorStore component enables you to leverage the power of graph databases without the need to manually manage the underlying data structures; it’s a minimal code change for a significant boost in the accuracy of retrieved information.

Once you’ve added documents to your store, you can perform a graph traversal:

# Execute a graph search on your stored documents
result_documents = list(
    store.traversal_search("Sci-fi movies with a strong female lead")
)

The traversal_search() function executes a regular similarity search and then traverses the graph to find linked documents using the edges created by the extractors, providing richer and more varied results.

Do I need a dedicated graph database?

When considering the implementation of graph RAG, a common question that arises is whether a dedicated graph database is necessary. The short answer is: probably not. While graph databases excel at managing complex, interconnected data structures, they aren't essential for leveraging the benefits of graph RAG.

Most organizations can effectively implement graph RAG using their existing data infrastructure. Modern relational databases and document stores often provide sufficient capabilities to represent and query the types of relationships needed for graph RAG. These systems can be adapted to support graph-like queries and traversals without the need for a complete overhaul of your data architecture or special search syntax.
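As a rough illustration of that point, a plain relational table can already hold graph edges and support the traversal step. The sketch below uses Python's built-in sqlite3; the schema, table names, and documents are made up for the example, and a real system would seed the query with vector-search results rather than a hardcoded id.

```python
import sqlite3

# Two ordinary tables: one for documents, one for the edges between them
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (id TEXT PRIMARY KEY, content TEXT);
    CREATE TABLE links (source TEXT, target TEXT);
""")
conn.executemany("INSERT INTO documents VALUES (?, ?)", [
    ("A", "XYZ-100 product page"),
    ("B", "FastChip supplier delays"),
    ("C", "XYZ-100 customer reviews"),
])
conn.executemany("INSERT INTO links VALUES (?, ?)", [("A", "B"), ("A", "C")])

# After a vector search returns seed ids, one SQL join follows the edges
seeds = ["A"]
rows = conn.execute(
    f"""SELECT d.id, d.content FROM links l
        JOIN documents d ON d.id = l.target
        WHERE l.source IN ({",".join("?" * len(seeds))})
        ORDER BY d.id""",
    seeds,
).fetchall()
print(rows)  # → [('B', 'FastChip supplier delays'), ('C', 'XYZ-100 customer reviews')]
```

Deeper traversals can be expressed with a recursive common table expression, or simply by looping the join in application code, so the relational engine you already run is usually enough.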

Moreover, the strength of graph RAG lies not in the underlying database technology, but in the intelligent combination of vector search and relationship-aware retrieval algorithms. These algorithms can be implemented as an additional layer on top of your current data storage solution, enabling you to enhance your AI applications with graph-like capabilities without the complexity and cost associated with adopting a new database system.

Ultimately, the decision should be based on your specific use case, data complexity, and existing infrastructure. For most organizations considering graph RAG, focusing on enhancing their current systems and algorithms will yield significant improvements without the need for a specialized graph database and its associated setup and maintenance costs.

The road ahead

As graph RAG continues to evolve, we can expect further improvements in graph traversal algorithms and integration with popular AI development tools. DataStax is at the forefront of this innovation, continuously enhancing Astra DB, Langflow, and the rest of our AI PaaS to support advanced graph RAG capabilities.

If you’d like to see a working example, here’s a GitHub repo for you to discover. You can also browse through the LangChain documentation for more info.
