Retrieval-Augmented Generation (RAG) helps large language models stay up to date and reduce hallucinations, but what’s really happening under the hood?
Join us for a hands-on livestream where we’ll break down the key components of a RAG system—by building one from scratch! (Okay, maybe not the LLM itself—we do have a time limit!) Along the way, you’ll gain a deep understanding of how vectorization, similarity search, embedding models, and vector databases work together to power better AI responses.
What you’ll learn:
- Vectorization & similarity search – How data is transformed for AI-powered retrieval
- Embedding models & vector databases – Their roles in improving chatbot accuracy
- Bringing it all together – Watch as we connect the pieces and build an augmented chatbot
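To give a flavor of the first two topics, here is a minimal sketch of vectorization and similarity search. It uses a toy bag-of-words "embedding" rather than a real embedding model, and plain Python instead of a vector database; the livestream covers the production-grade versions of both. All names (`embed`, `cosine`, the sample documents) are illustrative, not part of any library.

```python
import math
from collections import Counter

# Toy "embedding": a bag-of-words count vector over a fixed vocabulary.
# A real RAG system would call a learned embedding model here instead.
def embed(text, vocab):
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocab]

# Cosine similarity: how closely two vectors point in the same direction.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = [
    "vector databases store embeddings for fast similarity search",
    "large language models can hallucinate without grounding",
]
# Build the vocabulary from the document collection.
vocab = sorted({word for doc in docs for word in doc.lower().split()})

query = "how do vector databases enable similarity search"
query_vec = embed(query, vocab)

# Retrieve the most similar document; in RAG, this text would then be
# added to the LLM prompt to ground its answer.
best = max(docs, key=lambda doc: cosine(embed(doc, vocab), query_vec))
print(best)  # → the document about vector databases
```

The same idea scales up in real systems: an embedding model replaces the word counts, and a vector database replaces the `max(...)` scan with an index that stays fast across millions of documents.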
Can’t join us live? Register anyway and we’ll share the replay afterward.

Phil Nash
Developer Relations Engineer
DataStax