How can you build LLM-based generative AI apps with your own production customer data?
Join Google Cloud, LangChain, and DataStax for this deep dive into generative AI agent architecture. We'll build an enterprise customer service chatbot that uses Vertex AI embeddings and Retrieval-Augmented Generation (RAG) with LangChain and Astra DB vector search (a minimal code sketch of this stack appears after the topic list below).
We’ll demo and discuss:
- Creating memory for LLMs with Vertex AI embeddings and Astra DB vector search
- Best practices for pre-processing and chunking documentation for vector embeddings
- Real-world prompt engineering and testing for production AI apps
- Thinking through security and data protection
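To make the architecture concrete, here is a minimal sketch of the pattern the session covers: chunk documentation, embed it with Vertex AI, store the vectors in Astra DB, and answer questions with retrieved context. It assumes the langchain-google-vertexai, langchain-astradb, and langchain-text-splitters packages; the model names, collection name, file path, and chunk sizes are illustrative placeholders, not recommendations from the session.

```python
from langchain_astradb import AstraDBVectorStore
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_google_vertexai import ChatVertexAI, VertexAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Pre-process and chunk the support documentation before embedding
#    (chunk size and overlap are starting points to tune for your docs).
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.create_documents([open("support_docs.txt").read()])  # hypothetical source file

# 2. Embed the chunks with Vertex AI and store them in Astra DB vector search.
embeddings = VertexAIEmbeddings(model_name="textembedding-gecko")
vector_store = AstraDBVectorStore(
    embedding=embeddings,
    collection_name="customer_support",    # hypothetical collection name
    api_endpoint="ASTRA_DB_API_ENDPOINT",  # your Astra DB endpoint
    token="ASTRA_DB_APPLICATION_TOKEN",    # your Astra DB application token
)
vector_store.add_documents(chunks)

# 3. At question time, retrieve the most relevant chunks and ground the
#    LLM's answer in that context (the RAG step).
retriever = vector_store.as_retriever(search_kwargs={"k": 4})

def format_docs(docs):
    # Join retrieved chunks into a single context string for the prompt.
    return "\n\n".join(d.page_content for d in docs)

prompt = ChatPromptTemplate.from_template(
    "Answer the customer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatVertexAI(model_name="gemini-pro")

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("How do I reset my account password?"))
```

In the live demo we will walk through each of these steps in more depth, including how chunking choices and prompt wording affect answer quality in production.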