Thomas Boltze Transforms Developer Productivity with AI and Astra DB at PagoNXT

Thomas Boltze, Head of Enterprise Architecture at PagoNXT


Thomas Boltze is the Head of Enterprise Architecture at PagoNXT, with extensive experience in both large financial institutions and innovative startups within FinTech, payments, eCommerce, and SaaS. He is a skilled hands-on architect known for spearheading product, process, and technology advancements. Thomas is passionate about nurturing high-performing teams and crafting exceptional products.

Transcript

PagoNXT's mission is quite simple: we're aiming to enable our customers to accept any type of payment wherever they are, be it credit cards or alternative payment methods. That's what we do. We started out with an interesting problem. We have about a thousand developers, thousands of Git repos, and a lot of the code is undocumented.

We thought we could either ask our developers to write documentation, which would immediately go out of date again and so not be very useful, or we could generate the documentation from the code. That's what we're building at the moment: we're taking our entire code base, putting it into a vector database, then running LLMs on top of that to generate READMEs, architecture documents, API documentation, and so forth.
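The indexing step described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not PagoNXT's actual code: the `embed` function is a toy hashing stand-in for a real embedding model (e.g. one from Hugging Face), and the in-memory list stands in for the Astra DB vector store.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic bag-of-words embedding; a real pipeline would
    call an embedding model (e.g. via Hugging Face) instead."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk_source(source: str, max_lines: int = 40) -> list[str]:
    """Split a source file into fixed-size line chunks before embedding."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

# In-memory stand-in for the vector database (Astra DB in production).
vector_store: list[tuple[list[float], str]] = []

def index_file(path: str, source: str) -> None:
    """Embed each chunk of a file and store it with its provenance."""
    for chunk in chunk_source(source):
        vector_store.append((embed(chunk), f"{path}:\n{chunk}"))

# Illustrative file name and contents.
index_file("payments/router.py", "def route(payment):\n    return 'card'\n")
```

A documentation-generation step would then retrieve a repository's chunks from the store and prompt an LLM to draft a README or architecture document from them.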

That's one part; that's the easy thing. We are making that available in Backstage to the developers, but also as pull requests in the Git repository, so they can see it before it gets merged. The second part is more exciting, I believe: we enable our developers to talk to their code base.

So we built a plugin for Backstage, and in there they can ask: "Do we have a component that manages customer addresses?" or "Do we have a component that uses Vault?" or "How should I refactor this microservice?" We're using similarity search for that, and then feeding the results into LLMs to generate those answers.

What we're hoping to gain from that is a significant improvement in productivity and less burnout for us. The AI piece comes in with the LLMs: we use similarity search to find the content relevant to the question, then use AI to summarize it into a format our developers can use.
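A minimal sketch of that retrieve-then-summarize flow. This is an assumption-laden toy: a word-overlap score stands in for real vector similarity search (in production, the search runs against embeddings stored in Astra DB), and the assembled prompt would be sent to an LLM rather than just returned.

```python
def score(question: str, document: str) -> int:
    """Toy relevance score: count of words shared between question and
    document. A real system scores by cosine similarity of embeddings."""
    tokens = lambda s: {w.strip("?,.").lower() for w in s.split()}
    return len(tokens(question) & tokens(document))

def top_k(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Return the k documents most relevant to the question."""
    return sorted(documents, key=lambda d: -score(question, d))[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble retrieved context plus the question into an LLM prompt."""
    context = "\n---\n".join(top_k(question, documents))
    return ("Answer using only this context from our code base:\n"
            f"{context}\n\nQuestion: {question}")

# Illustrative component descriptions, standing in for indexed code chunks.
docs = [
    "AddressService: manages customer addresses",
    "VaultClient: reads secrets from Vault",
    "PaymentRouter: routes card payments",
]
prompt = build_prompt(
    "Do we have a component that manages customer addresses?", docs)
```

The prompt grounds the LLM in retrieved code context, so the summarization step answers from the actual code base rather than from the model's general knowledge.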

So we use Astra DB from DataStax. Historically, I went to an event, I explained the problem that we had to the DataStax team there, and they thought they could help with that. So we started to put together a prototype.

They surprised me: they had a prototype within a week that used Astra DB. Then they showed me how to do it. It was originally with Jupyter notebooks and so forth, classic experimentation, and we just ran with it. I certainly played around with a few other vector databases.

Ultimately, it's a question of what our developers are comfortable with and what can get through compliance and vendor onboarding quickly. But it's also a question of the capabilities: we need search, we need vector search, but eventually also graph search.

So that limits it a little bit. We're using a lot of different systems and tools from the ecosystem. We're using LangChain a lot, we're using Hugging Face, and we're using genAI for the embeddings. I think the key advice is, because this is still relatively new and there's a lot of hype around it, you need to allow your developers the space to learn, and you need to work with your business to identify use cases and problems to solve that truly matter, because only then can you really get it right.