Mert Deveci on AI-Driven Personalization at Scale with GodmodeHQ

Mert Deveci, Founder at GodmodeHQ

Mert Deveci is the founder of GodmodeHQ, the AI-native sales operating system for modern teams. Based in London, GodmodeHQ specializes in automating sales and marketing tasks with a focus on personalized, context-rich interactions. Before founding GodmodeHQ in 2022, Mert worked as an Investment Banking Analyst at Morgan Stanley, where he developed Excel models and streamlined data presentation processes. He also gained experience in private equity at Franklin Templeton in Istanbul and as an Investment Banking Analyst at Société Générale. Mert holds a background in Financial and Managerial Accounting from Boğaziçi University. His diverse experience across finance and AI technology positions him uniquely in the world of AI-driven business automation.

Transcript

What we do at GodmodeHQ is we produce AI agents specifically for sales and marketing purposes. And these AI agents do the work, instead of selling the work or giving people the tools to do the work. What that essentially means is that they can do outbound prospecting, inbound prospecting, marketing, copywriting, and also creative marketing for advertising too.

What we focus on is research, so that the AI agent can get the necessary real-time online information for all of the customers that a company has and then act on them autonomously.

In a world where AI is so prevalent and everyone sort of uses the same models, it becomes difficult to differentiate yourself, and not only to differentiate, but also to make noise. What I mean by that is, if you are publishing a certain piece of marketing material or a certain email, everyone might be sending the same email. The real difference comes when you actually make it meaningful in a way that refers to the context of the customer.
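To make that contrast concrete, here is a minimal, hypothetical sketch (the field names and wording are invented for illustration, not GodmodeHQ's actual templates) of a generic blast email versus one grounded in researched, prospect-specific context:

```python
# Illustrative sketch: generic outreach vs. context-grounded outreach.

def generic_email(product: str) -> str:
    # The same email everyone else could be sending.
    return f"Hi, we sell {product}. Want a demo?"

def personalized_email(product: str, context: dict) -> str:
    # Weave researched, prospect-specific context into the message.
    return (
        f"Hi {context['name']}, congrats on {context['recent_event']}. "
        f"Teams like yours often struggle with {context['pain_point']}; "
        f"{product} addresses exactly that. Worth a quick chat?"
    )

# Hypothetical research output for one prospect.
ctx = {
    "name": "Dana",
    "recent_event": "your Series A",
    "pain_point": "manual outbound research",
}

print(personalized_email("Acme CRM", ctx))
```

The point is not the template itself but where `ctx` comes from: in the product described here, an agent researches each prospect and fills that context in automatically.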

So we do our research in a contextual way, so that each and every email we send, and each and every piece of marketing material we publish, is actually helpful to the prospect or the customer. And what we are enabled to build with the technology that Astra DB offers is something that is really core to our product.

We ingest data from the internal resources of the prospect or user that we have, and we use that data to say, hey, this is what you are selling and this is the way that you have sold. Let's say a case study is ingested: this is the way that the customer benefits from your products.

So if you've got a list of prospects, then based on this case study, this is what you should offer them because this might be their pain point. In that way we are able to actually offer very different use cases and very different research capabilities in the same platform to two different customers.
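As an illustrative sketch of that retrieval step (not the product's actual code: in production this is Astra DB's vector search over real embeddings, while here a toy bag-of-words embedding and cosine similarity stand in for it), matching a prospect's pain point to the most relevant ingested case-study passage looks roughly like:

```python
# Toy stand-in for vector search: bag-of-words "embeddings" + cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Real systems use dense LLM embeddings; word counts suffice for the sketch.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented case-study chunks, as they might look after ingestion.
case_study_chunks = [
    "Customer X cut outbound research time by 60 percent using automation.",
    "Customer Y improved email reply rates with personalized copy.",
]

def best_chunk_for(pain_point: str) -> str:
    # Retrieve the ingested passage most similar to the prospect's pain point.
    return max(case_study_chunks, key=lambda c: cosine(embed(c), embed(pain_point)))

print(best_chunk_for("our team spends too long on outbound research"))
```

The same mechanism lets one platform serve very different customers: each customer's own ingested documents determine what gets retrieved and offered.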

And that all is powered by Astra DB. Once the documents are ingested, once the context is understood, it just flows into the same platform and then trickles down to different customers in different contexts.

Vector search is important in two ways. One is obviously compliance and data security. The second is that, although LLMs have increasing context windows and that makes things easier now, it's very important that you can enable any use case that your customer wants.

When you are producing a product that is like a simple demo, it's pretty simple to basically just say, hey, here's a bunch of information, a couple of paragraphs, and then generate something out of it. An LLM can deal with that. But a real B2B customer use case is different.

Usually those case studies span many pages, and in that case I think vector search is very important because it provides the context. It can ingest almost unlimited information and, which is the most important thing, provide good results out of that vector search, which is something we don't even develop. We just rely on Astra.

Coming from the bottom of the stack: basically, LLMs. Other than the closed-source LLMs, we use open-source LLMs now; obviously Llama is much better than some of the closed-source models out there.
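As a sketch of why multi-page case studies need this: before a vector store can retrieve from a long document, the document is typically split into overlapping chunks sized for embedding. The sizes below are illustrative, not Astra DB defaults:

```python
# Illustrative chunking step for ingesting a long document into a vector store.

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap, so context that spans
    a chunk boundary is not lost to retrieval."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
        if start + size >= len(text):
            break
    return chunks

# A stand-in "multi-page" document.
doc = "A" * 500
pieces = chunk(doc)
print(len(pieces), [len(p) for p in pieces])
```

Each chunk would then be embedded and stored; retrieval pulls back only the handful of chunks relevant to the query, regardless of how long the original document was.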

Secondly, to orchestrate those LLMs into agents, we use a framework called Griptape, and we use LangChain. We use LangGraph, which is the agent framework by LangChain. And we used, but churned from, AutoGen, which was Microsoft's agent framework.
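Under the hood, these agent frameworks share a common shape. As a rough sketch (the `fake_llm`, tool names, and state keys below are invented for illustration; none of this is Griptape's or LangGraph's actual API), an agent is a loop in which a model repeatedly picks the next tool to call until the task is done:

```python
# Minimal agent loop: a model decides the next action, tools do the work.

def research_prospect(name: str) -> str:
    # Hypothetical tool: gather context about a prospect.
    return f"notes about {name}"

def draft_email(notes: str) -> str:
    # Hypothetical tool: turn researched notes into outreach copy.
    return f"email based on: {notes}"

TOOLS = {"research": research_prospect, "draft": draft_email}

def fake_llm(state):
    """Deterministic stand-in for the model's next-action decision."""
    if "notes" not in state:
        return ("research", state["prospect"])
    if "email" not in state:
        return ("draft", state["notes"])
    return None  # done

def run_agent(prospect: str) -> dict:
    state = {"prospect": prospect}
    while (action := fake_llm(state)) is not None:
        tool, arg = action
        result = TOOLS[tool](arg)
        state["notes" if tool == "research" else "email"] = result
    return state

print(run_agent("Acme Corp")["email"])
```

Frameworks like LangGraph add the parts this sketch omits: persistent state, branching graphs of steps, retries, and real tool-calling against an LLM.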

And then you come to vector search. We obviously use Astra, and because it's well integrated with all these ecosystem players and their SDKs, it's been working pretty well for us and we haven't needed any alternatives.