If you’ve ever wondered how to create an AI assistant that can search the web, write code, or help with daily tasks, LangChain is a powerful framework for building intelligent agents. This guide walks through creating a LangChain agent step by step, showing how accessible and fun the complex world of AI agents can be.
Ready to build the next breakthrough AI application? Let’s go!
Setting up your environment
Before diving into agent creation, let's prepare our development environment. Think of it like preparing your kitchen before cooking—you need all the right ingredients.
Create a new directory for your project and set up a Python virtual environment. Please ensure that you have a modern version of Python (preferably 3.8+):
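Something like the following should work on macOS or Linux (the project name is just an example; on Windows, activate the environment with `.venv\Scripts\activate` instead):

```shell
mkdir langchain-agent && cd langchain-agent
python3 -m venv .venv
source .venv/bin/activate
python --version   # should print 3.8 or newer
```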
Then, you'll need to install the latest LangChain packages:
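A typical install for this tutorial looks like this (`python-dotenv` is included so we can load API keys from a `.env` file later; exact package splits vary between LangChain versions):

```shell
pip install -U langchain langchain-openai langchain-community python-dotenv
```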
Next, let's create a secret file to store our private API keys. This file should not be shared with anyone or uploaded anywhere. We can create a .env file:
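The file holds one key per line; the values below are placeholders for your own keys:

```
OPENAI_API_KEY=your-openai-api-key
TAVILY_API_KEY=your-tavily-api-key
```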
Grab your OpenAI API key from the OpenAI console, and sign up for a Tavily API key as well; Tavily is a search tool made specifically for use with LLMs.
Create a basic LangChain model
Now comes the exciting part—bringing your AI assistant to life!
With LangChain, creating an agent is extremely straightforward, since the framework provides all the abstractions needed to get up and running fast.
In the above code block, we import a few adapters, namely:

- `ChatOpenAI` to connect to OpenAI chat models
- `TavilySearchResults` to create a tool to search the web (more on this later)
- `Tool` to define a custom function
- `AgentExecutor` to create a runtime for agents
- `ChatPromptTemplate` to create a prompt template that we can change at runtime by passing our inputs
- `HumanMessage`, `AIMessage`, and `SystemMessage` classes to create the appropriate message type
Before we create an agent, let's connect to an LLM and test if everything works properly:
And just like that, we have connected to the OpenAI model. You should see a response of type `AIMessage` with a `content` key that says something like "The capital of France is Paris." More information is attached to the response, but you can safely ignore it for now.
Language models are incapable of executing actions; they are configured to only output text. That's where the concept of agents comes in. At their core, agents are systems that take a high-level task, use an LLM as a reasoning engine, and decide on the sequence of actions to take.
Defining the tools
Tools give your AI assistant special abilities. Want it to search the web? There's a tool for that. Need it to solve math problems? There's a tool for that too! Under the hood, these tools convert into functions that large language models are trained to recognize.
In this example, let's add two tools:
- a tool to search the web
- a custom tool that adds two numbers
Search tool
For this, we will use Tavily to provide information on current events to the model.
Custom tool
For this, we will write a Python function with an appropriate tool name and convert it using the `@tool` decorator.
We use the `bind_tools` method to attach the tools to our LLM, and then call it with two questions. If everything is set up correctly, you should see an empty `content` key, but the `tool_calls` key should be populated with the tools we described above.
The LLM is stating that it wants to call these tools to fetch information, and then provide us with an answer. Note that the LLM does not have the ability to run these tools yet.
Building and configuring the agent
Like every superhero needs their costume and catchphrase, your AI assistant needs its own personality and instructions. Let's start by defining the prompt template:
This creates a prompt object that we can now pass our input to, and it will attach the system prompt automatically. We also have a placeholder message (similar to a scratchpad) for the agent to think and write about its reasoning.
Now that we have the LLM, the tools, and the prompt template in place, we can finally define our agent:
Testing and refining the agent
It’s time to see your assistant in action! Let's give it some tasks to handle:
If you see the responses with the answers to the questions, congratulations! You've just created an agent using the LangChain framework!
Advanced agent development
Agent development is rapidly growing. The above example introduced the basic concepts of an agent and how to create a simple tool-calling agent with LangChain. Advanced features such as retrieval, memory, human-in-the-loop, and dynamic breakpoints evolve these agents into assistants capable of reasoning and executing multiple tasks.
The `AgentExecutor` is a good starting point; from there, a natural next step is LangGraph, a library for building stateful, multi-actor applications with LLMs.
Next steps
Congratulations! You've just built your own AI assistant. But this is just the beginning - there's so much more you can do. Try adding new tools, experimenting with different prompts, or teaching your assistant new skills.
A common use case connects agents to vector databases such as Astra DB to enable retrieval-augmented generation (RAG) by defining a retriever tool.
Remember, the best way to learn is by doing. Start small, experiment often, and don't be afraid to make mistakes. Please refer to the LangChain documentation for any queries.