Building Your First AI Agent with LangChain and Open APIs
Santhosh Vijayabaskar
Posted on October 24, 2024
I am sure we all have been hearing about AI agents and are not sure where to begin; no worries, you're in the right place! In this article, I am going to introduce you to the world of AI agents and walk you through, step by step, how to build your first AI agent with LangChain.
LangChain is an incredibly useful framework for connecting AI models to external APIs. In this guided tutorial, we will build our first agent and connect it to live weather data from the OpenWeather API to make it more interactive and practical.
By the time we're done, you will have your own AI agent that can chat, pull in live data, and do so much more!
What is an AI Agent? Let's Break it Down
An AI agent is like a supercharged virtual assistant that's always ready to help. Whether it's answering your questions, handling small tasks for you, or even making decisions, an AI agent is like having a digital helper at your disposal. It can do everything from fetching data to creating content to having a conversation with you. Pretty cool, right?
AI agents aren't just static; they're smart, dynamic, and capable of working on their own, thanks to the power of large language models (LLMs) like GPT-3 or GPT-4.
What is LangChain? A Developer-Friendly Powerhouse
LangChain is a developer-friendly framework that connects AI models (like GPT-3 or GPT-4) with external tools and data. It helps create structured workflows where the AI agent can talk to APIs or databases to fetch information.
Why LangChain?
- Easy to use: It simplifies integrating large language models with other tools (Jira, Salesforce, calendars, databases, etc.).
- Scalable: You can build anything from a basic chatbot to a complex multi-agent system.
- Community-driven: With a large, active community, LangChain provides a wealth of documentation, examples, and support.
In our case, we're building a simple agent that can answer questions, and to make things cooler, it'll retrieve real-time data like weather information. Let's dive in!
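Before we install anything, it helps to see the agent pattern in miniature. Here is a toy, LangChain-free sketch of the loop we'll build: the agent looks at a question, decides whether it needs an external tool (here a stubbed weather lookup), and folds the tool's result into its answer. All names here (`tiny_agent`, `weather_tool`) are illustrative, not LangChain APIs:

```python
def weather_tool(city):
    # Stand-in for a real API call (we'll wire up OpenWeather later)
    return f"18°C and cloudy in {city}"

def tiny_agent(question):
    """A toy agent loop: route the question to a tool if one applies,
    otherwise answer from the model's own knowledge."""
    if "weather" in question.lower():
        # Crude "tool selection": grab the city after the last " in "
        city = question.rsplit(" in ", 1)[-1].rstrip("?")
        observation = weather_tool(city)
        return f"Based on a live lookup: {observation}."
    return "I can answer that from what I already know."

print(tiny_agent("What is the weather in New York?"))
print(tiny_agent("Who wrote Hamlet?"))
```

The real agent below replaces the stub with an actual API call and the canned answer with an LLM, but the shape of the loop stays the same.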
Step 1: Setting Up Your Environment
In this section, let's set up our development environment.
1.1 Install Python (if you haven't already)
Make sure you have Python installed. You can download it from python.org. Once installed, verify it by running:
python --version
1.2 Install LangChain
Now let's install LangChain via pip. For those who are new to Python, pip is the standard package manager for installing Python packages. Open your terminal and run:
pip install langchain
1.3 Install OpenAI
We'll also be using the OpenAI API to interact with GPT-3, so you'll need to install the OpenAI Python client:
pip install openai
1.4 Set Up a Virtual Environment (Optional)
It's good practice to work in a virtual environment to keep your project dependencies separate:
python -m venv langchain-env
source langchain-env/bin/activate # For Mac/Linux
# or for Windows
langchain-env\Scripts\activate
Step 2: Building Your First AI Agent
Now comes the fun part: let's build our first AI agent! In this step, we'll create an agent that can have a simple conversation using OpenAI's language model. For this, you'll need an API key from OpenAI, which you can get by signing up at OpenAI.
Here's a small snippet to create your first agent:
from langchain.llms import OpenAI
# Initialize the model
llm = OpenAI(api_key="your-openai-api-key")
# Define a prompt for the agent
prompt = "What is the weather like in New York today?"
# Get the response from the AI agent
response = llm(prompt)
print(response)
In the above code, we're setting up a very basic agent that takes a prompt (a question about the weather) and returns a response from GPT-3. At this point, the agent doesn't actually retrieve live weather data; it's just generating a response based on the language model's knowledge.
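One practical note before we go further: hardcoding the API key is fine for a quick demo, but it's safer to read it from an environment variable so the key never lands in your source code. A minimal sketch using only the standard library (the helper name `load_api_key` is my own; `OPENAI_API_KEY` is the conventional variable name):

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Read an API key from the environment, failing loudly if it's missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first.")
    return key

# Then initialize the model with it instead of a literal string:
# llm = OpenAI(api_key=load_api_key())
```

This also makes it easy to use different keys in development and production without touching the code.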
Step 3: Connecting to an Open API (Weather API)
Now let's step things up by integrating real-time data into our agent. We're going to connect it to a weather API, which will allow the agent to retrieve live weather information.
Here's how you do it.
Get an API Key from OpenWeather
Head over to OpenWeather and sign up for a free API key.

Make the API Request

Next, we'll modify our agent so that it fetches live weather data from OpenWeather's API and then outputs it as part of the conversation.
import requests
from langchain.llms import OpenAI

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"http://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url).json()
    # Extract relevant data
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The current temperature in {city} is {temp}°C with {description}."

# Now use the LangChain LLM model to integrate this data
llm = OpenAI(api_key="your-openai-api-key")
city = "New York"
weather_info = get_weather(city)
prompt = f"Tell me about the weather in {city}: {weather_info}"
response = llm(prompt)
print(response)
In the above code, the get_weather function makes a request to the OpenWeather API and extracts data like temperature and weather description.
The response is then integrated into the AI agentâs output, making it look like the agent is providing up-to-date weather information.
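One caveat: the snippet above assumes the request succeeds and the city exists. As a minimal sketch of making it more robust (the helper name `format_weather` is my own, not part of the tutorial), you can separate the parsing step and guard against missing fields before handing the text to the LLM:

```python
def format_weather(city, payload):
    """Turn an OpenWeather-style JSON payload into a sentence,
    falling back gracefully if the expected fields are missing."""
    try:
        temp = payload["main"]["temp"]
        description = payload["weather"][0]["description"]
    except (KeyError, IndexError):
        # OpenWeather returns a "message" field on errors (e.g. an unknown city)
        reason = payload.get("message", "unknown error")
        return f"Sorry, I couldn't get the weather for {city}: {reason}."
    return f"The current temperature in {city} is {temp}°C with {description}."

# A payload shaped like a successful OpenWeather response
sample = {"main": {"temp": 21.5}, "weather": [{"description": "clear sky"}]}
print(format_weather("New York", sample))

# A payload shaped like an OpenWeather error response
print(format_weather("Atlantis", {"cod": "404", "message": "city not found"}))
```

This way the agent tells the user what went wrong instead of crashing with a `KeyError`.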
Step 4: Deploying Your AI Agent as an API
Now that our agent can chat and retrieve live data, let's make it accessible to others by turning it into an API. This way, anyone can interact with the agent through HTTP requests.
Using FastAPI for Deployment
FastAPI is a powerful web framework that makes it easy to create APIs in Python. Here's how we can deploy our agent using FastAPI:
from fastapi import FastAPI
from langchain.llms import OpenAI
import requests

app = FastAPI()

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"http://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url).json()
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The weather in {city} is {temp}°C with {description}."

llm = OpenAI(api_key="your-openai-api-key")

@app.get("/ask")
def ask_question(city: str):
    weather = get_weather(city)
    prompt = f"Tell me about the weather in {city}: {weather}"
    response = llm(prompt)
    return {"response": response}
To run this API locally, install FastAPI and a server with pip install fastapi uvicorn, save the code as main.py, and start it with uvicorn main:app --reload. You can then access it by sending HTTP requests to http://localhost:8000/ask?city=New%20York (note that the space in the city name must be URL-encoded as %20).
Conclusion: What's Next?
Congratulations! You've just built your first AI agent from scratch and connected it to an open API to fetch real-time data. You've also deployed your agent as an API that others can interact with. From here, the possibilities are endless: you can integrate more APIs, build multi-agent systems, or deploy it on cloud platforms for broader use.
If you're ready for more and want to explore advanced features of LangChain, like memory management for long conversations, or dive into multi-agent systems to handle more complex tasks, do let me know in the comments below.
Have fun experimenting, and feel free to drop your thoughts!
You can also learn more about my work and projects at https://santhoshvijayabaskar.com