A Step-by-Step Guide to Building a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation
In this tutorial, we walk through building a multi-tool AI agent with LangGraph and Claude that handles mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. The setup stays beginner-friendly by automating dependency installation.
We implement a set of specialized tools: a safe calculator, a web-search utility backed by DuckDuckGo, a mock weather information provider, a text analyzer, and a current-time function, and then integrate them into an agent architecture built with LangGraph, making it quick to assemble custom multi-functional AI agents.
Setting Up Your Environment
We automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. This setup streamlines the environment preparation process, making it portable and beginner-friendly.
import subprocess
import sys
def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")
print("Installing required packages...")
install_packages()
print("Installation complete!\n")
Tool Implementations
We import necessary libraries and modules for constructing the multi-tool AI agent. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.
import os
import json
import math
import re
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS
API Key Configuration
We set the Anthropic API key as an environment variable and read it back so the script can authenticate with Claude models. Replace the placeholder with your own key before running.
os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
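If you would rather not embed the key in the source file, one option (not part of the original script) is to prompt for it at runtime with Python's getpass module:
import getpass
# Ask for the key interactively only if it is not already set in the environment.
if not os.environ.get("ANTHROPIC_API_KEY"):
    os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")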
Calculator Tool
We define a calculator tool that evaluates mathematical expressions using a restricted eval namespace, exposing only whitelisted math functions and constants.
@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression safely and return the result."""
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
        expression = expression.replace('^', '**')
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"
Web Search Tool
This tool enables the agent to fetch real-time information using the DuckDuckGo Search API.
@tool
def web_search(query: str, num_results: int = 3) -> str:
    """Search the web with DuckDuckGo and return the top results."""
    try:
        num_results = min(max(num_results, 1), 10)
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
        if not results:
            return f"No search results found for: {query}"
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"
Weather Information Tool
This tool simulates retrieving current weather data for a given city.
@tool
def weather_info(city: str) -> str:
    """Return (mock) current weather information for a given city."""
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return f"Weather in {city}:\n" \
               f"Temperature: {weather['temp']}°C\n" \
               f"Condition: {weather['condition']}\n" \
               f"Humidity: {weather['humidity']}%"
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"
Text Analyzer Tool
This tool provides a detailed statistical analysis of a given text input.
@tool
def text_analyzer(text: str) -> str:
    """Analyze a text and return basic statistics about it."""
    if not text.strip():
        return "Please provide text to analyze."
    words = text.split()
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
    return analysis
Current Time Tool
This tool retrieves the current system date and time in a human-readable format.
@tool
def current_time() -> str:
    """Return the current system date and time in a human-readable format."""
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
Agent Workflow Creation
We construct the LangGraph-powered workflow that defines the AI agent’s operational structure.
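The workflow below references a tools list, an AgentState schema, an agent_node, and a should_continue router that this excerpt does not show. A minimal sketch of these pieces, following the standard LangGraph tool-calling pattern (the implementation details and the model name are assumptions, not the original code), might look like this:
# The tools defined above, collected so they can be bound to the model and to the ToolNode.
tools = [calculator, web_search, weather_info, text_analyzer, current_time]

# Conversation state: messages accumulate across steps via the operator.add reducer.
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

# Claude model with the tools bound so it can emit tool calls (model name is an example).
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0)
llm_with_tools = llm.bind_tools(tools)

def agent_node(state: AgentState) -> Dict[str, List[BaseMessage]]:
    """Call the model on the running conversation and append its reply to the state."""
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Route to the tool node when the last message requests a tool, otherwise end."""
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END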
def create_agent_graph():
    tool_node = ToolNode(tools)
    workflow = StateGraph(AgentState)
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)
    return app
print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")
Testing and Interactive Chat
The test_agent() function runs the LangGraph agent against several sample queries to verify it responds correctly across different use cases, while the chat_with_agent() function provides an interactive command-line interface for real-time conversations.
def test_agent():
    config = {"configurable": {"thread_id": "test-thread"}}
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
    print("Testing the agent with sample queries...\n")
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
def chat_with_agent():
    config = {"configurable": {"thread_id": "interactive-thread"}}
    print("Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")
    while True:
        try:
            user_input = input("You: ").strip()
            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue
            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")
Quick Demonstration
The quick_demo() function showcases the agent’s capabilities across different categories.
def quick_demo():
    config = {"configurable": {"thread_id": "demo"}}
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
    print("Quick Demo of Agent Capabilities\n")
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
Conclusion
This step-by-step tutorial provides valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With clear explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. Developers can confidently extend and customize their AI agents with this foundational knowledge.
For further exploration, you can check out the notebook on GitHub.
All credit for this research goes to the researchers of this project.