A Step-by-Step Coding Guide to Building an Iterative AI Workflow Agent Using LangGraph and Gemini

In this tutorial, we demonstrate how to build a multi-step, intelligent query-handling agent using LangGraph and Gemini 1.5 Flash. The core idea is to structure AI reasoning as a stateful workflow, where an incoming query is passed through a series of purposeful nodes: routing, analysis, research, response generation, and validation. Each node operates as a functional block with a well-defined role, making the agent not just reactive but analytically aware. Using LangGraph’s StateGraph, we orchestrate these nodes to create a looping system that can re-analyze and improve its output until the response is validated as complete or a maximum iteration threshold is reached.

Prerequisites

First, install the required Python packages:

!pip install langgraph langchain-google-genai python-dotenv

This command installs three essential Python packages:

  • langgraph: Enables graph-based orchestration of AI agents.
  • langchain-google-genai: Provides integration with Google’s Gemini models.
  • python-dotenv: Allows secure loading of environment variables from .env files.

Setting Up the Environment

import os
from typing import Dict, Any
from dataclasses import dataclass
from langgraph.graph import Graph, StateGraph, END
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage
import json

os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here"

In this step, we import the required modules, including ChatGoogleGenerativeAI for interacting with Gemini models and StateGraph for defining and managing the stateful workflow. The line os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here" assigns the API key to an environment variable so the Gemini model can authenticate; replace the placeholder with your actual key.
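
Since python-dotenv is already installed, a safer option is to keep the key out of the notebook entirely. The snippet below is a minimal sketch that assumes a .env file in the working directory containing a GOOGLE_API_KEY entry:

from dotenv import load_dotenv
import os

# Read the .env file and populate os.environ with its key=value pairs
load_dotenv()

# Fail fast if the key is missing so authentication errors surface early
if "GOOGLE_API_KEY" not in os.environ:
    raise RuntimeError("GOOGLE_API_KEY not found; add it to your .env file")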

Defining the Agent State

@dataclass
class AgentState:
    """State shared across all nodes in the graph"""
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3

This AgentState dataclass defines the shared state that persists across different nodes in a LangGraph workflow. It tracks key fields, including the user’s query, retrieved context, any analysis performed, the generated response, and the recommended next action. It also includes an iteration counter and a maximum iterations limit to control how many times the workflow can loop, enabling iterative reasoning or decision-making by the agent.
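
To make that data flow concrete, here is an illustrative (hypothetical) standalone node function showing the typical LangGraph pattern: the current AgentState is passed in, and the returned dictionary of field updates is merged back into the shared state:

def example_node(state: AgentState) -> Dict[str, Any]:
    """Illustrative only: read fields from the shared state and return a partial update."""
    print(f"Handling query: {state.query} (iteration {state.iteration})")
    return {
        "context": "Query categorized as: TECHNICAL",
        "iteration": state.iteration + 1,
    }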

Building the Graph AI Agent

class GraphAIAgent:
    def __init__(self, api_key: str = None):
        if api_key:
            os.environ["GOOGLE_API_KEY"] = api_key
       
        self.llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.7,
            convert_system_message_to_human=True
        )
       
        self.analyzer = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.3,
            convert_system_message_to_human=True
        )
       
        self.graph = self._build_graph()
   
    def _build_graph(self) -> StateGraph:
        """Build the LangGraph workflow"""
        workflow = StateGraph(AgentState)

        workflow.add_node("router", self._router_node)
        workflow.add_node("analyzer", self._analyzer_node)
        workflow.add_node("researcher", self._researcher_node)
        workflow.add_node("responder", self._responder_node)
        workflow.add_node("validator", self._validator_node)

        workflow.set_entry_point("router")
        workflow.add_edge("router", "analyzer")
        workflow.add_conditional_edges(
            "analyzer",
            self._decide_next_step,
            {
                "research": "researcher",
                "respond": "responder"
            }
        )
        workflow.add_edge("researcher", "responder")
        workflow.add_edge("responder", "validator")
        workflow.add_conditional_edges(
            "validator",
            self._should_continue,
            {
                "continue": "analyzer",
                "end": END
            }
        )
       
        return workflow.compile()

Node Implementation

Each node in the workflow serves a distinct purpose (see the sketch following this list):

  • _router_node: Routes and categorizes the incoming query based on context.
  • _analyzer_node: Analyzes the query and determines the approach — whether to research or respond directly.
  • _researcher_node: Conducts additional research or information gathering based on the analysis.
  • _responder_node: Generates the final response using the analyzed context.
  • _validator_node: Validates the response quality and completeness.
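
The bodies of these methods are not reproduced above, so the following is a minimal sketch of how they might be implemented inside GraphAIAgent. The prompts, keyword checks, and state updates are illustrative assumptions rather than the exact original code; each node returns a dictionary of AgentState updates, and the two helper methods supply the labels used by the conditional edges:

    # Sketch: plausible bodies for the node methods and edge deciders (inside GraphAIAgent).
    def _router_node(self, state: AgentState) -> Dict[str, Any]:
        """Categorize the incoming query so later nodes know how to treat it."""
        messages = [
            SystemMessage(content="Classify this query as TECHNICAL, CREATIVE, or FACTUAL. Reply with one word."),
            HumanMessage(content=state.query),
        ]
        category = self.llm.invoke(messages).content.strip()
        return {"context": f"Query type: {category}"}

    def _analyzer_node(self, state: AgentState) -> Dict[str, Any]:
        """Analyze the query and decide whether extra research is needed."""
        messages = [
            SystemMessage(content="Analyze the query and context. End with NEEDS_RESEARCH or READY_TO_RESPOND."),
            HumanMessage(content=f"Query: {state.query}\nContext: {state.context}"),
        ]
        analysis = self.analyzer.invoke(messages).content
        action = "research" if "NEEDS_RESEARCH" in analysis else "respond"
        return {"analysis": analysis, "next_action": action}

    def _researcher_node(self, state: AgentState) -> Dict[str, Any]:
        """Gather additional background information based on the analysis."""
        messages = [
            SystemMessage(content="Provide key facts and background relevant to this query."),
            HumanMessage(content=f"Query: {state.query}\nAnalysis: {state.analysis}"),
        ]
        research = self.llm.invoke(messages).content
        return {"context": state.context + "\n\nResearch findings:\n" + research}

    def _responder_node(self, state: AgentState) -> Dict[str, Any]:
        """Generate the final answer from the accumulated context and analysis."""
        messages = [
            SystemMessage(content="Write a clear, complete answer to the user's query."),
            HumanMessage(content=f"Query: {state.query}\nContext: {state.context}\nAnalysis: {state.analysis}"),
        ]
        return {"response": self.llm.invoke(messages).content}

    def _validator_node(self, state: AgentState) -> Dict[str, Any]:
        """Check the response and record whether another pass is needed."""
        messages = [
            SystemMessage(content="Reply INCOMPLETE if the response misses part of the query, otherwise reply COMPLETE."),
            HumanMessage(content=f"Query: {state.query}\nResponse: {state.response}"),
        ]
        verdict = self.analyzer.invoke(messages).content
        action = "continue" if "INCOMPLETE" in verdict else "end"
        return {"next_action": action, "iteration": state.iteration + 1}

    def _decide_next_step(self, state: AgentState) -> str:
        """Route from the analyzer to either the researcher or the responder."""
        return "research" if state.next_action == "research" else "respond"

    def _should_continue(self, state: AgentState) -> str:
        """Stop when the validator is satisfied or the iteration cap is reached."""
        if state.next_action == "end" or state.iteration >= state.max_iterations:
            return "end"
        return "continue"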

Running the Agent

def main():
    agent = GraphAIAgent("Use Your API Key Here")
   
    test_queries = [
        "Explain quantum computing and its applications",
        "What are the best practices for machine learning model deployment?",
        "Create a story about a robot learning to paint"
    ]
   
    print("Graph AI Agent with LangGraph and Gemini")
    print("=" * 50)
   
    for i, query in enumerate(test_queries, 1):
        print(f"\nQuery {i}: {query}")
        print("-" * 30)
       
        try:
            response = agent.run(query)
            print(f" Response: {response}")
        except Exception as e:
            print(f" Error: {str(e)}")
       
        print("\n" + "="*50)

if __name__ == "__main__":
    main()

The main function initializes the GraphAIAgent with a Gemini API key and runs it on a set of test queries covering technical, strategic, and creative tasks. It prints each query and the AI-generated response, showcasing how the LangGraph-driven agent processes diverse types of input using Gemini’s reasoning and generation capabilities.
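
Note that main() calls agent.run(query), which is not shown in the excerpt above. Below is a minimal sketch of such a method, assuming the compiled graph accepts a dictionary of initial state values and returns the final state as a dictionary:

    def run(self, query: str) -> str:
        """Execute the compiled workflow for a single query and return the final response."""
        final_state = self.graph.invoke({"query": query})
        return final_state["response"]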

Conclusion

By combining LangGraph’s structured state machine with Gemini’s conversational intelligence, this agent demonstrates a workflow design that mirrors human reasoning cycles of inquiry, analysis, and validation. The tutorial provides a modular and extensible template for developing advanced AI agents that can autonomously handle varied tasks, from answering complex queries to generating creative content.

Check out the Notebook.
