
A Coding Guide to Build Intelligent Multi-Agent Systems with the PEER Pattern

This tutorial walks through constructing a multi-agent system based on the PEER pattern: Plan, Execute, Express, and Review. The entire workflow runs in Google Colab/Notebook, integrating specialized agents and using Google's Gemini 1.5 Flash model via a free API key. Along the way, we examine how each agent collaborates to tackle complex tasks across domains such as finance, technology, and creative strategy. This hands-on approach clarifies the architecture, workflow, and iterative refinement needed to produce high-quality AI outputs.

Installation and Configuration

To begin, install the necessary libraries, including agentUniverse and google-generativeai, to set up the multi-agent system.

!pip install agentUniverse google-generativeai python-dotenv pydantic

Next, configure the Gemini API using your free API key to enable AI-powered content generation:

import os
import asyncio
from typing import Dict, List, Any, Optional
from dataclasses import dataclass
from enum import Enum
import json
import time
import google.generativeai as genai

GEMINI_API_KEY = 'Use Your API Key Here' 
genai.configure(api_key=GEMINI_API_KEY)

Agent Roles and Task Management

We define four specific agent roles within the system: Planner, Executor, Expresser, and Reviewer, utilizing an Enum to represent these functions. A Task dataclass is created to manage task metadata, including status, results, and feedback. The BaseAgent class serves as the foundational structure for all agents, allowing them to process tasks, communicate with the Gemini API using role-specific prompts, and store results efficiently.

class AgentRole(Enum):
   PLANNER = "planner"
   EXECUTOR = "executor"
   EXPRESSER = "expresser"
   REVIEWER = "reviewer"

@dataclass
class Task:
   id: str
   description: str
   context: Dict[str, Any]
   status: str = "pending"
   result: Optional[str] = None
   feedback: Optional[str] = None

class BaseAgent:
   def __init__(self, name: str, role: AgentRole, system_prompt: str):
       self.name = name
       self.role = role
       self.system_prompt = system_prompt
       self.memory: List[Dict] = []

   async def process(self, task: Task) -> str:
       prompt = f"{self.system_prompt}\n\nTask: {task.description}\nContext: {json.dumps(task.context)}"
       result = await self._simulate_llm_call(prompt, task)
       self.memory.append({
           "task_id": task.id,
           "input": task.description,
           "output": result,
           "timestamp": time.time()
       })
       return result

   async def _simulate_llm_call(self, prompt: str, task: Task) -> str:
       # Send the role-specific prompt to Gemini 1.5 Flash and return its text reply
       model = genai.GenerativeModel('gemini-1.5-flash')
       response = await model.generate_content_async(prompt)
       return response.text
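Before wiring in the live API, the `process` flow can be exercised offline by stubbing out the LLM call with a canned response, which makes the prompt construction and memory bookkeeping easy to inspect. The following is a minimal, self-contained sketch; the `StubAgent` class and its canned reply are illustrative and not part of the tutorial's code:

```python
import asyncio
import json
import time
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class Task:
    id: str
    description: str
    context: Dict[str, Any]
    result: Optional[str] = None

class StubAgent:
    """Same shape as BaseAgent, but the LLM call returns canned text."""
    def __init__(self, name: str, system_prompt: str):
        self.name = name
        self.system_prompt = system_prompt
        self.memory: List[Dict] = []

    async def process(self, task: Task) -> str:
        # Identical bookkeeping to BaseAgent.process: build prompt, call, remember
        prompt = f"{self.system_prompt}\n\nTask: {task.description}\nContext: {json.dumps(task.context)}"
        result = await self._simulate_llm_call(prompt, task)
        self.memory.append({"task_id": task.id, "input": task.description,
                            "output": result, "timestamp": time.time()})
        return result

    async def _simulate_llm_call(self, prompt: str, task: Task) -> str:
        # Canned reply stands in for the Gemini response
        return f"[{self.name}] processed: {task.description}"

agent = StubAgent("Strategic Planner", "You are a strategic planning agent.")
task = Task(id="t1", description="Outline a launch plan", context={"domain": "general"})
output = asyncio.run(agent.process(task))
print(output)             # [Strategic Planner] processed: Outline a launch plan
print(len(agent.memory))  # 1
```

Every call appends to `memory`, so each agent accumulates a per-task trace of inputs and outputs that later iterations could draw on.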

Implementing the PEER Pattern

The PEER pattern is implemented through the PEERAgent class, which coordinates the four specialized agents to solve tasks collaboratively. Each pass through the phases (Planning, Execution, Expression, and Review) refines the output based on the reviewer's feedback; the loop runs for up to three iterations and exits early once the reviewer judges the result to be of high quality.

class PEERAgent:
   def __init__(self):
       self.planner = BaseAgent("Strategic Planner", AgentRole.PLANNER, "You are a strategic planning agent. Break down complex tasks into actionable steps.")
       self.executor = BaseAgent("Task Executor", AgentRole.EXECUTOR, "You are an execution agent. Complete tasks efficiently using available tools and knowledge.")
       self.expresser = BaseAgent("Result Expresser", AgentRole.EXPRESSER, "You are a communication agent. Present results clearly and professionally.")
       self.reviewer = BaseAgent("Quality Reviewer", AgentRole.REVIEWER, "You are a quality assurance agent. Review outputs and provide improvement feedback.")
       self.iteration_count = 0
       self.max_iterations = 3

   async def collaborate(self, task: Task) -> Dict[str, Any]:
       self.iteration_count = 0  # reset so the same PEER system can be reused across tasks
       results = {"iterations": [], "final_result": None}
       while self.iteration_count < self.max_iterations:
           iteration_result = {}
           # Plan: break the task into actionable steps
           plan = await self.planner.process(task)
           iteration_result["plan"] = plan
           task.context["current_plan"] = plan
           # Execute: carry out the current plan
           execution = await self.executor.process(task)
           iteration_result["execution"] = execution
           task.context["execution_result"] = execution
           # Express: present the execution result clearly
           expression = await self.expresser.process(task)
           iteration_result["expression"] = expression
           task.result = expression
           # Review: assess quality and produce feedback for the next pass
           review = await self.reviewer.process(task)
           iteration_result["review"] = review
           task.feedback = review
           results["iterations"].append(iteration_result)
           # Exit early once the reviewer signals high quality, after at least two passes
           if "high" in review.lower() and self.iteration_count >= 1:
               results["final_result"] = expression
               break
           self.iteration_count += 1
           task.context["previous_feedback"] = review
       if results["final_result"] is None:
           # Fall back to the latest expression if the quality bar was never met
           results["final_result"] = task.result
       return results
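The early-exit rule is the subtle part of the loop: a "high" verdict alone is not enough, because the check also requires at least one completed iteration. The logic can be seen in isolation with scripted reviewer verdicts; the `run_loop` helper and canned review strings below are illustrative, not part of the tutorial:

```python
def run_loop(reviews, max_iterations=3):
    """Mimic PEERAgent.collaborate's exit rule: stop once a review
    contains 'high' AND at least one full iteration has completed."""
    iteration_count = 0
    iterations = []
    final = None
    while iteration_count < max_iterations:
        review = reviews[iteration_count]
        iterations.append(review)
        if "high" in review.lower() and iteration_count >= 1:
            final = f"accepted at iteration {iteration_count + 1}"
            break
        iteration_count += 1
    return iterations, final

# Reviewer approves on the second pass -> loop stops after 2 iterations
its, final = run_loop(["needs work", "high quality", "unused"])
print(len(its), final)    # 2 accepted at iteration 2

# Reviewer never approves -> the loop runs all 3 iterations, final stays None
its2, final2 = run_loop(["low", "medium", "ok"])
print(len(its2), final2)  # 3 None
```

Note that a "high" verdict on the very first pass does not terminate the loop; the `iteration_count >= 1` guard forces at least one refinement cycle before a result is accepted.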

Orchestrating Multi-Agent Collaboration

The MultiAgentOrchestrator manages the entire multi-agent system. It processes complex tasks using the PEER pattern, enhancing results with domain-specific agents when necessary.

class MultiAgentOrchestrator:
   def __init__(self):
       self.agents = {}
       self.peer_system = PEERAgent()
       self.task_queue = []

   def register_agent(self, agent: BaseAgent):
       self.agents[agent.name] = agent

   async def process_complex_task(self, description: str, domain: str = "general") -> Dict[str, Any]:
       task = Task(
           id=f"task_{int(time.time())}",
           description=description,
           context={"domain": domain, "complexity": "high"}
       )
       peer_results = await self.peer_system.collaborate(task)
       if domain in ["financial", "technical", "creative"]:
           domain_agent = self._get_domain_agent(domain)
           if domain_agent:
               domain_result = await domain_agent.process(task)
               peer_results["domain_enhancement"] = domain_result
       return {
           "task_id": task.id,
           "original_request": description,
           "peer_results": peer_results,
           "status": "completed",
           "processing_time": f"{len(peer_results['iterations'])} iterations"
       }

   def _get_domain_agent(self, domain: str) -> Optional[BaseAgent]:
       domain_agents = {
           "financial": BaseAgent("Financial Analyst", AgentRole.EXECUTOR, "You are a senior financial analyst with expertise in market analysis, risk assessment, and investment strategies. Provide detailed financial insights."),
           "technical": BaseAgent("Technical Expert", AgentRole.EXECUTOR, "You are a lead software architect with expertise in system design, scalability, and best practices. Provide detailed technical solutions."),
           "creative": BaseAgent("Creative Director", AgentRole.EXPRESSER, "You are an award-winning creative director with expertise in brand strategy, content creation, and innovative campaigns. Generate compelling and strategic creative solutions.")
       }
       return domain_agents.get(domain)

Running the Demo

We unify the components in the run_advanced_demo function, which tests the pipeline with financial, technical, and creative tasks while capturing agent performance metrics. This showcases the capabilities of the multi-agent system.

async def run_advanced_demo():
   orchestrator = MultiAgentOrchestrator()
   financial_task = "Analyze the potential impact of rising interest rates on tech stocks portfolio"
   result1 = await orchestrator.process_complex_task(financial_task, "financial")
   technical_task = "Design a scalable microservices architecture for a high-traffic e-commerce platform"
   result2 = await orchestrator.process_complex_task(technical_task, "technical")
   creative_task = "Create a comprehensive brand strategy for a sustainable fashion startup"
   result3 = await orchestrator.process_complex_task(creative_task, "creative")
   return {
       "demo_results": [result1, result2, result3],
       "agent_stats": {
           "total_tasks": 3,
           "success_rate": "100%",
           "avg_iterations": sum(len(r['peer_results']['iterations']) for r in [result1, result2, result3]) / 3
       }
   }
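The demo function is defined but still needs to be invoked. In Colab or Jupyter, which already run an event loop, the coroutine can be started with a top-level `await`; in a plain script, use `asyncio.run`. Here is a minimal sketch with a hypothetical `demo_stub` coroutine standing in for `run_advanced_demo`, since the real call requires a configured Gemini API key:

```python
import asyncio

async def demo_stub():
    # Stand-in for run_advanced_demo(); the real call needs a Gemini API key
    return {"total_tasks": 3, "status": "completed"}

# In a plain Python script:
results = asyncio.run(demo_stub())
print(results["status"])  # completed

# In a Colab/Jupyter cell, the notebook's running event loop means you would
# instead write:
#     results = await run_advanced_demo()
```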

Conclusion

This tutorial demonstrates how a multi-agent system can systematically solve complex problems by combining domain-specific reasoning, structured communication, and iterative quality checks. The PEER framework highlights the collaborative potential of specialized agents, and Gemini strengthens each agent's output, illustrating how modular AI systems can yield scalable, reliable, and intelligent applications.

For additional resources, visit our GitHub Page for tutorials, code, and notebooks. Follow us on Twitter and join our community for more updates.
