
How I Built an Intelligent Multi-Agent System with AutoGen, LangChain, and Hugging Face to Demonstrate Practical Agentic AI Workflows

In this tutorial, we explore the concept of Agentic AI by integrating LangChain, AutoGen, and Hugging Face into a functional framework that runs without paid APIs. We set up a lightweight open-source pipeline and progress through structured reasoning, multi-step workflows, and collaborative agent interactions, showing how reasoning, planning, and execution blend into autonomous intelligent behavior, all within our own environment and under our control.

Understanding the Target Audience

The target audience for this tutorial includes:

  • AI Developers and Engineers: Professionals looking to enhance their skills in building intelligent systems using open-source tools.
  • Business Managers: Individuals interested in understanding how AI can optimize workflows and improve decision-making processes in their organizations.
  • Researchers and Academics: Those seeking practical applications of AI theories in real-world scenarios.

Common pain points include:

  • High costs associated with proprietary AI APIs and services.
  • Lack of knowledge on how to implement multi-agent systems effectively.
  • Difficulty in integrating various AI frameworks and tools.

Goals of the audience encompass:

  • Learning how to build and deploy AI systems without incurring high costs.
  • Gaining hands-on experience with practical applications of AI workflows.
  • Understanding how to leverage collaboration among AI agents to enhance productivity.

Interests include:

  • Open-source technologies and frameworks.
  • Advancements in AI and machine learning.
  • Case studies showcasing AI in business environments.

Preferred communication methods are typically technical documentation, tutorial videos, and interactive coding sessions.

Setting Up the Environment

We begin by installing the necessary libraries and initializing a Hugging Face FLAN-T5 pipeline as our local language model. This model generates coherent and contextually rich text, laying the groundwork for our agentic experiments.

import warnings
warnings.filterwarnings('ignore')

from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json

print("Loading models...\n")

pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",
    max_length=200,
    do_sample=True,  # sampling must be enabled for temperature to take effect
    temperature=0.7
)

llm = HuggingFacePipeline(pipeline=pipe)
print("✓ Models loaded!\n")

LangChain Basics

We explore LangChain’s capabilities by constructing intelligent prompt templates that allow our model to reason through tasks.

def demo_langchain_basics():
    print("="*70)
    print("DEMO 1: LangChain - Intelligent Prompt Chains")
    print("="*70 + "\n")
    prompt = PromptTemplate(
        input_variables=["task"],
        template="Task: {task}\n\nProvide a detailed step-by-step solution:"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    task = "Create a Python function to calculate fibonacci sequence"
    print(f"Task: {task}\n")
    result = chain.run(task=task)
    print(f"LangChain Response:\n{result}\n")
    print("✓ LangChain demo complete\n")
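Newer LangChain releases deprecate LLMChain in favor of the `prompt | llm` runnable syntax, but under either API a prompt chain is little more than "fill the template, call the model". A stripped-down, stdlib-only sketch (the echo model is a hypothetical stand-in) makes that data flow explicit:

```python
# A prompt chain reduced to its essence: format the template, call the model.
TEMPLATE = "Task: {task}\n\nProvide a detailed step-by-step solution:"

def run_chain(task: str, model) -> str:
    prompt = TEMPLATE.format(task=task)  # same template as the demo above
    return model(prompt)

# Stand-in model so the sketch runs offline (hypothetical):
echo_model = lambda prompt: f"[model saw {len(prompt)} chars]"
print(run_chain("Create a Python function to calculate fibonacci sequence", echo_model))
```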

Multi-Step Reasoning with LangChain

We construct a multi-step reasoning flow that breaks complex goals into clear subtasks.

def demo_langchain_multi_step():
    print("="*70)
    print("DEMO 2: LangChain - Multi-Step Reasoning")
    print("="*70 + "\n")
    planner = PromptTemplate(
        input_variables=["goal"],
        template="Break down this goal into 3 steps: {goal}"
    )
    executor = PromptTemplate(
        input_variables=["step"],
        template="Explain how to execute this step: {step}"
    )
    plan_chain = LLMChain(llm=llm, prompt=planner)
    exec_chain = LLMChain(llm=llm, prompt=executor)
    goal = "Build a machine learning model"
    print(f"Goal: {goal}\n")
    plan = plan_chain.run(goal=goal)
    print(f"Plan:\n{plan}\n")
    print("Executing first step...")
    execution = exec_chain.run(step="Collect and prepare data")
    print(f"Execution:\n{execution}\n")
    print("✓ Multi-step reasoning complete\n")
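The planner/executor split above generalizes to a loop: one planning call produces the steps, then the executor prompt runs once per step. A stdlib-only sketch with a stub model (the stub and its canned plan are hypothetical, for illustration) shows the control flow:

```python
def plan_and_execute(goal, model):
    """Planner/executor loop: one planning call, one execution call per step."""
    plan_text = model(f"Break down this goal into 3 steps: {goal}")
    steps = [s.strip() for s in plan_text.split(";") if s.strip()]
    return [model(f"Explain how to execute this step: {s}") for s in steps]

# Stub model that answers the planner prompt with a ';'-separated step list:
def stub_model(prompt):
    if prompt.startswith("Break down"):
        return "Collect data; Train model; Evaluate results"
    return f"EXECUTED[{prompt.split(': ', 1)[1]}]"

results = plan_and_execute("Build a machine learning model", stub_model)
for r in results:
    print(r)
```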

Building Simple Agents

We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role, such as researcher, coder, or reviewer. These agents collaborate on a coding task, exchanging information and building upon each other’s outputs.

class SimpleAgent:
    def __init__(self, name: str, role: str, llm_pipeline):
        self.name = name
        self.role = role
        self.pipe = llm_pipeline
        self.memory = []

    def process(self, message: str) -> str:
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]['generated_text']
        self.memory.append({"user": message, "agent": response})
        return response

    def __repr__(self):
        return f"Agent({self.name}, role={self.role})"
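To see the role-conditioning and hand-off in action without a GPU, here is a self-contained rehearsal: a minimal mirror of the agent class wired to a stub pipeline (the stub and its canned replies are hypothetical; with the real pipe the call pattern is identical):

```python
class MiniAgent:
    """Minimal mirror of SimpleAgent above: role-conditioned prompt + memory."""
    def __init__(self, name, role, pipe):
        self.name, self.role, self.pipe, self.memory = name, role, pipe, []

    def process(self, message):
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]["generated_text"]
        self.memory.append({"user": message, "agent": response})
        return response

def stub_pipe(prompt, max_length=150):
    # Echo the role back so the conditioning is visible (hypothetical stub).
    role = prompt.split("You are a ", 1)[1].split(".", 1)[0]
    return [{"generated_text": f"({role} reply)"}]

researcher = MiniAgent("Researcher", "researcher", stub_pipe)
coder = MiniAgent("Coder", "coder", stub_pipe)
notes = researcher.process("Find approaches for fibonacci")
code = coder.process(f"Implement based on: {notes}")  # builds on the researcher's output
print(code)
print(len(researcher.memory))  # each agent keeps its own history
```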

AutoGen Concepts

We illustrate AutoGen’s core idea by defining a conceptual configuration of agents and their workflow.

def demo_autogen_conceptual():
    print("="*70)
    print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
    print("="*70 + "\n")
    agent_config = {
        "agents": [
            {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
            {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
            {"name": "Executor", "type": "executor", "role": "Runs code"}
        ],
        "workflow": [
            "1. UserProxy receives task",
            "2. Assistant generates solution",
            "3. Executor tests solution",
            "4. Feedback loop until complete"
        ]
    }
    print(json.dumps(agent_config, indent=2))
    print("\nAutoGen Key Features:")
    print("  • Automated agent chat conversations")
    print("  • Code execution capabilities")
    print("  • Human-in-the-loop support")
    print("  • Multi-agent collaboration")
    print("  • Tool/function calling\n")
    print("✓ AutoGen concepts explained\n")
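Step 4 of the workflow above, the feedback loop, is what makes the pattern agentic: the assistant keeps revising until the executor accepts. A stdlib sketch of that coordination loop (both agent behaviors are stubbed and hypothetical):

```python
def feedback_loop(task, assistant, executor, max_rounds=5):
    """UserProxy-style coordination: revise until the executor reports success."""
    feedback = ""
    for round_no in range(1, max_rounds + 1):
        solution = assistant(task, feedback)   # assistant sees prior feedback
        ok, feedback = executor(solution)      # executor tests the solution
        if ok:
            return round_no, solution
    return max_rounds, solution

# Stub agents: the assistant improves once it sees feedback (hypothetical).
assistant = lambda task, fb: "fixed solution" if fb else "draft solution"
executor = lambda sol: (sol.startswith("fixed"), "does not pass tests")

rounds, final = feedback_loop("sort a list", assistant, executor)
print(rounds, final)
```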

Combining LangChain and Agents

By integrating LangChain’s structured reasoning with our simple agentic system, we create a hybrid intelligent framework.

def demo_hybrid_system():
    print("="*70)
    print("DEMO 6: Hybrid LangChain + Multi-Agent System")
    print("="*70 + "\n")
    reasoning_prompt = PromptTemplate(
        input_variables=["problem"],
        template="Analyze this problem: {problem}\nWhat are the key steps?"
    )
    reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    problem = "Optimize a slow database query"
    print(f"Problem: {problem}\n")
    print("[LangChain] Analyzing problem...")
    analysis = reasoning_chain.run(problem=problem)
    print(f"Analysis: {analysis[:120]}...\n")
    print(f"[{planner.name}] Creating plan...")
    plan = planner.process(f"Plan how to: {problem}")
    print(f"Plan: {plan[:120]}...\n")
    print(f"[{executor.name}] Executing...")
    result = executor.process("Execute: Add database indexes")
    print(f"Result: {result[:120]}...\n")
    print("✓ Hybrid system complete\n")
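Stripped of the model calls, the hybrid pattern above is a three-stage pipeline: a chain analyzes, one agent plans, another executes, and each stage consumes the previous stage's output. Reduced to stdlib Python with stub stages (all stubs hypothetical):

```python
def hybrid_pipeline(problem, analyze, plan, execute):
    """Chain-then-agents pipeline: each stage feeds the next."""
    analysis = analyze(problem)                                    # LangChain stage
    plan_text = plan(f"Plan how to: {problem} given: {analysis}")  # planner agent
    return execute(f"Execute: {plan_text}")                        # executor agent

# Stub stages that record the order they ran in (hypothetical):
trace = []
def make_stage(name):
    def stage(text):
        trace.append(name)
        return f"<{name} output>"
    return stage

result = hybrid_pipeline("Optimize a slow database query",
                         make_stage("analysis"), make_stage("plan"), make_stage("execute"))
print(result, trace)
```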

Conclusion

This tutorial has demonstrated how Agentic AI can be realized through modular design, integrating LangChain’s reasoning depth with the collaborative power of agents. We showcased how powerful, autonomous AI systems can be built without expensive infrastructure, leveraging open-source tools and creative design.

Check out the FULL CODES for detailed implementations.

The post How I Built an Intelligent Multi-Agent System with AutoGen, LangChain, and Hugging Face to Demonstrate Practical Agentic AI Workflows appeared first on MarkTechPost.