
A Coding Implementation to Build a Multi-Agent Research and Content Pipeline with CrewAI and Gemini

In this tutorial, we establish an end-to-end AI agent system powered by CrewAI and Google’s Gemini models. We begin by installing all required packages, configuring the Gemini API key securely, and building a suite of specialized agents, including research, data analysis, content creation, and quality assurance, each optimized for rapid, sequential collaboration. With clear utility classes and interactive commands, we streamline everything from quick one-off analyses to comprehensive multi-agent research projects, right inside the notebook.

Installation of Required Packages

We kick things off by auto-installing CrewAI, Gemini client libraries, and other helpers, ensuring that every dependency is ready within the Colab runtime. As the loop runs, we see each package installed quietly and verify its success before proceeding.

import subprocess
import sys

def install_packages():
    """Install required packages in Colab"""
    packages = [
        "crewai",
        "crewai-tools",
        "google-generativeai",
        "python-dotenv",
        "langchain-google-genai"
    ]

    for package in packages:
        try:
            print(f" Installing {package}...")
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f" {package} installed successfully!")
        except Exception as e:
            print(f" Failed to install {package}: {e}")

print(" Setting up Google Colab environment...")
install_packages()
print(" All packages installed!")

Setting Up the Gemini API Key

We retrieve our Gemini API key from Colab Secrets or, if it is missing, we are prompted to paste it in securely. A quick test call confirms the key works, ensuring our LLM is authenticated before any real tasks begin.

from google.colab import userdata

def setup_api_key():
    """Setup Gemini API key in Colab"""
    try:
        api_key = userdata.get('GEMINI_API_KEY')
        print(" API key loaded from Colab secrets!")
        return api_key
    except Exception:
        print(" Gemini API key not found in Colab secrets.")
        print("Please follow these steps:")
        print("1. Go to https://makersuite.google.com/app/apikey")
        print("2. Create a free API key")
        print("3. In Colab, go to  (Secrets) in the left sidebar")
        print("4. Add a new secret named 'GEMINI_API_KEY' with your API key")
        print("5. Enable notebook access for the secret")
        print("6. Re-run this cell")

        from getpass import getpass
        api_key = getpass("Or enter your Gemini API key here (it will be hidden): ")
        return api_key

GEMINI_API_KEY = setup_api_key()
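
Before we hand the key to the agent system, a quick standalone call can confirm it is valid; this minimal check uses the google-generativeai client directly (the same test also runs inside setup_gemini below), and the prompt text is just an example:

import google.generativeai as genai

# Sanity-check the key with a one-off Gemini call before building the agents
genai.configure(api_key=GEMINI_API_KEY)
test_model = genai.GenerativeModel('gemini-1.5-flash')
print(test_model.generate_content("Reply with OK if you can read this.").text)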

ColabGeminiAgentSystem Class

We architect the heart of the workflow: a ColabGeminiAgentSystem class that wires Gemini into LangChain, defines a file-reading tool, and spawns four specialized agents—research, data, content, and QA—each ready to collaborate on tasks.

import google.generativeai as genai
from langchain_google_genai import ChatGoogleGenerativeAI

class ColabGeminiAgentSystem:
    def __init__(self, api_key):
        """Initialize the Colab-optimized Gemini agent system"""
        self.api_key = api_key
        self.setup_gemini()
        self.setup_tools()
        self.setup_agents()
        self.results_history = []

    def setup_gemini(self):
        """Configure Gemini API for Colab"""
        try:
            genai.configure(api_key=self.api_key)
            model = genai.GenerativeModel('gemini-1.5-flash')
            response = model.generate_content("Hello, this is a test.")
            print(" Gemini API connection successful!")

            self.llm = ChatGoogleGenerativeAI(
                model="gemini-1.5-flash",
                google_api_key=self.api_key,
                temperature=0.7,
                convert_system_message_to_human=True
            )

        except Exception as e:
            print(f" Gemini API setup failed: {str(e)}")
            raise
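
We also register a simple file-reading tool in setup_tools, which isn't reproduced above; a minimal sketch, assuming the FileReadTool shipped with crewai-tools is the file-reading tool in question, looks like this:

# A minimal sketch of setup_tools (method of ColabGeminiAgentSystem);
# FileReadTool from crewai-tools is assumed to be the file-reading tool described above.
def setup_tools(self):
    """Register tools the agents can use"""
    from crewai_tools import FileReadTool
    self.file_tool = FileReadTool()  # lets agents read local files when a task needs it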

Creating Specialized Agents

The ColabGeminiAgentSystem class initializes four specialized agents (a sketch of how they might be declared follows this list):

  • Researcher: Conducts comprehensive research and provides detailed insights.
  • Data Analyst: Analyzes information and provides statistical insights.
  • Content Creator: Transforms research into engaging, accessible content.
  • Quality Assurance Specialist: Ensures high-quality, accurate, and coherent deliverables.
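
The full setup_agents method isn't reproduced above; here is a minimal sketch of how we might declare the four agents with CrewAI's Agent class, with roles, goals, and backstories paraphrased from the list rather than taken verbatim from the project:

# A minimal sketch of setup_agents (method of ColabGeminiAgentSystem);
# roles, goals, and backstories are illustrative paraphrases, not the project's exact strings.
def setup_agents(self):
    """Create the four specialized agents"""
    from crewai import Agent

    self.researcher = Agent(
        role="Senior Research Specialist",
        goal="Conduct comprehensive research and provide detailed insights",
        backstory="An experienced researcher who digs deep into any topic.",
        llm=self.llm,
        tools=[self.file_tool],
        verbose=True,
        allow_delegation=False,
    )

    self.data_analyst = Agent(
        role="Data Analyst",
        goal="Analyze information and provide statistical insights",
        backstory="A detail-oriented analyst who turns findings into numbers and trends.",
        llm=self.llm,
        verbose=True,
        allow_delegation=False,
    )

    self.content_creator = Agent(
        role="Content Creator",
        goal="Transform research into engaging, accessible content",
        backstory="A writer who makes complex material clear and compelling.",
        llm=self.llm,
        verbose=True,
        allow_delegation=False,
    )

    self.qa_agent = Agent(
        role="Quality Assurance Specialist",
        goal="Ensure high-quality, accurate, and coherent deliverables",
        backstory="A meticulous reviewer who checks every deliverable before it ships.",
        llm=self.llm,
        verbose=True,
        allow_delegation=False,
    )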

Executing Colab Projects

The execute_colab_project method lets users run projects optimized for Colab, building the task list with a create_colab_tasks helper based on the selected type (comprehensive, quick, or analysis); a sketch of that helper follows the code below.

import time
from crewai import Crew, Process

# Method of ColabGeminiAgentSystem
def execute_colab_project(self, topic, task_type="comprehensive", save_results=True):
    """Execute project optimized for Colab"""
    print(f"\n Starting Colab AI Agent Project")
    print(f" Topic: {topic}")
    print(f" Task Type: {task_type}")
    print("=" * 60)

    start_time = time.time()

    try:
        tasks = self.create_colab_tasks(topic, task_type)

        if task_type == "quick":
            agents = [self.researcher, self.content_creator]
        elif task_type == "analysis":
            agents = [self.data_analyst]
        else:  
            agents = [self.researcher, self.data_analyst, self.content_creator, self.qa_agent]

        crew = Crew(
            agents=agents,
            tasks=tasks,
            process=Process.sequential,
            verbose=1,
            memory=True,
            max_rpm=20  
        )

        result = crew.kickoff()

        execution_time = time.time() - start_time

        print(f"\n Project completed in {execution_time:.2f} seconds!")
        print("=" * 60)

        if save_results:
            self._save_results(topic, task_type, result, execution_time)

        return result

    except Exception as e:
        print(f"\n Project execution failed: {str(e)}")
        print(" Try using 'quick' task type for faster execution")
        return None
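
The create_colab_tasks helper called above isn't reproduced in full; here is a minimal sketch of what it might look like for the three modes, with task descriptions and expected outputs as illustrative placeholders:

# A minimal sketch of create_colab_tasks (method of ColabGeminiAgentSystem);
# descriptions and expected outputs are illustrative assumptions.
def create_colab_tasks(self, topic, task_type):
    """Build the task list for the chosen mode"""
    from crewai import Task

    research = Task(
        description=f"Research the topic '{topic}' and summarize the key findings.",
        expected_output="A structured research summary with sources and key points.",
        agent=self.researcher,
    )
    analysis = Task(
        description=f"Analyze the topic '{topic}' and highlight trends and statistics.",
        expected_output="An analytical report with data-driven insights.",
        agent=self.data_analyst,
    )
    content = Task(
        description=f"Turn the research on '{topic}' into an engaging article.",
        expected_output="A polished, reader-friendly article.",
        agent=self.content_creator,
    )
    review = Task(
        description=f"Review the article on '{topic}' for accuracy and coherence.",
        expected_output="A final, quality-checked deliverable.",
        agent=self.qa_agent,
    )

    if task_type == "quick":
        return [research, content]
    elif task_type == "analysis":
        return [analysis]
    return [research, analysis, content, review]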

Interactive Agent System

An interactive command-line loop lets users type commands to initiate projects on demand, turning the notebook into an interactive sandbox without requiring extra coding.

# Assumes a ColabGeminiAgentSystem instance named `agent_system` has been created
# (see the usage sketch after this function)
def interactive_agent_system():
    """Interactive interface for the agent system"""
    print("\n Interactive AI Agent System")
    print("=" * 40)
    print("Available commands:")
    print("1. 'research [topic]' - Comprehensive research")
    print("2. 'quick [topic]' - Quick analysis")
    print("3. 'analyze [topic]' - Deep analysis")
    print("4. 'history' - Show results history")
    print("5. 'help' - Show this help")
    print("6. 'exit' - Exit the system")
    print("=" * 40)

    while True:
        try:
            command = input("\n Enter command: ").strip().lower()

            if command == 'exit':
                print(" Goodbye!")
                break
            elif command == 'help':
                print("\nAvailable commands:")
                print("- research [topic] - Comprehensive research")
                print("- quick [topic] - Quick analysis")
                print("- analyze [topic] - Deep analysis")
                print("- history - Show results history")
                print("- exit - Exit the system")
            elif command == 'history':
                agent_system.show_results_history()
            elif command.startswith('research '):
                topic = command[9:]
                agent_system.execute_colab_project(topic, task_type="comprehensive")
            elif command.startswith('quick '):
                topic = command[6:]
                agent_system.execute_colab_project(topic, task_type="quick")
            elif command.startswith('analyze '):
                topic = command[8:]
                agent_system.execute_colab_project(topic, task_type="analysis")
            else:
                print(" Unknown command. Type 'help' for available commands.")

        except KeyboardInterrupt:
            print("\n System interrupted. Goodbye!")
            break
        except Exception as e:
            print(f" Error: {e}")

Conclusion

We have a fully operational, reusable framework that lets us spin up research pipelines, generate polished outputs, and store our results with just a few commands. We can now run quick tests, deep dives, or interactive sessions on any topic, download the findings, or persist them to a mounted Google Drive, as sketched below.
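
A minimal sketch of persisting results to Google Drive from Colab; the results filename is an assumption, since the _save_results helper isn't shown above:

from google.colab import drive
import shutil

# Mount Drive and copy the saved results file (filename assumed) into it
drive.mount('/content/drive')
shutil.copy('agent_results.json', '/content/drive/MyDrive/agent_results.json')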

