
How to Create a Custom Model Context Protocol (MCP) Client Using Gemini

In this tutorial, we will be implementing a custom Model Context Protocol (MCP) Client using Gemini. By the end of this tutorial, you will be able to connect your own AI applications with MCP servers, unlocking powerful new capabilities to supercharge your projects.

Step 1: Setting up the dependencies

Gemini API

We'll be using the Gemini 2.0 Flash model for this tutorial.

To get your Gemini API key, visit the API key page in Google AI Studio and follow the instructions there.

Once you have the key, store it safely; you'll need it later.

Node.js

Some of the MCP servers require Node.js to run. Download the latest version of Node.js from the official Node.js website.

Run the installer.

Leave all settings as default and complete the installation.

National Park Services API

For this tutorial, we will be exposing the National Park Services MCP server to our client. To use the National Park Service API, you can request an API key by visiting the NPS developer page and filling out a short form. Once submitted, the API key will be sent to your email.

Make sure to keep this key accessible; we'll be using it shortly.

Installing Python libraries

In the command prompt, run the following command to install the Python libraries:


pip install mcp python-dotenv google-genai

Step 2: Setting up the configuration files

Creating mcp.json file

Next, create a file named mcp.json.

This file will store configuration details about the MCP servers your client will connect to.

Once the file is created, add the following initial content:


{
  "mcpServers": {
    "nationalparks": {
      "command": "npx",
      "args": ["-y", "mcp-server-nationalparks"],
      "env": {
        "NPS_API_KEY": "YOUR_NPS_API_KEY"
      }
    }
  }
}

Replace the NPS_API_KEY placeholder value with the key you generated.
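
To confirm the configuration is valid JSON before wiring it into the client, you can parse it with Python's standard library. The sketch below inlines the config for demonstration (in the client we read it from the mcp.json file on disk instead) and lists the configured server names:

```python
import json

# Inline copy of the configuration for demonstration only;
# the client reads this from mcp.json instead.
config_text = """
{
  "mcpServers": {
    "nationalparks": {
      "command": "npx",
      "args": ["-y", "mcp-server-nationalparks"],
      "env": {"NPS_API_KEY": "YOUR_NPS_API_KEY"}
    }
  }
}
"""
config = json.loads(config_text)
print(list(config["mcpServers"].keys()))  # ['nationalparks']
```

If `json.loads` raises a `JSONDecodeError`, check for missing commas or braces before running the client.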

Creating .env file

Create a .env file in the same directory as the mcp.json file and enter the following code:


GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>

Replace the placeholder with the Gemini API key you generated earlier.
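
Under the hood, load_dotenv() simply reads KEY=VALUE lines from the .env file into the process environment so that os.getenv can see them. A stdlib-only sketch of that behavior, using a made-up stand-in value rather than a real key:

```python
import os

# A stand-in for the single line stored in the .env file (not a real key)
env_line = "GEMINI_API_KEY=example-key-value"

# Split on the first '=' and place the pair into the environment,
# which is roughly what load_dotenv() does for each line
key, _, value = env_line.partition("=")
os.environ[key.strip()] = value.strip()

print(os.getenv("GEMINI_API_KEY"))  # example-key-value
```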

Step 3: Implementing the MCP Client

We will now create a file named client.py to implement our MCP client. Make sure that this file is in the same directory as mcp.json and .env.

Basic Client Structure

We will first import the necessary libraries and create a basic client class:


import asyncio
import json
import os
from typing import List, Optional
from contextlib import AsyncExitStack
import warnings

from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from dotenv import load_dotenv

load_dotenv()
warnings.filterwarnings("ignore", category=ResourceWarning)

def clean_schema(schema):
    # Cleans the schema by keeping only allowed keys
    allowed_keys = {"type", "properties", "required", "description", "title", "default", "enum"}
    return {k: v for k, v in schema.items() if k in allowed_keys}

class MCPGeminiAgent:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.genai_client = genai.Client(api_key=os.getenv("GEMINI_API_KEY"))
        self.model = "gemini-2.0-flash"
        self.tools = None
        self.server_params = None
        self.server_name = None

The __init__ method initializes the MCPGeminiAgent by setting up an asynchronous session manager, loading the Gemini API client, and preparing placeholders for model configuration, tools, and server details.

It lays the foundation for managing server connections and interacting with the Gemini model.
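
The clean_schema helper defined above matters because MCP servers describe tool inputs in full JSON Schema, while Gemini's function declarations accept only a subset of schema keys. A small standalone illustration (the sample tool schema below is invented for demonstration):

```python
allowed_keys = {"type", "properties", "required", "description", "title", "default", "enum"}

def clean_schema(schema):
    # Keep only the top-level keys Gemini's function declarations accept
    return {k: v for k, v in schema.items() if k in allowed_keys}

# An invented schema of the kind an MCP server might advertise for a tool
raw = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {"parkCode": {"type": "string"}},
    "required": ["parkCode"],
    "additionalProperties": False,
}

print(clean_schema(raw))
# {'type': 'object', 'properties': {'parkCode': {'type': 'string'}}, 'required': ['parkCode']}
```

Keys like "$schema" and "additionalProperties" are dropped, so the declaration can be passed to Gemini without a validation error.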

Selecting the MCP Server


    async def select_server(self):
        with open('mcp.json', 'r') as f:
            mcp_config = json.load(f)
        servers = mcp_config['mcpServers']
        server_names = list(servers.keys())
        print("Available MCP servers:")
        for idx, name in enumerate(server_names):
            print(f"  {idx+1}. {name}")
        while True:
            try:
                choice = int(input(f"Please select a server by number [1-{len(server_names)}]: "))
                if 1 <= choice <= len(server_names):
                    break
            except ValueError:
                pass
            print("Invalid selection. Please try again.")
        self.server_name = server_names[choice - 1]
        server_cfg = servers[self.server_name]
        self.server_params = StdioServerParameters(
            command=server_cfg['command'],
            args=server_cfg.get('args', []),
            env=server_cfg.get('env'),
        )

This method prompts the user to choose a server from the available options listed in mcp.json. It loads and prepares the selected server's connection parameters for later use.

Connecting to the MCP Server


    async def connect(self):
        await self.select_server()
        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(self.server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()
        print(f"Successfully connected to: {self.server_name}")
        # List available tools for this server
        mcp_tools = await self.session.list_tools()
        print("\nAvailable MCP tools for this server:")
        for tool in mcp_tools.tools:
            print(f"- {tool.name}: {tool.description}")

This establishes an asynchronous connection to the selected MCP server using stdio transport. It initializes the MCP session and retrieves the available tools from the server.

Handling user queries and tool calls


    async def agent_loop(self, prompt: str) -> str:
        contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
        mcp_tools = await self.session.list_tools()
        tools = types.Tool(function_declarations=[
            {
                "name": tool.name,
                "description": tool.description,
                "parameters": clean_schema(getattr(tool, "inputSchema", {}))
            }
            for tool in mcp_tools.tools
        ])
        self.tools = tools
        response = await self.genai_client.aio.models.generate_content(
            model=self.model,
            contents=contents,
            config=types.GenerateContentConfig(
                temperature=0,
                tools=[tools],
            ),
        )
        contents.append(response.candidates[0].content)
        turn_count = 0
        max_tool_turns = 5
        while response.function_calls and turn_count < max_tool_turns:
            turn_count += 1
            tool_response_parts = []
            for fc in response.function_calls:
                print(f"Calling MCP tool '{fc.name}' with arguments: {fc.args}")
                try:
                    result = await self.session.call_tool(fc.name, fc.args or {})
                    tool_response = {"result": result.content[0].text}
                except Exception as e:
                    tool_response = {"error": f"Tool execution failed: {e}"}
                tool_response_parts.append(types.Part.from_function_response(
                    name=fc.name, response=tool_response))
            contents.append(types.Content(role="user", parts=tool_response_parts))
            # Re-query Gemini with the tool results added to the conversation
            response = await self.genai_client.aio.models.generate_content(
                model=self.model,
                contents=contents,
                config=types.GenerateContentConfig(
                    temperature=0,
                    tools=[tools],
                ),
            )
            contents.append(response.candidates[0].content)
        if turn_count >= max_tool_turns and response.function_calls:
            print(f"Stopped after {max_tool_turns} tool calls to avoid infinite loops.")
        print("All tool calls complete. Displaying Gemini's final response.")
        return response

This method sends the user's prompt to Gemini, processes any tool calls returned by the model, executes the corresponding MCP tools, and iteratively refines the response. It manages multi-turn interactions between Gemini and the server tools.
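
The key safeguard in this loop is max_tool_turns. A stripped-down sketch with the model stubbed out (all names here are invented for illustration) shows how the guard cuts off a model that keeps requesting tools:

```python
def run_loop(model_responses, max_tool_turns=5):
    # model_responses stands in for successive model replies: True means
    # "the reply contained function calls", False means a final answer.
    turn_count = 0
    wants_tools = model_responses.pop(0)
    while wants_tools and turn_count < max_tool_turns:
        turn_count += 1
        # ...execute the requested MCP tools here, then re-query the model...
        wants_tools = model_responses.pop(0)
    return turn_count

# A model that answers after two tool rounds stops naturally:
print(run_loop([True, True, False]))  # 2

# A model that always asks for more tools is cut off at the limit:
print(run_loop([True] * 10))  # 5
```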

Interactive Chat Loop


    async def chat(self):
        print(f"\nMCP-Gemini Assistant is ready and connected to: {self.server_name}")
        print("Enter your question below, or type 'quit' to exit.")
        while True:
            try:
                query = input("\nYour query: ").strip()
                if query.lower() == 'quit':
                    print("Session ended. Goodbye!")
                    break
                print("Processing your request...")
                res = await self.agent_loop(query)
                print("\nGemini's answer:")
                print(res.text)
            except KeyboardInterrupt:
                print("\nSession interrupted. Goodbye!")
                break
            except Exception as e:
                print(f"\nAn error occurred: {e}")

This provides a command-line interface where users can submit queries and receive answers from Gemini, continuously until they exit the session.

Cleaning up resources


    async def cleanup(self):
        await self.exit_stack.aclose()

This closes the asynchronous context and cleans up all open resources like the session and connection stack gracefully.
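
AsyncExitStack is what makes a single aclose() call sufficient: every resource entered on the stack is closed in reverse order. A minimal stdlib-only sketch of the pattern (the resource names are invented for illustration):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

@asynccontextmanager
async def resource(name, log):
    # Record when each resource is opened and closed
    log.append(f"open {name}")
    yield name
    log.append(f"close {name}")

async def demo():
    log = []
    stack = AsyncExitStack()
    # Entered in this order, as in connect(): transport first, then session
    await stack.enter_async_context(resource("transport", log))
    await stack.enter_async_context(resource("session", log))
    await stack.aclose()  # closes session first, then transport
    return log

print(asyncio.run(demo()))
# ['open transport', 'open session', 'close session', 'close transport']
```

Closing in reverse order matters here: the session depends on the transport, so it must be shut down first.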

Main entry point


async def main():
    agent = MCPGeminiAgent()
    try:
        await agent.connect()
        await agent.chat()
    finally:
        await agent.cleanup()

if __name__ == "__main__":
    import sys
    import os
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("Session interrupted. Goodbye!")
    finally:
        # Suppress noisy stderr output from subprocess teardown on exit
        sys.stderr = open(os.devnull, "w")

This is the main execution logic.

Apart from main(), all other methods are part of the MCPGeminiAgent class. Put together, the snippets above form the complete client file.

Step 4: Running the client

Run the following command in the terminal to start your client:


python client.py

The client will:

Read the mcp.json file to list the available MCP servers.

Prompt the user to select one of the listed servers.

Connect to the selected MCP server using the provided configuration and environment settings.

Interact with the Gemini model through a series of queries and responses.

Allow users to issue prompts, execute tools, and process responses iteratively with the model.

Provide a command-line interface for users to engage with the system and receive real-time results.

Ensure proper cleanup of resources after the session ends, closing connections and releasing memory.
