
A Coding Implementation of an Intelligent AI Assistant with Jina Search, LangChain, and Gemini for Real-Time Information Retrieval


In this tutorial, we will demonstrate how to create an intelligent AI assistant by integrating LangChain, Gemini 2.0 Flash, and the Jina Search tool. By combining the capabilities of a large language model (LLM) with an external search API, we will develop an assistant that provides real-time information with citations. This guide walks you through setting up API keys, installing the necessary libraries, binding tools to the Gemini model, and building a custom chain that dynamically calls external tools. By the end, you will have a fully functional, interactive AI assistant that can respond to user queries with accurate, current, and well-sourced answers.

Prerequisites

Before we begin, ensure you have the following:

  • Python installed on your machine
  • API keys for Jina and Google Gemini

Step 1: Install Required Libraries

We will start by installing the necessary Python packages for this project.

%pip install --quiet -U "langchain-community>=0.2.16" langchain langchain-google-genai

This installs the LangChain framework, the LangChain Community tools package (which provides the Jina Search tool), and the Google Gemini integration, enabling seamless use of all three within LangChain pipelines.

Step 2: Import Essential Modules

import getpass
import os
import json
from typing import Dict, Any

We will utilize several essential modules:

  • getpass to securely enter API keys
  • os for managing environment variables
  • json for handling JSON data
  • typing for type hints, enhancing code readability and maintainability

Step 3: Set Up API Keys

if not os.environ.get("JINA_API_KEY"):
    os.environ["JINA_API_KEY"] = getpass.getpass("Enter your Jina API key: ")

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google/Gemini API key: ")

This ensures that API keys for Jina and Google Gemini are securely stored as environment variables, allowing access without hardcoding sensitive information.
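This pattern generalizes to any number of keys. As a small, hypothetical helper (not part of the tutorial code, just a sketch of the same idea), one might wrap it like this:

```python
import getpass
import os

def require_env(name: str, prompt: str) -> str:
    """Return the value of an environment variable, prompting for it once if unset."""
    if not os.environ.get(name):
        os.environ[name] = getpass.getpass(prompt)
    return os.environ[name]

# Usage (prompts only when the variable is missing):
# jina_key = require_env("JINA_API_KEY", "Enter your Jina API key: ")
```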

Step 4: Initialize Tools

from langchain_community.tools import JinaSearch
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage

search_tool = JinaSearch()

We import key modules and initialize the Jina Search tool, designed for handling web search queries within the LangChain ecosystem.
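Under the hood, `JinaSearch` queries Jina's search endpoint with your API key sent as a bearer token. As a rough illustration of the HTTP call it wraps (assuming the `https://s.jina.ai/` endpoint and bearer-token header; this sketch only builds the request, it does not send it):

```python
import os
import urllib.parse
import urllib.request

def build_jina_search_request(query: str) -> urllib.request.Request:
    """Construct (but do not send) a GET request to Jina's search endpoint."""
    url = "https://s.jina.ai/" + urllib.parse.quote(query)
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {os.environ.get('JINA_API_KEY', '')}",
            "Accept": "application/json",
        },
    )

req = build_jina_search_request("What is LangChain?")
print(req.full_url)
```

In practice you never need to do this yourself; the tool handles authentication, request formatting, and response parsing for you.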

Step 5: Initialize the Gemini Model

gemini_model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.1,
    convert_system_message_to_human=True  
)

We initialize the Gemini model with a low temperature setting (0.1) for more deterministic responses. The `convert_system_message_to_human=True` flag converts system messages into human messages for compatibility, since some Gemini model versions do not accept a separate system role.

Step 6: Define a Prompt Template

detailed_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an intelligent assistant with access to web search capabilities. When users ask questions, you can use the Jina search tool to find current information."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])

This structured prompt guides the assistant's behavior, instructing it to use the Jina search tool whenever a question calls for current information. The `{messages}` placeholder leaves room for tool-call results to be appended on later passes through the chain.
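Conceptually, the template is just a list of (role, text) pairs whose slots are filled at invocation time. A plain-Python sketch of that substitution (illustrative only, not LangChain's actual implementation):

```python
TEMPLATE = [
    ("system", "You are an intelligent assistant with access to web search capabilities."),
    ("human", "{user_input}"),
]

def render_prompt(template, **values):
    """Fill each message template with the supplied values."""
    return [(role, text.format(**values)) for role, text in template]

messages = render_prompt(TEMPLATE, user_input="What is LangChain?")
print(messages[1])  # ('human', 'What is LangChain?')
```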

Step 7: Bind Tools to the Gemini Model

gemini_with_tools = gemini_model.bind_tools([search_tool])
main_chain = detailed_prompt | gemini_with_tools

Binding the Jina Search tool to the Gemini model lets the model emit structured tool calls; piping the prompt template into the tool-bound model forms the main chain for handling user inputs.

Step 8: Define the Enhanced Search Chain

@chain
def enhanced_search_chain(user_input: str, config: RunnableConfig):
    input_data = {"user_input": user_input}
    ai_response = main_chain.invoke(input_data, config=config)

    if ai_response.tool_calls:
        tool_messages = []
        for tool_call in ai_response.tool_calls:
            # Invoking the tool with the tool call returns a ToolMessage
            tool_result = search_tool.invoke(tool_call)
            tool_messages.append(tool_result)
        # Feed the AI's tool request and the tool results back for a final answer
        input_data["messages"] = [ai_response] + tool_messages
        return main_chain.invoke(input_data, config=config)
    return ai_response

This function handles user queries dynamically: it invokes the chain once, executes any tool calls the model requests, and then feeds the tool results back so the final answer incorporates fresh search data.
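The control flow here is a common pattern: ask the model, check whether it requested any tools, run them, then hand the results back for a final pass. Stripped of the LangChain types, the loop looks like this (a toy sketch with stand-in functions, not real model or search calls):

```python
def fake_model(messages):
    """Stand-in for the LLM: requests a search on the first pass, answers on the second."""
    if not any(m["role"] == "tool" for m in messages):
        return {"content": "", "tool_calls": [{"name": "search", "args": {"query": "LangChain"}}]}
    return {"content": "LangChain is a framework for building LLM applications.", "tool_calls": []}

def fake_search(query):
    """Stand-in for the Jina search tool."""
    return f"Top result for {query!r}"

def run_chain(user_input):
    messages = [{"role": "human", "content": user_input}]
    response = fake_model(messages)
    if response["tool_calls"]:
        # Execute each requested tool and append its result as a tool message
        for call in response["tool_calls"]:
            result = fake_search(**call["args"])
            messages.append({"role": "tool", "content": result})
        response = fake_model(messages)  # second pass with tool output available
    return response["content"]

print(run_chain("What is LangChain?"))
```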

Step 9: Testing the AI Assistant

def test_search_chain():
    test_queries = [
        "What is LangChain?",
        "Latest developments in AI for 2024",
        "How does LangChain work with different LLMs"
    ]
    for query in test_queries:
        response = enhanced_search_chain.invoke(query)
        print(response.content)

This function validates the AI assistant setup by running various test queries, ensuring it can effectively return useful information.

Step 10: Run the Assistant

if __name__ == "__main__":
    test_search_chain()
    while True:
        user_query = input("Your question: ").strip()
        if not user_query:
            continue
        if user_query.lower() in ("quit", "exit"):
            break
        response = enhanced_search_chain.invoke(user_query)
        print(response.content)

This entry point runs the test queries first, then drops into an interactive loop that answers custom questions in real time until the user types quit or exit.

Conclusion

In summary, we have successfully built an AI assistant that leverages LangChain’s framework, Gemini 2.0 Flash’s generative capabilities, and Jina Search’s real-time web search functionality. This approach expands the assistant’s knowledge base beyond static data to provide users with timely and relevant information from credible sources. You can further enhance this project by integrating additional tools or deploying the assistant as an API or web application.

For further reading and the complete code, check out the project on GitHub.
