Gemini DeepSearch

The Gemini DeepSearch MCP Server is an automated research agent designed to perform in-depth, multi-step web research.

It uses Google’s Gemini models and Google Search to generate sophisticated queries, synthesize information, and deliver high-quality, citation-rich answers.

This MCP server is built to tackle complex research questions by breaking them down, exploring different angles, and compiling the findings into a comprehensive response.

It can be useful for developers, analysts, and content creators who need to quickly gather and verify information from multiple web sources.

Features

  • 🧠 Automated Multi-Step Research: Leverages Gemini models and Google Search for deep investigation into topics.
  • 🚀 FastMCP Integration: Can be deployed as both an HTTP API and a stdio server for flexible integration.
  • 📊 Configurable Research Depth: Offers “low,” “medium,” and “high” effort levels to control the thoroughness of the research.
  • 📚 Citation-Rich Responses: Automatically tracks and lists the sources used to generate the answer.
  • 🔄 LangGraph-Powered Workflow: Utilizes a stateful graph framework to manage the research process effectively.

Use Cases

  • Initial Project Scoping: When starting a new project, you can use Gemini DeepSearch to quickly gather information on existing solutions, libraries, and potential challenges. This saves you the time of manually sifting through search results.
  • Technical Problem Solving: If you’re stuck on a complex coding problem, you can ask the server to research potential solutions, error messages, or alternative approaches, providing you with a curated list of relevant articles and documentation.
  • Content and Blog Post Research: For those who write technical articles or blog posts, this tool can help you gather background information, statistics, and different viewpoints on a topic, complete with citations.
  • Competitor Analysis: You can use it to research what other companies in your space are working on, what technologies they are using, and what the community is saying about them.

How To Use It

1. You can install the MCP server directly using uvx:

uvx install gemini-deepsearch-mcp

A GEMINI_API_KEY environment variable is required for the server to function.
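Before launching, export the key in the shell that will run the server. The value below is a placeholder, not a real key:

```shell
# Export the API key so the server process inherits it.
# Replace the placeholder with your actual Gemini API key.
export GEMINI_API_KEY="your-gemini-api-key-here"
```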

2. Run the development server with an HTTP interface and the Studio UI.

make dev

3. For local integration with MCP clients via stdio, start the server with:

make local

4. You can run the test suite to ensure everything is working correctly:

make test

5. To specifically test the MCP stdio server:

make test_mcp

6. The MCP inspector is available for debugging:

make inspect

7. To use the inspector with LangSmith tracing, set your API keys and enable tracing:

GEMINI_API_KEY=AI******* LANGSMITH_API_KEY=ls******* LANGSMITH_TRACING=true make inspect

8. The deep_search tool has two main parameters:

  • query (string): The research question you want to investigate.
  • effort (string): The desired depth of the research. This can be “low”, “medium”, or “high”.
    • Low: 1 query, 1 loop, using the Flash model.
    • Medium: 3 queries, 2 loops, using the Flash model.
    • High: 5 queries, 3 loops, using the Pro model.
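The effort presets above can be summarized as a simple lookup table. This is an illustrative sketch of the behavior described, not the server's internal code; the dictionary keys and model labels are assumptions:

```python
# Illustrative mapping of effort levels to research parameters,
# based on the description above (names are assumptions, not the
# server's actual internals).
EFFORT_LEVELS = {
    "low":    {"queries": 1, "loops": 1, "model": "flash"},
    "medium": {"queries": 3, "loops": 2, "model": "flash"},
    "high":   {"queries": 5, "loops": 3, "model": "pro"},
}

def effort_settings(effort: str) -> dict:
    """Look up the query count, loop count, and model tier for an effort level."""
    return EFFORT_LEVELS[effort]
```

Higher effort trades latency for thoroughness: more queries per loop and more loops mean more sources are considered before the answer is synthesized.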

9. The server’s output format depends on how it’s being run:

  • HTTP MCP Server: Returns a JSON object with an answer (the detailed research response) and a sources list (the URLs used).
  • Stdio MCP Server: Returns a file_path to a JSON file in the system’s temporary directory. This file contains the same answer and sources data. This method is optimized for token usage in environments like Claude Desktop.
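When running over stdio, the client receives only a path and must read the JSON file itself. A minimal sketch of that step, assuming the payload layout described above (an object with `answer` and `sources` keys; the helper name is hypothetical):

```python
import json

def load_deepsearch_result(file_path: str):
    """Read the answer text and source URLs from the JSON file
    that the stdio server writes to the temp directory."""
    with open(file_path) as f:
        data = json.load(f)
    return data["answer"], data["sources"]
```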

10. To integrate with Claude Desktop, you need to add the server configuration to your claude_desktop_config.json file. The location of this file varies by operating system:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/claude/claude_desktop_config.json

Here is a sample configuration:

{
  "mcpServers": {
    "gemini-deepsearch": {
      "command": "uvx",
      "args": ["gemini-deepsearch-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      },
      "timeout": 180000
    }
  }
}

Remember to replace "your-gemini-api-key-here" with your actual Gemini API key and restart Claude Desktop. The sample sets a generous 180-second timeout, which helps prevent errors on longer research runs.

11. For local development, you can point to your source code directly:

{
  "mcpServers": {
    "gemini-deepsearch": {
      "command": "uv",
      "args": ["run", "python", "main.py"],
      "cwd": "/path/to/gemini-deepsearch-mcp",
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}

Make sure to replace the cwd path with the absolute path to your project directory.

FAQs

Q: What is the difference between the ‘medium’ and ‘high’ effort levels?
A: The ‘medium’ effort level performs 3 queries over 2 research loops with the Gemini Flash model. The ‘high’ effort level is more thorough, executing 5 queries over 3 loops with the more powerful Gemini Pro model. ‘High’ will take longer but should produce a more comprehensive answer.

Q: Why does the stdio server return a file path instead of the direct answer?
A: This is an optimization to reduce the number of tokens passed back to the client, which can be a concern in some large language model environments. By providing a file path, the client can read the potentially large research output without it counting against token limits in the primary interaction.

Q: I’m getting a timeout error in Claude Desktop. What should I do?
A: Research, especially at higher effort levels, can take some time. The default timeout might not be sufficient. In your claude_desktop_config.json, increase the timeout value (in milliseconds) for the gemini-deepsearch server. A value of 180000 (3 minutes) is a good starting point.

Q: Can I use this with other MCP clients besides Claude Desktop?
A: Yes. Since it supports both HTTP and stdio, you can integrate it with any client that is compatible with the Model Context Protocol.


