Gemini MCP Tool

The Gemini MCP Tool is an MCP server that connects an AI assistant, like Claude, with the Google Gemini CLI.

It lets you use Gemini’s large context window to analyze big files and codebases directly within your preferred MCP clients.

Features

  • 🔗 Direct Gemini CLI Integration – Execute Gemini commands directly from Claude through MCP
  • 📁 File Reference Support – Use @ syntax to analyze specific files or entire directories
  • 🛡️ Sandbox Mode – Safe code execution and testing in isolated environments
  • 🎯 Model Selection – Choose from different Gemini models including gemini-2.5-flash
  • 💬 Natural Language Interface – Ask questions in plain English without complex command syntax
  • NPX Support – Run without installation using npx for immediate setup
  • 🔧 Slash Commands – Built-in commands for Claude Code interface integration
  • 📊 Large File Analysis – Leverage Gemini’s extensive context window for massive codebases

Use Cases

  • Legacy Code Analysis: Analyze large, undocumented codebases where token limits prevent comprehensive understanding in a single Claude session
  • Project Documentation: Generate detailed documentation for complex projects by having Gemini analyze entire directory structures and dependencies
  • Code Review at Scale: Review pull requests or commits across multiple files simultaneously without hitting context limitations
  • Safe Script Testing: Use sandbox mode to test potentially risky scripts or commands before implementing them in production environments

How To Use It

1. Add the MCP Server to your MCP clients like Claude:

    claude mcp add gemini-cli -- npx -y gemini-mcp-tool

2. Type /mcp in the Claude Code interface to confirm that the gemini-cli MCP server is active.

3. If you prefer to install the package globally, run:

    npm install -g gemini-mcp-tool

Then, configure your client (like Claude Desktop) by adding the following to your mcpServers JSON object:

    "gemini-cli": {
      "command": "gemini-mcp"
    }
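For context, the snippet above sits inside the top-level mcpServers object of the client’s configuration file. A minimal sketch of a complete config, assuming a global install so the gemini-mcp binary is on your PATH (the configuration file name and location vary by MCP client):

```json
{
  "mcpServers": {
    "gemini-cli": {
      "command": "gemini-mcp"
    }
  }
}
```

If you use the npx approach instead of a global install, set "command" to "npx" and pass ["-y", "gemini-mcp-tool"] as "args".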

4. You can then interact with the MCP server using natural language or specific slash commands:

  • ask gemini to analyze @src/main.js and explain what it does
  • use gemini to summarize @. the current directory
  • use gemini sandbox to install numpy and create a data visualization
  • /analyze prompt:<your_prompt> – Analyzes files/directories or asks a general question. Example: /analyze prompt:@src/ summarize this directory
  • /sandbox prompt:<your_prompt> – Tests code or scripts in the sandbox. Example: /sandbox prompt:@script.py Test this script safely
  • /help – Displays help information.
  • /ping – Tests the connection to the server.

FAQs

Q: Do I need to install Gemini CLI separately?
A: Yes, the Gemini CLI must be installed and configured independently. This tool acts as an MCP interface to your existing Gemini CLI installation.

Q: What’s the difference between sandbox mode and regular mode?
A: Sandbox mode provides an isolated environment for safe code execution and testing. Use it when running potentially risky scripts or commands that you want to test before implementing.

Q: How do I reference multiple files in a single request?
A: Use multiple @ references in your prompt: analyze @src/main.js @src/utils.js @package.json and explain the project structure


