Claude + Gemini

This is an open-source MCP server that connects Claude to Google’s Gemini models.

Rather than working in isolation, the two assistants can coordinate their approaches, question each other’s assumptions, and build on shared insights, collaborating on problems while maintaining conversation history across different tools and interactions.

Features

  • πŸ€– AI-to-AI conversation threading – Gemini and Claude collaborate across multiple exchanges with full context retention
  • 🧠 Extended reasoning capabilities – Access Gemini’s specialized thinking models for deep analysis and problem-solving
  • πŸ“Š Massive context window – Utilize Gemini 2.5 Pro’s 1M token capacity for comprehensive codebase analysis
  • πŸ” Professional code review – Get detailed feedback with severity levels and actionable recommendations
  • πŸ› Expert debugging assistance – Systematic root cause analysis with multiple hypothesis generation
  • πŸ”„ Cross-tool continuation – Start with one tool and seamlessly continue with another while preserving context
  • πŸ“ Smart file handling – Automatically expands directories, filters relevant files, and manages token limits
  • πŸš€ Pre-commit validation – Comprehensive review of git changes across multiple repositories
  • πŸ”— Docker integration – Complete containerized setup with Redis for conversation persistence
  • βš™οΈ Configurable thinking modes – Control analysis depth and token costs based on task complexity

Use Cases

  • Code Architecture Reviews: Analyze entire codebases that exceed Claude’s context limits, getting comprehensive insights across hundreds of files while maintaining architectural coherence and identifying cross-module dependencies.
  • Complex Debugging Workflows: Start by having Claude identify potential issues, then use Gemini for deep root cause analysis with full system context, and finally coordinate both AIs to implement and validate solutions.
  • Collaborative Design Validation: Present your system design to both AIs for different perspectives – Claude can focus on implementation feasibility while Gemini evaluates scalability and edge cases, creating a comprehensive design review process.
  • Multi-Repository Development: Work across multiple related projects simultaneously, with both AIs maintaining context about dependencies, shared libraries, and cross-project impacts during development and refactoring tasks.

How to Use It

1. Get an API key from Google AI Studio.

2. Open your terminal and run these commands:

    # Clone the repository
    git clone https://github.com/BeehiveInnovations/gemini-mcp-server.git
    cd gemini-mcp-server
    # Run the one-command setup
    ./setup-docker.sh

    This script builds the Docker images, creates a .env file for your key, starts the necessary Redis service for conversation memory, and launches the MCP server.

3. Edit the newly created .env file to add your Gemini API key:

    # Open the file in a text editor
    nano .env
    # Add your key
    GEMINI_API_KEY=your-gemini-api-key-here

4. Configure Claude Desktop:

    # Add the MCP server
    claude mcp add gemini -s user -- docker exec -i gemini-mcp-server python server.py
    # Verify it was added
    claude mcp list

    You can also manually edit the claude_desktop_config.json file.

    {
      "mcpServers": {
        "gemini": {
          "command": "docker",
          "args": [
            "exec",
            "-i",
            "gemini-mcp-server",
            "python",
            "server.py"
          ]
        }
      }
    }

5. Quit and restart the Claude Desktop application completely to apply the changes.

6. Now you can start giving Claude tasks for Gemini. Just talk to it naturally:

    • “Use gemini to review this code for security issues.”
    • “Get gemini to think deeper about this architecture design.”
    • “Analyze these files with gemini to understand the data flow.”

Available Tools

    chat – General development conversations and collaborative thinking

    • Use for brainstorming, second opinions, and technology comparisons
    • Example: “Use gemini to compare Redis vs Memcached for session storage”

    thinkdeep – Extended reasoning and problem-solving

    • Get deeper analysis on complex architectural decisions
    • Example: “Use gemini to think deeper about my authentication design”

    codereview – Professional code review with severity levels

    • Comprehensive analysis focusing on bugs, security, and performance
    • Example: “Use gemini to review auth.py for security issues”

    precommit – Pre-commit validation across repositories

    • Validates changes against requirements and catches incomplete implementations
    • Example: “Use gemini to review my pending changes before I commit”

    debug – Expert debugging with root cause analysis

    • Systematic hypothesis generation for complex problems
    • Example: “Use gemini to debug this TypeError with full stack trace”

    analyze – Smart file and architecture analysis

    • Understand code structure, patterns, and dependencies across large codebases
    • Example: “Use gemini to analyze the src/ directory for architectural patterns”

Tool Parameters

    File Processing

    • files: List of absolute file paths or directories
    • thinking_mode: minimal|low|medium|high|max (controls analysis depth and token usage)
    • use_websearch: Enable web search recommendations for current documentation

    Code Review Specific

    • review_type: full|security|performance|quick
    • severity_filter: critical|high|medium|all
    • standards: Coding standards to enforce

    Debugging Specific

    • error_context: Stack traces or error logs
    • runtime_info: Environment and system details
    • previous_attempts: What solutions have been tried

    Analysis Specific

    • analysis_type: architecture|performance|security|quality|general
    • output_format: summary|detailed|actionable
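Putting the parameters together, a security-focused codereview request might carry a payload like the sketch below. This is purely illustrative: the file path is a placeholder, and the exact request shape depends on your MCP client.

```python
# Hypothetical parameter payload for the codereview tool; the path and
# chosen values are placeholders, not required defaults.
codereview_params = {
    "files": ["/home/user/project/auth.py"],  # absolute paths only
    "review_type": "security",                # full|security|performance|quick
    "severity_filter": "high",                # critical|high|medium|all
    "thinking_mode": "high",                  # minimal|low|medium|high|max
    "use_websearch": True,
}
```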

Thinking Modes and Token Management

    Control analysis depth and API costs:

    • minimal (128 tokens): Simple tasks, quick explanations
    • low (2,048 tokens): Basic reasoning, style checks
    • medium (8,192 tokens): Default for most development tasks
    • high (16,384 tokens): Complex problems, security reviews
    • max (32,768 tokens): Exhaustive analysis for critical issues
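The budgets above can be summarized in a small lookup table. This is an illustrative sketch, not the server’s actual code; the function name is made up for the example.

```python
# Token budget per thinking mode, matching the list above.
THINKING_MODE_TOKENS = {
    "minimal": 128,
    "low": 2_048,
    "medium": 8_192,
    "high": 16_384,
    "max": 32_768,
}

def budget(mode: str = "medium") -> int:
    """Return the token budget for a mode, defaulting to 'medium'."""
    return THINKING_MODE_TOKENS[mode]
```

Note that max is 256x the size of minimal, which is why mode selection directly drives API cost.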

    Example usage:

    # Save tokens for simple tasks
    "Use gemini with minimal thinking to check code style"
    # Invest tokens for security-critical analysis  
    "Use gemini with high thinking to audit our encryption module"

AI-to-AI Conversation Threading

    The server enables true collaborative workflows:

    1. Cross-questioning: AIs challenge each other’s assumptions
    2. Coordinated problem-solving: Each AI contributes unique strengths
    3. Context building: Claude gathers data while Gemini provides analysis
    4. Cross-tool continuation: Start with one tool, continue with another using the same conversation thread

    Example collaborative workflow:

    1. "Use gemini to analyze /src/auth.py for security issues"
    2. "Use gemini to review the authentication logic thoroughly" 
       (continues previous conversation)
    3. "Use gemini to help debug the auth test failures"
       (builds on previous analysis and review)

FAQs

    Q: What is the real difference between the analyze and codereview tools?
    A: Think of it this way: analyze is for understanding, and codereview is for improving. Use analyze when you want to know how a piece of code works, what its architecture is, or how different files connect. Use codereview when you want to find specific problems like bugs, security flaws, or performance bottlenecks and get actionable feedback on how to fix them.

    Q: Can this server access any file on my computer? That seems risky.
    A: By default, the server can access files within your user’s home directory, and all file paths must be absolute. For better security, you can create a sandbox. Set the MCP_PROJECT_ROOT environment variable to a specific project directory. This action will restrict all file access to only that folder and its subdirectories.
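For example, you could export the variable before starting the server (or add the same line to your .env file); the path below is a placeholder:

```shell
# Confine all file access to a single project tree (placeholder path).
export MCP_PROJECT_ROOT="/home/user/myproject"
```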

    Q: My prompt with all its context is huge. Will that cause an error with MCP’s token limit?
    A: No, the server handles this automatically. If you provide a prompt that’s too large for the MCP protocol’s limit, the server instructs Claude to save it to a temporary file and resend the request. The server then reads the full prompt from the file, bypassing the limit and preserving the entire token capacity for Gemini’s response.
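Conceptually, the fallback behaves like the sketch below. This is illustrative only: the limit value, function name, and return shape are assumptions for the example, not the server’s real implementation.

```python
import os
import tempfile

MCP_PROMPT_LIMIT = 50_000  # assumed character budget, for illustration only

def prepare_prompt(prompt: str) -> dict:
    """Return the prompt inline if it fits, else spill it to a temp file."""
    if len(prompt) <= MCP_PROMPT_LIMIT:
        return {"prompt": prompt}
    # Too large for one MCP message: write it to a temp file and send the
    # path instead, so the full text can be read back outside the protocol.
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as f:
        f.write(prompt)
    return {"prompt_file": path}
```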

    Q: How do thinking modes affect API costs?
    A: Thinking modes directly impact token usage and costs. Minimal mode uses 128 tokens while max mode uses 32,768 tokens – a 256x difference. Claude automatically selects appropriate modes, but you can override for specific needs. Use lower modes for simple tasks and higher modes when thorough analysis justifies the cost.


