Gemini Coding Assistant

The Gemini Coding Assistant MCP Server transforms Claude Code into a collaborative coding environment where you can consult Google’s Gemini AI for complex programming challenges.

This MCP server maintains conversation context across multiple queries, handles file attachments, and provides persistent sessions for in-depth technical discussions.

Features

  • 🔄 Session Management: Maintain conversation context across multiple queries with automatic cleanup
  • 📁 File Attachments: Read and include actual code files directly from your filesystem
  • 🧠 Context Caching: Code context and file content cached per session for efficient follow-ups
  • 💬 Multi-turn Conversations: Ask follow-up questions without resending entire codebases
  • 🔧 Hybrid Context: Combine text-based code snippets with file attachments in single queries
  • ⚡ Parallel Sessions: Run multiple simultaneous conversations for different coding problems
  • 🤖 Latest AI Model: Uses Gemini 2.5 Pro for advanced code analysis and suggestions
  • 🔒 Automatic Security: Built-in rate limiting and session expiry for secure usage

Use Cases

  • Architecture Reviews: Upload entire project structures and get comprehensive feedback on code organization, design patterns, and architectural decisions from a fresh AI perspective
  • Performance Optimization: Attach multiple related files showing performance bottlenecks and receive specific optimization strategies with context about your entire codebase
  • Debugging Complex Issues: Send problematic code files along with error logs and stack traces to get detailed debugging assistance that considers your full application context
  • Code Refactoring Guidance: Submit legacy code modules for modernization advice while maintaining awareness of dependencies and integration points across your project

How to Use It

1. Clone the repository and set up your Python environment. Create a virtual environment to isolate dependencies and activate it using the appropriate command for your operating system.

python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

2. Install the required dependencies from the requirements file, then configure your environment variables by copying the example file and adding your Gemini API key.

pip install -r requirements.txt
cp .env.example .env

3. Edit the .env file to include your GEMINI_API_KEY. Finally, add the server to Claude Code by specifying the full path to the start_server.sh script.
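
A minimal .env entry, assuming the variable name matches the key referenced above (the value shown is a placeholder):

GEMINI_API_KEY=your-api-key-here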

claude mcp add gemini-coding -s user -- /path/to/gemini-mcp/start_server.sh

Available Tools and Parameters

The primary tool is consult_gemini, which accepts the following parameters:

  • session_id (optional): Continue an existing conversation using the session identifier
  • problem_description (required for new sessions): Clear description of the coding challenge
  • code_context (required for new sessions unless files are attached): Relevant code snippets or pseudocode
  • attached_files (optional): Array of absolute file paths to read and include
  • file_descriptions (optional): Object mapping file paths to descriptive explanations
  • specific_question (required): The exact question you want answered
  • additional_context (optional): Updates or changes since the last question
  • preferred_approach (optional): Type of assistance needed (solution/review/debug/optimize/explain/follow-up)

Two companion tools manage sessions (see the invocation sketch below):

  • list_sessions: Display all active consultation sessions with summaries
  • end_session: Terminate a specific session to free memory and clean up resources
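
The examples on this page only show /consult_gemini; assuming the session-management tools follow the same slash-command convention, invoking them would look roughly like this (the parameter name for end_session is an assumption, not confirmed by the docs):

/list_sessions

/end_session
  session_id: "abc123..."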

Starting New Conversations

Begin a new consultation by providing a problem description and your code context. You can include code directly in the code_context parameter or attach files using absolute paths.

/consult_gemini
  problem_description: "React component experiencing memory leaks during state updates"
  attached_files: ["/absolute/path/to/src/components/UserDashboard.jsx", "/absolute/path/to/src/hooks/useUserData.js"]
  file_descriptions: {
    "/absolute/path/to/src/components/UserDashboard.jsx": "Main dashboard component with suspected memory issues",
    "/absolute/path/to/src/hooks/useUserData.js": "Custom hook managing user data state"
  }
  specific_question: "What causes the memory leak and how can I fix it?"
  preferred_approach: "debug"

Continuing Conversations

Use the session ID returned from your initial consultation to ask follow-up questions without resending all your code context.

/consult_gemini
  session_id: "abc123..."
  specific_question: "I applied your memory leak fix but now getting stale data. How do I handle cache invalidation?"
  additional_context: "Implemented useCallback and useMemo as suggested, but users see outdated information"
  preferred_approach: "follow-up"

Integration with Claude Code Development Kit

The server is most effective when paired with the Claude Code Development Kit. This integration provides automated context injection through the gemini-context-injector.sh hook, which automatically attaches project-specific files like MCP-ASSISTANT-RULES.md and project-structure.md to new sessions.

The Development Kit transforms Claude Code into an orchestrated environment where complex commands spawn specialized agents that can consult Gemini for architectural decisions. Commands like /full-context automatically leverage Gemini for complex problems, while security scanning hooks prevent sensitive data from being transmitted to external services.

FAQs

Q: What file types can I attach to conversations?
A: The server supports all text-based files, including JavaScript, Python, TypeScript, JSON, HTML, CSS, and configuration files. Binary files cannot be processed, but you can include relevant excerpts in the code_context parameter.
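
A common way to make this text-versus-binary distinction, and plausibly how a server like this screens attachments (an assumption, not confirmed by the docs), is simply attempting a UTF-8 decode:

from pathlib import Path

def is_text_file(path: str, sample_size: int = 8192) -> bool:
    """Heuristic: treat the file as text if a leading sample decodes as UTF-8."""
    sample = Path(path).read_bytes()[:sample_size]
    if b"\x00" in sample:  # NUL bytes are a strong binary signal
        return False
    try:
        sample.decode("utf-8")  # may rarely false-negative if the sample cuts a multi-byte char
        return True
    except UnicodeDecodeError:
        return False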

Q: How long do sessions remain active?
A: Sessions automatically expire after one hour of inactivity to preserve memory and clean up temporary files. You can manually end sessions using the end_session tool when your consultation is complete.
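
The one-hour inactivity window suggests a simple TTL sweep; a minimal sketch of how such expiry might work (illustrative only, not the server's actual code):

import time

SESSION_TTL = 60 * 60  # one hour of inactivity, per the FAQ above

sessions = {}  # session_id -> {"last_used": float, "context": ...}

def touch(session_id: str) -> None:
    """Record activity so the session's expiry clock restarts."""
    sessions[session_id]["last_used"] = time.time()

def sweep_expired() -> None:
    """Drop sessions idle longer than the TTL, freeing cached file content."""
    now = time.time()
    for sid in [s for s, v in sessions.items() if now - v["last_used"] > SESSION_TTL]:
        del sessions[sid]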

Q: Can I work on multiple coding problems simultaneously?
A: Yes, you can run multiple parallel sessions for different problems. Each session maintains its own context and conversation history, allowing you to switch between different coding challenges without interference.

Q: What happens to my code files after consultation?
A: File contents are cached in memory during active sessions and automatically cleaned up when sessions expire or are manually ended. No persistent storage occurs, and your code remains secure on your local filesystem.

Q: How much code context can I include in a single query?
A: The combined input limit is approximately 50,000 characters per message. For larger codebases, focus on the most relevant files and use the code_context parameter to provide essential snippets while attaching key files.
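
Given the roughly 50,000-character ceiling, it can help to budget a query before sending it; a hypothetical client-side check (the limit comes from this page, the helper itself is illustrative):

MAX_CHARS = 50_000  # approximate combined input limit per message

def within_budget(code_context: str, attached: dict[str, str]) -> bool:
    """attached maps file paths to their contents; True if the combined payload fits."""
    total = len(code_context) + sum(len(body) for body in attached.values())
    return total <= MAX_CHARS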
