Clojure MCP

The Clojure MCP server connects AI models, such as those from Anthropic or OpenAI, directly to a running Clojure nREPL. It also bundles a set of specialized Clojure editing tools.

Features

  • Clojure REPL Connection: Directly hooks into your running nREPL.
  • Clojure-Aware Editing: Uses tools like clj-kondo, parinfer, cljfmt, and clj-rewrite for smart editing.
  • Optimized Clojure Toolset: A comprehensive set of tools for Clojure development, going beyond what Claude Code offers.
  • Emacs Edit Highlighting: This is an alpha feature, but shows promise for Emacs users.

How to Use Clojure MCP

Prerequisites

  • Clojure (1.11 or later)
  • Java (JDK 11 or later)
  • Claude Desktop (recommended for the best experience)

Setting up the project

1. Get the Clojure MCP Server

git clone https://github.com/bhauman/clojure-mcp.git

Or, add it as a git dependency in your deps.edn:

{:deps {com.bhauman/clojure-mcp {:git/url "https://github.com/bhauman/clojure-mcp.git"
                                 :git/sha "latest-main-branch-sha"}}} ; Make sure to use an actual SHA

2. In the Clojure project where you want AI assistance, you need to update its deps.edn:

{:aliases {
  ;; nREPL server for AI to connect to
  :nrepl {:extra-paths ["test"]
          :extra-deps {nrepl/nrepl {:mvn/version "1.3.1"}
                       ch.qos.logback/logback-classic {:mvn/version "1.4.14"}}
          :jvm-opts ["-Djdk.attach.allowAttachSelf"]
          :main-opts ["-m" "nrepl.cmdline" "--port" "7888"]}
  ;; MCP server configuration
  :mcp {:extra-deps {org.slf4j/slf4j-nop {:mvn/version "2.0.16"}
                     com.bhauman/clojure-mcp {:local/root "/path/to/your/cloned/clojure-mcp"}} ; Or use the git dep
        :exec-fn clojure-mcp.main/start-mcp-server
        :exec-args {:port 7888}}}}

Remember to replace /path/to/your/cloned/clojure-mcp if you cloned the repository locally and are using :local/root. If you used the git dependency instead, replace the :local/root entry with the :git/url and :git/sha coordinates shown earlier.
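If you chose the git dependency, the :mcp alias might look like the following sketch (the SHA is a placeholder; substitute a real commit SHA from the repository):

```clojure
:mcp {:extra-deps {org.slf4j/slf4j-nop {:mvn/version "2.0.16"}
                   com.bhauman/clojure-mcp {:git/url "https://github.com/bhauman/clojure-mcp.git"
                                            :git/sha "latest-main-branch-sha"}} ; replace with an actual SHA
      :exec-fn clojure-mcp.main/start-mcp-server
      :exec-args {:port 7888}}
```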

A key point: the MCP server doesn’t have to run inside your project directory. It uses the nREPL connection for context. The root directory of the project running the nREPL server becomes the root for all MCP tool actions. Currently, the nREPL and MCP server need to be on the same machine due to an assumption of a shared file system.
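To make that concrete, here is a hypothetical sketch (not the server's actual implementation) of how a client can ask the connected nREPL process for its working directory, assuming the nrepl library is on the classpath and the server from the :nrepl alias is listening on port 7888:

```clojure
(require '[nrepl.core :as nrepl])

;; Evaluate (System/getProperty "user.dir") in the remote nREPL process.
;; The directory it returns is the project root that anchors all
;; file-oriented MCP tool actions.
(with-open [conn (nrepl/connect :port 7888)]
  (-> (nrepl/client conn 1000)
      (nrepl/message {:op "eval"
                      :code "(System/getProperty \"user.dir\")"})
      nrepl/response-values
      first))
```

Because the root comes from the remote process rather than from where the MCP server was launched, both processes must currently see the same file system.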

3. You’ll need to edit your Claude Desktop configuration file. On a Mac, this is typically at ~/Library/Application Support/Claude/claude_desktop_config.json.

{
    "mcpServers": {
        "clojure-mcp": {
            "command": "/bin/sh",
            "args": [
                "-c",
                "cd /path/to/your/clojure/project && PATH=/your/bin-or-nix/path:$PATH && clojure -X:mcp :port 7888"
            ]
        }
    }
}

Adjust /path/to/your/clojure/project to your actual project’s path, and /your/bin-or-nix/path to your system’s binary path (e.g., /Users/yourname/.nix-profile/bin or wherever your clojure command lives).

4. Test the Setup

  1. Start the nREPL in your target Clojure project:
    cd /path/to/your/project
    clojure -M:nrepl
    You should see a message like nREPL server started on port 7888....
  2. Restart Claude Desktop. This is necessary for it to pick up the configuration changes.
  3. Verify the connection. In Claude Desktop, click the + button in the chat area. You should see an option like “Add from clojure-mcp”.

5. In Claude Desktop, click the + (tools) button. You might want to add:

  • Resource PROJECT_SUMMARY.md: The LLM can help create this. More on this below.
  • Resource Clojure Project Info: This introspects the project connected via nREPL.
  • Resource LLM_CODE_STYLE.md: Your personal coding style guide. You can copy the example from the clojure-mcp repository.
  • Prompt clojure_repl_system_prompt: This provides instructions to the AI on how to code in Clojure.

Then, start your chat. You can begin by stating a problem and interactively designing a solution with the LLM. Ask it to “propose” a solution, and iterate a bit. Then you can have it:

  • Code and validate the idea directly in the REPL. (LLMs are surprisingly good at this!)
  • Make changes to your source code files and then validate in the REPL.
  • Run tests.
  • Commit the changes (make a branch first!).
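For the commit step, a minimal git sequence (run inside your project; the branch name is just an example) keeps AI-made changes isolated until you have reviewed them:

```shell
# Work on a throwaway branch so AI edits are easy to review or discard
git checkout -b ai/repl-session
git add -A
git commit -m "AI-assisted change validated in the REPL"
```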

Project Summary Management

The system has a neat workflow for maintaining a PROJECT_SUMMARY.md file. This file helps the AI quickly grasp your codebase structure.

  • Creating/Updating: Use the MCP prompt create-project-summary (found in the + > clojure-mcp menu). It analyzes your code, documents key files, dependencies, and tools.
  • Using: When you start a new chat, the “Project Summary” resource (if added) loads this file, giving the AI immediate context.
  • Keeping Current: After a productive session where you’ve added new things, run create-project-summary again to update it.

FAQs

Q: What’s the main difference between using Clojure MCP and just, say, Claude Code directly with a Clojure project?
A: Clojure MCP offers a more deeply integrated experience with the Clojure REPL and a set of Clojure-aware editing tools. It’s designed as a cohesive system for REPL-driven development with AI, rather than just general code assistance. The stateful file tracking and specialized structural editing tools are key differentiators.

Q: Do I need to pay for LLM API keys to use Clojure MCP?
A: No. The core functionality, especially when used with Claude Desktop, does not require you to provide your own LLM API keys. The API keys are only necessary if you want to use the specific “agent tools” (dispatch_agent, architect, code_critique) which can make their own calls to models from Google, OpenAI, or Anthropic.


General MCP FAQs

Q: What exactly is the Model Context Protocol (MCP)?

A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.

Q: How is MCP different from OpenAI's function calling or plugins?

A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.

Q: Can I use MCP with frameworks like LangChain?

A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.

Q: Why was MCP created? What problem does it solve?

A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.

Q: Is MCP secure? What are the main risks?

A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.

Q: Who is behind MCP?

A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.
