Claude Peers
claude-peers is an MCP server that enables multiple Claude Code instances to discover each other and exchange messages in real time. When you run several Claude Code sessions across different projects, each instance can find the others and send messages that arrive instantly via the channel protocol.
Terminal 1 (poker-engine) Terminal 2 (eel)
┌───────────────────────┐ ┌──────────────────────┐
│ Claude A │ │ Claude B │
│ "send a message to │ ──────> │ │
│ peer xyz: what files │ │ <channel> arrives │
│ are you editing?" │ <────── │ instantly, Claude B │
│ │ │ responds │
└───────────────────────┘          └──────────────────────┘
Features
- Discovers other Claude Code instances on the same machine with configurable scope (machine-wide, directory, or repository).
- Sends messages to specific peers by ID with instant delivery via channel push.
- Sets a visible summary for each session so other instances know what you are working on.
- Falls back to manual message checking when channel mode is unavailable.
- Maintains a local broker daemon with SQLite persistence that auto-launches on first use and cleans up dead peers automatically.
- Generates automatic session summaries using OpenAI’s gpt-5.4-nano model when an API key is present.
- Provides a CLI for inspecting broker status, listing peers, sending messages, and stopping the broker.
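The broker's bookkeeping can be illustrated with a small sketch. This is not the actual implementation (the real broker persists peers to SQLite); it models an in-memory peer registry with heartbeat-based dead-peer cleanup, and all names here (`Peer`, `PeerRegistry`, `ttlMs`) are hypothetical.

```typescript
// Hypothetical sketch of the broker's peer registry (in-memory; the real
// broker persists to SQLite). Names and field choices are assumptions.
interface Peer {
  id: string;
  cwd: string;      // working directory of the session
  summary: string;  // what the session says it is working on
  lastSeen: number; // ms timestamp of the last heartbeat
}

class PeerRegistry {
  private peers = new Map<string, Peer>();
  constructor(private ttlMs: number = 30_000) {}

  // Register or refresh a peer on every heartbeat.
  heartbeat(id: string, cwd: string, summary = ""): void {
    const existing = this.peers.get(id);
    this.peers.set(id, {
      id,
      cwd,
      summary: summary || existing?.summary || "",
      lastSeen: Date.now(),
    });
  }

  // Drop peers that have not sent a heartbeat within the TTL.
  cleanup(now: number = Date.now()): string[] {
    const dead: string[] = [];
    for (const [id, peer] of this.peers) {
      if (now - peer.lastSeen > this.ttlMs) {
        this.peers.delete(id);
        dead.push(id);
      }
    }
    return dead;
  }

  list(): Peer[] {
    return [...this.peers.values()];
  }
}
```

A session that stops heartbeating is simply aged out on the next cleanup pass, which is why dead peers disappear from `list_peers` without any explicit deregistration.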
Use Cases
- Coordinate across multiple Claude Code sessions when working on related codebases.
- Ask another session about what files it has open or what branch it is on.
- Share context between Claude instances without switching terminal windows.
- Debug distributed behavior by having Claude sessions communicate their state.
- Automate multi-session workflows through Claude’s tool-calling capabilities.
How To Use It
You must install Bun and Claude Code v2.1.80+ and maintain an active claude.ai login.
1. Clone the repository from GitHub.
git clone https://github.com/louislva/claude-peers-mcp.git ~/claude-peers-mcp
cd ~/claude-peers-mcp
bun install
2. Register the MCP server globally. This command adds the server to every Claude Code session across all directories.
claude mcp add --scope user --transport stdio claude-peers -- bun ~/claude-peers-mcp/server.ts
3. Start Claude Code with the development channel flag. The broker daemon launches automatically during the first session startup.
claude --dangerously-skip-permissions --dangerously-load-development-channels server:claude-peers
4. Create a shell alias to shorten the startup command.
alias claudepeers='claude --dangerously-load-development-channels server:claude-peers'
5. Open a second terminal and start Claude Code with the same command. Ask the assistant to list all peers on the machine; it displays every running instance along with its working directory, git repository, and current task summary. Instruct the assistant to send a message to a specific peer ID, and the receiving instance gets the message immediately and generates a response.
6. The auto-summary feature requires the OPENAI_API_KEY environment variable. The server passes the current directory, git branch, and recent files to the gpt-5.4-nano model, which generates a summary for a fraction of a cent per call. When the API key is missing, each instance sets its own summary via the set_summary tool.
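The summary request described above can be sketched as follows. The payload shape follows OpenAI's chat completions API; the helper names (`WorkspaceContext`, `buildSummaryRequest`) are hypothetical, and the model name is taken from this project's documentation rather than verified against OpenAI's model list.

```typescript
// Hypothetical sketch of assembling the auto-summary request body.
// The server would POST this to https://api.openai.com/v1/chat/completions
// with OPENAI_API_KEY as a bearer token.
interface WorkspaceContext {
  cwd: string;
  branch: string;
  recentFiles: string[];
}

function buildSummaryRequest(ctx: WorkspaceContext) {
  return {
    model: "gpt-5.4-nano", // model name as documented by this project
    messages: [
      {
        role: "system",
        content: "Summarize this coding session in one short sentence.",
      },
      {
        role: "user",
        content:
          `Directory: ${ctx.cwd}\n` +
          `Branch: ${ctx.branch}\n` +
          `Recent files: ${ctx.recentFiles.join(", ")}`,
      },
    ],
    max_tokens: 40, // summaries are a single short sentence
  };
}
```

Keeping the prompt to a directory, a branch, and a short file list is what makes each call cost only a fraction of a cent.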
7. The server exposes four specific tools to the Claude Code instance.
| Tool | Description |
|---|---|
| list_peers | Finds other Claude Code instances scoped to machine, directory, or repo. |
| send_message | Sends a message to another instance by ID via channel push. |
| set_summary | Describes the current working context to other peers. |
| check_messages | Checks for messages manually as a fallback mechanism. |
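One way to picture how these four tools reach the local broker is the routing sketch below. The endpoint paths, HTTP methods, and broker URL are assumptions for illustration, not the server's actual wire format; only the tool names and the default port come from this page.

```typescript
// Hypothetical mapping from the four MCP tools to broker HTTP endpoints.
// Paths and methods are invented for illustration; the default port 7899
// matches CLAUDE_PEERS_PORT's documented default.
type ToolCall =
  | { tool: "list_peers"; scope: "machine" | "directory" | "repo" }
  | { tool: "send_message"; peerId: string; message: string }
  | { tool: "set_summary"; summary: string }
  | { tool: "check_messages" };

function toBrokerRequest(call: ToolCall, port = 7899): { method: string; url: string } {
  const base = `http://127.0.0.1:${port}`;
  switch (call.tool) {
    case "list_peers":
      return { method: "GET", url: `${base}/peers?scope=${call.scope}` };
    case "send_message":
      return { method: "POST", url: `${base}/peers/${call.peerId}/messages` };
    case "set_summary":
      return { method: "PUT", url: `${base}/self/summary` };
    case "check_messages":
      return { method: "GET", url: `${base}/self/messages` };
  }
}
```

The discriminated union makes the dispatch exhaustive: adding a fifth tool without handling it becomes a compile-time error.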
8. The command-line interface provides four commands for manual inspection.
| Command | Description |
|---|---|
| bun cli.ts status | Displays the broker status and lists all active peers. |
| bun cli.ts peers | Lists all registered peers. |
| bun cli.ts send <id> <msg> | Sends a direct message into a specific Claude session. |
| bun cli.ts kill-broker | Stops the local broker daemon. |
9. The system accepts three environment variables for configuration.
| Variable | Default Value | Description |
|---|---|---|
| CLAUDE_PEERS_PORT | 7899 | Defines the local port for the broker daemon. |
| CLAUDE_PEERS_DB | ~/.claude-peers.db | Sets the file path for the SQLite database. |
| OPENAI_API_KEY | (None) | Activates automatic session summaries via gpt-5.4-nano. |
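Resolving these three variables against their documented defaults might look like the sketch below. The function and type names are hypothetical; the defaults are the ones in the table.

```typescript
// Hypothetical sketch of configuration resolution. Defaults match the
// documented values: port 7899, DB at ~/.claude-peers.db, summaries off.
interface PeersConfig {
  port: number;
  dbPath: string;
  openaiKey: string | null;
}

function loadConfig(env: Record<string, string | undefined>): PeersConfig {
  return {
    port: Number(env.CLAUDE_PEERS_PORT ?? "7899"),
    dbPath: env.CLAUDE_PEERS_DB ?? "~/.claude-peers.db",
    openaiKey: env.OPENAI_API_KEY ?? null, // summaries stay manual when unset
  };
}
```

Because every value has a default (or a well-defined "off" state), the server can start with no configuration at all.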
FAQs
Q: Why does the system require a claude.ai login?
A: The server relies on the claude/channel protocol for instant message delivery. The channel protocol works only with claude.ai authenticated sessions; sessions authenticated with a standard API key cannot open channel connections.
Q: How does the auto-summary feature generate descriptions?
A: The system reads the OPENAI_API_KEY environment variable. The server passes the current directory, git branch, and recent files to the gpt-5.4-nano model. The model generates a brief text summary of the active workspace.
Q: What happens if the broker daemon crashes?
A: The broker daemon restarts automatically when you launch a new Claude Code session. You can stop a stuck broker manually using the bun cli.ts kill-broker command.
Q: Can I run the broker on a different port?
A: You can change the port via the CLAUDE_PEERS_PORT environment variable. The MCP servers read this variable to locate the broker daemon.