n8n-MCP
The n8n-MCP server provides your AI assistants, like Claude Desktop, with direct access to n8n’s node library.
You can use it to have an AI generate, validate, and even deploy n8n workflows.
Features
- 🔍 Smart Node Search: Find n8n nodes by their name, function, or category.
- ✅ Configuration Validation: Check your node setups for errors before you deploy them.
- ⚡ Partial Workflow Updates: Modify existing workflows with diff operations, saving tokens and time.
- 📄 Essential Properties: Request just the key properties for a node, not the entire complex object.
- 🚀 Multiple Deployment Options: Run it instantly with NPX, use an isolated Docker container, or deploy to the cloud.
- 🔧 Direct n8n Management: Optionally connect your n8n API key to create, update, and manage workflows from your AI assistant.
How to Use It
Option 1: Run with npx
This approach runs the latest version of n8n-MCP directly without a permanent installation.
Open your Claude Desktop configuration file. The location depends on your operating system:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
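If you are unsure which path applies to your machine, the lookup can be sketched in a few lines of Python (a convenience sketch; `claude_config_path` is a name invented here, not part of n8n-MCP):

```python
import os
import pathlib
import platform

def claude_config_path() -> pathlib.Path:
    """Return the expected claude_desktop_config.json location for the current OS."""
    system = platform.system()
    if system == "Darwin":  # macOS
        return pathlib.Path.home() / "Library" / "Application Support" / "Claude" / "claude_desktop_config.json"
    if system == "Windows":
        return pathlib.Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"
    # Linux and other Unix-like systems
    return pathlib.Path.home() / ".config" / "Claude" / "claude_desktop_config.json"

print(claude_config_path())
```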
Add the following JSON to the mcpServers object. This basic setup enables documentation and validation tools.
```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "npx",
      "args": ["n8n-mcp"],
      "env": {
        "MCP_MODE": "stdio",
        "LOG_LEVEL": "error",
        "DISABLE_CONSOLE_OUTPUT": "true"
      }
    }
  }
}
```

To enable tools that manage your n8n instance directly (like creating or updating workflows), add your n8n API URL and key:
```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "npx",
      "args": ["n8n-mcp"],
      "env": {
        "MCP_MODE": "stdio",
        "LOG_LEVEL": "error",
        "DISABLE_CONSOLE_OUTPUT": "true",
        "N8N_API_URL": "https://your-n8n-instance.com",
        "N8N_API_KEY": "your-api-key"
      }
    }
  }
}
```

Restart the Claude Desktop application for the changes to take effect.
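Rather than editing the file by hand, the merge can also be done programmatically. A minimal sketch (the `add_n8n_mcp` helper is invented here for illustration; the server entry mirrors the basic JSON above):

```python
import json
import pathlib

def add_n8n_mcp(config_path: pathlib.Path) -> None:
    """Merge an n8n-mcp server entry into claude_desktop_config.json,
    preserving any other MCP servers already configured."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["n8n-mcp"] = {
        "command": "npx",
        "args": ["n8n-mcp"],
        "env": {
            "MCP_MODE": "stdio",
            "LOG_LEVEL": "error",
            "DISABLE_CONSOLE_OUTPUT": "true",
        },
    }
    # Write the merged config back with readable indentation.
    config_path.write_text(json.dumps(config, indent=2))
```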
Option 2: Run with Docker
This method uses a pre-built Docker image. It does not require Node.js, only Docker.
Pull the latest image from the GitHub Container Registry.
```shell
docker pull ghcr.io/czlonkowski/n8n-mcp:latest
```

Next, edit your claude_desktop_config.json file. The -i and --init flags are important for proper communication and container lifecycle management.
For documentation and validation tools only:
```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--init",
        "-e", "MCP_MODE=stdio",
        "-e", "LOG_LEVEL=error",
        "-e", "DISABLE_CONSOLE_OUTPUT=true",
        "ghcr.io/czlonkowski/n8n-mcp:latest"
      ]
    }
  }
}
```

To add direct n8n management capabilities, include your API credentials as environment variables:
```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--init",
        "-e", "MCP_MODE=stdio",
        "-e", "LOG_LEVEL=error",
        "-e", "DISABLE_CONSOLE_OUTPUT=true",
        "-e", "N8N_API_URL=https://your-n8n-instance.com",
        "-e", "N8N_API_KEY=your-api-key",
        "ghcr.io/czlonkowski/n8n-mcp:latest"
      ]
    }
  }
}
```

Save the file and restart Claude Desktop.
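Note that each variable is passed as its own -e KEY=VALUE pair. A small helper showing how such an argument list is assembled (the `docker_args` function is invented here purely to illustrate the convention):

```python
def docker_args(image: str, env: dict[str, str]) -> list[str]:
    """Build a `docker run` argument list like the one above:
    one `-e KEY=VALUE` flag per environment variable, image name last."""
    args = ["run", "-i", "--rm", "--init"]
    for key, value in env.items():
        args += ["-e", f"{key}={value}"]
    return args + [image]

print(docker_args("ghcr.io/czlonkowski/n8n-mcp:latest", {"MCP_MODE": "stdio"}))
```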
Available Tools
- tools_documentation: Provides information on how to use all other tools. Start here.
- list_nodes: Lists available n8n nodes. You can filter by category.
- get_node_essentials: Fetches only the most important properties for a node.
- search_nodes: Searches for nodes based on a keyword or description.
- validate_node_operation: Validates a node's configuration for a specific operation.
- validate_workflow: Performs a full validation of a workflow, including connections.
- n8n_create_workflow: Creates a new workflow in your n8n instance (requires API key).
- n8n_update_partial_workflow: Applies small changes to an existing workflow (requires API key).
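Under the hood, an MCP client invokes these tools with a standard JSON-RPC 2.0 tools/call request over stdio. A sketch of what such a request might look like (the tools/call method is part of the MCP specification; the exact argument shape for search_nodes is an assumption here):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_nodes",
    "arguments": { "query": "slack" }
  }
}
```

In practice, Claude Desktop constructs and sends these requests for you; this is only shown to clarify what the tool names above correspond to on the wire.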
FAQs
Q: Do I need to provide my n8n API key?
A: No, the API key is optional. You only need it if you want the AI to directly create, read, update, or delete workflows in your n8n instance. All documentation, search, and validation tools work without it.
Q: Can this tool accidentally modify my live workflows?
A: Yes. If you provide your n8n API credentials, the AI will have the ability to change your workflows. You should always test on copies or in a development environment first. Never let an AI edit production workflows directly without review.
Q: What is the main difference between the npx and Docker installation?
A: npx is a command-line tool that runs a package from the npm registry without installing it locally, but it requires you to have Node.js installed. Docker runs the server in a self-contained, isolated environment, which is cleaner but requires you to have Docker installed and running.
Q: Can I use this with an n8n instance that is also running in Docker?
A: Yes. When configuring the N8N_API_URL in the MCP server’s Docker command, use http://host.docker.internal:5678 to allow the MCP container to communicate with the n8n container on the same machine. Note that host.docker.internal resolves automatically on Docker Desktop (macOS and Windows); on Linux you must also pass --add-host=host.docker.internal:host-gateway to docker run for the name to resolve.
General MCP FAQs
Q: What exactly is the Model Context Protocol (MCP)?
A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.
Q: How is MCP different from OpenAI's function calling or plugins?
A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.
Q: Can I use MCP with frameworks like LangChain?
A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.
Q: Why was MCP created? What problem does it solve?
A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.
Q: Is MCP secure? What are the main risks?
A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.
Q: Who is behind MCP?
A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.



