Cursor n8n MCP Server

The Cursor n8n MCP Server connects your Cursor IDE directly to your n8n workflow automation instance.

It utilizes the Model Context Protocol (MCP) to provide AI assistants with full administrative control over your n8n environment via the REST API.

You can use this server to create, manage, trigger, and debug workflows without ever leaving the code editor.

Features

  • 🤖 Workflow Management: Create, read, update, delete, activate, and deactivate n8n workflows.
  • 📜 Execution Oversight: List the execution history, get detailed logs, and delete execution records.
  • 🪝 Webhook Triggering: Manually trigger a workflow by calling its webhook URL.
  • 🩺 Health Verification: Check connectivity and authentication status with your n8n API.
  • 📚 Integrated Documentation: Access a usage guide and get configuration details for common n8n node types.
  • 🔁 Resilient Connection: Includes automatic retry logic with exponential backoff for API requests.

How to Use

Installation

You have two options for installing the server: using the provided install script or setting it up manually.

Option 1: Automated Script

./install.sh

Option 2: Manual Installation

git clone https://github.com/alicankiraz1/cursor-n8n-mcp.git
cd cursor-n8n-mcp
npm install
npm run build
# Run the interactive setup
node dist/index.js setup

Configuration

Configure the MCP server in your Cursor settings to establish a connection with your n8n instance.

1. Retrieve n8n Credentials

  1. Log in to your n8n instance.
  2. Navigate to Settings > API.
  3. Click Create API Key.
  4. Copy the key immediately.
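Before wiring the key into Cursor, you can sanity-check it outside the IDE. A minimal Python sketch, assuming n8n's standard REST path (/api/v1/workflows) and its X-N8N-API-KEY authentication header; the URL and key below are placeholders:

```python
import urllib.request

def n8n_request(base_url: str, path: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request against the n8n REST API.
    n8n reads the API key from the X-N8N-API-KEY header."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/{path}",
        headers={"X-N8N-API-KEY": api_key, "Accept": "application/json"},
    )

req = n8n_request("https://your-n8n-instance.com", "workflows", "your-api-key")
print(req.full_url)
# Once the placeholders are real, send it with urllib.request.urlopen(req);
# a 200 response confirms the key works.
```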

2. Edit Cursor Configuration

Open or create the file ~/.cursor/mcp.json. Add the cursor-n8n-mcp entry to the mcpServers object. Ensure the path points to your specific installation directory.

{
  "mcpServers": {
    "cursor-n8n-mcp": {
      "command": "node",
      "args": ["/path/to/cursor-n8n-mcp/dist/index.js"],
      "env": {
        "MCP_MODE": "stdio",
        "LOG_LEVEL": "error",
        "N8N_API_URL": "https://your-n8n-instance.com",
        "N8N_API_KEY": "your-api-key"
      }
    }
  }
}

CLI Commands

The server includes a CLI interface for management and help. Run these commands from the project directory:

  • node dist/index.js --help: Displays the help menu.
  • node dist/index.js setup: Launches the interactive setup wizard.
  • node dist/index.js config: Prints a configuration template to the console.

Available Tools

Documentation & Help

  • n8n_tools_help: Retrieves the usage guide and general documentation.
  • n8n_get_node_info: Fetches configuration details for common n8n nodes.
    • Supported Triggers: webhook, scheduleTrigger, manualTrigger
    • Supported Actions: httpRequest, code, set, if, merge
    • Utilities: splitInBatches, respondToWebhook

Workflow Management

  • n8n_list_workflows: Lists all available workflows.
  • n8n_get_workflow: Retrieves full details of a specific workflow by ID.
  • n8n_create_workflow: Creates a new workflow.
  • n8n_update_workflow: Modifies an existing workflow.
  • n8n_delete_workflow: Permanently removes a workflow.
  • n8n_activate_workflow: Sets a workflow to active status.
  • n8n_deactivate_workflow: Sets a workflow to inactive status.
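As a rough illustration of what a creation payload looks like, here is a minimal two-node workflow in n8n's standard JSON format: a manual trigger wired to a Set node. Node names, positions, and parameter values are arbitrary placeholders; treat the exact fields as a sketch and use n8n_get_node_info to confirm the schema for each node type.

```python
import json

# A minimal workflow sketch: a manual trigger connected to a Set node.
workflow = {
    "name": "Hello from Cursor",
    "nodes": [
        {
            "name": "Manual Trigger",
            "type": "n8n-nodes-base.manualTrigger",
            "typeVersion": 1,
            "position": [0, 0],
            "parameters": {},
        },
        {
            "name": "Set Greeting",
            "type": "n8n-nodes-base.set",
            "typeVersion": 1,
            "position": [250, 0],
            "parameters": {
                "values": {"string": [{"name": "greeting", "value": "hello"}]}
            },
        },
    ],
    # Connections map each source node to the inputs it feeds.
    "connections": {
        "Manual Trigger": {
            "main": [[{"node": "Set Greeting", "type": "main", "index": 0}]]
        }
    },
    "settings": {},
}

print(json.dumps(workflow, indent=2))
```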

Execution Management

  • n8n_list_executions: Lists execution history records for your workflows.
  • n8n_get_execution: Retrieves detailed data for a single execution.
  • n8n_delete_execution: Removes a specific execution record.
  • n8n_trigger_webhook: Manually triggers a workflow via its webhook URL.
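For reference, n8n serves production webhooks under /webhook/<path> and test webhooks (the "Listen for test event" mode) under /webhook-test/<path>. A small helper sketching how a trigger URL is assembled; the base URL and path below are placeholders:

```python
def webhook_url(base_url: str, path: str, test: bool = False) -> str:
    """Return the n8n webhook endpoint for a workflow.
    Production webhooks live under /webhook/, test webhooks under /webhook-test/."""
    prefix = "webhook-test" if test else "webhook"
    return f"{base_url.rstrip('/')}/{prefix}/{path.lstrip('/')}"

print(webhook_url("https://your-n8n-instance.com", "my-hook"))
# https://your-n8n-instance.com/webhook/my-hook
print(webhook_url("https://your-n8n-instance.com", "my-hook", test=True))
# https://your-n8n-instance.com/webhook-test/my-hook
```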

System

  • n8n_health_check: Verifies connectivity to the n8n API.

Environment Variables

The configuration relies on the following environment variables:

Variable    | Required | Description
N8N_API_URL | Yes      | The full URL of your n8n instance (e.g., https://n8n.example.com).
N8N_API_KEY | Yes      | The API key generated in n8n settings.
LOG_LEVEL   | No       | Sets logging verbosity (debug, info, warn, error).
MCP_MODE    | No       | Defines the transport mode. Defaults to stdio.
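The variables above can be validated with a small sketch like the one below. The stdio default for MCP_MODE matches the table; the "info" fallback for LOG_LEVEL is an assumption for illustration, not a documented default.

```python
import os

def load_config(env=os.environ):
    """Read the server's configuration from the environment,
    failing fast if a required variable is missing."""
    missing = [k for k in ("N8N_API_URL", "N8N_API_KEY") if k not in env]
    if missing:
        raise RuntimeError(f"Missing required variables: {', '.join(missing)}")
    return {
        "api_url": env["N8N_API_URL"],
        "api_key": env["N8N_API_KEY"],
        "log_level": env.get("LOG_LEVEL", "info"),  # assumed fallback
        "mode": env.get("MCP_MODE", "stdio"),       # documented default
    }

cfg = load_config({"N8N_API_URL": "https://n8n.example.com", "N8N_API_KEY": "key"})
print(cfg["mode"])  # stdio
```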

Error Handling Behavior

  • Retries: Failed requests retry up to 3 times.
  • Backoff: The server uses exponential backoff to increase delays between retries.
  • Timeouts: API requests time out after 30 seconds to prevent hanging processes.
  • Feedback: Error messages include actionable hints for resolution.
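The retry behavior above can be sketched as follows. The three-attempt cap matches the description; the exact doubling schedule (1s, 2s, 4s) is an assumption about how the backoff is tuned:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run `call`, retrying on failure with exponential backoff.
    Delays double each attempt: base_delay, 2*base_delay, ..."""
    last_exc = None
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            last_exc = exc
            if attempt < max_attempts - 1:
                sleep(base_delay * (2 ** attempt))
    raise last_exc

# Usage sketch: a call that always fails, with sleeps recorded
# instead of actually waiting.
def flaky():
    raise RuntimeError("n8n down")

delays = []
try:
    with_retries(flaky, sleep=delays.append)
except RuntimeError:
    pass
print(delays)  # [1.0, 2.0]
```

Injecting the sleep function keeps the backoff logic testable without real delays.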

FAQs

Q: How do I verify that Cursor can talk to my n8n instance?
A: You can ask Cursor to “Check the n8n connection.” This prompts the AI to run the n8n_health_check tool. If configured correctly, it will confirm connectivity.

Q: Can I use this to write complex code inside n8n nodes?
A: Yes. You can use the n8n_get_node_info tool to understand the schema of the code node, and then instruct Cursor to “Create a workflow with a code node that processes X,” passing the JavaScript code directly into the workflow creation payload.

Q: What happens if my n8n server is temporarily down?
A: The MCP server implements automatic retries with exponential backoff. It will attempt to connect 3 times before returning a detailed error message to the interface.

Q: Does this support every single node type in n8n?
A: The n8n_get_node_info tool specifically supports common nodes like Webhooks, HTTP Requests, and Code nodes. However, you can still include other node types in your workflows if you know their JSON structure, as the workflow creation tools accept standard n8n JSON formats.

Q: Where can I find the logs if something goes wrong?
A: You can set the LOG_LEVEL environment variable to debug in your mcp.json file. The logs will be output based on your MCP client’s handling of stderr/stdout.

