Node.js Debugger
Node.js Debugger is an MCP server that gives your AI assistants full access to Node.js debugging capabilities through the Chrome DevTools Protocol.
It allows MCP clients like Claude and Cursor to set breakpoints, step through code, inspect variables, and analyze the runtime behavior of your Node.js applications programmatically.
Features
- 🐛 Full Debugging Control: Set standard breakpoints, conditional breakpoints, logpoints, and even pause on exceptions.
- 🚶‍♂️ Step-Through Execution: Control the flow with step over, step into, and step out commands, or just continue to a specific location.
- 🔍 Variable & Scope Inspection: Dive into local and closure scopes, check the `this` context, and drill down into any object's properties.
- 📝 Expression Evaluation: Run JavaScript expressions within the current call frame to test hypotheses or check values on the fly.
- 🗺️ Source Map Support: Debug TypeScript, CoffeeScript, or any other transpiled language as if you were running the original source code.
- 🖥️ Console Monitoring: Capture and review all console output generated during a debugging session.
Use Cases
- AI-Powered Bug Hunts: When you have a tricky bug, you can instruct your AI assistant to start a debug session, place breakpoints in suspect areas, and inspect the state of variables at each pause. The AI can then report its findings, automating a tedious investigation process.
- Automated Runtime Analysis: If you need to understand your application’s state at a specific, hard-to-reach point, you can have your AI script the entire process. It can start the app, set a conditional breakpoint, resume execution, and then evaluate complex expressions to verify the application’s state when the condition is met.
- Effortless Debugging of Transpiled Code: Working with TypeScript and encountering a bug that only appears in the compiled JavaScript can be a headache. With source map support, you can tell your assistant to debug the running code while you reason about and reference the original TypeScript source, making the process much more intuitive.
- Integrating Debugging into Automated Workflows: For complex testing or CI/CD pipelines, you can integrate this server to programmatically manage and inspect Node.js processes in your development environments.
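A conditional breakpoint, as used in the automated-analysis scenario above, is an ordinary breakpoint with a condition expression that must evaluate truthy for the pause to happen. A hypothetical `set_breakpoint_condition` tool call might look like this (the argument names are illustrative, not the server's documented schema):

```json
{
  "tool": "set_breakpoint_condition",
  "arguments": {
    "file": "src/orders.js",
    "line": 42,
    "condition": "order.total > 1000"
  }
}
```

Execution then only pauses at that line when the condition holds, which is what makes hard-to-reach application states scriptable.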
How To Use It
1. Install the MCP server via npm:

```shell
npm install devtools-debugger-mcp
```

2. Add the server to your MCP settings file. If you installed it as a local dependency in your project, your configuration might look like this:

```json
{
  "devtools-debugger-mcp": {
    "command": "node",
    "args": ["path/to/devtools-debugger-mcp/dist/index.js"]
  }
}
```

If you installed it globally, the configuration is even simpler:

```json
{
  "devtools-debugger-mcp": {
    "command": "devtools-debugger-mcp"
  }
}
```

3. The server works by launching your Node.js script with the `--inspect-brk=0` flag, which tells Node to start its inspector on a random free port and wait for a debugger to attach. The MCP server handles this connection automatically.
Here’s a typical debugging sequence you might ask an AI assistant to perform:
- Start a debug session: the `start_node_debug` tool is called with the path to your script. This launches the process and pauses on the first line.
- Set breakpoints: use `set_breakpoint` with a file path and line number to tell the debugger where to pause.
- Resume execution: the `resume_execution` tool runs the code until it hits a breakpoint or the script finishes.
- Inspect the state: once paused, you can use `inspect_scopes` to see all local variables or `evaluate_expression` to check the value of something specific, like `user.name`.
- Step through code: commands like `step_over`, `step_into`, and `step_out` allow for granular control over execution.
- Stop the session: the `stop_debug_session` command terminates the Node.js process and cleans everything up.
4. All available tools:
- Session Management: `start_node_debug`, `stop_debug_session`
- Breakpoint Management: `set_breakpoint`, `set_breakpoint_condition`, `add_logpoint`, `set_exception_breakpoints`
- Execution Control: `resume_execution`, `step_over`, `step_into`, `step_out`, `continue_to_location`, `restart_frame`
- Inspection and Analysis: `inspect_scopes`, `evaluate_expression`, `get_object_properties`, `list_call_stack`, `get_pause_info`
- Utilities: `list_scripts`, `get_script_source`, `blackbox_scripts`, `read_console`
FAQs
Q: How is this different from just using the debugger in VS Code or Chrome DevTools?
A: The main difference is the programmatic, AI-driven approach. While tools like VS Code provide an excellent graphical interface for manual debugging, this MCP server allows an AI assistant to perform those same actions based on your instructions. It’s built for automation and conversational debugging.
Q: Can I use this to debug TypeScript?
A: Yes. The server has full source map support, so you can debug code written in TypeScript or any other language that transpiles to JavaScript. The debugger will correctly map the running code back to your original source files.
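For the mapping to work, the compiled output has to ship source maps. In a TypeScript project that typically means enabling them in `tsconfig.json`; a minimal fragment (the `outDir` value is just an example):

```json
{
  "compilerOptions": {
    "sourceMap": true,
    "outDir": "dist"
  }
}
```

With `sourceMap` enabled, `tsc` emits `.js.map` files alongside the compiled output, which the inspector uses to translate breakpoint locations back to the original `.ts` sources.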
Q: What is a `pauseId` and why is it needed for some commands?
A: The `pauseId` is a unique identifier for a specific paused state in your application's execution. Certain commands, like `evaluate_expression` or `inspect_scopes`, need this ID to know the exact context (which function call, which scope) in which to operate. The server provides this ID whenever execution is paused.
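Concretely, a pause-dependent call might look like this hypothetical request (the `pauseId` value and argument names are illustrative):

```json
{
  "tool": "evaluate_expression",
  "arguments": {
    "pauseId": "pause-1",
    "expression": "user.name"
  }
}
```

Once the session resumes, that identifier no longer refers to a live call frame, which is why such commands are only valid while execution is paused.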
Q: How does the server handle console messages?
A: The server buffers any output sent to the console between pauses. You can retrieve these messages either by setting the `includeConsole` parameter to true when you step or resume, or by calling the `read_console` tool at any time.