Debugg AI

This is DebuggAI’s official MCP server for browser automation and end-to-end (E2E) testing.

It helps your AI assistants test UI changes, simulate realistic user behavior, and analyze visual outputs of running web applications using natural language commands and CLI tools.

Features

  • 🗣️ Natural Language Testing: Describe a user story like “test the login page,” and the AI handles the test execution.
  • 🛠️ Zero Configuration: No local browsers or Playwright installs to set up or maintain.
  • 🏠 Localhost Integration: It securely connects to and tests your dev app running on any localhost port.
  • 🖼️ Visual Verification: Captures screenshots of the final page state, which is useful for visual regression or for an LLM to “see” the result.
  • 🔗 Broad Compatibility: Works with any MCP client, such as Claude Desktop or custom LangChain agents.
  • 📈 Test History: All test runs are saved to a dashboard, so you can review them later or build a test suite for your CI/CD pipeline.

Use Cases

  • Rapid Feature Validation – Test new UI features immediately after implementation without writing traditional test scripts, perfect for validating user registration flows, checkout processes, or form submissions
  • Continuous Integration Testing – Integrate AI-driven tests into CI/CD pipelines to catch regressions early, with historical test results available for trend analysis and debugging
  • Cross-browser Compatibility Checks – Verify application behavior across different browser environments without maintaining local browser installations or dealing with version management
  • User Experience Validation – Simulate real user interactions to identify usability issues, test accessibility features, and validate complex user journeys before production deployment

How to Use It

1. Create a free account and generate an API key at DebuggAI.

2. Install the Debugg AI MCP server.

NPX (Recommended for Development)

npx -y @debugg-ai/debugg-ai-mcp

Docker

docker run -i --rm --init \
  -e DEBUGGAI_API_KEY=your_api_key \
  -e TEST_USERNAME_EMAIL=your_test_email \
  -e TEST_USER_PASSWORD=your_password \
  -e DEBUGGAI_LOCAL_PORT=3000 \
  -e DEBUGGAI_LOCAL_REPO_NAME=your-org/your-repo \
  -e DEBUGGAI_LOCAL_BRANCH_NAME=main \
  -e DEBUGGAI_LOCAL_REPO_PATH=/app \
  -e DEBUGGAI_LOCAL_FILE_PATH=/app/index.ts \
  quinnosha/debugg-ai-mcp

3. Add the following configuration to your Claude Desktop MCP settings:

{
  "mcpServers": {
    "debugg-ai-mcp": {
      "command": "npx",
      "args": ["-y", "@debugg-ai/debugg-ai-mcp"],
      "env": {
        "DEBUGGAI_API_KEY": "YOUR_API_KEY",
        "TEST_USERNAME_EMAIL": "[email protected]",
        "TEST_USER_PASSWORD": "supersecure",
        "DEBUGGAI_LOCAL_PORT": 3000,
        "DEBUGGAI_LOCAL_REPO_NAME": "org/project",
        "DEBUGGAI_LOCAL_BRANCH_NAME": "main",
        "DEBUGGAI_LOCAL_REPO_PATH": "/Users/you/project",
        "DEBUGGAI_LOCAL_FILE_PATH": "/Users/you/project/index.ts"
      }
    }
  }
}
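One configuration pitfall worth noting: every value under env should be a JSON string, since these entries are passed into the child process environment, which accepts only strings; a bare number for the port is typically rejected. A minimal sketch of a check (validateEnv is a hypothetical helper, not part of this server):

```typescript
// Return the keys of any env entries whose values are not strings.
// (validateEnv is a hypothetical illustration, not part of the DebuggAI server.)
function validateEnv(env: Record<string, unknown>): string[] {
  return Object.entries(env)
    .filter(([, value]) => typeof value !== "string")
    .map(([key]) => key);
}

const badKeys = validateEnv({
  DEBUGGAI_API_KEY: "YOUR_API_KEY",
  DEBUGGAI_LOCAL_PORT: 3000, // a number here should be the string "3000"
});
console.log(badKeys); // → ["DEBUGGAI_LOCAL_PORT"]
```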

Environment Variables

| Variable | Description |
|----------|-------------|
| DEBUGGAI_API_KEY | API key for DebuggAI backend authentication |
| DEBUGGAI_LOCAL_PORT | Port number where your application runs locally |
| TEST_USERNAME_EMAIL | Email address for the test user account |
| TEST_USER_PASSWORD | Password for the test user account |
| DEBUGGAI_LOCAL_REPO_NAME | GitHub repository name |
| DEBUGGAI_LOCAL_BRANCH_NAME | Current branch name |
| DEBUGGAI_LOCAL_REPO_PATH | Absolute path to the repository root |
| DEBUGGAI_LOCAL_FILE_PATH | Specific file path for testing |
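These variables can be checked once at startup before any tests run. A minimal sketch in TypeScript (readConfig is a hypothetical helper; for illustration it treats only the API key as mandatory and defaults the port to 3000, since the exact required set is not specified here):

```typescript
// Read DebuggAI-related settings from an environment map.
// readConfig is a hypothetical illustration: it assumes only
// DEBUGGAI_API_KEY is mandatory and defaults the port to 3000.
function readConfig(env: Record<string, string | undefined>) {
  const apiKey = env.DEBUGGAI_API_KEY;
  if (!apiKey) {
    throw new Error("DEBUGGAI_API_KEY is not set");
  }
  return {
    apiKey,
    localPort: Number(env.DEBUGGAI_LOCAL_PORT ?? "3000"),
    testEmail: env.TEST_USERNAME_EMAIL,
    testPassword: env.TEST_USER_PASSWORD,
    repoName: env.DEBUGGAI_LOCAL_REPO_NAME,
    branchName: env.DEBUGGAI_LOCAL_BRANCH_NAME,
    repoPath: env.DEBUGGAI_LOCAL_REPO_PATH,
    filePath: env.DEBUGGAI_LOCAL_FILE_PATH,
  };
}

const cfg = readConfig({
  DEBUGGAI_API_KEY: "demo-key",
  DEBUGGAI_LOCAL_PORT: "4000",
});
console.log(cfg.localPort); // → 4000
```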

4. The main tool this server provides is debugg_ai_test_page_changes. It’s what you’ll call from your AI agent or MCP client.

Parameters:

  • description (required) – Natural language description of the feature or flow to test (e.g., “Test user signup and login process”)
  • localPort (optional) – Port number of your running application (defaults to 3000)
  • repoName (optional) – GitHub repository name for context
  • branchName (optional) – Current branch being tested
  • repoPath (optional) – Absolute path to repository root
  • filePath (optional) – Specific file path being tested
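Under the hood, an MCP client invokes this tool with a JSON-RPC tools/call request. A minimal sketch of building that payload (buildToolCallRequest is a hypothetical helper; the parameter names come from the list above, and the 3000 fallback mirrors the documented localPort default):

```typescript
// Build a JSON-RPC 2.0 "tools/call" request for debugg_ai_test_page_changes.
// buildToolCallRequest is a hypothetical illustration; parameter names match
// the tool's documented inputs, and localPort falls back to 3000.
interface TestPageChangesArgs {
  description: string;   // required: natural-language test description
  localPort?: number;    // optional: defaults to 3000
  repoName?: string;
  branchName?: string;
  repoPath?: string;
  filePath?: string;
}

function buildToolCallRequest(args: TestPageChangesArgs, id = 1) {
  if (!args.description) {
    throw new Error("description is required");
  }
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: {
      name: "debugg_ai_test_page_changes",
      arguments: { localPort: 3000, ...args },
    },
  };
}

const req = buildToolCallRequest({
  description: "Test user signup and login process",
  repoName: "org/project",
});
console.log(JSON.stringify(req, null, 2));
```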

Local Development Setup

# Install dependencies (in a clone of the repository)
npm install
# Configure test settings
cp test-config-example.json test-config.json
# Run with MCP inspector
npx @modelcontextprotocol/inspector --config test-config.json --server debugg-ai-mcp-node

FAQs

Q: How does Debugg AI differ from traditional testing frameworks like Playwright or Selenium?
A: Debugg AI eliminates local browser management and complex setup procedures. Instead of writing code-based test scripts, you describe tests in natural language, and the AI executes them on remote browsers. This removes version conflicts, eliminates disruptive browser pop-ups, and requires zero configuration beyond obtaining an API key.

Q: Can I run tests on applications that aren’t publicly accessible?
A: Yes, Debugg AI uses secure tunneling technology to access applications running on localhost ports. Your local development server remains private while the remote browser can interact with it through the encrypted tunnel connection.

Q: What happens to my test results and data?
A: All test results, screenshots, and execution logs are stored in your Debugg AI dashboard for historical reference. This data can be used for trend analysis, debugging failed tests, and creating comprehensive test suites for CI/CD pipelines. Your application data remains secure during testing.

Q: How do I integrate this into my existing CI/CD pipeline?
A: The Docker version of the MCP server can be integrated into most CI/CD systems. Configure the required environment variables in your pipeline, and the server will execute tests automatically. Historical results in your dashboard help track test performance over time and identify regression patterns.

Q: What types of web applications work best with Debugg AI?
A: Debugg AI works with any web application accessible via HTTP/HTTPS, including React, Vue, Angular, and traditional server-rendered applications. It excels at testing user interactions like form submissions, navigation flows, authentication processes, and complex multi-step workflows.

Q: Can I customize the browser environment or testing conditions?
A: The remote browsers are pre-configured for optimal testing performance, but you can influence testing behavior through your natural language descriptions. Specify particular user scenarios, device types, or interaction patterns in your test descriptions for more targeted testing.


MCP FAQs

Q: What exactly is the Model Context Protocol (MCP)?

A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.

Q: How is MCP different from OpenAI's function calling or plugins?

A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.

Q: Can I use MCP with frameworks like LangChain?

A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.

Q: Why was MCP created? What problem does it solve?

A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.

Q: Is MCP secure? What are the main risks?

A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.

Q: Who is behind MCP?

A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.
