Blueprint

The Blueprint MCP Server integrates with Arcade’s ecosystem to generate visual diagrams from codebases and system documentation using Nano Banana Pro.

It creates architecture diagrams, flowcharts, and system visualizations that help you understand complex code structures and technical processes.

Features

  • 🧩 Generates architecture diagrams from code analysis
  • 🔄 Creates sequence diagrams for API flows
  • 📊 Produces data flow diagrams for ETL pipelines
  • 🔗 Integrates with Arcade’s tool ecosystem
  • ⚡ Uses Nano Banana Pro for fast diagram generation
  • 🖼️ Exports diagrams as base64-encoded PNG files
  • 🎯 Supports enterprise architecture scenarios
  • 📚 Creates educational learning cards for frameworks

Installation

1. Ensure you have the prerequisites: this MCP server requires a connection to the Arcade platform and a Google AI Studio API key.

2. Set up a Python virtual environment to isolate dependencies.

# Create virtual environment
python3 -m venv venv
# Activate the environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate
# Install the Arcade CLI
pip install arcade-mcp

3. Authenticate the CLI and securely store your Google API key.

# Log in to your Arcade account
arcade-mcp login
# Set the Google API key as a secret
arcade-mcp secret set GOOGLE_API_KEY="your_api_key_here"

4. Deploy the MCP server code to the Arcade runtime.

# Navigate to the server directory
cd architect_mcp
# Deploy the server
arcade-mcp deploy

5. Expose the deployed MCP server via a Gateway so that clients like Cursor can access it.

  1. Navigate to the Arcade Dashboard.
  2. Select Gateways and click Create Gateway.
  3. Locate your deployed architect_mcp server in the list and add it to this gateway.
  4. Copy the resulting Gateway URL.

6. Configure your AI assistant (for example, Cursor):

  • Open the Cursor app and go to Settings > MCP.
  • Click Add new MCP server.
  • If prompted for a transport type, select “SSE” (Server-Sent Events), then paste the Arcade Gateway URL you copied in the previous step.
  • Restart Cursor to load the tools.
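If you want to confirm the Gateway is reachable outside Cursor, a small script using the official MCP Python SDK can list the exposed tools. This is a minimal sketch, assuming the `mcp` package is installed; the gateway URL is a placeholder, and any authorization headers your gateway may require are an assumption you would need to supply.

# A minimal connectivity check, assuming the official MCP Python SDK is
# installed (pip install mcp). GATEWAY_URL is a placeholder for the URL
# copied from the Arcade Dashboard; if your gateway requires an auth
# header, pass it via sse_client(GATEWAY_URL, headers={...}).
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

GATEWAY_URL = "PASTE_YOUR_ARCADE_GATEWAY_URL_HERE"

async def main():
    async with sse_client(GATEWAY_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Expect start_diagram_job, check_job_status, and download_diagram here.
            print([tool.name for tool in tools.tools])

asyncio.run(main())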

Tool Usage and Workflow

Available Tools:

  • start_diagram_job: Starts a diagram generation job and returns a job_id.
  • check_job_status: Polls the job for completion.
  • download_diagram: Retrieves the finished diagram as a base64-encoded PNG.

Step-by-Step Execution:

1. Call start_diagram_job with your prompt.

  • Example Prompt: “Analyze the authentication module in src/auth/ and create an architecture diagram.”
  • Result: Returns a job_id.

2. Wait for generation to finish. The Nano Banana Pro model takes approximately 30 seconds to produce the diagram.

3. Call check_job_status using the job_id from step 1.

  • Repeat this until the status returns “Complete”.

4. Call download_diagram with the job_id.

  • Result: Returns a base64-encoded PNG string. Your MCP client (Cursor) should decode this and save it to your workspace automatically. A minimal programmatic sketch of the full workflow follows below.
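If you drive the tools programmatically (for example, through the MCP ClientSession shown in the installation section) rather than letting Cursor orchestrate them, the start, poll, and download loop looks roughly like this. The tool names match this page, but the result payload shapes (JSON text with job_id, status, and image fields) are assumptions; adjust the parsing to whatever your deployment actually returns.

# A minimal sketch of the asynchronous workflow, assuming a connected
# MCP ClientSession and JSON text results (field names are assumptions).
import asyncio
import base64
import json

async def generate_diagram(session, prompt: str, out_path: str = "diagram.png") -> str:
    # 1. Start the job and keep the returned job_id.
    started = await session.call_tool("start_diagram_job", {"prompt": prompt})
    job_id = json.loads(started.content[0].text)["job_id"]

    # 2. Poll until the job reports "Complete" (generation takes roughly 30 seconds).
    while True:
        await asyncio.sleep(10)
        status = await session.call_tool("check_job_status", {"job_id": job_id})
        if json.loads(status.content[0].text)["status"] == "Complete":
            break

    # 3. Download the base64-encoded PNG and write it to disk.
    image = await session.call_tool("download_diagram", {"job_id": job_id})
    png_b64 = json.loads(image.content[0].text)["image"]
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(png_b64))
    return out_path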

FAQs

Q: Why doesn’t the diagram appear instantly?
A: The diagram generation is computationally intensive. The server uses an asynchronous job queue. You must wait roughly 30 seconds after starting a job before the result is ready for download.

Q: Can I use this with code that isn’t in a public repository?
A: Yes. When you use this in an editor like Cursor, the editor sends the relevant code context from your local files to the MCP server as part of the prompt. The server processes this context to generate the diagram.

Q: What happens if I lose my Job ID?
A: You will need to restart the generation process. The Job ID is the only key to retrieving the specific diagram you requested.

Q: Can I customize the colors or style of the diagram?
A: You can influence the style via the prompt. For example, you can specify “Use technical whiteboard style, muted colors (gray, light blue)” in your initial request to start_diagram_job.


