MCP Django

The mcp-django MCP Server provides AI assistants with direct access to your Django project’s structure and functionality.

It delivers read-only project exploration resources and optional stateful shell access, enabling LLMs to understand your codebase and interact with it through Python execution.

Features

  • 🔍 Project Exploration: Provides read-only resources for an LLM to discover your Django project’s apps, models, and general setup.
  • 🚀 Zero Configuration: It works out of the box by detecting your Django settings, requiring no complex setup.
  • 🔒 Safe by Default: The base installation does not allow code execution, keeping your environment secure.
  • 🐚 Stateful Shell: The optional shell executes Python code and remembers variables and imports between calls.
  • 🔄 Session Reset: Includes a simple tool to reset the shell’s state if the LLM gets confused or you need a clean slate.
  • 🌐 Multiple Transports: Supports STDIO, HTTP, and SSE, so you can connect to it however your client requires.

Use Cases

  • Rapid Query Scaffolding: You can point an LLM at your models using the read-only resources and ask it to write complex queries. This is a huge time-saver when you’re dealing with tricky aggregations or annotations and just need a starting point; see the sketch after this list.
  • AI-Assisted Debugging: Instead of manually stepping through code, you can fire up the django_shell tool and have an LLM inspect variables, test model methods, and query the database to help pinpoint the source of a bug.
  • Automated Data Seeding and Testing: For development environments, you can instruct an LLM to create test data. Tell it to “create 50 user accounts with realistic-looking email addresses,” and it can generate and execute the necessary Django shell commands.
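As a rough illustration of the query-scaffolding use case, here is the kind of annotated aggregation an LLM might propose after reading your models through the read-only resources. The Customer model and its "orders" relation are hypothetical stand-ins for your own schema:

from django.db.models import Count, Sum
from myapp.models import Customer  # hypothetical app and model

# Top ten customers by lifetime value, limited to repeat buyers
top_customers = (
    Customer.objects
    .annotate(order_count=Count("orders"), lifetime_value=Sum("orders__total"))
    .filter(order_count__gte=5)
    .order_by("-lifetime_value")[:10]
)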

How to Use It

1. Choose an installation option based on whether you need shell access.

For read-only project exploration, install the core package:

pip install mcp-django

If you’re in a secure development environment and want to enable code execution, install it with the [shell] extra:

pip install "mcp-django[shell]"

A serious warning: The shell provides full, unrestricted access to your Django project. An LLM could easily misunderstand a prompt and delete data. Never, ever use the shell extra in a production environment or with access to production data.

2. Run the MCP server from your Django project’s root directory:

python -m mcp_django

The server automatically finds your DJANGO_SETTINGS_MODULE. If you need to specify it manually, you can use the --settings flag.
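For example, assuming your settings live at myproject.settings (substitute your own dotted settings path):

# Explicitly point the server at a settings module
python -m mcp_django --settings myproject.settings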

3. The server defaults to STDIO transport, but you can switch to HTTP or SSE if needed:

# Run on localhost, port 8000 with HTTP
python -m mcp_django --transport http --host 127.0.0.1 --port 8000

4. To connect your AI assistant, you’ll need to configure its MCP client settings.

For Claude Code:

{
  "mcpServers": {
    "django": {
      "command": "python",
      "args": ["-m", "mcp_django"],
      "cwd": "/path/to/your/django/project",
      "env": {
        "DJANGO_SETTINGS_MODULE": "myproject.settings"
      }
    }
  }
}

For Opencode:

{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "django": {
      "type": "local",
      "command": ["python", "-m", "mcp_django"],
      "enabled": true,
      "environment": {
        "DJANGO_SETTINGS_MODULE": "myproject.settings"
      }
    }
  }
}

5. The core package exposes read-only resources like django://project, django://apps, and django://models to give an LLM context. The shell installation adds two tools: django_shell for running code and django_reset to clear the session.
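To make that concrete, here is a sketch of the kind of code an LLM might send to django_shell for the data-seeding use case above, assuming the standard django.contrib.auth user model:

# Seed 50 throwaway accounts in a development database (never run against production)
from django.contrib.auth import get_user_model

User = get_user_model()
for i in range(50):
    User.objects.get_or_create(
        username=f"testuser{i}",
        defaults={"email": f"testuser{i}@example.com"},
    )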

FAQs

Q: Is the shell functionality really that dangerous?
A: Yes. It executes whatever Python code the LLM generates with full access to your Django project and its database connections. It’s a fantastic tool for development, but a huge liability anywhere else.

Q: Does the shell remember variables and imports from previous commands?
A: Yes, the shell is stateful. An LLM can import a model in one command and then use that model in the next. This is what makes it so effective for iterative work. If the state gets messy, you just call the django_reset tool to start fresh.
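As a minimal sketch of that statefulness (the Article model and its fields are hypothetical):

# First django_shell call: import a model and take a quick count
from myapp.models import Article  # hypothetical app and model
Article.objects.count()

# Second django_shell call: Article is still in scope, no re-import needed
Article.objects.filter(published=True).order_by("-created_at")[:5]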

Q: Do I have to add mcp-django to my INSTALLED_APPS?
A: No, not unless you want to use the python manage.py mcp management command. You can run it directly with python -m mcp_django without modifying your project’s settings, which helps keep your configuration clean.
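If you do want the manage.py entry point, the package needs to be registered in your settings. The exact app label below is an assumption, so check the package’s README:

# settings.py: only needed for the manage.py integration (app label assumed)
INSTALLED_APPS = [
    # ... your existing apps ...
    "mcp_django",
]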


