MCP Code Executor
The MCP Code Executor is an MCP server that allows LLMs to write and run Python code directly. It can also manage different Python environments like Conda or virtualenv, and even handle installing dependencies on the fly.
Features
- 🧱 Incremental Code Building: Helps with large code blocks by allowing piecemeal generation and assembly, sidestepping token limits.
- 🔧 Environment Flexibility: Works with Conda, standard Python virtualenv, or UV virtualenv.
- 📦 On-the-Fly Dependencies: Can install necessary Python packages as needed.
- 🔍 Package Verification: Checks if required packages are already in the environment.
- ⚙️ Dynamic Environment Tweaks: You can change the environment configuration during runtime.
- 💾 Custom Code Location: You decide where the generated Python files are stored.
Use Cases
- LLM-driven data tasks: Imagine your LLM needs to pull some data, clean it up with pandas, and then generate a quick summary. Instead of just outputting Python code as text, it can use MCP Code Executor to actually run those steps. This takes care of the LLM not being able to directly interact with a Python interpreter or manage libraries.
- Rapid scripting and prototyping: If you need a small utility script – maybe to rename a batch of files or hit a simple API endpoint – you can describe the logic to an LLM. The LLM can then use the `initialize_code_file`, `append_to_code_file`, and `execute_code_file` tools to construct and run that script in the designated environment. This is great for tasks where the script might be a bit too long for a single prompt.
- Agent-like behavior with Python tools: If you’re building an AI agent that needs to decide which Python tool to run based on a conversation or a set of conditions, this server fits right in. The agent could first use `check_installed_packages`, then `install_dependencies` if something is missing, and finally `execute_code` to run the specific piece of Python logic. It can even switch gears with `configure_environment` if a task demands a completely different setup.
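The check-then-install part of that agent flow can be sketched in plain Python. This is an illustrative approximation of what happens behind those tools, not the server's actual implementation; `ensure_packages` is a hypothetical helper name, and note that it checks import names, which don't always match PyPI package names:

```python
import importlib.util
import subprocess
import sys

def check_installed_packages(packages):
    """Report which modules are importable in the current environment,
    roughly what the check_installed_packages tool does."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

def ensure_packages(packages):
    """Install whatever is missing with pip, mirroring install_dependencies."""
    missing = [p for p, ok in check_installed_packages(packages).items() if not ok]
    if missing:
        subprocess.run([sys.executable, "-m", "pip", "install", *missing], check=True)

status = check_installed_packages(["json", "definitely_not_a_real_package"])
print(status)
```

The real server performs these checks inside whichever Conda or virtualenv environment you configured, not the interpreter running the sketch.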
How To Use It
Make sure you have Node.js on your system. You’ll also need one of the Python environment managers it supports: Conda (with an environment already created), a standard Python virtualenv, or a UV virtualenv.
Setup:
- Grab the code:
git clone https://github.com/bazinga012/mcp_code_executor.git- Move into the directory:
cd mcp_code_executor- Install the Node.js bits:
npm install- Build the project:
npm run buildConfiguration:
You’ll need to tell your main MCP server setup about this new code executor. This involves adding a configuration block to your MCP servers file.
If you’re running it with Node.js directly:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/your/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env-name"
      }
    }
  }
}
```

Remember to change `/path/to/mcp_code_executor/build/index.js` and `/path/to/your/code/storage` to your actual paths. The `env` section is where you define its behavior.
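Since a malformed servers file is a common source of silent startup failures, it can be worth sanity-checking the block before wiring it in. A minimal, hypothetical validation sketch in Python (the key names match the config above; the checks themselves are my own assumption about what "valid" means here):

```python
import json

# The config block above, as a string; paths are placeholders.
raw = """
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": ["/path/to/mcp_code_executor/build/index.js"],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/your/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env-name"
      }
    }
  }
}
"""

config = json.loads(raw)  # raises ValueError on malformed JSON
server = config["mcpServers"]["mcp-code-executor"]
env = server["env"]

# CODE_STORAGE_DIR is always required; a conda setup must also name the env.
assert "CODE_STORAGE_DIR" in env
if env.get("ENV_TYPE") == "conda":
    assert "CONDA_ENV_NAME" in env
print("config looks structurally valid")
```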
If you prefer Docker:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}
```

The provided Dockerfile has been tested primarily with the `venv-uv` environment type. If you use other types with Docker, you might need to adjust the Dockerfile.
Environment Variables:
`CODE_STORAGE_DIR`: (Required) This is the directory where any Python files generated by the LLM will be saved.
You also need to specify one of the following environment setups:
- For Conda:
ENV_TYPE: Set this toconda.CONDA_ENV_NAME: The name of your existing Conda environment.- For Standard Virtualenv:
ENV_TYPE: Set this tovenv.VENV_PATH: The full path to your virtualenv directory.- For UV Virtualenv:
ENV_TYPE: Set this tovenv-uv.UV_VENV_PATH: The full path to your UV virtualenv directory.
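One way to picture what these variables control: the server has to turn them into a concrete interpreter invocation for every execution. A simplified sketch of that mapping, assuming standard Conda and POSIX virtualenv layouts (the actual server may resolve paths differently):

```python
def interpreter_command(env_type, conda_env_name=None, venv_path=None, uv_venv_path=None):
    """Map ENV_TYPE settings to a command for running Python.
    Illustrative only; not the server's real resolution logic."""
    if env_type == "conda":
        # `conda run -n <env>` executes a command inside a named Conda env.
        return ["conda", "run", "-n", conda_env_name, "python"]
    if env_type == "venv":
        # POSIX layout; Windows would be Scripts\python.exe instead.
        return [f"{venv_path}/bin/python"]
    if env_type == "venv-uv":
        # UV-managed venvs use the same bin/ layout.
        return [f"{uv_venv_path}/bin/python"]
    raise ValueError(f"unknown ENV_TYPE: {env_type}")

print(interpreter_command("conda", conda_env_name="my-conda-env"))
```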
Available Tools:
`execute_code`: Runs a Python code snippet. Good for short, self-contained pieces of code.
```json
{
  "name": "execute_code",
  "arguments": {
    "code": "import numpy as np\nprint(np.random.rand(3,3))",
    "filename": "matrix_gen"
  }
}
```

`install_dependencies`: Installs Python packages into the configured environment.
```json
{
  "name": "install_dependencies",
  "arguments": {
    "packages": ["numpy", "pandas", "matplotlib"]
  }
}
```

`check_installed_packages`: Checks if specified packages are already present.
```json
{
  "name": "check_installed_packages",
  "arguments": {
    "packages": ["numpy", "pandas", "non_existent_package"]
  }
}
```

`configure_environment`: Lets you change the Python environment settings dynamically.
```json
{
  "name": "configure_environment",
  "arguments": {
    "type": "conda",
    "conda_name": "new_env_name"
  }
}
```

`get_environment_config`: Retrieves the current environment configuration.
```json
{
  "name": "get_environment_config",
  "arguments": {}
}
```

`initialize_code_file`: Creates a new Python file, usually the first step for larger scripts.
```json
{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def main():\n    print('Hello, world!')\n\nif __name__ == '__main__':\n    main()",
    "filename": "my_script"
  }
}
```

`append_to_code_file`: Adds more code to an existing file.
```json
{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py",
    "content": "\ndef another_function():\n    print('This was appended to the file')\n"
  }
}
```

(Note: The `file_path` will typically include a unique identifier appended to the filename you provided in `initialize_code_file`, located in your `CODE_STORAGE_DIR`.)

`execute_code_file`: Runs a complete Python script that you’ve built up.
```json
{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}
```

`read_code_file`: Reads the content of a Python file. Useful for checking the state of a script during incremental building.
```json
{
  "name": "read_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}
```

General Usage:
Once it’s configured and your MCP host is aware of it, LLMs can invoke these tools by referencing `mcp-code-executor` (or whatever name you assigned it) in their requests.
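Conceptually, `execute_code` boils down to "write the snippet into `CODE_STORAGE_DIR` under a unique name, run it with the configured interpreter, return the output". A stripped-down sketch of that cycle, assuming the current interpreter and a temporary directory stand in for the configured environment and storage path:

```python
import subprocess
import sys
import tempfile
import uuid
from pathlib import Path

def execute_code(code, filename, storage_dir):
    """Save a snippet under a unique name and run it, capturing output.
    Illustrative only; the real server runs the configured environment."""
    path = Path(storage_dir) / f"{filename}_{uuid.uuid4().hex[:6]}.py"
    path.write_text(code)
    result = subprocess.run(
        [sys.executable, str(path)],  # the server would substitute its env's python
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout, result.stderr

with tempfile.TemporaryDirectory() as storage:
    out, err = execute_code("print(2 + 2)", "demo", storage)
    print(out.strip())  # → 4
```

The unique suffix is why file paths returned by the tools look like `my_script_abc123.py` rather than the bare filename you supplied.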
FAQs
Q: What if my Python script is really long and exceeds the LLM’s token limit?
A: You’d use the incremental code generation tools. Start with `initialize_code_file` to create a new Python file with the initial part of your code. Then, use `append_to_code_file` as many times as needed to add more sections to that file. You can even use `read_code_file` in between if the LLM needs to verify the current state of the script. Once the script is complete, you use `execute_code_file` to run it.
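On the file system, that incremental workflow is just create, append, read, and run. A self-contained sketch of the sequence the four tools perform, again illustrative and using a temporary directory in place of `CODE_STORAGE_DIR`:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as storage:
    # initialize_code_file: create the script with its first section
    script = Path(storage) / "my_script_abc123.py"
    script.write_text("def main():\n    print('Hello, world!')\n")

    # append_to_code_file: add further sections as they are generated
    with script.open("a") as f:
        f.write("\nif __name__ == '__main__':\n    main()\n")

    # read_code_file: verify the current state of the script
    current = script.read_text()
    assert "main()" in current

    # execute_code_file: run the completed script
    result = subprocess.run([sys.executable, str(script)],
                            capture_output=True, text=True)
    print(result.stdout.strip())  # → Hello, world!
```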
Q: How does the server know which Python environment to use?
A: You define this in the configuration when you set up the MCP Code Executor server. You’ll set environment variables like `ENV_TYPE` (to `conda`, `venv`, or `venv-uv`) and then provide specific details such as `CONDA_ENV_NAME` for Conda environments or `VENV_PATH` for virtualenv paths. If you need to switch environments for a specific task without restarting the server, the `configure_environment` tool can be used by the LLM at runtime.
Q: Can I install new Python packages on the fly?
A: Yes, absolutely. The `install_dependencies` tool is designed for this. Your LLM can request the installation of packages like `numpy`, `pandas`, or any other package from PyPI before it attempts to execute code that relies on them. There’s also `check_installed_packages` so the LLM can see if what it needs is already there.
Q: Where does the Python code actually get saved and run?
A: The Python code that the LLM generates gets saved into the directory you specified with the `CODE_STORAGE_DIR` environment variable during the server’s setup. When it’s time to run the code (either a snippet via `execute_code` or a file via `execute_code_file`), it’s executed within the context of the Python environment (Conda, venv, etc.) that you’ve configured for the server.