Context+
Context+ is an MCP server that brings semantic understanding to massive codebases. It combines Tree‑sitter AST parsing, spectral clustering, and Obsidian‑style wikilinks to build a searchable, hierarchical feature graph. Developers who need high-precision navigation, refactoring, or documentation in large projects can use Context+ to move from file‑level guesswork to meaning‑aware exploration.
The server exposes a set of MCP tools divided into four categories: Discovery, Analysis, Code Ops, and Version Control. It runs over stdio and integrates with Claude Code, Cursor, VS Code, Windsurf, and OpenCode. Context+ uses Ollama for embeddings and chat‑based labeling, caches results in .mcp_data/, and tracks file changes in real time to keep its semantic index fresh.
Features
- 🧠 Structural AST Tree – `get_context_tree` returns a pruned syntax tree with file headers and symbol line numbers, dynamically shrinking output for large projects.
- 🗂️ File Skeleton – `get_file_skeleton` extracts function signatures, class methods, and type definitions without reading full bodies, showing the API surface at a glance.
- 🔍 Semantic Code Search – `semantic_code_search` uses embeddings over file headers and symbols to find code by meaning, not exact text.
- 🏷️ Identifier‑Level Retrieval – `semantic_identifier_search` retrieves ranked call sites and line numbers for specific functions, classes, or variables.
- 🌐 Semantic Navigation – `semantic_navigate` groups semantically related files into labeled clusters using spectral clustering, letting you browse by concept.
- 💥 Blast Radius Analysis – `get_blast_radius` traces every import and usage of a symbol across files, preventing orphaned references.
- 🧪 Static Analysis – `run_static_analysis` runs native linters and compilers (TypeScript, Python, Rust, Go) to find dead code, unused variables, and type errors.
- ✍️ Safe Code Proposals – `propose_commit` validates changes against strict rules before writing and creates a shadow restore point for instant undo.
- 📚 Feature Hubs – `get_feature_hub` generates Obsidian‑style `.md` files with `[[wikilinks]]` that map features to code files, documenting your architecture.
- ⏪ Undo Without Git – `list_restore_points` and `undo_change` manage shadow restore points created by AI changes, leaving your Git history untouched.
Use Cases
- Large Scale Refactoring: Developers use Context+ to find all references of a deprecated function across hundreds of files. The `get_blast_radius` tool identifies every usage line, preventing orphaned references during major updates.
- Codebase Onboarding: New team members explore unfamiliar projects via the `semantic_navigate` tool. The server groups related files into labeled clusters based on meaning, and engineers read the generated feature hubs to understand the system architecture.
- Safe AI Code Generation: Programmers write code via the `propose_commit` tool. The server validates the code against strict rules before saving, and creates a shadow restore point so bad AI edits can be reversed.
- Dead Code Elimination: Teams run the `run_static_analysis` tool to find unused variables and type errors. The server supports TypeScript, Python, Rust, and Go natively, so developers can clean up legacy repositories from these precise reports.
HOW TO USE IT
Setup and Quick Start
You can run Context+ directly via npx or bunx; I prefer bunx for its speed. The server requires no manual installation. Add the Context+ configuration to your IDE's MCP settings.
Claude Code, Cursor, and Windsurf Configuration
Add the following JSON to your mcpServers configuration block.
```json
{
  "mcpServers": {
    "contextplus": {
      "command": "bunx",
      "args": ["contextplus"],
      "env": {
        "OLLAMA_EMBED_MODEL": "nomic-embed-text",
        "OLLAMA_CHAT_MODEL": "gemma2:27b",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY"
      }
    }
  }
}
```
VS Code Configuration
Add the following JSON to your .vscode/mcp.json file.
```json
{
  "servers": {
    "contextplus": {
      "type": "stdio",
      "command": "bunx",
      "args": ["contextplus"],
      "env": {
        "OLLAMA_EMBED_MODEL": "nomic-embed-text",
        "OLLAMA_CHAT_MODEL": "gemma2:27b",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY"
      }
    }
  },
  "inputs": []
}
```
NPX Alternative
To use npx instead of bunx, set the command to "npx" and the args to ["-y", "contextplus"].
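For example, the server entry then looks like this (all other fields stay the same as in the bunx configuration above):

```json
{
  "command": "npx",
  "args": ["-y", "contextplus"]
}
```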
CLI Initialization
Generate the MCP configuration file directly in your current directory. You execute specific commands based on your package manager and target IDE.
```shell
npx -y contextplus init claude
bunx contextplus init cursor
npx -y contextplus init opencode
```
Supported IDEs and Config Locations
The MCP server supports multiple coding agents. The system places the configuration file in specific locations based on the IDE.
- Claude Code: `.mcp.json`
- Cursor: `.cursor/mcp.json`
- VS Code: `.vscode/mcp.json`
- Windsurf: `.windsurf/mcp.json`
- OpenCode: `opencode.json`
CLI Subcommands
The Context+ CLI provides three main subcommands.
- `init [target]`: Generates the MCP configuration for your target IDE.
- `skeleton [path]` or `tree [path]`: Displays the structural tree of a project with file headers and symbol definitions directly in your terminal.
- `[path]`: Starts the MCP server via stdio for the specified path. The path defaults to the current directory.
Source Installation
Build the server from source.
```shell
npm install
npm run build
```
Testing
Run the test suite via npm commands.
```shell
npm test
npm run test:demo
npm run test:all
```
Configuration Variables
Customize the server behavior via environment variables.
- `OLLAMA_EMBED_MODEL`: Embedding model. Default: `nomic-embed-text`.
- `OLLAMA_API_KEY`: Required Ollama Cloud API key.
- `OLLAMA_CHAT_MODEL`: Chat model used for cluster labels. Default: `llama3.2`.
- `CONTEXTPLUS_EMBED_BATCH_SIZE`: Embedding batch size per GPU call, clamped between 5 and 10. Default: `8`.
- `CONTEXTPLUS_EMBED_TRACKER`: Enables realtime embedding refresh on file modifications. Default: `true`.
- `CONTEXTPLUS_EMBED_TRACKER_MAX_FILES`: Maximum number of changed files processed per tracker tick, clamped between 5 and 10. Default: `8`.
- `CONTEXTPLUS_EMBED_TRACKER_DEBOUNCE_MS`: Debounce window in milliseconds before a tracker refresh. Default: `700`.
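The clamping behavior of the batch-size and tracker variables can be sketched as follows (a minimal illustration of the documented ranges; the actual implementation inside Context+ may differ):

```typescript
// Sketch of how settings clamped to the documented 5-10 range might be read
// (assumed logic, not the server's actual code).
function clampToRange(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

// CONTEXTPLUS_EMBED_BATCH_SIZE defaults to 8 and is kept between 5 and 10.
const raw = process.env.CONTEXTPLUS_EMBED_BATCH_SIZE ?? "8";
const batchSize = clampToRange(Number(raw), 5, 10);
console.log(batchSize);
```

Out-of-range values are pulled back to the nearest bound rather than rejected, so a misconfigured environment still yields a working server.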
Available MCP Tools
The server exposes 11 distinct tools across four categories.
Discovery Tools
- `get_context_tree`: Returns a structural AST tree of a project with file headers and symbol ranges. The output shrinks automatically via dynamic pruning.
- `get_file_skeleton`: Extracts function signatures, class methods, and type definitions with line ranges, skipping full bodies. The output shows the API surface.
- `semantic_code_search`: Searches by meaning rather than exact text, using embeddings over file headers and symbols to return matched symbol definition lines.
- `semantic_identifier_search`: Retrieves identifier-level semantic data for functions, classes, and variables. The output includes ranked call sites and line numbers.
- `semantic_navigate`: Browses the codebase by meaning via spectral clustering, grouping semantically related files into labeled clusters.
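As a rough illustration of how embedding-based search ranks files by meaning (a simplified sketch with toy vectors; Context+ computes its vectors with Ollama and ranks symbols, not whole files):

```typescript
// Rank files by cosine similarity between a query embedding and per-file
// embeddings. The three-dimensional vectors below are placeholders.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const index: Record<string, number[]> = {
  "src/auth/login.ts": [0.9, 0.1, 0.0],
  "src/billing/invoice.ts": [0.1, 0.9, 0.2],
};

function search(query: number[]): string[] {
  return Object.entries(index)
    .sort(([, a], [, b]) => cosine(query, b) - cosine(query, a))
    .map(([file]) => file);
}
```

The query never has to share any text with the indexed code; only the vectors need to be close.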
Analysis Tools
- `get_blast_radius`: Traces every file and line where a symbol appears, preventing orphaned references.
- `run_static_analysis`: Runs native linters and compilers to find unused variables, dead code, and type errors. Supports TypeScript, Python, Rust, and Go.
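Conceptually, a blast-radius scan walks every file and records each line where the symbol occurs. A minimal sketch over an in-memory file map (the real tool resolves imports and usages via Tree-sitter ASTs, not text matching):

```typescript
// Toy blast-radius scan: report every file and 1-based line where a symbol
// occurs. Assumes the symbol contains only regex-safe identifier characters.
type Hit = { file: string; line: number };

function blastRadius(files: Record<string, string>, symbol: string): Hit[] {
  const pattern = new RegExp(`\\b${symbol}\\b`);
  const hits: Hit[] = [];
  for (const [file, text] of Object.entries(files)) {
    text.split("\n").forEach((lineText, i) => {
      if (pattern.test(lineText)) hits.push({ file, line: i + 1 });
    });
  }
  return hits;
}
```

A refactor is safe to land only when every hit in this list has been updated, which is exactly the orphaned-reference check described above.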
Code Ops Tools
- `propose_commit`: The only way to write code. Validates changes against strict rules and creates a shadow restore point before each write.
- `get_feature_hub`: Provides an Obsidian-style feature hub navigator. The hubs are markdown files with wikilinks that map features to code files.
Version Control Tools
- `list_restore_points`: Lists all shadow restore points created by the `propose_commit` tool. Each point captures the file state before AI changes.
- `undo_change`: Restores files to their state before a specific AI change, using shadow restore points. The tool operates independently of git.
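The shadow-restore mechanism amounts to snapshotting a file before each proposed write and replaying the snapshot on undo. A hypothetical in-memory model (the real server persists these points to disk; the names here are illustrative only):

```typescript
// Toy model of shadow restore points: snapshot content before a change,
// restore it on undo - no git involved at any step.
type RestorePoint = { id: number; file: string; previous: string };

const restorePoints: RestorePoint[] = [];
const disk = new Map<string, string>(); // stand-in for the filesystem
let nextId = 1;

function proposeCommit(file: string, newContent: string): number {
  const id = nextId++;
  restorePoints.push({ id, file, previous: disk.get(file) ?? "" });
  disk.set(file, newContent);
  return id;
}

function undoChange(id: number): void {
  const point = restorePoints.find(p => p.id === id);
  if (point) disk.set(point.file, point.previous);
}
```

Because the snapshots live outside the repository, undoing an AI edit never creates a commit, revert, or reflog entry.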
FAQs
Q: How does Context+ handle embedding generation?
A: The server uses Ollama vector embeddings with a disk cache. The system stores reusable file and identifier embeddings in a `.mcp_data/` directory. A realtime tracker refreshes changed files incrementally to save compute resources.
Q: Does the undo feature modify my git history?
A: The `undo_change` tool relies entirely on shadow restore points. The system restores files to their previous state independently of git. Your git history remains completely untouched.
Q: Which programming languages does the static analysis tool support?
A: The `run_static_analysis` tool supports TypeScript, Python, Rust, and Go natively. The system runs native linters and compilers to find unused variables and type errors.
Q: How do I view the project structure in my terminal?
A: You can run the `skeleton [path]` or `tree [path]` CLI subcommand. The terminal displays the structural tree of your project. The output includes file headers and symbol definitions.