TypeScript

The TypeScript MCP server provides language-aware context and code-manipulation capabilities to AI assistants like Cursor, Windsurf, or Claude Desktop.

It lets the AI perform complex refactoring operations safely, such as renaming a symbol across multiple files or finding all of its references. These tasks are often error-prone when done with simple text-based edits.

Features

  • 🔍 Semantic Symbol Analysis – Find definitions, references, and type information across your codebase
  • ♻️ Safe Refactoring Operations – Rename symbols, move files, and restructure code with automatic import updates
  • 🏗️ Type-Aware Code Navigation – Get precise type information and symbol scope analysis
  • 📁 File System Operations – Move files and directories while maintaining TypeScript project integrity
  • 🔧 Diagnostic Integration – Access TypeScript compiler diagnostics for error detection
  • ⚡ Experimental TSGO Support – Enhanced performance via TypeScript's native (Go-based) compiler preview

Use Cases

  • Large Codebase Refactoring – Safely rename functions, classes, or variables across hundreds of files without breaking references or imports
  • Code Architecture Migration – Move TypeScript files between directories while automatically updating all import statements and maintaining type safety
  • API Development – Navigate complex type hierarchies and understand symbol relationships when building or modifying TypeScript APIs
  • Legacy Code Modernization – Analyze existing TypeScript codebases to understand dependencies and plan systematic refactoring approaches

How to Use It

1. Automated initialization:

npm install typescript typescript-mcp -D
npx typescript-mcp --init=claude

This command creates or updates two configuration files:

  • .mcp.json – Configures the TypeScript MCP server connection
  • .claude/settings.json – Sets permissions for MCP tool access

Start Claude with MCP integration:

claude

2. For custom setups, add the server configuration to .mcp.json:

{
  "mcpServers": {
    "typescript": {
      "command": "npx",
      "args": ["typescript-mcp"]
    }
  }
}

Configure permissions in .claude/settings.json:

{
  "permissions": {
    "allow": [
      "mcp__typescript__rename_symbol",
      "mcp__typescript__move_file", 
      "mcp__typescript__move_directory",
      "mcp__typescript__find_references",
      "mcp__typescript__get_definitions",
      "mcp__typescript__get_diagnostics",
      "mcp__typescript__get_module_symbols",
      "mcp__typescript__get_type_at_symbol"
    ],
    "deny": []
  }
}

3. Available Tools and Operations:

Symbol Operations:

  • rename_symbol – Rename variables, functions, classes across all references
  • find_references – Locate all usages of a symbol in the project
  • get_definitions – Navigate to symbol definitions
  • delete_symbol – Remove symbols with dependency analysis

File Operations:

  • move_file – Relocate TypeScript files with automatic import updates
  • move_directory – Move entire directories while preserving project structure

Type Analysis:

  • get_type_at_symbol – Retrieve detailed type information for any symbol
  • get_type_in_module – Analyze types within specific modules
  • get_symbols_in_scope – List available symbols in current scope

Project Diagnostics:

  • get_diagnostics – Access TypeScript compiler errors and warnings
  • get_module_symbols – Explore exported symbols from modules
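Under the hood, the assistant invokes these tools through MCP's standard JSON-RPC `tools/call` method. As a rough sketch (the argument names below are illustrative assumptions, not taken from this page), a `find_references` request might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_references",
    "arguments": {
      "root": ".",
      "filePath": "src/user.ts",
      "line": 12,
      "symbolName": "getUser"
    }
  }
}
```

The server resolves the symbol semantically and returns every usage site in the project, rather than relying on a text search.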

Experimental TSGO

For enhanced performance, you can try the experimental Go-based engine. First, install the preview package:

npm add @typescript/native-preview

Then, update your .mcp.json to set the TSGO environment variable:

{
  "mcpServers": {
    "typescript": {
      "env": {
        "TSGO": "true"
      },
      "command": "npx",
      "args": ["typescript-mcp"]
    }
  }
}

FAQs

Q: Why do I need this? Can’t my AI assistant already edit code?
A: AI assistants can edit text, but they often lack semantic understanding of your code and may perform a simple find-and-replace that breaks references elsewhere. This server gives the AI IDE-level tools to make changes safely and accurately.

Q: What is the difference between this and a Language Server Protocol (LSP) server?
A: This server is inspired by LSP but is designed specifically for Large Language Models (LLMs). LLMs struggle with the precise character-offset positions that LSP uses, so typescript-mcp communicates using line numbers and symbol names, which models handle more reliably.
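To illustrate the difference: an LSP `textDocument/rename` request addresses a symbol by zero-based line and character offset (e.g. `"position": {"line": 11, "character": 17}`), whereas a typescript-mcp style call can address it by line number and symbol name. A sketch of such a call (the exact argument names are assumptions for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "rename_symbol",
    "arguments": {
      "filePath": "src/user.ts",
      "line": 12,
      "oldName": "getUser",
      "newName": "fetchUser"
    }
  }
}
```

A model can reliably copy a line number and an identifier it has already seen in the file, but it cannot reliably count characters to produce an exact column offset.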


