Shadcn UI
The Shadcn UI MCP Server gives your AI assistants access to the popular shadcn/ui component library.
It can pull the latest TypeScript source for components, find implementation demos, and even retrieve entire pre-built “Blocks” like dashboards or login forms.
Features
- 📦 Component Source: Get direct access to the latest shadcn/ui v4 component source code.
- 🎬 Implementation Demos: Pull up example code and usage patterns for any component.
- 🧱 Block Implementations: Retrieve complete, multi-component blocks like calendars and dashboards.
- ℹ️ Component Metadata: Access component dependencies, descriptions, and other details.
- 🔄 Framework Switching: Instantly toggle between React (shadcn/ui) and Svelte (shadcn-svelte) component libraries.
- 🔑 GitHub API Integration: Uses smart caching and handles GitHub API rate limits, especially when you provide a token.
How To Use It
1. Run the server with npx:
```
npx @jpisnice/shadcn-ui-mcp-server
```
Without a token, GitHub’s API limits you to 60 requests per hour. For any serious work, supply a GitHub personal access token to raise the limit to 5,000 requests per hour:
```
npx @jpisnice/shadcn-ui-mcp-server --github-api-key <your_github_token>
```
You can generate a token in your GitHub Developer settings; no special permissions or scopes are needed.
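If you prefer not to pass the token as a command-line argument, the editor configuration in step 3 suggests the server also picks it up from an environment variable. A minimal sketch, assuming GITHUB_PERSONAL_ACCESS_TOKEN is honored when the flag is omitted:
```
# Assumption: the server reads GITHUB_PERSONAL_ACCESS_TOKEN when --github-api-key is not given,
# as implied by the "env" block in the VS Code configuration below.
export GITHUB_PERSONAL_ACCESS_TOKEN=<your_github_token>
npx @jpisnice/shadcn-ui-mcp-server
```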
2. The MCP server defaults to React’s shadcn/ui. To switch to the Svelte version, use the --framework (or -f) flag:
```
# Serve Svelte components
npx @jpisnice/shadcn-ui-mcp-server --framework svelte
```
3. Connect the MCP server to your MCP clients. Here is an example for VS Code’s settings.json using the Claude extension:
```
{
  "claude.mcpServers": {
    "shadcn-ui": {
      "command": "npx",
      "args": ["@jpisnice/shadcn-ui-mcp-server"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your_github_token>"
      }
    },
    // Svelte configuration
    "shadcn-ui-svelte": {
      "command": "npx",
      "args": ["@jpisnice/shadcn-ui-mcp-server", "--framework", "svelte"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your_github_token>"
      }
    }
  }
}
```
4. Command Line Options
- --version, -v: Shows the current version of the server package.
- --github-api-key, -g <token>: Specifies your GitHub personal access token to increase the API request limit.
- --framework, -f <framework>: Switches the component library. Accepts ‘react’ (the default) or ‘svelte’.
- --help, -h: Displays the help message with all available options.
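The flags can be combined. For example, a sketch that serves the Svelte library while authenticating with a token, using only the options documented above:
```
# Serve shadcn-svelte components with an authenticated GitHub token
npx @jpisnice/shadcn-ui-mcp-server --framework svelte --github-api-key <your_github_token>
```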
FAQs
Q: Does this server install the components into my project?
A: No. The server’s job is to fetch the component code and information for your AI assistant to use. You are still responsible for installing and configuring shadcn/ui in your project.
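For example, pulling a component into your project is still done with the regular shadcn CLI (shown here for the React version; the MCP server does not run this for you):
```
# Add the Button component to your own project with the shadcn CLI
npx shadcn@latest add button
```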
Q: How can I see which components or blocks are available?
A: Your AI assistant can use the tools provided by the server, such as list_components and list_blocks, to get a complete list of what’s available.
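If you want to browse the tool list yourself rather than through the assistant, one option (an assumption, not something this server documents) is the MCP Inspector, which can launch any MCP server and enumerate its tools:
```
# Launch the server under the MCP Inspector to browse its tools (list_components, list_blocks, ...)
npx @modelcontextprotocol/inspector npx @jpisnice/shadcn-ui-mcp-server
```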
Q: The server isn’t working in my editor. What should I check?
A: First, confirm the server runs by itself in your terminal by executing npx @jpisnice/shadcn-ui-mcp-server --help. Next, carefully review your editor’s JSON configuration file for any typos in the command or arguments. Finally, ensure your GitHub token is correct and hasn’t expired.
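If you suspect the token itself is the problem, you can check it against GitHub’s standard rate-limit endpoint (plain GitHub REST API, independent of the MCP server):
```
# Verify the token and see how much of the 5,000-request hourly quota remains
curl -H "Authorization: Bearer <your_github_token>" https://api.github.com/rate_limit
```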
General MCP FAQs
Q: What exactly is the Model Context Protocol (MCP)?
A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.
Q: How is MCP different from OpenAI's function calling or plugins?
A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.
Q: Can I use MCP with frameworks like LangChain?
A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.
Q: Why was MCP created? What problem does it solve?
A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.
Q: Is MCP secure? What are the main risks?
A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.
Q: Who is behind MCP?
A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.



