Imagician
Imagician is a Model Context Protocol (MCP) server that hooks into AI assistants like Claude and lets you run common image operations programmatically or with natural language commands.
Features
🖼️ Resize: Change image dimensions with several fit options like cover, contain, and fill.
🔄 Format Conversion: Switch between JPEG, PNG, WebP, and AVIF.
✂️ Crop: Pull a specific section from an image.
🗜️ Compress: Lower file size with adjustable quality settings.
↪️ Rotate & Flip: Turn images by a specific angle or mirror them.
🗂️ Batch Processing: Create multiple sizes from one source image, which is great for responsive designs.
ℹ️ Metadata: Pull image properties like format, dimensions, and file size.
Use Cases
- Automating Responsive Images: The batch_resize tool is a standout. Instead of manually creating different sizes for a srcset, you can script it to generate all the required versions from a single high-res image. This has saved me a ton of time on web projects.
- AI-Powered Content Management: When working within an AI assistant like Claude, you can manage assets without breaking your flow. For instance, you can ask it to “crop avatar.jpg to 200×200” or “convert logo.png to WebP with 90% quality,” and it just gets done.
- Standardizing Asset Pipelines: If you have a system where users upload images in various formats, you can use Imagician to automatically convert them all to an optimized format like AVIF or WebP and resize them to fit standard dimensions.
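To make the responsive-image use case concrete, here is roughly what a batch_resize invocation looks like on the wire as an MCP tools/call request. The request envelope follows the MCP spec, but the argument names (inputPath, sizes, suffix) are illustrative assumptions; check the server's actual tool schema for the exact shape.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "batch_resize",
    "arguments": {
      "inputPath": "hero.jpg",
      "sizes": [
        { "width": 320, "suffix": "-sm" },
        { "width": 768, "suffix": "-md" },
        { "width": 1920, "suffix": "-lg" }
      ]
    }
  }
}
```

In practice you never write this JSON by hand; the AI assistant constructs it from a natural-language request like “generate small, medium, and large versions of hero.jpg.”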
How to Use It
1. Install it through npm or build it from source.
Installation via npm:
npm install -g @flowy11/imagician

Installation from source:
git clone https://github.com/flowy11/imagician.git
cd imagician
npm install
npm run build

2. Add the MCP server to your MCP client’s configuration file.
For Claude Code (~/.config/claude/config/settings/mcp-servers.json):
{
"imagician": {
"command": "npx",
"args": ["-y", "@flowy11/imagician"]
}
}

For Cursor (~/.cursor/mcp_settings.json):
{
"mcpServers": {
"imagician": {
"command": "npx",
"args": ["-y", "@flowy11/imagician"]
}
}
}

For Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"imagician": {
"command": "npx",
"args": ["-y", "@flowy11/imagician"]
}
}
}

Available Tools
- resize_image: Resizes an image. You can set width, height, and fit mode.
- convert_format: Converts format and adjusts quality for lossy types.
- crop_image: Crops an image using left, top, width, and height offsets.
- compress_image: Reduces file size with a quality setting.
- rotate_image: Rotates an image by a given angle.
- flip_image: Flips horizontally, vertically, or both.
- get_image_info: Returns image metadata.
- batch_resize: Resizes an image into multiple specified sizes.
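As a sketch of how an MCP client calls one of these tools, a resize_image request might look like the following. The tools/call envelope comes from the MCP protocol; the argument names here (inputPath, outputPath, fit) are assumptions based on the parameters described above, and the authoritative schema comes from the server's tools/list response.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "resize_image",
    "arguments": {
      "inputPath": "avatar.jpg",
      "outputPath": "avatar-200.jpg",
      "width": 200,
      "height": 200,
      "fit": "cover"
    }
  }
}
```

The fit value chooses how the image fills the target box: cover crops to fill it, contain letterboxes inside it, and fill stretches to match it exactly.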
FAQs
Q: Can Imagician handle complex edits like layer masking?
A: No, Imagician is focused on foundational image operations like resizing, cropping, and format conversion. For more advanced edits like layer manipulation, you would need a more comprehensive tool like Photoshop or GIMP.
Q: What’s the main advantage of using this over a command-line tool like ImageMagick?
A: The primary advantage is its integration with the Model Context Protocol. This allows AI assistants to use these tools programmatically through natural language, which you can’t do with a standard CLI tool. It’s built for an AI-native workflow.
Q: Is the format conversion lossless?
A: It depends on the format. When converting to JPEG, WebP, or AVIF, you can set a quality parameter (1-100), which implies lossy compression to reduce file size. Converting to PNG would be lossless.
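For instance, a lossy WebP conversion at quality 90, as in the earlier “convert logo.png to WebP” example, might be requested like this (parameter names are illustrative, not confirmed by the project's docs):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "convert_format",
    "arguments": {
      "inputPath": "logo.png",
      "outputPath": "logo.webp",
      "format": "webp",
      "quality": 90
    }
  }
}
```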