SigNoz MCP Server
Built in Go, the SigNoz MCP Server plugs your observability data straight into AI assistants.
This MCP server lets LLMs like Claude or tools like Cursor ask natural language questions about your SigNoz metrics, alerts, dashboards, and services.
Think “show me CPU usage last hour” or “list active alerts”, and get structured, ready-to-use answers back.
Features
- 📊 Pull metric keys or search them by keyword
- 🚨 Fetch active alerts and drill into specific alert rule details
- 📈 List dashboard summaries or pull full dashboard JSON configs
- 🔍 Discover services in a time window and analyze their top operations
- ⚙️ Configure via environment variables — no complex setup
- 🧩 Plug-and-play with Claude Desktop, Cursor, or any MCP-compatible client
- 📝 Returns clean, LLM-optimized JSON — summaries for lists, full data on demand
- 🕒 Uses nanosecond Unix timestamps for all time-based queries
Use Cases
- You’re on-call and need to triage alerts fast. Ask your AI assistant to “list all firing alerts” instead of clicking through UIs.
- Onboarding a new team member? They can ask “show me dashboards tagged ‘database’” to find relevant monitoring without memorizing UUIDs.
- Debugging a performance dip? Query “top operations for checkout-service last 2 hours” to pinpoint slow endpoints without writing PromQL.
- Building internal tooling? Use the MCP server as a backend to let non-engineers ask natural language questions about system health.
How To Use It
1. Download or build the binary. Set SIGNOZ_URL and SIGNOZ_API_KEY as environment variables. Point your MCP client (Claude Desktop, Cursor, etc.) to the binary path.
2. For Claude Desktop: Edit claude_desktop_config.json and add a "signoz" entry under "mcpServers", with "command" pointing to the binary, an empty "args" array, and "env" containing your SigNoz URL, API key, and an optional LOG_LEVEL.
{
  "mcpServers": {
    "signoz": {
      "command": "/absolute/path/to/signoz-mcp-server/bin/signoz-mcp-server",
      "args": [],
      "env": {
        "SIGNOZ_URL": "https://your-signoz-instance.com",
        "SIGNOZ_API_KEY": "your-api-key-here",
        "LOG_LEVEL": "info"
      }
    }
  }
}
3. For Cursor: Either use the GUI (Settings → Tools & Integrations → + New MCP Server) or create .cursor/mcp.json in your project root, using the same JSON structure as Claude's config.
4. Restart your client. The server loads silently and its tools appear automatically; no manual registration is needed.
5. Time parameters expect nanoseconds since the Unix epoch; 1751328000000000000, for example, is midnight UTC on July 1, 2025. Set LOG_LEVEL=debug for verbose output during setup and switch to warn for production.
6. Available tools:
   - list_metric_keys (no args)
   - search_metric_keys (requires searchText)
   - list_alerts (no args)
   - get_alert (requires ruleId)
   - list_dashboards (no args)
   - get_dashboard (requires uuid)
   - list_services (requires start and end in ns)
   - get_service_top_operations (requires start, end, service; optional tags as JSON array)
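Step 5's nanosecond convention is easy to get wrong, so here is a minimal sketch of producing valid start/end values with the Python standard library. The two-hour window mirrors the checkout-service use case above; none of this code is part of the server itself.

```python
from datetime import datetime, timedelta, timezone

NS_PER_SECOND = 1_000_000_000

def to_unix_ns(dt):
    """Convert a timezone-aware datetime to nanoseconds since the Unix epoch."""
    return int(dt.timestamp()) * NS_PER_SECOND

# The example timestamp from step 5: midnight UTC on July 1, 2025.
july_2025 = to_unix_ns(datetime(2025, 7, 1, tzinfo=timezone.utc))

# A "last 2 hours" window, using a fixed end time for reproducibility;
# in practice you would use datetime.now(timezone.utc) as the end.
end = datetime(2025, 7, 1, 12, 0, tzinfo=timezone.utc)
start = end - timedelta(hours=2)
window = {"start": to_unix_ns(start), "end": to_unix_ns(end)}
```

These are the values you would pass as start and end to list_services or get_service_top_operations.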
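Under the hood, MCP clients invoke these tools through JSON-RPC 2.0's tools/call method. As a sketch, this is roughly the request body a client would send for list_services; the timestamp values are placeholders, and in normal use Claude Desktop or Cursor builds this envelope for you.

```python
import json

# JSON-RPC 2.0 request for an MCP "tools/call" invocation.
# list_services requires start and end as nanosecond Unix timestamps.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_services",
        "arguments": {
            "start": 1751328000000000000,  # placeholder window start (ns)
            "end": 1751335200000000000,    # placeholder window end (ns)
        },
    },
}

payload = json.dumps(request)
```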
FAQs
Q: Where do I get my SigNoz API key?
A: In SigNoz UI, go to Settings → Workspace Settings → API Key.
Q: Why am I getting empty responses?
A: Check your time range: it's in nanoseconds, not milliseconds. Also verify that your API key has read permissions and that SIGNOZ_URL points to your SigNoz instance's API endpoint, not the telemetry ingest endpoint.
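One quick way to catch the milliseconds-vs-nanoseconds mistake: a present-day Unix timestamp has 19 digits in nanoseconds but only 13 in milliseconds. A small illustrative check (the helper name is hypothetical, not part of the server):

```python
def looks_like_nanoseconds(ts):
    """Heuristic: ns timestamps for recent dates have 19 digits; ms have 13."""
    return len(str(int(ts))) == 19

ms = 1751328000000            # milliseconds: would silently return nothing
ns = 1751328000000000000      # nanoseconds: what the SigNoz tools expect
```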
Q: Can I add custom tools?
A: Yes. Fork the repo. Add your tool handler in internal/handler/tools/, extend the client if needed, register it in the server, rebuild, and restart.
Q: Is this production-ready?
A: It’s open-source and Apache 2.0 licensed. Test it in staging first. Set LOG_LEVEL=warn to avoid log noise. Monitor resource usage if querying frequently.
Q: How are responses formatted for LLMs?
A: List commands return minimal summaries (name, ID, tags). Detail commands return full objects. Errors include structured messages — no raw HTTP dumps.
General MCP FAQs
Q: What exactly is the Model Context Protocol (MCP)?
A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.
Q: How is MCP different from OpenAI's function calling or plugins?
A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.
Q: Can I use MCP with frameworks like LangChain?
A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.
Q: Why was MCP created? What problem does it solve?
A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.
Q: Is MCP secure? What are the main risks?
A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.
Q: Who is behind MCP?
A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.



