Security Detections
Security Detections MCP is an MCP server that provides AI agents with direct access to a unified database of security detection rules.
The MCP server indexes and normalizes detection content from four major sources: Sigma, Splunk ESCU, Elastic detection rules, and community KQL hunting queries. It stores everything in a local SQLite database with full-text search capabilities and exposes over 25 specialized query tools via the Model Context Protocol.
Features
- 🔍 Unified Full-Text Search: SQLite FTS5-powered search across detection names, descriptions, queries, CVE identifiers, process names, MITRE mappings, and more
- 🗂️ Multi-Format Support: Parses YAML (Sigma, Splunk), TOML (Elastic), and Markdown/raw .kql (KQL hunting queries)
- 🎯 MITRE ATT&CK Integration: Filter detections by technique ID (e.g., T1059.001) or tactic (e.g., credential-access, persistence)
- 🛡️ CVE Coverage Lookup: Find all detections targeting a specific vulnerability identifier
- ⚙️ Process Name Filtering: Search for detections referencing specific executables like powershell.exe or w3wp.exe
- 📊 Token-Optimized Analysis Tools: Server-side processing returns minimal, actionable data instead of full detection objects
- 📈 Coverage Gap Analysis: Identify detection gaps against threat profiles (ransomware, APT, persistence)
- 🗺️ ATT&CK Navigator Export: Generate ready-to-import Navigator layer JSON files
- 📖 Analytic Story Search: Query Splunk’s threat campaign narratives for additional context
- 🔄 Auto-Indexing: Automatically indexes all configured paths on server startup
Use Cases
- Gap Analysis & Coverage Mapping: Security engineers can query the database to identify which specific MITRE ATT&CK techniques are missing from their current detection set (e.g., “Show me gaps in my Ransomware coverage for Elastic”).
- Rule Translation & Logic Validation: Detection engineers can retrieve a Splunk SPL rule and ask the LLM to convert the logic into a Sigma rule or KQL query while referencing the actual source logic.
- Incident Response & Threat Hunting: Analysts can quickly pull up all detections related to a specific artifact, such as w3wp.exe or CVE-2024-27198, to determine what to look for during an active investigation.
- Rapid Content Deployment: Developers can download the latest rules from upstream repositories and immediately query them through the MCP server to stay current with emerging threats without waiting for vendor updates.
How to Use It
Installation
npx (Recommended)
```shell
npx -y security-detections-mcp
```

Clone and Build

```shell
git clone https://github.com/MHaggis/Security-Detections-MCP.git
cd Security-Detections-MCP
npm install
npm run build
```

Download Detection Content
The MCP server requires local copies of detection rule repositories. The sparse-checkout method below, combined with a shallow, blobless clone, downloads only the rule directories rather than each repository's full history and content, which saves significant disk space.
Create a detections directory and download all sources:
```shell
# Create detections directory
mkdir -p detections && cd detections

# Download Sigma rules (~3,000+ rules)
git clone --depth 1 --filter=blob:none --sparse https://github.com/SigmaHQ/sigma.git
cd sigma && git sparse-checkout set rules rules-threat-hunting && cd ..

# Download Splunk ESCU detections + stories (~2,000+ detections, ~330 stories)
git clone --depth 1 --filter=blob:none --sparse https://github.com/splunk/security_content.git
cd security_content && git sparse-checkout set detections stories && cd ..

# Download Elastic detection rules (~1,500+ rules)
git clone --depth 1 --filter=blob:none --sparse https://github.com/elastic/detection-rules.git
cd detection-rules && git sparse-checkout set rules && cd ..

# Download KQL hunting queries (~400+ queries from 2 repos)
git clone --depth 1 https://github.com/Bert-JanP/Hunting-Queries-Detection-Rules.git kql-bertjanp
git clone --depth 1 https://github.com/jkerai1/KQL-Queries.git kql-jkerai1
```

Note the resulting directory paths; these are the values you will use in your MCP configuration.
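After the clones finish, a quick file count confirms each source actually downloaded. This sketch assumes you are still inside the detections/ directory created above:

```shell
# Sanity-check the sparse checkouts by counting rule files per source.
for dir in sigma/rules security_content/detections detection-rules/rules kql-bertjanp; do
  count=$(find "$dir" -type f \( -name '*.yml' -o -name '*.yaml' -o -name '*.toml' -o -name '*.kql' -o -name '*.md' \) 2>/dev/null | wc -l)
  printf '%s: %s files\n' "$dir" "$count"
done
```

If a source reports 0 files, re-run its clone and sparse-checkout commands before starting the server.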
Alternative: Full Clone
Clone complete repositories if you need git history:
```shell
git clone https://github.com/SigmaHQ/sigma.git
git clone https://github.com/splunk/security_content.git
git clone https://github.com/elastic/detection-rules.git
git clone https://github.com/Bert-JanP/Hunting-Queries-Detection-Rules.git
git clone https://github.com/jkerai1/KQL-Queries.git
```

Configuration
Environment Variables
| Variable | Description | Required |
|---|---|---|
| SIGMA_PATHS | Comma-separated paths to Sigma rule directories | At least one source required |
| SPLUNK_PATHS | Comma-separated paths to Splunk ESCU detection directories | At least one source required |
| ELASTIC_PATHS | Comma-separated paths to Elastic detection rule directories | At least one source required |
| KQL_PATHS | Comma-separated paths to KQL hunting query directories | At least one source required |
| STORY_PATHS | Comma-separated paths to Splunk analytic story directories | Optional (enhances context) |
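For a quick smoke test outside an MCP client, the same variables can be exported in a shell before launching the server. The /opt/detections prefix below is a placeholder for wherever you cloned the repositories:

```shell
# Placeholder prefix -- substitute your actual clone location.
export SIGMA_PATHS="/opt/detections/sigma/rules,/opt/detections/sigma/rules-threat-hunting"
export SPLUNK_PATHS="/opt/detections/security_content/detections"
export ELASTIC_PATHS="/opt/detections/detection-rules/rules"
export KQL_PATHS="/opt/detections/kql-bertjanp,/opt/detections/kql-jkerai1"
export STORY_PATHS="/opt/detections/security_content/stories"
```

MCP clients set these through the `env` block shown in the configurations below, so the exports are only needed when running the server by hand.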
Cursor IDE Configuration
Add to ~/.cursor/mcp.json or .cursor/mcp.json in your project:
```json
{
  "mcpServers": {
    "security-detections": {
      "command": "npx",
      "args": ["-y", "security-detections-mcp"],
      "env": {
        "SIGMA_PATHS": "/path/to/sigma/rules,/path/to/sigma/rules-threat-hunting",
        "SPLUNK_PATHS": "/path/to/security_content/detections",
        "ELASTIC_PATHS": "/path/to/detection-rules/rules",
        "KQL_PATHS": "/path/to/Hunting-Queries-Detection-Rules",
        "STORY_PATHS": "/path/to/security_content/stories"
      }
    }
  }
}
```

Claude Desktop Configuration
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "security-detections": {
      "command": "npx",
      "args": ["-y", "security-detections-mcp"],
      "env": {
        "SIGMA_PATHS": "/Users/you/sigma/rules,/Users/you/sigma/rules-threat-hunting",
        "SPLUNK_PATHS": "/Users/you/security_content/detections",
        "ELASTIC_PATHS": "/Users/you/detection-rules/rules",
        "KQL_PATHS": "/Users/you/Hunting-Queries-Detection-Rules",
        "STORY_PATHS": "/Users/you/security_content/stories"
      }
    }
  }
}
```

Database Location
The SQLite index is stored at ~/.cache/security-detections-mcp/detections.sqlite. The server creates this file automatically on first run and re-indexes when paths change. Use the rebuild_index() tool to manually refresh the index after updating your detection repositories.
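The full-text layer is standard SQLite FTS5, which you can see in miniature with the sqlite3 CLI. The table and column names below are invented for the demo and are not the server's actual schema:

```shell
# Minimal FTS5 illustration (demo schema, not the server's real one).
sqlite3 :memory: <<'SQL'
CREATE VIRTUAL TABLE demo USING fts5(name, description);
INSERT INTO demo VALUES
  ('Suspicious PowerShell EncodedCommand', 'powershell.exe launched with -enc'),
  ('IIS Worker Spawning Shell', 'w3wp.exe spawning cmd.exe');
SELECT name FROM demo WHERE demo MATCH 'powershell';
SQL
```

The default FTS5 tokenizer splits `powershell.exe` into separate tokens, which is why process-name searches match both bare names and full executable references.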
All Available MCP Tools
| Tool | Description |
|---|---|
| Search | Full-text search across all detection fields |
| Get ID | Retrieve a single detection by its unique identifier |
| List | Paginated list of all indexed detections |
| Filter Source | Filter by source: Sigma, ESCU, Elastic, or KQL |
| Get Raw | Return the original YAML/TOML/Markdown content |
| Stats | Return index statistics (counts by source, last indexed time) |
| Reindex | Force re-indexing from all configured paths |
| Technique | Filter by technique ID (e.g., T1059.001) |
| Tactic | Filter by tactic (e.g., execution, persistence, credential-access) |
| CVE | Find detections for a CVE (e.g., CVE-2024-27198) |
| Process | Find detections referencing a process (e.g., powershell.exe) |
| Data Source | Filter by data source (e.g., Sysmon, Windows Security) |
| Logsource | Filter Sigma rules by logsource fields |
| Severity | Filter by severity: informational, low, medium, high, critical |
| Type | Filter by type: TTP, Anomaly, Hunting, Correlation |
| Story Filter | Filter by Splunk analytic story name |
| Category | Filter by category (e.g., “Defender For Endpoint”, “Azure Active Directory”) |
| Tag | Filter by tag (e.g., “ransomware”, “hunting”, “ti-feed”, “dfir”) |
| MS Data Source | Filter by Microsoft data source (e.g., “DeviceProcessEvents”, “SigninLogs”) |
| Story Search | Search analytic stories by narrative and description |
| Story Detail | Get detailed story information |
| Story List | Paginated list of all analytic stories |
| Story Category | Filter stories by category (Malware, Adversary Tactics, etc.) |
| Coverage | Coverage stats by tactic, top techniques, weak spots |
| Gap Analysis | Find gaps for ransomware, apt, persistence profiles |
| Ideas | Get detection ideas for a specific technique |
| Get TIDs | Get only technique IDs without full objects |
| Navigator | Generate ATT&CK Navigator layer JSON |
FAQs
Q: Why are my search results empty after installation?
A: The server requires local content to function. Verify that you cloned the repositories into the paths specified in your env configuration. If the paths are correct, run the rebuild_index() tool once to force the SQLite database to populate.
Q: Can I use this with my private detection repository?
A: Yes. If your private repository follows the standard Sigma, Splunk, or Elastic file structure, you can simply add the path to the corresponding environment variable (e.g., add your path to SIGMA_PATHS).
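For example, assuming your private rules live at /srv/private-sigma/rules (a placeholder path), append it to the existing comma-separated list:

```shell
# Public Sigma paths plus a private repo (placeholder paths).
export SIGMA_PATHS="/opt/detections/sigma/rules,/srv/private-sigma/rules"
```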
Q: Does this send my queries or rules to a third-party cloud?
A: No. The indexing and searching happen entirely locally using SQLite. The only external communication occurs between your MCP client (Claude/Cursor) and the LLM provider you are already using.
Q: How do I update the rules?
A: You must update the local git repositories manually. Navigate to your detections folders and run git pull. Afterward, ask the MCP to run rebuild_index() to update the internal database.
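The pull-and-reindex cycle can be scripted. This sketch assumes all sources were cloned under a single detections/ directory, as in the Installation section:

```shell
# Update every cloned source in one pass; skip anything that is not a git clone.
for repo in detections/*/; do
  [ -d "$repo/.git" ] || continue
  git -C "$repo" pull --ff-only
done
# Then run the rebuild_index() tool from your MCP client to refresh the database.
```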
Q: What happens if I don’t define all path variables?
A: You must define at least one path variable (SIGMA_PATHS, SPLUNK_PATHS, ELASTIC_PATHS, or KQL_PATHS) for the server to start. You can omit the ones you do not wish to use.