Buttplug
buttplug-mcp is an MCP server that bridges the Buttplug.io ecosystem and AI tools like Claude. It lets LLMs interact with intimate hardware via natural language: querying device status, adjusting settings, or triggering actions.
Features
- 🔌 Device Discovery: Lists connected Buttplug devices via the /devices resource.
- 📶 Signal/Battery Checks: Fetches RSSI signal strength (/device/{id}/rssi) and battery levels (/device/{id}/battery).
- 🌀 Direct Control: Triggers vibrations via the device_vibrate tool with motor/strength parameters.
- 🔧 Open & Hackable: MIT-licensed Golang project built on the go-buttplug and go-mcp libraries.
- ⚙️ Multi-Platform: Homebrew installs or pre-built binaries for macOS/Linux/Windows.
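MCP exposes resources over JSON-RPC 2.0, so the endpoints above map to resources/read requests on the wire. A minimal sketch of the message a client would send to read device 0's battery level; the URI shape here is an assumption based on the paths listed above, not the server's documented schema:

```python
import json

def read_resource_request(request_id: int, uri: str) -> str:
    """Build an MCP resources/read request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "resources/read",
        "params": {"uri": uri},
    })

# Assumed URI, mirroring the /device/{id}/battery path above.
print(read_resource_request(1, "/device/0/battery"))
```

In practice the MCP client (Claude Desktop, mcphost) constructs these messages for you; the sketch only shows what crosses the transport.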
Use Cases
- Prototyping AI intimacy tools: Test LLM-driven device interactions without building full UI.
- Troubleshooting hardware: Query battery or signal issues mid-session via chat.
- Automation scripting: Pair with HomeAssistant MCP to dim lights when devices activate.
- Educational experiments: Learn MCP protocol implementation using a tangible, unconventional dataset.
How to Use It
1. Installation
# Homebrew (macOS/Linux)
brew tap conacademy/homebrew-tap
brew install buttplug-mcp
# Binaries: GitHub Releases (all OS)
https://github.com/conacademy/buttplug-mcp/releases
2. Launch Intiface Central (default port: 12345).
3. Configure Claude Desktop (or another MCP client):
// claude_desktop_config.json
{
"mcpServers": {
"buttplug": {
"command": "/opt/homebrew/bin/buttplug-mcp",
"args": ["--ws-port", "12345"]
}
}
}
4. Local LLMs (Ollama + mcphost):
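The mcphost setup below reads an mcp.json config file that isn't shown here. A plausible version, assuming mcphost accepts the same mcpServers schema as the Claude Desktop config above (check the mcphost README to confirm):

```json
{
  "mcpServers": {
    "buttplug": {
      "command": "/opt/homebrew/bin/buttplug-mcp",
      "args": ["--ws-port", "12345"]
    }
  }
}
```

The "buttplug" key is an arbitrary server name; the command path follows the Homebrew install location used earlier.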
go install github.com/mark3labs/mcphost@latest
mcphost -m ollama:llama3.3 --config mcp.json
Basic Commands
Prompt your LLM with:
- "List my connected Buttplug devices" → returns JSON via the /devices resource.
- "Vibrate device ID 0 at 70% strength" → invokes the device_vibrate tool.
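Under the hood, the second prompt resolves to an MCP tools/call request. A sketch of that message, where the argument names (device_id, motor, strength) are assumptions inferred from the feature list above rather than the server's documented schema:

```python
import json

def vibrate_request(request_id: int, device_id: int, motor: int, strength: float) -> str:
    """Build a JSON-RPC 2.0 tools/call message for the device_vibrate tool.
    Argument names are assumed, not taken from the server's schema."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "device_vibrate",
            "arguments": {"device_id": device_id, "motor": motor, "strength": strength},
        },
    })

# "Vibrate device ID 0 at 70% strength" → motor 0 at strength 0.7
print(vibrate_request(2, 0, 0, 0.7))
```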
CLI Flags
| Flag | Purpose |
|---|---|
| --ws-port | Intiface Central port (default: 12345) |
| --sse | Switch transport to SSE |
| --log-json | JSON-structured logs |
FAQs
Q: Is this production-ready?
A: No. It’s unstable due to go-buttplug connection issues. Devices may not respond to commands—treat it as a proof-of-concept.
Q: Can I test without physical hardware?
A: Not currently. The lack of virtual device support limits debugging.
Q: My device isn’t vibrating—what’s wrong?
A: Verify Intiface Central detects the device first. If it’s “read-only” in Intiface, the go-buttplug library may be blocking commands.
General MCP FAQs
Q: What exactly is the Model Context Protocol (MCP)?
A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.
Q: How is MCP different from OpenAI's function calling or plugins?
A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.
Q: Can I use MCP with frameworks like LangChain?
A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.
Q: Why was MCP created? What problem does it solve?
A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.
Q: Is MCP secure? What are the main risks?
A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.
Q: Who is behind MCP?
A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.



