PGMCP
PGMCP is an MCP server that provides a natural language interface to any PostgreSQL database.
It translates plain English questions into SQL queries, executes them safely with read-only access, and streams the results back to MCP clients like Claude Desktop or Cursor.
This MCP server enables you to interact with your existing database schemas, whether for analytics, e-commerce, or CRM, without writing a single line of SQL.
Features
- 🗣️ Natural Language to SQL: Ask questions in English; get SQL results.
- 🔒 Safe Read-Only Access: Blocks any write operations like INSERT, UPDATE, or DELETE.
- 🌊 Automatic Streaming: Handles huge result sets without eating all your memory.
- 🌐 Universal Compatibility: Works with any existing PostgreSQL database schema out of the box.
- 📄 Multiple Output Formats: Delivers results as a table, JSON, or CSV.
- 🛠️ Robust Error Handling: Recovers gracefully when the AI generates incorrect SQL.
Use Cases
- Rapid Data Exploration: You’re a developer jumping into a new project with a complex database. Instead of digging for schema diagrams, you can just ask PGMCP, “Show me all tables related to user profiles” or “What columns are in the orders table?” to get oriented quickly.
- Quick Answers for Non-Developers: A product manager needs to know how many users signed up last week. Instead of asking an engineer to run a query, they can use a client connected to PGMCP and ask the question directly, getting the number back instantly.
- Safe AI-Powered Development: You are using an AI assistant like Cursor to write code that interacts with the database. You can connect it to PGMCP to give the AI context about the database schema safely. The AI can then generate accurate code without having direct, risky access to the database credentials.
- Building Internal Tools: You can build a simple search interface for your support team on top of PGMCP. The search tool lets them perform a free-text search across all text columns to find customer records without needing a complicated admin panel.
How to Use It
1. Prerequisites:
- Go 1.23 or newer
- A PostgreSQL database
- An OpenAI API key (or another compatible API)
2. Compile the server and a test client from source.
# Build the server and client
go build -o pgmcp-server ./server
go build -o pgmcp-client ./client
3. Point it to your own database.
# Set the connection string for your database
export DATABASE_URL="postgres://user:password@localhost:5432/mydb"
# This step is optional, only needed for the sample marketplace data
psql $DATABASE_URL < schema.sql
4. Now, run the server. You need to provide an API key for the language model that generates the SQL.
# Set your API key and run the server
export OPENAI_API_KEY="your-api-key"
./pgmcp-server
5. You can test it with the client you compiled earlier.
# Ask a question and get a table-formatted response
./pgmcp-client -ask "Who is the user that places the most orders?" -format table
# Ask another question
./pgmcp-client -ask "Show me the top 40 most reviewed items in the marketplace" -format table
6. Control the server through these environment variables:
- DATABASE_URL (required): The connection string for your PostgreSQL database.
- OPENAI_API_KEY: Your OpenAI API key.
- OPENAI_MODEL: The model to use. Defaults to "gpt-4o-mini".
- HTTP_ADDR: The server listen address. Defaults to ":8080".
- HTTP_PATH: The MCP endpoint path. Defaults to "/mcp".
- AUTH_BEARER: An optional bearer token for simple authentication.
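Putting the variables above together, a typical configuration exported before starting the server might look like this (all values are placeholders):

```shell
# Example PGMCP configuration; replace the placeholder values with your own.
export DATABASE_URL="postgres://readonly_user:secret@localhost:5432/mydb"
export OPENAI_API_KEY="your-api-key"
export OPENAI_MODEL="gpt-4o-mini"   # default
export HTTP_ADDR=":8080"            # default
export HTTP_PATH="/mcp"             # default
export AUTH_BEARER="a-long-random-token"
```

With these set, `./pgmcp-server` picks them up from the environment at startup.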
7. To use it with Cursor, add the server to your settings file.
{
"mcp.servers": {
"pgmcp": {
"transport": {
"type": "http",
"url": "http://localhost:8080/mcp"
}
}
}
}
8. For Claude Desktop, edit ~/.config/claude-desktop/claude_desktop_config.json.
{
"mcpServers": {
"pgmcp": {
"transport": {
"type": "http",
"url": "http://localhost:8080/mcp"
}
}
}
}
9. The MCP server exposes three main tools via the MCP protocol:
- ask: Transforms a natural language question into SQL, executes it, and streams the results.
- search: Performs a case-insensitive text search across all textual columns in the database.
- stream: Provides advanced, paginated streaming for handling extremely large result sets.
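Under the hood, MCP clients invoke these tools with a standard JSON-RPC tools/call request over the HTTP transport. A call to the ask tool might look like the following sketch (the request envelope follows the MCP specification; the "question" argument name is illustrative, not confirmed by this page):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask",
    "arguments": {
      "question": "How many users signed up last week?"
    }
  }
}
```

Clients like Cursor and Claude Desktop construct these requests for you; you only ever type the question.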
FAQs
Q: Does PGMCP make any changes to my database?
A: No. It operates in a strictly read-only mode. It cannot alter your schema or your data. The server blocks any SQL commands like INSERT, UPDATE, DELETE, or DROP.
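As a toy illustration of the idea (not PGMCP's actual implementation, which enforces this server-side in Go), a write-statement gate can be as simple as checking the leading SQL keyword; real deployments should also rely on a read-only database role or read-only transactions for a hard guarantee:

```shell
# Toy illustration only: reject statements whose first keyword is a
# write operation. PGMCP's real enforcement happens inside the server.
is_read_only() {
  first=$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]' | awk '{print $1}')
  case "$first" in
    INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT) return 1 ;;
    *) return 0 ;;
  esac
}

is_read_only "SELECT count(*) FROM users" && echo "allowed"
is_read_only "DROP TABLE users" || echo "blocked"
```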
Q: What happens if the AI generates a bad or harmful SQL query?
A: PGMCP has safety features to handle this. First, its read-only enforcement blocks any write attempts. Second, if the AI generates syntactically incorrect SQL (e.g., referencing a column that doesn’t exist), the server catches the database error and returns a helpful JSON message with a suggestion instead of crashing.
Q: Can I use this with a local LLM like Ollama instead of OpenAI?
A: Yes. As long as your local LLM exposes an OpenAI-compatible API endpoint, you can configure PGMCP to use it.
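For instance, Ollama serves an OpenAI-compatible API under /v1. Assuming PGMCP reads a base-URL variable (the name OPENAI_BASE_URL below is an assumption, not confirmed by this page; check the project README), the setup could look like:

```shell
# Hypothetical: point PGMCP at a local Ollama instance.
# OPENAI_BASE_URL is an assumed variable name.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"     # Ollama accepts any non-empty key
export OPENAI_MODEL="llama3.1"
```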
General MCP FAQs
Q: What exactly is the Model Context Protocol (MCP)?
A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.
Q: How is MCP different from OpenAI's function calling or plugins?
A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.
Q: Can I use MCP with frameworks like LangChain?
A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.
Q: Why was MCP created? What problem does it solve?
A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.
Q: Is MCP secure? What are the main risks?
A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.
Q: Who is behind MCP?
A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies like Microsoft and VMware Tanzu who maintain official SDKs.