How Do I Build an MCP Server That Lets Claude Query My Postgres Database?

The first time I asked Claude to look at a client's Postgres database, I did the dumb thing. I dumped a CSV of the last 90 days, uploaded it to the chat, and asked for insights. Six hours later the project was drowning in stale exports, I couldn't remember which CSV was which, and every question required a new dump. That workflow doesn't scale.
What scales is MCP — Anthropic's Model Context Protocol — and specifically a read-only Postgres MCP server. Once you set one up, Claude can query your database directly, with a role that can't write anything, and your CSV graveyard disappears.
This guide builds that server from scratch in about an hour. No hosted tier, no SaaS dependency. Just Python, a role, and a config file.
Why this matters
MCP is a standard for exposing tools to Claude. An MCP "server" is a small program that registers a set of functions Claude can call. Claude asks the server to run a function (say, query(sql)), the server runs it, returns the result, and Claude uses the data in its reply.
For a database, the MCP server is the safest possible shape for giving Claude access. You control:
- Which role it connects as (a read-only role you create just for this)
- Which queries it allows (via SQL, LIMIT clauses, cost guards)
- What it returns (row count caps, column whitelists)
None of that lives in the prompt. It lives in your server code, where an attacker can't jailbreak their way past it.
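Concretely, when Claude invokes a tool, the client sends the server a JSON-RPC 2.0 request. The method and field names below follow the MCP spec; the tool name and arguments are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_query",
    "arguments": { "sql": "SELECT count(*) FROM users" }
  }
}
```

The server replies with a result message, and Claude folds the returned rows into its answer. You never write this JSON yourself; the MCP library handles it.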
The alternative — pasting schemas and dumps into chats, or handing Claude a write-capable role and crossing your fingers — is how you end up on the bad side of a postmortem.
Before you start
You need:
- A Postgres database you own (or have admin rights to). Local, RDS, Supabase, Neon — all fine.
- Python 3.10+ on your machine. `python3 --version` should print 3.10 or newer.
- Claude Code or Claude Desktop — both speak MCP. If you have neither, start with our Claude Code macOS setup guide.
- The `uv` package manager for fast Python dependency handling. Install from docs.astral.sh/uv.
- About 60 minutes the first time. Iterating on the schema is fast once this scaffolding is in place.
Step 1: Create a read-only Postgres role
Before a single line of Python, create the role Claude will use. This is the single most important security step.
Connect to your database as a superuser and run:
```sql
-- Create a role Claude will authenticate as.
CREATE ROLE claude_readonly WITH LOGIN PASSWORD 'replace-with-strong-password';

-- Can't create new objects.
REVOKE CREATE ON SCHEMA public FROM claude_readonly;

-- Can read current tables.
GRANT USAGE ON SCHEMA public TO claude_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO claude_readonly;

-- Can read future tables you create too.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO claude_readonly;

-- Lock the session to read-only even if we mess up the Python code later.
ALTER ROLE claude_readonly SET default_transaction_read_only = on;

-- Budget: cancel any query that runs longer than 15 seconds.
ALTER ROLE claude_readonly SET statement_timeout = '15s';
```

Four layers of defense: no CREATE, only SELECT, forced read-only transactions, and a forced query timeout. If Claude ever tries to DROP TABLE (or, more realistically, if a prompt-injection attempt from a malicious document tries to), it dies at the database level before your Python code even hears about it.
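To audit exactly what the role ended up with, you can ask the standard information_schema catalog (run this as any privileged user):

```sql
SELECT table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'claude_readonly'
ORDER BY table_name;
```

Every row should say SELECT and nothing else.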
Test the role:

```shell
psql -U claude_readonly -d your_database_name -c "SELECT now();"
```

If that returns a timestamp, you're good. Now try to break it:

```shell
psql -U claude_readonly -d your_database_name -c "CREATE TABLE boom(id int);"
# ERROR: permission denied for schema public
```

Perfect.
Step 2: Scaffold the MCP server with FastMCP
FastMCP is the Python library that makes MCP servers roughly as short as writing a Flask app. Create a project directory:
```shell
mkdir -p ~/code/postgres-mcp && cd ~/code/postgres-mcp
uv init
uv add fastmcp psycopg2-binary
```

Now create server.py:
```python
import os
from contextlib import contextmanager

from fastmcp import FastMCP
import psycopg2
from psycopg2.extras import RealDictCursor

mcp = FastMCP("postgres-mcp")

# Single source of truth for the connection string.
CONN_STR = os.environ["CLAUDE_POSTGRES_DSN"]

# Hard cap on rows returned to Claude. Nobody needs a 50k-row
# JSON dump pasted into chat context.
MAX_ROWS = 200


@contextmanager
def get_cursor():
    conn = psycopg2.connect(CONN_STR)
    try:
        with conn.cursor(cursor_factory=RealDictCursor) as cur:
            yield cur
    finally:
        conn.close()


@mcp.tool()
def list_tables() -> list[str]:
    """List all user-facing tables in the public schema."""
    with get_cursor() as cur:
        cur.execute(
            "SELECT tablename FROM pg_tables WHERE schemaname='public' ORDER BY tablename"
        )
        return [row["tablename"] for row in cur.fetchall()]


@mcp.tool()
def describe_table(table_name: str) -> list[dict]:
    """Return the column names and types for a given table."""
    with get_cursor() as cur:
        cur.execute(
            """
            SELECT column_name, data_type, is_nullable
            FROM information_schema.columns
            WHERE table_schema='public' AND table_name=%s
            ORDER BY ordinal_position
            """,
            (table_name,),
        )
        return cur.fetchall()


@mcp.tool()
def run_query(sql: str) -> dict:
    """
    Run a SELECT query. Returns at most 200 rows. The database role
    is read-only and has a 15s statement timeout, so non-SELECT
    statements will fail.
    """
    with get_cursor() as cur:
        cur.execute(sql)
        rows = cur.fetchmany(MAX_ROWS)
        truncated = len(rows) == MAX_ROWS
        return {
            "row_count": len(rows),
            "truncated": truncated,
            "rows": rows,
        }


if __name__ == "__main__":
    mcp.run()
```

That's the whole server. Three tools: list tables, describe a table, run a query. Claude can chain them — list, pick one, describe its columns, write a SELECT — and never touch anything outside of read.
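One subtlety in run_query: the truncated flag is a heuristic. A stand-in for the fetchmany logic (pure Python, no database needed; `cap_rows` is a name invented for this sketch) shows the boundary case:

```python
MAX_ROWS = 200

def cap_rows(all_rows: list) -> dict:
    # Mirrors run_query's fetchmany(MAX_ROWS) behavior: keep at most
    # MAX_ROWS rows and flag whether the result may have been cut off.
    rows = all_rows[:MAX_ROWS]
    return {"row_count": len(rows), "truncated": len(rows) == MAX_ROWS, "rows": rows}

print(cap_rows(list(range(500)))["truncated"])  # True: more rows existed
print(cap_rows(list(range(50)))["truncated"])   # False: complete result
```

A result of exactly 200 rows also reads as truncated even when nothing was cut. That false positive is harmless here; it only nudges Claude toward adding a LIMIT or an aggregate.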
Notice run_query doesn't try to parse the SQL or blocklist words like "DELETE." That approach loses every time against a determined attacker. The defense is structural: the role can't write, period.
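To see why, here's a hypothetical filter of the kind many guides suggest, and a query that sails past it. Postgres allows data-modifying statements inside a WITH clause, so the statement below deletes rows despite reading like a SELECT:

```python
import re

def naive_is_safe(sql: str) -> bool:
    # Hypothetical blocklist: reject statements that *start* with a
    # dangerous verb. This is the approach run_query deliberately avoids.
    banned = re.compile(r"^\s*(DELETE|UPDATE|INSERT|DROP|ALTER)\b", re.IGNORECASE)
    return banned.match(sql) is None

# A data-modifying CTE: deletes every user, yet starts with WITH.
sneaky = "WITH gone AS (DELETE FROM users RETURNING id) SELECT count(*) FROM gone"

print(naive_is_safe(sneaky))  # True: the blocklist waves it through
```

Against the claude_readonly role, that same statement fails at the database with "cannot execute DELETE in a read-only transaction" — which is the point.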
Step 3: Register the server with Claude Code
Claude Code reads project-scoped MCP config from a `.mcp.json` file at the project root; you can also register servers user-wide with `claude mcp add --scope user`. I keep database servers project-local so different projects hit different databases.

Create `.mcp.json` at your project root:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "uv",
      "args": ["run", "--directory", "/Users/you/code/postgres-mcp", "python", "server.py"],
      "env": {
        "CLAUDE_POSTGRES_DSN": "postgresql://claude_readonly:the-password@localhost:5432/your_database"
      }
    }
  }
}
```

Replace the path and the connection string. Don't commit this file if it has a password in it: add `.mcp.json` to your .gitignore, or use a secrets tool like the 1Password CLI to substitute the password at runtime.
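One gotcha with DSNs in config files: if the password contains URL-reserved characters, it must be percent-encoded or the connection string won't parse. A quick sketch with a hypothetical password:

```python
from urllib.parse import quote

password = "p@ss:word/1"  # hypothetical password with URL-reserved characters
dsn = f"postgresql://claude_readonly:{quote(password, safe='')}@localhost:5432/your_database"
print(dsn)
# postgresql://claude_readonly:p%40ss%3Aword%2F1@localhost:5432/your_database
```

Paste the encoded form into the env block, or better, generate it at deploy time.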
Step 4: Test from Claude
Open Claude Code in the project folder:

```shell
claude
```

Ask:

```
What tables are in the database?
```

Claude should call list_tables and show you the list. If it works, try:

```
Describe the users table and then count how many users signed up
in the last 7 days.
```

Claude should call describe_table("users"), read the schema, then construct a SELECT against it via run_query, and give you the answer.

If it fails, it's almost always one of two things: the DSN is wrong (test it with psql independently), or Claude Code didn't load the MCP config (restart Claude Code from the project root).
Step 5: Harden for real use
Before this server touches anything beyond a staging database, add three things:
1. Connection pooling. Creating a connection per query is fine for five queries a day. For serious usage, drop in psycopg2.pool.ThreadedConnectionPool and reuse connections.
2. A query cost guard. Before running anything, ask the planner with EXPLAIN and reject queries whose estimated cost is above a threshold:

```python
MAX_COST = 1_000_000  # Very rough guard; tune to your database size.

@mcp.tool()
def run_query(sql: str) -> dict:
    with get_cursor() as cur:
        # Get the planner's cost estimate before executing anything.
        cur.execute(f"EXPLAIN (FORMAT JSON) {sql}")
        plan = cur.fetchone()["QUERY PLAN"][0]["Plan"]
        if plan["Total Cost"] > MAX_COST:
            return {"error": "Query plan too expensive; refine it."}
        cur.execute(sql)
        rows = cur.fetchmany(MAX_ROWS)
        return {"row_count": len(rows), "rows": rows}
```

3. Table whitelisting. If Claude doesn't need to see users.password_hash or payments.card_number, don't GRANT SELECT on them. Revoke the table-wide grant and re-grant only the safe columns:

```sql
REVOKE SELECT ON users FROM claude_readonly;
GRANT SELECT (id, email, created_at, status) ON users TO claude_readonly;
```

Now Claude can query the users table but physically cannot see password hashes. Defense in depth.
Verify it worked
You know your setup is solid when all three of these pass:
You: "List the tables."
Claude: [uses list_tables, shows the list]

You: "Try to delete one."
Claude: [attempts run_query with a DELETE, gets a permission error, reports it]

You: "What's our top revenue day last month?"
Claude: [constructs a SELECT across the right tables, returns the answer]

The second test is the most important. You want Claude hitting a wall, not producing a wall of text about "I shouldn't do that" — you want the database to be the wall.
Where this breaks
- Using the default `postgres` user. Every guide on the internet does this. Don't. A moment of laziness here is the difference between a contained mistake and a destroyed production database.
- Skipping the `statement_timeout`. A Claude-generated query missing a LIMIT and a WHERE can pin a CPU for minutes. The role-level timeout is your airbag.
- Running over the open internet without TLS. If your database is remote, use `sslmode=require` in the DSN. Your read-only password is still a password.
- Treating the MAX_ROWS cap as the last line of defense. Claude can ask for aggregates that return 1 row after scanning 50M. That's a database-cost problem, not a row-count problem. Use the cost guard.
- Forgetting to `pg_dump` before you trust it. Even a plain SELECT takes a table-level lock that can block a concurrent migration. Test against a backup first.
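For the TLS point, the DSN change is a single query parameter (the host below is a placeholder):

```
postgresql://claude_readonly:the-password@db.example.com:5432/your_database?sslmode=require
```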
What to try next
- How Do I Cut My Anthropic Bill in Half Using the Batch API? — once Claude is calling your database a hundred times a day, batch pricing is how you keep the bill sane.
- How Do I Automate a Weekly SEO Audit That Emails Me the Wins and Losses? — similar pattern, different data source.
- Want this built for your stack, with the role setup, pooling, and the full hardening pass? Let's talk.
Let's talk about your AI + SEO stack
If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.
Let's Talk