Docker Compose, direct hosting on a VPS you own
Self-host on Hetzner / DigitalOcean / your own server. node:22-slim base, env_file pattern, health check, the --force-recreate gotcha.
MCPize is great until you need custom infra: more memory, a GPU, on-prem requirements, or you just want to own the host. Docker Compose on a Hetzner / DigitalOcean / Vultr VPS gives you full control for €5-10/month. This recipe is the production-tested compose file plus the env-file gotcha that silently breaks half of all restarts.
Step 1: The Dockerfile
# Dockerfile
FROM node:22-slim
WORKDIR /app
# Copy manifests first for layer caching
COPY package*.json ./
RUN npm ci --omit=dev
# Then source
COPY dist ./dist
COPY recipes ./recipes
COPY scripts ./scripts
# Healthcheck, node fetch() because slim doesn't have curl
HEALTHCHECK --interval=30s --timeout=5s --start-period=20s --retries=3 \
  CMD node -e "fetch('http://localhost:'+process.env.PORT+'/health').then(r => process.exit(r.ok ? 0 : 1))"
EXPOSE 3000
CMD ["node", "dist/server.js"]
Three intentional choices:
- `node:22-slim`. Debian Bookworm-based, ~150MB compressed, has glibc (Alpine's musl breaks Chromium / native deps).
- `npm ci --omit=dev`. Production deps only, ~5x smaller node_modules.
- `node fetch()` for the healthcheck. node:22-slim doesn't ship curl or wget; `node -e "fetch(...)"` is the lightest alternative.
COPY package*.json before COPY dist is the layer-cache trick: package changes are rare, dist changes on every build.
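You can watch the cache pay off by building twice with only a source change in between; a minimal sketch (the image tag is illustrative):
# Layer-cache check, run from the repo root
docker build -t my-mcp .   # first build: runs npm ci from scratch
touch dist/server.js       # simulate a code-only change
docker build -t my-mcp .   # second build: the npm ci layer comes back as CACHED
# Only the COPY dist step and everything after it re-runs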
Step 2: docker-compose.yml
# docker-compose.yml
services:
  my-mcp:
    build: .
    container_name: my-mcp
    restart: unless-stopped
    network_mode: host  # see Step 3 for why
    env_file: .env      # see Step 4 for the trap
    environment:
      - NODE_ENV=production
      - PORT=3000
      - HOST=127.0.0.1  # bind to loopback only
    healthcheck:
      test: ["CMD-SHELL", "node -e \"fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1))\""]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 20s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Key choices explained in the next steps.
Step 3: `network_mode: host` vs default bridge
For a single-server setup, network_mode: host is simpler:
- Container binds directly to the host's localhost, no port mapping.
- `127.0.0.1:3000` works from both inside and outside the container.
- Nginx (also on host) reverse-proxies to `localhost:3000` without Docker port forwarding.
Trade-off: container can see all host ports (security implication if you're running multi-tenant containers). For single-app servers, fine. For shared servers, use bridge mode + explicit port mapping.
`HOST=127.0.0.1` keeps the server off `0.0.0.0`, so there's no accidental public exposure even on the host network.
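Worth verifying once after the first deploy; a quick check, assuming `ss` is available (it ships with Debian/Ubuntu) and `YOUR_SERVER_IP` stands in for the VPS address:
# On the VPS: the listener should be on loopback, not 0.0.0.0
ss -tlnp | grep 3000
# want: LISTEN ... 127.0.0.1:3000 ...
# bad:  LISTEN ... 0.0.0.0:3000   (publicly reachable)

# From your laptop: the raw port should NOT answer
curl -s --max-time 5 http://YOUR_SERVER_IP:3000/health || echo "good, not exposed"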
Step 4: The env_file trap: `restart` doesn't reload
# Edit .env
echo "STRIPE_SECRET_KEY=sk_live_new" >> .env
# WRONG, restart doesn't reload env_file
docker compose restart my-mcp
# Container restarts but still uses old STRIPE_SECRET_KEY
# RIGHT, recreate from compose-time env
docker compose up -d --force-recreate my-mcp
docker compose restart restarts the process but keeps the existing container with its existing environment. up -d --force-recreate recreates the container from the compose file, which re-reads .env.
This took us out for 25 hours once. The fix is muscle memory: env change → --force-recreate. Always.
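A sanity check that makes the muscle memory verifiable (variable name matches the example above; printing only a prefix keeps the secret out of your scrollback):
# Confirm the running container actually has the new value
docker exec my-mcp node -e "console.log((process.env.STRIPE_SECRET_KEY || '').slice(0, 11))"
# → sk_live_new after --force-recreate; the old prefix means you only ran restart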
Step 5: The deploy script
#!/bin/bash
# deploy.sh, run from your local dev box
set -e
REMOTE=ai-server # SSH alias
REMOTE_DIR=/opt/my-mcp
# 1. Build locally first to catch errors early
npm ci && npm run build
# 2. Sync to server (excluding node_modules, .env, .git)
rsync -av --delete \
  --exclude=node_modules --exclude=.git --exclude=.env --exclude=dist/cache \
  ./ $REMOTE:$REMOTE_DIR/
# 3. One SSH call: build + restart + health check
ssh $REMOTE "set -e
  cd $REMOTE_DIR
  docker compose build
  docker compose up -d --force-recreate my-mcp
  sleep 8
  docker inspect my-mcp --format '{{.State.Health.Status}}'
  curl -s http://localhost:3000/health
"
SSH batching matters. One SSH call with && chains is cheap; ten separate SSH calls (ssh ... cd, ssh ... build, ssh ... up, ...) trigger fail2ban after 10 minutes (we got banned from our own server once and couldn't SSH back in for hours). One SSH call = no ban.
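If you do get banned, recovery has to go through the provider's web console (VNC / recovery shell), because SSH is exactly what's blocked. A sketch, assuming the default sshd jail and a placeholder IP:
# From the provider's web console, not SSH:
fail2ban-client status sshd                   # lists currently banned IPs
fail2ban-client set sshd unbanip 203.0.113.7  # your laptop's public IP

# Permanent fix: whitelist your IP under [DEFAULT] in /etc/fail2ban/jail.local:
#   ignoreip = 127.0.0.1/8 203.0.113.7
# then restart: systemctl restart fail2ban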
Step 6: nginx reverse proxy
(Full setup in 8.3, just the relevant bit here):
# /etc/nginx/sites-enabled/your-mcp.conf
server {
    listen 443 ssl http2;
    server_name your-mcp.io;
    ssl_certificate /etc/letsencrypt/live/your-mcp.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-mcp.io/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 3600s;  # SSE streams
    }
}
proxy_read_timeout 3600s is the one non-default that matters: MCP Streamable HTTP holds connections open for SSE responses, and the default 60s timeout would cut them off mid-stream.
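You can prove the timeout is lifted by holding a stream open past nginx's 60s default (the `/mcp` path is illustrative; use whatever endpoint your server streams on):
# Hold an SSE stream open for 90s; -N disables curl's output buffering
timeout 90 curl -N -H "Accept: text/event-stream" https://your-mcp.io/mcp
# With the default proxy_read_timeout the stream dies at ~60s and nginx
# logs "upstream timed out"; with 3600s it survives the full 90 seconds.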
Step 7: Verify
Run academy_validate_step. The validator checks that package.json is wired up. For the deploy itself:
ssh $REMOTE 'docker ps --filter name=my-mcp'
# → my-mcp Up 2 minutes (healthy)
curl -s https://your-mcp.io/health
# → {"status":"ok","version":"1.0.0",...}
(healthy) in the docker ps output means the healthcheck passed. If it says (unhealthy) or (starting) for more than 30 seconds, check logs:
ssh $REMOTE 'docker compose -f /opt/my-mcp/docker-compose.yml logs my-mcp --tail=50'
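The healthcheck also keeps its own log inside `docker inspect`, which often names the culprit faster than the app logs; a sketch using the same SSH alias:
# Last few healthcheck attempts, with exit codes and captured output
ssh $REMOTE "docker inspect my-mcp --format '{{json .State.Health}}'" | python3 -m json.tool
# ExitCode 1 with empty Output usually means fetch() rejected: wrong PORT,
# or the server wasn't listening yet (raise start_period)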
Common traps
- `restart` doesn't reload `.env`. Always use `--force-recreate` after env changes.
- No healthcheck. Docker can't restart a hung container. Always add one.
- Healthcheck uses `curl`. node:22-slim doesn't have curl; use `node -e "fetch(...)"`.
- Multiple SSH calls. fail2ban will ban you. Batch with one SSH + `&&`.
- `network_mode: host` + `HOST=0.0.0.0`. Accidental public exposure. Set `HOST=127.0.0.1`.
- Forgetting `proxy_read_timeout` on nginx. SSE responses cut off at 60s.
- No log rotation. JSON-file logs eat disk. Set `max-size: 10m`, `max-file: 3` (see the check after this list).
- Building on the production server. Slow and burns prod CPU. Build locally, rsync the dist, build the image on prod (Docker layers cache).
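The log-rotation trap is easy to audit: Docker exposes each container's json-file path via inspect (run on the VPS; sudo because the log directory is root-owned):
# Size of the container's json-file logs, including rotated files
sudo du -h "$(docker inspect my-mcp --format '{{.LogPath}}')"*
# With max-size 10m / max-file 3 you should never see more than ~30MB here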
What good looks like
One docker-compose.yml, one Dockerfile, one deploy.sh. Deploy is ./deploy.sh from your laptop, ~30 seconds total, container (healthy) in 10 seconds, external health URL returns 200. nginx reverse-proxy with HTTPS. No env-file confusion because the deploy script always uses --force-recreate.
If you find yourself SSH-ing into the production server to debug a deploy, something in the script is wrong: fix the script, not the server.