## Quick Answer
AI-generated Docker Compose files ship with three critical security flaws: databases exposed on public ports, default passwords like "postgres" or "password", and no network isolation between services. On a VPS, these defaults mean your database is publicly accessible from the internet with a password that every bot already knows. Fix it by removing external port mappings for internal services, using Docker secrets or strong environment variables, and creating internal networks.
## The Default Docker Compose is a Security Hole
When you ask an AI tool to generate a Docker Compose file, the output is optimized for local development convenience, not production security. The AI generates port mappings so you can access everything from your browser, uses simple passwords so the setup works immediately, and puts everything on the default network.
This is fine on localhost. But when you copy the same file to a VPS and run `docker compose up -d`, every mapped port is now publicly accessible on the internet. According to Shodan's 2024 data, over 900,000 PostgreSQL instances are exposed on the public internet, and 12,000+ Redis instances are accessible without authentication. Many of these are Docker containers running with default configurations.
The Sysdig 2024 Cloud Security report found that 75% of container security incidents involve misconfigured access controls - not sophisticated exploits, just default passwords and exposed ports.
## The Three Critical Flaws
### Flaw 1: Exposed Database Ports
```yaml
# ❌ BAD - Database accessible from the internet on port 5432
services:
  postgres:
    image: postgres:15
    ports:
      - "5432:5432"   # Anyone on the internet can connect
    environment:
      POSTGRES_PASSWORD: postgres

  redis:
    image: redis:7
    ports:
      - "6379:6379"   # Redis has no auth by default
```
```yaml
# ✅ GOOD - No external ports, internal network only
services:
  postgres:
    image: postgres:15
    # No 'ports' - only accessible from other containers
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}   # From .env file
    networks:
      - internal

  redis:
    image: redis:7
    command: redis-server --requirepass ${REDIS_PASSWORD}
    # No 'ports' - only accessible from other containers
    networks:
      - internal

  api:
    build: .
    ports:
      - "3000:3000"   # Only the API is publicly accessible
    networks:
      - internal

networks:
  internal:
    driver: bridge
```
### Flaw 2: Default Passwords
AI tools generate passwords like `postgres`, `password`, `admin`, or `changeme`. Bots scan for exactly these credentials. If your database port is exposed, it will be compromised within minutes.
```yaml
# ❌ BAD - Hardcoded default passwords
environment:
  POSTGRES_PASSWORD: postgres
  REDIS_PASSWORD: password
  MONGO_INITDB_ROOT_PASSWORD: admin123
```
```yaml
# ✅ GOOD - Strong passwords from .env file (not committed to git)
environment:
  POSTGRES_PASSWORD: ${DB_PASSWORD}
  REDIS_PASSWORD: ${REDIS_PASSWORD}
```

```
# .env (add to .gitignore!)
DB_PASSWORD=kP9$mN2xL7#qR4wE8vB1
REDIS_PASSWORD=hJ6&tF3yK9@pW5sA2mC8
```
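Rather than inventing passwords by hand, generate them. A minimal sketch using `openssl rand`, which is preinstalled on most Linux VPS images (the variable names match the Compose examples above):

```shell
# Generate two random 24-byte passwords, base64-encoded (32 characters each).
DB_PASSWORD="$(openssl rand -base64 24)"
REDIS_PASSWORD="$(openssl rand -base64 24)"

# Write them to .env - make sure .env is listed in .gitignore first.
printf 'DB_PASSWORD=%s\nREDIS_PASSWORD=%s\n' "$DB_PASSWORD" "$REDIS_PASSWORD" > .env

# Restrict the file to the deploy user.
chmod 600 .env
```

Note that base64 output can contain `/` and `+`; if some tool in your stack chokes on those characters, `openssl rand -hex 20` produces an alphanumeric-only alternative.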
### Flaw 3: No Network Isolation
By default, all services in a Docker Compose file share a single network. This means if one service is compromised, the attacker has direct access to every other service. Create separate networks for public-facing services and internal services.
| Service | Should Be Public | Should Have External Port | Network |
|---|---|---|---|
| Web app / API | Yes | Yes (80/443 via reverse proxy) | Public + Internal |
| PostgreSQL | No | No | Internal only |
| Redis | No | No | Internal only |
| MongoDB | No | No | Internal only |
| Workers / Queues | No | No | Internal only |
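The table above maps onto a Compose file with two named networks. A minimal sketch (the service name and image tag for the web app are illustrative):

```yaml
services:
  web:
    image: my-app:1.0          # illustrative image name
    ports:
      - "443:443"
    networks:
      - public
      - internal               # web reaches the databases internally

  postgres:
    image: postgres:15-alpine
    networks:
      - internal               # no path to the public network

networks:
  public:
    driver: bridge
  internal:
    driver: bridge
    internal: true             # containers on this network cannot reach the internet
```

Setting `internal: true` on the internal network also blocks outbound internet access from the database containers, which limits what a compromised service can exfiltrate.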
## Production Docker Compose Checklist
Before deploying your Docker Compose file to a VPS, check every item:
- Remove all database port mappings - PostgreSQL, Redis, MongoDB, and MySQL should have no `ports:` section
- Use strong passwords from .env - generate 20+ character passwords with symbols
- Add .env to .gitignore - never commit secrets to Git
- Create a .env.example - document required variables without real values
- Pin image versions - use `postgres:15-alpine`, not `postgres:latest`
- Add health checks - every service should have a `healthcheck:` block
- Set resource limits - add `mem_limit` and `cpus` to prevent one container from starving others
- Use a reverse proxy - Caddy, Nginx, or Traefik in front of your app for HTTPS termination
- Enable restart policies - `restart: unless-stopped` for production services
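Several of these checklist items translate directly into Compose options. A sketch for a single service (the limit values and health-check intervals are illustrative, not tuned recommendations):

```yaml
services:
  postgres:
    image: postgres:15-alpine        # pinned version, not :latest
    restart: unless-stopped          # survives crashes and reboots
    mem_limit: 512m                  # hard memory cap
    cpus: 1.0                        # at most one CPU core
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 5s
      retries: 3
    networks:
      - internal

networks:
  internal:
    driver: bridge
```

The `pg_isready` binary ships in the official postgres image, so the health check needs no extra tooling.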
Tools like VibeDoctor (vibedoctor.io) scan your Docker configuration for exposed ports, default passwords, unpinned image tags, and missing health checks, flagging exactly what needs to change before you deploy. Free to sign up.
## FAQ
### How do I access my database if I remove the port mapping?
Your app containers reach the database through the internal Docker network, using the service name as the hostname (e.g., `postgres://user:pass@postgres:5432/db`). For admin access, run the client inside the container with `docker compose exec postgres psql -U your_user your_db`, or temporarily add a loopback-only mapping (`"127.0.0.1:5432:5432"`) so the port is reachable from the VPS itself but not from the internet - and remove it as soon as you are done debugging.
### Is Docker Compose secure enough for production?
Docker Compose is fine for single-server deployments. Most startups and solo founders do not need Kubernetes. The security issues are not with Docker Compose itself but with the default configuration that AI tools generate. A properly configured Compose file with internal networks, strong passwords, and no exposed internal ports is production-ready.
### Should I use Docker secrets instead of environment variables?
Docker secrets (available in Swarm mode) are more secure because they are stored encrypted and mounted as files, not visible in `docker inspect`. For a solo founder on a single VPS, environment variables from a properly secured .env file are pragmatically sufficient. If you handle extremely sensitive data (financial, healthcare), consider Docker secrets or a secrets manager like Vault.
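For reference, Compose also supports file-backed secrets outside Swarm: they are bind-mounted files rather than an encrypted store, but they keep credentials out of `docker inspect` output, and the official postgres image reads `POSTGRES_PASSWORD_FILE` natively. A sketch (file paths are illustrative):

```yaml
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password              # mounted at /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this directory out of git
```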
### What image tag should I use if not "latest"?
Pin to a specific major.minor version: `postgres:15-alpine`, `redis:7-alpine`, `node:20-alpine`. This ensures reproducible builds while still getting patch updates. Never use `:latest` in production because it can pull breaking changes without warning during a rebuild.