Dockerizing Python and Django Applications
Python in containers has more sharp edges than Node. Here's how to handle native dependencies, virtualenvs, Gunicorn, and migrations cleanly in a Django app.
Python in Docker is more annoying than Node in Docker. Native dependencies, virtualenvs, and a sprawling ecosystem of recommended servers all conspire to make the first attempt go sideways. This guide walks through a set of good-defaults for a Django app: a multi-stage Dockerfile, a sane production server, and a clean way to handle migrations and static files.
Picking a base image
- python:3.12-slim — Debian-based, balanced size; build tools aren't preinstalled but are a quick apt-get away.
- python:3.12-alpine — smallest, but it uses musl instead of glibc, so packages with C extensions often can't use prebuilt manylinux wheels and must be compiled from source.
- python:3.12 — full Debian, biggest, easiest for problematic dependencies.
Default to python:3.12-slim unless image size is critical and you've verified your dependencies build on Alpine.
Production Dockerfile
FROM python:3.12-slim AS builder
WORKDIR /app
# System deps for common Python wheels (psycopg, lxml, etc.)
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential libpq-dev && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir=/wheels -r requirements.txt
FROM python:3.12-slim
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
# Runtime native libs only
RUN apt-get update && apt-get install -y --no-install-recommends \
libpq5 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*
COPY . .
# Non-root
RUN adduser --disabled-password --gecos '' app && chown -R app /app
USER app
EXPOSE 8000
CMD ["gunicorn", "myproject.wsgi", "--bind", "0.0.0.0:8000", "--workers", "3", "--timeout", "60"]
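The COPY . . step pulls in the entire build context, so pair the Dockerfile with a .dockerignore to keep the image small and avoid leaking secrets or stale artifacts into layers. A reasonable starting point (adjust the names to your project):

```
.git
.dockerignore
Dockerfile
__pycache__/
*.pyc
.venv/
.env
db.sqlite3
staticfiles/
```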
Gunicorn vs runserver
manage.py runserver is for development only. In production use Gunicorn (sync) or Uvicorn (async, for Django + ASGI). Both are battle-tested and integrate cleanly with the container model.
Migrations
Don't bake migrations into the image entrypoint — that runs on every container start and creates race conditions when scaling. Run them once, as a one-shot job:
docker compose run --rm web python manage.py migrate
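If you'd rather capture the job in Compose than type it by hand, one pattern is a dedicated one-shot service reusing the same image (the service name migrate here is a convention, not anything Compose requires):

```yaml
services:
  migrate:
    build: .
    command: python manage.py migrate --noinput
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      db: { condition: service_healthy }
```

Run it with docker compose run --rm migrate; the container applies pending migrations and exits.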
compose.yaml for local dev
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports: ["8000:8000"]
volumes: [".:/app"]
environment:
DATABASE_URL: postgres://app:secret@db:5432/app
depends_on:
db: { condition: service_healthy }
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: app
POSTGRES_PASSWORD: secret
POSTGRES_DB: app
healthcheck:
test: ["CMD-SHELL", "pg_isready -U app"]
interval: 5s
timeout: 5s
retries: 5
volumes: ["db:/var/lib/postgresql/data"]
volumes:
db:
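The compose file hands the app a DATABASE_URL. Many projects reach for the dj-database-url package to consume it; a dependency-free sketch of the same idea with urllib.parse, matching the postgres:// URL shape used above:

```python
# settings.py (fragment) -- turn DATABASE_URL into Django's DATABASES dict
import os
from urllib.parse import urlparse

def database_from_url(url):
    """Translate postgres://user:pass@host:port/name into Django's format."""
    parsed = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parsed.path.lstrip("/"),
        "USER": parsed.username,
        "PASSWORD": parsed.password,
        "HOST": parsed.hostname,
        "PORT": parsed.port or 5432,
    }

DATABASES = {
    "default": database_from_url(
        os.environ.get("DATABASE_URL", "postgres://app:secret@localhost:5432/app")
    )
}
```

Keeping the parsing in one function means the same settings file works locally, in Compose, and in production, with only the environment changing.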
Static files
In production, run python manage.py collectstatic as part of the image build, then serve /static/ from a CDN, object storage like S3, or Whitenoise. Plain Gunicorn doesn't serve static files at all, and Django's own static serving only works with DEBUG on — so without one of these, /static/ simply 404s.
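Whitenoise is the lowest-effort of the three: the app process serves the collected files itself, with compression and cache-friendly hashed filenames. A settings.py sketch, assuming whitenoise is in requirements.txt (the middleware must sit immediately after SecurityMiddleware, and this shows only the static-files half of the Django 4.2+ STORAGES setting):

```python
# settings.py (fragment) -- serve collected static files via Whitenoise
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # collectstatic writes here

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # directly after SecurityMiddleware
    # ... the rest of the default middleware stack ...
]

STORAGES = {
    "staticfiles": {
        # Gzip/Brotli-compressed files with hashed names for far-future caching
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}
```

With this in place, the collectstatic step in the Dockerfile build is all the deployment needs — no separate static-file container or bucket sync.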