Sidecar pattern with Docker Compose

The most common deployment runs the Probe as a service alongside your application in docker-compose.yml; your application container reaches the Probe by its service name on the Compose network.

Minimal example

docker-compose.yml
version: "3.9"
services:
  app:
    build: .
    environment:
      - ANTHROPIC_BASE_URL=http://govern-probe:4020
    depends_on:
      govern-probe:
        condition: service_healthy
  govern-probe:
    image: archetypal/govern-probe:latest
    environment:
      - GOVERN_API_KEY=${GOVERN_API_KEY}
      - GOVERN_ORG_ID=${GOVERN_ORG_ID}
      - UPSTREAM_URL=https://api.anthropic.com
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4020/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
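With this wiring the app itself needs no code changes: it only reads the base-URL variable. A minimal sketch, assuming the Anthropic SDK honors `ANTHROPIC_BASE_URL` and posts to the standard `/v1/messages` path (both assumptions, not confirmed above):

```shell
# Sketch: inside the app container, the SDK builds its request URL
# from ANTHROPIC_BASE_URL, so traffic flows through the probe service.
ANTHROPIC_BASE_URL="http://govern-probe:4020"
echo "SDK request URL: ${ANTHROPIC_BASE_URL}/v1/messages"
```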

Create a .env file:

GOVERN_API_KEY=gvn_live_xxxx
GOVERN_ORG_ID=org_xxxx

Start:

docker compose up -d

Full production example

docker-compose.yml
version: "3.9"
networks:
  app-network:
    driver: bridge
volumes:
  probe-config:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - ANTHROPIC_BASE_URL=http://govern-probe:4020
      - OPENAI_BASE_URL=http://govern-probe:4020
    depends_on:
      govern-probe:
        condition: service_healthy
    ports:
      - "3000:3000"
  govern-probe:
    image: archetypal/govern-probe:latest
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - GOVERN_API_KEY=${GOVERN_API_KEY}
      - GOVERN_ORG_ID=${GOVERN_ORG_ID}
      - GOVERN_ENV=production
      - UPSTREAM_URL=${AI_UPSTREAM_URL:-https://api.anthropic.com}
      - SCORING_MODE=${SCORING_MODE:-flag}
    volumes:
      - ./config/govern-probe.yaml:/app/config/default.yaml:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4020/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"
        reservations:
          memory: 64M
          cpus: "0.1"
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "3"
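The `${VAR:-default}` forms above use Compose's shell-style interpolation: the default applies only when the variable is unset or empty. The behavior matches POSIX shell expansion, which you can sanity-check directly:

```shell
# ${VAR:-default}: the default is used only when VAR is unset or empty,
# mirroring how Compose interpolates UPSTREAM_URL and SCORING_MODE above.
unset AI_UPSTREAM_URL
echo "${AI_UPSTREAM_URL:-https://api.anthropic.com}"   # variable unset -> default
AI_UPSTREAM_URL="https://api.openai.com"
echo "${AI_UPSTREAM_URL:-https://api.anthropic.com}"   # variable set -> its value
```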

Multi-provider configuration

If your app calls multiple AI providers, run one Probe per upstream:

services:
  app:
    environment:
      - ANTHROPIC_BASE_URL=http://probe-anthropic:4020
      - OPENAI_BASE_URL=http://probe-openai:4020
  probe-anthropic:
    image: archetypal/govern-probe:latest
    environment:
      - GOVERN_API_KEY=${GOVERN_API_KEY}
      - GOVERN_ORG_ID=${GOVERN_ORG_ID}
      - UPSTREAM_URL=https://api.anthropic.com
      - GOVERN_PROBE_ID=probe-anthropic
  probe-openai:
    image: archetypal/govern-probe:latest
    environment:
      - GOVERN_API_KEY=${GOVERN_API_KEY}
      - GOVERN_ORG_ID=${GOVERN_ORG_ID}
      - UPSTREAM_URL=https://api.openai.com
      - GOVERN_PROBE_ID=probe-openai

The GOVERN_PROBE_ID label appears in all telemetry events, letting you differentiate traffic in the dashboard.
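If that label is also emitted in the JSON log lines, traffic from each probe can be separated with ordinary text tools. A sketch, assuming a `probe_id` field (the field name is an assumption, not documented above):

```shell
# Hypothetical log line; the "probe_id" field name is an assumption.
line='{"level":"info","probe_id":"probe-anthropic","msg":"inference intercepted"}'
echo "$line" | grep -o '"probe_id":"[^"]*"'
```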

Override mode for local development

In development you may want logging only, not blocking:

# docker-compose.override.yml (git-ignored)
services:
  govern-probe:
    environment:
      - SCORING_MODE=log
      - GOVERN_ENV=development
# Development (log mode)
docker compose up
# Production (uses base compose, flag mode)
docker compose -f docker-compose.yml up -d
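To confirm which mode will actually run, `docker compose config` prints the fully merged and interpolated configuration. With the override present, the probe's environment would render roughly like this (abridged sketch; Compose renders the environment list as a map, and later files win per key):

```yaml
# Abridged sketch of `docker compose config` output with the override applied.
services:
  govern-probe:
    environment:
      GOVERN_ENV: development   # from docker-compose.override.yml
      SCORING_MODE: log         # from docker-compose.override.yml
```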

Checking logs

# Follow probe logs
docker compose logs govern-probe --follow
# Sample output
# {"level":"info","msg":"inference intercepted","model":"claude-sonnet-4","tokens":847,"latency_ms":2134}
# {"level":"info","msg":"scores computed","security":0.02,"bias":0.01,"accuracy":0.89,"drift":0.05,"cost":0.12}
# {"level":"info","msg":"telemetry batch flushed","batch_size":1,"duration_ms":82}
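Because each log line is single-line JSON, quick spot checks need no extra tooling. For example, pulling the numeric score fields out of the "scores computed" sample above with grep (the line is copied verbatim from the sample output):

```shell
# Extract the numeric score fields from the sample "scores computed" line;
# grep -o prints each matching key:value pair on its own line.
line='{"level":"info","msg":"scores computed","security":0.02,"bias":0.01,"accuracy":0.89,"drift":0.05,"cost":0.12}'
echo "$line" | grep -o '"[a-z]*":[0-9][0-9.]*'
```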