Docker

One container, one volume. Full LLM proxy with tracing, caching, and guardrails.

Quick start

Run Stockyard with a single docker run command:

docker run -d \
  --name stockyard \
  -p 4200:4200 \
  -e OPENAI_API_KEY=sk-... \
  -e STOCKYARD_ADMIN_KEY=sy_admin_your_secret \
  -v stockyard-data:/data \
  ghcr.io/stockyard-dev/stockyard:latest

That is it. The proxy is running at http://localhost:4200/v1, the console is at http://localhost:4200/ui, and your data persists across restarts in the stockyard-data volume.

# Verify it is running
curl http://localhost:4200/health
# {"status":"ok"}

# Send a request through the proxy
curl http://localhost:4200/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}'

Docker Compose

For a more permanent setup, use a docker-compose.yml:

# docker-compose.yml
services:
  stockyard:
    image: ghcr.io/stockyard-dev/stockyard:latest
    container_name: stockyard
    restart: unless-stopped
    ports:
      - "4200:4200"
    volumes:
      - stockyard-data:/data
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - STOCKYARD_ADMIN_KEY=${STOCKYARD_ADMIN_KEY}

volumes:
  stockyard-data:

Create a .env file alongside it with your keys:

# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
STOCKYARD_ADMIN_KEY=sy_admin_your_secret

Then start it:

docker compose up -d

Important
Never commit your .env file. Add it to .gitignore. Stockyard encrypts provider keys at rest with AES-256-GCM, but the admin key and raw env vars should stay out of version control.
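The STOCKYARD_ADMIN_KEY value is up to you. One way to generate a strong one (assuming openssl is on your PATH; the sy_admin_ prefix is just the naming convention these docs use):

```shell
# Generate a random 48-hex-character admin key with the sy_admin_ prefix
ADMIN_KEY="sy_admin_$(openssl rand -hex 24)"
echo "$ADMIN_KEY"
```

Put the result in your .env file as STOCKYARD_ADMIN_KEY.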

Environment variables

Stockyard reads provider keys and configuration from environment variables. You do not need a config file to get started.

Variable                      Purpose
OPENAI_API_KEY                OpenAI provider key (sk-...)
ANTHROPIC_API_KEY             Anthropic provider key (sk-ant-...)
GOOGLE_API_KEY                Google Gemini provider key
GROQ_API_KEY                  Groq provider key (gsk_...)
MISTRAL_API_KEY               Mistral provider key
TOGETHER_API_KEY              Together AI provider key
STOCKYARD_ADMIN_KEY           Admin API key for management endpoints
STOCKYARD_ENCRYPTION_KEY      Custom encryption key for provider keys at rest (min 32 chars; optional, auto-generated if unset)
OTEL_EXPORTER_OTLP_ENDPOINT   OTLP endpoint for trace export (e.g. http://jaeger:4318)

Set as many provider keys as you want. Stockyard auto-detects the provider from each key’s prefix and enables routing, failover, and model aliasing across all configured providers.
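Prefix-based detection can be pictured as a simple match on the key's leading characters. This is an illustrative sketch only, not Stockyard's actual code; the prefixes come from the table above, and order matters because sk-ant- would otherwise be swallowed by the broader sk- pattern:

```shell
# Illustrative only: detect a provider from its key prefix.
# sk-ant- must be matched before the broader sk- pattern.
detect_provider() {
  case "$1" in
    sk-ant-*) echo anthropic ;;
    sk-*)     echo openai ;;
    gsk_*)    echo groq ;;
    *)        echo unknown ;;
  esac
}

detect_provider "sk-ant-abc123"   # anthropic
```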

Data volume

Stockyard stores everything in a single SQLite database at /data/stockyard.db. Mount a volume to this path so your data survives container restarts, upgrades, and redeployments.

The volume contains: request traces, cost records, audit ledger entries, module configurations, cached responses, API keys (AES-256-GCM encrypted), and product state. All of it is in one file.

Backups
To back up Stockyard, copy the SQLite file. That is the entire backup procedure.

docker cp stockyard:/data/stockyard.db ./backups/stockyard-$(date +%Y%m%d).db

To restore, copy it back and restart the container.
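For scheduled backups, a small wrapper script can stamp each copy and rotate old ones. A sketch (the prune_backups helper and seven-copy retention are this example's choices, not part of Stockyard; also note that copying the file while the container is under heavy write load can catch it mid-write, so for a guaranteed-consistent copy stop the container first or use sqlite3's .backup command):

```shell
#!/bin/sh
set -eu

# Keep only the N newest stockyard-*.db files in a directory.
prune_backups() {
  dir="$1"; keep="$2"
  ls -1t "$dir"/stockyard-*.db 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

# Copy the database out of the running container, then rotate.
backup() {
  mkdir -p ./backups
  docker cp stockyard:/data/stockyard.db \
    "./backups/stockyard-$(date +%Y%m%d-%H%M%S).db"
  prune_backups ./backups 7
}
```

Run backup from cron or a systemd timer at whatever cadence suits you.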

Build your own image

If you want to build Stockyard from source instead of using the published image:

# Dockerfile
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /stockyard ./cmd/stockyard/

FROM alpine:3.19
RUN apk add --no-cache ca-certificates
COPY --from=build /stockyard /usr/local/bin/stockyard
VOLUME /data
EXPOSE 4200
ENTRYPOINT ["stockyard"]
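Optionally, you can append a HEALTHCHECK so Docker tracks container health via the /health endpoint shown earlier. This is an addition of this guide, not part of the stock Dockerfile; it assumes busybox wget, which Alpine ships by default:

```dockerfile
# Optional: let Docker mark the container unhealthy if /health stops responding
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s \
  CMD wget -qO- http://localhost:4200/health || exit 1
```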

Build and run:

docker build -t stockyard:local .
docker run -d -p 4200:4200 -v stockyard-data:/data \
  -e OPENAI_API_KEY=sk-... stockyard:local

The multi-stage build produces a minimal image. The final layer is Alpine with the compiled binary and CA certificates — nothing else.

With OpenTelemetry

To export traces to Jaeger, Grafana Tempo, or any OTLP-compatible backend running in Docker alongside Stockyard:

# docker-compose.yml with Jaeger
services:
  stockyard:
    image: ghcr.io/stockyard-dev/stockyard:latest
    restart: unless-stopped
    ports:
      - "4200:4200"
    volumes:
      - stockyard-data:/data
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - STOCKYARD_ADMIN_KEY=${STOCKYARD_ADMIN_KEY}
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"  # Jaeger UI
      - "4318:4318"    # OTLP HTTP

volumes:
  stockyard-data:

Every proxied request becomes an OTEL span with attributes for model, provider, tokens, cost, and latency. Open Jaeger at http://localhost:16686 to browse traces.

Behind a reverse proxy

To expose Stockyard behind Nginx, Caddy, or Traefik with TLS:

# Caddyfile example
stockyard.example.com {
    reverse_proxy stockyard:4200
}

# docker-compose.yml with Caddy
services:
  stockyard:
    image: ghcr.io/stockyard-dev/stockyard:latest
    restart: unless-stopped
    volumes:
      - stockyard-data:/data
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - STOCKYARD_ADMIN_KEY=${STOCKYARD_ADMIN_KEY}
    # No ports exposed — Caddy handles external access

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data

volumes:
  stockyard-data:
  caddy-data:

Stockyard streams SSE responses natively, and Caddy and Traefik pass them through without extra configuration. If you use Nginx, set proxy_buffering off for the /v1/ path so streamed tokens reach the client as they arrive.
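For Nginx, a minimal location block could look like this (a sketch, assuming Stockyard is reachable as stockyard:4200 on the same Docker network):

```nginx
location /v1/ {
    proxy_pass http://stockyard:4200;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;   # deliver SSE tokens immediately instead of buffering
}
```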

Sample .env files

Copy one of these as your starting .env file:

OpenAI only:

OPENAI_API_KEY=sk-your-key-here
STOCKYARD_ADMIN_KEY=sy_admin_your_secret

Multi-provider (OpenAI + Anthropic):

OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
STOCKYARD_ADMIN_KEY=sy_admin_your_secret

Ollama local + cloud fallback:

OPENAI_API_KEY=sk-your-key-here
STOCKYARD_ADMIN_KEY=sy_admin_your_secret
# Ollama is auto-detected at localhost:11434
# Stockyard fails over to OpenAI if Ollama is unavailable

Upgrading

Pull the new image and restart. The SQLite database migrates automatically on startup.

docker compose pull
docker compose up -d

There is no migration coordination between services because there is only one service. The new binary reads the existing database, runs any needed schema migrations, and starts serving.

Troubleshooting

Check container logs:

docker logs stockyard

Verify the data volume is mounted and writable:

docker exec stockyard ls -la /data/
# Should show stockyard.db

If the container exits immediately, check that port 4200 is not already in use on the host, and that your provider key environment variables are set. Run docker logs stockyard for the specific error.
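If you script around container startup (CI jobs, deploy hooks), a small poll loop avoids racing the proxy before it is ready. A sketch; the default URL and the roughly 30-second budget are this example's choices:

```shell
# Poll the health endpoint until it responds, for up to ~30 seconds.
wait_for_health() {
  url="${1:-http://localhost:4200/health}"
  i=0
  while [ "$i" -lt 30 ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "stockyard did not become healthy" >&2
  return 1
}
```

Usage: wait_for_health && docker logs stockyard --since 1m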

Explore: Why SQLite · vs LiteLLM · Proxy-only mode