The LLM control plane for teams who ship.
Proxy, routing, cost tracking, tracing, guardrails, and 76 middleware modules. One Go binary. Zero dependencies. OpenAI-compatible.
Point your app at one endpoint, route requests through it, and stop there if that's all you need.
Change your base URL and keep moving.
What you do:
- Route requests through one endpoint
- Swap providers without changing app code
- Keep model names stable with aliases
- Turn on tracing, routing, or security later

What you don't:
- Adopt every feature on day one
- Run extra services or infrastructure
- Set up Docker, Redis, or Postgres first
- Learn the full platform before getting value
Try it in 60 seconds. No signup, no credit card.
Self-hosted · Source available · Unlimited requests
Every module is a config flag. Enable PII redaction, guardrails, auto-routing, or cost limits without redeploying.
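As a hypothetical sketch of what "every module is a config flag" could look like in practice (the key names below are illustrative, not the actual Stockyard schema):

```yaml
# Illustrative config sketch -- keys are assumptions, not the real schema.
modules:
  pii_redaction: true
  guardrails: true
  auto_routing: true
cost_limits:
  monthly_usd: 200
```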
| | Stockyard | LiteLLM | Helicone |
|---|---|---|---|
| Language | Go | Python | TypeScript |
| Deploy | Single static binary | pip install + optional Redis/Postgres | Managed SaaS |
| Dependencies | Zero | Redis + Postgres for full features | Cloud only |
| Free tier | Proxy, tracing, routing, red-team scan | Open source proxy | Free tier with limits |
| Auto-routing | Built-in | Basic fallback | No |
| Red-team | Built-in | No | No |
| Request replay | Built-in | No | No |
| Memory | ~12MB | ~200MB+ | Cloud |
| License | Source-available (BSL) | MIT | Proprietary |
For the 150-tool Complete bundle, see Stockyard Complete ($29/mo).
400ns median proxy overhead. ~12MB memory. ~25MB binary. Measured on real hardware with published methodology.
See benchmarks & methodology