Map app-facing model names to real providers. Swap GPT-4o for Claude without changing your application code.
Your application code has a model name hardcoded or configured: gpt-4o, claude-sonnet-4-5-20250929, llama-3-70b. When you want to switch providers, test a new model, or fall back to a backup, you need to change that string, redeploy, and hope nothing breaks.
Model aliasing decouples the name your application uses from the actual model and provider that handles the request.
You define aliases in your proxy. Your application sends requests to fast-model. The proxy maps that to gpt-4o-mini on OpenAI. Tomorrow you change the alias to point to claude-3-5-haiku on Anthropic. Your application code stays the same.
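At its core, the mapping is a lookup from the app-facing name to a provider and real model. A minimal sketch in Python (the table structure and entries here are illustrative, not Stockyard's internals):

```python
# Illustrative alias table: app-facing name -> (provider, real model).
# Entries and structure are hypothetical, not Stockyard's actual schema.
ALIASES = {
    "fast-model": ("openai", "gpt-4o-mini"),
    "default-model": ("anthropic", "claude-sonnet-4-5-20250929"),
}

def resolve(model_name: str) -> tuple[str, str]:
    """Return (provider, real_model); names with no alias pass through."""
    return ALIASES.get(model_name, ("openai", model_name))

provider, real_model = resolve("fast-model")  # ("openai", "gpt-4o-mini")

# Updating the alias reroutes every later request; the app still
# sends "fast-model" and never learns the provider changed.
ALIASES["fast-model"] = ("anthropic", "claude-3-5-haiku-20241022")
```

The application only ever sees the left-hand names; everything on the right can change at runtime.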
With Stockyard, aliases are managed via a runtime API. No restart, no config file change, no redeploy:
```shell
curl -X PUT http://localhost:4200/api/proxy/aliases \
  -d '{"alias": "fast-model", "model": "claude-3-5-haiku-20241022"}'
```
The next request to fast-model goes to Claude instead of GPT-4o. Every request is logged with both the alias and the resolved model name, so you can track exactly what happened.
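Logging both names is what keeps the indirection auditable. A sketch of what such a log record might carry (the field names are assumptions, not Stockyard's actual log format):

```python
import json
import time

def log_request(alias: str, provider: str, resolved_model: str) -> str:
    """Emit one JSON log line recording both the alias and its resolution."""
    record = {
        "ts": time.time(),
        "alias": alias,           # what the application asked for
        "provider": provider,     # where the request actually went
        "model": resolved_model,  # the real model that served it
    }
    return json.dumps(record)

line = log_request("fast-model", "anthropic", "claude-3-5-haiku-20241022")
```

With both fields on every record, you can answer "which model actually served this request?" even after the alias has been repointed several times.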
- Provider migration: Move from OpenAI to Anthropic gradually by updating aliases one at a time. No big-bang rewrite.
- Cost optimization: Point default-model at a cheaper model during off-peak hours, or when a new model launches with better price/performance.
- A/B testing: Run traffic through different models to compare quality and cost before committing.
- Failover: When your primary provider goes down, update the alias to route to a backup. Some proxies do this automatically.
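The failover case can be sketched as a pattern: on a provider error, repoint the alias and retry. The sketch below fakes the providers in memory so it is self-contained; the provider names, models, and retry logic are illustrative, not Stockyard's behavior:

```python
class ProviderDown(Exception):
    pass

# Fake backends standing in for real providers; "primary" is down.
def call_provider(provider: str, model: str, prompt: str) -> str:
    if provider == "primary":
        raise ProviderDown(provider)
    return f"{provider}:{model} -> ok"

aliases = {"default-model": ("primary", "gpt-4o")}

def complete_with_failover(alias: str, prompt: str) -> str:
    provider, model = aliases[alias]
    try:
        return call_provider(provider, model, prompt)
    except ProviderDown:
        # Repoint the alias at the backup; subsequent requests
        # skip the dead provider entirely.
        aliases[alias] = ("backup", "claude-3-5-haiku-20241022")
        provider, model = aliases[alias]
        return call_provider(provider, model, prompt)

result = complete_with_failover("default-model", "hello")
```

Because the alias itself was updated, only the first request pays the failed attempt; everything after routes straight to the backup.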
Stockyard supports model aliasing with runtime API updates, wildcard patterns, and provider-aware routing. Aliases are persisted in SQLite and survive restarts.
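Wildcard patterns let one rule cover a whole family of model names. How Stockyard matches them isn't specified here; a plausible sketch using shell-style globs, with hypothetical rules and targets:

```python
from fnmatch import fnmatch

# Hypothetical wildcard rules, checked in order; first match wins.
WILDCARD_ALIASES = [
    ("gpt-4*", ("anthropic", "claude-sonnet-4-5-20250929")),
    ("llama-*", ("groq", "llama-3-70b")),
]

def resolve_wildcard(model_name: str) -> tuple[str, str]:
    for pattern, target in WILDCARD_ALIASES:
        if fnmatch(model_name, pattern):
            return target
    return ("openai", model_name)  # no rule matched: pass through unchanged
```

One `gpt-4*` rule captures `gpt-4o`, `gpt-4o-mini`, and any future variant without enumerating them.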
Combined with proxy-only mode, you can run Stockyard purely as a routing layer: model aliasing and nothing else.
Try Stockyard. One binary, 16 providers, under 60 seconds.
Get Started