# SDKs

Stockyard is OpenAI-compatible. Use any existing SDK.
## The One-Line Change

Stockyard exposes the same `/v1/chat/completions` endpoint as OpenAI. To use it, point your existing SDK at your Stockyard instance and set your Stockyard API key. That's it.
### Python (OpenAI SDK)

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-host/v1",
    api_key="sk-sy-YOUR_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from Stockyard"}],
)
print(response.choices[0].message.content)
```
Works with streaming too:

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Count to 10"}],
    stream=True,
)
for chunk in stream:
    # Some chunks (e.g. the final one) carry no choices or an empty delta
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
### TypeScript / Node.js

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://your-host/v1",
  apiKey: "sk-sy-YOUR_KEY",
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello from Stockyard" }],
});
console.log(response.choices[0].message.content);
```
### Go

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	config := openai.DefaultConfig("sk-sy-YOUR_KEY")
	config.BaseURL = "https://your-host/v1"
	client := openai.NewClientWithConfig(config)

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: "gpt-4o",
			Messages: []openai.ChatCompletionMessage{
				{Role: "user", Content: "Hello from Stockyard"},
			},
		},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```
### curl

```bash
curl https://your-host/v1/chat/completions \
  -H "Authorization: Bearer sk-sy-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
## Routing Headers

Stockyard supports extra headers for routing and cost attribution. They pass through any SDK's custom-header mechanism:
```python
# Python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "X-Project": "my-app",
        "X-Provider": "anthropic",
        "X-Stockyard-Tags": "env=prod,team=backend",
    },
)
```
```typescript
// TypeScript
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "X-Project": "my-app",
      "X-Provider": "anthropic",
      "X-Stockyard-Tags": "env=prod,team=backend",
    },
  }
);
```
| Header | Description |
|---|---|
| `X-Project` | Project name for spend tracking (default: `default`) |
| `X-Provider` | Force a specific provider instead of auto-routing |
| `X-Stockyard-Tags` | Comma-separated `key=value` tags for cost attribution |
| `X-Schema` | JSON Schema for structured-output validation |
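Since `X-Stockyard-Tags` is just a comma-separated string, it can be assembled from a dict at the call site. A minimal illustrative helper (the `format_tags` name is ours, not part of any SDK):

```python
def format_tags(tags: dict[str, str]) -> str:
    """Join tag pairs into the comma-separated key=value
    format expected by the X-Stockyard-Tags header."""
    return ",".join(f"{k}={v}" for k, v in tags.items())

headers = {
    "X-Project": "my-app",
    "X-Stockyard-Tags": format_tags({"env": "prod", "team": "backend"}),
}
print(headers["X-Stockyard-Tags"])  # env=prod,team=backend
```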
## Any OpenAI-Compatible SDK

Stockyard works with any library that targets the OpenAI API: LiteLLM, LangChain, the Vercel AI SDK, Instructor, and more. Just change the base URL and API key.
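Many of these libraries build on the official OpenAI SDKs, and the official Python and Node SDKs also read the standard `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables. So for libraries that don't override those settings, you can often redirect to Stockyard with no code changes at all:

```bash
export OPENAI_BASE_URL="https://your-host/v1"
export OPENAI_API_KEY="sk-sy-YOUR_KEY"
```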