Workflows
Ship Cheaper Ship Safer Ship Faster Ship Compliant Ship Better
Workflow · Security & Safety

Find vulnerabilities before attackers do.

Most LLM stacks have zero adversarial testing. Stockyard runs automated attacks against your proxy, detects PII in prompts, and blocks injection attempts in real time.

Install Stockyard
1

Scan

Run a free 5-probe quickscan. Get a security grade (A–F) in 10 seconds. No API key sharing, no setup.

Feral quickscan (free)
2

Block

Enable prompt guard, secret scan, and code fence modules. Injection attempts, PII, and system prompt leaks are caught in the middleware chain.

Prompt Guard • Secret Scan • Code Fence
3

Hunt

Run the full 29-probe red-team suite. Attacks that bypass defenses are mutated and retried across generations to find deeper weaknesses.

Feral campaigns • Phantom personas

Products involved

Feral
Red-team engine. Free 5-probe quickscan. Full 29-probe suite with evolutionary mutation on Team.
Quickscan: Free • Full: Team
Phantom
Persona-based testing. Synthetic users probe your system for weaknesses around the clock.
Team • $99.99/mo
Prompt Guard
Real-time injection detection. Blocks prompt override attempts before they reach the model.
Free • built-in module
Secret Scan
PII and credential detection. Flags emails, credit cards, SSNs, API keys in prompts.
Free • built-in module
Code Fence
Code execution prevention. Blocks tool-use exploitation and function hijacking.
Free • built-in module
Agent Guard
Agent safety. Prevents unauthorized tool calls and chain escapes.
Free • built-in module

Auto-insights detected PII in 12 of 100 recent requests — email addresses, credit card numbers, and names being sent to third-party providers.

See the data →
Safety at the infrastructure level

Application-level guardrails are one code change away from being disabled. Infrastructure-level guardrails run regardless of what the application does. Stockyard's guardrail middleware inspects every request before it reaches the LLM provider. Define blocked patterns — PII formats, prompt injection signatures, topic restrictions — and the middleware rejects matching requests with a structured error. The application never needs guardrail logic because the proxy handles it.

Rate limiting through Cutoff prevents runaway API consumption. Set a budget per endpoint, per user, or per model, and the middleware enforces it. When a limit is hit, the response explains what was exceeded and when the limit resets. Combined with cost tracking through Trough, you get both a spending cap and visibility into what drove the spending. For teams deploying LLM features to external users, this combination prevents the two most common disasters: a prompt injection that generates harmful content, and a usage spike that generates a five-figure API bill.

Five minutes to your first trace.

Install Stockyard, send a request, watch it flow through the middleware chain. Everything on this page starts working immediately.

Install Stockyard See Pricing
Explore: OpenAI-compatible · Model aliasing · Why SQLite