# Strict Mode
Strict mode provides enhanced security and API compliance by using typed request/response handlers instead of the default wildcard passthrough router. This feature:
- Validates all requests against OpenAI API schemas before forwarding
- Sanitizes all responses by removing third-party provider metadata
- Standardizes error messages to prevent information leakage
- Ensures model field consistency between requests and responses
- Supports streaming and non-streaming modes on supported endpoints
This is useful when you need guaranteed API compatibility, security hardening, or protection against third-party response variations.
## Enabling strict mode

Strict mode is a global configuration that applies to all targets in your gateway. Add `strict_mode: true` at the top level of your configuration (not inside individual targets).
```json
{
  "strict_mode": true,
  "targets": {
    "gpt-4": {
      "url": "https://api.openai.com",
      "onwards_key": "sk-openai-key"
    },
    "claude": {
      "url": "https://api.anthropic.com",
      "onwards_key": "sk-ant-key"
    }
  }
}
```
When enabled, all requests to all targets will use strict mode validation and sanitization.
## How it works

When `strict_mode: true` is enabled:

- Request validation: Incoming requests are deserialized through OpenAI schemas. Invalid requests receive an immediate `400 Bad Request` error with a clear message.
- Response sanitization: Third-party responses are deserialized (automatically dropping unknown fields). If deserialization fails, a standard error is returned; malformed responses are never passed through. The `model` field is rewritten to match the original request, and the result is re-serialized as a clean OpenAI response with a correct Content-Length header.
- Error standardization: Third-party errors are logged internally but never forwarded to clients. Clients receive standardized OpenAI-format errors based only on HTTP status codes.
- Streaming support: SSE streams are parsed line-by-line to handle multi-line data events and strip comment lines; each chunk is sanitized and re-emitted as a clean event.
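The sanitization step can be sketched as follows. This is an illustrative Python sketch of the fail-closed behavior described above, not the actual implementation (which is Rust + serde); the field list and function name are hypothetical.

```python
import json

# Top-level fields the (simplified) chat-completion schema defines.
# Anything else - provider, cost, trace_id, ... - is dropped.
ALLOWED_FIELDS = {"id", "object", "created", "model", "choices", "usage"}

def sanitize_response(raw_body: bytes, requested_model: str) -> bytes:
    """Fail closed: a malformed upstream body becomes a standard error."""
    try:
        data = json.loads(raw_body)
        if not isinstance(data, dict):
            raise ValueError("not a JSON object")
    except (ValueError, UnicodeDecodeError):
        # Never pass a malformed provider response through to the client.
        return json.dumps({
            "error": {
                "message": "Internal server error",
                "type": "api_error",
                "param": None,
                "code": None,
            }
        }).encode()

    # Drop unknown fields and rewrite `model` so the response always
    # reflects what the client asked for, not what the provider returned.
    clean = {k: v for k, v in data.items() if k in ALLOWED_FIELDS}
    clean["model"] = requested_model
    return json.dumps(clean).encode()

# A provider response carrying metadata that must not reach the client:
upstream = json.dumps({
    "id": "chatcmpl-1", "object": "chat.completion",
    "model": "provider-gpt4-v2", "choices": [],
    "provider": "acme", "cost": 0.002, "trace_id": "abc123",
}).encode()
clean_body = sanitize_response(upstream, "gpt-4")
```

Because the output is re-serialized from the cleaned structure, its byte length is known, which is what allows the Content-Length header to be rewritten correctly.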
## Security benefits
Prevents information leakage:
- Third-party stack traces, database errors, and debug information are never exposed
- Error responses contain only standard HTTP status codes and generic messages
- No provider-specific metadata (trace IDs, internal IDs, costs) reaches clients
- Malformed provider responses fail closed with standard errors (never leaked)
- SSE comment lines stripped to prevent metadata leakage in streaming responses
Ensures consistency:
- Responses always match OpenAI’s API format exactly
- The `model` field always reflects what the client requested, not what the provider returned
- Extra fields like `provider`, `cost`, `trace_id` are automatically dropped
Fast failure:
- Invalid requests fail immediately with clear, actionable error messages
- No wasted upstream requests for malformed input
- Reduces debugging time for integration issues
## Error standardization
When strict mode is enabled, all error responses follow OpenAI’s error format exactly:
```json
{
  "error": {
    "message": "Invalid request",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
```
Status code mapping:
| HTTP Status | Error Type | Message |
|---|---|---|
| 400 | invalid_request_error | Invalid request |
| 401 | authentication_error | Authentication failed |
| 403 | permission_error | Permission denied |
| 404 | not_found_error | Not found |
| 429 | rate_limit_error | Rate limit exceeded |
| 500 | api_error | Internal server error |
| 502 | api_error | Bad gateway |
| 503 | api_error | Service unavailable |
Third-party error details are always logged server-side but never sent to clients.
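The mapping above can be sketched as a pure function of the status code. This is a hedged Python illustration (the real implementation is Rust); the fallback for unmapped statuses is an assumption.

```python
# Client-visible errors are derived from the HTTP status code alone;
# provider error bodies are only ever logged server-side.
STATUS_MAP = {
    400: ("invalid_request_error", "Invalid request"),
    401: ("authentication_error", "Authentication failed"),
    403: ("permission_error", "Permission denied"),
    404: ("not_found_error", "Not found"),
    429: ("rate_limit_error", "Rate limit exceeded"),
    500: ("api_error", "Internal server error"),
    502: ("api_error", "Bad gateway"),
    503: ("api_error", "Service unavailable"),
}

def standardized_error(status: int) -> dict:
    # Assumed fallback: unmapped statuses become a generic api_error.
    err_type, message = STATUS_MAP.get(status, ("api_error", "Internal server error"))
    return {"error": {"message": message, "type": err_type, "param": None, "code": None}}
```

Note that the provider's own error body never enters this function: no matter what the upstream returned, the client sees only the generic message for that status.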
## Supported endpoints
Strict mode currently supports:
- `/v1/chat/completions` (streaming and non-streaming) - Full sanitization
- `/v1/embeddings` - Full sanitization
- `/v1/responses` (Open Responses API, non-streaming) - Full sanitization
- `/v1/models` - Model listing (no sanitization needed)
All supported endpoints include:
- Request validation - Invalid requests fail immediately with clear error messages
- Response sanitization - Third-party metadata automatically removed
- Model field rewriting - Ensures consistency with client request
- Error standardization - Third-party error details never exposed
Requests to unsupported endpoints return `404 Not Found` when strict mode is enabled.
## Comparison with response sanitization
| Feature | Response Sanitization | Strict Mode |
|---|---|---|
| Request validation | ✗ No | ✓ Yes |
| Response sanitization | ✓ Yes | ✓ Yes |
| Error standardization | ✗ No | ✓ Yes |
| Endpoint coverage | /v1/chat/completions only | Chat, Embeddings, Responses, Models |
| Router type | Wildcard passthrough | Typed handlers |
| Use case | Simple response cleaning | Production security & compliance |
Important: When strict mode is enabled globally, the per-target `sanitize_response` flag is automatically ignored. Strict mode handlers perform complete sanitization themselves, so enabling `sanitize_response: true` on individual targets has no effect and won’t cause double sanitization.
When to use strict mode:
- Production deployments requiring security hardening
- Compliance requirements around error message content
- Multi-provider setups needing guaranteed response consistency
- Applications that need request validation before forwarding
When to use response sanitization:
- Simple use cases where you only need response cleaning
- Non-security-critical deployments
- Maximum flexibility with endpoint coverage
## Trusted Providers
In strict mode, you can mark providers as trusted to bypass error sanitization while keeping success response sanitization. This is useful when you have providers you fully control (e.g., your own OpenAI account) and want their detailed error messages to help with debugging, while still ensuring response consistency.
Trust can be set at two levels:

- Pool level (`trusted` on the target) — default for all providers in the pool
- Provider level (`trusted` inside a provider entry) — overrides the pool default for that specific provider

This is the only exception to strict mode’s error standardization guarantees: when a provider is effectively trusted, its errors may be forwarded with full third-party details instead of being standardized.
### Configuration
Single-provider (pool-level trusted):
```json
{
  "strict_mode": true,
  "targets": {
    "gpt-4": {
      "url": "https://api.openai.com",
      "onwards_key": "sk-...",
      "trusted": true
    },
    "third-party": {
      "url": "https://some-provider.com",
      "onwards_key": "sk-..."
    }
  }
}
```
Uniform pool-level trusted:
```json
{
  "strict_mode": true,
  "targets": {
    "gpt-4-pool": {
      "trusted": true,
      "providers": [
        { "url": "https://api.openai.com", "onwards_key": "sk-primary-..." },
        { "url": "https://api.openai.com", "onwards_key": "sk-backup-..." }
      ]
    }
  }
}
```
Mixed trust within a pool (per-provider override):
```json
{
  "strict_mode": true,
  "targets": {
    "gpt-4": {
      "trusted": false,
      "providers": [
        { "url": "https://internal.example.com", "trusted": true },
        { "url": "https://external.example.com" }
      ]
    }
  }
}
```
Here, the internal provider’s error responses pass through unchanged. The external provider omits `trusted`, so it inherits the pool default (`false`) and has its error responses sanitized. This lets you mix trusted internal infrastructure with untrusted external providers inside a single pool.
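The resolution rule is small enough to state exactly. A minimal Python sketch mirroring the documented `provider.trusted.unwrap_or(pool.trusted)` behavior (the function name here is illustrative):

```python
from typing import Optional

def effective_trust(provider_trusted: Optional[bool], pool_trusted: bool) -> bool:
    """A provider-level `trusted` value, when present, overrides the pool default."""
    return pool_trusted if provider_trusted is None else provider_trusted

# The mixed-trust pool above: the internal provider opts in,
# the external provider inherits the pool default of False.
assert effective_trust(True, False) is True    # internal: errors pass through
assert effective_trust(None, False) is False   # external: errors sanitized
```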
### Behavior

When a pool is marked `trusted: true`:
Success responses (200 OK) are STILL sanitized:
- Model field IS rewritten to match the client’s request
- Provider-specific metadata IS removed (costs, trace IDs, custom fields)
- Response IS validated against OpenAI schemas
- Content-Length headers ARE updated correctly
- Streaming responses ARE parsed and sanitized line-by-line
Error responses (4xx, 5xx) bypass sanitization:
- Original error messages and metadata forwarded to clients
- Provider-specific error details preserved (stack traces, debug info)
- Custom error fields passed through unchanged
This allows you to get detailed debugging information from errors while maintaining response consistency for successful requests.
### Security Warning
⚠️ Use trusted providers carefully. Marking a provider as trusted bypasses error sanitization for that provider:
What is exposed for trusted providers:
- Error details and stack traces from the provider
- Provider-specific error metadata (trace IDs, internal error codes)
- Debug information in error responses
What is NOT exposed (still sanitized):
- Success responses are fully sanitized (model rewritten, metadata removed)
- Provider metadata in successful responses (costs, trace IDs) is still stripped
- Responses still match OpenAI schema exactly for successful requests
Only mark providers as trusted when you fully control or trust them. This typically means:
- Your own OpenAI/Anthropic accounts (providers using your API keys)
- Self-hosted models you operate
- Internal services you maintain
Do not mark third-party providers as trusted unless you want their detailed error messages exposed to your clients. Trusted providers are designed for debugging your own infrastructure, not for production use with external providers.
### Interaction with Model Override Header

Onwards supports the `model-override` header to route requests to a different pool than the one specified in the request body. Trust is resolved from the provider that actually handles the request (after routing and provider selection), so it correctly reflects the resolved model rather than what the client specified in the body.
This means if a client sends:

- a request body with `"model": "trusted-pool"`
- a header `model-override: untrusted-pool`

the request will route to `untrusted-pool`, and sanitization will be applied based on that pool’s provider trust settings, preventing metadata leakage. Clients cannot bypass sanitization by exploiting mismatches between body and header model resolution.
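The routing-then-trust order can be sketched in a few lines. This is a hedged illustration of the documented behavior, not the Onwards internals; `resolve_pool` and the `pools` table are hypothetical names.

```python
from typing import Optional

def resolve_pool(body_model: str, override_header: Optional[str]) -> str:
    """The model-override header, when present, wins over the body's model field."""
    return override_header if override_header is not None else body_model

# Trust is read from the pool that actually serves the request.
pools = {"trusted-pool": {"trusted": True}, "untrusted-pool": {"trusted": False}}

routed = resolve_pool("trusted-pool", "untrusted-pool")
assert routed == "untrusted-pool"
assert pools[routed]["trusted"] is False  # so this request's errors are sanitized
```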
## Implementation details
For developers working on the Onwards codebase:
Router architecture:
- Strict mode uses typed Axum handlers defined in `src/strict/handlers.rs`
- Each endpoint has dedicated request/response schema types in `src/strict/schemas/`
- Requests are deserialized using serde, which automatically validates structure
- `response_transform_fn` is skipped when strict mode is enabled to prevent double sanitization
Response sanitization:
- Responses are deserialized through strict schemas (extra fields automatically dropped by serde)
- Malformed responses fail closed with standard errors - never passed through
- Model field is rewritten to match the original request model
- Re-serialized to ensure only defined fields are present
- Content-Length headers updated to match sanitized response size
- Applies to both non-streaming responses and SSE chunks
- SSE streams processed line-by-line to handle multi-line events and strip comments
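The SSE handling above can be sketched per event. This is an illustrative Python sketch under the SSE wire format (comment lines start with `:`, multi-line `data:` fields join with newlines); the function name and the `[DONE]` handling mirror OpenAI streaming conventions but are assumptions about this codebase.

```python
from typing import Callable, List, Optional

def process_sse_event(event_lines: List[str],
                      sanitize: Callable[[str], str]) -> Optional[str]:
    """Rebuild one SSE event: drop comments, join data lines, sanitize payload."""
    data_parts = []
    for line in event_lines:
        if line.startswith(":"):
            continue  # SSE comment - never forwarded (prevents metadata leaks)
        if line.startswith("data:"):
            data_parts.append(line[len("data:"):].lstrip())
    if not data_parts:
        return None  # nothing to emit (e.g. comment-only keep-alive)
    payload = "\n".join(data_parts)  # multi-line data fields join with \n
    if payload == "[DONE]":
        return "data: [DONE]\n\n"  # stream terminator passes through as-is
    return f"data: {sanitize(payload)}\n\n"
```

Each sanitized chunk is re-emitted as a single clean `data:` event, so provider-side comment lines and any extra event metadata never reach the client.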
Error handling:

- Third-party errors are intercepted in `sanitize_error_response()`
- Original error logged with the `error!()` macro for server-side debugging
- Standard error generated based only on HTTP status code
- OpenAI-compatible format guaranteed via the `error_response()` helper
- Deserialization failures return standard errors, never leak malformed responses
Trust resolution:
- `target_message_handler` resolves effective trust as `provider.trusted.unwrap_or(pool.trusted)` after provider selection
- The resolved trust is attached to the response via a `ResolvedTrust` extension
- Strict mode handlers read it via `ForwardResult.trusted` — no separate pool lookup needed
- Ensures trust reflects the actual provider that handled the request, including after fallback retries
Testing:
- Request/response schema tests in each schema module
- Integration tests in `src/strict/handlers.rs` verify sanitization behavior
- Tests verify fail-closed behavior on malformed responses (no passthrough)
- Tests verify SSE multi-line events and comment stripping
- Tests verify Content-Length header correctness after sanitization
- Tests verify that a per-provider `trusted` value overrides the pool-level setting in both directions