# Siumai Extras
Optional utilities for the siumai LLM library.
## Features
This crate provides optional functionality that extends siumai without adding heavy dependencies to the core library:
- `schema` - JSON Schema validation for structured outputs
- `telemetry` - Advanced tracing and logging with `tracing-subscriber`
- `server` - Server adapters for Axum and other web frameworks
- `mcp` - MCP (Model Context Protocol) integration for dynamic tool discovery
- `all` - Enable all features
## Installation
```toml
[dependencies]
siumai = "0.11.0-beta.6"
siumai-extras = { version = "0.11.0-beta.6", features = ["schema", "telemetry", "mcp"] }
```
## Usage
Orchestrator and high-level object helpers do not require any extra features. Schema validation and tracing are opt-in via the `schema` and `telemetry` features.
### High-level structured objects
Provider-agnostic helpers for generating typed JSON objects:
```rust
// Most import paths and the async example body were lost in the source;
// the reconstructions below are illustrative. See the crate docs for the
// full object-generation example.
use serde::Deserialize;
use siumai::prelude::*;
```
If you enable the `schema` feature, `GenerateObjectOptions::schema` is
validated via `siumai_extras::schema` before deserializing into `T`.
### Orchestrator & agents
Multi-step tool calling, agents, and stop conditions:
```rust
// Most import paths and the async example body were lost in the source;
// the reconstructions below are illustrative. See the crate docs for the
// full orchestrator example.
use serde_json::json;
use siumai::prelude::*;
```
You can attach telemetry to the agent or orchestrator using
`siumai::experimental::observability::telemetry::TelemetryConfig`:
```rust
use siumai::experimental::observability::telemetry::TelemetryConfig;
use siumai::prelude::OrchestratorBuilder; // path illustrative

// Builder receivers and arguments were elided in the source; the calls
// below are an illustrative reconstruction.
let telemetry = TelemetryConfig::builder()
    .record_inputs(true)
    .record_outputs(true)
    .record_usage(true)
    .build();
let builder = OrchestratorBuilder::new().telemetry(telemetry);
```
## Schema Validation
```rust
use siumai_extras::schema::SchemaValidator;

// Validate JSON against a schema. Arguments were elided in the source;
// `schema` and `value` below are illustrative names.
let validator = SchemaValidator::new(&schema)?;
validator.validate(&value)?;
```
## Telemetry
```rust
use siumai_extras::telemetry::init_subscriber; // path illustrative

// Initialize the tracing subscriber. Any arguments were elided in the source.
init_subscriber()?;
```
## Server Adapters
```rust
use siumai_extras::server::to_sse_response; // path illustrative

// Convert a ChatStream to an Axum SSE response.
// The argument was elided in the source; `stream` is an illustrative name.
let sse = to_sse_response(stream);
```
If you are building an OpenAI-compatible gateway and need to output OpenAI Responses SSE,
siumai-extras also provides a helper that:
- bridges provider-specific `ChatStreamEvent::Custom` parts into `openai:*` stream parts, and
- serializes the stream into OpenAI Responses SSE frames.
```rust
use axum::body::Body;
use axum::response::Response;
use siumai_extras::server::to_openai_responses_sse_response; // path illustrative
use siumai::stream::ChatStream; // path illustrative
```
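For orientation, the "OpenAI Responses SSE frames" mentioned above are ordinary Server-Sent Events frames: an `event:` line naming the stream part, a `data:` line carrying the JSON payload, and a blank separator line. The sketch below shows only that wire format, not siumai-extras' actual serializer; the function name is hypothetical.

```rust
// Illustrative only: the wire shape of one SSE frame, as used by the
// OpenAI Responses streaming protocol. Not siumai-extras' real code.
fn sse_frame(event: &str, json_payload: &str) -> String {
    // One frame = event line + data line + blank line.
    format!("event: {event}\ndata: {json_payload}\n\n")
}

fn main() {
    let frame = sse_frame("response.output_text.delta", r#"{"delta":"Hi"}"#);
    print!("{frame}");
}
```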
See the runnable example: `siumai-extras/examples/openai-responses-gateway.rs` (streaming + non-streaming).

For custom conversion hooks, see: `siumai-extras/examples/gateway-custom-transform.rs`.

For request-normalization bridge demos, see:
- `siumai-extras/examples/anthropic-to-openai-responses-gateway.rs`
- `siumai-extras/examples/openai-responses-to-anthropic-gateway.rs`

For custom lossy-policy handling, see: `siumai-extras/examples/gateway-loss-policy.rs`.
If you need to expose multiple downstream protocol surfaces from the same upstream stream, use the transcoder helper:
```rust
// Most import paths for the transcoder helper were lost in the source;
// the lines below are illustrative reconstructions.
use siumai_extras::server::{BridgeTarget, TranscodeSseOptions}; // paths illustrative
use siumai::stream::ChatStream; // path illustrative
```
When a streaming gateway route is cross-protocol and you want strict inspected rejection or a
custom `BridgeLossPolicy`, declare the upstream protocol with
`TranscodeSseOptions::with_bridge_source(...)`. That enables the same source-aware loss-policy
decision path used by the lower-level core stream bridge helpers while keeping the Axum helper
surface.
If your gateway route also needs to read downstream request bodies or buffered upstream bodies
under `GatewayBridgePolicy`, use the Axum runtime helpers instead of open-coding `to_bytes(...)`:
```rust
use axum::body::Body;
use serde_json::Value;
use siumai_extras::server::{read_request_json_with_policy, GatewayBridgePolicy}; // paths illustrative

// Receivers and arguments were elided in the source; the calls below are
// an illustrative reconstruction.
let policy = GatewayBridgePolicy::default().with_request_body_limit_bytes(64 * 1024);
let request_json: Value =
    read_request_json_with_policy(request, &policy).await?;
```
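To make the body-limit policy concrete: the point is to reject an oversized body before parsing it, rather than buffering unbounded input. The stand-alone function below sketches that check only; the real helper couples it with JSON parsing, and `check_body_limit` is a hypothetical name.

```rust
// Conceptual sketch of a request-body byte limit, illustrating what
// with_request_body_limit_bytes guards against. Not siumai-extras' code.
fn check_body_limit(body: &[u8], limit_bytes: usize) -> Result<&[u8], String> {
    if body.len() > limit_bytes {
        // Reject before any JSON parsing happens.
        Err(format!(
            "request body of {} bytes exceeds limit of {} bytes",
            body.len(),
            limit_bytes
        ))
    } else {
        Ok(body)
    }
}

fn main() {
    assert!(check_body_limit(br#"{"model":"x"}"#, 1024).is_ok());
    assert!(check_body_limit(&[0u8; 2048], 1024).is_err());
    println!("ok");
}
```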
If you need to customize the conversion logic, the recommended path is
`GatewayBridgePolicy` + `BridgeOptions` + typed bridge hooks as demonstrated in
`siumai-extras/examples/gateway-custom-transform.rs`.
If the request-side requirement is hosted-tool compatibility across protocols, prefer
`ProviderToolRewriteCustomization` attached through `GatewayBridgePolicy::with_customization(...)`
or `NormalizeRequestOptions::with_bridge_customization(...)` instead of patching raw downstream
JSON. The Anthropic -> OpenAI gateway example demonstrates that path for
`anthropic.web_fetch_20250910` -> `openai.web_search`.
The two request-normalization bridge demos intentionally show a different path:
- source protocol request JSON -> explicit request normalizer -> `ChatRequest`
- execute on a fixed upstream model handle
- transcode the resulting unified response/stream back into the chosen target protocol

That is useful when you want the bridge surface to stay explicit and testable instead of hiding protocol translation inside route-local JSON glue.
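The "explicit and testable" shape described above can be sketched with stand-in types. Everything below is illustrative: `UnifiedChatRequest`, `RequestNormalizer`, and `ToySourceNormalizer` are hypothetical stand-ins, not siumai's real `ChatRequest` or normalizer API; the point is only that each stage is a plain function you can unit-test.

```rust
// Illustrative stand-in for a unified, protocol-neutral request.
#[derive(Debug, PartialEq)]
struct UnifiedChatRequest {
    model: String,
    user_text: String,
}

// Step 1: an explicit, per-source-protocol request normalizer.
trait RequestNormalizer {
    fn normalize(&self, raw_body: &str) -> Result<UnifiedChatRequest, String>;
}

// Toy normalizer standing in for a real source-protocol parser.
struct ToySourceNormalizer;

impl RequestNormalizer for ToySourceNormalizer {
    fn normalize(&self, raw_body: &str) -> Result<UnifiedChatRequest, String> {
        let text = raw_body
            .strip_prefix("user:")
            .ok_or_else(|| "unrecognized source body".to_string())?;
        // Step 2: pin the upstream model handle instead of trusting the body.
        Ok(UnifiedChatRequest {
            model: "fixed-upstream-model".to_string(),
            user_text: text.to_string(),
        })
    }
}

fn main() {
    let req = ToySourceNormalizer.normalize("user:hello").unwrap();
    assert_eq!(req.model, "fixed-upstream-model");
    // Malformed source bodies fail loudly instead of leaking through.
    assert!(ToySourceNormalizer.normalize("garbage").is_err());
    println!("normalized: {:?}", req);
}
```

Because each stage is an ordinary value-to-value function, the bridge can be tested without spinning up a route.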
Migration guidance for gateway routes now lives at:
`docs/workstreams/protocol-bridge-gateway/migration.md`

Recommended route shapes now live at:
`docs/workstreams/protocol-bridge-gateway/route-recipes.md`
That recipes note covers the currently recommended and test-backed gateway compositions for:
- provider-native ingress -> normalized runtime -> downstream JSON/SSE
- buffered upstream proxy/runtime routes
- cross-protocol SSE with inspected strict rejection
- hosted-tool compatibility via typed request customization
The raw event-transform helper is still available as an escape hatch:

```rust
// The import paths for the raw event-transform helper were lost in the
// source; see siumai-extras/examples/gateway-custom-transform.rs for the
// full imports.
```
For non-streaming gateways, you can also transcode a `ChatResponse` into a provider-native
JSON response body:

```rust
// Most import paths were lost in the source; see the crate docs for the
// non-streaming transcode helper imports.
use siumai::prelude::*;
```
If you want to customize conversion for non-streaming responses, prefer the response-level transform hook (no JSON parse/round-trip):

```rust
// Most import paths were lost in the source; see the crate docs for the
// response-level transform hook imports.
use siumai::prelude::*;
```
## MCP Integration
```rust
use siumai::prelude::*;
use siumai_extras::mcp::mcp_tools_from_stdio; // path illustrative

// The async example body was elided in the source; see the crate docs
// for the full tool-discovery example.
```
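Under the hood, MCP tool discovery over stdio is a JSON-RPC 2.0 exchange: the client sends a `tools/list` request and the server replies with the available tool definitions. The sketch below shows only that request frame; `tools_list_request` is a hypothetical helper, not part of siumai-extras.

```rust
// Illustrative only: the JSON-RPC 2.0 frame an MCP client sends over stdio
// to discover tools. siumai-extras' mcp_tools_from_stdio wraps this exchange.
fn tools_list_request(id: u64) -> String {
    format!(r#"{{"jsonrpc":"2.0","id":{id},"method":"tools/list"}}"#)
}

fn main() {
    let req = tools_list_request(1);
    println!("{req}");
}
```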
## Documentation
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.