## Features
- Event-driven architecture -- Type-safe events connect workflow steps with zero boilerplate via derive macros (Rust) or subclassing (Python) or plain objects (TypeScript)
- 15+ LLM providers -- OpenAI, Anthropic, Gemini, Azure, OpenRouter, Groq, Together AI, Mistral, DeepSeek, Fireworks, Perplexity, xAI, Cohere, AWS Bedrock, and fal.ai -- with streaming, tool calling, structured output, and multimodal support
- Multi-workflow pipelines -- Orchestrate sequential and parallel stages with pause/resume and per-workflow streaming
- Branching and fan-out -- Conditional branching, parallel fan-out, and real-time streaming within workflows
- Native Python and TypeScript bindings -- Python via PyO3/maturin, Node.js/TypeScript via napi-rs. Not wrappers around HTTP -- actual compiled Rust running in-process
- WebAssembly SDK -- Run Blazen in the browser, edge workers, Deno, and embedded runtimes via `@blazen/sdk`. Same Rust core compiled to WASM
- Prompt management -- Versioned prompt templates with `{{variable}}` interpolation, YAML/JSON registries, and multimodal attachments
- Persistence -- Embedded persistence via redb, or bring-your-own via callbacks. Pause a workflow, serialize state to JSON, resume later
- Identity-preserving live state -- Pass DB connections, Pydantic models, and other live objects through events and the new `ctx.state`/`ctx.session` namespaces. `StopEvent(result=obj)` round-trips non-JSON Python values with `is`-identity preserved -- the engine no longer silently stringifies unpicklable results
- Observability -- OpenTelemetry, Prometheus metrics, and Langfuse integration via the telemetry crate
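The `{{variable}}` syntax in prompt templates follows the usual mustache-style substitution. A minimal self-contained sketch of those semantics in plain TypeScript (illustrative only, not the `blazen-prompts` implementation):

```typescript
// Mustache-style {{variable}} substitution. Unknown variables are left
// untouched so a partially-filled template can be filled again later.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

const prompt = interpolate("Hello, {{name}}! You are a {{role}}.", {
  name: "Zach",
  role: "helpful assistant",
});
console.log(prompt); // Hello, Zach! You are a helpful assistant.
```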
## Installation

Rust:

```sh
cargo add blazen
```

Python (requires Python 3.9+):

```sh
pip install blazen
```

Node.js / TypeScript:

```sh
npm install blazen
```

WebAssembly (browser, edge, Deno, Cloudflare Workers):

```sh
npm install @blazen/sdk
```
## Quick Start

### Rust

```rust
use blazen::prelude::*;

// GreetEvent carries data between steps; the derive macro implements the Event trait.
#[derive(Event)]
struct GreetEvent {
    name: String,
}

#[step]
async fn parse_input(event: StartEvent, _ctx: &WorkflowContext) -> GreetEvent {
    let name = event.get("name").unwrap_or_else(|| "World".to_string());
    GreetEvent { name }
}

#[step]
async fn greet(event: GreetEvent, _ctx: &WorkflowContext) -> StopEvent {
    StopEvent::with_result(json!({ "greeting": format!("Hello, {}!", event.name) }))
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let result = Workflow::new("greeter")
        .add_step(parse_input)
        .add_step(greet)
        .run(json!({ "name": "Zach" }))
        .await?;
    println!("{:?}", result.data);
    Ok(())
}
```
### Python

```python
from blazen import Event, StartEvent, StopEvent, Workflow

# Events are defined by subclassing
class GreetEvent(Event):
    name: str

workflow = Workflow("greeter")

@workflow.step("parse_input", listens=[StartEvent])
async def parse_input(event, ctx):
    return GreetEvent(name=event.get("name", "World"))

@workflow.step("greet", listens=[GreetEvent])
async def greet(event, ctx):
    return StopEvent(result={"greeting": f"Hello, {event.name}!"})

result = await workflow.run(name="Zach")
print(result.data)  # {"result": {"greeting": "Hello, Zach!"}}
```
### TypeScript

```typescript
import { Workflow } from "blazen";

const workflow = new Workflow("greeter");

workflow.addStep("parse_input", ["blazen::StartEvent"], async (event, ctx) => {
  const name = event.name ?? "World";
  return { type: "GreetEvent", name };
});

workflow.addStep("greet", ["GreetEvent"], async (event, ctx) => {
  return {
    type: "blazen::StopEvent",
    result: { greeting: `Hello, ${event.name}!` },
  };
});

const result = await workflow.run({ name: "Zach" });
console.log(result.data); // { greeting: "Hello, Zach!" }
```
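Conceptually, the engine routes each event a step returns to whichever step subscribes to its type, until a `StopEvent` ends the run. A self-contained sketch of that dispatch loop, simplified to one subscriber per event type (a mental model only, not the actual Rust engine, which also supports fan-out and parallel steps):

```typescript
type WfEvent = { type: string; [key: string]: any };
type Step = (event: WfEvent) => Promise<WfEvent>;

// Dispatch loop: look up the step subscribed to the current event's type,
// run it, and feed the returned event back in until a StopEvent appears.
async function runWorkflow(
  steps: Map<string, Step>, // event type -> subscribed step
  start: WfEvent
): Promise<WfEvent> {
  let event = start;
  while (event.type !== "blazen::StopEvent") {
    const step = steps.get(event.type);
    if (!step) throw new Error(`no step subscribed to ${event.type}`);
    event = await step(event);
  }
  return event;
}

// Mirrors the greeter example above.
const steps = new Map<string, Step>([
  ["blazen::StartEvent", async (e) => ({ type: "GreetEvent", name: e.name ?? "World" })],
  ["GreetEvent", async (e) => ({
    type: "blazen::StopEvent",
    result: { greeting: `Hello, ${e.name}!` },
  })],
]);

runWorkflow(steps, { type: "blazen::StartEvent", name: "Zach" }).then((stop) =>
  console.log(stop.result) // { greeting: 'Hello, Zach!' }
);
```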
## LLM Integration

Every provider implements the same `CompletionModel` trait/interface. Switch providers by changing one line.
### Rust

```rust
use blazen::llm::{ChatMessage, CompletionRequest};
use blazen::llm::OpenAiProvider;

let model = OpenAiProvider::new("sk-...");
let request = CompletionRequest::new(vec![
    ChatMessage::system("You are helpful."),
    ChatMessage::user("What is the meaning of life?"),
]);
let response = model.complete(request).await?;
println!("{}", response.content);
```
Use any OpenAI-compatible provider with `OpenAiCompatProvider`:

```rust
use blazen::llm::OpenAiCompatProvider;

let groq = OpenAiCompatProvider::groq("gsk-...");
let openrouter = OpenAiCompatProvider::openrouter("sk-or-...");
let together = OpenAiCompatProvider::together("...");
let deepseek = OpenAiCompatProvider::deepseek("...");
```
### Python

```python
from blazen import ChatMessage, CompletionModel, CompletionResponse

model = CompletionModel.openai("sk-...")
# or: CompletionModel.anthropic("sk-ant-...")
# or: CompletionModel.groq("gsk-...")
# or: CompletionModel.openrouter("sk-or-...")

response: CompletionResponse = await model.complete([
    ChatMessage.system("You are helpful."),
    ChatMessage.user("What is the meaning of life?"),
])

print(response.content)  # typed attribute access
print(response.model)    # model name used
print(response.usage)    # TokenUsage with .prompt_tokens, .completion_tokens, .total_tokens
```
TypeScript
import { CompletionModel, ChatMessage, Role } from "blazen";
import type { CompletionResponse } from "blazen";
const model = CompletionModel.openai("sk-...");
// or: CompletionModel.anthropic("sk-ant-...")
// or: CompletionModel.groq("gsk-...")
// or: CompletionModel.openrouter("sk-or-...")
const response: CompletionResponse = await model.complete([
ChatMessage.system("You are helpful."),
ChatMessage.user("What is the meaning of life?"),
]);
console.log(response.content); // string
console.log(response.model); // model name used
console.log(response.usage); // { promptTokens, completionTokens, totalTokens }
console.log(response.finishReason);
## Streaming

Steps can publish intermediate events to an external stream via `write_event_to_stream` on the context. Consumers subscribe before awaiting the final result.
### Rust

```rust
use futures::StreamExt;

#[step]
async fn work(event: StartEvent, ctx: &WorkflowContext) -> StopEvent {
    // ProgressEvent is a user-defined event published mid-step.
    ctx.write_event_to_stream(ProgressEvent { message: "working...".into() });
    StopEvent::with_result(json!({ "done": true }))
}

// Consumer side:
let handler = workflow.run(json!({ "message": "go" })).await?;
let mut stream = handler.stream_events();
while let Some(event) = stream.next().await {
    println!("stream: {event:?}");
}
let result = handler.result().await?;
```
### Python

```python
@workflow.step("work", listens=[StartEvent])
async def work(event, ctx):
    # ProgressEvent is a user-defined Event subclass published mid-step
    ctx.write_event_to_stream(ProgressEvent(message="working..."))
    return StopEvent(result={"done": True})

# Consumer side:
handler = await workflow.run_with_handler(message="go")
async for event in handler.stream_events():
    print("stream:", event)
result = await handler.result()
```
TypeScript
// Using runStreaming with a callback:
const result = await workflow.runStreaming({ message: "go" }, (event) => {
console.log("stream:", event.type, event);
});
// Or using the handler API:
const handler = await workflow.runWithHandler({ message: "go" });
await handler.streamEvents((event) => {
console.log("stream:", event.type, event);
});
const result = await handler.result();
## Crate / Package Structure

| Crate | Description |
|---|---|
| `blazen` | Umbrella crate re-exporting everything |
| `blazen-events` | Core event traits, `StartEvent`, `StopEvent`, `DynamicEvent`, and derive macro support |
| `blazen-macros` | `#[derive(Event)]` and `#[step]` proc macros |
| `blazen-core` | Workflow engine, context, step registry, pause/resume, and snapshots |
| `blazen-llm` | LLM provider abstraction -- `CompletionModel`, `StructuredOutput`, `EmbeddingModel`, `Tool` |
| `blazen-pipeline` | Multi-workflow pipeline orchestrator with sequential/parallel stages |
| `blazen-prompts` | Prompt template management with versioning and YAML/JSON registries |
| `blazen-memory` | Memory and vector store with LSH-based approximate nearest-neighbor retrieval |
| `blazen-memory-valkey` | Valkey/Redis backend for `blazen-memory` |
| `blazen-persist` | Optional persistence layer (redb) |
| `blazen-telemetry` | Observability: OpenTelemetry spans, Prometheus metrics, Langfuse, and LLM call history |
| `blazen-py` | Python bindings via PyO3/maturin (published to PyPI as `blazen`) |
| `blazen-node` | Node.js/TypeScript bindings via napi-rs (published to npm as `blazen`) |
| `blazen-wasm-sdk` | TypeScript/JS client SDK via WebAssembly (published to npm as `@blazen/sdk`) |
| `blazen-wasm` | WASIp2 WASM component for ZLayer edge deployment |
| `blazen-cli` | CLI tool for scaffolding projects (`blazen init`) |
## Supported LLM Providers

| Provider | Constructor | Default Model |
|---|---|---|
| OpenAI | `OpenAiProvider::new` / `.openai()` | `gpt-4.1` |
| Anthropic | `AnthropicProvider::new` / `.anthropic()` | `claude-sonnet-4-5-20250929` |
| Google Gemini | `GeminiProvider::new` / `.gemini()` | `gemini-2.5-flash` |
| Azure OpenAI | `AzureOpenAiProvider::new` / `.azure()` | (deployment-specific) |
| OpenRouter | `.openrouter()` | `openai/gpt-4.1` |
| Groq | `.groq()` | `llama-3.3-70b-versatile` |
| Together AI | `.together()` | `meta-llama/Llama-3.3-70B-Instruct-Turbo` |
| Mistral | `.mistral()` | `mistral-large-latest` |
| DeepSeek | `.deepseek()` | `deepseek-chat` |
| Fireworks | `.fireworks()` | `accounts/fireworks/models/llama-v3p3-70b-instruct` |
| Perplexity | `.perplexity()` | `sonar-pro` |
| xAI (Grok) | `.xai()` | `grok-3` |
| Cohere | `.cohere()` | `command-a-08-2025` |
| AWS Bedrock | `.bedrock()` | `anthropic.claude-sonnet-4-5-20250929-v1:0` |
| fal.ai | `FalProvider::new` / `.fal()` | (image generation) |
All OpenAI-compatible providers are accessible through `OpenAiCompatProvider` in Rust, or through static factory methods on `CompletionModel` in Python and TypeScript.
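Here, "OpenAI-compatible" means the provider serves the OpenAI chat-completions wire format from its own base URL, so only the endpoint, API key, and default model differ between factories. An illustrative sketch of what such a factory boils down to (the endpoint strings are the providers' publicly documented values, not taken from Blazen's internals):

```typescript
// Illustrative only: an OpenAI-compatible provider is the OpenAI wire
// format behind a different base URL.
interface CompatConfig {
  baseUrl: string;
  apiKey: string;
  defaultModel: string;
}

function groq(apiKey: string): CompatConfig {
  return {
    baseUrl: "https://api.groq.com/openai/v1", // Groq's OpenAI-compatible endpoint
    apiKey,
    defaultModel: "llama-3.3-70b-versatile",
  };
}

function openrouter(apiKey: string): CompatConfig {
  return {
    baseUrl: "https://openrouter.ai/api/v1",
    apiKey,
    defaultModel: "openai/gpt-4.1",
  };
}

console.log(groq("gsk-...").baseUrl); // https://api.groq.com/openai/v1
```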
## Documentation
Full documentation, guides, and API reference are available at blazen.dev/docs/getting-started/introduction.
## License
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
## Author
Built by Zach Handley.