# whatwg_streams
A high-performance, WHATWG Streams API-compliant implementation for Rust, providing `ReadableStream`, `WritableStream`, and `TransformStream` primitives with full backpressure support.
This crate mirrors the browser Streams API while adapting to Rust's ownership model and async ecosystem. It provides built-in flow control to prevent memory exhaustion, zero-copy operations for efficiency, and type-safe locking to prevent reader/writer conflicts at compile time.
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
whatwg_streams = "0.1.0"
```
## Runtime Agnostic
This crate is not tied to any specific async runtime. When you call `.spawn(...)`, you provide a function or closure that schedules/runs background tasks in the runtime you choose.
For example:
**Tokio**

```rust
let stream = from_vec(vec![1, 2, 3])
    .spawn(tokio::spawn);
```

**async-std**

```rust
let stream = from_vec(vec![1, 2, 3])
    .spawn(async_std::task::spawn);
```

**smol**

```rust
let stream = from_vec(vec![1, 2, 3])
    .spawn(smol::spawn);
```
Custom executor
You can also supply your own executor/spawner.
Here is how you might use futures::executor::LocalPool:
use LocalPool;
use LocalSpawnExt;
let mut pool = new;
let spawner = pool.spawner;
// Drive your application code on the pool
pool.run_until;
## Raw Thread Execution
Streams are fully runtime-agnostic: `.spawn(...)` accepts any spawner that drives a future. This means you can even run a stream on a raw thread without a full async runtime:
```rust
use futures::executor::block_on;
use whatwg_streams::ReadableStream;
use std::thread;

// Each future is driven to completion inside a separate thread
let stream = from_vec(vec![1, 2, 3])
    .spawn(|fut| { thread::spawn(move || block_on(fut)); });
```
This approach is useful for lightweight single-use threads or environments where you don’t want a full async runtime. For most applications, using a proper runtime like Tokio, async-std, or Smol remains recommended.
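To make "driving a future to completion on a raw thread" concrete, here is a minimal, std-only sketch of a hand-rolled `block_on`, independent of this crate. The `Signal` type and `block_on` helper are illustrative, not part of the crate's API:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread;

// A waker that flips a flag and signals a condvar when the future
// is ready to be polled again.
struct Signal {
    ready: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

// Drive a single future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let signal = Arc::new(Signal { ready: Mutex::new(false), cond: Condvar::new() });
    let waker = Waker::from(signal.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => {
                // Sleep until the waker fires, then poll again.
                let mut ready = signal.ready.lock().unwrap();
                while !*ready {
                    ready = signal.cond.wait(ready).unwrap();
                }
                *ready = false;
            }
        }
    }
}

fn main() {
    // Run an async block on a raw thread - no runtime involved.
    let handle = thread::spawn(|| block_on(async { 1 + 2 }));
    assert_eq!(handle.join().unwrap(), 3);
}
```

This is essentially what `futures::executor::block_on` does for you; a spawner closure like the one above just wraps it in `thread::spawn`.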
## Feature Flags: Multi-threaded vs Single-threaded
By default, this crate uses `Arc` for multi-threaded runtimes (the `send` feature). For single-threaded runtimes, you can use the `local` feature instead, which uses `Rc` internally:

- `send` (default): multi-threaded, uses `Arc`, requires `Send + Sync`
- `local`: single-threaded, uses `Rc`, no `Send` requirement, lower overhead
### Using the `local` feature
Add to your `Cargo.toml`:

```toml
[dependencies]
whatwg_streams = { version = "0.1.0", default-features = false, features = ["local"] }
```
Then use with single-threaded executors like `tokio::task::spawn_local` or `futures::executor::LocalPool`:

```rust
use whatwg_streams::ReadableStream;
use tokio::task::LocalSet;

let local = LocalSet::new();
local.run_until(async {
    let stream = from_vec(vec![1, 2, 3])
        .spawn(tokio::task::spawn_local);
    // ...
}).await;
```
The API is identical for both features: choose `send` (default) for multi-threaded flexibility, or `local` when you're certain everything runs on a single thread.
## Basic Usage

```rust
use whatwg_streams::ReadableStream;
use futures::StreamExt;

// Create a stream from an iterator
let data = vec![1, 2, 3, 4, 5];
let stream = from_iterator(data.into_iter())
    .spawn(tokio::spawn);

let mut reader = stream.get_reader().unwrap();

// Read values
while let Some(chunk) = reader.read().await.unwrap() {
    println!("{chunk}");
}
```
## Writable Streams

```rust
use whatwg_streams::WritableStream;

// A sink type implementing the crate's writable-sink trait (definition elided)
struct ConsoleSink;

let stream = builder(ConsoleSink)
    .spawn(tokio::spawn);

let writer = stream.get_writer().unwrap();
writer.write("hello").await.unwrap();
writer.close().await.unwrap();
```
## Transform Streams

```rust
use whatwg_streams::{ReadableStream, TransformStream};

// A transformer implementing the crate's transform trait (definition elided)
struct Uppercase;

let source = from_vec(vec!["a", "b"])
    .spawn(tokio::spawn);

let transform = builder(Uppercase)
    .spawn(tokio::spawn);

let output = source.pipe_through(transform)
    .spawn(tokio::spawn);

let mut reader = output.get_reader().unwrap();
assert_eq!(reader.read().await.unwrap(), Some("A".to_string()));
assert_eq!(reader.read().await.unwrap(), Some("B".to_string()));
```
## Core Concepts

### ReadableStream

Represents a source of data that can be read chunk-by-chunk:
```rust
// From various sources
let stream1 = from_vec(vec![1, 2, 3])
    .spawn(tokio::spawn);

let stream2 = from_iterator(0..10)
    .spawn(tokio::spawn);

let async_stream = futures::stream::iter(vec![1, 2, 3]);
let stream3 = from_stream(async_stream)
    .spawn(tokio::spawn);
```
### WritableStream

Represents a destination that accepts data:
```rust
let sink = FileSink { /* ... */ };
let stream = builder(sink)
    .strategy(CountQueuingStrategy::new(10)) // Buffer up to 10 chunks
    .spawn(tokio::spawn);
```
### Backpressure

Streams automatically handle backpressure to prevent memory issues:
```rust
let writer = writable_stream.get_writer().unwrap();

// Sequential writes - each write waits for completion
writer.write(chunk1).await?;
writer.write(chunk2).await?;

// For high throughput without waiting for completion:
// check if ready first, then enqueue without waiting
writer.ready().await?;
writer.enqueue(chunk3)?; // Enqueues immediately, doesn't wait

// Or use the helper that waits for readiness
writer.enqueue_when_ready(chunk4).await?; // Waits for ready, then enqueues
```
## Byte Streams

Optimized for binary data with zero-copy operations:
```rust
use whatwg_streams::ReadableByteSource;

// `source` is some type implementing `ReadableByteSource`
let stream = builder_bytes(source)
    .spawn(tokio::spawn);

// BYOB reader for zero-copy reads
let mut reader = stream.get_byob_reader().unwrap();
let mut buffer = vec![0u8; 4096];
let bytes_read = reader.read(&mut buffer).await?;
```
## Advanced Features

### Stream Teeing

Split a stream into multiple independent branches:
```rust
let source = from_vec(vec![1, 2, 3])
    .spawn(tokio::spawn);

let (branch1, branch2) = source
    .tee()
    .backpressure_mode(/* ... */)
    .spawn(tokio::spawn)?;

// Both streams receive the same data
```
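For intuition, teeing amounts to cloning each chunk into two independent buffers that are then consumed separately. A toy, std-only sketch of that idea (the `tee` function here is illustrative and unrelated to the crate's `tee`):

```rust
use std::collections::VecDeque;

// A toy tee: pull from one source, push each item into two branch buffers.
fn tee<T: Clone>(source: impl IntoIterator<Item = T>) -> (VecDeque<T>, VecDeque<T>) {
    let mut a = VecDeque::new();
    let mut b = VecDeque::new();
    for item in source {
        a.push_back(item.clone());
        b.push_back(item);
    }
    (a, b)
}

fn main() {
    let (branch1, branch2) = tee(vec![1, 2, 3]);
    // Both branches saw exactly the same data
    assert_eq!(branch1, branch2);
    assert_eq!(Vec::from(branch1), vec![1, 2, 3]);
}
```

The real implementation differs in that both branches are filled lazily as readers pull, with the backpressure mode deciding how far the faster branch may run ahead of the slower one.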
### Piping

Connect readable and writable streams:
```rust
source_stream.pipe_to(dest_stream).await?;

// With options
use whatwg_streams::AbortRegistration;

let (registration, handle) = AbortRegistration::new();
let options = StreamPipeOptions { /* ... */ };
source_stream.pipe_to(/* dest_stream + options */).await?;
```
### Custom Queuing Strategies

Control buffering behavior:
```rust
use whatwg_streams::CountQueuingStrategy;

let stream = builder(sink)
    .strategy(CountQueuingStrategy::new(5))
    .spawn(tokio::spawn);
```
## Error Handling

Streams provide comprehensive error handling:

```rust
use whatwg_streams::StreamError;

// Errors propagate through the stream
match reader.read().await {
    Ok(Some(chunk)) => { /* process the chunk */ }
    Ok(None) => { /* stream is done */ }
    Err(e) => { /* handle the StreamError */ }
}
```
## Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
## License
Licensed under the MIT License.
## Acknowledgments
This implementation follows the WHATWG Streams Standard and draws inspiration from the browser Streams API.