# do-over
do-over is an async-first resilience and transient fault handling library for Rust,
inspired by the .NET Polly library.
## Goals
- Explicit failure modeling
- Async-native (Tokio)
- No global state
- Observable and composable
- Familiar mental model for Polly users
## Installation

### From crates.io (recommended)

Add do-over to your Cargo.toml:

```toml
[dependencies]
do-over = "0.1"
tokio = { version = "1", features = ["full"] }
```

### From GitHub

To use the latest development version directly from GitHub:

```toml
[dependencies]
do-over = { git = "https://github.com/nwpz/do-over.git", branch = "main" }
tokio = { version = "1", features = ["full"] }
```

Or pin to a specific commit:

```toml
[dependencies]
do-over = { git = "https://github.com/nwpz/do-over.git", rev = "d01bf82" }
tokio = { version = "1", features = ["full"] }
```
### Feature Flags

| Feature | Description |
|---|---|
| `http` | Enables reqwest integration for HTTP clients |
| `metrics-prometheus` | Prometheus metrics integration |
| `metrics-otel` | OpenTelemetry metrics integration |

```toml
# With optional features
do-over = { version = "0.1", features = ["http", "metrics-prometheus"] }
```
## Quick Start

1. Create a new project
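For example (the project name is arbitrary):

```bash
cargo new resilient-app
cd resilient-app
```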
2. Add dependencies to Cargo.toml

```toml
[dependencies]
do-over = { git = "https://github.com/nwpz/do-over.git" }
tokio = { version = "1", features = ["full"] }
```
3. Write your resilient code

Replace src/main.rs with the following (constructor and `execute` signatures here follow the API Reference at the end of this README):

```rust
use do_over::RetryPolicy;
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Retry up to 3 times with 100ms between attempts
    let retry = RetryPolicy::fixed(3, Duration::from_millis(100));

    let result = retry
        .execute(|| async {
            // Replace this with your own fallible operation
            Ok::<_, std::io::Error>("operation completed successfully")
        })
        .await;

    println!("{result:?}");
}
```
4. Run your application
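With the dependencies in place, build and start the application:

```bash
cargo run
```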
## Real-World Example: Resilient HTTP Client

A sketch of a resilient HTTP call: the reqwest usage is standard, while the do-over constructor signatures follow the API Reference at the end of this README.

```rust
use do_over::{RetryPolicy, TimeoutPolicy, Wrap};
use std::time::Duration;

// Define your error type
type AppError = Box<dyn std::error::Error + Send + Sync>;

async fn fetch_user(client: &reqwest::Client) -> Result<String, AppError> {
    let body = client
        .get("https://api.example.com/users/42")
        .send()
        .await?
        .error_for_status()?
        .text()
        .await?;
    Ok(body)
}

#[tokio::main]
async fn main() {
    let client = reqwest::Client::new();

    // Each retry attempt gets its own 5-second timeout
    let retry = RetryPolicy::fixed(3, Duration::from_millis(200));
    let timeout = TimeoutPolicy::new(Duration::from_secs(5));
    let policy = Wrap::new(retry, timeout);

    match policy.execute(|| fetch_user(&client)).await {
        Ok(user) => println!("fetched: {user}"),
        Err(e) => eprintln!("request failed: {e:?}"),
    }
}
```
## Table of Contents
- Installation
- Quick Start
- Development Setup
- Core Concepts
- Policies
- Advanced Usage
- Tower Integration
- Metrics
- Philosophy
- Examples
- API Reference
## Development Setup (VS Code Dev Containers)

### Prerequisites

- Docker
- Visual Studio Code
- VS Code Remote Containers extension

### Steps

- Unzip this repository
- Open the folder in VS Code
- When prompted: Reopen in Container
- The container will:
  - Install Rust
  - Run `cargo build`
  - Enable rust-analyzer

You are now ready to develop.
## Core Concepts

### Policy Trait

All resilience policies in do-over implement the `Policy<E>` trait:
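The exact definition is not reproduced here; the following is a minimal sketch of its likely shape, inferred from the execution pattern described under "Executing Operations" below (the method bounds are assumptions):

```rust
use std::future::Future;

// Hypothetical sketch; the real trait may differ in bounds and return type.
pub trait Policy<E> {
    async fn execute<T, F, Fut>(&self, op: F) -> Result<T, DoOverError<E>>
    where
        F: Fn() -> Fut,
        Fut: Future<Output = Result<T, E>>;
}
```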
This trait allows you to wrap any async operation with resilience patterns. The operation must return a `Result<T, E>`, making error handling explicit.
### Error Handling

do-over uses the `DoOverError<E>` type to wrap your application errors with policy-specific failures:
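A plausible shape, listing only the failure modes referenced elsewhere in this README (the variant names beyond `Timeout` and `BulkheadFull` are assumptions):

```rust
// Hypothetical sketch; variant names are inferred from this README's prose.
#[derive(Debug)]
pub enum DoOverError<E> {
    /// The wrapped operation returned an application error
    App(E),
    /// A timeout policy expired before the operation finished
    Timeout,
    /// The circuit breaker rejected the call while open
    CircuitOpen,
    /// The bulkhead was full (and any queue timeout elapsed)
    BulkheadFull,
    /// The rate limiter had no tokens available
    RateLimitExceeded,
}
```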
This allows you to distinguish between infrastructure failures (timeout, circuit open) and application failures.
## Policies

### Retry Policy
The retry policy automatically retries failed operations with configurable backoff strategies.
#### Features
- Fixed Backoff: Wait a constant duration between retries
- Exponential Backoff: Increase delay exponentially with each retry
- Metrics Integration: Track retry attempts and outcomes
```mermaid
flowchart TD
    Start([Execute Operation]) --> Execute[Run Operation]
    Execute --> Check{Success?}
    Check -->|Yes| Success([Return Result])
    Check -->|No| CountCheck{Retries<br/>Remaining?}
    CountCheck -->|Yes| Wait[Wait with Backoff]
    Wait --> Increment[Increment Attempt]
    Increment --> Execute
    CountCheck -->|No| Fail([Return Error])

    style Success fill:#90EE90
    style Fail fill:#FFB6C6
    style Wait fill:#FFE4B5
```
#### Usage

Fixed Backoff:

```rust
use do_over::RetryPolicy;
use std::time::Duration;

// Retry up to 3 times with 100ms between attempts
let retry = RetryPolicy::fixed(3, Duration::from_millis(100));

// `fetch_data` is your own async operation
let result = retry.execute(|| async { fetch_data().await }).await?;
```

Exponential Backoff:

```rust
// Retry up to 5 times with exponential backoff
// Base delay: 100ms, multiplier: 2.0
// Delays will be: 100ms, 200ms, 400ms, 800ms, 1600ms
let retry = RetryPolicy::exponential(5, Duration::from_millis(100), 2.0);
```

With Metrics:

```rust
// `my_metrics` is any implementation of the Metrics trait (see Metrics below)
let retry = RetryPolicy::fixed(3, Duration::from_millis(100))
    .with_metrics(my_metrics);
```
#### When to Use
- Network calls that may fail due to transient issues
- Database operations that might temporarily fail
- Any operation where temporary failures are expected
- External API calls with occasional timeouts
### Circuit Breaker
The circuit breaker prevents cascading failures by stopping requests to a failing service, giving it time to recover.
#### How It Works
The circuit breaker has three states:
- Closed: Requests flow normally. Failures are counted.
- Open: After reaching the failure threshold, the circuit opens and immediately rejects requests.
- Half-Open: After the reset timeout, one request is allowed through to test if the service recovered.
```mermaid
stateDiagram-v2
    [*] --> Closed
    Closed --> Open: Failure threshold reached
    Open --> HalfOpen: Reset timeout elapsed
    HalfOpen --> Closed: Request succeeds
    HalfOpen --> Open: Request fails
    Closed --> Closed: Request succeeds

    note right of Closed
        Requests pass through
        Failures are counted
    end note

    note right of Open
        Requests fail immediately
        No calls to service
    end note

    note right of HalfOpen
        One test request allowed
        Determines next state
    end note
```
#### Usage

```rust
use do_over::CircuitBreaker;
use std::time::Duration;

// Open after 5 failures, reset after 60 seconds
let breaker = CircuitBreaker::new(5, Duration::from_secs(60));

match breaker.execute(|| async { call_service().await }).await {
    Ok(value) => println!("success: {value:?}"),
    // A rejected call surfaces as a DoOverError (e.g., the circuit is open)
    Err(e) => eprintln!("failed: {e:?}"),
}
```
#### Configuration

- `failure_threshold`: Number of consecutive failures before opening the circuit
- `reset_timeout`: How long to wait before transitioning from Open to Half-Open
#### When to Use
- Protecting your application from cascading failures
- Preventing resource exhaustion when a dependency is down
- Giving failing services time to recover
- Fast-failing when a service is known to be down
### Timeout Policy
The timeout policy ensures operations complete within a specified duration, preventing indefinite hangs.
```mermaid
sequenceDiagram
    participant C as Client
    participant T as Timeout Policy
    participant O as Operation
    participant Timer as Timer

    C->>T: execute()
    T->>Timer: Start timeout
    T->>O: Start operation

    alt Operation completes first
        O-->>T: Result
        T->>Timer: Cancel
        T-->>C: Return result
    else Timer expires first
        Timer-->>T: Timeout!
        T->>O: Cancel
        T-->>C: DoOverError::Timeout
    end
```
#### Usage

```rust
use do_over::TimeoutPolicy;
use std::time::Duration;

// Fail if operation takes longer than 5 seconds
let timeout = TimeoutPolicy::new(Duration::from_secs(5));

match timeout.execute(|| async { slow_operation().await }).await {
    Ok(value) => println!("completed: {value:?}"),
    Err(e) => eprintln!("timed out or failed: {e:?}"),
}
```
#### When to Use
- HTTP requests to external services
- Database queries that might hang
- Any operation with SLA requirements
- Preventing resource leaks from hanging operations
- Ensuring responsive applications with bounded latency
### Bulkhead Isolation
The bulkhead policy limits concurrent executions, preventing resource exhaustion and isolating failures.
#### Features
- Concurrency Limiting: Control maximum parallel operations
- Queue Timeout: Optionally fail fast when bulkhead is full
- Resource Protection: Prevent thread pool or connection pool exhaustion
```mermaid
flowchart LR
    subgraph Bulkhead["Bulkhead (Max: 3)"]
        S1[Slot 1: 🔵]
        S2[Slot 2: 🔵]
        S3[Slot 3: 🔵]
    end

    R1[Request 1] --> S1
    R2[Request 2] --> S2
    R3[Request 3] --> S3
    R4[Request 4] -.->|Rejected| Reject[❌ BulkheadFull]
    R5[Request 5] -.->|Waiting| Queue[⏳ Queue]

    S1 --> Complete1[✅ Complete]
    Queue -.->|Slot available| S1

    style Reject fill:#FFB6C6
    style Queue fill:#FFE4B5
    style Complete1 fill:#90EE90
```
#### Usage

Basic Bulkhead:

```rust
use do_over::Bulkhead;

// Allow maximum 10 concurrent executions
let bulkhead = Bulkhead::new(10);

let result = bulkhead.execute(|| async { do_work().await }).await;
```

With Queue Timeout:

```rust
use std::time::Duration;

// Allow 10 concurrent, wait max 1 second for a slot
let bulkhead = Bulkhead::new(10)
    .with_queue_timeout(Duration::from_secs(1));

match bulkhead.execute(|| async { do_work().await }).await {
    Ok(value) => println!("done: {value:?}"),
    // Rejected when the bulkhead stays full past the queue timeout
    Err(e) => eprintln!("rejected or failed: {e:?}"),
}
```
#### When to Use
- Protecting limited resources (database connections, file handles)
- Preventing one service from monopolizing thread pools
- Isolating different types of work
- Rate-limiting resource-intensive operations
- Implementing fair resource allocation
### Rate Limiter
The rate limiter controls the rate of operations using a token bucket algorithm.
#### How It Works
The rate limiter maintains a bucket of tokens that refill at regular intervals:
- Each operation consumes one token
- If no tokens are available, the operation is rejected
- Tokens refill to capacity after each interval
```mermaid
flowchart TD
    Start([Request]) --> Check{Tokens<br/>Available?}
    Check -->|Yes| Consume[Consume Token]
    Consume --> Execute[Execute Operation]
    Execute --> Success([Return Result])
    Check -->|No| Reject([Rate Limit Exceeded])

    Interval[Time Interval] -.->|Refill| Bucket[(Token Bucket<br/>🪙🪙🪙)]
    Bucket -.-> Check
    Consume -.->|Update| Bucket

    style Success fill:#90EE90
    style Reject fill:#FFB6C6
    style Bucket fill:#E6E6FA
    style Interval fill:#FFE4B5
```
#### Usage

```rust
use do_over::RateLimiter;
use std::time::Duration;

// Allow 100 requests per second
let limiter = RateLimiter::new(100, Duration::from_secs(1));

match limiter.execute(|| async { call_api().await }).await {
    Ok(value) => println!("ok: {value:?}"),
    Err(e) => eprintln!("rate limited or failed: {e:?}"),
}
```
#### When to Use
- Complying with API rate limits
- Protecting services from overload
- Implementing fair usage policies
- Throttling expensive operations
- Controlling request rates to external services
### Hedge Policy
The hedge policy improves latency by starting a backup request if the primary request is slow.
#### How It Works
- Start the primary request
- After a configured delay, start a hedged (backup) request
- Return whichever completes first
- Cancel the slower request
This reduces tail latency when some requests are unexpectedly slow.
```mermaid
sequenceDiagram
    participant C as Client
    participant H as Hedge Policy
    participant P1 as Primary Request
    participant P2 as Hedged Request

    C->>H: execute()
    H->>P1: Start primary
    Note over H: Wait hedge delay
    H->>P2: Start hedged request

    alt Primary completes first
        P1-->>H: Result ✅
        H->>P2: Cancel
        H-->>C: Return result
    else Hedged completes first
        P2-->>H: Result ✅
        H->>P1: Cancel
        H-->>C: Return result
    end
```
#### Usage

```rust
use do_over::Hedge;
use std::time::Duration;

// Start backup request after 100ms
let hedge = Hedge::new(Duration::from_millis(100));

let result = hedge.execute(|| async { fetch_replica().await }).await?;
```
#### When to Use
- Read operations where latency matters more than cost
- Services with high latency variance
- Operations where sending duplicate requests is safe
- Improving P99 latency
- Important: Only use with idempotent operations (safe to execute multiple times)
## Advanced Usage

### Composing Policies with Wrap

The `Wrap` utility allows you to compose multiple policies together, creating sophisticated resilience strategies by layering policies.

#### How Wrap Works

`Wrap` takes two policies (outer and inner) and chains them together. Execution flows from outer → inner → operation.
```mermaid
flowchart LR
    Request([Request]) --> Outer[Outer Policy]
    Outer --> Inner[Inner Policy]
    Inner --> Operation[Your Operation]
    Operation --> Inner
    Inner --> Outer
    Outer --> Response([Response])

    style Outer fill:#E6F3FF
    style Inner fill:#FFF4E6
    style Operation fill:#E8F5E9
```
#### Basic Usage

Simple Two-Policy Composition:

```rust
use do_over::Wrap;
use do_over::RetryPolicy;
use do_over::TimeoutPolicy;
use std::time::Duration;

// Create individual policies
let retry = RetryPolicy::fixed(3, Duration::from_millis(100));
let timeout = TimeoutPolicy::new(Duration::from_secs(5));

// Wrap them together (outer policy first, then inner)
let policy = Wrap::new(retry, timeout);

// Execute with composed policy
let result = policy.execute(|| async { call_service().await }).await?;
```

In this example, the retry policy wraps the timeout policy. Each retry attempt has a 5-second timeout.
```mermaid
sequenceDiagram
    participant C as Client
    participant R as Retry Policy
    participant T as Timeout Policy
    participant O as Operation

    C->>R: execute()
    R->>T: attempt 1
    T->>O: call with timeout
    O--xT: fails
    T--xR: error
    R->>R: wait backoff
    R->>T: attempt 2
    T->>O: call with timeout
    O-->>T: success ✅
    T-->>R: result
    R-->>C: result
```
#### Multi-Layer Composition

Wrapping Multiple Policies:

```rust
use do_over::Wrap;
use do_over::RetryPolicy;
use do_over::CircuitBreaker;
use do_over::TimeoutPolicy;
use do_over::Bulkhead;
use std::time::Duration;

// Create policies
let bulkhead = Bulkhead::new(10);
let breaker = CircuitBreaker::new(5, Duration::from_secs(60));
let retry = RetryPolicy::exponential(3, Duration::from_millis(100), 2.0);
let timeout = TimeoutPolicy::new(Duration::from_secs(5));

// Nest wraps for complex composition
let policy = Wrap::new(bulkhead, Wrap::new(breaker, Wrap::new(retry, timeout)));

// Execution order: bulkhead → breaker → retry → timeout → operation
let result = policy.execute(|| async { call_service().await }).await?;
```
```mermaid
flowchart TB
    Client([Client Request])

    subgraph Wrap1["Outermost Wrap"]
        BH[Bulkhead<br/>10 concurrent max]
        subgraph Wrap2["Middle Wrap"]
            CB[Circuit Breaker<br/>5 failure threshold]
            subgraph Wrap3["Inner Wrap"]
                RT[Retry Policy<br/>3 attempts, exponential]
                TO[Timeout<br/>5 seconds]
            end
        end
    end

    OP[Operation]
    Result([Result])

    Client --> BH
    BH -->|Has capacity| CB
    CB -->|Circuit closed| RT
    RT --> TO
    TO --> OP
    OP --> TO
    TO --> RT
    RT -->|Success/Max retries| CB
    CB --> BH
    BH --> Result

    style BH fill:#E6F3FF
    style CB fill:#FFE6E6
    style RT fill:#FFF4E6
    style TO fill:#E6FFE6
    style OP fill:#E8F5E9
    style Result fill:#90EE90
```
#### Common Patterns

**Pattern 1: Retry with Timeout.** Each retry attempt is individually timed out:

```rust
let policy = Wrap::new(retry, timeout);
```

**Pattern 2: Circuit Breaker with Retry.** Retries only happen when the circuit is closed:

```rust
let policy = Wrap::new(breaker, retry);
```

**Pattern 3: Bulkhead with Everything.** Limit concurrency before applying other policies:

```rust
let policy = Wrap::new(bulkhead, Wrap::new(breaker, Wrap::new(retry, timeout)));
```
#### Best Practices for Policy Ordering
The order in which you wrap policies matters significantly:
- Bulkhead (Outermost): Limit concurrency first to protect resources
- Circuit Breaker: Fast-fail before attempting expensive operations
- Rate Limiter: Throttle before retrying
- Retry: Attempt multiple times for transient failures
- Timeout (Innermost): Apply time bounds to individual attempts
- Hedge: Use for read operations where duplicates are acceptable
```mermaid
flowchart TD
    Start([Request]) --> Order{Recommended<br/>Policy Order}
    Order --> L1[1️⃣ Bulkhead<br/>Control concurrency]
    L1 --> L2[2️⃣ Circuit Breaker<br/>Fast fail if open]
    L2 --> L3[3️⃣ Rate Limiter<br/>Throttle requests]
    L3 --> L4[4️⃣ Retry<br/>Handle transients]
    L4 --> L5[5️⃣ Timeout<br/>Bound attempts]
    L5 --> L6[6️⃣ Operation<br/>Your code]
    L6 --> Result([Result])

    style L1 fill:#E6F3FF
    style L2 fill:#FFE6E6
    style L3 fill:#F0E6FF
    style L4 fill:#FFF4E6
    style L5 fill:#E6FFE6
    style L6 fill:#E8F5E9
    style Result fill:#90EE90
```
#### Real-World Example

Complete example for a resilient HTTP client (a sketch: helper names like `fetch` are illustrative, and policy signatures follow the API Reference below):

```rust
use do_over::Wrap;
use do_over::Bulkhead;
use do_over::CircuitBreaker;
use do_over::RetryPolicy;
use do_over::TimeoutPolicy;
use do_over::DoOverError;
use std::time::Duration;

async fn fetch(url: &str) -> Result<String, reqwest::Error> {
    reqwest::get(url).await?.text().await
}

#[tokio::main]
async fn main() {
    let bulkhead = Bulkhead::new(10);
    let breaker = CircuitBreaker::new(5, Duration::from_secs(60));
    let retry = RetryPolicy::exponential(3, Duration::from_millis(100), 2.0);
    let timeout = TimeoutPolicy::new(Duration::from_secs(5));

    // bulkhead → breaker → retry → timeout → operation
    let policy = Wrap::new(bulkhead, Wrap::new(breaker, Wrap::new(retry, timeout)));

    match policy.execute(|| fetch("https://api.example.com/data")).await {
        Ok(body) => println!("fetched {} bytes", body.len()),
        Err(DoOverError::Timeout) => eprintln!("request timed out"),
        Err(e) => eprintln!("request failed: {e:?}"),
    }
}
```
### Executing Operations

All policies use the same execution pattern:

```rust
let result = policy.execute(|| async {
    // Your async operation goes here
    do_work().await
}).await;
```

The operation must be a closure that returns a `Future<Output = Result<T, E>>`.
## Tower Integration

do-over integrates with Tower middleware for HTTP services:

```rust
use tower::ServiceBuilder;

// A sketch: `PolicyLayer` is a placeholder name for do-over's Tower layer,
// which is not spelled out in this README; `my_service` is your Tower service.
let service = ServiceBuilder::new()
    .layer(PolicyLayer::new(policy))
    .service(my_service);
```

This allows you to apply resilience policies to Tower services, including:

- Hyper HTTP services
- Tonic gRPC services
- Any service implementing `tower::Service`
## Metrics

### Observability

do-over provides hooks for metrics collection to monitor the health of your resilience policies.

### Prometheus Integration

Enable Prometheus metrics:

```toml
do-over = { path = ".", features = ["metrics-prometheus"] }
```

### OpenTelemetry Integration

Enable OpenTelemetry metrics:

```toml
do-over = { path = ".", features = ["metrics-otel"] }
```
### Custom Metrics

Implement the Metrics trait to integrate with your observability system:
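The trait's methods are not shown in this README, so the following is purely illustrative; `on_retry` and `on_failure` are invented names standing in for whatever hooks the real trait defines:

```rust
use do_over::Metrics;

// Hypothetical implementation that simply logs to stdout;
// the real Metrics trait's methods may differ.
struct LogMetrics;

impl Metrics for LogMetrics {
    fn on_retry(&self, attempt: u32) {
        println!("retry attempt {attempt}");
    }

    fn on_failure(&self, policy: &str) {
        println!("policy {policy} recorded a failure");
    }
}
```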
## Philosophy

Unlike Polly, do-over:

- Uses `Result<T, E>` instead of exceptions
- Makes failures explicit
- Avoids hidden background behavior

This makes do-over ideal for high-reliability systems.
### Design Principles

- Explicit over Implicit: All errors are returned as `Result` types
- Async-First: Built on Tokio for native async/await support
- Zero Global State: All state is explicit and contained in policy instances
- Composable: Policies can be easily combined and nested
- Type-Safe: Leverages Rust's type system for correctness
- Observable: Metrics and instrumentation built into the design
## Examples

The `examples/` directory contains comprehensive demonstrations of each policy:
| Example | Description | Run Command |
|---|---|---|
| `retry` | Fixed and exponential backoff strategies | `cargo run --example retry` |
| `circuit_breaker` | State transitions: Closed → Open → Half-Open | `cargo run --example circuit_breaker` |
| `timeout` | Time-bounded operations | `cargo run --example timeout` |
| `bulkhead` | Concurrency limiting with queue timeout | `cargo run --example bulkhead` |
| `rate_limiter` | Token bucket rate limiting | `cargo run --example rate_limiter` |
| `hedge` | Hedged requests for latency reduction | `cargo run --example hedge` |
| `composition` | Policy composition patterns with Wrap | `cargo run --example composition` |
| `comprehensive` | Real-world order processing system | `cargo run --example comprehensive` |
### Running Examples

```bash
# Run a specific example
cargo run --example retry

# Run all examples (build check)
cargo build --examples
```
### Example Output

Each example produces visual output showing policy behavior:

```text
=== Do-Over Retry Policy Example ===

📌 Scenario 1: Fixed Backoff (fails twice, succeeds on third)
   Configuration: max_retries=2, delay=100ms
   [+ 0ms] Attempt 1: ❌ Simulated failure
   [+ 105ms] Attempt 2: ❌ Simulated failure
   [+ 209ms] Attempt 3: ✅ Success!
   Result: "Operation completed successfully"
```
## API Reference

### Core Types

| Type | Description |
|---|---|
| `Policy<E>` | Trait implemented by all resilience policies |
| `DoOverError<E>` | Error type wrapping policy and application errors |
| `Wrap<O, I>` | Composes two policies together |
### Policy Constructors

```rust
// Retry
RetryPolicy::fixed(max_retries, delay)
RetryPolicy::exponential(max_retries, base_delay, multiplier)

// Circuit Breaker
CircuitBreaker::new(failure_threshold, reset_timeout)

// Timeout
TimeoutPolicy::new(duration)

// Bulkhead
Bulkhead::new(max_concurrent)
Bulkhead::new(max_concurrent).with_queue_timeout(queue_timeout)

// Rate Limiter
RateLimiter::new(requests, interval)

// Hedge
Hedge::new(hedge_delay)

// Composition
Wrap::new(outer, inner)
```
## License
MIT or Apache 2.0