# Tailcall

`tailcall` lets you write deeply recursive functions in Rust without blowing the stack, on stable Rust.
It provides explicit, stack-safe tail calls using a lightweight trampoline runtime, with a macro that keeps usage ergonomic.
The runtime crate is `no_std`, so it can be used on targets without the standard library.
If the proposed `become` keyword is ever stabilized, it will likely be the preferred solution for proper tail calls.
## Installation

```toml
[dependencies]
tailcall = "~2"
```
## Quick Example
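A sketch of the intended usage, based on the `#[tailcall]` attribute and `tailcall::call!` wrapper described below (the exact import path `use tailcall::tailcall;` is an assumption):

```rust
use tailcall::tailcall;

#[tailcall]
fn triangular(n: u64, acc: u64) -> u64 {
    if n == 0 {
        acc
    } else {
        // Wrapped tail call: runs on the trampoline, not the call stack.
        tailcall::call!(triangular(n - 1, acc + n))
    }
}

assert_eq!(triangular(1_000_000, 0), 500_000_500_000);
```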
That’s the core API:
- mark the function with `#[tailcall]`
- wrap recursive tail calls with `tailcall::call!`
This runs in constant stack space, even for very large inputs.
## When to Use This

`tailcall` is useful when:
- you want to write naturally recursive code without risking stack overflow
- converting to loops would make the code harder to read
- you’re working with mutual recursion or recursive traversals
- you want stack safety without nightly features
It may not be ideal when:
- a simple loop is clearer
- you need maximum performance (there is some trampoline overhead)
## Cost

A common alternative for stack-safe recursion in Rust is to box each step. That can offer a similar interface, but it introduces allocation and indirection on every recursive step.
`tailcall` keeps each step inline instead, so the main cost is:
- an extra indirect call per `Thunk::bounce` step
In some cases, that cost disappears entirely. If a simple free function or inherent method only tail-calls itself directly, `#[tailcall]` can lower it to an inline loop.
### Rough Performance Shape
On a simple benchmark (relative to a handwritten loop):
- handwritten loop: 1.0×
- `#[tailcall]` (inline loop): ~1.0×
- `#[tailcall]` (`Thunk` runtime): ~3.2× slower
- boxed runtime: ~14× slower
This is just a local measurement, but the general pattern holds:
- direct self-recursion can optimize down to loop-like performance
- the `Thunk` runtime is slower, but supports more complex cases
- heap-allocating approaches are slower again
### Tradeoff

The slower path is also the more flexible one.
It’s what allows `tailcall` to support:
- mutual recursion
- borrowed-state builders
- recursive control flow that doesn’t collapse into a single loop
If your recursion is simple, you get loop-like performance.
If it’s not, you still get stack safety—without paying for heap allocation.
## How It Works (Briefly)

Rust does not guarantee tail call optimization, so deep recursion can overflow the stack.
`tailcall` avoids this by using a trampoline:
- each recursive step returns a deferred computation (a `Thunk`)
- the runtime repeatedly executes those steps in a loop
- no additional stack frames are created
The key operation is `Thunk::bounce(...)`, which produces the next step instead of recursing.
This turns recursion into iteration under the hood.
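The same mechanism can be sketched without the crate. This hand-rolled trampoline is only an illustration: it boxes each step on the heap for simplicity, where the real runtime stores the closure inline in a fixed-size `Thunk`.

```rust
// Each step is either a final value or a deferred next step.
enum Step<T> {
    Done(T),
    Bounce(Box<dyn FnOnce() -> Step<T>>),
}

// The trampoline: a plain loop that runs one step at a time,
// so the call stack never grows with recursion depth.
fn trampoline<T>(mut step: Step<T>) -> T {
    loop {
        match step {
            Step::Done(value) => return value,
            Step::Bounce(next) => step = next(),
        }
    }
}

// A "recursive" countdown expressed as trampoline steps.
fn countdown(n: u64) -> Step<u64> {
    if n == 0 {
        Step::Done(0)
    } else {
        Step::Bounce(Box::new(move || countdown(n - 1)))
    }
}

fn main() {
    // Deep enough that plain recursion could overflow the stack.
    assert_eq!(trampoline(countdown(1_000_000)), 0);
}
```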
## Macro Usage
Most users only need the macro.
For simple direct self-recursion, the macro can compile free functions and inherent methods to an inline loop. Mutual recursion and other more complex cases continue to use the hidden `Thunk` builder automatically.
For methods, that optimized path works by aliasing the receiver once, rebinding the non-receiver arguments as loop state, and turning each direct self tail call into "assign the next arguments, then continue the loop".
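That lowering can be sketched with a hypothetical example (this is illustrative, not actual macro output): a direct self tail call like `gcd(b, a % b)` becomes loop-state reassignment.

```rust
// What the inline-loop lowering conceptually produces for a
// hypothetical `#[tailcall] fn gcd(a: u64, b: u64) -> u64`:
fn gcd(mut a: u64, mut b: u64) -> u64 {
    loop {
        if b == 0 {
            return a;
        }
        // "assign the next arguments, then continue the loop"
        let (next_a, next_b) = (b, a % b);
        a = next_a;
        b = next_b;
    }
}

fn main() {
    assert_eq!(gcd(48, 18), 6);
}
```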
### Basic Pattern
Only calls wrapped in `tailcall::call!` are stack-safe.
If the function only tail-calls itself directly, this pattern is also the one that enables the inline-loop optimization.
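A minimal sketch of the pattern, assuming the `#[tailcall]` attribute and `tailcall::call!` macro described above (import path assumed):

```rust
use tailcall::tailcall;

#[tailcall]
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 {
        a
    } else {
        // The wrapped call is the stack-safe (and inline-loop-eligible) one.
        tailcall::call!(gcd(b, a % b))
    }
}
```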
### Mutual Recursion
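A sketch of mutually recursive functions under the same assumed API; each function tail-calls the other through `tailcall::call!`:

```rust
use tailcall::tailcall;

#[tailcall]
fn is_even(n: u64) -> bool {
    if n == 0 {
        true
    } else {
        tailcall::call!(is_odd(n - 1))
    }
}

#[tailcall]
fn is_odd(n: u64) -> bool {
    if n == 0 {
        false
    } else {
        tailcall::call!(is_even(n - 1))
    }
}

assert!(is_even(1_000_000));
```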
### Methods
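A sketch of the same pattern on an inherent method (type and names are illustrative, API as assumed above):

```rust
use tailcall::tailcall;

struct Counter {
    step: u64,
}

impl Counter {
    #[tailcall]
    fn total(&self, n: u64, acc: u64) -> u64 {
        if n == 0 {
            acc
        } else {
            // Direct self tail call through the receiver.
            tailcall::call!(self.total(n - 1, acc + self.step))
        }
    }
}

let counter = Counter { step: 2 };
assert_eq!(counter.total(1_000_000, 0), 2_000_000);
```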
### Mixed Recursion

Only `tailcall::call!` sites are trampoline-backed.
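A sketch of mixing both call styles in one function (names illustrative, API as assumed above): the plain call still consumes native stack proportional to the left spine, while the wrapped call is trampoline-backed.

```rust
use tailcall::tailcall;

enum Tree {
    Leaf(u64),
    Node(Box<Tree>, Box<Tree>),
}

#[tailcall]
fn sum(tree: &Tree, acc: u64) -> u64 {
    match tree {
        Tree::Leaf(v) => acc + v,
        Tree::Node(left, right) => {
            // Plain call: not in tail position, uses the native stack.
            let acc = sum(left, acc);
            // Wrapped call: tail position, trampoline-backed.
            tailcall::call!(sum(right, acc))
        }
    }
}
```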
### Recommended Pattern: Tail-Recursive Helper
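The idea is to keep the public signature clean and push the accumulator into a private `#[tailcall]` helper. A sketch, under the same API assumptions as above:

```rust
use tailcall::tailcall;

// Public entry point with a natural signature.
pub fn factorial(n: u64) -> u64 {
    factorial_acc(n, 1)
}

// Private tail-recursive helper carrying the accumulator.
#[tailcall]
fn factorial_acc(n: u64, acc: u64) -> u64 {
    if n <= 1 {
        acc
    } else {
        tailcall::call!(factorial_acc(n - 1, acc * n))
    }
}

assert_eq!(factorial(10), 3_628_800);
```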
## Using the Runtime Directly

The macro is just a thin layer over `runtime::Thunk`.

A `runtime::Thunk<T>` is a fixed-size deferred value, so it can live on the stack. It holds either the value directly or a type-erased closure that will eventually produce the value.
On 64-bit targets, the current runtime keeps `Thunk` at 32 bytes. It does that by storing deferred closures in a small inline slot, which means manual `Thunk` values and macro-generated helpers can only capture a limited amount of data before construction panics.
Pending `Thunk` values still preserve normal destructor-on-drop behavior for captured values.
You build a chain of steps, then execute it with `.call()`.

### Core constructors

- `Thunk::value(x)` — final result
- `Thunk::new(f)` — deferred computation returning a value
- `Thunk::bounce(f)` — deferred computation returning another `Thunk` (this is what enables stack safety)
### Example
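A sketch of direct runtime use with the constructors above; the `tailcall::runtime::Thunk` import path and exact signatures are assumptions:

```rust
use tailcall::runtime::Thunk;

fn count_down(n: u64) -> Thunk<u64> {
    if n == 0 {
        Thunk::value(0)
    } else {
        // Hand the runtime the next step instead of recursing.
        Thunk::bounce(move || count_down(n - 1))
    }
}

fn main() {
    // Runs in constant stack space.
    assert_eq!(count_down(1_000_000).call(), 0);
}
```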
`Thunk::bounce` ensures each step returns control to the runtime loop instead of growing the call stack.
## What the Macro Generates

At a high level, the macro splits an annotated function into two parts:

- a wrapper that calls `.call()`
- a hidden builder that returns a `Thunk<T>`
So the macro:
- rewrites your function into a trampoline-compatible form
- leaves control flow and logic unchanged
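That shape can be sketched self-containedly (all names here are illustrative, and the real `Thunk` avoids the boxing used below by storing closures inline):

```rust
// Stand-in for the runtime's Thunk, boxed here for simplicity.
enum Thunk<T> {
    Value(T),
    Bounce(Box<dyn FnOnce() -> Thunk<T>>),
}

impl<T> Thunk<T> {
    // The trampoline loop behind `.call()`.
    fn call(mut self) -> T {
        loop {
            match self {
                Thunk::Value(v) => return v,
                Thunk::Bounce(f) => self = f(),
            }
        }
    }
}

// The visible wrapper: same signature as the original function.
fn sum_to(n: u64, acc: u64) -> u64 {
    sum_to_builder(n, acc).call()
}

// The hidden builder: wrapped tail-call sites become bounces.
fn sum_to_builder(n: u64, acc: u64) -> Thunk<u64> {
    if n == 0 {
        Thunk::Value(acc)
    } else {
        Thunk::Bounce(Box::new(move || sum_to_builder(n - 1, acc + n)))
    }
}

fn main() {
    assert_eq!(sum_to(1_000_000, 0), 500_000_500_000);
}
```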
## Limitations

### Explicit Tail Calls

Tail-recursive transitions must be written with `tailcall::call!`.
Plain recursive calls are left alone, which means they still use the native Rust call stack.
### Simple Argument Patterns

`#[tailcall]` currently only supports simple identifier arguments.
Patterns in function parameters are not rewritten by the macro.
### `?` Is Not Supported

The `?` operator is not supported inside `#[tailcall]` functions on stable Rust.
Use `match` or explicit early returns instead.
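For example, an explicit `match` in place of `first.parse::<i64>()?` (a sketch; names are illustrative and the API is as assumed earlier):

```rust
use tailcall::tailcall;

#[tailcall]
fn sum_parsed(items: &[&str], acc: i64) -> Result<i64, std::num::ParseIntError> {
    match items.split_first() {
        None => Ok(acc),
        Some((first, rest)) => match first.parse::<i64>() {
            // With `?` unsupported, propagate the error explicitly.
            Err(e) => Err(e),
            Ok(v) => tailcall::call!(sum_parsed(rest, acc + v)),
        },
    }
}
```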
### Trait Methods
Methods in ordinary impl blocks are supported.
Trait methods are not supported.
### `async fn` and `const fn`

`#[tailcall]` does not support `async fn` or `const fn`.
### Closure Size Limit

Each deferred closure is stored in a fixed-size inline slot (~16 bytes).
Closures that exceed that size panic when the `Thunk` is constructed.
Macro-generated helper thunks are subject to the same limit, so functions with enough arguments or captured state can also exceed it.
## Development

For normal development, run the usual local checks: formatting, lints, and the test suite.
If you are changing the public docs or runtime internals, it is also worth doing a quick end-to-end smoke test from a fresh crate that depends on the published version from crates.io.
## Publishing

`tailcall` and `tailcall-impl` are released together.
- Update the shared workspace version in `Cargo.toml`. Also update the matching `tailcall` and `tailcall-impl` entries in `[workspace.dependencies]`.
- Run the release checks.
- Commit the release version bump.
- Publish `tailcall-impl` first.
- Publish `tailcall` after the proc-macro crate is available.
- Tag the release from `main` and push the commit and tag.
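As a sketch only, assuming a standard Cargo workspace (the project's actual release checklist may use different commands, and the version number below is illustrative):

```shell
cargo publish -p tailcall-impl   # proc-macro crate first
cargo publish -p tailcall        # then the facade crate
git tag v2.0.0                   # illustrative version
git push origin main v2.0.0
```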
## License
Tailcall is distributed under the terms of both the MIT license and the Apache License (Version 2.0).
See LICENSE-APACHE, LICENSE-MIT, and COPYRIGHT for details.