
Crate fp_library


§fp-library

A functional programming library for Rust featuring your favourite higher-kinded types and type classes.

§Motivation

Rust is a multi-paradigm language with strong functional programming features like iterators, closures, and algebraic data types. However, it lacks native support for Higher-Kinded Types (HKT), which limits the ability to write generic code that abstracts over type constructors (e.g., writing a function that works for any Monad, whether it’s Option, Result, or Vec).

fp-library aims to bridge this gap by providing:

  1. A robust encoding of HKTs in stable Rust.
  2. A comprehensive set of standard type classes (Functor, Monad, Traversable, etc.).
  3. Zero-cost abstractions that respect Rust’s performance characteristics.

§Examples

§Using Functor with Option

The brand is inferred automatically from the container type:

use fp_library::functions::*;

// Brand inferred from Option<i32>
let y = map(|i: i32| i * 2, Some(5));
assert_eq!(y, Some(10));

// Brand inferred from &Vec<i32> (by-reference dispatch)
let v = vec![1, 2, 3];
let y = map(|i: &i32| *i + 10, &v);
assert_eq!(y, vec![11, 12, 13]);

For types with multiple brands (e.g., Result), use the explicit variant:

use fp_library::{
	brands::*,
	functions::explicit::*,
};

let y = map::<ResultErrAppliedBrand<&str>, _, _, _, _>(|i| i * 2, Ok::<i32, &str>(5));
assert_eq!(y, Ok(10));

§Monadic Do-Notation with m_do!

The m_do! macro provides Haskell/PureScript-style do-notation for flat monadic code. It desugars <- binds into nested bind calls.

use fp_library::{brands::*, functions::*, m_do};

// Inferred mode: brand inferred from container types
let result = m_do!({
	x <- Some(5);
	y <- Some(x + 1);
	let z = x * y;
	Some(z)
});
assert_eq!(result, Some(30));

// Explicit mode: for ambiguous types or when pure() is needed
let result = m_do!(VecBrand {
	x <- vec![1, 2];
	y <- vec![10, 20];
	pure(x + y)
});
assert_eq!(result, vec![11, 21, 12, 22]);
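For reference, the first (inferred-mode) block above desugars to roughly this chain of nested binds, shown here with Option::and_then standing in for the library's bind:

```rust
// Plain-Rust sketch of the m_do! desugaring: each `<-` becomes a
// nested bind (Option::and_then), each `let` stays a plain binding.
fn main() {
    let result = Some(5).and_then(|x| {
        Some(x + 1).and_then(move |y| {
            let z = x * y;
            Some(z)
        })
    });
    assert_eq!(result, Some(30));
}
```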

§Features

§Higher-Kinded Types (HKT)

Implemented via lightweight higher-kinded polymorphism (brand pattern). Write generic code over Functor, Monad, Traversable, etc. that works with Option, Result, Vec, or your own types. Procedural macros (trait_kind!, impl_kind!, Apply!, InferableBrand!) simplify defining and applying HKT encodings. m_do! provides monadic do-notation; a_do! provides applicative do-notation. Both support a ref qualifier for by-reference dispatch and an inferred mode (m_do!({ ... })) where the brand is inferred from container types. See Higher-Kinded Types.

§Brand Inference

For types with a single unambiguous brand (Option, Vec, Identity, Thunk, Lazy, etc.), free functions like map, bind, fold_right, and bimap infer the brand automatically from the container type via InferableBrand traits. No turbofish needed:

use fp_library::functions::*;

let y = map(|x: i32| x + 1, Some(5));                     // infers OptionBrand
let z: Vec<i32> = bind(vec![1, 2], |x: i32| vec![x, x*10]); // infers VecBrand
let w = bimap((|e: i32| e+1, |s: i32| s*2), Ok::<i32, i32>(5)); // infers ResultBrand
assert_eq!(y, Some(6));
assert_eq!(z, vec![1, 10, 2, 20]);
assert_eq!(w, Ok(10));

Types with multiple brands (Result at arity 1, Tuple2, Pair, ControlFlow) require the explicit:: variants (explicit::map::<Brand, ...>) for arity-1 operations.

InferableBrand traits are auto-generated by trait_kind! and impl_kind!. The #[no_inferable_brand] attribute on impl_kind! suppresses generation for multi-brand types. See Brand Inference.

§Val/Ref Dispatch

Each free function routes to either a by-value or by-reference trait method based on the closure’s argument type. Closureless operations (alt, compact, etc.) dispatch based on container ownership instead. This means a single map function handles both map(|x: i32| x + 1, Some(5)) (owned, dispatches to Functor::map) and map(|x: &i32| *x + 1, &v) (borrowed, dispatches to RefFunctor::ref_map). See Val/Ref Dispatch.

§Zero-Cost Abstractions

Core operations use uncurried semantics with impl Fn for static dispatch and zero heap allocation. Dynamic dispatch (dyn Fn) is reserved for cases where functions must be stored as data (e.g., Semiapplicative::apply, Lazy thunks, Endofunction). See Zero-Cost Abstractions.

§Thread Safety

A parallel trait hierarchy (ParFunctor, ParFoldable, etc.) mirrors the sequential one with Send + Sync bounds. When the rayon crate feature is enabled, par_* functions use true parallel execution; without it, they fall back to sequential equivalents. See Thread Safety and Parallelism.

§Type Class Hierarchy

The library provides a comprehensive set of type classes. Blanket implementations automatically derive composite traits (Applicative, Monad, Comonad, Alternative, MonadPlus) from their components. For rationale behind the hierarchy design, see Architecture & Design.

graph TD
    Functor --> Alt --> Plus
    Functor --> Extend
    Extend --> Comonad
    Extract --> Comonad
    Functor --> Semiapplicative
    Lift --> Semiapplicative
    Lift --> ApplyFirst
    Lift --> ApplySecond
    Semiapplicative --> Applicative
    Pointed --> Applicative
    ApplyFirst --> Applicative
    ApplySecond --> Applicative
    Applicative --> Alternative
    Plus --> Alternative
    Applicative --> Monad
    Semimonad --> Monad
    Monad --> MonadPlus
    Alternative --> MonadPlus
    Monad --> MonadRec
    Foldable --> Traversable
    Functor --> Traversable
    Compactable --> Filterable
    Functor --> Filterable
    Filterable --> Witherable
    Traversable --> Witherable
    Contravariant

Contravariant is the dual of Functor: it maps functions contravariantly (Fn(B) -> A instead of Fn(A) -> B). It has no supertrait relationship with Functor in the trait hierarchy, but the two are connected through Profunctor: a profunctor with its second parameter fixed (ProfunctorSecondAppliedBrand) implements Contravariant, while a profunctor with its first parameter fixed (ProfunctorFirstAppliedBrand) implements Functor.
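A standalone illustration of contravariant mapping (the Predicate type here is hypothetical, used only to show the reversed direction; it is not fp-library's HKT encoding):

```rust
// Sketch: a contravariant container maps its *input* type.
// `Predicate` is illustrative, not a type from fp-library.
struct Predicate<A>(Box<dyn Fn(&A) -> bool>);

impl<A: 'static> Predicate<A> {
    // cmap takes Fn(&B) -> A and turns Predicate<A> into Predicate<B>:
    // note the reversed arrow compared to Functor's map.
    fn cmap<B: 'static>(self, f: impl Fn(&B) -> A + 'static) -> Predicate<B> {
        Predicate(Box::new(move |b| (self.0)(&f(b))))
    }
}

fn main() {
    let is_even = Predicate::<i32>(Box::new(|n: &i32| n % 2 == 0));
    // Adapt the i32 predicate to String by measuring string length.
    let has_even_len = is_even.cmap(|s: &String| s.len() as i32);
    assert!((has_even_len.0)(&"abcd".to_string()));
    assert!(!(has_even_len.0)(&"abc".to_string()));
}
```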

graph TD
    Bifunctor --> Bitraversable
    Bifoldable --> Bitraversable

graph TD
    Profunctor --> Strong --> Wander
    Profunctor --> Choice --> Wander
    Profunctor --> Closed
    Profunctor --> Costrong
    Profunctor --> Cochoice
    Category --> Arrow
    Strong --> Arrow

graph TD
    Semigroup --> Monoid
    Semigroupoid --> Category

Other: NaturalTransformation (polymorphic function F a -> G a between type constructors). Pipe (left-to-right function application via .pipe() method, blanket impl on all Sized types).
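The Pipe pattern can be sketched in a few lines (an illustrative reimplementation of the blanket-impl idea, not fp-library's exact definition):

```rust
// Left-to-right function application on every Sized type via a
// default method plus a blanket impl.
trait Pipe: Sized {
    fn pipe<B, F: FnOnce(Self) -> B>(self, f: F) -> B {
        f(self)
    }
}
impl<T: Sized> Pipe for T {}

fn main() {
    // Reads left to right instead of inside out.
    let n = 5.pipe(|x| x * 2).pipe(|x| x + 1);
    assert_eq!(n, 11);
}
```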

Indexed variants: FunctorWithIndex, FoldableWithIndex, TraversableWithIndex, FilterableWithIndex extend their base traits with a shared WithIndex associated index type.

Parallel variants: ParFunctor, ParFoldable, ParCompactable, ParFilterable, ParFunctorWithIndex, ParFoldableWithIndex, ParFilterableWithIndex mirror the sequential hierarchy with Send + Sync bounds. Enable the rayon feature for true parallel execution.

By-reference hierarchy: A full by-ref type class stack for memoized types and by-reference iteration over collections:

  • RefFunctor, RefPointed, RefLift, RefSemiapplicative, RefSemimonad, RefApplicative, RefMonad, RefApplyFirst, RefApplySecond
  • RefFoldable, RefTraversable, RefFilterable, RefWitherable
  • RefFunctorWithIndex, RefFoldableWithIndex, RefFilterableWithIndex, RefTraversableWithIndex

Thread-safe by-reference: SendRefFunctor, SendRefPointed, SendRefLift, SendRefSemiapplicative, SendRefSemimonad, SendRefApplicative, SendRefMonad, SendRefFoldable, SendRefFoldableWithIndex, SendRefFunctorWithIndex, SendRefApplyFirst, SendRefApplySecond.

Parallel by-reference: ParRefFunctor, ParRefFoldable, ParRefFilterable, ParRefFunctorWithIndex, ParRefFoldableWithIndex, ParRefFilterableWithIndex.

Laziness and effects: Deferrable, SendDeferrable for lazy construction. LazyConfig for memoization strategy abstraction.

§Optics

Composable data accessors using profunctor encoding (port of PureScript’s purescript-profunctor-lenses): Iso, Lens, Prism, AffineTraversal, Traversal, Getter, Setter, Fold, Review, Grate. Each has a monomorphic Prime variant. Indexed variants available for Lens, Traversal, Getter, Fold, Setter. Zero-cost composition via Composed and optics_compose. See Optics Comparison.

§Data Types

Standard library instances: Option, Result, Vec, String implement relevant type classes.

Lazy evaluation and stack safety (see Lazy Evaluation):

| Type | Purpose |
|---|---|
| Thunk / SendThunk | Lightweight deferred computation. |
| Trampoline | Stack-safe recursion via the Free monad. |
| Lazy (RcLazy, ArcLazy) | Memoized (evaluate-at-most-once) computation. |
| TryThunk / TrySendThunk | Fallible deferred computation. |
| TryTrampoline | Fallible stack-safe recursion. |
| TryLazy (RcTryLazy, ArcTryLazy) | Fallible memoized computation. |

Free functors (see Coyoneda Implementations):

| Type | Wrapper | Clone | Send | Map fusion |
|---|---|---|---|---|
| Coyoneda | Box | No | No | No (k calls) |
| RcCoyoneda | Rc | Yes | No | No (k calls) |
| ArcCoyoneda | Arc | Yes | Yes | No (k calls) |
| CoyonedaExplicit | None | No | Conditional | Yes (1 call) |

Containers: Identity, Pair, CatList (O(1) append/uncons catenable list).

Function wrappers: Endofunction (dynamically composed a -> a), Endomorphism (monoidally composed a -> a).

Pointer abstraction: Pointer, RefCountedPointer, SendRefCountedPointer abstract over Rc/Arc via FnBrand<P>, enabling generic code that works with either pointer type. CloneFn/SendCloneFn provide cloneable closure wrappers for applicative contexts. Arrow provides composable callable wrappers for the optics system. See Pointer Abstraction.

§Numeric Algebra

Semiring, Ring, CommutativeRing, EuclideanRing, DivisionRing, Field, HeytingAlgebra.

§Newtype Wrappers

Additive, Multiplicative, Conjunctive, Disjunctive, First, Last, Dual for selecting Semigroup/Monoid instances.

§Helper Functions

compose, constant, flip, identity, on, pipe.
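For illustration, simplified free-standing versions of two of these helpers (sketches; not fp-library's exact signatures):

```rust
// compose(f, g) applies g first, then f.
fn compose<A, B, C>(f: impl Fn(B) -> C, g: impl Fn(A) -> B) -> impl Fn(A) -> C {
    move |a| f(g(a))
}

// flip swaps the argument order of a binary function.
fn flip<A, B, C>(f: impl Fn(A, B) -> C) -> impl Fn(B, A) -> C {
    move |b, a| f(a, b)
}

fn main() {
    let add_then_double = compose(|x: i32| x * 2, |x: i32| x + 1);
    assert_eq!(add_then_double(3), 8); // (3 + 1) * 2

    let sub = |a: i32, b: i32| a - b;
    assert_eq!(flip(sub)(2, 10), 8); // sub(10, 2)
}
```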

§How it Works

§Higher-Kinded Types (HKT)

Since Rust doesn’t support HKTs directly (one cannot abstract over the unapplied type constructor Option, only over a concrete type like Option<T>), this library uses Lightweight Higher-Kinded Polymorphism (also known as the “Brand” pattern or type-level defunctionalization).

Each type constructor has a corresponding Brand type (e.g., OptionBrand for Option). These brands implement the Kind traits, which map the brand and generic arguments back to the concrete type. The library provides macros to simplify this process.

use fp_library::{
	impl_kind,
	kinds::*,
};

pub struct OptionBrand;

impl_kind! {
	#[no_inferable_brand]
	for OptionBrand {
		type Of<'a, A: 'a>: 'a = Option<A>;
	}
}
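Conceptually, the macros encode something like the following hand-written pattern (a simplified sketch without lifetimes; the actual generated traits and names differ):

```rust
// Hand-written brand pattern: a Kind trait maps a brand type plus
// generic arguments back to the concrete type it stands for.
trait Kind1 {
    type Of<A>;
}

struct OptionBrand;
impl Kind1 for OptionBrand {
    type Of<A> = Option<A>;
}

// A Functor-style trait written against the brand, not the concrete type.
trait Functor: Kind1 {
    fn map<A, B>(f: impl Fn(A) -> B, fa: Self::Of<A>) -> Self::Of<B>;
}

impl Functor for OptionBrand {
    fn map<A, B>(f: impl Fn(A) -> B, fa: Option<A>) -> Option<B> {
        fa.map(f)
    }
}

fn main() {
    let y = <OptionBrand as Functor>::map(|x: i32| x + 1, Some(5));
    assert_eq!(y, Some(6));
}
```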

§Brand Inference

The HKT encoding above provides a forward mapping from brands to concrete types (OptionBrand -> Option<A>), but the compiler has no way to go backwards: given Option<A>, it cannot determine that OptionBrand is the right brand. This means every free function call would require a turbofish annotation:

use fp_library::{brands::*, functions::explicit::*};

// Without brand inference, the brand must be specified explicitly
let y = map::<OptionBrand, _, _, _, _>(|x: i32| x + 1, Some(5));

Brand inference adds the reverse mapping (concrete type -> brand), letting the compiler infer the brand automatically:

use fp_library::functions::*;

// Brand inferred from Option<i32>
let y = map(|x: i32| x + 1, Some(5));
assert_eq!(y, Some(6));

§How it works

For each Kind_{hash} trait generated by trait_kind!, a corresponding InferableBrand_{hash} trait is also generated:

trait_kind!(type Of<'a, A: 'a>: 'a;)

// Generates both:

// Forward: Brand -> Type
trait Kind_cdc7cd43dac7585f {
    type Of<'a, A: 'a>: 'a;
}

// Reverse: Type -> Brand
trait InferableBrand_cdc7cd43dac7585f {
    type Brand: Kind_cdc7cd43dac7585f;
}

// Blanket ref impl
impl<T: InferableBrand_cdc7cd43dac7585f + ?Sized>
    InferableBrand_cdc7cd43dac7585f for &T
{
    type Brand = T::Brand;
}

When impl_kind! defines a brand, it generates both mappings:

impl_kind! {
    for OptionBrand {
        type Of<'a, A: 'a>: 'a = Option<A>;
    }
}

// Generates both:

// Forward
impl Kind_cdc7... for OptionBrand {
    type Of<'a, A: 'a>: 'a = Option<A>;
}

// Reverse
impl<A> InferableBrand_cdc7... for Option<A> {
    type Brand = OptionBrand;
}

The inference-based map function uses InferableBrand to resolve the brand from the container type FA:

pub fn map<FA, A, B, Marker>(
    f: impl FunctorDispatch<FA::Brand, A, B, FA, Marker>,
    fa: FA,
) -> ...
where
    FA: InferableBrand_cdc7...,
{
    f.dispatch(fa)
}

When the caller writes map(|x| x + 1, Some(5)), the compiler:

  1. Infers FA = Option<i32> from the second argument.
  2. Finds impl InferableBrand for Option<A> and resolves Brand = OptionBrand.
  3. Uses OptionBrand to select the correct FunctorDispatch impl.

The blanket impl InferableBrand for &T enables the same inference for borrowed containers: map(|x: &i32| *x + 1, &Some(5)) infers FA = &Option<i32>, delegates to Option<i32>’s impl, and resolves Brand = OptionBrand.

§Multi-brand types

Some types are reachable through multiple brands at a given arity. Result<A, E> has four arity-1 brands (ResultErrAppliedBrand<E>, ResultOkAppliedBrand<A>, and two bifunctor-derived brands), making arity-1 inference ambiguous. These types do not implement the arity-1 InferableBrand trait, so map(f, Ok(5)) produces a compile error with an actionable diagnostic:

error: `Result<i32, String>` does not have a unique brand and cannot use brand inference
  = note: use the `explicit::` variant with a turbofish to specify the brand manually

However, at arity 2, Result has exactly one brand (ResultBrand), so bimap((f, g), Ok(5)) infers the brand successfully. The ambiguity is arity-specific, not type-specific.

§The #[no_inferable_brand] attribute

impl_kind! generates InferableBrand impls by default. For multi-brand types, add #[no_inferable_brand] to suppress generation:

impl_kind! {
    #[no_inferable_brand]
    impl<E> for ResultErrAppliedBrand<E> {
        type Of<'a, A: 'a>: 'a = Result<A, E>;
    }
}

Generation is also automatically skipped when the target type is a projection (contains Apply! or ::) or when multiple associated types are defined.

§Relationship to Val/Ref dispatch

Brand inference determines which type constructor to use; Val/Ref dispatch determines which trait method (by-value or by-reference) to call. The two systems compose through the shared FA type parameter: brand inference resolves FA to a brand, while dispatch resolves FA’s ownership to a Val or Ref marker. Together they enable fully inferred calls like map(|x: i32| x + 1, Some(5)) with no turbofish and no explicit val/ref selection.

§Val/Ref Dispatch

The library has two parallel type class hierarchies: a by-value hierarchy (Functor, Semimonad, Foldable, etc.) where closures receive owned values, and a by-reference hierarchy (RefFunctor, RefSemimonad, RefFoldable, etc.) where closures receive borrowed references. This split exists because memoized types like Lazy can only lend references to their cached values, not give up ownership (see Limitations).

Rather than exposing two separate functions per operation (map and ref_map), the dispatch system provides a single unified function that routes to the correct trait based on the closure’s argument type:

use fp_library::functions::*;

// Closure takes i32 (owned) -> dispatches to Functor::map
let y = map(|x: i32| x * 2, Some(5));
assert_eq!(y, Some(10));

// Closure takes &i32 (borrowed) -> dispatches to RefFunctor::ref_map
let v = vec![1, 2, 3];
let y = map(|x: &i32| *x + 10, &v);
assert_eq!(y, vec![11, 12, 13]);

§How it works

Each operation has a dispatch trait with two blanket impls selected by a marker type (Val or Ref). The compiler resolves the marker from the closure’s argument type.

Using map as an example:

// The dispatch trait (simplified)
trait FunctorDispatch<Brand, A, B, FA, Marker> {
	fn dispatch(self, fa: FA) -> Brand::Of<B>;
}

// Val impl: closure takes owned A, container is owned
impl<Brand: Functor, A, B, F: Fn(A) -> B>
	FunctorDispatch<Brand, A, B, Brand::Of<A>, Val> for F
{
	fn dispatch(self, fa: Brand::Of<A>) -> Brand::Of<B> {
		Brand::map(self, fa)  // delegates to Functor::map
	}
}

// Ref impl: closure takes &A, container is borrowed
impl<Brand: RefFunctor, A, B, F: Fn(&A) -> B>
	FunctorDispatch<Brand, A, B, &Brand::Of<A>, Ref> for F
{
	fn dispatch(self, fa: &Brand::Of<A>) -> Brand::Of<B> {
		Brand::ref_map(self, fa)  // delegates to RefFunctor::ref_map
	}
}

When the caller writes map(|x: i32| x * 2, Some(5)):

  1. The closure type Fn(i32) -> i32 matches the Val impl (takes owned A).
  2. The compiler infers Marker = Val and FA = Option<i32>.
  3. dispatch delegates to Functor::map.

When the caller writes map(|x: &i32| *x + 10, &v):

  1. The closure type Fn(&i32) -> i32 matches the Ref impl (takes &A).
  2. The compiler infers Marker = Ref and FA = &Vec<i32>.
  3. dispatch delegates to RefFunctor::ref_map.

The FA type parameter is key: it appears in both the dispatch trait (to constrain the container) and in InferableBrand (to resolve the brand). This is how dispatch and brand inference compose through a single type variable. See Brand Inference for how the reverse mapping from concrete types to brands works.
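The whole mechanism can be reproduced in miniature (a standalone sketch hard-wired to Option; the library's dispatch traits are generic over brands):

```rust
// Marker-driven dispatch: one trait, two blanket impls, selected by
// the closure's argument type and the container's ownership.
struct Val;
struct Ref;

trait MapDispatch<A, B, FA, Marker> {
    fn dispatch(self, fa: FA) -> Option<B>;
}

// Val impl: closure takes owned A, container is owned.
impl<A, B, F: Fn(A) -> B> MapDispatch<A, B, Option<A>, Val> for F {
    fn dispatch(self, fa: Option<A>) -> Option<B> {
        fa.map(self)
    }
}

// Ref impl: closure takes &A, container is borrowed.
impl<'a, A, B, F: Fn(&A) -> B> MapDispatch<A, B, &'a Option<A>, Ref> for F {
    fn dispatch(self, fa: &'a Option<A>) -> Option<B> {
        fa.as_ref().map(self)
    }
}

// One unified entry point; the compiler infers Marker from the closure.
fn map<A, B, FA, Marker>(f: impl MapDispatch<A, B, FA, Marker>, fa: FA) -> Option<B> {
    f.dispatch(fa)
}

fn main() {
    assert_eq!(map(|x: i32| x * 2, Some(5)), Some(10)); // Val path
    let o = Some(3);
    assert_eq!(map(|x: &i32| *x + 10, &o), Some(13)); // Ref path
}
```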

§Closureless dispatch

Functions that take no closure (alt, compact, separate, join, apply_first, apply_second) use a variant where the container type itself drives dispatch instead of a closure’s argument type. Owned containers resolve to Val, borrowed containers resolve to Ref:

use fp_library::functions::*;

// Owned containers -> Alt::alt
let y = alt(None, Some(5));
assert_eq!(y, Some(5));

// Borrowed containers -> RefAlt::ref_alt
let a = vec![1, 2];
let b = vec![3, 4];
let y = alt(&a, &b);
assert_eq!(y, vec![1, 2, 3, 4]);

§Module structure

The dispatch system lives in fp-library/src/dispatch/, with one file per type class operation mirroring classes/. Each dispatch module contains the dispatch trait, Val/Ref impl blocks, the inference wrapper function, and an explicit submodule with the brand-explicit variant:

classes/functor.rs      -> Functor trait (by-value map)
classes/ref_functor.rs  -> RefFunctor trait (by-ref map)
dispatch/functor.rs     -> pub(crate) mod inner {
                              FunctorDispatch trait,
                              Val impl (Fn(A) -> B -> Functor::map),
                              Ref impl (Fn(&A) -> B -> RefFunctor::ref_map),
                              pub fn map (inference wrapper),
                              pub mod explicit { pub fn map (Brand turbofish) },
                           }
functions.rs            -> Re-exports: map (inference), explicit::map (dispatch)

The functions.rs module re-exports inference wrappers from crate::dispatch::* and explicit functions from crate::dispatch::*/explicit::*. There are no intermediate functions/*.rs source files.

§Relationship to thread safety and parallelism

The Val/Ref split is orthogonal to thread safety. The library has separate Send* and Par* trait hierarchies that add Send + Sync bounds for concurrent use. These axes combine independently: a type can implement RefFunctor (by-ref, thread-local), SendRefFunctor (by-ref, thread-safe), ParRefFunctor (by-ref, parallel), etc. See Thread Safety and Parallelism for details on the thread-safe and parallel trait hierarchies.

§Zero-Cost Abstractions & Uncurried Semantics

Unlike many functional programming libraries that strictly adhere to curried functions (e.g., map(f)(fa)), fp-library adopts uncurried semantics (e.g., map(f, fa)) for its core abstractions.

Why? Traditional currying in Rust often requires:

  • Creating intermediate closures for each partial application.
  • Heap-allocating these closures (boxing) or wrapping them in reference counters (Rc/Arc) to satisfy type system constraints.
  • Dynamic dispatch (dyn Fn), which inhibits compiler optimizations like inlining.

By using uncurried functions with impl Fn or generic bounds, fp-library achieves zero-cost abstractions:

  • No Heap Allocation: Operations like map and bind do not allocate intermediate closures.
  • Static Dispatch: The compiler can fully monomorphize generic functions, enabling aggressive inlining and optimization.
  • Ownership Friendly: Better integration with Rust’s ownership and borrowing system.

This approach ensures that using high-level functional abstractions incurs no runtime penalty compared to hand-written imperative code.

Exceptions: While the library strives for zero-cost abstractions, some operations inherently require dynamic dispatch or heap allocation due to Rust’s type system:

  • Functions as Data: When functions are stored in data structures (e.g., inside a Vec for Semiapplicative::apply, or in Lazy thunks), they must often be “type-erased” (wrapped in Rc<dyn Fn> or Arc<dyn Fn>). This is because every closure in Rust has a unique, anonymous type. To store multiple different closures in the same container, or to compose functions dynamically (like in Endofunction), they must be coerced to a common trait object.
  • Lazy Evaluation: The Lazy type relies on storing a thunk that can be cloned and evaluated later, which typically requires reference counting and dynamic dispatch.

For these specific cases, the library provides Brand types (like RcFnBrand and ArcFnBrand) to let you choose the appropriate wrapper (single-threaded vs. thread-safe) while keeping the rest of your code zero-cost. The library uses a unified Pointer hierarchy to abstract over these choices.
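The type-erasure point is visible in plain Rust (a std-only illustration; fp-library's function-wrapper brands package the same idea):

```rust
// Every closure has a unique anonymous type, so storing several in
// one container requires coercing them to a common trait object.
use std::rc::Rc;

fn main() {
    // These two closures have *different* concrete types; only as
    // Rc<dyn Fn> trait objects can they share a Vec.
    let fs: Vec<Rc<dyn Fn(i32) -> i32>> =
        vec![Rc::new(|x| x + 1), Rc::new(|x| x * 10)];

    let applied: Vec<i32> = fs.iter().map(|f| f(3)).collect();
    assert_eq!(applied, vec![4, 30]);
}
```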

§Pointer Abstraction & Shared Semantics

The library uses a unified pointer hierarchy to abstract over reference counting strategies (Rc vs Arc) and to enable shared memoization semantics for lazy evaluation.

§Pointer Hierarchy

graph TD
    Pointer["Pointer (Deref)"] --> RefCountedPointer["RefCountedPointer (+ Clone)"]
    RefCountedPointer --> SendRefCountedPointer["SendRefCountedPointer (+ Send + Sync)"]
    RcBrand -.implements.-> RefCountedPointer
    ArcBrand -.implements.-> SendRefCountedPointer

§Generic Function Brands

FnBrand<P> is parameterized over a RefCountedPointer brand P:

  • RcFnBrand is a type alias for FnBrand<RcBrand>.
  • ArcFnBrand is a type alias for FnBrand<ArcBrand>.

This allows a unified implementation of CloneFn while SendCloneFn is only implemented when P: SendRefCountedPointer. Both traits are parameterized by ClosureMode (Val or Ref), controlling whether the wrapped closure takes its input by value (Fn(A) -> B) or by reference (Fn(&A) -> B). The composable variant Arrow (for optics) adds Category + Strong supertraits. Code that is generic over the pointer brand can work with either Rc or Arc without duplication.

§Shared Memoization

Lazy uses a configuration trait (LazyConfig) to abstract over the underlying storage and synchronization primitives, ensuring shared memoization semantics across clones:

  • Lazy<'a, A, Config> is parameterized by a LazyConfig which defines the storage type.
  • RcLazy uses Rc<LazyCell> for single-threaded, shared memoization.
  • ArcLazy uses Arc<LazyLock> for thread-safe, shared memoization.

This ensures Haskell-like semantics where forcing one reference updates the value for all clones. See Lazy Evaluation for the full type overview, trade-offs, and decision guide.
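The shared semantics can be demonstrated with the std building blocks named above (a std-only sketch, not RcLazy's actual implementation; requires Rust 1.80+ for LazyCell):

```rust
// Clones of an Rc<LazyCell<...>> share one evaluate-at-most-once slot:
// forcing any clone caches the value for all of them.
use std::cell::{Cell, LazyCell};
use std::rc::Rc;

fn main() {
    let runs = Cell::new(0);
    let lazy = Rc::new(LazyCell::new(|| {
        runs.set(runs.get() + 1); // count how often the thunk runs
        21 * 2
    }));
    let clone = Rc::clone(&lazy);

    assert_eq!(**clone, 42); // first deref forces the thunk
    assert_eq!(**lazy, 42);  // already cached; thunk not re-run
    assert_eq!(runs.get(), 1); // evaluated exactly once, shared by clones
}
```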

§Design Rationale
  • Correctness: Ensures Lazy behaves correctly as a shared thunk rather than a value that is re-evaluated per clone.
  • Performance: Leverages standard library types (LazyCell, LazyLock) for efficient, correct-by-construction memoization.
  • Flexibility: Separates the concern of memoization (Lazy) from computation (Trampoline/Thunk), allowing users to choose the right tool for the job.

§Lazy Evaluation & Effect System

Rust is an eagerly evaluated language. To enable functional patterns like deferred execution and safe recursion, fp-library provides a granular set of types that let you opt-in to specific behaviors without paying for unnecessary overhead.

User stories:

  • Thunk / SendThunk: “I want to defer a computation and evaluate it later.”
  • Trampoline (Free): “I want to chain binds without blowing the stack.”
  • Lazy (RcLazy / ArcLazy): “I want to compute a value at most once and cache it.”
  • Try* variants: “I want any of the above, but the computation may fail.”

§Type Overview

The hierarchy consists of infallible computation types, fallible counterparts, and the Free monad infrastructure. Each type makes different trade-offs around stack safety, memoization, lifetimes, and thread safety.

| Type | Underlying | HKT | Stack Safe | Memoized | Lifetimes | Send |
|---|---|---|---|---|---|---|
| Thunk<'a, A> | Box<dyn FnOnce() -> A + 'a> | Yes (full) | Partial (tail_rec_m only) | No | 'a | No |
| SendThunk<'a, A> | Box<dyn FnOnce() -> A + Send + 'a> | No | Partial (tail_rec_m only) | No | 'a | Yes |
| Trampoline<A> | Free<ThunkBrand, A> | No | Yes | No | 'static | No |
| RcLazy<'a, A> | Rc<LazyCell<A, ...>> | Partial (RefFunctor) | N/A | Yes | 'a | No |
| ArcLazy<'a, A> | Arc<LazyLock<A, ...>> | Partial (SendRefFunctor) | N/A | Yes | 'a | Yes |
| TryThunk<'a, A, E> | Thunk<'a, Result<A, E>> | Yes (full) | Partial (tail_rec_m only) | No | 'a | No |
| TrySendThunk<'a, A, E> | SendThunk<'a, Result<A, E>> | No | Partial (tail_rec_m only) | No | 'a | Yes |
| TryTrampoline<A, E> | Trampoline<Result<A, E>> | No | Yes | No | 'static | No |
| RcTryLazy<'a, A, E> | Rc<LazyCell<Result<A, E>, ...>> | Partial (RefFunctor, Foldable) | N/A | Yes | 'a | No |
| ArcTryLazy<'a, A, E> | Arc<LazyLock<Result<A, E>, ...>> | Partial (SendRefFunctor, Foldable) | N/A | Yes | 'a | Yes |
| Free<F, A> | CatList-based “Reflection without Remorse” | No | Yes | No | 'static | No |

Config-dependent Send: ArcLazy/ArcTryLazy are Send + Sync; RcLazy/RcTryLazy are not. The LazyConfig trait abstracts over the underlying pointer and cell types; see Pointer Abstraction for how FnBrand<P> and LazyConfig generalize over Rc/Arc.

At a glance, the primary use cases are:

| Type | Primary Use Case |
|---|---|
| Thunk<'a, A> | Glue Code & Borrowing. Lightweight deferred computation. Best for short chains and working with references. |
| SendThunk<'a, A> | Thread-Safe Glue Code. Like Thunk, but the closure is Send. Enables truly lazy into_arc_lazy(). |
| Trampoline<A> | Deep Recursion & Pipelines. Heavy-duty computation. Uses a trampoline to guarantee stack safety for infinite recursion. |
| Lazy<'a, A> | Caching. Wraps a computation to ensure it runs at most once. RcLazy for single-threaded, ArcLazy for thread-safe. |

Each of these has a fallible counterpart that wraps Result<A, E> with ergonomic error-handling combinators (TryThunk, TrySendThunk, TryTrampoline, TryLazy).

§Supporting Traits

| Trait | Purpose | Implementors in hierarchy |
|---|---|---|
| Deferrable<'a> | Lazy construction from thunk | Thunk, SendThunk, Trampoline, RcLazy, ArcLazy, RcTryLazy, ArcTryLazy, TryThunk, TrySendThunk, Free<ThunkBrand, A> |
| SendDeferrable<'a> | Thread-safe lazy construction (extends Deferrable) | SendThunk, TrySendThunk, ArcLazy, ArcTryLazy |
| RefFunctor | Mapping with &A input | LazyBrand<RcLazyConfig>, TryLazyBrand<E, RcLazyConfig> |
| SendRefFunctor | Thread-safe mapping with &A input | LazyBrand<ArcLazyConfig>, TryLazyBrand<E, ArcLazyConfig> |
| LazyConfig | Infallible memoization strategy (pointer + cell choice) | RcLazyConfig, ArcLazyConfig |
| TryLazyConfig | Fallible memoization strategy (extends LazyConfig) | RcLazyConfig, ArcLazyConfig |

§The “Why” of Multiple Types

Unlike lazy languages (e.g., Haskell) where the runtime handles everything, Rust requires us to choose our trade-offs:

  1. Thunk vs Trampoline: Thunk is faster and supports borrowing (&'a T). Its tail_rec_m is stack-safe, but deep bind chains will overflow the stack. Trampoline guarantees stack safety for all operations via a trampoline (the Free monad) but requires types to be 'static. Note that !Send types like Rc<T> are fully supported. A key distinction is that Thunk implements Functor, Applicative, and Monad directly, making it suitable for generic programming, while Trampoline does not.

  2. Thunk vs SendThunk: Thunk wraps Box<dyn FnOnce() -> A + 'a> and is !Send. SendThunk wraps Box<dyn FnOnce() -> A + Send + 'a> and can cross thread boundaries. Use SendThunk when you need truly lazy into_arc_lazy() (converting to ArcLazy without eager evaluation), or when building deferred computation chains that will be consumed on another thread. TrySendThunk is the fallible counterpart.

  3. Computation vs Caching: Thunk and Trampoline describe computations that are not memoized. Each instance is consumed on .evaluate() (which takes self by value), so the computation runs exactly once per instance, but constructing a new instance re-executes the work. Lazy, by contrast, caches the result so that all clones share a single evaluation. If you have an expensive operation (like a DB call), convert it to a Lazy to guarantee it runs at most once.

  4. RefFunctor vs Functor: Lazy::evaluate() returns &A, not A. The standard Functor trait expects owned A, so automatically cloning would violate the library’s zero-cost abstraction principle. RefFunctor honestly represents what Lazy can do: mapping with &A -> B.

  5. LazyConfig vs TryLazyConfig: The memoization strategy is split into two traits. LazyConfig covers infallible memoization (pointer type, lazy cell, thunk type). TryLazyConfig extends it with fallible variants (TryLazy, TryThunk). Third-party implementations can choose to implement only LazyConfig when fallible memoization is not needed.
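To make trade-off 1 concrete, here is a minimal standalone trampoline (a sketch of the general technique only, not fp-library's Free-based Trampoline):

```rust
// Recursion returned as data and run in a loop: stack depth stays
// constant no matter how deep the "recursion" goes.
enum Step {
    Done(u64),
    More(Box<dyn FnOnce() -> Step>),
}

fn run(mut step: Step) -> u64 {
    loop {
        match step {
            Step::Done(v) => return v,
            Step::More(thunk) => step = thunk(), // one frame at a time
        }
    }
}

// Sum 0..=n without growing the call stack.
fn sum(n: u64, acc: u64) -> Step {
    if n == 0 {
        Step::Done(acc)
    } else {
        Step::More(Box::new(move || sum(n - 1, acc + n)))
    }
}

fn main() {
    // A directly recursive version of this would overflow the stack.
    assert_eq!(run(sum(1_000_000, 0)), 500_000_500_000);
}
```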

§Quick Decision Guide
  • Short deferred computation chains: Use Thunk.
  • Deep recursion or long pipelines: Use Trampoline.
  • Cache an expensive result (single-threaded): Use RcLazy.
  • Cache an expensive result (multi-threaded): Use ArcLazy.
  • Deferred computation that must be Send: Use SendThunk.
  • Any of the above, but fallible: Use the Try* counterpart.
  • Generic programming over lazy types: Use Deferrable / SendDeferrable trait bounds.

§Workflow Example: Expression Evaluator

A robust pattern is to use TryTrampoline for stack-safe, fallible recursion, TryLazy to memoize expensive results, and TryThunk to create lightweight views.

Consider an expression evaluator that handles division errors and deep recursion:

use fp_library::types::*;

#[derive(Clone)]
enum Expr {
	Val(i32),
	Add(Box<Expr>, Box<Expr>),
	Div(Box<Expr>, Box<Expr>),
}

// 1. Stack-safe recursion with error handling (TryTrampoline)
fn eval(expr: &Expr) -> TryTrampoline<i32, String> {
	let expr = expr.clone(); // Capture owned data for 'static closure
	TryTrampoline::defer(move || match expr {
		Expr::Val(n) => TryTrampoline::ok(n),
		Expr::Add(lhs, rhs) => eval(&lhs).bind(move |l| eval(&rhs).map(move |r| l + r)),
		Expr::Div(lhs, rhs) => eval(&lhs).bind(move |l| {
			eval(&rhs).bind(move |r| {
				if r == 0 {
					TryTrampoline::err("Division by zero".to_string())
				} else {
					TryTrampoline::ok(l / r)
				}
			})
		}),
	})
}

// Usage
fn main() {
	let expr = Expr::Div(Box::new(Expr::Val(100)), Box::new(Expr::Val(2)));

	// 2. Memoize result (TryLazy)
	// The evaluation runs at most once, even if accessed multiple times.
	let result = RcTryLazy::new(move || eval(&expr).evaluate());

	// 3. Create deferred view (TryThunk)
	// Borrow the cached result to format it.
	let view: TryThunk<String, String> = TryThunk::new(|| {
		let val = result.evaluate().map_err(|e| e.clone())?;
		Ok(format!("Result: {}", val))
	});

	assert_eq!(view.evaluate(), Ok("Result: 50".to_string()));
}

§Thread Safety and Parallelism

The library provides a parallel trait hierarchy that mirrors the sequential one. All par_* free functions accept plain impl Fn + Send + Sync closures: no wrapper types required. Element types require A: Send; closures require Send + Sync.

graph TD
    ParFunctor --> ParFilterable
    ParCompactable --> ParFilterable
    ParFunctor --> ParFunctorWithIndex
    ParFoldable --> ParFoldableWithIndex
    ParFilterable --> ParFilterableWithIndex
    ParFoldableWithIndex --> ParFilterableWithIndex

| Parallel trait | Operations | Supertraits |
|---|---|---|
| ParFunctor | par_map | Kind |
| ParCompactable | par_compact, par_separate | Kind |
| ParFilterable | par_filter_map, par_filter | ParFunctor + ParCompactable |
| ParFoldable | par_fold_map | Kind |
| ParFunctorWithIndex | par_map_with_index | ParFunctor + FunctorWithIndex |
| ParFoldableWithIndex | par_fold_map_with_index | ParFoldable + FoldableWithIndex |
| ParFilterableWithIndex | par_filter_map_with_index | ParFilterable + ParFoldableWithIndex + WithIndex |

ParFilterable provides default implementations of par_filter_map and par_filter derived from par_map + par_compact; types can override them for single-pass efficiency.
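As a sequential, std-only sketch (illustrative function names, not the crate's actual internals), that default derivation looks like this:

```rust
// Default par_filter_map derived from map followed by compact.
fn filter_map_via_map_compact<A, B>(f: impl Fn(A) -> Option<B>, v: Vec<A>) -> Vec<B> {
	// Step 1 (map): apply f to every element, yielding Vec<Option<B>>.
	let mapped: Vec<Option<B>> = v.into_iter().map(f).collect();
	// Step 2 (compact): discard the Nones, keeping the Some payloads.
	mapped.into_iter().flatten().collect()
}

fn main() {
	let v = vec![1, 2, 3, 4, 5];
	// Keep even numbers, doubling them.
	let out = filter_map_via_map_compact(|x| if x % 2 == 0 { Some(x * 2) } else { None }, v);
	assert_eq!(out, vec![4, 8]);
}
```

The two-pass shape is why overriding with a single-pass implementation can be more efficient: the intermediate `Vec<Option<B>>` allocation disappears.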

  • SendCloneFn: Thread-safe cloneable function wrappers with Send + Sync bounds. Implemented by ArcFnBrand.
  • Rayon Support: When the rayon feature is enabled, par_* functions use rayon for true parallel execution. Otherwise they fall back to sequential equivalents.

§Parallel By-Reference Traits

A parallel by-reference hierarchy mirrors the by-value parallel one, except that closures receive &A instead of consuming A. Elements require A: Send + Sync (rayon’s par_iter() needs &A: Send, which is equivalent to A: Sync).
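A std-only sketch of why the by-reference traits need A: Sync: sending a reference to another thread requires &A: Send, which Rust defines to hold exactly when A: Sync.

```rust
// Scoped threads may borrow `data` across threads because
// Vec<i32>: Sync implies &Vec<i32>: Send.
fn main() {
	let data = vec![1, 2, 3];
	let sum: i32 = std::thread::scope(|s| {
		let handle = s.spawn(|| data.iter().sum());
		handle.join().unwrap()
	});
	assert_eq!(sum, 6);
	// A non-Sync type such as std::rc::Rc<i32> could not be borrowed
	// this way: &Rc<i32> is not Send, so the spawn would not compile.
}
```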

graph TD
    ParRefFunctor --> ParRefFilterable
    ParCompactable --> ParRefFilterable
    ParRefFunctor --> ParRefFunctorWithIndex
    ParRefFoldable --> ParRefFoldableWithIndex
    ParRefFilterable --> ParRefFilterableWithIndex
    ParRefFoldableWithIndex --> ParRefFilterableWithIndex
| Par-Ref trait             | Operations                                              | Supertraits                              |
|---------------------------|---------------------------------------------------------|------------------------------------------|
| ParRefFunctor             | par_ref_map                                             | RefFunctor                               |
| ParRefFoldable            | par_ref_fold_map                                        | RefFoldable                              |
| ParRefFilterable          | par_ref_filter_map, par_ref_filter                      | ParRefFunctor + ParCompactable           |
| ParRefFunctorWithIndex    | par_ref_map_with_index                                  | ParRefFunctor + RefFunctorWithIndex      |
| ParRefFoldableWithIndex   | par_ref_fold_map_with_index                             | ParRefFoldable + RefFoldableWithIndex    |
| ParRefFilterableWithIndex | par_ref_filter_map_with_index, par_ref_filter_with_index | ParRefFilterable + RefFilterableWithIndex |

Key benefit: par_ref_filter_map(|x: &A| ...) avoids consuming elements that get filtered out. Only elements that survive the filter are transformed. Implemented for Vec (using par_iter()) and CatList (collecting to Vec for rayon).

use fp_library::{
	brands::*,
	functions::*,
};

let v = vec![1, 2, 3, 4, 5];
// Map in parallel (uses rayon if feature is enabled)
let doubled: Vec<i32> = par_map::<VecBrand, _, _>(|x: i32| x * 2, v.clone());
assert_eq!(doubled, vec![2, 4, 6, 8, 10]);
// Compact options in parallel
let opts = vec![Some(1), None, Some(3), None, Some(5)];
let compacted: Vec<i32> = par_compact::<VecBrand, _>(opts);
assert_eq!(compacted, vec![1, 3, 5]);
// Fold in parallel
let result = par_fold_map::<VecBrand, _, _>(|x: i32| x.to_string(), v);
assert_eq!(result, "12345".to_string());
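The by-reference filtering semantics can be sketched sequentially with std alone; `ref_filter_map` below is an illustrative stand-in for the crate's par_ref_filter_map, not its API:

```rust
// The closure receives &A, so filtered-out elements are never
// consumed or cloned; only survivors produce an owned B.
fn ref_filter_map<A, B>(f: impl Fn(&A) -> Option<B>, v: &[A]) -> Vec<B> {
	v.iter().filter_map(|x| f(x)).collect()
}

fn main() {
	let words = vec!["alpha".to_string(), "be".to_string(), "gamma".to_string()];
	// Keep only long words, uppercased; `words` remains usable afterwards.
	let long_upper = ref_filter_map(|w| (w.len() > 3).then(|| w.to_uppercase()), &words);
	assert_eq!(long_upper, vec!["ALPHA".to_string(), "GAMMA".to_string()]);
	assert_eq!(words.len(), 3); // input not consumed
}
```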

§Crate Features

  • rayon: Enables true parallel execution for par_* functions using the rayon library. Without this feature, par_* functions fall back to sequential equivalents.
  • serde: Enables serialization and deserialization support for pure data types using the serde library.
  • stacker: Enables adaptive stack growth for deep Coyoneda, RcCoyoneda, and ArcCoyoneda map chains via the stacker crate. Without this feature, deeply chained maps can overflow the stack.

Modules§

brands
Brands represent higher-kinded (unapplied/partially-applied) forms of types, as opposed to concrete types, which are fully-applied.
classes
Defines traits for common algebraic structures and functional abstractions, such as Functor, Applicative and Monad.
dispatch
Dispatch infrastructure for unified free functions that route to either by-value or by-reference trait methods based on the closure’s argument type.
functions
Contains generic, helper free functions and re-exports of free versions of type class functions.
kinds
Kind and InferableBrand traits for higher-kinded type simulation.
types
Concrete data types, their corresponding implementations and type aliases.

Macros§

Apply
Applies a brand to type arguments.
InferableBrand
Generates the name of an InferableBrand trait based on its signature.
Kind
Generates the name of a Kind trait based on its signature.
a_do
Applicative do-notation.
generate_function_re_exports
Generates re-exports for all public free functions in a directory.
generate_trait_re_exports
Generates re-exports for all public traits in a directory.
impl_kind
Implements a Kind trait for a brand.
m_do
Monadic do-notation.
trait_kind
Defines a new Kind trait.

Attribute Macros§

document_examples
Inserts a ### Examples heading and validates doc comment code blocks.
document_module
Orchestrates documentation generation for an entire module.
document_parameters
Generates documentation for a function’s parameters.
document_returns
Generates documentation for the return value of a function.
document_signature
Generates a Hindley-Milner style type signature for a function.
document_type_parameters
Generates documentation for type parameters.
kind
Adds a Kind supertrait bound to a trait definition.