Crate mula

Mula provides a way for several requests for a given computation to be serviced without duplicating the actual computation.

Imagine you have a REST API with an endpoint that triggers an expensive computation. If that endpoint receives more requests while the first one is still being computed, it would be nice to let those new requests wait in line for the computation to finish and then give them all the same result.

That is what Mula allows you to do.


The following example runs the computation only twice, once for each of the two distinct inputs. The two subscriptions for "burro" are both serviced by the same computation.

use mula::mula;

#[mula]
fn delayed_uppercase(input: &'static str) -> String {
    // Stand-in for an expensive computation.
    std::thread::sleep(std::time::Duration::from_secs(1));
    input.to_uppercase()
}

let thread1 = std::thread::spawn(move || {
    let upper = delayed_uppercase("mula");
    assert_eq!(upper, "MULA".to_string());
});

let thread2 = std::thread::spawn(move || {
    let upper = delayed_uppercase("burro");
    assert_eq!(upper, "BURRO".to_string());
});

let thread3 = std::thread::spawn(move || {
    let upper = delayed_uppercase("burro");
    assert_eq!(upper, "BURRO".to_string());
});

thread1.join().unwrap();
thread2.join().unwrap();
thread3.join().unwrap();


Re-exports

pub extern crate once_cell;



Structs

State tracker that allows for sharing of a specific computation.

Attribute Macros


mula
Makes the function run only once at a time for a given input. Any calls made with the same argument while the first is still running will block, wait for it to finish, and receive the same result once it is available.