Module adapton::macros

Macros to make using the engine module's interface more ergonomic.

Demand-driven change propagation

The example below demonstrates demand-driven change propagation, which is unique to Adapton's approach to incremental computation. The example constructs two mutable inputs, nom and den, an intermediate subcomputation div that divides the numerator in nom by the denominator in den, and a thunk check that first checks whether the denominator is zero (returning zero if so) and if non-zero, returns the value of the division.

use adapton::macros::*;
use adapton::engine::*;
manage::init_dcg();

// Two mutable inputs, for numerator and denominator of division
let num  = cell!(42);
let den  = cell!(2);

// In Rust, cloning is explicit:
let den2 = den.clone(); // clone _global reference_ to cell.
let den3 = den.clone(); // clone _global reference_ to cell, again.

// Two subcomputations: The division, and a check thunk with a conditional expression
let div   = thunk![ get!(num) / get!(den) ];
let check = thunk![ if get!(den2) == 0 { None } else { Some(get!(div)) } ];

// Observe output of `check` while we change the input `den`
// Step 1: (Explained in detail, below)
assert_eq!(get!(check), Some(21));

// Step 2: (Explained in detail, below)
set(&den3, 0);
assert_eq!(get!(check), None);

// Step 3: (Explained in detail, below)
set(&den3, 2);
assert_eq!(get!(check), Some(21));  // division is reused

The programmer's changes and observations in the last lines induce the following change propagation behavior:

  1. When the check is demanded the first time, it executes the condition, and den holds 2, which is non-zero. Hence, the else branch executes get!(div), which demands the output of the division, 21.

  2. After this first observation of check, the programmer changes den to 0, and re-demands the output of check. In response, change propagation first re-executes the condition (not the division), and the condition branches to the then branch, resulting in None; in particular, it does not re-demand the div node, though this node still exists in the DCG.

  3. Next, the programmer changes den back to its original value, 2, and re-demands the output of check. In response, change propagation re-executes the condition, which re-demands the output of div. Change propagation attempts to "clean" the div node before re-executing it. To do so, it compares its last observations of num and den to their current values, of 42 and 2, respectively. In so doing, it finds that these earlier observations match the current values. Consequently, it reuses the output of the division (21) without having to re-execute the division.
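The "cleaning" step in (3) can be approximated in plain Rust. The sketch below is hypothetical (it does not use Adapton's API): a thunk records the input values it last observed, and re-executes only when a current input differs from that record.

```rust
// Hypothetical sketch (not Adapton's API): a thunk that records the
// input values it last observed, and reuses its cached output when
// those observations still match the current inputs.
struct DivThunk {
    last_num: Option<i64>,
    last_den: Option<i64>,
    cached: Option<i64>,
    runs: usize, // counts actual re-executions
}

impl DivThunk {
    fn new() -> Self {
        DivThunk { last_num: None, last_den: None, cached: None, runs: 0 }
    }
    // Demand the output: "clean" by comparing last observations to the
    // current inputs; re-execute only on a mismatch.
    fn demand(&mut self, num: i64, den: i64) -> i64 {
        if self.last_num == Some(num) && self.last_den == Some(den) {
            return self.cached.unwrap(); // observations match: reuse
        }
        self.runs += 1; // observations changed: re-execute
        let out = num / den;
        self.last_num = Some(num);
        self.last_den = Some(den);
        self.cached = Some(out);
        out
    }
}

fn main() {
    let mut div = DivThunk::new();
    assert_eq!(div.demand(42, 2), 21); // step 1: executes the division
    // step 2: the division is not demanded at all (den == 0)
    assert_eq!(div.demand(42, 2), 21); // step 3: observations match, reused
    assert_eq!(div.runs, 1);           // the division ran only once
}
```

This captures only the input-comparison aspect of cleaning; the real engine additionally tracks the demand edges of the DCG, which is what lets it skip the comparison entirely in step 2.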

For a graphical illustration of this behavior, see these slides.

In the academic literature on Adapton, we refer to this three-step pattern as switching: The demand of div switches from being present (in step 1) to absent (in step 2) to present (in step 3).

Past work on self-adjusting computation does not support this switching pattern directly: Because of its change propagation semantics, it would "forget" the division in step 2, and rerun it from-scratch in step 3.

Furthermore, some other change propagation algorithms base their re-execution schedule on "node height" (of the graph's topological ordering). These algorithms may also have undesirable behavior. In particular, they may re-execute the division in step 2, though it is not presently in demand. For an example, see this gist.

Use force_map for finer-grained dependence tracking

Below, we show that using force_map prunes the dirtying phase of change propagation; this test traces the engine, counts the number of dirtying steps, and ensures that this count is zero, as expected.

use adapton::macros::*;
use adapton::engine::*;
use adapton::reflect;
manage::init_dcg();    

// Trace the behavior of change propagation; ensure dirtying works as expected
reflect::dcg_reflect_begin();

let pair  = cell!((1234, 5678));
let pair1 = pair.clone();

let t = thunk![{
  // Project the first component of pair:
  let fst = force_map(&pair, |_,x| x.0); 
  fst + 100
}];

// The output is `1234 + 100` = `1334`
assert_eq!(force(&t), 1334);

// Update the second component of the pair; the first is still 1234
set(&pair1, (1234, 8765));

// The output is still `1234 + 100` = `1334`
assert_eq!(force(&t), 1334);

// Assert that nothing was dirtied (due to using `force_map`)
let traces = reflect::dcg_reflect_end();
let counts = reflect::trace::trace_count(&traces, None);
assert_eq!(counts.dirty.0, 0);
assert_eq!(counts.dirty.1, 0);
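The effect of force_map can be mimicked in plain Rust. The sketch below is hypothetical (it does not use Adapton's API): the dependent computation records only the projected component of the pair, so a write that leaves that projection unchanged dirties nothing.

```rust
// Hypothetical sketch (not Adapton's API): a dependent computation that
// records only the *projection* of the pair it observed. Writing a new
// pair dirties the dependent only if the projected component changed.
struct ProjectedDep {
    last_fst: i32, // the projection observed via a force_map-like read
    dirty: bool,
}

fn set_pair(pair: &mut (i32, i32), dep: &mut ProjectedDep, new: (i32, i32)) {
    *pair = new;
    // Compare only the projected component, as the projection permits:
    if new.0 != dep.last_fst {
        dep.dirty = true;
    }
}

fn main() {
    let mut pair = (1234, 5678);
    let mut dep = ProjectedDep { last_fst: pair.0, dirty: false };

    // Update only the second component; the projection is unchanged.
    set_pair(&mut pair, &mut dep, (1234, 8765));
    assert!(!dep.dirty); // nothing dirtied, as in the trace counts above

    // Update the first component; now the projection changed.
    set_pair(&mut pair, &mut dep, (9999, 8765));
    assert!(dep.dirty);
}
```

With a plain force instead, the dependent would record the whole pair, and the first write would dirty it even though the output could not change.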

Nominal memoization: Toy Examples

Adapton offers nominal memoization, which uses first-class names (each of type Name) to identify cached computations and data. Behind the scenes, these names control how and when the engine overwrites cached data and computations. As such, they permit patterns of programmatic cache eviction.

For a simple illustration, we memoize several function calls to sum with different names and arguments. In real applications, the memoized function typically performs more work than summing two machine words. :)

use adapton::macros::*;
use adapton::engine::*;
use adapton::reflect;

// create an empty DCG (demanded computation graph)
manage::init_dcg();

// a simple function (memoized below for illustration purposes;
// probably actually not worth it!)
fn sum(x:usize, y:usize) -> usize {
    x + y
}

// Optional: Traces what the engine does below (for diagnostics, testing, illustration)
reflect::dcg_reflect_begin();

let nm_a_0 : Name = name_of_str("a"); // name "a"
let nm_a_1 : Name = name_of_str("a"); // name "a" (another copy)
let nm_b_0 : Name = name_of_str("b"); // name "b"
let nm_b_1 : Name = name_of_str("b"); // name "b" (another copy)

// create a memo entry, named "a", that remembers that `sum(42,43) = 85`
let res1 : usize = memo!(nm_a_0 =>> sum, x:42, y:43);

// same name "a", same arguments (42, 43) => reuses the memo entry above for `res1`
let res2 : usize = memo!(nm_a_1 =>> sum, x:42, y:43);

// different name "b", same arguments (42, 43) => will *not* match `res1`; creates a new entry
let res3 : usize = memo!(nm_b_0 =>> sum, x:42, y:43);

// same name "b", different arguments; will *overwrite* entry named "b" with new args & result
let res4 : usize = memo!(nm_b_1 =>> sum, x:55, y:66);

// Optional: Assert what happened above, in terms of analytical counts
let traces = reflect::dcg_reflect_end();
let counts = reflect::trace::trace_count(&traces, None);

// Editor allocated two thunks ("a" and "b")
assert_eq!(counts.alloc_fresh.0, 2);

// Editor allocated one thunk without changing it ("a", with same args)
assert_eq!(counts.alloc_nochange.0, 1);

// Editor allocated one thunk by changing it ("b", different args)
assert_eq!(counts.alloc_change.0, 1);

// Archivist allocated nothing
assert_eq!(counts.alloc_fresh.1, 0);

Some notes about the code above:

  • Callsite argument names: The macro memo! relies on programmer-supplied variable names in its macro expansion of these call sites, shown as x and y in the uses above. They can be chosen arbitrarily: so long as the symbols are distinct from one another, they need not match the function's formal argument names.

  • Type arguments: If the function call expects type arguments, memo! accommodates these calls with alternative syntax.

  • Spurious arguments: If the function call expects some later arguments that do not implement Eq, but are functionally determined by earlier ones that do (including the supplied Name), memo! accommodates these calls with alternative syntax. We call these arguments "spurious", since the Adapton engine does not check their identity when performing change propagation. Common examples include function values (e.g., anonymous closures).
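The overwrite behavior of nominal memoization can be approximated with a plain HashMap keyed by name. The sketch below is hypothetical (it is not Adapton's engine, and `memo_sum` is a made-up helper): a name maps to at most one entry, so reusing a name with changed arguments replaces the old entry rather than accumulating a new one.

```rust
use std::collections::HashMap;

// Hypothetical sketch (not Adapton's engine): a nominal memo table keyed
// by first-class names. Each name maps to at most one (arguments, result)
// entry; reusing a name with new arguments *overwrites* the old entry.
fn memo_sum(
    table: &mut HashMap<&'static str, ((usize, usize), usize)>,
    name: &'static str,
    x: usize,
    y: usize,
) -> usize {
    match table.get(name) {
        // Same name, same arguments: reuse the cached result.
        Some(&((cx, cy), res)) if (cx, cy) == (x, y) => res,
        // Fresh name, or same name with changed arguments:
        // (re-)compute and overwrite the entry for this name.
        _ => {
            let res = x + y;
            table.insert(name, ((x, y), res));
            res
        }
    }
}

fn main() {
    let mut table = HashMap::new();
    assert_eq!(memo_sum(&mut table, "a", 42, 43), 85);  // fresh entry "a"
    assert_eq!(memo_sum(&mut table, "a", 42, 43), 85);  // reuses entry "a"
    assert_eq!(memo_sum(&mut table, "b", 42, 43), 85);  // fresh entry "b"
    assert_eq!(memo_sum(&mut table, "b", 55, 66), 121); // overwrites "b"
    assert_eq!(table.len(), 2); // only two entries ever exist: "a" and "b"
}
```

This mirrors the trace counts asserted above: two fresh allocations, one no-change reuse, and one overwriting change. Unlike this sketch, the real engine also dirties and re-demands any computations that observed the overwritten entry.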

Structs

ProgPt

Program points: used by the Adapton engine to distinguish different memoized functions.

Rc

A single-threaded reference-counting pointer.

Functions

bump_name_counter

Convenience function: A global counter for creating unique names, e.g., in unit tests. Avoid using this outside of unit tests.