adapton/lib.rs
#![doc(html_logo_url = "https://raw.githubusercontent.com/cuplv/adapton-talk/master/logos/adapton-logo-bonsai.png",
       html_root_url = "https://docs.rs/adapton/")]
/*!

Adapton for Rust
================

This Rust implementation embodies the latest implementation of
[Adapton](http://adapton.org), which offers a foundational,
language-based semantics for [general-purpose incremental computation](https://en.wikipedia.org/wiki/Incremental_computing).
Programming model
--------------------

- The [documentation below](#adapton-programming-model) gives many
  illustrative examples, with pointers into the other Rust documentation.
- The [`engine` module](https://docs.rs/adapton/0/adapton/engine/index.html)
  gives the core programming interface.

Resources
---------------

- [Presentations and benchmark results](https://github.com/cuplv/adapton-talk#benchmark-results)
- [Fungi: A typed, functional language for programs that dynamically name their own dependency graphs](https://github.com/Adapton/fungi-lang.rust)
- [IODyn: Adapton collections, for algorithms with dynamic input and output](https://github.com/cuplv/iodyn.rust)

Background
---------------

Adapton proposes the _demanded computation graph_ (or **DCG**), and a
demand-driven _change propagation_ algorithm. Further, it proposes
first-class _names_ for identifying cached data structures and
computations. For a quick overview of the role of names in incremental
computing, we give [background on incremental computing with names](#background-incremental-computing-with-names), below.

The following academic papers detail these technical proposals:

- **DCG, and change propagation**: [_Adapton: Composable, demand-driven incremental computation_, **PLDI 2014**](http://www.cs.umd.edu/~hammer/adapton/).
- **Nominal memoization**: [_Incremental computation with names_, **OOPSLA 2015**](http://arxiv.org/abs/1503.07792).
- **Type and effect structures**: The draft [_Typed Adapton: Refinement types for incremental computation with precise names_](https://arxiv.org/abs/1610.00097).

Why Rust?
----------

Adapton's first implementations used Python and OCaml; the latest
implementation, in Rust, offers the best performance thus far, since (1)
Rust is fast, and (2) [traversal-based garbage collection presents
performance challenges for incremental
computation](http://dl.acm.org/citation.cfm?doid=1375634.1375642). By
liberating Adapton from traversal-based collection, [our empirical
results](https://github.com/cuplv/adapton-talk#benchmark-results) are
both predictable and scalable.

Adapton programming model
==========================

**Adapton roles**: Adapton proposes _editor_ and _archivist_ roles:

 - The **Editor role** _creates_ and _mutates_ input, and _demands_ the
   output of incremental computations in the **Archivist role**.

 - The **Archivist role** consists of **Adapton thunks**, each of which
   is a cached computation that consumes incremental input and produces
   incremental output.

**Examples:** The examples below illustrate these roles, in increasing complexity:

 - [Start the DCG engine](#start-the-dcg-engine)
 - [Create incremental cells](#create-incremental-cells)
 - [Observe `Art`s](#observe-arts)
 - [Mutate input cells](#mutate-input-cells)
 - [Demand-driven change propagation](#demand-driven-change-propagation) and [switching](#switching)
 - [Memoization](#memoization)
 - [Create thunks](#create-thunks)
 - [Use `force_map` for more precise dependencies](#use-force_map-for-more-precise-dependencies)
 - [Nominal memoization](#nominal-memoization)
 - [Nominal cycles](#nominal-cycles)
 - [Nominal firewalls](#nominal-firewalls)

**Programming primitives:** The following list of primitives covers
the core features of the Adapton engine. Each primitive below is
meaningful in each of the two roles, editor and archivist:

 - **Ref cell allocation**: Mutable input (editor role), and cached data structures that change across runs (archivist role).
   - [**`cell!`**](https://docs.rs/adapton/0/adapton/macro.cell.html) -- Preferred version
   - [`let_cell!`](https://docs.rs/adapton/0/adapton/macro.let_cell.html) -- Useful in simple examples
   - [`engine::cell`](https://docs.rs/adapton/0/adapton/engine/fn.cell.html) -- Engine's raw interface
 - **Observation** and **demand**: Both editor and archivist roles.
   - [**`get!`**](https://docs.rs/adapton/0/adapton/macro.get.html) -- Preferred version
   - [`engine::force`](https://docs.rs/adapton/0/adapton/engine/fn.force.html) -- Engine's raw interface
   - [`engine::force_map`](https://docs.rs/adapton/0/adapton/engine/fn.force_map.html) -- A variant for observations that compose before projections
 - **Thunk allocation**: Both editor and archivist roles.
   - Thunk allocation, **_without_ demand**:
     - [**`thunk!`**](https://docs.rs/adapton/0/adapton/macro.thunk.html) -- Preferred version
     - [`let_thunk!`](https://docs.rs/adapton/0/adapton/macro.let_thunk.html) -- Useful in simple examples
     - [`engine::thunk`](https://docs.rs/adapton/0/adapton/engine/fn.thunk.html) -- Engine's raw interface (can be cumbersome)
   - Thunk allocation, **_with_ demand**:
     - [**`memo!`**](https://docs.rs/adapton/0/adapton/macro.memo.html) -- Preferred version
     - [`let_memo!`](https://docs.rs/adapton/0/adapton/macro.let_memo.html) -- Useful in simple examples

Start the DCG engine
=====================

The call `init_dcg()` below initializes a DCG-based engine, replacing
the `Naive` default engine.

```
#[macro_use] extern crate adapton;
use adapton::macros::*;
use adapton::engine::*;

fn main() {
    manage::init_dcg();

    // Put example code below here
# let c : Art<usize> = cell!( 123 );
# assert_eq!( get!(c), 123 );
}
```

# Create incremental cells

Commonly, the input and intermediate data of Adapton computations
consists of named reference `cell`s. A reference `cell` is one
variety of `Art`; [`thunk`s](#create-thunks) are another.

## Implicit `cell` names

Behind the scenes, the (editor's) invocation `cell!(123)` uses an
imperative global counter to choose a unique name to hold the number
`123`. After each use, it increments this global counter, ensuring
that each such number is used at most once.

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let c : Art<usize> = cell!( 123 );

assert_eq!( get!(c), 123 );
# }
```

## Naming strategy: *Global counter*

Using a global counter to name cells, as above, _may_ be appropriate
for the Editor role, but is _never appropriate for the Archivist
role_, since this global counter is too sensitive to global
(often-changing) properties, such as an index into the sequence of all
allocations, globally.

To replace this global counter, the Archivist may give names
*explicitly*, as shown in various forms, below.


## Explicitly named `cell`s

### Names via Rust "identifiers"

Sometimes we name a cell using a Rust identifier. We specify this
case using the notation `[ name ]`, which specifies that the cell's
name is a string, constructed from the Rust identifier `name`:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let c : Art<usize> = cell!([c] 123);

assert_eq!(get!(c), 123);
# }
```

### Optional `Name`s

Most generally, we supply an expression `optional_name` of type
`Option<Name>` to specify the name for the `Art`. This `Art` is
created by either `cell` or `put`, in the case that `optional_name` is
`Some(name)` or `None`, respectively:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let n : Name = name_of_str(stringify!(c));
let c : Art<usize> = cell!([Some(n)]? 123);

assert_eq!(get!(c), 123);

let c = cell!([None]? 123);

assert_eq!(get!(c), 123);
# }
```

# Observe `Art`s

The macro `get!` is sugar for `engine::force`, applied to a reference
introduced with `&`:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let c : Art<usize> = cell!(123);

assert_eq!( get!(c), force(&c) );
# }
```

Since the type `Art<T>` classifies both `cell`s and
[`thunk`s](#create-thunks), the operations `force` and `get!` can be
used interchangeably on `Art<T>`s that arise as `cell`s or `thunk`s.

Mutate input cells
=========================

One may mutate cells explicitly, or _implicitly_; implicit mutation is common in Nominal Adapton.

The editor (implicitly or explicitly) mutates cells that hold input,
and re-demands the output of the archivist's computations. During
change propagation, the archivist mutates cells with implicit
mutation.

**Implicit mutation uses nominal allocation**: By allocating a cell
with the same name, one may _overwrite_ cells with new content:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let n : Name = name_of_str(stringify!(c));
let c : Art<usize> = cell!([Some(n.clone())]? 123);

assert_eq!(get!(c), 123);

// Implicit mutation (re-use cell by name `n`):
let d : Art<usize> = cell!([Some(n)]? 321);

assert_eq!(d, c);
assert_eq!(get!(c), 321);
assert_eq!(get!(d), 321);
# }
```

**No names implies no effects**: Using `None` to allocate cells always
gives distinct cells, with no overwriting:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();

let c = cell!([None]? 123);
let d = cell!([None]? 321);

assert_eq!(get!(c), 123);
assert_eq!(get!(d), 321);
# }
```

**Explicit mutation, via `set`**: If one wants mutation to be totally
explicit, one may use `set`:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let n : Name = name_of_str(stringify!(c));
let c : Art<usize> = cell!([Some(n)]? 123);

assert_eq!(get!(c), 123);

// Explicit mutation (overwrites cell `c`):
set(&c, 321);

assert_eq!(get!(c), 321);
# }
```

Demand-driven change propagation
=================================

The example below demonstrates _demand-driven change propagation_,
which is unique to Adapton's DCG, and its approach to incremental
computation. The example DCG below consists of two kinds of nodes:

- [Cells](#create-incremental-cells) consist of data that changes over
  time, including (but not limited to) incremental input.

- [Thunks](#create-thunks) consist of computations whose observations
  and results are cached in the DCG.

The simple example below uses two mutable input cells, `num` and
`den`, whose values are used by an intermediate subcomputation `div`
that divides the numerator in `num` by the denominator in `den`, and a
thunk `check` that first checks whether the denominator is zero
(returning `None` if so) and, if non-zero, returns the value of the
division:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
#
// Two mutable inputs, for numerator and denominator of division
let num = cell!(42);
let den = cell!(2);

// In Rust, cloning is explicit:
let den2 = den.clone(); // clone _global reference_ to cell.
let den3 = den.clone(); // clone _global reference_ to cell, again.

// Two subcomputations: The division, and a check thunk with a conditional expression
let div = thunk![ get!(num) / get!(den) ];
let check = thunk![ if get!(den2) == 0 { None } else { Some(get!(div)) } ];
# }
```

After allocating `num`, `den` and `check`, the programmer changes `den`
and observes `check`, inducing the following change propagation
behavior. In sum, _whether_ `div` runs is based on _demand_ from the
Editor (of the output of `check`), _and_ the value of input cell
`den`, via the condition in `check`:

1. When the thunk `check` is demanded for the first time, Adapton
   executes the condition, and cell `den` holds `2`, which is non-zero.
   Hence, the `else` branch executes `get!(div)`, which demands the
   output of the division, `21`.

2. After this first observation of `check`, the programmer changes cell
   `den` to `0`, and re-demands the output of thunk `check`. In
   response, Adapton's change propagation algorithm first re-executes
   the condition (not the division), and the condition branches to the
   `then` branch, resulting in `None`; in particular, it does _not_
   re-demand the `div` node, though this node still exists in the DCG.

3. Next, the programmer changes `den` back to its original value, `2`,
   and re-demands the output of `check`. In response, change
   propagation re-executes the condition, which re-demands the output
   of `div`. Change propagation attempts to "clean" the `div` node
   before re-executing it. To do so, it compares its _last
   observations_ of `num` and `den` to their current values, of `42`
   and `2`, respectively. In so doing, it finds that these earlier
   observations match the current values. Consequently, it _reuses_
   the output of the division (`21`) _without_ having to re-execute
   the division.

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
#
# // Two mutable inputs, for numerator and denominator of division
# let num = cell!(42);
# let den = cell!(2);
#
# // In Rust, cloning is explicit:
# let den2 = den.clone(); // clone _global reference_ to cell.
# let den3 = den.clone(); // clone _global reference_ to cell, again.
#
# // Two subcomputations: The division, and a check thunk with a conditional expression
# let div = thunk![ get!(num) / get!(den) ];
# let check = thunk![ if get!(den2) == 0 { None } else { Some(get!(div)) } ];
#
// Observe output of `check` while we change the input `den`
// Editor Step 1: (Explained in detail, below)
assert_eq!(get!(check), Some(21));

// Editor Step 2: (Explained in detail, below)
set(&den3, 0);
assert_eq!(get!(check), None);

// Editor Step 3: (Explained in detail, below)
set(&den3, 2);
assert_eq!(get!(check), Some(21)); // division is reused
# }
```

[Slides with illustrations](https://github.com/cuplv/adapton-talk/blob/master/adapton-example--div-by-zero/)
of the graph structure and the code side-by-side may help:

**Editor Step 1**

<img src="https://raw.githubusercontent.com/cuplv/adapton-talk/master/adapton-example--div-by-zero/Adapton_Avoiddivbyzero_10.png"
   alt="Slide-10" style="width: 800px;"/>

**Editor Steps 2 and 3**

<img src="https://raw.githubusercontent.com/cuplv/adapton-talk/master/adapton-example--div-by-zero/Adapton_Avoiddivbyzero_12.png"
   alt="Slide_12" style="width: 200px;"/>
<img src="https://raw.githubusercontent.com/cuplv/adapton-talk/master/adapton-example--div-by-zero/Adapton_Avoiddivbyzero_16.png"
   alt="Slide_16" style="width: 200px;"/>
<img src="https://raw.githubusercontent.com/cuplv/adapton-talk/master/adapton-example--div-by-zero/Adapton_Avoiddivbyzero_17.png"
   alt="Slide-17" style="width: 200px;"/>
<img src="https://raw.githubusercontent.com/cuplv/adapton-talk/master/adapton-example--div-by-zero/Adapton_Avoiddivbyzero_23.png"
   alt="Slide-23" style="width: 200px;"/>

[Full-sized slides](https://github.com/cuplv/adapton-talk/blob/master/adapton-example--div-by-zero/)

In sum, _whether_ `div` runs is based on _demand_ from the Editor (of
`check`), _and_ the value of input `den`. The reuse of `div`
illustrates the _switching pattern_, which is unique to Adapton's
approach to incremental computation.

Switching
-----------

In the [academic literature on Adapton](http://matthewhammer.org/adapton/),
we refer to the three-step
pattern of change propagation illustrated above as _switching_:

1. [The demand of `div` switches from being present (in step 1)](https://github.com/cuplv/adapton-talk/tree/master/adapton-example--div-by-zero#initial-graph-after-initial-demand-due-to-1st-get),
2. [to absent (in step 2)](https://github.com/cuplv/adapton-talk/tree/master/adapton-example--div-by-zero#updated-graph-after-first-cleaning-phase-due-to-2nd-get),
3. [to present (in step 3)](https://github.com/cuplv/adapton-talk/tree/master/adapton-example--div-by-zero#updated-graph-after-second-cleaning-phase-due-to-3rd-get).

Past work on self-adjusting computation does not support the
switching pattern directly: Because of its change propagation
semantics, it would "forget" the division in step 2, and rerun it
_from scratch_ in step 3.

Furthermore, some other change propagation algorithms base their
re-execution schedule on "node height" (of the graph's topological
ordering). These algorithms may also have undesirable behavior. In
particular, they may re-execute the division `div` in step 2, though
it is not presently in demand. For an example, see [this
gist](https://gist.github.com/khooyp/98abc0e64dc296deaa48).

Memoization
============

Memoization provides a mechanism for caching the results of
subcomputations; it is a critical feature of Adapton's approach to
incremental computation.

In Adapton, each _memoization point_ has three ingredients:

- A function expression (of type `Fn`)

- Zero or more arguments. Each argument type must have an
  implementation for the traits `Eq + Clone + Hash + Debug`. The
  traits `Eq` and `Clone` are both critical to Adapton's caching and
  change propagation engine. The trait `Hash` is required when
  Adapton's naming strategy is _structural_ (e.g., where function
  names are based on the hashes of their arguments). The trait
  `Debug` is useful for debugging, and reflection.

- An optional _name_, which identifies the function call for reuse later.

  - When this optional name is `None`, the memoization point may be
    treated in one of two ways: either as just an ordinary, uncached
    function call, or as a cached function call that is identified
    _structurally_, by its function pointer and arguments. Adapton
    permits structural subcomputations via the engine's
    [structural](https://docs.rs/adapton/0/adapton/engine/fn.structural.html)
    function.

  - When this is `Some(name)`, the memoization point uses `name` to
    identify the work performed by the function call, and its
    result. Critically, in future incremental runs, it is possible
    for `name` to associate with different functions and/or argument
    values.

Each memoization point yields two results:

- A [thunk](#create-thunks) articulation, of type `Art<Res>`, where
  `Res` is the result type of the function expression.

- A result value of type `Res`, which is also cached at the articulation.


Optional name version
----------------------

The following form is preferred:

`memo!( [ optional_name ]? fnexp ; lab1 : arg1, ..., labk : argk )`

It accepts an optional name, of type `Option<Name>`, and an arbitrary
function expression `fnexp` (closure or function pointer). Like the
other forms, it requires that the programmer label each argument.

Example
-------

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let (t,z) : (Art<usize>, usize) =
    memo!([Some(name_unit())]?
          |x:usize,y:usize|{ if x > y { x } else { y }};
          x:10, y:20 );

assert_eq!(z, 20);
# }
```

[More examples of `memo!` macro](https://docs.rs/adapton/0/adapton/macro.memo.html#memoization)

Create thunks
===============

**Thunks** consist of suspended computations whose observations,
allocations and results are cached in the DCG, when `force`d. Each
thunk has type `Art<Res>`, where `Res` is the return type of the thunk's
suspended computation.

Each [_memoization point_](#memoization) is merely a _forced thunk_.
We can also create thunks without demanding them.

The following form is preferred:

`thunk!( [ optional_name ]? fnexp ; lab1 : arg1, ..., labk : argk )`

It accepts an optional name, of type `Option<Name>`, and an arbitrary
function expression `fnexp` (closure or function pointer). Like the
other forms, it requires that the programmer label each argument.

Example
-------

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# manage::init_dcg();
let t : Art<usize> =
    thunk!([ Some(name_unit()) ]?
           |x:usize,y:usize|{ if x > y { x } else { y }};
           x:10, y:20 );

assert_eq!(get!(t), 20);
# }
```

[More examples of `thunk!` macro](https://docs.rs/adapton/0/adapton/macro.thunk.html#thunks)

Use `force_map` for more precise dependencies
==============================================

Suppose that we want to project only one field of type `A` from a pair
within an `Art<(A,B)>`. If the field of type `B` changes, our
observation of the `A` field will not be affected.

Below, we show that using `force_map` prunes the dirtying phase of
change propagation. Doing so means that computations that would
otherwise be dirtied and cleaned via re-execution are never dirtied in
the first place. We show a simple example of projecting a pair.

To observe this fact, this test traces the engine, counts the number
of dirtying steps, and ensures that this count is zero, as expected.

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# use adapton::reflect;
# manage::init_dcg();
#
// Trace the behavior of change propagation; ensure dirtying works as expected
reflect::dcg_reflect_begin();

let pair = cell!((1234, 5678));
let pair1 = pair.clone();

let t = thunk![{
    // Project the first component of pair:
    let fst = force_map(&pair, |_,x| x.0);
    fst + 100
}];

// The output is `1234 + 100` = `1334`
assert_eq!(get!(t), 1334);

// Update the second component of the pair; the first is still 1234
set(&pair1, (1234, 8765));

// The output is still `1234 + 100` = `1334`
assert_eq!(get!(t), 1334);

// Assert that nothing was dirtied (due to using `force_map`)
let traces = reflect::dcg_reflect_end();
let counts = reflect::trace::trace_count(&traces, None);
assert_eq!(counts.dirty.0, 0);
assert_eq!(counts.dirty.1, 0);
# }
```


Nominal memoization
=========================

Adapton offers **nominal memoization**, which uses first-class _names_
(each of type `Name`) to identify cached computations and data.

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# use adapton::reflect;
#
# // create an empty DCG (demanded computation graph)
# manage::init_dcg();
#
fn sum(x:usize, y:usize) -> usize {
    x + y
}

// create a memo entry, named `a`, that remembers that `sum(42,43) = 85`
let res1 : usize = get!(thunk!([a] sum; x:42, y:43));
# }
```

Behind the scenes, the name `a` controls how and when the Adapton engine
_overwrites_ the cached computation of `sum`. As such, names permit
patterns of programmatic _cache eviction_.

The macro `memo!` relies on programmer-supplied variable names in its
macro expansion of these call sites, shown as `x` and `y` in the uses
above. These can be chosen arbitrarily: So long as these symbols are
distinct from one another, they can be _any_ symbols, and need not
actually match the formal argument names.

**Example as Editor role**

For a simple illustration, we memoize several function calls to `sum`
with different names and arguments. In real applications, the
memoized function typically performs more work than summing two
machine words. :)

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# use adapton::reflect;
# manage::init_dcg();
# fn sum(x:usize, y:usize) -> usize {
#     x + y
# }
#
// Optional: Traces what the engine does below (for diagnostics, testing, illustration)
reflect::dcg_reflect_begin();

// create a memo entry, named `a`, that remembers that `sum(42,43) = 85`
let res1 : usize = get!(thunk!([a] sum; x:42, y:43));

// same name `a`, same arguments (42, 43), Adapton reuses cached result
let res2 : usize = get!(thunk!([a] sum; x:42, y:43));

// different name `b`, same arguments (42, 43), Adapton re-computes `sum` for `b`
let res3 : usize = get!(thunk!([b] sum; x:42, y:43));

// same name `b`, different arguments, editor overwrites thunk `b` with new args
let res4 : usize = get!(thunk!([b] sum; x:55, y:66));
# }
```

Below we confirm the following facts:

- The Editor:
  - allocated two thunks (`a` and `b`),
  - allocated one thunk without changing it (`a`, with the same arguments),
  - allocated one thunk by changing it (`b`, with different arguments).
- The Archivist allocated nothing.

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
# use adapton::reflect;
#
# // create an empty DCG (demanded computation graph)
# manage::init_dcg();
#
# // a simple function (memoized below for illustration purposes;
# // probably actually not worth it!)
# fn sum(x:usize, y:usize) -> usize {
#     x + y
# }
#
# // Optional: Traces what the engine does below (for diagnostics, testing, illustration)
# reflect::dcg_reflect_begin();
#
# // create a memo entry, named `a`, that remembers that `sum(42,43) = 85`
# let res1 : usize = get!(thunk!([a] sum; x:42, y:43));
#
# // same name `a`, same arguments (42, 43) => reuse cached result
# let res2 : usize = get!(thunk!([a] sum; x:42, y:43));
#
# // different name `b`, same arguments (42, 43) => recomputes `sum` for `b`
# let res3 : usize = get!(thunk!([b] sum; x:42, y:43));
#
# // same name `b`, different arguments; *overwrite* `b` with new args & result
# let res4 : usize = get!(thunk!([b] sum; x:55, y:66));
#
// Optional: Assert what happened above, in terms of analytical counts
let traces = reflect::dcg_reflect_end();
let counts = reflect::trace::trace_count(&traces, None);

// Editor allocated two thunks (`a` and `b`)
assert_eq!(counts.alloc_fresh.0, 2);

// Editor allocated one thunk without changing it (`a`, with same args)
assert_eq!(counts.alloc_nochange.0, 1);

// Editor allocated one thunk by changing it (`b`, different args)
assert_eq!(counts.alloc_change.0, 1);

// Archivist allocated nothing
assert_eq!(counts.alloc_fresh.1, 0);
# drop((res1,res2,res3,res4));
# }
```

Nominal Cycles
===================

In many settings, we explore structures that contain cycles, and it is
useful to use Adapton's DCG mechanism to detect such cycles.


Example problem: Recursive computation over a directed graph
---------------------------------------------------------------

As a tiny example, consider the following graph, defined as a table of
adjacencies:

```
// Node | Adjacency pair
//      | (two outgoing edges to other nodes):
// -----+-------------------------------------
//  0   | (1, 0)
//  1   | (2, 3)
//  2   | (3, 0)
//  3   | (3, 1)
//  4   | (2, 5)
//  5   | (5, 4)
```

This is a small arbitrary directed graph, and it has several cycles
(e.g., `0 --> 0`, `3 --> 3`, `0 --> 1 --> 2 --> 0`). It also has
distinct strongly-connected components (SCCs), e.g., the one involving
`0` versus the one involving `4`.

**Problem statement:** Suppose that we wish to explore this graph, to
build a list (or `Vec`) with all of the edges that it contains.

**Desired solution program:**
Consider the simple (naive) recursive exploration logic, defined
as `explore_rec` below. The problems with this logic are that

1. **Repeated work**: `explore_rec` re-explores some sub-graphs multiple times, and
2. **Divergence**: `explore_rec` diverges on graphs with cycles.

To address the first problem, we can leverage the DCG, which performs
function caching. To address the second problem, the algorithm needs
a mechanism to detect cycles.

In terms of DCG evaluation, we can detect a cycle if we can remember
and check whether we are "currently" visiting a node (on the
recursive call stack) before we evaluate that node recursively.

Regardless of how we detect the cycle, we wish to do something
different (other than recur).

### DCG cycles: Detection and valuation

Rather than implement this cycle-detection mechanism directly, we can
use Adapton's DCG, which operates behind the scenes. Specifically, we
can use the engine operation [`force_cycle`](https://docs.rs/adapton/0/adapton/engine/fn.force_cycle.html)
to specify a "cycle value" for the result of a thunk `t` when `t` is forcing itself, or when `t`
is forcing another thunk `s` that transitively forces `t`.

In either case, the force operation that forms the cycle in the DCG
evaluates to this programmer-specified cycle value, rather than
diverging, or using the cached value at the thunk, which generally is
not sound (e.g., it may be stale, from a prior run).

### Example cycle valuation

Notice that in `explore` below, we use `get!(_, vec![])` on `at` and
`bt` instead of `get!(_)`. This macro uses `force_cycle` in its
expansion. The empty vector gives the cycle value for when this force
forms a cycle in the DCG.

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
// Define the graph, following the table above
fn adjs (n:usize) -> (usize, usize) {
    match n {
        0 => (1, 0),
        1 => (2, 3),
        2 => (3, 0),
        3 => (3, 1),
        4 => (2, 5),
        5 => (5, 4),
        _ => unimplemented!()
    }
}

// This version will diverge on all of the cycles (e.g., 3 --> 3)
#[warn(unconditional_recursion)]
fn explore_rec(cur_n:usize) -> Vec<usize> {
    let (a,b) = adjs(cur_n);
    let mut av = explore_rec(a);
    let mut bv = explore_rec(b);
    let mut res = vec![cur_n];
    res.append(&mut av);
    res.append(&mut bv);
    res
}

// This version will not diverge; it gives an empty vector value
// as "cycle output" when it performs each `get!`. Hence, when
// Adapton detects a cycle, it will not re-force this thunk
// cyclically, but rather return this predetermined "cycle output"
// value. For non-cyclic calls, the `get!` ignores this value, and
// works in the usual way.
fn explore(cur_n:usize) -> Vec<usize> {
    let (a,b) = adjs(cur_n);
    let at = explore_thunk(a);
    let bt = explore_thunk(b);
    let mut av = get!(at, vec![]);
    let mut bv = get!(bt, vec![]);
    let mut res = vec![cur_n];
    res.append(&mut av);
    res.append(&mut bv);
    res
}
fn explore_thunk(cur_n:usize) -> Art<Vec<usize>> {
    thunk!([Some(name_of_usize(cur_n))]? explore ; n:cur_n)
}

adapton::engine::manage::init_dcg();
assert_eq!(get!(explore_thunk(0)), vec![0,1,2,3,3])
# }
```

Nominal Firewalls
===================

Nominal firewalls use nominal allocation to dirty the DCG
incrementally, _while change propagation cleans it_.

In some situations (Run 2, below), these firewalls prevent dirtying
from cascading, leading to finer-grained dependency tracking, and more
incremental reuse. Thanks to
[@nikomatsakis](https://github.com/nikomatsakis) for suggesting the
term "firewall" in this context.

First, consider this graph, as Rust code (graph picture below):

Example: nominal firewall
-------------------------

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
fn demand_graph(a: Art<i32>) -> String {
    let_memo!{
        d =(f)= {
            let a = a.clone();
            let_memo!{ b =(g)={ let x = get!(a); cell!([b] x * x) };
                       c =(h)={ format!("{:?}", get!(b)) };
                       c }};
        d }
}
# drop(demand_graph) }
```

The use of `let_memo!` is [convenient sugar](#let_memo-example) for `thunk!` and `force`.
This code induces DCGs with the following structure:

```
/*                 +---- Legend ------------------+
 cell a            | [ 2 ]   ref cell holding 2   |
 [ 2 ]  "Nominal   | (g)     thunk named 'g'      |
   ^     firewall" | ---->   force/observe edge   |
   | force |       | --->>   allocation edge      |
   | 2    \|/      +------------------------------+
   |       `
   |  g allocs b    cell    g forces b    When cell a changes, g is dirty, h is not;
   |  to hold 4      b      observes 4    in this sense, cell b _firewalls_ h from g:
  (g)------------->>[ 4 ]<--------------(h)  <~~ note that h does not observe cell a, or g.
   ^                                     ^
   | f forces g                          | f forces h,
   | g returns cell b                    | returns String "4"
   |                                     |
  (f)------------------------------------+
   ^
   | force f,
   | returns String "4"
   |
(demand_graph(a)) */
```
926
927In this graph, the ref cell `b` acts as the "firewall".
928
929Below, we show a particular input change for cell `a` where a
930subcomputation `h` is never dirtied nor cleaned by change propagation
931(input change 2 to -2). We show another change to the same input where
932this subcomputation `h` *is* _eventually_ dirtied and cleaned by
933Adapton, though not immediately (input change -2 to 3).
934
935Here's the Rust code for generating this DCG, and these changes to its
936input cell, named `"a"`:
937
```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
#
# fn demand_graph(a: Art<i32>) -> String {
#     let_memo!{
#         d =(f)= {
#             let a = a.clone();
#             let_memo!{ b =(g)={ let x = get!(a); cell!([b] x * x) };
#                        c =(h)={ format!("{:?}", get!(b)) };
#                        c }};
#         d }
# }
#
# manage::init_dcg();
#
// 1. Initialize input cell "a" to hold 2, and do the computation illustrated above:
assert_eq!(demand_graph(let_cell!{a = 2; a}), "4".to_string());

// 2. Change input cell "a" to hold -2, and do the computation illustrated above:
assert_eq!(demand_graph(let_cell!{a = -2; a}), "4".to_string());

// 3. Change input cell "a" to hold 3, and do the computation illustrated above:
assert_eq!(demand_graph(let_cell!{a = 3; a}), "9".to_string());
# }
```

**Run 1.** In the first computation, the input cell `a` holds 2, and
the final result is `"4"`.

**Run 2.** When the input cell `a` changes, e.g., from 2 to -2, thunks
`f` and `g` are dirtied. Thunk `g` is dirty because it observes the
changed input. Thunk `f` is dirty because it demanded (observed) the
output of thunk `g` in the extent of its own computation.

_Importantly, thunk `h` is *not* immediately dirtied when cell `a`
changes._ In a sense, cell `a` is an indirect ("transitive") input to
thunk `h`. This fact may suggest that when cell `a` is changed from 2
to -2, we should dirty thunk `h` immediately. However, thunk `h` is
related to this input only by reading ref cell `b`.

Rather, when the editor re-demands thunk `f`, Adapton will necessarily
perform a cleaning process (aka, "change propagation"), re-executing
`g`, its immediate dependent, which is dirty. Since thunk `g` merely
squares its input, and 2 and -2 both square to 4, the output of thunk
`g` will not change in this case. Consequently, the observers of cell
`b`, which holds this output, will not be dirtied or re-executed. In
this case, thunk `h` is this observer. In situations like these,
Adapton's dirtying and cleaning algorithms neither dirty nor clean
thunk `h`.

In sum, under this change, after `f` is re-demanded, the cleaning
process will first re-execute `g`, the immediate observer of cell `a`.
Thunk `g` will again allocate cell `b` to hold 4, the same value as
before. It also yields this same cell pointer (to cell `b`).
Consequently, thunk `f` is not re-executed, and is cleaned.
Meanwhile, the outgoing (dependency) edges of thunk `h` are never
dirtied. Effectively, the work of `h` is reused from cache as well.

Alternatively, if we had placed the code for `format!("{:?}",get!(b))`
in thunk `f`, Adapton _would_ have re-executed this step when `a`
changes from `2` to `-2`: It would be dirtied when `a` changes, since
it directly observes `g`, which directly observes cell `a`.

**Run 3.** For some other change, e.g., from 2 to 3, thunk `h` will
_eventually_ be

 - dirtied, when `f` re-demands `g`, which will overwrite cell `b` with `9`,
 - and cleaned, when `f` re-demands `h`, which will `format!` a new `String` of `"9"`.


`let_memo!` example
----------------------------

The [use of the `let_memo!` macro above](#example-nominal-firewall) expands as follows:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
fn demand_graph__mid_macro_expansion(a: Art<i32>) -> String {
    let f = thunk!([f]{
        let a = a.clone();
        let g = thunk!([g]{ let x = get!(a);
                            cell!([b] x * x) });
        let b = force(&g);
        let h = thunk!([h]{ let x = get!(b);
                            format!("{:?}", x) });
        let c = force(&h);
        c });
    let d = force(&f);
    d
}
# }
```

Incremental sequences
========================

A _level tree_ consists of a binary tree with levels that decrease
monotonically along each path to its leaves.
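
One common way to choose levels (a construction from the literature on probabilistically balanced trees, not an API of this crate; `level_of` below is a hypothetical helper) is the number of trailing zero bits in an element's hash, which yields level `l` with probability `2^-(l+1)`, much like a skip list:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical helper: choose a tree level for an element from its
// hash, as the number of trailing zero bits. Levels are deterministic
// per element, and small in expectation (~1).
fn level_of<X: Hash>(elm: &X) -> u32 {
    let mut h = DefaultHasher::new();
    elm.hash(&mut h);
    h.finish().trailing_zeros()
}

fn main() {
    // The same element always gets the same level:
    assert_eq!(level_of(&42u32), level_of(&42u32));
    // Levels stay small on average (expected value is about 1):
    let avg: f64 = (0..1000u32).map(|i| level_of(&i) as f64).sum::<f64>() / 1000.0;
    assert!(avg < 4.0);
}
```

Determinism matters here: because the level is a function of the element, re-running the construction on a slightly changed input reproduces the same tree shape away from the change.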

Here, we implement incremental level trees by including `Name`s and
`Art`s in the tree structure, with two additional constructors for the
recursive type, `Rec<X>`:

```
# #[macro_use] extern crate adapton;
# fn main() {
# use adapton::macros::*;
# use adapton::engine::*;
use std::fmt::Debug;
use std::hash::Hash;

#[derive(Clone,PartialEq,Eq,Debug,Hash)]
enum Rec<X> {
    Bin(BinCons<X>),
    Leaf(LeafCons<X>),
    Name(NameCons<X>),
    Art(Art<Rec<X>>),
}

#[derive(Clone,PartialEq,Eq,Debug,Hash)]
struct LeafCons<X> {
    elms:Vec<X>,
}

#[derive(Clone,PartialEq,Eq,Debug,Hash)]
struct BinCons<X> {
    level:u32,
    recl:Box<Rec<X>>,
    recr:Box<Rec<X>>
}

#[derive(Clone,PartialEq,Eq,Debug,Hash)]
struct NameCons<X> {
    level:u32,
    name:Name,
    rec:Box<Rec<X>>,
}
# }
```

Example: Nominal memoization and recursion
--------------------------------------------

**Introduction forms:**

```
# #[macro_use] extern crate adapton;
# fn main() {
#
# use std::fmt::Debug;
# use std::hash::{Hash};
# use adapton::macros::*;
# use adapton::engine::*;
#
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# struct BinCons<X> {
#     level:u32,
#     recl:Box<Rec<X>>,
#     recr:Box<Rec<X>>
# }
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# struct NameCons<X> {
#     level:u32,
#     name:Name,
#     rec:Box<Rec<X>>,
# }
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# struct LeafCons<X> {
#     elms:Vec<X>,
# }
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# enum Rec<X> {
#     Leaf(LeafCons<X>),
#     Bin(BinCons<X>),
#     Name(NameCons<X>),
#     Art(Art<Rec<X>>),
# }
impl<X:'static+Clone+PartialEq+Eq+Debug+Hash>
    Rec<X> {

pub fn leaf(xs:Vec<X>) -> Self {
    Rec::Leaf(LeafCons{elms:xs})
}
pub fn bin(lev:u32, l:Self, r:Self) -> Self {
    Rec::Bin(BinCons{level:lev,recl:Box::new(l),recr:Box::new(r)})
}
pub fn name(lev:u32, n:Name, r:Self) -> Self {
    Rec::Name(NameCons{level:lev,name:n,rec:Box::new(r)})
}
fn art(a:Art<Rec<X>>) -> Self {
    Rec::Art(a)
}
# }
# }
```

**Elimination forms:** Folds use `memo!` to create and `force` thunks:

```
# #[macro_use] extern crate adapton;
# fn main() {
#
# use std::fmt::Debug;
# use std::hash::{Hash};
# use adapton::macros::*;
# use adapton::engine::*;
#
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# struct BinCons<X> {
#     level:u32,
#     recl:Box<Rec<X>>,
#     recr:Box<Rec<X>>
# }
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# struct NameCons<X> {
#     level:u32,
#     name:Name,
#     rec:Box<Rec<X>>,
# }
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# struct LeafCons<X> {
#     elms:Vec<X>,
# }
# #[derive(Clone,PartialEq,Eq,Debug,Hash)]
# enum Rec<X> {
#     Leaf(LeafCons<X>),
#     Bin(BinCons<X>),
#     Name(NameCons<X>),
#     Art(Art<Rec<X>>),
# }
# impl<X:'static+Clone+PartialEq+Eq+Debug+Hash>
#     Rec<X>
# {
# pub fn leaf(xs:Vec<X>) -> Self {
#     Rec::Leaf(LeafCons{elms:xs})
# }
# pub fn bin(lev:u32, l:Self, r:Self) -> Self {
#     Rec::Bin(BinCons{level:lev,recl:Box::new(l),recr:Box::new(r)})
# }
# pub fn name(lev:u32, n:Name, r:Self) -> Self {
#     Rec::Name(NameCons{level:lev,name:n,rec:Box::new(r)})
# }
# fn art(a:Art<Rec<X>>) -> Self {
#     Rec::Art(a)
# }
#
pub fn fold_monoid<B:'static+Clone+PartialEq+Eq+Debug+Hash>
    (t:Rec<X>, z:X, b:B,
     bin:fn(B,X,X)->X,
     art:fn(Art<X>,X)->X)
     -> X
{
    fn m_leaf<B:Clone,X>(m:(B,fn(B,X,X)->X,X), elms:Vec<X>) -> X {
#       let mut x = m.2;
#       for elm in elms {
#           x = m.1(m.0.clone(), x, elm)
#       };
#       x
        // ...
    }
    fn m_bin<B,X>(_n:Option<Name>, m:(B,fn(B,X,X)->X,X), _lev:u32, l:X, r:X) -> X {
        m.1(m.0, l, r)
    }
    Self::fold_up_namebin::<(B,fn(B,X,X)->X,X),
                            (B,fn(B,X,X)->X,X),X>
        (t, (b.clone(),bin,z.clone()), m_leaf,
         None, (b,bin,z), m_bin, art)
}

fn fold_up_namebin
    <L:'static+Clone+PartialEq+Eq+Debug+Hash,
     B:'static+Clone+PartialEq+Eq+Debug+Hash,
     R:'static+Clone+PartialEq+Eq+Debug+Hash>
    (t:Rec<X>,
     l:L, leaf:fn(L,Vec<X>)->R,
     n:Option<Name>, b:B,
     namebin:fn(Option<Name>,B,u32,R,R)->R,
     art:fn(Art<R>,R)->R)
     -> R
{
    match t {
        Rec::Art(a) => Self::fold_up_namebin(get!(a), l, leaf, n, b, namebin, art),
        Rec::Leaf(leafcons) => leaf(l, leafcons.elms),
        Rec::Bin(bincons) => {
            let (n1,n2) = forko!(n.clone());
            let r1 = memo!([n1]? Self::fold_up_namebin;
                           t:*bincons.recl,
                           l:l.clone(), leaf:leaf, n:None, b:b.clone(), namebin:namebin, art:art);
            let r1 = art(r1.0,r1.1);
            let r2 = memo!([n2]? Self::fold_up_namebin;
                           t:*bincons.recr,
                           l:l.clone(), leaf:leaf, n:None, b:b.clone(), namebin:namebin, art:art);
            let r2 = art(r2.0,r2.1);
            namebin(n, b, bincons.level, r1, r2)
        }
        Rec::Name(namecons) => {
            Self::fold_up_namebin(
                *namecons.rec,
                l, leaf, Some(namecons.name), b, namebin, art
            )
        }
    }
}
# }}
```


Background: Incremental Computing with Names
=============================================

We explain the role of names in incremental computing.

## Pointer locations in incremental computing

Suppose that we have a program that we wish to run repeatedly on
similar (but changing) inputs, and that this program constructs a
dynamic data structure as output. To cache this computation,
including its output, we generally require caching some of its
function calls, their results, and whatever allocations are relevant
to represent these results, including the final output structure.
Furthermore, to quickly test for input and output changes (in `O(1)`
time per "change"), we would like to _store-allocate_ input and output,
and use allocated _pointer locations_ (globally-unique "names") to
compare structures, giving a cheap, conservative approximation of
structural equality.
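
As a plain-Rust illustration of this last point (using `std::rc` shared pointers rather than Adapton's store; the `List` type is an assumption for this sketch), comparing pointer locations takes `O(1)` time, and pointer equality conservatively implies structural equality:

```rust
use std::rc::Rc;

// A list whose tails are shared pointers. Comparing two tails by
// pointer *location* is O(1); structural equality may be O(n).
#[derive(Debug, PartialEq)]
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn main() {
    let tail = Rc::new(List::Cons(2, Rc::new(List::Nil)));
    let l1 = List::Cons(1, tail.clone());
    let l2 = List::Cons(1, tail.clone());

    // O(1): the two lists share the same tail location ...
    if let (List::Cons(_, t1), List::Cons(_, t2)) = (&l1, &l2) {
        assert!(Rc::ptr_eq(t1, t2)); // cheap, conservative check
    }
    // ... which implies (but costs less than) structural equality:
    assert_eq!(l1, l2);
}
```

The check is conservative in one direction only: distinct locations may still hold structurally equal values.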

## Deterministic allocation

The first role of explicit names for incremental computing concerns
_deterministic pointer allocation_, which permits us to give a
meaningful definition to _cached_ allocation. To understand this
role, consider these two evaluation rules:

```
//  l ∉ dom(σ)                                n ∉ dom(σ)
// ---------------------------- :: alloc_1   ------------------------------- :: alloc_2
//  σ; cell(v) ⇓ σ{l↦v}; ref l                σ; cell[n](v) ⇓ σ{n↦v}; ref n
```
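
To see why rule `alloc_2` makes caching meaningful, here is a plain-Rust sketch (not Adapton's engine; the `Store` type, with names as ordinary `String`s, is an assumption of this sketch) of programmer-named allocation: re-allocating at the same name deterministically yields the same location, so anything cached at that location remains addressable across runs:

```rust
use std::collections::HashMap;

// A toy store mapping programmer-chosen names to values (rule alloc_2).
// Unlike fresh-pointer allocation (rule alloc_1), allocation here is a
// *function* of the name, so repeating a run repeats the locations.
struct Store {
    cells: HashMap<String, i32>,
}

impl Store {
    fn new() -> Self { Store { cells: HashMap::new() } }

    // `cell[n](v)`: allocate (or overwrite) the cell named `n`.
    fn cell(&mut self, n: &str, v: i32) -> String {
        self.cells.insert(n.to_string(), v);
        n.to_string() // the "pointer" is the name itself
    }
}

fn main() {
    let mut run1 = Store::new();
    let mut run2 = Store::new();
    // Two runs allocate the *same* location for name "a",
    // even though the content at that name differs between runs:
    let p1 = run1.cell("a", 2);
    let p2 = run2.cell("a", -2);
    assert_eq!(p1, p2);
}
```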

Each rule is of the judgement form `σ1; e ⇓ σ2; v`, where `σ1` and
`σ2` are stores that map pointers to thunks and values, `e` is an
expression to evaluate, and `v` is its valuation.

The left rule is conventional: it allocates a value `v` at a store
location `l`; because the program does not determine `l`, the
implementor of this rule has the freedom to choose `l` any way that
they wish. Consequently, this program is not deterministic, and not a
function. Hence, it is not immediately obvious what it means to
_cache_ this kind of dynamic allocation using a technique like
_function caching_, or techniques based on it.

To address this question, the programmer can determine a name `n`
for the value `v`, as in the right rule. The point of this version is
to expose the naming choice directly to the programmer.

### Structural names

In some systems, the programmer chooses this name as the hash value of
the value `v`. This naming style is often called "hash-consing". We
refer to it as _structural naming_, since by using it, the name of
each ref cell reflects the _entire structure_ of that cell's
content. (The structural-hashing approach is closely related to Merkle
trees, the basis for revision-control systems like `git`.)
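
A minimal hash-consing sketch in plain Rust (an illustration, not this crate's API; the `Interner` type is an assumption of this sketch): values are interned in a table keyed by their content, so structurally equal values share a single location:

```rust
use std::collections::HashMap;
use std::rc::Rc;

// A toy hash-consing table: the "name" of each cell is its content,
// so structurally equal values are allocated exactly once.
struct Interner {
    table: HashMap<Vec<i32>, Rc<Vec<i32>>>,
}

impl Interner {
    fn new() -> Self { Interner { table: HashMap::new() } }

    fn intern(&mut self, v: Vec<i32>) -> Rc<Vec<i32>> {
        self.table.entry(v.clone()).or_insert_with(|| Rc::new(v)).clone()
    }
}

fn main() {
    let mut i = Interner::new();
    let a = i.intern(vec![1, 2, 3]);
    let b = i.intern(vec![1, 2, 3]);
    // Structural equality now coincides with pointer equality:
    assert!(Rc::ptr_eq(&a, &b));
    // ... but any change to the content yields a *different* name:
    let c = i.intern(vec![1, 2, 4]);
    assert!(!Rc::ptr_eq(&a, &c));
}
```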

### Independent names

By contrast, in a _nominal_ incremental system, the programmer
generally chooses `n` to be related to the _evaluation context of
using `v`_, and often, to be _independent_ of the value `v` itself.
We give one example of such a naming strategy below.

## Nominal independence

Names augment programs to permit memoization and change propagation to
exploit independence among dynamic dependencies. Specifically, the
name value `n` is (generally) unrelated to the content value `v`.
In many incremental applications, its role is analogous to that of
location `l` in the left rule, where `l` is only related to `v` by the
final store. When pointer name `n` is independent of pointer
content `v`, we say this name, via the store, affords the program
_nominal independence_.

Suppose that in a subsequent incremental run, value `v` changes to
`v2` at name `n`. The pointer name `n` localizes this change,
preventing any larger structure that contains cell `n` from itself
having a changed identity. By contrast, consider the case of
structural naming, which lacks nominal indirection: by virtue of being
determined by the value `v`, the allocated name `n` must change to
`n2` when `v` changes to `v2`. Structural naming is deterministic,
but lacks nominal independence, by definition.
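
Continuing the plain-Rust toy sketches above (again illustrative, not this crate's API; `hash_of` stands in for a structural-naming scheme), the contrast under a value change looks like this:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Structural naming: the name is a hash of the content.
fn hash_of(v: &i32) -> u64 {
    let mut h = DefaultHasher::new();
    v.hash(&mut h);
    h.finish()
}

fn main() {
    // Under structural naming, changing v from 2 to -2 changes
    // the name too, so the change is *not* localized ...
    let (v, v2) = (2, -2);
    assert_ne!(hash_of(&v), hash_of(&v2));

    // ... whereas a nominal system keeps the programmer-chosen name
    // `n` fixed across runs, localizing the change to one cell:
    let n = "a";
    let run1 = (n, v);  // run 1: cell "a" holds 2
    let run2 = (n, v2); // run 2: cell "a" holds -2
    assert_eq!(run1.0, run2.0); // same name, changed content
}
```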

## Simple example of nominal independence

The independence afforded by nominal indirection is critical in many
incremental programs.
As a simple illustrative example, consider `rev`, which
recursively reverses a list that, after being reversed, undergoes
incremental insertions and removals.

```
// rev : List -> List -> List
// rev l r = match l with
//           | Nil => r
//           | Cons(h,t) =>
//             memo(rev !t (Cons(h,r)))
```

The function `rev` reverses a list of `Cons` cells, using an
accumulator value `r`.
To incrementalize the code for `rev`, in the `Cons` case, we
memoize the recursive call to `rev`, which involves matching its
arguments with those of cached calls.
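
Setting incrementality aside for a moment, the pseudocode above is an ordinary accumulator-passing reversal; a plain-Rust rendering (without the `memo` or the `!t` ref-cell dereference, which are specific to the incremental version) might look like:

```rust
// A plain (non-incremental) rendering of the `rev` pseudocode:
// accumulator-passing list reversal over Cons cells.
#[derive(Debug, PartialEq)]
enum List {
    Nil,
    Cons(i32, Box<List>),
}
use List::{Cons, Nil};

fn rev(l: List, r: List) -> List {
    match l {
        Nil => r,
        // In the incremental version, this recursive call is wrapped
        // in `memo(...)`, and the tail `t` sits behind a ref cell (`!t`).
        Cons(h, t) => rev(*t, Cons(h, Box::new(r))),
    }
}

fn main() {
    let l = Cons(1, Box::new(Cons(2, Box::new(Cons(3, Box::new(Nil))))));
    let r = rev(l, Nil);
    assert_eq!(r, Cons(3, Box::new(Cons(2, Box::new(Cons(1, Box::new(Nil)))))));
}
```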

Problematically, if we do not introduce any indirection for it, the
accumulator `Cons(h,r)` will contain any incremental changes to head
value `h`, as well as any changes in the prefix of the input list (now
reversed in the previous accumulator value `r`). This is a problem
for memoization because, without indirection in this accumulator list,
such changes will repeatedly prevent us from `memo`-matching the
recursive call. Consequently, change propagation will re-evaluate all
the calls that follow the position of an insertion or removal: this
change is recorded in the accumulator, which has changed structurally.

To address this issue, which is an example of a general pattern
(c.f. the output accumulators of quicksort and quickhull), past
researchers suggest that we introduce nominal `cell`s into the
accumulator, each allocated with a name associated with its
corresponding input `Cons` cell. In place of "names", prior work
variously used the terms _(allocation) keys_ [AcarThesis, Hammer08,
AcarLeyWild09] and _indices_ [Acar06, Acar06ML], but the core idea is
the same.

```
// | Cons(n,h,t) =>
//   let rr = ref[n](r) in
//   memo(rev !t (Cons(h,rr)))
```

In this updated version, each name `n` localizes any change to the
accumulator argument `r` via nominal indirection.
When and if a later part of the program consumes the output in ref
cell `rr`, the system will process any changes associated with
this accumulator; they may be relevant later, but they are not
_directly_ relevant for reversing the tail `!t` in
cell `t`: the body of `rev` never inspects `r`; it
merely uses it to construct its output, for the `Nil` base case.

In summary, this example illustrates a general principle: nominal
indirection augments programs to permit memoization and change
propagation to exploit dynamic independence. Specifically, we exploit
the independence of the steps that reverse the pointers of a linked
list.

*/

//#![feature(associated_consts)]
//#![feature(box_patterns)]
//#![feature(box_syntax)]

#![crate_name = "adapton"]
#![crate_type = "lib"]

extern crate core;

#[macro_use]
pub mod macros;
pub mod engine;
pub mod catalog;
pub mod parse_val;
pub mod reflect;


mod adapton {
    pub use super::*;
}