Crate specs_task

Fork-join multitasking for SPECS ECS

Instead of hand-rolling state machines to sequence the effects of various ECS systems, spawn tasks as entities and declare explicit temporal dependencies between them.

Code Examples

Making task graphs

This example deliberately fails to compile
fn make_static_task_graph(user: &TaskUser) {
    // Any component that implements TaskComponent can be spawned.
    let task_graph: TaskGraph = seq!(
        @TaskFoo("hello"),
        fork!(
            @TaskBar { value: 1 },
            @TaskBar { value: 2 },
            @TaskBar { value: 3 }
        ),
        @TaskZing("goodbye")
    );
    task_graph.assemble(user, OnCompletion::Delete);
}

fn make_dynamic_task_graph(user: &TaskUser) {
    let first: TaskGraph = task!(@TaskFoo("hello"));
    let mut middle: TaskGraph = empty_graph!();
    for i in 0..10 {
        middle = fork!(middle, @TaskBar { value: i });
    }
    let last: TaskGraph = task!(@TaskZing("goodbye"));
    let task_graph: TaskGraph = seq!(first, middle, last);
    task_graph.assemble(user, OnCompletion::Delete);
}

Building a dispatcher with a TaskRunnerSystem

This example deliberately fails to compile
#[derive(Clone, Debug)]
struct PushValue {
    value: usize,
}

impl Component for PushValue {
    type Storage = VecStorage<Self>;
}

impl<'a> TaskComponent<'a> for PushValue {
    type Data = Write<'a, Vec<usize>>;

    fn run(&mut self, data: &mut Self::Data) -> bool {
        data.push(self.value);

        true
    }
}

fn make_dispatcher() -> Dispatcher {
    DispatcherBuilder::new()
        .with(
            TaskRunnerSystem::<PushValue>::default(),
            "push_value",
            &[],
        )
        .with(
            TaskManagerSystem,
            "task_manager",
            &[],
        )
        .build()
}

Data Model

Here we expound on the technical details of this module's implementation. For basic usage, see the tests.

In this model, every task is an entity. The entity is allowed to have exactly one component that implements TaskComponent (it may have other components that don't implement TaskComponent). The task will be run to completion by the corresponding TaskRunnerSystem.

Every task entity is also a node in a (hopefully acyclic) directed graph. An edge t2 --> t1 means that t2 cannot start until t1 has completed.

In order for tasks to become unblocked, the TaskManagerSystem must run. It traverses the graph, starting at the "final entities," and checks for entities that have completed, potentially unblocking their parents. In order for a task to be run, it must be a descendant of a final entity. An entity becomes final by calling finalize on it (which adds a FinalTag component).
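
The unblocking rule can be sketched in plain Rust. This is a toy model, not the crate's implementation: `World`, `complete`, and `deps` are hypothetical stand-ins for the crate's internal component storages.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the crate's internals: each task entity has a
// completion flag, and `deps[t2]` lists the entities t2 has edges to, i.e.
// `t2 --> t1` means t2 cannot start until t1 has completed.
#[derive(Default)]
struct World {
    complete: HashMap<u32, bool>,
    deps: HashMap<u32, Vec<u32>>,
}

impl World {
    // The decision the manager's traversal makes at each node: a task is
    // unblocked once everything it depends on has completed.
    fn is_unblocked(&self, entity: u32) -> bool {
        self.deps.get(&entity).map_or(true, |children| {
            children.iter().all(|c| *self.complete.get(c).unwrap_or(&false))
        })
    }
}

fn main() {
    let mut world = World::default();
    world.deps.insert(2, vec![1]); // t2 --> t1
    world.complete.insert(1, false);
    assert!(!world.is_unblocked(2)); // t1 not done, so t2 is blocked
    world.complete.insert(1, true);
    assert!(world.is_unblocked(2)); // t1 done, so t2 may run
}
```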

Edges can come from either SingleEdge or MultiEdge components, but you should not use these types directly. You might wonder why we need both kinds of edge. It's a fair question, because adding the SingleEdge concept does not actually make the model capable of representing any semantically new graphs. The reason is efficiency.

If you want to implement a fork-join like this (note: time flows left to right, but the directed edges point right to left):

           ----- t1.1 <---   ----- t2.1 <---
          /               \ /               \
      t0 <------ t1.2 <----<------ t2.2 <---- t3
          \               / \               /
           ----- t1.3 <---   ----- t2.3 <---

You would actually do this by calling make_fork to create two "fork" entities F1 and F2 that don't have TaskComponents, but they can have both a SingleEdge and a MultiEdge. Note that the children on the MultiEdge are called "prongs" of the fork.

          single          single          single
     t0 <-------- F1 <-------------- F2 <-------- t3
                   |                  |
          t1.1 <---|          t2.1 <--|
          t1.2 <---| multi    t2.2 <--| multi
          t1.3 <---|          t2.3 <--|

The semantics would be such that this graph is equivalent to the one above. Before any of the tasks connected to F2 by the MultiEdge could run, the tasks reachable through F2's SingleEdge ({ t0, t1.1, t1.2, t1.3 }) would have to be complete. t3 could only run once all of the descendants of F2 had completed.

The advantages of this scheme are:

  • a traversal of the graph starting from t3 does not visit the same node twice
  • it is a bit easier to create fork-join graphs with larger numbers of concurrent tasks
  • there are fewer edges for the most common use cases
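
The "fewer edges" point is easy to check for the two-stage, n-prong fork-join diagrammed above. The counts below are read off the diagrams themselves, not taken from any crate API:

```rust
// Direct encoding of the first diagram: t1.* --> t0 (n edges),
// t2.* --> t1.* (every stage-2 prong waits on every stage-1 prong: n*n),
// and t3 --> t2.* (n edges).
fn induced_edges(n: u32) -> u32 {
    n + n * n + n
}

// Fork-entity encoding: F1 and F2 each carry one SingleEdge plus n
// MultiEdge prongs, and t3 has a single edge to F2.
fn fork_entity_edges(n: u32) -> u32 {
    2 * (n + 1) + 1
}

fn main() {
    // For the 3-prong diagrams above: 15 edges vs 9.
    assert_eq!(induced_edges(3), 15);
    assert_eq!(fork_entity_edges(3), 9);
    // For 2 or more prongs the fork-entity encoding is never worse,
    // and the saving grows quadratically with the number of prongs.
    for n in 2..10 {
        assert!(fork_entity_edges(n) < induced_edges(n));
    }
}
```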

Here's another example with "nested forks" to test your understanding:

      With fork entities:

          t0 <-------------- FA <----- t2
                             |
                      tx <---|
              t1 <--- FB <---|
                       |
              ty <-----|
              tz <-----|

      As time orderings:

          t0   < { t1, tx, ty, tz } < t2
          t1   < { ty, tz }

      Induced graph:

          t0 <------- tx <------- t2
           ^                      |
           |      /------ ty <----|
           |     v                |
           ----- t1 <---- tz <-----
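
To test that understanding mechanically, here is a stdlib-only model that derives the time orderings above from the fork-entity shape. `Node`, `tasks`, and `orderings` are illustrative names, not crate types:

```rust
enum Node {
    Task(&'static str),
    // A fork entity: an optional SingleEdge child plus MultiEdge prongs.
    Fork { single: Option<Box<Node>>, multi: Vec<Node> },
}

// All task names reachable from a node (fork entities carry no task).
fn tasks(n: &Node) -> Vec<&'static str> {
    match n {
        Node::Task(t) => vec![t],
        Node::Fork { single, multi } => single
            .iter()
            .map(|s| s.as_ref())
            .chain(multi.iter())
            .flat_map(tasks)
            .collect(),
    }
}

// Pairs (a, b) meaning "a must complete before b may start": everything
// behind a fork's SingleEdge precedes everything on its MultiEdge.
fn orderings(n: &Node, out: &mut Vec<(&'static str, &'static str)>) {
    if let Node::Fork { single, multi } = n {
        if let Some(s) = single {
            for a in tasks(s) {
                for m in multi {
                    for b in tasks(m) {
                        out.push((a, b));
                    }
                }
            }
            orderings(s, out);
        }
        for m in multi {
            orderings(m, out);
        }
    }
}

fn main() {
    let fb = Node::Fork {
        single: Some(Box::new(Node::Task("t1"))),
        multi: vec![Node::Task("ty"), Node::Task("tz")],
    };
    let fa = Node::Fork {
        single: Some(Box::new(Node::Task("t0"))),
        multi: vec![Node::Task("tx"), fb],
    };
    // t2's single edge to FA, modeled as a degenerate fork.
    let top = Node::Fork {
        single: Some(Box::new(fa)),
        multi: vec![Node::Task("t2")],
    };

    let mut out = Vec::new();
    orderings(&top, &mut out);
    // Recovers: t0 < { t1, tx, ty, tz } < t2, and t1 < { ty, tz }.
    assert!(out.contains(&("t0", "t1")));
    assert!(out.contains(&("tx", "t2")));
    assert!(out.contains(&("t1", "ty")));
    assert!(!out.contains(&("tx", "ty"))); // tx and ty are unordered
}
```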

Macro Usage

Every user of this module should create task graphs via the empty_graph!, seq!, fork!, and task! macros, which make it easy to construct task graphs correctly. Once a graph is ready, call assemble on it to mark the task entities for execution.

These systems must be scheduled for tasks to make progress:

  • TaskManagerSystem
  • TaskRunnerSystem

Advanced Usage

If you find the TaskGraph macros limiting, you can use the TaskUser methods; these are the building blocks for creating all task graphs, including buggy ones. These functions are totally dynamic in that they deal directly with entities of various archetypes, assuming that the programmer passed in the correct archetypes for the given function.

Potential bugs that won't be detected for you:

  • leaked orphan entities
  • graph cycles
  • finalizing an entity that has children
  • users manually tampering with the TaskProgress, SingleEdge, MultiEdge, or FinalTag components; these should only be used inside this module

Macros

empty_graph

Make a task graph without any tasks. This is used as the initial value for accumulating graphs dynamically.

fork

Returns a TaskGraph that executes the argument list of TaskGraphs concurrently.

seq

Returns a TaskGraph that executes the argument list of TaskGraphs sequentially.

task

Make a single-node TaskGraph.

Structs

TaskData
TaskManagerSystem

Traverses all descendants of all finalized entities and unblocks them if possible.

TaskRunnerSystem

The counterpart to an implementation of TaskComponent. Runs tasks until completion.

Enums

Cons

Implementation detail of TaskGraph. Note that two trees may be unequal yet equivalent in how they assemble, for example, Cons::Seq(x, Cons::Seq(y, z)) != Cons::Seq(Cons::Seq(x, y), z), but they both assemble into a sequence x -> y -> z.
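
A stdlib-only toy makes that equivalence concrete. This is a simplified sketch: the real Cons also has a Fork variant and holds entities rather than string names.

```rust
#[derive(Debug, PartialEq)]
enum Cons {
    Seq(Box<Cons>, Box<Cons>),
    Task(&'static str),
}

use Cons::{Seq, Task};

// The order a Seq tree assembles into: an in-order walk of its leaves.
fn flatten(c: &Cons, out: &mut Vec<&'static str>) {
    match c {
        Task(t) => out.push(t),
        Seq(a, b) => {
            flatten(a, out);
            flatten(b, out);
        }
    }
}

fn main() {
    // Cons::Seq(x, Cons::Seq(y, z)) != Cons::Seq(Cons::Seq(x, y), z)...
    let right = Seq(
        Box::new(Task("x")),
        Box::new(Seq(Box::new(Task("y")), Box::new(Task("z")))),
    );
    let left = Seq(
        Box::new(Seq(Box::new(Task("x")), Box::new(Task("y")))),
        Box::new(Task("z")),
    );
    assert_ne!(right, left);

    // ...but both assemble into the sequence x -> y -> z.
    let (mut a, mut b) = (Vec::new(), Vec::new());
    flatten(&right, &mut a);
    flatten(&left, &mut b);
    assert_eq!(a, b);
    assert_eq!(a, vec!["x", "y", "z"]);
}
```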

OnCompletion

What to do with a final task and its descendants when they complete. WARNING: If you specify Delete, then you will not be able to poll for completion, since a non-existent entity is assumed to be "incomplete."

Traits

TaskComponent

An ephemeral component that needs access to SystemData to run some task. Will be run by the TaskRunnerSystem<T> until run returns true.

TaskFactory

Implemented by all nodes of a TaskGraph. Has a blanket impl that should work for most TaskComponents.

Type Definitions

TaskGraph

A node of the binary tree grammar that describes a task graph. Cons::Seq lists represent sequential execution of tasks. Cons::Fork lists represent concurrent execution of tasks. The leaves of the tree are Cons::Tasks.

TaskUser

SystemData for all read-only task-related operations. Can create and modify tasks lazily.

TaskWriter

Creates and modifies task graph entities. Effects are immediate, not lazy like TaskUser, so this requires using WriteStorage for task graph components.