Crate specs_task
Fork-join multitasking for SPECS ECS
Instead of hand-rolling state machines to sequence the effects of various ECS systems, spawn tasks as entities and declare explicit temporal dependencies between them.
Code Examples
Making task graphs
```rust
fn make_static_task_graph(task_maker: &TaskMaker) {
    // Any component that implements TaskComponent can be spawned.
    let task_graph: TaskGraph = seq!(
        @TaskFoo("hello"),
        fork!(
            @TaskBar { value: 1 },
            @TaskBar { value: 2 },
            @TaskBar { value: 3 }
        ),
        @TaskZing("goodbye")
    );
    task_graph.assemble(task_maker, OnCompletion::Delete);
}

fn make_dynamic_task_graph(task_maker: &TaskMaker) {
    let first = task!(@TaskFoo("hello"));
    let mut middle = empty_graph!();
    for i in 0..10 {
        middle = fork!(middle, @TaskBar { value: i });
    }
    let last = task!(@TaskZing("goodbye"));
    let task_graph: TaskGraph = seq!(first, middle, last);
    task_graph.assemble(task_maker, OnCompletion::Delete);
}
```
Building a dispatcher with a TaskRunnerSystem
```rust
#[derive(Clone, Debug)]
struct PushValue {
    value: usize,
}

impl Component for PushValue {
    type Storage = VecStorage<Self>;
}

impl<'a> TaskComponent<'a> for PushValue {
    type Data = Write<'a, Vec<usize>>;

    fn run(&mut self, data: &mut Self::Data) -> bool {
        data.push(self.value);
        true
    }
}

fn make_dispatcher() -> Dispatcher {
    DispatcherBuilder::new()
        .with(TaskRunnerSystem::<PushValue>::default(), "push_value", &[])
        .with(TaskManagerSystem, "task_manager", &[])
        .build()
}
```
Data Model
Here we expound on the technical details of this module's implementation. For basic usage, see the tests.
In this model, every task is an entity. The entity is allowed to have exactly one component that implements `TaskComponent` (it may have other components that don't implement `TaskComponent`). The task will be run to completion by the corresponding `TaskRunnerSystem`.
Every task entity is also a node in a (hopefully acyclic) directed graph. An edge `t2 --> t1` means that `t2` cannot start until `t1` has completed.
In order for tasks to become unblocked, the `TaskManagerSystem` must run; it traverses the graph, starting at the "final entities," and checks for entities that have completed, potentially unblocking their parents. In order for a task to be run, it must be a descendant of a final entity. An entity becomes final by calling `finalize` on it (which adds a `FinalTag` component).
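The unblocking pass can be sketched in plain Rust. This is only an illustration of the traversal just described, not the crate's actual internals; `Node` and `unblock_pass` are hypothetical stand-ins for the real components and system.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a task entity in the dependency graph:
// `children` are the tasks this one must wait for.
struct Node {
    children: Vec<u32>,
    complete: bool,
    unblocked: bool,
}

// One manager pass: walk down from each final entity and unblock any
// node whose children have all completed (leaves unblock immediately).
// Assumes the graph is acyclic, as the crate expects.
fn unblock_pass(final_ids: &[u32], graph: &mut HashMap<u32, Node>) {
    let mut stack: Vec<u32> = final_ids.to_vec();
    while let Some(id) = stack.pop() {
        let children = graph[&id].children.clone();
        if children.iter().all(|c| graph[c].complete) {
            graph.get_mut(&id).unwrap().unblocked = true;
        }
        stack.extend(children);
    }
}

fn main() {
    let mut g = HashMap::new();
    g.insert(1, Node { children: vec![2, 3], complete: false, unblocked: false });
    g.insert(2, Node { children: vec![], complete: false, unblocked: false });
    g.insert(3, Node { children: vec![], complete: false, unblocked: false });
    // First pass: only the leaves (2 and 3) can be unblocked.
    unblock_pass(&[1], &mut g);
    assert!(g[&2].unblocked && g[&3].unblocked && !g[&1].unblocked);
    // Once the leaves complete, the final entity is unblocked too.
    g.get_mut(&2).unwrap().complete = true;
    g.get_mut(&3).unwrap().complete = true;
    unblock_pass(&[1], &mut g);
    assert!(g[&1].unblocked);
}
```

Note that repeated passes are required: a parent only becomes unblocked on some pass after all of its children have completed, which is why the `TaskManagerSystem` must be scheduled every frame.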
Edges can come from either `SingleEdge` or `MultiEdge` components, but you should not use these types directly. You might wonder why we need both types of edges. It's a fair question, because adding the `SingleEdge` concept does not actually make the model capable of representing any semantically new graphs. The reason is efficiency.
If you want to implement a fork-join like this (note: time goes left to right, but the directed edges go right to left):

```text
      ----- t1.1 <---        ----- t2.1 <---
     /               \      /               \
t0 <------- t1.2 <----<---------- t2.2 <---- t3
     \               /      \               /
      ----- t1.3 <---        ----- t2.3 <---
```
You would actually do this by calling `make_fork` to create two "fork" entities `F1` and `F2` that don't have `TaskComponent`s, but can have both a `SingleEdge` and a `MultiEdge`. Note that the children on the `MultiEdge` are called "prongs" of the fork.
```text
       single            single            single
t0 <---------- F1 <--------------- F2 <---------- t3
               |                   |
    t1.1 <-----|         t2.1 <----|
    t1.2 <-----| multi   t2.2 <----| multi
    t1.3 <-----|         t2.3 <----|
```
The semantics would be such that this graph is equivalent to the one above. Before any of the tasks connected to `F2` by the `MultiEdge` could run, the tasks connected by the `SingleEdge` (`{ t0, t1.1, t1.2, t1.3 }`) would have to be complete. `t3` could only run once all of the descendants of `F2` had completed.
The advantages of this scheme are:

- a traversal of the graph starting from `t3` does not visit the same node twice
- it is a bit easier to create fork-join graphs with larger numbers of concurrent tasks
- there are fewer edges for the most common use cases
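The last point can be made concrete with a little arithmetic. The helper functions below are hypothetical (not part of the crate); they just count edges for a chain of fork stages under the two encodings:

```rust
// Induced graph (no fork entities): every task in a stage needs an edge
// to every task in the previous stage, plus edges for the start and end.
fn induced_edges(stages: u32, prongs: u32) -> u32 {
    prongs                               // first stage -> start task
        + (stages - 1) * prongs * prongs // each stage -> previous stage
        + prongs                         // end task -> last stage
}

// Fork-entity encoding: each fork carries one SingleEdge plus `prongs`
// MultiEdge children; the end task needs one final SingleEdge.
fn fork_edges(stages: u32, prongs: u32) -> u32 {
    stages * (prongs + 1) + 1
}

fn main() {
    // The two-stage, three-prong example from the diagrams above:
    // 3 + 9 + 3 = 15 induced edges vs. 2 * 4 + 1 = 9 with forks.
    assert_eq!(induced_edges(2, 3), 15);
    assert_eq!(fork_edges(2, 3), 9);
    println!("induced = {}, fork = {}", induced_edges(2, 3), fork_edges(2, 3));
}
```

The induced count grows quadratically in the number of prongs, while the fork-entity count grows linearly, which is the efficiency argument for `SingleEdge`.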
Here's another example with "nested forks" to test your understanding.

With fork entities:

```text
t0 <-------------- FA <----- t2
                    |
          tx <------|
t1 <----- FB <------|
           |
  ty <-----|
  tz <-----|
```

As time orderings:

```text
t0 < { t1, tx, ty, tz } < t2
t1 < { ty, tz }
```

Induced graph:

```text
t0 <------- tx <-------- t2
 ^                        |
 |      /---- ty <--------|
 |      v                 |
  ----- t1 <--- tz <------
```
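One way to check these orderings mechanically is to model fork entities in plain Rust and generate every "a must complete before b" pair. Everything below is a sketch under the semantics described above (`Node`, `tasks_of`, and `orderings` are hypothetical, not crate APIs):

```rust
use std::collections::BTreeSet;

// A node is either a real task or a fork entity (task: None). `single`
// is the SingleEdge child; `prongs` are the MultiEdge children.
struct Node {
    task: Option<&'static str>,
    single: Option<usize>,
    prongs: Vec<usize>,
}

// All real tasks in the subtree rooted at `n`.
fn tasks_of(g: &[Node], n: usize) -> BTreeSet<&'static str> {
    let mut out = BTreeSet::new();
    if let Some(t) = g[n].task {
        out.insert(t);
    }
    for &c in g[n].single.iter().chain(&g[n].prongs) {
        out.extend(tasks_of(g, c));
    }
    out
}

// Collect (a, b) pairs meaning "a must complete before b starts".
fn orderings(g: &[Node], n: usize, out: &mut BTreeSet<(&'static str, &'static str)>) {
    let single_tasks = g[n].single.map(|s| tasks_of(g, s)).unwrap_or_default();
    // Everything reachable via the single edge precedes every prong task.
    for &p in &g[n].prongs {
        for b in tasks_of(g, p) {
            for &a in &single_tasks {
                out.insert((a, b));
            }
        }
    }
    // The node's own task runs only after all of its descendants.
    if let Some(t) = g[n].task {
        for &c in g[n].single.iter().chain(&g[n].prongs) {
            for a in tasks_of(g, c) {
                out.insert((a, t));
            }
        }
    }
    for &c in g[n].single.iter().chain(&g[n].prongs) {
        orderings(g, c, out);
    }
}

fn main() {
    // Indices: 0 = t0, 1 = t1, 2 = tx, 3 = ty, 4 = tz, 5 = FB, 6 = FA, 7 = t2.
    let g = vec![
        Node { task: Some("t0"), single: None, prongs: vec![] },
        Node { task: Some("t1"), single: None, prongs: vec![] },
        Node { task: Some("tx"), single: None, prongs: vec![] },
        Node { task: Some("ty"), single: None, prongs: vec![] },
        Node { task: Some("tz"), single: None, prongs: vec![] },
        Node { task: None, single: Some(1), prongs: vec![3, 4] }, // FB
        Node { task: None, single: Some(0), prongs: vec![2, 5] }, // FA
        Node { task: Some("t2"), single: Some(6), prongs: vec![] },
    ];
    let mut before = BTreeSet::new();
    orderings(&g, 7, &mut before);
    // t0 < { t1, tx, ty, tz } < t2, and t1 < { ty, tz }:
    assert!(before.contains(&("t0", "tx")) && before.contains(&("t0", "tz")));
    assert!(before.contains(&("t1", "ty")) && before.contains(&("tx", "t2")));
    // The prongs of FA are mutually unordered:
    assert!(!before.contains(&("tx", "ty")) && !before.contains(&("ty", "tx")));
}
```

Running this reproduces exactly the time orderings listed above, including the fact that `tx` is unordered relative to `t1`, `ty`, and `tz`.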
Macro Usage
Every user of this module should create task graphs via the `empty_graph!`, `seq!`, `fork!`, and `task!` macros, which make it easy to construct task graphs correctly. Once a graph is ready, call `assemble` on it to mark the task entities for execution.
These systems must be scheduled for tasks to make progress:

- `TaskManagerSystem`
- `TaskRunnerSystem`
Advanced Usage
If you find the `TaskGraph` macros limiting, you can use the `TaskMaker`; these are the building blocks for creating all task graphs, including buggy ones. These functions are totally dynamic in that they deal directly with entities of various archetypes, assuming that the programmer passed in the correct archetypes for the given function.
Potential bugs that won't be detected for you:
- leaked orphan entities
- graph cycles
- finalizing an entity that has children
- users manually tampering with the `TaskProgress`, `SingleEdge`, `MultiEdge`, or `FinalTag` components; these should only be used inside this module
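Of these, a graph cycle is particularly nasty: no task in the cycle can ever be unblocked, so the graph silently deadlocks. If you build graphs dynamically with the low-level API, you could run a guard like the following over your own edge lists before assembling. This is a generic three-color depth-first search sketch, not something the crate provides:

```rust
use std::collections::HashMap;

// Detect a cycle in a dependency graph (node -> nodes it must wait for)
// using a three-color DFS: 1 = in progress, 2 = fully explored.
fn has_cycle(edges: &HashMap<u32, Vec<u32>>) -> bool {
    fn visit(n: u32, edges: &HashMap<u32, Vec<u32>>, state: &mut HashMap<u32, u8>) -> bool {
        match state.get(&n) {
            Some(1) => return true,  // back edge: cycle found
            Some(2) => return false, // already explored, no cycle here
            _ => {}
        }
        state.insert(n, 1);
        for &m in edges.get(&n).into_iter().flatten() {
            if visit(m, edges, state) {
                return true;
            }
        }
        state.insert(n, 2);
        false
    }
    let mut state = HashMap::new();
    edges.keys().any(|&n| visit(n, edges, &mut state))
}

fn main() {
    let acyclic: HashMap<u32, Vec<u32>> =
        [(1, vec![2, 3]), (2, vec![3]), (3, vec![])].into_iter().collect();
    let cyclic: HashMap<u32, Vec<u32>> =
        [(1, vec![2]), (2, vec![1])].into_iter().collect();
    assert!(!has_cycle(&acyclic));
    assert!(has_cycle(&cyclic));
}
```

A recursive DFS is fine for the small graphs typical of task sequencing; very deep graphs would warrant an iterative version to avoid stack overflow.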
Macros
empty_graph | Make a task graph without any tasks. This is used as the initial value for accumulating graphs dynamically. |
fork | Returns a TaskGraph that runs all of the given subgraphs concurrently. |
seq | Returns a TaskGraph that runs the given subgraphs in sequence. |
task | Make a single-node TaskGraph. |
Structs
TaskMaker | The low-level building block for creating all task graphs. |
TaskManager | The main object for users of this module. Used for creating and connecting tasks. |
TaskManagerSystem | Traverses all descendants of all finalized entities and unblocks them if possible. |
TaskRunnerSystem | The counterpart to an implementation of TaskComponent; runs tasks of that component type to completion. |
Enums
Cons | Implementation detail of TaskGraph. |
OnCompletion | What to do with a final task and its descendants when they complete. |
Traits
TaskComponent | An ephemeral component that needs access to some SystemData in order to run. |
TaskFactory | Implemented by all nodes of a TaskGraph. |
Type Definitions
TaskGraph | A node of the binary tree grammar that describes a task graph. |