Crate legion_task

Fork-join multitasking for Legion ECS

Instead of hand-rolling state machines to sequence the effects of various ECS systems, spawn tasks as entities and declare explicit temporal dependencies between them.

Code Example: making task graphs and dispatching task runners

use legion::prelude::*;
use legion_task::*;

struct SaySomething(&'static str);

impl<'a> TaskComponent<'a> for SaySomething {
    type Data = ();

    fn run(&mut self, data: &mut Self::Data) -> bool {
        println!("{}", self.0);

        // Returning true signals that the task is complete.
        true
    }
}

#[derive(Clone, Debug)]
struct PushValue {
    value: usize,
}

impl<'a> TaskComponent<'a> for PushValue {
    type Data = Vec<usize>;

    fn run(&mut self, data: &mut Self::Data) -> bool {
        data.push(self.value);

        true
    }
}

fn make_static_task_graph(cmd: &mut CommandBuffer) {
    // Any component that implements TaskComponent can be spawned.
    let task_graph: TaskGraph = seq!(
            @PushValue { value: 1 },
            @PushValue { value: 2 },
            @PushValue { value: 3 }
    );

    task_graph.assemble(OnCompletion::Delete, cmd);
}

fn make_dynamic_task_graph(cmd: &mut CommandBuffer) {
    let first: TaskGraph = task!(@SaySomething("hello"));
    let mut middle: TaskGraph = empty_graph!();
    for i in 0..10 {
        middle = fork!(middle, @PushValue { value: i });
    }

    let last: TaskGraph = task!(@SaySomething("goodbye"));
    let task_graph: TaskGraph = seq!(first, middle, last);
    task_graph.assemble(OnCompletion::Delete, cmd);
}

fn build_say_something_task_runner_system() -> Box<dyn Schedulable> {
    SystemBuilder::new("say_something_task_runner")
        .with_query(task_runner_query::<SaySomething>())
        .build(|_, mut world, _, task_query| {
            run_tasks(&mut world, &mut (), task_query)
        })
}

fn build_push_value_task_runner_system() -> Box<dyn Schedulable> {
    SystemBuilder::new("push_value_task_runner")
        .write_resource::<Vec<usize>>()
        .with_query(task_runner_query::<PushValue>())
        .build(|_, mut world, value, task_query| {
            run_tasks(&mut world, &mut **value, task_query)
        })
}

fn make_schedule() -> Schedule {
    Schedule::builder()
        // The task manager unblocks tasks; the runner systems execute them.
        // (The name argument here is an assumption about the signature.)
        .add_system(build_task_manager_system("task_manager"))
        .add_system(build_say_something_task_runner_system())
        .add_system(build_push_value_task_runner_system())
        .build()
}

Data Model

Here we expound on the technical details of this module's implementation. For basic usage, see the tests.

In this model, every task is some entity. The entity is allowed to have exactly one component that implements TaskComponent (it may have other components that don't implement TaskComponent). The task will be run to completion by running a system that calls run_tasks with the proper TaskComponent::Data and task_query.

Every task entity is also a node in a (hopefully acyclic) directed graph. An edge t2 --> t1 means that t2 cannot start until t1 has completed.

In order for tasks to become unblocked, the system created with build_task_manager_system must run, whereupon it traverses the graph, starting at the "final entities," and checks for entities that have completed, potentially unblocking their parents. In order for a task to be run, it must be a descendant of a final entity. Entity component tuples become final by calling finalize (which adds a FinalTag component).

Edges can either come from SingleEdge or MultiEdge components, but you should not use these types directly. You might wonder why we need both types of edges. It's a fair question, because adding the SingleEdge concept does not actually make the model capable of representing any semantically new graphs. The reason is efficiency.

If you want to implement a fork-join like this (note: time goes left to right, but the directed edges point right to left):

           ----- t1.1 <---   ----- t2.1 <---
          /               \ /               \
      t0 <------ t1.2 <----<------ t2.2 <---- t3
          \               / \               /
           ----- t1.3 <---   ----- t2.3 <---

You would actually do this by calling make_fork to create two "fork" entities F1 and F2 that don't have TaskComponents, but they can have both a SingleEdge and a MultiEdge. Note that the children on the MultiEdge are called "prongs" of the fork.

          single          single          single
     t0 <-------- F1 <-------------- F2 <-------- t3
                   |                  |
          t1.1 <---|          t2.1 <--|
          t1.2 <---| multi    t2.2 <--| multi
          t1.3 <---|          t2.3 <--|

The semantics would be such that this graph is equivalent to the one above. Before any of the tasks connected to F2 by the MultiEdge could run, the tasks connected by the SingleEdge ({ t0, t1.1, t1.2, t1.3 }) would have to be complete. t3 could only run once all of the descendants of F2 had completed.

The advantages of this scheme are:

  • a traversal of the graph starting from t3 does not visit the same node twice
  • it is a bit easier to create fork-join graphs with larger numbers of concurrent tasks
  • there are fewer edges for the most common use cases

Here's another example with "nested forks" to test your understanding:

      With fork entities:

          t0 <-------------- FA <----- t2
                      tx <---|
              t1 <--- FB <---|
              ty <-----|
              tz <-----|

      As time orderings:

          t0   < { t1, tx, ty, tz } < t2
          t1   < { ty, tz }

      Induced graph:

          t0 <------- tx <------- t2
           ^                      |
           |      /------ ty <----|
           |     v                |
           ----- t1 <---- tz <-----

Macro Usage

Every user of this module should create task graphs via the empty_graph!, seq!, fork!, and task! macros, which make it easy to construct task graphs correctly. Once a graph is ready, call assemble on it to mark the task entities for execution (by finalizing the root of the graph).

These systems must be scheduled for tasks to make progress:

  • a system created with build_task_manager_system
  • a system that calls run_tasks on each TaskComponent used

Advanced Usage

If you find the TaskGraph macros limiting, you can use the make_task, join, make_fork, and add_prong functions; these are the building blocks for creating all task graphs, including buggy ones. These functions are totally dynamic in that they deal directly with entities of various archetypes, assuming that the programmer passed in the correct archetypes for the given function.

Potential bugs that won't be detected for you:

  • leaked orphan entities
  • graph cycles
  • finalizing an entity that has children
  • users manually tampering with the TaskProgress, SingleEdge, MultiEdge, or FinalTag components; these should only be used inside this module



Macros

empty_graph!
Make a task graph without any tasks. This is used as the initial value for accumulating graphs dynamically.


fork!
Returns a TaskGraph that executes the argument list of TaskGraphs concurrently.


seq!
Returns a TaskGraph that executes the argument list of TaskGraphs sequentially.


task!
Make a single-node TaskGraph.



Enums

Cons
Implementation detail of TaskGraph. Note that two trees may be unequal yet equivalent in how they assemble; for example, Cons::Seq(x, Cons::Seq(y, z)) != Cons::Seq(Cons::Seq(x, y), z), but they both assemble into the sequence x -> y -> z.


OnCompletion
What to do to a final task and its descendants when they complete.



Traits

TaskComponent
An ephemeral component that needs access to Data to run some task. Will be run by run_tasks in a system with access to task_runner_query and Data.


Implemented by all nodes of a TaskGraph. Has a blanket impl that should work for most TaskComponents.



Functions

add_prong
Add prong as a child on the MultiEdge of fork_entity.


build_task_manager_system
Creates a system that traverses all descendants of all finalized entities and unblocks them if possible.


Tells you whether a fork or a task entity is complete.


finalize
Mark entity as "final," i.e. a task with no parent.


join
Creates a SingleEdge from parent to child. Creates a fork-join if parent is a fork.


make_fork
Create a new fork entity with no children.


make_task
Create a new task entity.


run_tasks
Run the tasks that match task_query. Should be run in a system created with task_runner_query.


task_runner_query
The legion system query required to run all tasks with T: TaskComponent.


Gives read-only access to the task meta-components in order to query the state of task entities.

Type Definitions


The EntityFilterTuple for task_runner_query.


A node of the binary tree grammar that describes a task graph. Cons::Seq lists represent sequential execution of tasks. Cons::Fork lists represent concurrent execution of tasks. The leaves of the tree are Cons::Tasks.


The type of Query created by task_runner_query and used by run_tasks.


The type of SystemQuery created by task_runner_query and used by run_tasks.