A macro to build an AsyncSchedulerConfig without using .unwrap() or .expect().
It returns a Result<AsyncSchedulerConfig, NetworkError>, so callers can propagate failures with the ? operator.
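A minimal sketch of what such a macro could look like, assuming a small config struct with max_workers and queue_capacity fields and a NetworkError::InvalidConfig variant; the real crate's fields and validation rules may differ.

```rust
// Hypothetical sketch: field names and validation rules are assumptions.
#[derive(Debug)]
pub struct AsyncSchedulerConfig {
    pub max_workers: usize,
    pub queue_capacity: usize,
}

#[derive(Debug)]
pub enum NetworkError {
    InvalidConfig(String),
}

// Builds an AsyncSchedulerConfig, returning Err instead of panicking,
// so the call site can simply use `?`.
macro_rules! async_scheduler_config {
    (max_workers: $w:expr, queue_capacity: $q:expr $(,)?) => {{
        let workers: usize = $w;
        let capacity: usize = $q;
        if workers == 0 {
            Err(NetworkError::InvalidConfig("max_workers must be > 0".into()))
        } else if capacity == 0 {
            Err(NetworkError::InvalidConfig("queue_capacity must be > 0".into()))
        } else {
            Ok(AsyncSchedulerConfig { max_workers: workers, queue_capacity: capacity })
        }
    }};
}

fn build() -> Result<AsyncSchedulerConfig, NetworkError> {
    let cfg = async_scheduler_config!(max_workers: 4, queue_capacity: 128)?;
    Ok(cfg)
}
```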
Aggregator thread that reads incoming TaskItems and distributes them to workers.
Each phase of the aggregator's loop is now logged in more detail, which can
reveal whether the aggregator is stuck waiting for input or whether worker channels were closed prematurely.
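A rough sketch of such an aggregator loop, assuming tokio mpsc channels, round-robin dispatch, and the tracing crate for logging; the TaskItem shape and worker routing here are illustrative only.

```rust
// Illustrative only: TaskItem fields, channel types, and worker routing are assumptions.
use tokio::sync::mpsc;

#[derive(Debug)]
struct TaskItem {
    node_idx: usize,
}

async fn aggregator_loop(
    mut incoming_rx: mpsc::Receiver<TaskItem>,
    worker_txs: Vec<mpsc::Sender<TaskItem>>,
) {
    let mut next_worker = 0;
    loop {
        tracing::debug!("aggregator: waiting for next TaskItem");
        let Some(task) = incoming_rx.recv().await else {
            tracing::debug!("aggregator: input channel closed, shutting down");
            break;
        };
        tracing::debug!(node_idx = task.node_idx, "aggregator: received task");

        // Round-robin dispatch; a send error means the worker channel closed early.
        let idx = next_worker % worker_txs.len();
        if let Err(e) = worker_txs[idx].send(task).await {
            tracing::debug!("aggregator: worker {idx} closed prematurely: {e}");
            break;
        }
        tracing::debug!("aggregator: task dispatched to worker {idx}");
        next_worker += 1;
    }
}
```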
Executes the specified node (node_idx) from net_guard, optionally sends
streaming output (if output_tx is Some(...)), and decrements
in-degrees to compute freed children. Returns (freed_children, error).
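One way this step could look, assuming a simple Network with per-node child lists and an in-degree map; the operator call itself is stubbed out.

```rust
// Sketch under assumed types: the real node/operator API may differ.
use std::collections::HashMap;
use tokio::sync::mpsc;

struct Node {
    children: Vec<usize>,
}

struct Network {
    nodes: Vec<Node>,
    in_degrees: HashMap<usize, usize>,
}

// Runs one node, optionally streams its output, then decrements each child's
// in-degree; a child whose in-degree reaches zero is "freed" (ready to run).
async fn execute_node(
    net_guard: &mut Network,
    node_idx: usize,
    output_tx: Option<&mpsc::Sender<String>>,
) -> (Vec<usize>, Option<String>) {
    // Placeholder for the real operator call.
    let output = format!("node {node_idx} done");

    if let Some(tx) = output_tx {
        if tx.send(output).await.is_err() {
            return (Vec::new(), Some("streaming receiver dropped".to_string()));
        }
    }

    let mut freed_children = Vec::new();
    for &child in &net_guard.nodes[node_idx].children {
        let deg = net_guard.in_degrees.entry(child).or_insert(0);
        if *deg > 0 {
            *deg -= 1;
            if *deg == 0 {
                freed_children.push(child);
            }
        }
    }
    (freed_children, None)
}
```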
Retrieves the next TaskItem from the worker's channel.
We now log more detail about whether we are waiting, whether a task is present, and so on.
This helps confirm whether a worker is truly idle or whether a hang stems from tasks that were never received.
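A possible shape for this fetch step, assuming a tokio mpsc receiver and tracing for the logs.

```rust
// Illustrative sketch: channel type and log wording are assumptions.
use tokio::sync::mpsc;

struct TaskItem {
    node_idx: usize,
}

// Waits for the next task, logging before and after so a stalled worker
// can be distinguished from one that is simply idle.
async fn fetch_next_task(task_rx: &mut mpsc::Receiver<TaskItem>) -> Option<TaskItem> {
    tracing::debug!("worker: waiting for next task");
    match task_rx.recv().await {
        Some(task) => {
            tracing::debug!(node_idx = task.node_idx, "worker: received task");
            Some(task)
        }
        None => {
            tracing::debug!("worker: task channel closed, no more tasks");
            None
        }
    }
}
```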
Sends freed children to ready_nodes_tx. This step can be a source of hangs if
the channel is at capacity or closed, so we log each step to show
exactly where freed children get queued.
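A sketch of that send step under the same assumptions (bounded tokio mpsc channel, tracing logs). With a bounded channel, send(...).await blocks while the queue is full, so logging on both sides of the await shows exactly where a stall occurs.

```rust
// Sketch only: channel element type and error handling are assumptions.
use tokio::sync::mpsc;

async fn send_freed_children(
    freed_children: Vec<usize>,
    ready_nodes_tx: &mpsc::Sender<usize>,
) -> Result<(), String> {
    for child in freed_children {
        tracing::debug!(child, "scheduler: queuing freed child");
        if ready_nodes_tx.send(child).await.is_err() {
            tracing::debug!(child, "scheduler: ready_nodes_tx closed, cannot queue child");
            return Err(format!("ready_nodes_tx closed while queuing node {child}"));
        }
        tracing::debug!(child, "scheduler: freed child queued");
    }
    Ok(())
}
```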
Processes the task by locking the network, executing the node operator,
and computing freed children. More detailed logs make the exact timing
and concurrency behavior visible.
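A sketch of that sequence, reusing the assumed Network, TaskItem, and execute_node from the sketches above together with a tokio async Mutex; all of these names are illustrative.

```rust
// Sketch only: reuses the assumed Network, TaskItem, and execute_node from above.
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};

async fn process_task(
    net: Arc<Mutex<Network>>,
    task: TaskItem,
    output_tx: Option<mpsc::Sender<String>>,
) -> (Vec<usize>, Option<String>) {
    tracing::debug!(node_idx = task.node_idx, "worker: locking network");
    let mut net_guard = net.lock().await;

    tracing::debug!(node_idx = task.node_idx, "worker: executing node operator");
    let (freed_children, error) =
        execute_node(&mut *net_guard, task.node_idx, output_tx.as_ref()).await;

    tracing::debug!(
        node_idx = task.node_idx,
        freed = freed_children.len(),
        "worker: node finished"
    );
    (freed_children, error)
}
```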
Releases any concurrency permit if present. This step is crucial for preventing a hang
when a worker never releases its permit. Explicit logs confirm the permit is dropped.
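Assuming the permit is a tokio OwnedSemaphorePermit, the release step might look like this; dropping the permit is what returns its slot to the semaphore.

```rust
// Sketch only: assumes a tokio Semaphore permit guards worker concurrency.
use tokio::sync::OwnedSemaphorePermit;

fn release_permit(permit: Option<OwnedSemaphorePermit>, node_idx: usize) {
    match permit {
        Some(p) => {
            // Dropping the permit returns its slot to the semaphore.
            drop(p);
            tracing::debug!(node_idx, "worker: concurrency permit released");
        }
        None => {
            tracing::debug!(node_idx, "worker: no permit to release for this task");
        }
    }
}
```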
Spawns one worker OS thread within the given scope.
The worker runs a mini tokio runtime that calls worker_main_loop.
Returns a ScopedJoinHandle you can store or join later.
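A sketch under those assumptions, using std::thread::scope and a current-thread tokio runtime; the worker_main_loop stub here stands in for the fuller loop sketched at the end of this section.

```rust
// Sketch only: the worker_main_loop signature and channel types are assumptions.
use std::thread::{Scope, ScopedJoinHandle};
use tokio::sync::mpsc;

struct TaskItem {
    node_idx: usize,
}

// Stub standing in for the real worker loop.
async fn worker_main_loop(worker_id: usize, mut task_rx: mpsc::Receiver<TaskItem>) {
    while let Some(task) = task_rx.recv().await {
        tracing::debug!(worker_id, node_idx = task.node_idx, "worker: processing task");
        // ... process the task ...
    }
}

// Spawns one OS thread inside the scope; the thread drives a small
// current-thread tokio runtime that runs worker_main_loop to completion.
fn spawn_worker<'scope, 'env>(
    scope: &'scope Scope<'scope, 'env>,
    worker_id: usize,
    task_rx: mpsc::Receiver<TaskItem>,
) -> ScopedJoinHandle<'scope, ()> {
    scope.spawn(move || {
        let rt = match tokio::runtime::Builder::new_current_thread().enable_all().build() {
            Ok(rt) => rt,
            Err(e) => {
                tracing::error!("worker {worker_id}: failed to build runtime: {e}");
                return;
            }
        };
        rt.block_on(worker_main_loop(worker_id, task_rx));
    })
}
```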
Submits each node in the chunk to the worker pool, attempting to acquire a concurrency permit for each one.
Every step is now logged more carefully to aid in debugging concurrency issues and hangs.
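One plausible shape for this submission step, assuming a shared tokio Semaphore for throttling and a channel that carries each task together with its permit; these are assumptions, not the crate's actual API.

```rust
// Sketch only: Semaphore-based throttling and channel types are assumptions.
use std::sync::Arc;
use tokio::sync::{mpsc, OwnedSemaphorePermit, Semaphore};

struct TaskItem {
    node_idx: usize,
}

// Acquires one permit per node before handing it to the pool, so at most
// the semaphore's permit count of nodes are in flight at once.
async fn submit_chunk(
    chunk: &[usize],
    semaphore: Arc<Semaphore>,
    pool_tx: &mpsc::Sender<(TaskItem, OwnedSemaphorePermit)>,
) -> Result<(), String> {
    for &node_idx in chunk {
        tracing::debug!(node_idx, "submit: acquiring concurrency permit");
        let permit = semaphore
            .clone()
            .acquire_owned()
            .await
            .map_err(|_| "semaphore closed".to_string())?;

        tracing::debug!(node_idx, "submit: permit acquired, sending to worker pool");
        pool_tx
            .send((TaskItem { node_idx }, permit))
            .await
            .map_err(|_| format!("worker pool channel closed at node {node_idx}"))?;
        tracing::debug!(node_idx, "submit: node queued");
    }
    Ok(())
}
```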
A worker's main loop: continuously fetch tasks, process them, and send results.
Extra logs at each stage help identify, when a deadlock or hang occurs,
whether a channel has closed, concurrency is blocked, and so on.
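A sketch of the whole loop, tying together the assumed helpers from the sketches above (process_task, send_freed_children) and pairing each task with its concurrency permit; the names and channel layout are illustrative.

```rust
// Sketch only: reuses the assumed TaskItem, Network, process_task, and
// send_freed_children from the sketches above.
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex, OwnedSemaphorePermit};

async fn worker_main_loop(
    worker_id: usize,
    mut task_rx: mpsc::Receiver<(TaskItem, OwnedSemaphorePermit)>,
    net: Arc<Mutex<Network>>,
    ready_nodes_tx: mpsc::Sender<usize>,
) {
    loop {
        tracing::debug!(worker_id, "worker: fetching next task");
        let Some((task, permit)) = task_rx.recv().await else {
            tracing::debug!(worker_id, "worker: task channel closed, exiting loop");
            break;
        };

        tracing::debug!(worker_id, node_idx = task.node_idx, "worker: processing task");
        let (freed_children, error) = process_task(net.clone(), task, None).await;
        if let Some(err) = error {
            tracing::debug!(worker_id, "worker: node failed: {err}");
        }

        // Queue freed children; a closed or full channel shows up in these logs.
        if send_freed_children(freed_children, &ready_nodes_tx).await.is_err() {
            tracing::debug!(worker_id, "worker: ready_nodes_tx closed, exiting loop");
            break;
        }

        // Always release the concurrency permit so other nodes can run.
        drop(permit);
        tracing::debug!(worker_id, "worker: permit released, looping");
    }
}
```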