/// Stream entry field carrying the msgpack-encoded job payload.
pub const PAYLOAD_FIELD: &str = "d";
/// Stream entry field carrying the optional UTF-8 job `name` alongside the
/// msgpack-encoded payload in `d`. Producer omits the field entirely for
/// unnamed jobs; consumer treats absent and empty as equivalent.
pub const NAME_FIELD: &str = "n";
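The two fields above can be illustrated with a small sketch. This is not the crate's producer code; `entry_fields` is a hypothetical helper showing how the XADD field/value list might be assembled, with `"d"` and `"n"` being `PAYLOAD_FIELD` / `NAME_FIELD`:

```rust
// Sketch only (not the crate's producer code): assembling the XADD
// field/value list. Unnamed jobs omit "n" entirely, matching the
// consumer's absent-equals-empty rule, so an empty name is never written.
pub fn entry_fields(payload: Vec<u8>, name: Option<&str>) -> Vec<(&'static str, Vec<u8>)> {
    let mut fields = vec![("d", payload)];
    if let Some(n) = name.filter(|n| !n.is_empty()) {
        fields.push(("n", n.as_bytes().to_vec()));
    }
    fields
}
```

Dropping the field rather than writing an empty value keeps unnamed entries one field smaller on the wire.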
/// Per-queue, per-job-id dedup marker key. Used by the idempotent delayed
/// scheduling path (`Producer::add_in_with_id` / `add_at_with_id` /
/// `add_in_bulk_with_ids`) so a network-driven caller retry doesn't
/// double-schedule the job. Same `{chasqui:<queue>}` hash tag as the
/// delayed ZSET so they always co-locate on the same Redis Cluster slot.
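The marker's semantics can be modeled in memory. This is illustrative only (the real path is Redis-side, not this struct); the marker behaves like `SET NX`, so a retried call with the same job id leaves the delayed ZSET untouched:

```rust
use std::collections::{BTreeMap, HashSet};

// In-memory model of the dedup-marker semantics; names are illustrative.
pub struct DelayedModel {
    pub dedup: HashSet<String>,      // stands in for the per-job-id marker keys
    pub zset: BTreeMap<String, u64>, // member -> score (run_at_ms)
}

impl DelayedModel {
    pub fn new() -> Self {
        Self { dedup: HashSet::new(), zset: BTreeMap::new() }
    }

    // Returns true only for the first call with a given job id.
    pub fn add_at_with_id(&mut self, job_id: &str, run_at_ms: u64) -> bool {
        if !self.dedup.insert(job_id.to_string()) {
            return false; // marker already present: caller retry, no-op
        }
        self.zset.insert(job_id.to_string(), run_at_ms);
        true
    }
}
```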
/// Per-queue cross-process events stream. The engine writes engine-internal
/// transitions (waiting / active / completed / failed / retry-scheduled /
/// delayed / dlq / drained) here as Redis Stream entries; subscribers in any
/// process can `XREAD` to observe them. This is a sibling to `MetricsSink`,
/// not a replacement: `MetricsSink` is in-process (zero IPC), the events
/// stream is cross-process (subscribable by an external dashboard or the
/// Node bindings' `QueueEvents` class). Both fire on the same hot-path
/// occurrences. Same `{chasqui:<queue>}` hash tag so it co-locates with the
/// other queue keys on a single Redis Cluster slot.
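The hash-tag mechanics mentioned throughout can be shown with a hypothetical key builder (the function name and the `:events` suffix are assumptions, not the crate's API): Redis Cluster hashes only the text between the first `{` and the next `}`, so every key carrying the same `{chasqui:<queue>}` tag maps to the same slot.

```rust
// Hypothetical key builder; name and suffix are illustrative assumptions.
// Only "chasqui:<queue>" inside the braces feeds the cluster's CRC16 slot
// computation, so all per-queue keys built this way co-locate.
pub fn events_key(queue: &str) -> String {
    format!("{{chasqui:{queue}}}:events")
}
```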
/// Per-queue ZSET tracking repeatable specs by next fire time. Score =
/// `next_fire_ms`, member = `RepeatableSpec::resolved_key()`. The
/// `Scheduler` (slice 10) tails this with `ZRANGEBYSCORE -inf <now>` to
/// find specs whose next fire time has elapsed.
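The scheduler's scan can be modeled in memory. A sketch (not the crate's code) of what `ZRANGEBYSCORE <zset> -inf <now>` returns against this ZSET: members whose `next_fire_ms` score has elapsed, ascending by score like the Redis reply.

```rust
use std::collections::BTreeMap;

// In-memory model of the scheduler tick's due-spec scan.
pub fn due_specs(zset: &BTreeMap<String, u64>, now_ms: u64) -> Vec<String> {
    let mut due: Vec<(u64, String)> = zset
        .iter()
        .filter(|&(_, &score)| score <= now_ms)
        .map(|(member, &score)| (score, member.clone()))
        .collect();
    due.sort(); // ascending by score, matching ZRANGEBYSCORE ordering
    due.into_iter().map(|(_, member)| member).collect()
}
```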
/// Per-queue, per-spec-key hash storing the full repeatable spec
/// (`pattern`, `payload`, `limit`, etc.) under field `spec` as
/// msgpack-encoded [`crate::repeat::StoredSpec`]. Separate from the ZSET so
/// the scheduler tick only hydrates due specs, not the entire catalog.
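Why the spec bodies live apart from the ZSET can be sketched as follows (types and names here are illustrative, not the crate's `StoredSpec`): a tick fetches only the due keys, never the whole catalog.

```rust
use std::collections::HashMap;

// Illustrative stand-in for the msgpack-decoded spec body.
#[derive(Clone, Debug, PartialEq)]
pub struct StoredSpec {
    pub pattern: String,
    pub limit: Option<u32>,
}

// Hydrate only the specs the ZSET scan reported as due.
pub fn hydrate<'a>(
    catalog: &'a HashMap<String, StoredSpec>, // spec key -> decoded spec
    due_keys: &[String],                      // output of the ZSET scan
) -> Vec<&'a StoredSpec> {
    due_keys.iter().filter_map(|key| catalog.get(key)).collect()
}
```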
/// Per-queue scheduler leader-election lock key. Independent from the
/// `promoter:lock` so a deployment can run scheduler and promoter on
/// disjoint replicas if it chooses.
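A minimal model of the leader election, assuming a `SET NX PX`-style lock (this struct is a sketch, not the crate's implementation): within one TTL window exactly one replica acquires and runs the tick; the others skip it.

```rust
// In-memory model of a SET NX PX leader lock.
pub struct LeaderLock {
    holder: Option<u64>, // expires_at_ms of the current holder, if any
}

impl LeaderLock {
    pub fn new() -> Self {
        Self { holder: None }
    }

    // True if this replica won the window; false while a live holder exists.
    pub fn try_acquire(&mut self, now_ms: u64, ttl_ms: u64) -> bool {
        match self.holder {
            Some(expires_at) if expires_at > now_ms => false,
            _ => {
                self.holder = Some(now_ms + ttl_ms);
                true
            }
        }
    }
}
```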
/// Per-queue, per-job-id result-backend key. Stores the handler's
/// return value (opaque bytes — every shim msgpack-encodes the user's
/// native value before the bytes cross the FFI boundary) with a
/// configurable TTL set by [`crate::config::ConsumerConfig::result_ttl_secs`].
/// Written by `JOB_OK_SCRIPT` in the same Lua round trip as the
/// `XACKDEL` so the result write is gated on a successful ack — no
/// orphan results when a concurrent CLAIM removed the entry first.
/// Same `{chasqui:<queue>}` hash tag as the rest of the queue's
/// keyspace so the result key always co-locates on a single Redis
/// Cluster slot.
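The ack-gating invariant can be modeled in plain Rust (this is a sketch of the control flow, not the crate's Lua): the result is stored only if the ack actually removed a pending entry, so a job a concurrent CLAIM already took never leaves an orphan result behind.

```rust
use std::collections::HashMap;

// Model of the ack-gated result write in JOB_OK_SCRIPT.
pub fn job_ok(
    pending: &mut HashMap<String, ()>,      // stands in for the stream PEL
    results: &mut HashMap<String, Vec<u8>>, // stands in for the result keys
    job_id: &str,
    result: Vec<u8>,
) -> bool {
    if pending.remove(job_id).is_none() {
        return false; // XACKDEL removed nothing: skip the result write
    }
    results.insert(job_id.to_string(), result);
    true
}
```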
/// Per-queue, per-job-id side-index key used by `Producer::cancel_delayed`.
/// Stores the exact encoded ZSET member so cancel can `ZREM` it precisely
/// instead of scanning with `ZRANGE`. Written by the idempotent schedule
/// path alongside the dedup marker, with the same TTL: after natural
/// expiration (or a post-cancel `DEL`) the key disappears on its own, and
/// the promoter never has to clean it up, because the cancel script already
/// handles the "GET hits, ZREM misses (already promoted)" race correctly.
/// Same `{chasqui:<queue>}` hash tag so it co-locates on the same Cluster
/// slot as the ZSET.
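The cancel path, including the promoted-first race, can be sketched as follows (not the crate's script; the two maps stand in for the side index and the delayed ZSET):

```rust
use std::collections::{BTreeMap, HashMap};

// Model of cancel_delayed: GET the exact member from the side index, then
// ZREM it. If the promoter already moved the job, the ZREM simply misses
// and cancel reports false; nothing is left behind to clean up.
pub fn cancel_delayed(
    index: &mut HashMap<String, String>,  // job id -> encoded ZSET member
    delayed: &mut BTreeMap<String, u64>,  // member -> score
    job_id: &str,
) -> bool {
    let Some(member) = index.remove(job_id) else {
        return false; // never scheduled, expired, or already cancelled
    };
    delayed.remove(&member).is_some() // false here = already promoted
}
```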