pub async fn tail_stream(
client: &Client,
partition_config: &PartitionConfig,
execution_id: &ExecutionId,
attempt_index: AttemptIndex,
after: StreamCursor,
block_ms: u64,
count_limit: u64,
) -> Result<StreamFrames, SdkError>
Tail a live attempt’s stream.
after is an exclusive StreamCursor — XREAD returns entries
with id strictly greater than after. Pass
StreamCursor::from_beginning() (i.e. At("0-0")) to start from
the beginning. StreamCursor::Start / StreamCursor::End are
REJECTED at this boundary because XREAD does not accept - / +
as cursors — an invalid after surfaces as SdkError::Config.
block_ms == 0 → non-blocking peek. block_ms > 0 → blocks up to that
many ms. Rejects block_ms > MAX_TAIL_BLOCK_MS and count_limit
outside 1..=STREAM_READ_HARD_CAP with SdkError::Config to keep
SDK and REST ceilings aligned.
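The boundary checks above can be sketched as follows. This is an illustrative sketch only: the constant values, the SdkError and StreamCursor shapes, and the validate_tail_args helper are assumptions for the example, not the SDK's real definitions.

```rust
// Assumed ceilings -- the real constants live in the SDK, not here.
const MAX_TAIL_BLOCK_MS: u64 = 30_000;
const STREAM_READ_HARD_CAP: u64 = 1_000;

#[derive(Debug)]
enum StreamCursor {
    Start,      // "-", rejected at this boundary: XREAD does not accept it
    End,        // "+", likewise rejected
    At(String), // an explicit stream id, e.g. "0-0"
}

#[derive(Debug)]
enum SdkError {
    Config(String),
}

// Hypothetical helper mirroring the documented rejection rules.
fn validate_tail_args(
    after: &StreamCursor,
    block_ms: u64,
    count_limit: u64,
) -> Result<(), SdkError> {
    if matches!(after, StreamCursor::Start | StreamCursor::End) {
        return Err(SdkError::Config("XREAD cursors must be explicit ids".into()));
    }
    if block_ms > MAX_TAIL_BLOCK_MS {
        return Err(SdkError::Config(format!(
            "block_ms {block_ms} exceeds MAX_TAIL_BLOCK_MS {MAX_TAIL_BLOCK_MS}"
        )));
    }
    if !(1..=STREAM_READ_HARD_CAP).contains(&count_limit) {
        return Err(SdkError::Config(format!(
            "count_limit {count_limit} outside 1..={STREAM_READ_HARD_CAP}"
        )));
    }
    Ok(())
}

fn main() {
    let from_beginning = StreamCursor::At("0-0".to_string());
    assert!(validate_tail_args(&from_beginning, 0, 1).is_ok()); // non-blocking peek
    assert!(validate_tail_args(&from_beginning, 30_000, 1_000).is_ok()); // at the ceilings
    assert!(validate_tail_args(&StreamCursor::End, 1_000, 10).is_err());
    assert!(validate_tail_args(&from_beginning, 30_001, 10).is_err());
    assert!(validate_tail_args(&from_beginning, 1_000, 0).is_err());
    println!("all boundary checks behave as documented");
}
```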
Returns a StreamFrames including closed_at/closed_reason. Polling
consumers should loop until result.is_closed() returns true, then
drain and exit. A timeout with no new frames presents as
frames=[], closed_at=None.
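The polling loop above can be sketched with simplified stand-ins. The StreamFrames stub, the next_cursor field (how the caller advances after between calls), and the fake_tail function are assumptions for this self-contained example; the real call is tail_stream and the real result type also carries closed_reason.

```rust
#[derive(Clone, Debug, PartialEq)]
struct StreamCursor(String);

struct StreamFrames {
    frames: Vec<String>,
    next_cursor: StreamCursor, // assumed: the cursor to pass as `after` next call
    closed_at: Option<u64>,
}

impl StreamFrames {
    fn is_closed(&self) -> bool {
        self.closed_at.is_some()
    }
}

// Deterministic stand-in for tail_stream: two batches, then a closed result.
fn fake_tail(after: &StreamCursor) -> StreamFrames {
    match after.0.as_str() {
        "0-0" => StreamFrames {
            frames: vec!["frame-a".into()],
            next_cursor: StreamCursor("1-0".into()),
            closed_at: None,
        },
        "1-0" => StreamFrames {
            frames: vec!["frame-b".into()],
            next_cursor: StreamCursor("2-0".into()),
            closed_at: None,
        },
        _ => StreamFrames {
            frames: vec![],
            next_cursor: after.clone(),
            closed_at: Some(1_700_000_000),
        },
    }
}

fn main() {
    let mut cursor = StreamCursor("0-0".into()); // i.e. from_beginning
    let mut drained = Vec::new();
    loop {
        let result = fake_tail(&cursor);
        drained.extend(result.frames.iter().cloned());
        cursor = result.next_cursor.clone();
        if result.is_closed() {
            break; // attempt closed: drain complete, exit
        }
        // frames=[] with closed_at=None would just mean a quiet block
        // window; the loop simply issues the next blocking read.
    }
    assert_eq!(drained, vec!["frame-a".to_string(), "frame-b".to_string()]);
    println!("drained {} frames, then observed close", drained.len());
}
```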
§Head-of-line warning — use a dedicated client
ferriskey::Client is a pipelined multiplexed connection; Valkey
processes commands FIFO on it. XREAD BLOCK block_ms does not yield
the read side until a frame arrives or the block elapses. If the
client you pass here is ALSO used for claims, completes, fails,
appends, or any other FCALL, a 30-second tail will stall all those
calls for up to 30 seconds.
Strongly recommended: build a separate ferriskey::Client for
tail callers. This mirrors the Server::tail_client split that the REST
server uses internally (see crates/ff-server/src/server.rs and
RFC-006 Impl Notes §“Dedicated stream-op connection”).
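The head-of-line hazard can be modeled with plain std threads: treat one multiplexed connection as a Mutex, and let a blocking read hold it for its whole block window. Everything here (names, durations) is a toy illustration of the FIFO behavior described above, not ferriskey internals.

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// One "connection": commands run FIFO, so a blocking read holds the lock
// for its entire block window.
fn xread_block(conn: &Mutex<()>, block: Duration) {
    let _guard = conn.lock().unwrap(); // FIFO: one command at a time
    thread::sleep(block);              // simulates BLOCK with no frames arriving
}

fn quick_call(conn: &Mutex<()>) {
    let _guard = conn.lock().unwrap(); // e.g. a claim/complete/append FCALL
}

fn main() {
    let shared = Arc::new(Mutex::new(()));
    let block = Duration::from_millis(200);

    let tail_conn = Arc::clone(&shared);
    let tail = thread::spawn(move || xread_block(&tail_conn, block));
    thread::sleep(Duration::from_millis(50)); // let the tail grab the connection

    let t0 = Instant::now();
    quick_call(&shared); // stalls behind the in-flight blocking read
    let stalled_for = t0.elapsed();
    tail.join().unwrap();

    // The quick call waited out most of the tail's block window.
    assert!(stalled_for >= Duration::from_millis(100));
    println!("quick call stalled ~{:?} behind the blocking read", stalled_for);
}
```

With a dedicated tail client, `quick_call` would run on its own connection and return immediately instead of queueing behind the block.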
§Tail parallelism caveat (same mux)
Even a dedicated tail client is still one multiplexed TCP connection.
Valkey processes XREAD BLOCK calls FIFO on that one socket, and
ferriskey’s per-call request_timeout starts when the call’s future is
first polled — so two concurrent tails against the same client can time
out spuriously:
the second call’s BLOCK budget elapses while it waits for the first
BLOCK to return. The REST server handles this internally with a
tokio::sync::Mutex that serializes xread_block calls, giving
each call its full block_ms budget at the server.
Direct SDK callers that need concurrent tails should either:
(1) build ONE ferriskey::Client per concurrent tail call (a small
pool of clients, rotated by the caller), or
(2) wrap tail_stream calls in your own tokio::sync::Mutex so
only one BLOCK is in flight per client at a time.
If you need the REST-side backpressure (429 on contention) and the
built-in serializer, go through the
/v1/executions/{eid}/attempts/{idx}/stream/tail endpoint rather
than calling this directly.
This SDK does not enforce either pattern — the mutex belongs at the application layer, and the connection pool belongs at the SDK caller’s DI layer; neither has a structured place inside this helper.
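Option (1), the caller-side pool, can be sketched as a tiny round-robin rotation. The TailClient stand-in, the TailPool type, and its API are illustrative assumptions, not part of the SDK; in real code each pool entry would be a freshly connected ferriskey::Client.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for ferriskey::Client.
struct TailClient {
    id: usize,
}

struct TailPool {
    clients: Vec<TailClient>,
    next: AtomicUsize,
}

impl TailPool {
    fn new(size: usize) -> Self {
        // Real code would connect one client per slot here.
        let clients = (0..size).map(|id| TailClient { id }).collect();
        Self { clients, next: AtomicUsize::new(0) }
    }

    // Rotate so concurrent tails land on different connections and each
    // BLOCK gets its own socket, and thus its full block_ms budget.
    fn checkout(&self) -> &TailClient {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.clients.len();
        &self.clients[i]
    }
}

fn main() {
    let pool = TailPool::new(3);
    let picked: Vec<usize> = (0..4).map(|_| pool.checkout().id).collect();
    assert_eq!(picked, vec![0, 1, 2, 0]); // wraps around after the pool size
    println!("rotation order: {:?}", picked);
}
```

A pool sized to the expected tail concurrency avoids both the head-of-line stall and the spurious-timeout interaction without any serialization.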
§Timeout handling
Blocking calls do not hit ferriskey’s default request_timeout (5 s in
the server’s default configuration). For XREAD/XREADGROUP with a BLOCK
argument, ferriskey’s get_request_timeout returns
BlockingCommand(block_ms + 500ms), overriding the client’s default on a
per-call basis. A tail with block_ms = 30_000 therefore gets a 30_500 ms
effective transport timeout even if the client was built with a shorter
request_timeout. No custom client configuration is required for timeout
reasons; a dedicated client is only needed for the head-of-line
isolation described above.
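The effective-timeout rule works out as below. The function and constant names are illustrative; the branch for block_ms == 0 (no BLOCK argument, so the client's normal request_timeout applies) is an assumption based on the non-blocking-peek semantics documented above.

```rust
// Assumed fixed grace window added on top of the block budget.
const BLOCKING_GRACE_MS: u64 = 500;

// Hypothetical model of the per-call timeout selection.
fn effective_request_timeout_ms(block_ms: u64, default_timeout_ms: u64) -> u64 {
    if block_ms > 0 {
        // Blocking read: budget is the block window plus grace,
        // regardless of the client's configured request_timeout.
        block_ms + BLOCKING_GRACE_MS
    } else {
        // Non-blocking peek: the client's normal request_timeout applies.
        default_timeout_ms
    }
}

fn main() {
    assert_eq!(effective_request_timeout_ms(30_000, 5_000), 30_500);
    assert_eq!(effective_request_timeout_ms(0, 5_000), 5_000);
    println!("block_ms = 30_000 gives a 30_500 ms effective timeout");
}
```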