Per-session readiness tracking.
LSP servers complete their initialize handshake in tens of
milliseconds, but real workspace indexing (rust-analyzer’s project
model, pyright’s module graph, tsserver’s file-system walk) can
take 15–60 seconds. Pre-P0-4 harnesses papered over this with a
fixed `sleep 45` after `prepare_harness_session` — honest but
wasteful: every bench run paid the worst-case wait regardless of
how quickly indexing actually finished, and production agent
sessions had no signal at all.
This module exposes a cheap, lock-free readiness snapshot per LSP
session. The pool records:

- `started_at` — the wall-clock instant the session was spawned.
- `ms_to_first_response` — elapsed milliseconds when any LSP call
  first returned `Ok`. Usually the bootstrap `workspace/symbol` from
  the auto-attach prewarm; proves the server’s handshake completed.
- `ms_to_first_nonempty` — elapsed milliseconds when a call first
  returned a non-empty result. This is the stronger signal that
  indexing has progressed far enough to serve real caller queries:
  rust-analyzer and pyright both reply with `[]` while the project is
  still being walked, then start returning real hits once the module
  graph is populated.
- `response_count` / `nonempty_count` / `failure_count` — rolling
  counters so callers can distinguish “indexing still warming” from
  “server is failing every request”.
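The recording side can be sketched as follows. This is a minimal,
hypothetical version of the shared state — field names mirror the list
above, but the sentinel encoding (`0` = unset, stored value = elapsed
ms + 1) and the `record_*` method names are illustrative, not the
crate’s actual API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::Instant;

/// Sketch of the shared readiness state. Everything the session's I/O
/// thread writes is an atomic, so snapshot readers never block.
pub struct ReadinessState {
    started_at: Instant,
    /// 0 = not yet observed; otherwise elapsed milliseconds + 1.
    ms_to_first_response: AtomicU64,
    ms_to_first_nonempty: AtomicU64,
    response_count: AtomicU64,
    nonempty_count: AtomicU64,
    failure_count: AtomicU64,
}

impl ReadinessState {
    pub fn new() -> Arc<Self> {
        Arc::new(Self {
            started_at: Instant::now(),
            ms_to_first_response: AtomicU64::new(0),
            ms_to_first_nonempty: AtomicU64::new(0),
            response_count: AtomicU64::new(0),
            nonempty_count: AtomicU64::new(0),
            failure_count: AtomicU64::new(0),
        })
    }

    /// Called after every successful LSP response.
    pub fn record_response(&self, nonempty: bool) {
        let elapsed = self.started_at.elapsed().as_millis() as u64;
        self.response_count.fetch_add(1, Ordering::Relaxed);
        // compare_exchange so only the *first* response sets the latch.
        let _ = self.ms_to_first_response.compare_exchange(
            0, elapsed + 1, Ordering::Relaxed, Ordering::Relaxed);
        if nonempty {
            self.nonempty_count.fetch_add(1, Ordering::Relaxed);
            let _ = self.ms_to_first_nonempty.compare_exchange(
                0, elapsed + 1, Ordering::Relaxed, Ordering::Relaxed);
        }
    }

    pub fn record_failure(&self) {
        self.failure_count.fetch_add(1, Ordering::Relaxed);
    }

    /// Lock-free read: plain atomic loads, no mutex.
    /// Returns (first_response_ms, first_nonempty_ms, responses,
    /// nonempty, failures).
    pub fn snapshot(&self) -> (Option<u64>, Option<u64>, u64, u64, u64) {
        let decode = |v: u64| if v == 0 { None } else { Some(v - 1) };
        (
            decode(self.ms_to_first_response.load(Ordering::Relaxed)),
            decode(self.ms_to_first_nonempty.load(Ordering::Relaxed)),
            self.response_count.load(Ordering::Relaxed),
            self.nonempty_count.load(Ordering::Relaxed),
            self.failure_count.load(Ordering::Relaxed),
        )
    }
}
```

The `+ 1` sentinel lets one `AtomicU64` carry both “not yet seen” and
“seen at 0 ms” without a separate flag, which keeps the snapshot a set
of independent relaxed loads.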
Reads are via `Arc<ReadinessState>` + atomics, so snapshot calls
never contend with the per-session I/O mutex. That keeps the
downstream MCP `get_lsp_readiness` handler cheap enough for a
500 ms polling loop to be the canonical wait-for-ready mechanism.
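A caller-side wait-for-ready loop over those counters might look like
this. The snapshot struct and the thresholds (5 consecutive failures,
the timeout) are illustrative assumptions, not part of the module; the
polling interval is the 500 ms described above:

```rust
use std::thread;
use std::time::{Duration, Instant};

/// Hypothetical readiness view, mirroring the fields a
/// `get_lsp_readiness` reply would carry (names are illustrative).
struct ReadinessSnapshot {
    ms_to_first_nonempty: Option<u64>,
    response_count: u64,
    failure_count: u64,
}

/// Poll a snapshot source every 500 ms until the session has served a
/// non-empty result, the failure pattern looks hopeless, or we time out.
fn wait_for_ready(
    mut poll: impl FnMut() -> ReadinessSnapshot,
    timeout: Duration,
) -> Result<u64, String> {
    let deadline = Instant::now() + timeout;
    loop {
        let snap = poll();
        if let Some(ms) = snap.ms_to_first_nonempty {
            return Ok(ms); // indexing has produced real hits
        }
        // Failures with zero successes means the server is broken,
        // not merely warming (threshold is an assumption).
        if snap.response_count == 0 && snap.failure_count >= 5 {
            return Err("server failing every request".into());
        }
        if Instant::now() >= deadline {
            return Err("timed out waiting for readiness".into());
        }
        thread::sleep(Duration::from_millis(500));
    }
}
```

Distinguishing “still warming” (`ms_to_first_nonempty` unset but
responses arriving) from “failing every request” is exactly what the
rolling counters exist for.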
Structs§
- `ReadinessSnapshot` — Plain-old-data readiness view for callers
  (MCP handlers, bench scripts). All milliseconds are relative to
  `session.started_at`.
- `ReadinessState` — Readiness state shared between a session’s
  owning thread and the pool’s snapshot readers. Created when a
  session is spawned and retained until the session is dropped.