# lsp-bench 0.2.3

A benchmark framework for Language Server Protocol (LSP) servers.

## 2. Key Structs for Config Parsing

All config structs are in src/main.rs, lines 23-124. Deserialization happens in load_config() at line 184 using serde_yaml::from_str.

```
Config (line 68)
project: String               // Project root directory
file: String                  // Target file relative to project
line: u32                     // Default 0-based line for requests (default: 102)
col: u32                      // Default 0-based column for requests (default: 15)
iterations: usize             // Measured iterations (default: 10)
warmup: usize                 // Warmup iterations, discarded (default: 2)
timeout: u64                  // Seconds per LSP request (default: 10)
index_timeout: u64            // Seconds for server indexing (default: 15)
output: String                // Output directory for JSON results (default: "benchmarks")
benchmarks: Vec<String>       // Which benchmarks to run (empty or ["all"] = all)
report: Option<String>        // Output path for generated report (optional)
report_style: String          // "delta" (default), "readme", or "analysis"
response_limit: usize         // Response truncation: 0=full, N=N chars (default: 80)
                              //   (YAML field is "response", custom deserializer accepts "full" or number)
trigger_character: Option<String> // Deprecated: legacy completion trigger
methods: HashMap<String, MethodConfig> // Per-method position/trigger overrides
servers: Vec<ServerConfig>    // LSP servers to benchmark

MethodConfig (line 53)
line: Option<u32>             // Override line for this method
col: Option<u32>              // Override column for this method
trigger: Option<String>       // Trigger character (only for textDocument/completion)
did_change: Vec<FileSnapshot> // YAML key: "didChange". Snapshot files for sequential editing

FileSnapshot (line 24)
file: String                  // Path to snapshot file (relative to project)
line: u32                     // 0-based line for the benchmark request after this snapshot
col: u32                      // 0-based column for the benchmark request after this snapshot

ServerConfig (line 109)
label: String                 // Display name for this server
description: String           // Server description (default: "")
link: String                  // URL link (default: "")
cmd: String                   // Command to spawn the LSP server
args: Vec<String>             // CLI arguments for the server command
commit: Option<String>        // Git ref to checkout and build from
repo: Option<String>          // Path to git repo (required when commit is set)
```

### Deserialization location

- load_config() at line 184: reads the YAML file with std::fs::read_to_string(), then parses it with serde_yaml::from_str::<Config>()
- Custom deserializer deserialize_response_limit() at line 158: handles the response YAML field, accepting "full" (mapped to 0) or a numeric value
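
The "full"-or-number handling is a common serde pattern. Here is a minimal sketch of how deserialize_response_limit() could be written with an untagged enum; the actual implementation at line 158 may differ:

```rust
use serde::{Deserialize, Deserializer};

// Sketch only: accept either a number of characters or the string "full"
// (mapped to 0, meaning no truncation), as documented for the "response" field.
fn deserialize_response_limit<'de, D>(de: D) -> Result<usize, D::Error>
where
    D: Deserializer<'de>,
{
    #[derive(Deserialize)]
    #[serde(untagged)]
    enum Limit {
        Chars(usize),
        Word(String),
    }
    match Limit::deserialize(de)? {
        Limit::Chars(n) => Ok(n),
        Limit::Word(w) if w == "full" => Ok(0),
        Limit::Word(w) => Err(serde::de::Error::custom(format!(
            "expected \"full\" or a number, got {w:?}"
        ))),
    }
}
```

For orientation, a minimal config exercising the documented fields might look like the following; every path and label is invented, and the snippet only checks well-formedness because Config itself is not reproduced here:

```rust
// Hypothetical benchmark.yaml; the key names come from the struct listing above.
let yaml = r#"
project: ./my-project
file: src/lib.rs
line: 102
col: 15
iterations: 10
warmup: 2
response: full
servers:
  - label: example-ls
    cmd: example-language-server
    args: ["--stdio"]
"#;
let _cfg: serde_yaml::Value = serde_yaml::from_str(yaml).expect("valid YAML");
```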

---

## 3. Key Structs for Result Output

The JSON output is built dynamically using serde_json::json!() macros in save_json() (line 1231). There are no dedicated output structs with #[derive(Serialize)]; instead, intermediate types are serialized via their to_json() methods.

```
BenchResult (enum, line 717)
Ok { iterations: Vec<(f64, Value)>, rss_kb: Option<u64> }
  // iterations = list of (latency_ms, response_json)
Invalid { first_response: Value, rss_kb: Option<u64> }
Fail { error: String, rss_kb: Option<u64> }

BenchRow (line 732)
label: String                  // Server label
p50: f64                       // Median latency
p95: f64                       // 95th percentile latency
mean: f64                      // Mean latency
iterations: Vec<(f64, Value)>  // (ms, response) per iteration
rss_kb: Option<u64>            // Resident set size after indexing
kind: u8                       // 0=ok, 1=invalid, 2=fail
fail_msg: String               // Error message (kind=2)
summary: Value                 // First response summary

BenchRow::to_json() output shape (line 745)
For kind=0 (ok):
{
  server: <label>,
  status: ok,
  p50_ms: <f64>,
  p95_ms: <f64>,
  mean_ms: <f64>,
  iterations: [{ ms: <f64>, response: <Value> }, ...],
  response: <Value>,
  rss_kb: <u64>        // optional
}
For kind=1 (invalid): { "server", "status": "invalid", "response", "rss_kb?" }
For kind=2 (fail): { "server", "status": "fail", "error", "rss_kb?" }

Top-level JSON output shape (built in save_json(), line 1307)
{
  timestamp: <ISO8601>,
  date: <YYYY-MM-DD>,
  settings: {
    iterations: N,
    warmup: N,
    timeout_secs: N,
    index_timeout_secs: N,
    project: ...,
    file: ...,
    line: N,
    col: N,
    methods: { ... }       // optional, only if non-empty
  },
  servers: [
    { name: ..., version: ..., description?: ..., link?: ... }
  ],
  benchmarks: [
    { name: ..., servers: [ <BenchRow::to_json()>, ... ] }
  ]
}
```

### Serialization location

- save_json() at line 1231: constructs the JSON via serde_json::json!() macros, writes with serde_json::to_string_pretty(), saves to {output_dir}/{timestamp}.json
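
To illustrate the shapes above, a simplified BenchRow with the kind=0 branch of to_json() might look like this sketch (the real struct also carries kind and fail_msg to drive the invalid/fail shapes):

```rust
use serde_json::{json, Value};

// Simplified stand-in for BenchRow; field names follow section 3.
struct BenchRow {
    label: String,
    p50: f64,
    p95: f64,
    mean: f64,
    iterations: Vec<(f64, Value)>,
    rss_kb: Option<u64>,
    summary: Value, // first response summary, emitted as "response"
}

impl BenchRow {
    // Sketch of the kind=0 ("ok") output shape documented above.
    fn to_json(&self) -> Value {
        let mut v = json!({
            "server": self.label,
            "status": "ok",
            "p50_ms": self.p50,
            "p95_ms": self.p95,
            "mean_ms": self.mean,
            "iterations": self.iterations
                .iter()
                .map(|(ms, resp)| json!({ "ms": ms, "response": resp }))
                .collect::<Vec<_>>(),
            "response": self.summary,
        });
        if let Some(rss) = self.rss_kb {
            v["rss_kb"] = json!(rss); // optional: only when RSS was measured
        }
        v
    }
}
```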

---

## 4. Main Execution Flow (Function Call Chain)

```
main()                                          [line 1389]
  |
  +-- Cli::parse()                              [line 1390]  -- CLI argument parsing
  |
  +-- [if init subcommand] init_config()        [line 1393-1397]
  |
  +-- load_config()                             [line 1400]  -- YAML -> Config
  |     +-- std::fs::read_to_string()
  |     +-- serde_yaml::from_str::<Config>()
  |
  +-- [for each server with commit] build_from_commit()  [line 1463-1480]
  |     -- git checkout + cargo build --release
  |
  +-- [filter available servers] available()    [line 1482-1492]
  |
  +-- [detect versions] detect_version()        [line 1495-1502]
  |
  +-- [for each benchmark in list]:
  |   |
  |   +-- "initialize" ->
  |   |     run_bench() -> bench_spawn()        [line 1683-1709]
  |   |
  |   +-- "textDocument/diagnostic" ->
  |   |     run_bench() -> bench_diagnostics()  [line 1713-1749]
  |   |
  |   +-- any other LSP method ->
  |   |     [resolve didChange snapshots from methods map]    [line 1760-1772]
  |   |     if snapshots empty:
  |   |       run_bench() -> bench_lsp_method()               [line 1780-1796]
  |   |     else:
  |   |       run_bench() -> bench_lsp_snapshots()            [line 1798-1811]
  |   |
  |   +-- save_json() (partial save after each benchmark)     [line 1814-1829]
  |
  +-- save_json() (final save to output_dir)                  [line 1836-1850]
  |
  +-- std::fs::remove_dir_all(partial_dir)                    [line 1854]
  |
  +-- [if report configured] spawn gen-readme/gen-analysis/gen-delta  [line 1857-1895]
```

### run_bench() orchestration (line 1163)

For each server, run_bench() creates a spinner, calls the benchmark closure, collects results into BenchRow structs (computing p50/p95/mean via stats()), and displays pass/fail.

### bench_lsp_method() inner flow (line 947)

```
LspClient::spawn() -> initialize() -> open_file() -> wait_for_valid_diagnostics()
  -> [measure RSS] -> for each (warmup + iteration):
       send(method, params) -> read_response() -> [retry if invalid until deadline]
       -> collect (ms, response_summary)
  -> kill()
```

### bench_lsp_snapshots() inner flow (line 1050)

```
LspClient::spawn() -> initialize() -> open_file() -> wait_for_valid_diagnostics()
  -> [measure RSS] -> for each snapshot in snapshots[]:
       read snapshot file -> did_change(file_uri, version, content)
       -> send(method, params with snapshot's line/col) -> read_response()
       -> collect (ms, response_summary)
  -> kill()
```
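
The stats() helper referenced above reduces the per-iteration latencies to the (p50, p95, mean) triple stored on each BenchRow. A plausible sketch using nearest-rank percentiles; the real implementation may round or interpolate differently:

```rust
// Sketch of stats(): (p50, p95, mean) from a non-empty list of latencies in ms.
fn stats(mut ms: Vec<f64>) -> (f64, f64, f64) {
    ms.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let pct = |p: f64| ms[((ms.len() - 1) as f64 * p).round() as usize];
    let mean = ms.iter().sum::<f64>() / ms.len() as f64;
    (pct(0.50), pct(0.95), mean)
}
```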

---

## 5. didChange Snapshot Iteration

### Where snapshots are configured

Snapshots are configured in YAML under methods.<method>.didChange (YAML key didChange, Rust field did_change in MethodConfig, lines 63-64). Each entry is a FileSnapshot with file, line, and col.
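
As a hypothetical illustration, the corresponding YAML might look like the snippet below (method, paths, and positions are invented; the string is parsed only for well-formedness since the real structs are not reproduced here):

```rust
// Shape of a methods entry with didChange snapshots; key names from section 2.
let yaml = r#"
methods:
  textDocument/definition:
    didChange:
      - { file: snapshots/step1.rs, line: 10, col: 4 }
      - { file: snapshots/step2.rs, line: 42, col: 8 }
"#;
let _doc: serde_yaml::Value = serde_yaml::from_str(yaml).expect("valid YAML");
```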
### Where snapshots are resolved

Snapshots are resolved in main() at lines 1760-1772: for each LSP method benchmark, the code checks methods.get(method) for did_change entries and maps each FileSnapshot into a ResolvedSnapshot (absolute path via cwd.join(&s.file)).
### Where snapshots are applied and results collected

Both happen in bench_lsp_snapshots() at lines 1050-1160:

1. Lines 1062-1097: Spawn server, initialize, open original file, wait for diagnostics (same as normal benchmark)
2. Lines 1100-1157: For each snapshot (indexed by si):
   - Line 1103: Compute version = si + 2 (didOpen was version 1)
   - Lines 1113-1125: Read snapshot file content, call c.did_change(&file_uri, version, &content) -- this sends a full-document textDocument/didChange notification replacing the entire file content (see the payload sketch after this list)
   - Lines 1128-1131: Build request params using the snapshot's own line and col (not the global position)
   - Lines 1132-1156: Send the LSP request (e.g. textDocument/definition), read response, collect (ms, response_summary) into the iterations vector
3. Each snapshot produces exactly one iteration/result in the final output.
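
For reference, the full-document notification sent in step 2 has the standard LSP textDocument/didChange shape: a single content change without a range replaces the entire document. All values below are placeholders:

```rust
use serde_json::json;

// textDocument/didChange params: one rangeless content change = full replace.
let file_uri = "file:///project/src/lib.rs"; // placeholder URI
let version = 2;                             // didOpen used version 1
let content = "fn main() {}";                // snapshot file content
let _params = json!({
    "textDocument": { "uri": file_uri, "version": version },
    "contentChanges": [{ "text": content }]
});
```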
### Key: snapshots are not warmup + iteration loops

Unlike bench_lsp_method(), which repeats the same request warmup + iterations times, bench_lsp_snapshots() runs one request per snapshot. The number of "iterations" in the output equals the number of snapshot entries, and there is no warmup in snapshot mode.
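
Putting the pieces together, the per-snapshot loop reduces to something like the sketch below. ResolvedSnapshot, did_change, and send_request are stand-ins for the real types and LspClient calls described above, not the actual API:

```rust
use serde_json::Value;
use std::time::Instant;

// Stand-in for the ResolvedSnapshot built in main() (absolute path + position).
struct ResolvedSnapshot {
    path: std::path::PathBuf,
    line: u32,
    col: u32,
}

// Condensed sketch of bench_lsp_snapshots()'s inner loop: one request per
// snapshot, no warmup. The closures abstract over LspClient's did_change()
// and send()/read_response() calls.
fn run_snapshots(
    snapshots: &[ResolvedSnapshot],
    file_uri: &str,
    mut did_change: impl FnMut(&str, i32, &str),
    mut send_request: impl FnMut(u32, u32) -> Value,
) -> std::io::Result<Vec<(f64, Value)>> {
    let mut iterations = Vec::new();
    for (si, snap) in snapshots.iter().enumerate() {
        let version = si as i32 + 2; // didOpen was version 1
        let content = std::fs::read_to_string(&snap.path)?;
        did_change(file_uri, version, &content); // full-document replace
        let t0 = Instant::now();
        let resp = send_request(snap.line, snap.col); // snapshot's own position
        iterations.push((t0.elapsed().as_secs_f64() * 1000.0, resp));
    }
    Ok(iterations) // exactly one (ms, response) entry per snapshot
}
```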

---

## 6. CLI Flags

```
lsp-bench (main binary)
USAGE: lsp-bench [OPTIONS] [COMMAND]
OPTIONS:
  -c, --config <CONFIG>    Config file path [default: benchmark.yaml]
  -h, --help               Print help
  -V, --version            Print version (format: X.Y.Z+commit.HASH.OS.ARCH)
COMMANDS:
  init    Generate a benchmark.yaml template
    OPTIONS:
      -c, --config <CONFIG>    Output path for the generated config [default: benchmark.yaml]

gen-readme binary
USAGE: gen-readme [OPTIONS] [INPUT]
ARGS:
  [INPUT]    Path to benchmark JSON (default: latest in benchmarks/)
OPTIONS:
  -o, --output <OUTPUT>    Output file path [default: README.md]
  -q, --quiet              Don't print README to stdout

gen-analysis binary
USAGE: gen-analysis [OPTIONS] [INPUT]
ARGS:
  [INPUT]    Path to benchmark JSON (default: latest in benchmarks/)
OPTIONS:
  -o, --output <OUTPUT>    Output file path [default: ANALYSIS.md]
  --base <BASE>            Server for head-to-head comparison (default: first server)
  -q, --quiet              Don't print analysis to stdout

gen-delta binary
USAGE: gen-delta [OPTIONS] [INPUT]
ARGS:
  [INPUT]    Path to benchmark JSON (default: latest in benchmarks/)
OPTIONS:
  -o, --output <OUTPUT>    Write table to file (default: stdout only)
  --base <BASE>            Baseline server (default: first server)
  --head <HEAD>            Head server to compare (default: second server)
  -q, --quiet              Don't print table to stdout
```
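
Since main() spawns these generators when report is configured (section 4), they can also be invoked stand-alone. A hypothetical example from Rust, with invented server labels; the flags themselves are the documented ones:

```rust
use std::process::Command;

// Run gen-delta against the latest JSON in benchmarks/ (the default input),
// comparing two servers by label and also writing the table to DELTA.md.
let status = Command::new("gen-delta")
    .args(["--base", "server-a", "--head", "server-b", "-o", "DELTA.md"])
    .status()
    .expect("failed to spawn gen-delta");
assert!(status.success());
```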