# lsp-bench
A benchmarking framework for Language Server Protocol (LSP) servers. Measures latency, correctness, and memory usage for any LSP server that communicates over JSON-RPC stdio.
## Install
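Assuming the crate is published on crates.io under the binary name `lsp-bench` (an assumption; adjust if you install from source), installation via Cargo would look like:

```sh
# assumes a published crates.io binary crate named lsp-bench
cargo install lsp-bench
```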
## Quick Start

```sh
# edit benchmark.yaml with your project and servers
lsp-bench
```
## What It Measures
| Benchmark | What it tests |
|---|---|
| `initialize` | Cold-start time (fresh process per iteration) |
| `textDocument/diagnostic` | Time to analyze a file and return diagnostics |
| `textDocument/definition` | Go to Definition latency |
| `textDocument/declaration` | Go to Declaration latency |
| `textDocument/typeDefinition` | Go to Type Definition latency |
| `textDocument/implementation` | Go to Implementation latency |
| `textDocument/hover` | Hover information latency |
| `textDocument/references` | Find References latency |
| `textDocument/completion` | Completion suggestions latency |
| `textDocument/signatureHelp` | Signature Help latency |
| `textDocument/rename` | Rename symbol latency |
| `textDocument/prepareRename` | Prepare Rename latency |
| `textDocument/documentSymbol` | Document Symbols latency |
| `textDocument/documentLink` | Document Links latency |
| `textDocument/formatting` | Document Formatting latency |
| `textDocument/foldingRange` | Folding Ranges latency |
| `textDocument/selectionRange` | Selection Ranges latency |
| `textDocument/codeLens` | Code Lens latency |
| `textDocument/inlayHint` | Inlay Hints latency |
| `textDocument/semanticTokens/full` | Semantic Tokens latency |
| `textDocument/documentColor` | Document Color latency |
| `workspace/symbol` | Workspace Symbol search latency |
Each benchmark records per-iteration latency (p50, p95, mean), the full LSP response, and resident memory (RSS).
## Configuration

Create a `benchmark.yaml`:
```yaml
project: my-project
file: src/main.rs
line: 45
col: 12

iterations: 10
warmup: 2
timeout: 10
index_timeout: 15

benchmarks:
  - all

# Response output: "full" or a number (default: 80)
# response: full

servers:
  - label: my-server
    cmd: my-language-server
    args:
  - label: other-server
    cmd: other-lsp
    args:
```
### Config Fields
| Field | Default | Description |
|---|---|---|
| `project` | -- | Path to project root |
| `file` | -- | Target file to benchmark (relative to project) |
| `line` | 102 | Target line for position-based benchmarks (0-based) |
| `col` | 15 | Target column (0-based) |
| `iterations` | 10 | Measured iterations per benchmark |
| `warmup` | 2 | Warmup iterations (discarded) |
| `timeout` | 10 | Seconds per LSP request |
| `index_timeout` | 15 | Seconds for server to index |
| `output` | benchmarks | Directory for JSON results |
| `benchmarks` | all | List of benchmarks to run |
| `response` | 80 | `full` (no truncation) or a number (truncate to N chars) |
| `report` | -- | Output path for generated report |
| `report_style` | delta | Report format: `delta`, `readme`, or `analysis` |
## CLI

| Flag | Description |
|---|---|
| `-c, --config <PATH>` | Config file (default: `benchmark.yaml`) |
| `-V, --version` | Show version (includes commit hash, OS, arch) |
| `-h, --help` | Show help |
All benchmark settings are configured in the YAML file.
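For example, to run against a config file other than the default, use the `-c`/`--config` flag from the table above (the path here is a placeholder):

```sh
# run benchmarks with an explicit config file
lsp-bench --config path/to/benchmark.yaml
```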
## Binaries

| Binary | Purpose |
|---|---|
| `lsp-bench` | Run benchmarks, produce JSON snapshots |
| `gen-readme` | Generate README with medals and feature matrix |
| `gen-analysis` | Generate per-feature analysis report |
| `gen-delta` | Generate compact comparison table |
## Output
JSON snapshots with per-iteration latency, response data, and memory:
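A hypothetical snapshot might look like the following. Every field name here is invented for illustration based on the description above; the tool's actual schema may differ.

```jsonc
// Illustrative only: field names are hypothetical, not lsp-bench's real schema
{
  "benchmark": "textDocument/hover",
  "server": "my-server",
  "iterations_ms": [12.4, 11.8, 12.1],
  "p50_ms": 12.1,
  "p95_ms": 12.4,
  "mean_ms": 12.1,
  "rss_bytes": 104857600
}
```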
## Methodology
- Real LSP requests over JSON-RPC stdio
- Sequential iterations (next starts after previous completes)
- Fresh server process for `initialize` and `textDocument/diagnostic`
- Persistent server for method benchmarks (definition, hover, etc.)
- RSS memory sampled after indexing
- Warmup iterations discarded from measurements
## License
MIT