---
title: "Performance"
description: >
This page presents performance benchmarks for Panache, comparing its formatting
speed against popular alternatives like Prettier, Pandoc, and rumdl on real
Quarto documents. The benchmarks highlight Panache's efficiency and suitability
for on-save formatting in editors.
engine: knitr
freeze: true
---
## Overview
**panache** is designed for speed without compromising on correctness. Built in
Rust and compiled to native code, it delivers fast formatting with zero startup
overhead.
The numbers on this page are produced by the script in
[`benches/compare_all.sh`](https://github.com/jolars/panache/blob/main/benches/compare_all.sh)
and read from the JSON file written next to this document. Quarto's `freeze`
caches the benchmark run; to refresh the numbers, delete
`docs/_freeze/guide/performance/` and re-render.
```{bash}
#| label: run-benchmarks
#| echo: false
#| output: false
cd ../..
bash benches/compare_all.sh --json --out docs/guide/performance_data.json
```
## Benchmark Results
```{ojs}
//| label: load-data
//| echo: false
data = FileAttachment("performance_data.json").json()
```
```{ojs}
//| label: prepare-panache-data
//| echo: false
panacheBy = new Map(
data.results
.filter(r => r.formatter === "panache")
.map(r => [r.document, r.mean_ms])
)
```
```{ojs}
//| label: compute-ratios-by-formatter
//| echo: false
ratiosByFormatter = {
const out = {};
for (const r of data.results) {
if (r.formatter === "panache") continue;
const base = panacheBy.get(r.document);
if (!base) continue;
(out[r.formatter] ??= []).push(r.mean_ms / base);
}
return out;
}
```
```{ojs}
//| label: compute-ratios
//| echo: false
range = (formatter) => {
const xs = ratiosByFormatter[formatter] ?? [];
if (!xs.length) return "n/a";
const lo = Math.min(...xs), hi = Math.max(...xs);
return `${Math.round(lo)}\u2013${Math.round(hi)}x`;
}
prettierRange = range("prettier")
pandocRange = range("pandoc")
rumdlRange = range("rumdl")
```
```{ojs}
//| label: render-environment
//| echo: false
html`<div class="callout-note">
<div class="callout-header" style="font-weight: 600">Test Environment</div>
<ul style="margin: 0">
<li>Generated: ${new Date(data.meta.generated_at).toUTCString()}</li>
<li>Host: ${data.meta.host.cpu} (${data.meta.host.os}/${data.meta.host.arch})</li>
<li>Backend: <code>${data.meta.backend}</code>${
data.meta.backend === "shell-loop"
? html` <span style="opacity:.7">(install <code>hyperfine</code> for stddev/min/max)</span>`
: ""
}</li>
<li>panache ${data.meta.tools.panache.version}, prettier ${data.meta.tools.prettier?.version ?? "n/a"}, pandoc ${data.meta.tools.pandoc?.version ?? "n/a"}, rumdl ${data.meta.tools.rumdl?.version ?? "n/a"}</li>
<li>Documents: real Quarto/Markdown files (see <code>benches/documents/download.sh</code>)</li>
</ul>
</div>`
```
```{ojs}
//| label: plot-results
//| echo: false
Plot.plot({
marginLeft: 60,
marginBottom: 60,
height: 360,
x: {label: null, tickRotate: -20},
y: {type: "log", label: "Mean time (ms, log)"},
color: {
legend: true,
domain: ["panache", "rumdl", "pandoc", "prettier"],
range: ["#2a9d8f", "#264653", "#e76f51", "#f4a261"]
},
marks: [
Plot.barY(data.results, {
x: "formatter",
y: "mean_ms",
fill: "formatter",
fx: "document",
sort: {fx: null}
}),
Plot.ruleY([1])
]
})
```
### Per-document timings
```{ojs}
//| label: per-document-timings
//| echo: false
html`${data.documents.map(doc => {
const sizeKb = doc.size_bytes >= 1024
? (doc.size_bytes / 1024).toFixed(1) + " KB"
: doc.size_bytes + " B";
const base = panacheBy.get(doc.id);
const rows = data.results
.filter(r => r.document === doc.id)
.sort((a, b) => a.mean_ms - b.mean_ms)
.map(r => ({
Formatter: r.formatter,
"Mean (ms)": r.mean_ms.toFixed(1),
"vs panache": r.formatter === "panache"
? "baseline"
: `${(r.mean_ms / base).toFixed(1)}x slower`,
Runs: r.runs
}));
return html`<h4>${doc.name}</h4>
<p style="opacity:.7;margin-top:-.5em">
${doc.file} \u2014 ${sizeKb}, ${doc.lines} lines, ${doc.iterations} iterations
</p>
${Inputs.table(rows, {
rows: rows.length,
width: {Formatter: 120, "Mean (ms)": 100, "vs panache": 140, Runs: 80}
})}`;
})}`
```
## Directory-scale formatting
The numbers above measure single-document formatting. Real projects, however,
format hundreds of files in a single invocation: in pre-commit hooks, in CI, or
when formatting an entire workspace. Panache parallelizes file-level work across
CPU cores whenever it is given more than one file (controlled via `--jobs`,
which defaults to the machine's available parallelism).
The benchmark below formats every Markdown file in this repository (`*.md`,
currently `{ojs} multifile.corpus.file_count` files) in a single invocation per
tool. Each tool runs end-to-end against the same corpus, re-staged from disk
between samples. Pandoc is excluded -- it has no batch mode, so a fair
comparison would mostly measure shell-loop overhead.
```{bash}
#| label: run-multifile
#| echo: false
#| output: false
cd ../..
bash benches/compare_multifile.sh --out docs/guide/performance_multifile_data.json
```
```{ojs}
//| label: load-multifile-data
//| echo: false
multifile = FileAttachment("performance_multifile_data.json").json()
```
```{ojs}
//| label: prepare-multifile-results
//| echo: false
multifileToolLabel = ({
"panache-jobs1": "panache --jobs 1 (serial)",
"panache-jobs0": "panache --jobs 0 (auto)",
prettier: "prettier",
rumdl: "rumdl"
})
multifileResults = multifile.results
.filter(r => !r.failed && r.mean_ms != null)
.map(r => ({...r, label: multifileToolLabel[r.tool] ?? r.tool}))
multifileBaseline = multifileResults.find(r => r.tool === "panache-jobs0")?.mean_ms
```
```{ojs}
//| label: plot-multifile-results
//| echo: false
Plot.plot({
marginLeft: 200,
marginBottom: 50,
height: 240,
x: {label: "Mean time (ms, log)", type: "log"},
y: {label: null},
color: {legend: false},
marks: [
Plot.barX(multifileResults, {
x: "mean_ms",
y: "label",
fill: d => d.tool.startsWith("panache") ? "#2a9d8f" : "#264653",
sort: {y: "x"}
}),
Plot.ruleX([1])
]
})
```
```{ojs}
//| label: render-multifile-table
//| echo: false
multifileTable = multifileResults
.slice()
.sort((a, b) => a.mean_ms - b.mean_ms)
.map(r => ({
Tool: r.label,
"Mean (ms)": r.mean_ms.toFixed(1),
"vs panache --jobs 0": r.tool === "panache-jobs0"
? "baseline"
: `${(r.mean_ms / multifileBaseline).toFixed(2)}x slower`,
Runs: r.runs
}))
Inputs.table(multifileTable, {rows: multifileTable.length})
```
`panache --jobs 0` parallelizes across all available cores; `--jobs 1` keeps the
outer loop serial (the pre-parallelism baseline). The gap between the two shows
how much speedup file-level parallelism contributes on this corpus.
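If you want that ratio without re-reading the table, you can pull it straight
from the generated JSON; the `results[].tool` and `mean_ms` field names below
match the ones the OJS cells on this page already read:

```shell
# Speedup = serial mean / parallel mean. Run from the repository root after
# the benchmark JSON has been generated; prints a hint otherwise.
json=docs/guide/performance_multifile_data.json
if [ -f "$json" ] && command -v jq >/dev/null 2>&1; then
  jq -r '(.results[] | select(.tool == "panache-jobs1") | .mean_ms) /
         (.results[] | select(.tool == "panache-jobs0") | .mean_ms)
         | "file-level parallelism speedup: \(. * 100 | round / 100)x"' "$json"
else
  echo "benchmark JSON not found (render the page first)"
fi
```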
## Key Takeaways
### 🚀 Faster Than Alternatives
- **`{ojs} prettierRange` faster than Prettier** -- Zero Node.js startup overhead
- **`{ojs} pandocRange` faster than Pandoc** -- Optimized for formatting (Pandoc is
general-purpose)
- **`{ojs} rumdlRange` faster than rumdl** -- Another Rust-native markdown tool, but
with a different design focus
- **Sub-10ms formatting** for typical documents -- imperceptible latency for
on-save formatting
### 💪 Why So Fast?
1. **Native compilation** -- Rust compiles to machine code, no interpreter
overhead
2. **Zero startup time** -- unlike Node.js/JVM-based tools
3. **Efficient parsing** -- single-pass parser with integrated inline handling
4. **Optimized data structures** -- lossless CST built with the `rowan` crate
### 📊 What About Prettier?
Prettier's time is dominated by Node.js startup (\~60-70ms), not formatting
logic. For one-off CLI invocations, panache wins decisively. For long-running
processes (e.g., editor integration), Prettier's startup cost is amortized, but
panache remains faster.
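You can get a rough feel for that startup floor yourself. The sketch below
(requires `node`) runs an empty-bodied script whose only work is reporting
`process.uptime()` at its first line, which approximates how long the runtime
took to start:

```shell
# process.uptime() measured at the first line of user code is roughly the
# runtime's startup cost; the script does nothing else.
if command -v node >/dev/null 2>&1; then
  node -e 'console.log(`node startup: ${Math.round(process.uptime() * 1000)} ms`)'
else
  echo "node not installed"
fi
```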
### 🎯 What About Pandoc?
Pandoc is a general-purpose document converter supporting 40+ formats. panache
is specialized for markdown/Quarto formatting, enabling focused optimizations.
Pandoc remains the gold standard for syntax correctness, but panache delivers
substantially faster formatting.
### 🦀 What About rumdl?
[rumdl](https://github.com/rvben/rumdl) is another Rust-native markdown tool,
primarily a *linter* whose `rumdl fmt` command is documented as an alias for
`check --fix`. There is no dedicated formatting code path: every "format" run
goes through the full lint engine and applies whichever fixes the rules
produced. Both tools benefit from native compilation and zero startup overhead,
so the gap is much smaller than with Node.js- or Haskell-based alternatives, but
panache's narrower focus on formatting Quarto/Pandoc markdown still wins.
## Reproducing Benchmarks
All benchmarks are reproducible:
```bash
# Download test documents (idempotent)
cd benches/documents && ./download.sh && cd ../..
# Run comparison benchmark and write JSON
bash benches/compare_all.sh --json --out docs/guide/performance_data.json
# Or run the human-readable text variant
bash benches/compare_all.sh
# Re-render this page (uses freeze cache by default)
quarto render docs/guide/performance.qmd
# Force the benchmark to re-run by invalidating the freeze cache
rm -rf docs/_freeze/guide/performance
quarto render docs/guide/performance.qmd
```
For best statistics, install [hyperfine](https://github.com/sharkdp/hyperfine)
-- the script will detect it and use it for warmup + stddev/min/max.
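For a quick one-off comparison outside the script, a direct hyperfine
invocation works too. The document path below is illustrative (substitute any
file from `benches/documents/`), and the sketch assumes both formatters are on
`PATH`:

```shell
# Compare two formatters on one document, with warmup runs and JSON export.
# Guarded so the sketch is a no-op where hyperfine or panache is missing.
if command -v hyperfine >/dev/null 2>&1 && command -v panache >/dev/null 2>&1; then
  hyperfine --warmup 3 --export-json /tmp/format-bench.json \
    'panache benches/documents/example.md' \
    'prettier benches/documents/example.md'
else
  echo "install hyperfine (and the formatters) first"
fi
```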