---
title: "Performance"
description: >
  This page presents performance benchmarks for Panache, comparing its formatting
  and linting speed against popular alternatives like Prettier, Pandoc, rumdl,
  mdformat, mado, markdownlint, and markdownlint-cli2 on real Quarto and Markdown
  documents. The benchmarks highlight Panache's efficiency and suitability for
  on-save formatting and fast repository-wide checks.
engine: knitr
---
## Overview
**Panache** is designed for speed without compromising on correctness. Built in
Rust and compiled to native code, it delivers fast formatting with negligible
startup overhead. In this document, we present benchmarks comparing Panache's
formatting and linting performance against popular alternatives like Prettier,
Pandoc, rumdl, mdformat, mado, markdownlint, and markdownlint-cli2 on a
realistic corpus of Quarto and Markdown documents. We don't include R Markdown
benchmarks here because no other tool supports that format.

We have split the benchmarks into two suites: per-document benchmarks that run
each formatter on each document in the corpus individually, and repository-wide
benchmarks that run formatters on entire repositories of tracked documents. The
former highlights raw formatting speed on a variety of real-world documents,
which is what matters for on-save formatting; the latter captures the overhead
of processing many files in a single run, which is more representative of
repository-wide checks such as linting in CI. Caches are disabled for all
benchmarks to show worst-case performance.

The numbers on this page are produced by the scripts in
[`benches/`](https://github.com/jolars/panache/tree/main/benches) and read from
the JSON files written next to this document. The benchmark chunks on this page
are intentionally not executed during preview or render, so small content edits
reuse the existing JSON files instead of rerunning the benchmarks. To refresh
the benchmark data, run the commands at the bottom of this page explicitly, then
delete `docs/_freeze/guide/performance/` and re-render so the page picks up the
newly generated results.
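As a concrete sketch of the normalization used throughout this page: each tool's mean time on a document is divided by panache's mean time on the same document, so panache sits at exactly 1×. The snippet below shows this in plain JavaScript with made-up sample rows; the field names (`formatter`, `document`, `mean_ms`) match the JSON consumed by the OJS chunks on this page.

```javascript
// Hypothetical sample rows in the shape of the "results" array
// inside performance_data.json (the numbers are invented).
const results = [
  { formatter: "panache", document: "doc-a", mean_ms: 10 },
  { formatter: "prettier", document: "doc-a", mean_ms: 250 },
  { formatter: "pandoc", document: "doc-a", mean_ms: 90 },
];

// Baseline: panache's mean time per document.
const baseline = new Map(
  results
    .filter((r) => r.formatter === "panache")
    .map((r) => [r.document, r.mean_ms])
);

// Each row's ratio is its mean time over the panache baseline,
// so panache itself lands at exactly 1.
const records = results.map((r) => ({
  ...r,
  ratio: r.mean_ms / baseline.get(r.document),
}));

console.log(records.map((r) => `${r.formatter}: ${r.ratio}x`).join("\n"));
```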
```{bash}
#| label: run-benchmarks
#| echo: false
#| output: false
#| eval: false
cd ../..
bash benches/compare_all.sh --json --out docs/guide/performance_data.json
```
```{ojs}
//| label: load-data
//| echo: false
data = FileAttachment("performance_data.json").json()
```
```{ojs}
//| label: prepare-records
//| echo: false
documentMeta = new Map(data.documents.map(d => [d.id, d]))
panacheByDoc = d3.rollup(
  data.results.filter(r => r.formatter === "panache"),
  v => v[0].mean_ms,
  r => r.document
)
records = data.results.map(r => ({
  formatter: r.formatter,
  document: r.document,
  documentName: documentMeta.get(r.document)?.name ?? r.document,
  mean_ms: r.mean_ms,
  runs: r.runs,
  ratio: r.mean_ms / panacheByDoc.get(r.document)
}))
formatters = Array.from(
  d3.rollup(records, v => d3.mean(v, d => d.ratio), d => d.formatter)
).sort(([, a], [, b]) => a - b).map(([k]) => k)
documentIds = data.documents.map(d => d.id)
```
```{ojs}
//| label: compute-ratios
//| echo: false
formatterRanges = d3.rollup(
  records.filter(r => r.formatter !== "panache"),
  v => {
    const lo = Math.round(d3.min(v, d => d.ratio));
    const hi = Math.round(d3.max(v, d => d.ratio));
    return `${lo}\u2013${hi}x`;
  },
  d => d.formatter
)
prettierRange = formatterRanges.get("prettier") ?? "n/a"
pandocRange = formatterRanges.get("pandoc") ?? "n/a"
rumdlRange = formatterRanges.get("rumdl") ?? "n/a"
mdformatRange = formatterRanges.get("mdformat") ?? "n/a"
```
```{ojs}
//| label: render-environment
//| echo: false
html`<div class="callout-note">
  <div class="callout-header" style="font-weight: 600">Test Environment</div>
  <ul style="margin: 0">
    <li>Generated: ${new Date(data.meta.generated_at).toUTCString()}</li>
    <li>Host: ${data.meta.host.cpu} (${data.meta.host.os}/${data.meta.host.arch})</li>
    <li>Minimum runs: ${data.meta.min_runs}</li>
    <li>panache ${data.meta.tools.panache.version},
      prettier ${data.meta.tools.prettier?.version ?? "n/a"},
      pandoc ${data.meta.tools.pandoc?.version ?? "n/a"},
      rumdl ${data.meta.tools.rumdl?.version ?? "n/a"},
      mdformat ${data.meta.tools.mdformat?.version ?? "n/a"},
      mado ${data.meta.tools.mado?.version ?? "n/a"},
      markdownlint ${data.meta.tools.markdownlint?.version ?? "n/a"},
      markdownlint-cli2 ${data.meta.tools["markdownlint-cli2"]?.version ?? "n/a"}</li>
  </ul>
</div>`
```
## Formatting
### Single-Document
In @fig-plot-results, we compare formatting time per document across panache,
Prettier, Pandoc, rumdl, and mdformat. Each dot is one document; the y-axis
shows time relative to panache (×, lower is faster). Panache sits at 1× by
construction (dashed baseline) and other formatters land above it. Hover a point
to see the absolute wall-clock time in milliseconds. Formatters are ordered
left-to-right from fastest to slowest on average.
```{ojs}
//| label: fig-plot-results
//| echo: false
//| fig-cap: "Formatting time per document, comparing panache to Prettier,
//| Pandoc, rumdl, and mdformat. Each dot is one document; the y-axis shows
//| time relative to panache (×, lower is faster)."
Plot.plot({
  marginLeft: 60,
  marginRight: 20,
  marginBottom: 40,
  marginTop: 20,
  height: 360,
  x: {label: null, domain: formatters},
  y: {label: "Mean time relative to panache (×, lower is faster)", grid: true},
  color: {
    legend: true,
    domain: documentIds,
    tickFormat: id => documentMeta.get(id)?.name ?? id,
    scheme: "tableau10"
  },
  marks: [
    Plot.ruleY([1], {stroke: "#2a9d8f", strokeWidth: 2, strokeDasharray: "4 3"}),
    Plot.dot(records, {
      x: "formatter",
      y: "ratio",
      fill: "document",
      r: 6,
      fillOpacity: 0.85,
      stroke: "white",
      strokeWidth: 1,
      channels: {
        document: "documentName",
        "time (ms)": d => d.mean_ms.toFixed(1),
        "× panache": d => d.ratio.toFixed(2)
      },
      tip: true
    })
  ]
})
```
```{ojs}
//| label: prepare-repo-suite-helper
//| echo: false
prepareRepoSuiteRows = (data, toolLabels) => {
  // Repo metadata (name, file count, total bytes) keyed by repo id.
  const repoMeta = new Map(data.repos.map(r => [r.id, r]))
  // Panache's mean time per repo serves as the baseline for ratios.
  const baselineByRepo = new Map(
    data.results
      .filter(r => r.tool === "panache" && !r.failed && r.mean_ms != null)
      .map(r => [r.repo, r.mean_ms])
  )
  // Keep successful runs on repos with a baseline, attach display
  // labels, and compute the relative-to-panache ratio.
  return data.results
    .filter(r => !r.failed && r.mean_ms != null && baselineByRepo.has(r.repo))
    .map(r => {
      const repo = repoMeta.get(r.repo)
      const repoLabel = repo?.name ?? r.repo
      const toolLabel = toolLabels[r.tool] ?? r.tool
      return {
        ...r,
        repoLabel,
        toolLabel,
        fileCount: repo?.file_count ?? null,
        totalBytes: repo?.total_bytes ?? null,
        barLabel: `${repoLabel} — ${toolLabel}`,
        ratio: r.mean_ms / baselineByRepo.get(r.repo)
      }
    })
}
```
### Repository-Wide
```{bash}
#| label: run-repo-markdown-format
#| echo: false
#| output: false
#| eval: false
cd ../..
bash benches/compare_repo_suite.sh --mode format --track markdown --out docs/guide/performance_repo_markdown_format_data.json
```
```{ojs}
//| label: load-repo-markdown-format
//| echo: false
repoMarkdownFormat = FileAttachment("performance_repo_markdown_format_data.json").json()
```
```{ojs}
//| label: prepare-repo-markdown-format
//| echo: false
repoMarkdownFormatResults = prepareRepoSuiteRows(repoMarkdownFormat, {
  panache: "panache",
  prettier: "prettier",
  rumdl: "rumdl",
  mdformat: "mdformat"
})
repoMarkdownFormatTools = Array.from(
  d3.rollup(repoMarkdownFormatResults, v => d3.mean(v, d => d.ratio), d => d.toolLabel)
).sort(([, a], [, b]) => a - b).map(([k]) => k)
repoMarkdownFormatRepos = repoMarkdownFormat.repos.map(r => r.id)
repoMarkdownFormatRepoMeta = new Map(repoMarkdownFormat.repos.map(r => [r.id, r]))
```
In @fig-repo-markdown-format, we compare formatting time across panache,
Prettier, rumdl, and mdformat on tracked Markdown files from several
repositories.
```{ojs}
//| label: fig-repo-markdown-format
//| echo: false
//| fig-cap: "Repository-wide formatting benchmarks on standard Markdown repos,
//| comparing panache to Prettier, rumdl, and mdformat. Each dot is one
//| repo/tool pair; the y-axis shows time relative to panache within that repo
//| (×, lower is faster)."
Plot.plot({
  marginLeft: 60,
  marginRight: 20,
  marginBottom: 40,
  marginTop: 20,
  height: 320,
  x: {label: null, domain: repoMarkdownFormatTools},
  y: {label: "Mean time relative to panache within each repo (×, lower is faster)", grid: true},
  color: {
    legend: true,
    domain: repoMarkdownFormatRepos,
    tickFormat: id => repoMarkdownFormatRepoMeta.get(id)?.name ?? id,
    scheme: "tableau10"
  },
  marks: [
    Plot.ruleY([1], {stroke: "#2a9d8f", strokeWidth: 2, strokeDasharray: "4 3"}),
    Plot.dot(repoMarkdownFormatResults, {
      x: "toolLabel",
      y: "ratio",
      fill: "repo",
      r: 6,
      fillOpacity: 0.85,
      stroke: "white",
      strokeWidth: 1,
      channels: {
        repo: "repoLabel",
        tool: "toolLabel",
        files: d => d.fileCount,
        "time (ms)": d => d.mean_ms.toFixed(1),
        "× panache": d => d.ratio.toFixed(2)
      },
      tip: true
    })
  ]
})
```
```{bash}
#| label: run-repo-quarto-format
#| echo: false
#| output: false
#| eval: false
cd ../..
bash benches/compare_repo_suite.sh --mode format --track quarto --out docs/guide/performance_repo_quarto_format_data.json
```
```{ojs}
//| label: load-repo-quarto-format
//| echo: false
repoQuartoFormat = FileAttachment("performance_repo_quarto_format_data.json").json()
```
```{ojs}
//| label: prepare-repo-quarto-format
//| echo: false
repoQuartoFormatResults = prepareRepoSuiteRows(repoQuartoFormat, {
  panache: "panache",
  rumdl: "rumdl"
})
repoQuartoFormatTools = Array.from(
  d3.rollup(repoQuartoFormatResults, v => d3.mean(v, d => d.ratio), d => d.toolLabel)
).sort(([, a], [, b]) => a - b).map(([k]) => k)
repoQuartoFormatRepos = repoQuartoFormat.repos.map(r => r.id)
repoQuartoFormatRepoMeta = new Map(repoQuartoFormat.repos.map(r => [r.id, r]))
```
In @fig-repo-quarto-format, we compare formatting time across panache and rumdl
on tracked `.qmd` files from several Quarto repositories.
```{ojs}
//| label: fig-repo-quarto-format
//| echo: false
//| fig-cap: "Repository-wide formatting benchmarks on Quarto repos. Each dot
//| is one repo/tool pair; the y-axis shows time relative to panache within
//| that repo (×, lower is faster)."
Plot.plot({
  marginLeft: 60,
  marginRight: 20,
  marginBottom: 40,
  marginTop: 20,
  height: 320,
  x: {label: null, domain: repoQuartoFormatTools},
  y: {label: "Mean time relative to panache within each repo (×, lower is faster)", grid: true},
  color: {
    legend: true,
    domain: repoQuartoFormatRepos,
    tickFormat: id => repoQuartoFormatRepoMeta.get(id)?.name ?? id,
    scheme: "tableau10"
  },
  marks: [
    Plot.ruleY([1], {stroke: "#2a9d8f", strokeWidth: 2, strokeDasharray: "4 3"}),
    Plot.dot(repoQuartoFormatResults, {
      x: "toolLabel",
      y: "ratio",
      fill: "repo",
      r: 6,
      fillOpacity: 0.85,
      stroke: "white",
      strokeWidth: 1,
      channels: {
        repo: "repoLabel",
        tool: "toolLabel",
        files: d => d.fileCount,
        "time (ms)": d => d.mean_ms.toFixed(1),
        "× panache": d => d.ratio.toFixed(2)
      },
      tip: true
    })
  ]
})
```
Each dot is one repo/tool pair. Panache sits at 1× by construction (dashed
baseline), and points above it are slower on that repository. Hover a point to
see the absolute wall-clock time in milliseconds and the corpus size.
## Linting
### Single-Document
```{bash}
#| label: run-lint-single-benchmark
#| echo: false
#| output: false
#| eval: false
cd ../..
bash benches/compare_lint_single.sh --out docs/guide/performance_lint_single_data.json
```
```{ojs}
//| label: load-lint-single-data
//| echo: false
lintSingleData = FileAttachment("performance_lint_single_data.json").json()
```
```{ojs}
//| label: prepare-lint-single-records
//| echo: false
lintDocumentMeta = new Map(lintSingleData.documents.map(d => [d.id, d]))
lintPanacheByDoc = d3.rollup(
  lintSingleData.results.filter(r => r.tool === "panache"),
  v => v[0].mean_ms,
  r => r.document
)
lintSingleRecords = lintSingleData.results.map(r => ({
  tool: r.tool,
  document: r.document,
  documentName: lintDocumentMeta.get(r.document)?.name ?? r.document,
  mean_ms: r.mean_ms,
  runs: r.runs,
  ratio: r.mean_ms / lintPanacheByDoc.get(r.document)
}))
lintTools = Array.from(
  d3.rollup(lintSingleRecords, v => d3.mean(v, d => d.ratio), d => d.tool)
).sort(([, a], [, b]) => a - b).map(([k]) => k)
lintDocumentIds = lintSingleData.documents.map(d => d.id)
```
In @fig-lint-single-results, we compare linting time per document across panache
lint, rumdl check, mado check, markdownlint, and markdownlint-cli2. Each dot is
one document; the y-axis shows time relative to panache lint (×, lower is
faster). Panache sits at 1× by construction.
```{ojs}
//| label: fig-lint-single-results
//| echo: false
//| fig-cap: "Linting time per document, comparing panache lint to rumdl check,
//| mado check, markdownlint, and markdownlint-cli2. Each dot is one document;
//| the y-axis shows time relative to panache lint (×, lower is faster)."
Plot.plot({
  marginLeft: 60,
  marginRight: 20,
  marginBottom: 40,
  marginTop: 20,
  height: 300,
  x: {label: null, domain: lintTools},
  y: {label: "Mean time relative to panache lint (×, lower is faster)", grid: true},
  color: {
    legend: true,
    domain: lintDocumentIds,
    tickFormat: id => lintDocumentMeta.get(id)?.name ?? id,
    scheme: "tableau10"
  },
  marks: [
    Plot.ruleY([1], {stroke: "#2a9d8f", strokeWidth: 2, strokeDasharray: "4 3"}),
    Plot.dot(lintSingleRecords, {
      x: "tool",
      y: "ratio",
      fill: "document",
      r: 6,
      fillOpacity: 0.85,
      stroke: "white",
      strokeWidth: 1,
      channels: {
        document: "documentName",
        "time (ms)": d => d.mean_ms.toFixed(1),
        "× panache lint": d => d.ratio.toFixed(2)
      },
      tip: true
    })
  ]
})
```
### Repository-Wide
This suite benchmarks linting on the same standard Markdown repositories as the
formatting comparison, using `panache lint`, `rumdl check`, `mado check`,
`markdownlint`, and `markdownlint-cli2`. In @fig-repo-markdown-lint, each dot is
one repo/tool pair; the y-axis shows time relative to panache lint within that
repo (×, lower is faster).
```{bash}
#| label: run-repo-markdown-lint
#| echo: false
#| output: false
#| eval: false
cd ../..
bash benches/compare_repo_suite.sh --mode lint --track markdown --out docs/guide/performance_repo_markdown_lint_data.json
```
```{ojs}
//| label: load-repo-markdown-lint
//| echo: false
repoMarkdownLint = FileAttachment("performance_repo_markdown_lint_data.json").json()
```
```{ojs}
//| label: prepare-repo-markdown-lint
//| echo: false
repoMarkdownLintResults = prepareRepoSuiteRows(repoMarkdownLint, {
  panache: "panache lint",
  rumdl: "rumdl check",
  mado: "mado check",
  markdownlint: "markdownlint",
  "markdownlint-cli2": "markdownlint-cli2"
})
repoMarkdownLintTools = Array.from(
  d3.rollup(repoMarkdownLintResults, v => d3.mean(v, d => d.ratio), d => d.toolLabel)
).sort(([, a], [, b]) => a - b).map(([k]) => k)
repoMarkdownLintRepos = repoMarkdownLint.repos.map(r => r.id)
repoMarkdownLintRepoMeta = new Map(repoMarkdownLint.repos.map(r => [r.id, r]))
```
```{ojs}
//| label: fig-repo-markdown-lint
//| echo: false
//| fig-cap: "Repository-wide linting benchmarks on standard Markdown repos,
//| comparing panache lint to rumdl check, mado check, markdownlint, and
//| markdownlint-cli2. Each dot is one repo/tool pair; the y-axis shows time
//| relative to panache lint within that repo (×, lower is faster)."
Plot.plot({
  marginLeft: 60,
  marginRight: 20,
  marginBottom: 40,
  marginTop: 20,
  height: 300,
  x: {label: null, domain: repoMarkdownLintTools},
  y: {label: "Mean time relative to panache lint within each repo (×, lower is faster)", grid: true},
  color: {
    legend: true,
    domain: repoMarkdownLintRepos,
    tickFormat: id => repoMarkdownLintRepoMeta.get(id)?.name ?? id,
    scheme: "tableau10"
  },
  marks: [
    Plot.ruleY([1], {stroke: "#2a9d8f", strokeWidth: 2, strokeDasharray: "4 3"}),
    Plot.dot(repoMarkdownLintResults, {
      x: "toolLabel",
      y: "ratio",
      fill: "repo",
      r: 6,
      fillOpacity: 0.85,
      stroke: "white",
      strokeWidth: 1,
      channels: {
        repo: "repoLabel",
        tool: "toolLabel",
        files: d => d.fileCount,
        "time (ms)": d => d.mean_ms.toFixed(1),
        "× panache lint": d => d.ratio.toFixed(2)
      },
      tip: true
    })
  ]
})
```
In @fig-repo-quarto-lint, we compare linting time across panache and rumdl on
tracked `.qmd` files from several Quarto repositories.
```{bash}
#| label: run-repo-quarto-lint
#| echo: false
#| output: false
#| eval: false
cd ../..
bash benches/compare_repo_suite.sh --mode lint --track quarto --out docs/guide/performance_repo_quarto_lint_data.json
```
```{ojs}
//| label: load-repo-quarto-lint
//| echo: false
repoQuartoLint = FileAttachment("performance_repo_quarto_lint_data.json").json()
```
```{ojs}
//| label: prepare-repo-quarto-lint
//| echo: false
repoQuartoLintResults = prepareRepoSuiteRows(repoQuartoLint, {
  panache: "panache lint",
  rumdl: "rumdl check"
})
repoQuartoLintTools = Array.from(
  d3.rollup(repoQuartoLintResults, v => d3.mean(v, d => d.ratio), d => d.toolLabel)
).sort(([, a], [, b]) => a - b).map(([k]) => k)
repoQuartoLintRepos = repoQuartoLint.repos.map(r => r.id)
repoQuartoLintRepoMeta = new Map(repoQuartoLint.repos.map(r => [r.id, r]))
```
```{ojs}
//| label: fig-repo-quarto-lint
//| echo: false
//| fig-cap: "Repository-wide linting benchmarks on Quarto repos. Each dot is
//| one repo/tool pair; the y-axis shows time relative to panache lint within
//| that repo (×, lower is faster)."
Plot.plot({
  marginLeft: 60,
  marginRight: 20,
  marginBottom: 40,
  marginTop: 20,
  height: 320,
  x: {label: null, domain: repoQuartoLintTools},
  y: {label: "Mean time relative to panache lint within each repo (×, lower is faster)", grid: true},
  color: {
    legend: true,
    domain: repoQuartoLintRepos,
    tickFormat: id => repoQuartoLintRepoMeta.get(id)?.name ?? id,
    scheme: "tableau10"
  },
  marks: [
    Plot.ruleY([1], {stroke: "#2a9d8f", strokeWidth: 2, strokeDasharray: "4 3"}),
    Plot.dot(repoQuartoLintResults, {
      x: "toolLabel",
      y: "ratio",
      fill: "repo",
      r: 6,
      fillOpacity: 0.85,
      stroke: "white",
      strokeWidth: 1,
      channels: {
        repo: "repoLabel",
        tool: "toolLabel",
        files: d => d.fileCount,
        "time (ms)": d => d.mean_ms.toFixed(1),
        "× panache lint": d => d.ratio.toFixed(2)
      },
      tip: true
    })
  ]
})
```
## Reproducing
All benchmarks are reproducible and require
[`hyperfine`](https://github.com/sharkdp/hyperfine) to run. The scripts in
`benches/` are designed to be run from the repository root:
```bash
# Download test documents (idempotent)
cd benches/documents && ./download.sh && cd ../..
# Run comparison benchmark and write JSON
bash benches/compare_all.sh --json --out docs/guide/performance_data.json
# Run repository-wide Markdown formatting benchmark
bash benches/compare_repo_suite.sh --mode format --track markdown --out docs/guide/performance_repo_markdown_format_data.json
# Run repository-wide Quarto formatting benchmark
bash benches/compare_repo_suite.sh --mode format --track quarto --out docs/guide/performance_repo_quarto_format_data.json
# Run repository-wide Markdown lint benchmark
bash benches/compare_repo_suite.sh --mode lint --track markdown --out docs/guide/performance_repo_markdown_lint_data.json
# Run repository-wide Quarto lint benchmark
bash benches/compare_repo_suite.sh --mode lint --track quarto --out docs/guide/performance_repo_quarto_lint_data.json
# Run per-document lint benchmark
bash benches/compare_lint_single.sh --out docs/guide/performance_lint_single_data.json
# Or run the human-readable text variant
bash benches/compare_all.sh
# Re-render this page (uses freeze cache by default)
quarto render docs/guide/performance.qmd
# Force the benchmark to re-run by invalidating the freeze cache
rm -rf docs/_freeze/guide/performance
quarto render docs/guide/performance.qmd
```