+++
title = "LLM Commit Messages"
description = "Generate commit messages from diffs using any LLM. Integrates with wt merge, wt step commit, and wt step squash."
weight = 22

[extra]
group = "Reference"
+++

Worktrunk generates commit messages by building a templated prompt and piping it to an external command. This integrates with `wt merge`, `wt step commit`, and `wt step squash`.

<figure class="demo">
<picture>
  <source srcset="/assets/docs/dark/wt-commit.gif" media="(prefers-color-scheme: dark)">
  <img src="/assets/docs/light/wt-commit.gif" alt="LLM commit message generation demo" width="1600" height="900">
</picture>
</figure>

## Setup

Any command that reads a prompt from stdin and outputs a commit message works. Add to `~/.config/worktrunk/config.toml`:

### Claude Code

```toml
[commit.generation]
command = "CLAUDECODE= MAX_THINKING_TOKENS=0 claude -p --no-session-persistence --model=haiku --tools='' --disable-slash-commands --setting-sources='' --system-prompt=''"
```

`CLAUDECODE=` unsets the nesting guard so `claude -p` works from within a Claude Code session. `--no-session-persistence` keeps the commit conversation out of `claude --continue`. The remaining flags disable tools, slash commands, setting sources, and the system prompt for fast text-only output. See [Claude Code docs](https://docs.anthropic.com/en/docs/build-with-claude/claude-code) for installation.

### Codex

```toml
[commit.generation]
command = "codex exec -m gpt-5.1-codex-mini -c model_reasoning_effort='low' -c system_prompt='' --sandbox=read-only --json - | jq -sr '[.[] | select(.item.type? == \"agent_message\")] | last.item.text'"
```

Uses the fast mini model with low reasoning effort and an empty system prompt for quick output. Requires `jq` to extract the final agent message from the `--json` event stream. See [Codex CLI docs](https://developers.openai.com/codex/cli/).

### Other tools

```toml
# opencode — use a fast model variant
command = "opencode run -m anthropic/claude-haiku-4.5 --variant fast"

# llm
command = "llm -m claude-haiku-4.5"

# aichat
command = "aichat -m claude:claude-haiku-4.5"
```

## How it works

When Worktrunk needs a commit message, it renders a prompt from a template and pipes it to the configured command via the shell (`sh -c`), so environment variables can be set inline in the command string.
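As a minimal sketch of that pipeline (the prompt text and the `head -n 1` stand-in are illustrative, not Worktrunk's actual internals):

```shell
# Sketch: render a prompt, then run the configured command string
# under `sh -c`, writing the prompt to its stdin. Inline env vars
# (here MODEL=haiku) work because the string is shell-evaluated.
prompt='Write a commit message for this diff: ...'
configured_command="MODEL=haiku head -n 1"   # stand-in for a real LLM CLI
msg=$(printf '%s\n' "$prompt" | sh -c "$configured_command")
echo "$msg"   # prints the prompt's first line back
```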

## Usage

These examples assume a feature worktree with changes to commit.

### wt merge

Squashes all changes (uncommitted + existing commits) into one commit with an LLM-generated message, then merges to the default branch:

{% terminal(cmd="wt merge") %}
<span class=c>◎</span> <span class=c>Squashing 3 commits into a single commit <span style='color:var(--bright-black,#555)'>(5 files, <span class=g>+48</span></span></span><span style='color:var(--bright-black,#555)'>)</span>...
<span class=c>◎</span> <span class=c>Generating squash commit message...</span>
<span style='background:var(--bright-white,#fff)'> </span> <b>feat(auth): Implement JWT authentication system</b>
<span style='background:var(--bright-white,#fff)'> </span> ...
{% end %}

### wt step commit

Stages and commits with LLM-generated message:

{{ terminal(cmd="wt step commit") }}

### wt step squash

Squashes branch commits into one with LLM-generated message:

{{ terminal(cmd="wt step squash") }}

See [`wt merge`](@/merge.md) and [`wt step`](@/step.md) for full documentation.

## Branch summaries

<span class="badge-experimental"></span>

With `summary = true` and a `[commit.generation] command` configured, Worktrunk generates LLM branch summaries — one-line descriptions of each branch's changes since the default branch.

Summaries appear in:

- **`wt switch`** [interactive picker](@/switch.md#interactive-picker) — preview tab 5
- **`wt list --full`** — the Summary column (see [`wt list`](@/list.md#llm-summaries))

Enable in user config:

```toml
[list]
summary = true
```

Summaries are cached and regenerated only when the diff changes.

## Prompt templates

Worktrunk uses [minijinja](https://docs.rs/minijinja/) templates (Jinja2-like syntax) to build prompts. There are sensible defaults, but templates are fully customizable.

### Custom templates

Override the defaults with inline templates:

```toml
[commit.generation]
command = "llm -m claude-haiku-4.5"

template = """
Write a commit message for this diff. One line, under 50 chars.

Branch: {{ branch }}
Diff:
{{ git_diff }}
"""

squash-template = """
Combine these {{ commits | length }} commits into one message:
{% for c in commits %}
- {{ c }}
{% endfor %}

Diff:
{{ git_diff }}
"""
```

### Template variables

| Variable | Description |
|----------|-------------|
| `{{ git_diff }}` | The diff (staged changes or combined diff for squash) |
| `{{ git_diff_stat }}` | Diff statistics (files changed, insertions, deletions) |
| `{{ branch }}` | Current branch name |
| `{{ repo }}` | Repository name |
| `{{ recent_commits }}` | Recent commit subjects (for style reference) |
| `{{ commits }}` | Commits being squashed (squash template only) |
| `{{ target_branch }}` | Merge target branch (squash template only) |

### Template syntax

Templates use [minijinja](https://docs.rs/minijinja/latest/minijinja/syntax/index.html), which supports:

- **Variables**: `{{ branch }}`, `{{ repo }}`
- **Filters**: `{{ commits | length }}`, `{{ repo | upper }}`
- **Conditionals**: `{% if recent_commits %}...{% endif %}`
- **Loops**: `{% for c in commits %}{{ c }}{% endfor %}`
- **Loop variables**: `{{ loop.index }}`, `{{ loop.length }}`
- **Whitespace control**: `{%- ... -%}` strips surrounding whitespace

See `wt config create --help` for the full default templates.
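As a worked example, the features above combine in a single custom template; the wording is illustrative (not the shipped default) and uses only the documented variables:

```toml
[commit.generation]
template = """
Write a one-line commit message for {{ repo }} on branch {{ branch }}.
{%- if recent_commits %}
Recent commit subjects, for style reference:
{{ recent_commits }}
{%- endif %}

Diff:
{{ git_diff }}
"""
```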

## Fallback behavior

When no LLM is configured, Worktrunk falls back to deterministic messages derived from the changed filenames (e.g., "Changes to auth.rs & config.rs").