omni-dev
An intelligent Git commit message toolkit with AI-powered contextual intelligence. Transform messy commit histories into professional, conventional commit formats with project-aware suggestions.
✨ Key Features
- 🤖 AI-Powered Intelligence: Claude AI analyzes your code changes to suggest meaningful commit messages and PR descriptions
- 🧠 Contextual Awareness: Understands your project structure, conventions, and work patterns
- 📊 Comprehensive Analysis: Deep analysis of commits, branches, and file changes
- ✏️ Smart Amendments: Safely improve single or multiple commit messages
- 🚀 PR Creation: Generate professional pull requests with AI-powered descriptions
- 📦 Automatic Batching: Handles large commit ranges intelligently
- 🎯 Conventional Commits: Automatic detection and formatting
- 🛡️ Safety First: Working directory validation and error recovery
- ⚡ Fast & Reliable: Built with Rust for memory safety and performance
🚀 Quick Start
Installation
# Install from crates.io
# Install with Nix
# Install with Nix flakes (development)
# Enable binary cache for faster builds (optional)
# Set up Claude API key (required for AI features)
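The exact commands were elided above; a minimal sketch of the crates.io path and the API key setup, using the crate name and environment variable named elsewhere in this README:

```bash
# Install the CLI from crates.io
cargo install omni-dev

# Set up the Claude API key required for AI features
export CLAUDE_API_KEY="your-key"
```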
Nix Binary Cache (Optional)
For faster Nix builds, you can use the binary cache:
# Install cachix if you don't have it
# Enable the omni-dev binary cache
# Now Nix installations will use pre-built binaries instead of compiling from source
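A sketch of the cachix flow; the binary cache name is not spelled out above, so `omni-dev` is an assumption:

```bash
# Install cachix into your Nix profile if you don't have it
nix profile install nixpkgs#cachix

# Trust and enable the omni-dev binary cache (cache name assumed)
cachix use omni-dev
```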
🎬 See It In Action
Watch omni-dev transform messy commits into professional ones with AI-powered analysis
30-Second Demo
Transform your commit messages and create professional PRs with AI intelligence:
# Analyze and improve commit messages in your current branch
# Before: "fix stuff", "wip", "update files"
# After: "feat(auth): implement OAuth2 authentication system"
# "docs(api): add comprehensive endpoint documentation"
# "fix(ui): resolve mobile responsive layout issues"
# Create a professional PR with AI-generated description
# 🚀 Generates comprehensive PR with detailed description, testing info, and more
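A compressed sketch of that flow; the subcommand paths are assumptions (only the `twiddle` feature name and the `view` command appear elsewhere in this README):

```bash
# Improve the messages on your current branch (subcommand path assumed)
omni-dev git commit message twiddle main..HEAD

# Open a PR with an AI-generated description (hypothetical subcommand)
omni-dev git branch create-pr
```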
📋 Core Commands
🤖 AI-Powered Commit Improvement (twiddle)
The star feature - intelligently improve your commit messages with real-time model information display:
# Improve commits with contextual intelligence
# Process large commit ranges with parallel processing
# Save suggestions to file for review
# Auto-apply improvements without confirmation
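Plausible invocations, assuming twiddle lives under the same path as the documented `view` command; the flags are the ones listed in the Command Options table below:

```bash
# Improve commits with contextual intelligence
omni-dev git commit message twiddle main..HEAD --use-context

# Process a large range with more parallel workers
omni-dev git commit message twiddle HEAD~50..HEAD --concurrency 8

# Save suggestions to a file for review instead of applying them
omni-dev git commit message twiddle main..HEAD --save-only fixes.yaml

# Apply improvements without the confirmation prompt
omni-dev git commit message twiddle main..HEAD --auto-apply
```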
📊 Analysis Commands
# Analyze commits in detail (YAML output)
# Analyze current branch vs main
# Get comprehensive help
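A sketch built on the `view` subcommand named in the MCP section below; the exact range arguments are assumptions:

```bash
# Detailed YAML analysis of the last five commits
omni-dev git commit message view HEAD~5..HEAD

# Current branch versus main
omni-dev git commit message view main..HEAD

# Comprehensive help
omni-dev --help
```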
🚀 AI-Powered PR Creation
Create professional pull requests with AI-generated descriptions:
# Generate and create PR with AI-powered description
# Create PR with specific base branch
# Save PR details to file without creating
# Auto-create without confirmation
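A hypothetical sketch; none of the PR subcommand or flag names below are confirmed by this README, only the behaviours described above:

```bash
# Generate and create a PR with an AI-powered description (hypothetical subcommand)
omni-dev git branch create-pr

# Specific base branch / save to file without creating / skip confirmation (flags hypothetical)
omni-dev git branch create-pr --base develop
omni-dev git branch create-pr --save-only pr.yaml
omni-dev git branch create-pr --auto-apply
```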
🔗 Atlassian Integration
Read, write, and manage JIRA issues and Confluence pages from the command line:
# Authenticate with Atlassian Cloud
# Check authentication status
# Fetch a JIRA issue as markdown
# Fetch as raw ADF JSON
# Push markdown changes back to JIRA
# Interactive edit: fetch, edit in $EDITOR, push
# Search issues with JQL
# Create an issue
# Transition an issue
# Confluence: read, search, create pages
# Convert markdown to ADF JSON (offline)
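`omni-dev atlassian auth login` is named in the Requirements section; the remaining subcommands are hypothetical illustrations of the operations listed above:

```bash
# Authenticate with Atlassian Cloud
omni-dev atlassian auth login

# Hypothetical subcommand names for the operations above
omni-dev atlassian auth status
omni-dev atlassian jira view PROJ-123      # fetch a JIRA issue as markdown
omni-dev atlassian jira edit PROJ-123      # fetch, edit in $EDITOR, push
omni-dev atlassian jira search 'project = PROJ AND status = "In Progress"'
```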
📈 Datadog Integration (read-only)
Authenticate against the Datadog API. Subsequent slices add metrics, monitor, dashboard, and logs subcommands.
# Configure Datadog API credentials (prompts for API key, APP key, and site)
# Verify the credentials by calling /api/v1/validate
# Remove Datadog credentials from ~/.omni-dev/settings.json
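`omni-dev datadog auth login` is named in the Requirements section; the other two subcommand names are assumptions matching the comments above:

```bash
# Authenticate (prompts for API key, APP key, and site)
omni-dev datadog auth login

# Hypothetical subcommand names for the other operations above
omni-dev datadog auth status     # verifies credentials via /api/v1/validate
omni-dev datadog auth logout     # removes credentials from ~/.omni-dev/settings.json
```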
DATADOG_SITE defaults to datadoghq.com. Other regions (datadoghq.eu,
us3.datadoghq.com, us5.datadoghq.com, ap1.datadoghq.com, ddog-gov.com)
are recognised without warning. Environment variables DATADOG_API_KEY,
DATADOG_APP_KEY, DATADOG_SITE override the stored settings.
For on-prem or proxied Datadog installs, set DATADOG_API_URL to the full
API base URL (e.g. https://datadog.corp.example); it overrides the
site-derived URL entirely.
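The variable names come straight from this section; a minimal sketch of overriding the stored settings for an EU site and for a proxied install:

```bash
# Point at the EU region instead of the stored site
export DATADOG_SITE=datadoghq.eu

# Or bypass the site-derived URL entirely for an on-prem/proxied install
export DATADOG_API_URL=https://datadog.corp.example

# API and application keys can also be supplied via the environment
export DATADOG_API_KEY=...
export DATADOG_APP_KEY=...
```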
✏️ Manual Amendment
# Apply specific amendments from YAML file
🗂️ Claude Conversation History
Export your Claude Code chat history to a directory of .jsonl files for
behavioural analysis, work-log generation, or downstream tooling. Re-running
acts as an idempotent sync: new chats are added, modified chats are
overwritten, unchanged chats are skipped.
# Mirror ~/.claude/projects to ./history/ (one .jsonl per chat, grouped by project slug)
# Limit to one project (encoded slug or decoded cwd path)
# Only sessions touched in the last week
# Preview without writing, then prune target files for sessions removed upstream
# Render LLM-friendly markdown alongside the raw jsonl (one .md per session)
# Markdown only: suitable for piping into a coaching LLM
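A sketch of the sync; the subcommand path is an assumption, while `--output-format`, `--prune`, and `--exclude-system` are the flags described below:

```bash
# Hypothetical subcommand path; mirrors ~/.claude/projects into ./history/
omni-dev claude history export ./history

# Render LLM-friendly markdown alongside the raw jsonl
omni-dev claude history export ./history --output-format markdown

# Prune artifacts for sessions removed upstream and drop system events
omni-dev claude history export ./history --prune --exclude-system
```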
The export is a behavioural transcript, not a faithful archive. The top-level session jsonl captures all prompts, responses, thinking blocks, tool calls, and tool-result metadata: the signal needed for analysis. Sub-agent internal turns, large tool-output sidecars, PDF page rasters, and Claude's auto-memory are deliberately excluded; they would bloat any LLM-ingested corpus without adding interaction-pattern signal.
In-progress chats produce a valid jsonl prefix (the source size is captured
once at the start of the copy), so you can sync safely while a chat is open.
The target layout mirrors the source (<target>/<slug>/<uuid>.jsonl), and
source mtime is preserved on each target file so downstream tooling can
sort sessions chronologically without parsing every file.
--output-format markdown writes a derived <target>/<slug>/<uuid>.md
alongside (or instead of) the jsonl. Each markdown file has YAML frontmatter
with session metadata followed by ## User / ## Assistant turns; tool calls
render as ### Tool call: <name> blocks, thinking blocks collapse into
<details>, and sub-agent (Agent) calls render the prompt argument only.
Agent-to-user interactions are surfaced as first-class structured events so the analyst LLM sees what was actually asked and how the user responded:
- `AskUserQuestion` calls render as `### Agent question: <header>` with the question text and a bulleted list of options (with descriptions); the paired user reply renders as `## User response`.
- Tool denials show up as `**Tool result (<tool>, denied by user):**`, detected by the canonical "The user doesn't want to proceed with this tool use" sentinel Claude Code stuffs into the next `tool_result`.
- Tool interrupts (escape mid-execution) render as `**Tool result (<tool>, interrupted by user):**`.
- Errors (real tool failures, distinct from user denials) keep the `error` label; successes use `ok`.
System reminders, attachments, and permission-mode events are included by
default; pass --exclude-system to drop them. Markdown idempotency keys off
source mtime alone (the rendered length differs from the source length), and
--prune only deletes artifacts whose extension matches one of the formats
listed in --output-format.
🔌 MCP Server
omni-dev ships an optional Model Context Protocol server so AI assistants (Claude Desktop, Claude Code, the MCP Inspector, custom agents) can call omni-dev over stdio instead of shelling out to the CLI.
Tools currently exposed:
- `git_view_commits`: YAML commit analysis (mirrors `omni-dev git commit message view`)
Resources exposed via URI templates:
| URI template | Returns |
|---|---|
| `git://repo/commits/{range}` | YAML commit analysis |
| `jira://issue/{key}` | JIRA issue as JFM |
| `jira://issue/{key}.adf` | JIRA issue body as ADF |
| `confluence://page/{id}` | Confluence page as JFM |
| `confluence://page/{id}.adf` | Confluence page body as ADF |
Install
This adds a second binary, omni-dev-mcp, alongside the regular omni-dev
CLI. The default cargo install omni-dev build is unchanged; no MCP
dependencies are pulled in unless the mcp feature is enabled.
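A plausible install command, assuming the server ships behind the Cargo feature named `mcp` described above:

```bash
# Build both binaries; the feature name comes from the paragraph above
cargo install omni-dev --features mcp
```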
Claude Desktop
Edit ~/Library/Application Support/Claude/claude_desktop_config.json on
macOS (or %APPDATA%\Claude\claude_desktop_config.json on Windows):
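A sketch of the entry to merge into that file; the `mcpServers` structure is the standard Claude Desktop MCP config, and the bare `omni-dev-mcp` command assumes the binary is on your PATH:

```bash
# Write an example entry to merge into claude_desktop_config.json by hand
cat <<'EOF' > omni-dev-mcp-server.example.json
{
  "mcpServers": {
    "omni-dev": { "command": "omni-dev-mcp" }
  }
}
EOF
```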
Claude Code
Per-project: create .mcp.json at the repo root:
Or register globally with the Claude Code CLI:
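The per-project .mcp.json takes the same `mcpServers` shape shown above. For global registration, one plausible Claude Code CLI invocation (exact flags are an assumption):

```bash
# Register omni-dev-mcp for all projects via the Claude Code CLI
claude mcp add --scope user omni-dev omni-dev-mcp
```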
Smoke-test with the MCP Inspector
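One way to smoke-test, assuming Node is available; this is the standard MCP Inspector invocation pointed at the locally installed binary:

```bash
npx @modelcontextprotocol/inspector omni-dev-mcp
```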
The Inspector opens a browser UI where you can list tools and resources,
call git_view_commits, and fetch git://repo/commits/HEAD against the
current working directory.
Troubleshooting
- Logs go to stderr. MCP uses stdin/stdout for protocol framing, so tracing output is routed to stderr; tail your client's MCP log pane or run the binary in a terminal to see it.
- Verbose tracing: `RUST_LOG=debug omni-dev-mcp` turns on debug-level logs. Module-scoped filters work too, e.g. `RUST_LOG=omni_dev::mcp=trace`.
- Permission errors: the assistant runs `omni-dev-mcp` with its own working directory. Tools that open a git repository use that directory unless an explicit `repo_path` parameter (or a resource URI placing you elsewhere) overrides it. If tool calls fail with "failed to open git repository", confirm the assistant launched the server from inside the repo you expected.
⚙️ Configuration Commands
# Show supported AI models and their specifications
# View model information with token limits and capabilities
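The command itself is named in the Requirements section:

```bash
# List supported Claude models with token limits and capabilities
omni-dev config models show
```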
🧠 Contextual Intelligence
omni-dev understands your project context to provide better suggestions:
Project Configuration
Create a `.omni-dev/` directory in your repo root:
Scope Definitions (.omni-dev/scopes.yaml)
scopes:
  - name: "auth"
    description: "Authentication and authorization systems"
    examples:
    file_patterns:
  - name: "api"
    description: "REST API endpoints and handlers"
    examples:
    file_patterns:
Commit Guidelines (.omni-dev/commit-guidelines.md)
🎯 Advanced Features
Intelligent Context Detection
omni-dev automatically detects:
- Project Conventions: From `.omni-dev/` and `CONTRIBUTING.md`
- Work Patterns: Feature development, bug fixes, documentation, refactoring
- Branch Context: Extracts work type from branch names (e.g. `feature/auth-system`)
- File Architecture: Understands UI, API, core logic, and configuration changes
- Change Significance: Adjusts detail level based on impact
Automatic Batching
Large commit ranges are automatically split into manageable batches:
# Processes 50 commits in batches of 4 (default)
# Custom concurrency for very large ranges
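A sketch using the documented `--concurrency` flag; the subcommand path is the same assumption as in the twiddle section above:

```bash
# 50 commits are split into batches automatically (default concurrency: 4)
omni-dev git commit message twiddle HEAD~50..HEAD

# Raise concurrency for very large ranges
omni-dev git commit message twiddle HEAD~200..HEAD --concurrency 8
```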
Command Options
| Option | Description | Example |
|---|---|---|
| `--use-context` | Enable contextual intelligence | `--use-context` |
| `--concurrency N` | Number of parallel commit processors (default: 4) | `--concurrency 3` |
| `--no-coherence` | Skip cross-commit coherence refinement pass | `--no-coherence` |
| `--context-dir PATH` | Custom context directory | `--context-dir ./config` |
| `--auto-apply` | Apply without confirmation | `--auto-apply` |
| `--save-only FILE` | Save to file without applying | `--save-only fixes.yaml` |
| `--edit` | Edit amendments in external editor | `--edit` |
📝 Real-World Examples
Before & After
Before: Messy commit history
e4b2c1a fix stuff
a8d9f3e wip
c7e1b4f update files
9f2a6d8 more changes
After: Professional commit messages
e4b2c1a feat(auth): implement JWT token validation system
a8d9f3e docs(api): add comprehensive OpenAPI documentation
c7e1b4f fix(ui): resolve mobile responsive layout issues
9f2a6d8 refactor(core): optimize database query performance
Workflow Integration
# 1. Work on your feature branch
# 2. Make commits (don't worry about perfect messages)
# 3. Before merging, improve all commit messages
# 4. Create professional PR with AI-generated description
# ✅ Professional commit history + comprehensive PR description ready for review
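A sketch of that workflow end-to-end; the omni-dev subcommand paths are the same assumptions used earlier in this README:

```bash
# 1. Work on your feature branch
git checkout -b feature/auth-system

# 2. Make commits without worrying about perfect messages
git commit -m "wip"

# 3. Before merging, improve all commit messages (subcommand path assumed)
omni-dev git commit message twiddle main..HEAD --use-context

# 4. Create a professional PR (hypothetical subcommand)
omni-dev git branch create-pr
```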
Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
Development Setup
- Clone the repository:
- Install Rust (if you haven't already):
- Build the project:
- Run the build script (includes tests, linting, and formatting):

Or run individual steps:
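A sketch of those steps; the repository URL and the build-script name are not given above, so both are placeholders:

```bash
# Clone the repository (URL is a placeholder)
git clone https://github.com/OWNER/omni-dev.git
cd omni-dev

# Install Rust via rustup if needed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Build the project
cargo build

# Individual steps in place of the build script
cargo test && cargo clippy && cargo fmt --check
```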
📚 Documentation
- User Guide - Comprehensive usage guide with examples
- Configuration Guide - Set up contextual intelligence
- API Documentation - Rust API reference
- Troubleshooting - Common issues and solutions
- Examples - Real-world usage examples
- Release Process - For contributors
🔧 Requirements
- Rust: 1.70+ (for installation from source)
- Claude API Key: Required for AI-powered features
  - Get your key from the Anthropic Console
  - Set: `export CLAUDE_API_KEY="your-key"`
- AI Model Selection: Optional configuration for specific Claude models
  - View available models: `omni-dev config models show`
  - Configure via `~/.omni-dev/settings.json` or the `ANTHROPIC_MODEL` environment variable
  - Supports standard identifiers and Bedrock-style formats
- Atlassian Credentials (for JIRA/Confluence features): Instance URL, email, and API token
  - Configure with: `omni-dev atlassian auth login`
- Datadog Credentials (for Datadog features): API key, application key, and site
  - Configure with: `omni-dev datadog auth login`
- Git: Any modern version
AI backend selection
By default, omni-dev calls the Anthropic API (or Bedrock/OpenAI/Ollama via
the USE_*/CLAUDE_CODE_USE_BEDROCK env vars). As an alternative, you can
route AI calls through an already-authenticated
Claude Code CLI session, avoiding
the need for a separate API key:
# Per-invocation flag:
# Or set persistently:
The flag takes precedence over the environment variable.
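The `--ai-backend claude-cli` value is referenced later in this section; the environment-variable name below is an assumption:

```bash
# Per-invocation flag
omni-dev <command> --ai-backend claude-cli

# Or set persistently (variable name is an assumption)
export OMNI_DEV_AI_BACKEND=claude-cli
```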
Model selection follows the precedence chain
--model → CLAUDE_MODEL → CLAUDE_CODE_MODEL → ANTHROPIC_MODEL → registry
default. Short aliases (sonnet, opus, haiku) and full identifiers
(claude-sonnet-4-6) are both accepted and forwarded verbatim to
claude -p --model.
Sandboxing guarantees. The claude -p subprocess is locked down so it
behaves as a pure prompt→completion service, not a nested agent with
filesystem or shell access:
- Built-in tools disabled (`--tools ""`).
- MCP servers blocked (`--strict-mcp-config` with no config).
- User/project/local settings ignored (`--setting-sources ""`).
- Slash commands / skills disabled.
- Session persistence disabled.
- Subprocess runs in a fresh temp directory (not your repo root).
- Environment is scrubbed of `CLAUDE_PROJECT_DIR`, `CLAUDE_CODE_*`, and `CLAUDE_PROJECT_*` before spawn.
Cost baseline. Because the claude -p session still loads a small system
prompt, each invocation has a floor cost (~$0.007 Haiku, ~$0.03 Sonnet,
~$0.15 Opus) before any user content. Back-to-back calls within one hour hit
the prompt cache and are much cheaper. If you pay per token, compare against
the default HTTP backend which has no such floor.
Overrides. Optional environment variables:
- `OMNI_DEV_CLAUDE_CLI_BIN`: path to the `claude` binary (default: resolved from `PATH`).
- `OMNI_DEV_CLAUDE_CLI_TIMEOUT_SECS`: subprocess timeout (default: 600).
- `OMNI_DEV_CLAUDE_CLI_STDOUT_MAX_BYTES`: stdout cap in bytes (default: 4 MiB).
- `OMNI_DEV_CLAUDE_CLI_ALLOW_TOOLS`: escape hatch (default: disabled). See below.
- `OMNI_DEV_CLAUDE_CLI_ALLOW_MCP`: escape hatch for MCP server pickup (default: disabled). See below.
- `OMNI_DEV_CLAUDE_CLI_MAX_BUDGET_USD`: per-invocation spending cap in USD (default: none). See below.
The --beta-header flag is ignored with this backend (the CLI's --betas
flag is API-key-user-only and has different semantics).
Escape hatch: --claude-cli-allow-tools
By default the nested claude -p session is run with --tools "" and
cannot read, edit, or execute anything on your system. For deliberately
tool-capable use cases, you can weaken the sandbox:
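The flag and environment-variable names are the ones documented above; the value format (`1`) is an assumption:

```bash
# Per-invocation flag
omni-dev <command> --claude-cli-allow-tools

# Or via the environment (value format assumed)
export OMNI_DEV_CLAUDE_CLI_ALLOW_TOOLS=1
```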
When enabled, the nested session uses the CLI's default built-in tool set (Read / Edit / Write / Bash / Glob / Grep). This means the session can access your repository and run commands. Only enable it when you want that behaviour. When active, omni-dev logs a warning on every invocation.
--strict-mcp-config and --setting-sources "" still apply unless you
also enable --claude-cli-allow-mcp (see below). The two escape hatches
are independent so you can grant tool access without exposing MCP server
credentials, and vice-versa.
Escape hatch: --claude-cli-allow-mcp
By default the nested claude -p session is run with --strict-mcp-config
and no --mcp-config, blocking every MCP server you have configured in
~/.claude/settings.json. To re-enable MCP server pickup:
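Again, the flag and variable names come from this section; the value format is an assumption:

```bash
# Per-invocation flag
omni-dev <command> --claude-cli-allow-mcp

# Or via the environment (value format assumed)
export OMNI_DEV_CLAUDE_CLI_ALLOW_MCP=1
```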
When enabled, the nested session can connect to any MCP server in your user settings. Be aware that MCP servers commonly hold OAuth tokens (Gmail, Drive, Slack) or expose internal network services; enabling this exposes them to the nested session. Only enable it when you want that behaviour. When active, omni-dev logs a warning on every invocation.
This flag is independent of --claude-cli-allow-tools. Built-in tools
remain disabled unless you enable that flag separately.
Spending cap: --claude-cli-max-budget-usd
Pass a per-invocation spending cap in USD:
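A sketch using the documented flag and environment variable; the example value is arbitrary:

```bash
# Abort the nested session if it would exceed $0.25
omni-dev <command> --claude-cli-max-budget-usd 0.25

# Or via the environment
export OMNI_DEV_CLAUDE_CLI_MAX_BUDGET_USD=0.25
```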
The value is forwarded to claude -p --max-budget-usd. If the nested
session exceeds the cap, it aborts with an error rather than running away
with cost. Regardless of whether a cap is set, each invocation's
total_cost_usd is logged at INFO level for observability; run with
RUST_LOG=omni_dev=info to see it.
The cap is ignored when --ai-backend is not claude-cli. Non-positive,
non-finite, or non-numeric values are silently treated as no cap.
🔍 Debugging
For troubleshooting and detailed logging, use the RUST_LOG environment variable:
# Enable debug logging for omni-dev components
RUST_LOG=omni_dev=debug
# Debug specific modules (e.g., context discovery)
RUST_LOG=omni_dev::claude::context::discovery=debug
# Show only errors and warnings
RUST_LOG=warn
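These are filter values; prepend them to any omni-dev invocation, for example (subcommand as documented in the MCP section):

```bash
# Debug-level output from omni-dev components while viewing commits
RUST_LOG=omni_dev=debug omni-dev git commit message view HEAD~3..HEAD
```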
See Troubleshooting Guide for detailed debugging information.
Changelog
See CHANGELOG.md for a list of changes in each version.
License
This project is licensed under the BSD 3-Clause License - see the LICENSE file for details.
Support
- 🐛 Issues
- 💬 Discussions
Acknowledgments
- Thanks to all contributors who help make this project better!
- Built with ❤️ using Rust