# aicommit

A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).
## Features

### Implemented Features
- ✅ Uses LLMs to generate meaningful commit messages from your changes
- ✅ Supports multiple LLM providers (OpenRouter, Ollama)
- ✅ Bring your own API keys for upstream services through OpenRouter integrations (Google AI Studio and others): go to https://openrouter.ai/settings/integrations and paste a key from any of: AI21, Amazon Bedrock, Anthropic, AnyScale, Avian.io, Cloudflare, Cohere, DeepInfra, DeepSeek, Fireworks, Google AI Studio, Google Vertex, Hyperbolic, Infermatic, Inflection, Lambda, Lepton, Mancer, Mistral, NovitaAI, OpenAI, Perplexity, Recursal, SambaNova, SF Compute, Together, xAI
- ✅ Fast and efficient - works directly from your terminal
- ✅ Easy configuration and customization
- ✅ Transparent token usage and cost tracking
- ✅ Version management with automatic incrementation
- ✅ Version synchronization with Cargo.toml
- ✅ Provider management (add, list, set active)
- ✅ Interactive configuration setup
- ✅ Configuration file editing
### Planned Features
- 🚧 Tests for each feature to prevent breaking changes
- 🚧 Split commits by file (`aicommit --by-file`)
- 🚧 Split commits by feature (`aicommit --by-feature`)
- 🚧 Basic .gitignore file checks and management
- 🚧 Watch mode (`aicommit --watch 1m`)
- 🚧 Watch with edit delay (`aicommit --watch 1m --wait-for-edit 30s`)
- 🚧 Version management for multiple languages (package.json, requirements.txt, etc.)
- 🚧 Interactive commit message generation (`aicommit --generate`)
- 🚧 Auto push functionality (`aicommit --push`)
- 🚧 Branch safety checks for push operations
- 🚧 Auto pull functionality
Legend:
- ✅ Implemented
- 🚧 Planned
- 🧪 Has tests
## Installation
Install via cargo:
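Assuming the crate is published on crates.io under the same name as the tool:

```shell
cargo install aicommit
```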
Or build from source:
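A sketch of a source build; the repository URL is a placeholder, since this README does not state it:

```shell
git clone <repository-url> aicommit
cd aicommit
cargo install --path .
```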
## Quick Start

1. Add a provider.
2. Make some changes to your code.
3. Create a commit.
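In the terminal, the steps above might look like this; the `--add-provider` flag name is an assumption based on the feature list, not confirmed by this README:

```shell
aicommit --add-provider    # interactive provider setup (assumed flag)
# ...edit some files...
git add -A                 # stage your changes
aicommit                   # generate the commit message and commit
```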
## Provider Management
List all configured providers:
Set active provider:
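As a sketch of the two operations; both flag names below are assumptions for illustration, not confirmed by this README:

```shell
aicommit --list-providers      # list all configured providers (assumed flag)
aicommit --set-provider <id>   # set the active provider by id (assumed flag)
```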
## Version Management
Automatically increment version in a file before commit:
Synchronize version with Cargo.toml:
Both operations can be combined:
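For illustration, assuming hypothetical flag names (not confirmed by this README):

```shell
aicommit --version-file VERSION --version-iterate                  # bump version in a file (assumed flags)
aicommit --version-cargo                                           # sync version into Cargo.toml (assumed flag)
aicommit --version-file VERSION --version-iterate --version-cargo  # both combined
```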
## Configuration

The configuration file is stored at `~/.aicommit.json`. You can edit it directly with:
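Assuming a `--config` flag that opens the file in your default editor (the flag name is a guess, not confirmed by this README):

```shell
aicommit --config
```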
### Provider Configuration

Each provider can be configured with the following settings:

- `max_tokens`: Maximum number of tokens in the response (default: 50)
- `temperature`: Controls randomness in the response (0.0-1.0, default: 0.3)
For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.
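A minimal sketch of what `~/.aicommit.json` might contain; apart from `active_provider`, `id`, `max_tokens`, and `temperature`, which this README mentions, the field names are assumptions:

```json
{
  "providers": [
    {
      "id": "openrouter-1",
      "provider": "openrouter",
      "api_key": "sk-or-...",
      "model": "google/gemini-2.0-flash-exp:free",
      "max_tokens": 50,
      "temperature": 0.3
    }
  ],
  "active_provider": "openrouter-1"
}
```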
## Supported LLM Providers

### OpenRouter

OpenRouter provides hosted access to many commercial and open models through a single API.

### Ollama

Ollama runs open models locally, so no API key is required.
### Recommended Providers through OpenRouter

- Google AI Studio - 1,000,000 tokens for free: `"google/gemini-2.0-flash-exp:free"`
- DeepSeek: `"deepseek/deepseek-chat"`
## Usage Information

When generating a commit message, the tool will display:

- Number of tokens used (input and output)
- Total API cost (calculated separately for input and output tokens)

Example output:

```
Generated commit message: Add support for multiple LLM providers
Tokens: 8↑ 32↓
API Cost: $0.0100
```
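The cost line is simply `input_tokens × input_price + output_tokens × output_price`. With made-up per-token prices chosen for illustration (not real OpenRouter rates), the arithmetic for the example above looks like:

```shell
# Hypothetical prices: $0.00005 per input token, $0.0003 per output token
awk -v i=8 -v o=32 'BEGIN { printf "$%.4f\n", i * 0.00005 + o * 0.0003 }'
# prints $0.0100
```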
You can have multiple providers configured and switch between them by changing the `active_provider` field to match the desired provider's `id`.
## License
This project is licensed under the MIT License - see the LICENSE file for details.