aicommit

A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).

Features

Implemented Features

  • ✅ Uses LLMs to generate meaningful commit messages from your changes
  • ✅ Supports multiple LLM providers (OpenRouter, Ollama)
  • ✅ Custom API keys for third-party services through the OpenRouter integrations page (e.g. Google AI Studio): go to https://openrouter.ai/settings/integrations and paste a key from any of these providers: AI21, Amazon BedRock, Anthropic, AnyScale, Avian.io, Cloudflare, Cohere, DeepInfra, DeepSeek, Fireworks, Google AI Studio, Google Vertex, Hyperbolic, Infermatic, Inflection, Lambda, Lepton, Mancer, Mistral, NovitaAI, OpenAI, Perplexity, Recursal, SambaNova, SF Compute, Together, xAI
  • ✅ Fast and efficient - works directly from your terminal
  • ✅ Easy configuration and customization
  • ✅ Transparent token usage and cost tracking
  • ✅ Version management with automatic incrementation
  • ✅ Version synchronization with Cargo.toml
  • ✅ Provider management (add, list, set active)
  • ✅ Interactive configuration setup
  • ✅ Configuration file editing

Planned Features

  • 🚧 Tests for each feature to prevent breaking changes
  • 🚧 Split commits by file (aicommit --by-file)
  • 🚧 Split commits by feature (aicommit --by-feature)
  • 🚧 Basic .gitignore file checks and management
  • 🚧 Watch mode (aicommit --watch 1m)
  • 🚧 Watch with edit delay (aicommit --watch 1m --wait-for-edit 30s)
  • 🚧 Version management for multiple languages (package.json, requirements.txt, etc.)
  • 🚧 Interactive commit message generation (aicommit --generate)
  • 🚧 Auto push functionality (aicommit --push)
  • 🚧 Branch safety checks for push operations
  • 🚧 Auto pull functionality
  • 🚧 Support for conventional commits format
  • 🚧 Project icon
  • 🚧 OpenRouter project listing

Legend:

  • ✅ Implemented
  • 🚧 Planned
  • 🧪 Has tests

Installation

Install via cargo:

cargo install aicommit

Or build from source:

git clone https://github.com/suenot/aicommit
cd aicommit
cargo install --path .

Quick Start

  1. Add a provider:

aicommit --add

  2. Make some changes to your code.

  3. Create a commit:

aicommit

Provider Management

List all configured providers:

aicommit --list

Set active provider:

aicommit --set <provider-id>
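
The <provider-id> is the id field shown in the configuration file (see the provider examples below). For instance, using the id from the OpenRouter example, a typical switch looks like:

aicommit --list
aicommit --set 550e8400-e29b-41d4-a716-446655440000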

Version Management

Automatically increment version in a file before commit:

aicommit --version-file "./version" --version-iterate

Synchronize version with Cargo.toml:

aicommit --version-file "./version" --version-cargo

Both operations can be combined:

aicommit --version-file "./version" --version-cargo --version-iterate
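
For example, assuming the version file holds a plain semantic version and --version-iterate performs a patch-level bump (an assumption; check the tool's output for your setup), a combined run looks like:

cat ./version        # 0.1.7
aicommit --version-file "./version" --version-cargo --version-iterate
cat ./version        # 0.1.8 after the assumed patch bump, with Cargo.toml synchronized to the same version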

Configuration

The configuration file is stored at ~/.aicommit.json. You can edit it directly with:

aicommit --config

Provider Configuration

Each provider can be configured with the following settings:

  • max_tokens: Maximum number of tokens in the response (default: 50)
  • temperature: Controls randomness in the response (0.0-1.0, default: 0.3)

For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.

Supported LLM Providers

OpenRouter

{
  "providers": [{
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "provider": "openrouter",
    "api_key": "sk-or-v1-...",
    "model": "mistralai/mistral-tiny",
    "max_tokens": 50,
    "temperature": 0.3,
    "input_cost_per_1k_tokens": 0.25,
    "output_cost_per_1k_tokens": 0.25
  }],
  "active_provider": "550e8400-e29b-41d4-a716-446655440000"
}

Ollama

{
  "providers": [{
    "id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
    "provider": "ollama",
    "url": "http://localhost:11434",
    "model": "llama2",
    "max_tokens": 50,
    "temperature": 0.3,
    "input_cost_per_1k_tokens": 0.0,
    "output_cost_per_1k_tokens": 0.0
  }],
  "active_provider": "67e55044-10b1-426f-9247-bb680e5fe0c8"
}
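
To use this provider you need a local Ollama instance with the referenced model available. With the standard Ollama CLI that typically means:

ollama pull llama2     # download the model named in the "model" field
ollama serve           # start the server on http://localhost:11434 if it is not already running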

Recommended Providers through OpenRouter

  • 🌟 Google AI Studio - 1,000,000 free tokens (see the example configuration below)
    • "google/gemini-2.0-flash-exp:free"
  • 🌟 DeepSeek
    • "deepseek/deepseek-chat"

Usage Information

When generating a commit message, the tool will display:

  • Number of tokens used (input and output)
  • Total API cost (calculated separately for input and output tokens)

Example output:

Generated commit message: Add support for multiple LLM providers
Tokens: 8↑ 32↓
API Cost: $0.0100
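
With the example OpenRouter pricing above ($0.25 per 1K tokens for both input and output), that cost breaks down as:

8 input tokens   × $0.25 / 1000 = $0.0020
32 output tokens × $0.25 / 1000 = $0.0080
Total                           = $0.0100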

You can have multiple providers configured and switch between them by changing the active_provider field to match the desired provider's id.

License

This project is licensed under the MIT License - see the LICENSE file for details.