cliai

A small Rust library for invoking AI tools through local CLI backends such as Ollama and GitHub Copilot CLI.

cliai provides a minimal trait-based interface so your applications can swap providers without coupling to a specific vendor SDK or HTTP API.

Features

  • Lightweight crate with no runtime dependencies

  • Unified backend trait

  • Uses installed local CLI tools

  • Current backends:

    • Ollama
    • GitHub Copilot CLI
  • Simple request / response types

  • Suitable for scripts, developer tooling, and automation

Installation

Add to your Cargo.toml:

[dependencies]
cliai = "0.1"

Package Metadata

The crate is dual-licensed; its manifest declares:

license = "MIT OR Apache-2.0"

Requirements

This crate shells out to external binaries. The relevant CLI tool must be installed and available on your PATH, unless you override the executable path with with_bin().
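
For example, to point a backend at a binary outside your PATH (the path below is illustrative):

let ai = Ollama::new("llama3.2")
    .with_bin("/opt/ollama/bin/ollama");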

Ollama

Install Ollama and ensure ollama is available on your PATH.

ollama serve
ollama pull llama3.2
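
Before wiring up cliai, you can sanity-check the installation directly from the shell (plain Ollama usage, independent of this crate):

ollama run llama3.2 "Say hello in five words."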

GitHub Copilot CLI

Install the Copilot CLI and ensure the copilot binary is available on your PATH.

Authenticate it first if required by your environment.

Quick Start

Ollama

use cliai::{AiBackend, GenerateRequest, Ollama};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ai = Ollama::new("llama3.2");

    let response = ai.generate(
        &GenerateRequest::new("Write a haiku about Rust.")
            .with_instructions("Be concise.")
    )?;

    println!("{}", response.text);
    Ok(())
}

Copilot

use cliai::{AiBackend, Copilot, GenerateRequest};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ai = Copilot::new();

    let response = ai.generate(
        &GenerateRequest::new("Explain ownership in Rust.")
    )?;

    println!("{}", response.text);
    Ok(())
}

Examples

Run the packaged examples:

cargo run --example ollama_basic
cargo run --example copilot_basic
cargo run --example commit_message
cargo run --example changelog
cargo run --example rebase_plan

API Overview

Trait

pub trait AiBackend {
    fn name(&self) -> &'static str;
    fn generate(&self, request: &GenerateRequest)
        -> Result<GenerateResponse, AiError>;
}
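
Because every backend implements AiBackend, application code can stay backend-agnostic and pick the provider at runtime. A minimal sketch, assuming GenerateResponse and AiError are re-exported at the crate root as the overview above suggests (the ask helper and the USE_COPILOT switch are illustrative, not part of the crate):

use cliai::{AiBackend, AiError, Copilot, GenerateRequest, GenerateResponse, Ollama};

// Accepts any backend through the trait, so callers choose the CLI.
fn ask(ai: &dyn AiBackend, prompt: &str) -> Result<GenerateResponse, AiError> {
    eprintln!("using backend: {}", ai.name());
    ai.generate(&GenerateRequest::new(prompt))
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Swap providers here without changing any downstream code.
    let ai: Box<dyn AiBackend> = if std::env::var_os("USE_COPILOT").is_some() {
        Box::new(Copilot::new())
    } else {
        Box::new(Ollama::new("llama3.2"))
    };

    let response = ask(ai.as_ref(), "Explain ownership in Rust.")?;
    println!("{}", response.text);
    Ok(())
}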

Request

GenerateRequest {
    prompt: String,
    instructions: Option<String>,
}
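
Requests use the builder style shown in Quick Start; instructions are optional and default to None:

let request = GenerateRequest::new("Summarise this diff.")
    .with_instructions("One sentence, imperative mood.");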

Response

GenerateResponse {
    text: String,
}

Backend Configuration

Ollama

let ai = Ollama::new("llama3.2")
    .with_model("mistral")   // override the model set in new()
    .with_bin("ollama");     // override the binary name or path

Copilot

let ai = Copilot::new()
    .with_bin("copilot");

Error Handling

Errors are returned as AiError:

  • I/O failures
  • UTF-8 decoding failures
  • External command failures

match ai.generate(&request) {
    Ok(res) => println!("{}", res.text),
    Err(err) => eprintln!("error: {}", err),
}

Design Notes

This crate intentionally shells out to installed CLI tools instead of embedding provider SDKs. This is useful when:

  • You already use local AI CLIs
  • You want fewer dependencies
  • You prefer OS-level credential handling
  • You need portable internal tooling

Security Notes

Prompts are sent to external binaries you execute. Review the trust model, permissions, and data handling of each installed CLI and model provider before use.

Roadmap

Potential future additions:

  • Streaming responses
  • Structured outputs
  • Timeout configuration
  • Retries
  • Async support
  • Additional backends

Contributing

Issues and pull requests are welcome.

License

Licensed under either of:

  • MIT License
  • Apache License 2.0

at your option.