# slippy

AI linter for Rust projects (`slippy-cli` 0.1.0).

Some antipatterns are hard to detect programmatically, but this LLM linter can sometimes somewhat-reliably detect some of them.

Install with `cargo install slippy-cli`, then run `cargo slippy --help` to get started.

## Available Lints

Check the [lint list](https://sliman4.github.io/slippy/index.html) (website vibe-coded because it fits the theme and I'm lazy lol)

## Configuration

Create a `slippy.toml` file; you can copy [example-slippy.toml](./example-slippy.toml) as a starting point.
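
A rough sketch of what such a file might contain is below. The exact key names here are assumptions based on the terms used in this README (`fast`/`best` models, providers, service tier); treat [example-slippy.toml](./example-slippy.toml) as the source of truth for the real schema.

```toml
# Hypothetical sketch -- check example-slippy.toml for the real schema.
provider = "openai"      # or "ollama"
fast = "gpt-5.4-nano"    # cheap model used for early checks
best = "gpt-5.4"         # stronger model used for thorough checks
service_tier = "flex"    # optional: ~50% cheaper, slower, may time out
```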

Supported providers: Ollama, OpenAI. Claude coming soon.

Recommended models for OpenAI: `gpt-5.4-nano` as the `fast` model and `gpt-5.4` as the `best` one. Use `service_tier="flex"` to save 50% of the cost by sacrificing speed. Flex tier can sometimes time out, but in that case slippy retries the request with the default service tier. Slippy is tested and developed against `gpt-5.4-mini`, on the assumption that if the mini model works with the test cases, a larger model should work well with real codebases.

Recommended local models for Ollama (from smallest to largest, pick based on your VRAM):
- `nemotron-3-nano:4b` (5.5GB)
- `glm-4.7-flash:q4_K_M` (20.7GB)
- `nemotron-cascade-2:30b` (don't know, please contribute if you can run it locally)
- `glm-4.7-flash:q8_0` (don't know, please contribute if you can run it locally)
- `qwen3-coder-next:q4_K_M` (don't know, please contribute if you can run it locally)

You can also run `:cloud` models with Ollama.

## Architecture

Every lint runs the following steps for each file separately:

- Early check. Uses the cheap `fast` model or heuristics that don't produce false negatives (e.g. checking whether the file has a function that returns `Result`/`Option` for the `option_result_misuse` lint). Inference cost scales as `O(files_checked * lints_enabled)`
- If the early check passes, check more thoroughly with the `best` model
- If anything is found, render a clippy-like message with the warning, code annotations, and a proposed fix
- Send it to the `best` model (in the same conversation that produced the lint) to make sure the lint looks right
- If all is OK, display it to the user. If not, the model produces a fixed version of the lint, which is verified again, retrying up to 3 times before giving up and printing what we have
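
The steps above can be sketched roughly as follows. All function names and stubs here are illustrative, not slippy's real internals; the LLM calls are replaced with placeholders.

```rust
// Hypothetical sketch of the per-file lint flow; the stubs stand in for
// model calls, and the names are illustrative, not slippy's real API.

/// Early heuristic for `option_result_misuse`: any function returning
/// `Result`/`Option` triggers the deeper check. False positives are fine
/// here; false negatives are not.
fn early_check(source: &str) -> bool {
    source.contains("-> Result<") || source.contains("-> Option<")
}

/// Stub standing in for the thorough check with the `best` model.
fn thorough_check(_source: &str) -> Option<String> {
    Some("warning: this `Result` is silently discarded".to_string())
}

/// Stub standing in for the model double-checking its own rendered lint.
fn verify(_warning: &str) -> bool {
    true
}

/// Stub standing in for the model producing a fixed version of the lint.
fn fix(warning: &str) -> String {
    warning.to_string()
}

fn check_file(source: &str) -> Vec<String> {
    // 1. Early check with the cheap model or heuristics.
    if !early_check(source) {
        return Vec::new();
    }
    // 2. Thorough check with the `best` model.
    let Some(mut warning) = thorough_check(source) else {
        return Vec::new();
    };
    // 3-5. Render the warning, then let the model verify it, retrying a
    // fixed-up version up to 3 times before giving up and keeping it.
    for _ in 0..3 {
        if verify(&warning) {
            break;
        }
        warning = fix(&warning);
    }
    vec![warning]
}

fn main() {
    // No Result/Option-returning function: skipped by the early check.
    assert!(check_file("fn main() {}").is_empty());
    // Passes the early check, so the deeper (stubbed) check runs.
    assert_eq!(check_file("fn f() -> Option<u8> { None }").len(), 1);
}
```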

This flow is hardcoded in every lint separately. It can differ for specific lints, but in general it's not agentic and doesn't involve long conversations with tool calls.

The LLM-generated warnings are then compared against the code, and the necessary paths are annotated using [annotate-snippets](https://github.com/rust-lang/annotate-snippets-rs).

After that, slippy checks whether the code segment is inside a module or code block marked with a directive such as `/// allow(slippy::<lint>)`. Note that this is rustdoc notation (`///` or `//!`), not a normal `//` comment: `syn` ignores regular comments but transforms rustdoc comments into `#[doc = " allow(...)"]` attributes, which slippy can inspect. Proper `#[allow]` integration would be tricky, since slippy would have to be registered as a rustc driver (like clippy) and somehow supported in rust-analyzer and other tooling.

The scope of a directive is selected (or rather, intended to be selected) the same way as `#[allow]`: it works on function bodies, code blocks, type definitions, `mod module {}`, `mod file;`, and so on. If the entire file is `allow`ed and has no `warn`/`deny`/`forbid` sections, the lint is skipped for that file.
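
For example, using the `option_result_misuse` lint mentioned above (the surrounding code is illustrative), a directive might look like this:

```rust
/// allow(slippy::option_result_misuse)
mod legacy {
    // slippy skips the `option_result_misuse` lint for everything in this
    // module; rustc and clippy see only an ordinary doc comment.
    pub fn find(xs: &[i32], target: i32) -> Option<usize> {
        xs.iter().position(|&x| x == target)
    }
}

fn main() {
    assert_eq!(legacy::find(&[1, 2, 3], 2), Some(1));
}
```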