CHACE
Pronounced: /tʃeɪs/ (chase)
CHamal's AutoComplete Engine
Overview
CHACE is a Rust-based engine designed for controlled AI-assisted code completion. Unlike traditional AI coding assistants that generate large code blocks, CHACE:
- Targets empty function definitions at the cursor position
- Extracts the function signature and documentation (docstrings)
- Sends only the minimal context to the LLM
- Generates and inserts the function body
This approach keeps the AI focused on a single, well-scoped task, reduces token usage, and produces more predictable results.
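For example, given an empty function like `sum_even` below, CHACE would send only the signature and doc comment to the model and insert the body between the braces. The function and the generated body here are a constructed illustration, not actual engine output:

```rust
/// Returns the sum of all even numbers in `values`.
fn sum_even(values: &[i64]) -> i64 {
    // Originally empty — CHACE inserts a generated body like this one:
    values.iter().filter(|&&v| v % 2 == 0).sum()
}

fn main() {
    println!("{}", sum_even(&[1, 2, 3, 4])); // prints 6
}
```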
Inspiration
CHACE is heavily inspired by ThePrimeagen's approach to AI-assisted coding. While his implementation is built in Lua and integrates Opencode (and is not yet open-sourced), CHACE takes a different architectural approach: a standalone Rust binary that operates independently and can be integrated into any editor through plugins. This design keeps CHACE editor-agnostic, lightweight, and easy to extend to other development environments.
Architecture
CHACE runs as a Unix socket server (/tmp/chace.sock) that accepts JSON requests containing source code and cursor position. The engine:
- Parses the source code using Tree-sitter
- Locates empty functions at the cursor
- Sends function signatures to the configured LLM backend
- Returns the generated function body with precise byte offsets
Supported LLM Backends
- Google Gemini (gemini-2.5-flash)
- Groq (gpt-oss-20b)
Language Support
Currently supports:
- Rust
Installation
Prerequisites
- Rust toolchain with support for the 2024 edition
- API keys for LLM providers
Build from Source
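A typical build looks like the following (the binary name is assumed from the project name; check `Cargo.toml` for the actual name):

```shell
# From the repository root, build an optimized binary
cargo build --release

# The resulting binary (name assumed) lands in target/release/chace
```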
Configuration
Set the required environment variables:
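For example (the variable names below are assumptions based on the supported backends; check the source for the exact names the engine reads):

```shell
# API key for the Google Gemini backend (variable name assumed)
export GEMINI_API_KEY="your-gemini-key"

# API key for the Groq backend (variable name assumed)
export GROQ_API_KEY="your-groq-key"
```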
Usage
Running the Server
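Start the server by running the built binary (the name and path are assumed from a release build):

```shell
# Binary name assumed — adjust to your build output
./target/release/chace
```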
The server listens on /tmp/chace.sock and handles concurrent connections.
Request Format
Send JSON-encoded requests via the Unix socket:
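A request might look like the following. The field names (`source`, `cursor`) are illustrative assumptions — the engine expects the full source text and a byte offset for the cursor, but the exact schema should be confirmed against the source:

```json
{
  "source": "/// Adds two numbers.\nfn add(a: i32, b: i32) -> i32 {\n}\n",
  "cursor": 52
}
```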
Response Format
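The response carries the generated body and the byte offsets at which to insert it. The field names below are illustrative assumptions, not the documented schema:

```json
{
  "body": "    a + b\n",
  "start": 51,
  "end": 51
}
```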
IDE Integration
CHACE is designed to be integrated with IDEs via plugins. See chace.nvim for reference.
Protocol
CHACE uses a line-delimited JSON protocol over Unix sockets:
- Each request is a single JSON object terminated by a newline
- Each response is a single JSON object terminated by a newline
- Multiple requests can be sent over the same connection
- Connections are handled asynchronously
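A minimal client sketch of this protocol is shown below. The framing (one newline-terminated JSON object per message over `/tmp/chace.sock`) follows the description above; the request field names (`source`, `cursor`) are assumptions for illustration:

```rust
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;

/// Builds one newline-terminated JSON request (field names hypothetical).
fn build_request(source: &str, cursor: usize) -> String {
    format!("{{\"source\":{:?},\"cursor\":{}}}\n", source, cursor)
}

fn main() -> std::io::Result<()> {
    let req = build_request("/// Doc\nfn add(a: i32, b: i32) -> i32 {\n}\n", 40);
    match UnixStream::connect("/tmp/chace.sock") {
        Ok(mut stream) => {
            // Each request is a single JSON object terminated by a newline
            stream.write_all(req.as_bytes())?;
            // Each response is likewise a single newline-terminated JSON object
            let mut line = String::new();
            BufReader::new(stream).read_line(&mut line)?;
            println!("response: {line}");
        }
        Err(_) => println!("server not running; request would be: {req}"),
    }
    Ok(())
}
```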
Development
Adding Language Support
To add support for a new language:
- Add the Tree-sitter grammar to `Cargo.toml`
- Create a new backend in `src/languages/`
- Implement function detection and signature extraction
- Update the request handler in `main.rs`
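The steps above can be sketched as follows. The type and trait names here are hypothetical — the real interface lives in `src/languages/` and may differ:

```rust
/// An empty function found at the cursor (shape assumed for illustration).
struct EmptyFunction {
    signature: String,
    docstring: Option<String>,
    body_start: usize, // byte offset where the generated body is inserted
    body_end: usize,
}

/// Hypothetical interface a language backend would implement.
trait LanguageBackend {
    /// Locate an empty function containing `cursor`, if any.
    fn find_empty_function(&self, source: &str, cursor: usize) -> Option<EmptyFunction>;
}

fn main() {
    // A backend for "fn add(a: i32, b: i32) -> i32 {}" might report:
    let found = EmptyFunction {
        signature: String::from("fn add(a: i32, b: i32) -> i32"),
        docstring: None,
        body_start: 31,
        body_end: 31,
    };
    println!("{} → insert at {}..{}", found.signature, found.body_start, found.body_end);
}
```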
Adding LLM Backends
To add a new LLM provider:
- Create a new module in `src/ai/`
- Implement the `LLMBackend` trait
- Add initialization in `main.rs`
- Update the backend selection logic
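A new provider might look roughly like the sketch below. The `LLMBackend` trait name comes from the steps above, but its method names and signatures are assumptions — check `src/ai/` for the real definition:

```rust
/// Assumed shape of the LLMBackend trait — see src/ai/ for the real one.
trait LLMBackend {
    /// Given a function signature and docstring, return a generated body.
    fn complete(&self, signature: &str, doc: &str) -> Result<String, String>;
}

/// A stub provider used here only to show the shape of an implementation.
struct StubProvider;

impl LLMBackend for StubProvider {
    fn complete(&self, signature: &str, _doc: &str) -> Result<String, String> {
        // A real backend would call the provider's HTTP API here.
        Ok(format!("    todo!() // generated body for `{signature}`"))
    }
}

fn main() {
    let backend = StubProvider;
    let body = backend
        .complete("fn add(a: i32, b: i32) -> i32", "Adds two numbers.")
        .unwrap();
    println!("{body}");
}
```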
License
MIT License - see LICENSE for details.