llama_cpp-rs
Safe, high-level Rust bindings to the C++ project of the same name, meant to be as user-friendly as possible. Run GGUF-based large language models directly on your CPU in fifteen lines of code, no ML experience required!
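To follow along, add the crate to your `Cargo.toml` (the crate is published to crates.io as `llama_cpp`; the version below is illustrative, so check crates.io for the latest release):

```toml
[dependencies]
llama_cpp = "0.3"
```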
```rust
use std::io::{self, Write};
use llama_cpp::standard_sampler::StandardSampler;
use llama_cpp::{LlamaModel, LlamaParams, SessionParams};

// Create a model from anything that implements `AsRef<Path>`:
let model = LlamaModel::load_from_file("path/to/model.gguf", LlamaParams::default()).expect("Could not load model");
// A `LlamaModel` holds the weights shared across many _sessions_; while your model may be
// several gigabytes large, a session is typically a few dozen to a hundred megabytes!
let mut ctx = model.create_session(SessionParams::default()).expect("Failed to create session");
// You can feed anything that implements `AsRef<[u8]>` into the model's context.
ctx.advance_context("This is the story of a man named Stanley.").unwrap();
// LLMs are typically used to predict the next word in a sequence. Let's generate some tokens!
let max_tokens = 1024;
let mut decoded_tokens = 0;
// `ctx.start_completing_with` creates a worker thread that generates tokens. When the completion
// handle is dropped, tokens stop generating!
let completions = ctx.start_completing_with(StandardSampler::default(), max_tokens).into_strings();
for completion in completions {
    print!("{completion}");
    let _ = io::stdout().flush();
    decoded_tokens += 1;
    if decoded_tokens > max_tokens {
        break;
    }
}
```
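Because the weights live in the `LlamaModel` rather than in the session, one loaded model can cheaply back several independent contexts. Here is a minimal sketch reusing `model` and `SessionParams` from the snippet above (the prompt strings are just placeholders):

```rust
// Each session keeps its own context state; the multi-gigabyte weights are shared.
let mut story = model.create_session(SessionParams::default()).expect("Failed to create session");
let mut chat = model.create_session(SessionParams::default()).expect("Failed to create session");
// Advancing one session leaves the other untouched.
story.advance_context("Once upon a time,").unwrap();
chat.advance_context("User: How do neural networks work?").unwrap();
```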
This repository hosts the high-level bindings (`crates/llama_cpp`) as well as automatically generated bindings to llama.cpp's low-level C API (`crates/llama_cpp_sys`). Contributions are welcome; just keep the UX clean!
License
MIT or Apache-2.0, at your option (the "Rust" license). See LICENSE-MIT and LICENSE-APACHE.