# llama_cpp-rs
Safe, high-level Rust bindings to the C++ project of the same name, meant to be as user-friendly as possible. Run GGUF-based large language models directly on your CPU in fifteen lines of code, no ML experience required!
```rust
use llama_cpp::LlamaModel;

// Create a model from anything that implements `AsRef<Path>`:
let model = LlamaModel::load_from_file("path_to_model.gguf").expect("Could not load model");

// A `LlamaModel` holds the weights shared across many _sessions_; while your model may be
// several gigabytes large, a session is typically a few dozen to a hundred megabytes!
let mut ctx = model.create_session();

// You can feed anything that implements `AsRef<[u8]>` into the model's context.
ctx.advance_context("This is the story of a man named Stanley.").unwrap();

// LLMs are typically used to predict the next word in a sequence. Let's generate some tokens!
let max_tokens = 1024;
let mut decoded_tokens = 0;

// `ctx.get_completions` creates a worker thread that generates tokens. When the completion
// handle is dropped, tokens stop generating!
let mut completions = ctx.get_completions();

while let Some(next_token) = completions.next_token() {
    // How a token converts back to text depends on the crate version; `detokenize`
    // here is an assumed helper, so check the crate docs for the exact call.
    println!("{}", String::from_utf8_lossy(&next_token.detokenize()));

    decoded_tokens += 1;
    if decoded_tokens > max_tokens {
        break;
    }
}
```
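Because the weights live in the `LlamaModel` and each session only adds its own context state, one model can serve several independent sessions at once. A minimal sketch, assuming the same `create_session`/`advance_context` calls shown above (the prompt strings are purely illustrative):

```rust
// One set of multi-gigabyte weights, loaded once...
let model = LlamaModel::load_from_file("path_to_model.gguf").expect("Could not load model");

// ...shared by two independent, comparatively tiny sessions.
let mut story_a = model.create_session();
let mut story_b = model.create_session();

// Each session tracks its own context; advancing one does not affect the other.
story_a.advance_context("Once upon a time,").unwrap();
story_b.advance_context("It was a dark and stormy night.").unwrap();
```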
This repository hosts the high-level bindings (`crates/llama_cpp`) as well as automatically generated bindings to llama.cpp's low-level C API (`crates/llama_cpp_sys`). Contributions are welcome; just keep the UX clean!
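For anything the high-level crate doesn't cover, the raw C API is reachable through `llama_cpp_sys`. A minimal sketch, assuming the bindgen output exposes llama.cpp's `llama_backend_init`/`llama_backend_free` with the signatures from the pinned llama.cpp revision (these have changed across upstream releases):

```rust
// Raw FFI is unsafe; these names mirror llama.cpp's C API as generated by bindgen.
// The `bool` NUMA flag matches older llama.cpp revisions and is an assumption here.
unsafe {
    llama_cpp_sys::llama_backend_init(false);
    // ... raw llama.cpp calls would go here ...
    llama_cpp_sys::llama_backend_free();
}
```

In practice, reach for the high-level crate first and drop to the `-sys` layer only for upstream features it doesn't wrap yet.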
## License

MIT or Apache-2.0, at your option (the "Rust" license). See `LICENSE-MIT` and `LICENSE-APACHE`.