Crate llama_cpp_2
Bindings to the llama.cpp library.
As llama.cpp is a very fast-moving target, this crate does not attempt to create a stable API with all the Rust idioms. Instead, it provides safe wrappers around nearly direct bindings to llama.cpp. This makes it easier to keep up with changes in llama.cpp, but it does mean that the API is not as ergonomic as it could be.
§Examples
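The examples section is empty in this rendering; as a minimal sketch, loading a model and tokenizing a prompt might look like the following. The exact paths and signatures (`LlamaBackend::init`, `LlamaModel::load_from_file`, `str_to_token`, `AddBos`) are assumptions about this crate's API and may differ between versions:

```rust
use llama_cpp_2::llama_backend::LlamaBackend;
use llama_cpp_2::model::params::LlamaModelParams;
use llama_cpp_2::model::{AddBos, LlamaModel};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize the llama.cpp backend; it must outlive any models or contexts.
    let backend = LlamaBackend::init()?;

    // Load a GGUF model from disk. The path is a placeholder.
    let params = LlamaModelParams::default();
    let model = LlamaModel::load_from_file(&backend, "model.gguf", &params)?;

    // Convert a prompt into a token sequence, prepending the BOS token.
    let tokens = model.str_to_token("Hello, world!", AddBos::Always)?;
    println!("prompt is {} tokens", tokens.len());
    Ok(())
}
```

Since the wrappers stay close to the underlying C API, errors from each step surface as the dedicated error enums listed below rather than a single opaque error type.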
§Feature Flags
- `cublas` enables CUDA GPU support.
- `sampler` adds the `context::sample::sampler` struct for a more rusty way of sampling.
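Enabling these flags from a downstream crate would look like the following Cargo.toml fragment (the version number is a placeholder):

```toml
[dependencies]
llama-cpp-2 = { version = "0.1", features = ["cublas", "sampler"] }
```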
Modules§
- Safe wrapper around `llama_context`.
- The grammar module contains the grammar parser and the grammar struct.
- Representation of an initialized llama backend.
- Safe wrapper around `llama_batch`.
- A safe wrapper around `llama_model`.
- Safe wrapper around `llama_timings`.
- Safe wrappers around `llama_token_data` and `llama_token_data_array`.
- Utilities for working with `llama_token_type` values.
Enums§
- Failed to apply model chat template.
- There was an error while getting the chat template from a model.
- Failed to decode a batch.
- An error that can occur when embedding-related functions fail.
- All errors that can occur in the llama-cpp crate.
- Failed to load a context.
- An error that can occur when loading a model.
- Failed to apply model chat template.
- Failed to convert a string to a token sequence.
- An error that can occur when converting a token to a string.
Functions§
- Get the time in microseconds according to ggml.
- Checks whether mlock is supported.
- Get the time (in microseconds) according to llama.cpp.
- Get the max number of devices according to llama.cpp (these are generally CUDA devices).
- Checks whether memory locking (mlock) is supported according to llama.cpp.
- Checks whether memory mapping (mmap) is supported according to llama.cpp.
Type Aliases§
- A fallible result from a llama.cpp function.