rig-llama-cpp 0.1.4

Rig completion provider for local GGUF models via llama.cpp, with streaming, tool calling, reasoning, and multimodal (mtmd) support.

There is very little structured metadata available for this crate's features. Check the main library docs, README, or Cargo.toml in case the author documented them there.

This version has 6 feature flags, none of them enabled by default.

default

This feature flag does not enable additional features.

cuda

Enables the llama.cpp CUDA backend for NVIDIA GPU acceleration.

metal

Enables the llama.cpp Metal backend for GPU acceleration on Apple hardware.

mtmd

Enables llama.cpp's multimodal (mtmd) support, allowing image and audio inputs alongside text.

openmp

Enables OpenMP for multithreaded CPU inference.

rocm

Enables the llama.cpp ROCm/HIP backend for AMD GPU acceleration.

vulkan

Enables the llama.cpp Vulkan backend for cross-vendor GPU acceleration.
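
As a sketch of how these flags are opted into (feature names are taken from the list above, and the version pin matches this page's 0.1.4; pick the backend that matches your hardware), a Cargo.toml dependency entry might look like:

```toml
[dependencies]
# No backend is enabled by default; "cuda" and "mtmd" here are
# illustrative choices (NVIDIA GPU acceleration plus multimodal input).
rig-llama-cpp = { version = "0.1.4", features = ["cuda", "mtmd"] }
```

Only the feature flags you list are compiled in, so a CPU-only build can simply omit the `features` key (optionally keeping `openmp` for multithreading).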