llama-runner 0.1.1

A straightforward Rust library for running llama.cpp models locally on device.

This version has 4 feature flags, none of which are enabled by default.

cuda: NVIDIA GPU support via the CUDA backend

metal: Apple GPU support via the Metal backend

rocm: AMD GPU support via the ROCm backend

vulkan: cross-vendor GPU support via the Vulkan backend
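
Since no backend is enabled by default, a GPU feature has to be requested explicitly in the dependent crate's manifest. As a minimal sketch (assuming the crate is pulled from crates.io; the choice of metal here is just an example):

```toml
[dependencies]
# Select the GPU backend for your platform; with no features,
# the crate falls back to CPU-only inference.
llama-runner = { version = "0.1.1", features = ["metal"] }
```

The same effect can be had from the command line with `cargo add llama-runner --features metal`.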