bitnet-inference 1.0.0

High-performance inference engine for BitNet models

Feature flags

There is very little structured metadata to build this page from; check the main library docs, the README, or Cargo.toml in case the author documented the features there.

This version has 11 feature flags, 6 of them enabled by default; a Cargo.toml sketch showing how to select them follows the feature list below.

default
batching (default): does not enable additional features
metal (default)
simd (default): does not enable additional features
std (default): does not enable additional features
streaming (default): does not enable additional features
tokenizers (default)
apple-silicon
benchmarks
generation
mlx
mlx-inference
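
Feature selection happens in the consuming crate's Cargo.toml. As a minimal sketch (standard Cargo feature syntax, not something documented on this page), the fragment below opts out of the default set and enables a reduced, purely illustrative combination of flags:

    [dependencies]
    # With no options, Cargo enables the default set listed above
    # (batching, metal, simd, std, streaming, tokenizers).
    # The entry below opts out of the defaults and selects an illustrative
    # subset explicitly; optional flags such as apple-silicon or mlx could
    # be added to the same list.
    bitnet-inference = { version = "1.0.0", default-features = false, features = ["std", "simd", "tokenizers"] }

Whether the optional flags (apple-silicon, benchmarks, generation, mlx, mlx-inference) pull in extra system or toolchain dependencies is not stated here, so check the crate's Cargo.toml or README before enabling them.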