aha 0.2.4

aha is a model inference library; it now supports Qwen (2.5VL/3/3VL/3.5/ASR), MiniCPM4, VoxCPM/1.5, DeepSeek-OCR/2, Hunyuan-OCR, PaddleOCR-VL/1.5, RMBG2.0, GLM (ASR-Nano-2512/OCR), Fun-ASR-Nano-2512, and LFM (2/2.5).

There is currently very little structured metadata to build this page from. Check the main library docs, README, or Cargo.toml in case the author documented the features there.

This version has 6 feature flags, 0 of them enabled by default.

candle-flash-attn
cuda
ffmpeg
ffmpeg-next
flash-attn
metal
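Since none of these flags are enabled by default, a consumer has to opt in explicitly. A minimal sketch of enabling features via standard Cargo syntax follows; the choice of `cuda` plus `flash-attn` is an assumption for illustration, and whether any given combination is supported should be checked against the crate's own Cargo.toml.

```toml
# Hypothetical Cargo.toml snippet (standard Cargo feature syntax).
# "cuda" and "flash-attn" are taken from the flag list above; whether
# they are meant to be combined is an assumption, not documented here.
[dependencies]
aha = { version = "0.2.4", features = ["cuda", "flash-attn"] }
```

Alternatively, features can be listed at odds-and-ends times on the command line with `cargo add aha --features cuda,flash-attn`, which writes the equivalent entry into Cargo.toml.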