# transcribe-cli

`transcribe-cli` is a Rust command-line transcription pipeline built on Whisper and CTranslate2.

It supports:

- CPU-optimized transcription
- optional NVIDIA CUDA execution
- automatic Whisper model download into `models/`
- local files or `http/https` audio URLs
- streaming transcription modes
- model cleanup commands

## Install

From a local checkout:

```bash
cargo install --path . --locked
```

With CUDA support:

```bash
cargo install --path . --locked --features cuda
```

## Usage

```bash
transcribe-cli --model small audio.mp3
transcribe-cli --model medium --stream audio.flac
transcribe-cli --model tiny https://example.com/audio.wav
```
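The transcript is plain text, so ordinary shell redirection works; a minimal sketch, assuming the transcript is written to stdout:

```bash
# save the transcript to a file (assumes output goes to stdout)
transcribe-cli --model small audio.mp3 > transcript.txt
```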

## Features

- `cuda`: enable CUDA support with dynamic loading
- `cuda-static`: enable static CUDA support
- `cuda-dynamic-loading`: alias for the dynamic CUDA path
- `cudnn`: enable cuDNN on top of CUDA
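The features above can be combined at install time using Cargo's standard comma-separated `--features` syntax; for example, to enable cuDNN on top of dynamically loaded CUDA:

```bash
# enable CUDA plus cuDNN acceleration in one install
cargo install --path . --locked --features cuda,cudnn
```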

## Notes

- Whisper models are downloaded automatically on first use.
- By default models are stored in `models/` next to the executable unless `--models-dir` is set.
- Whisper decoding is handled by a local, in-project wrapper around the CTranslate2 `sys::Whisper` API and Hugging Face `tokenizers`.
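As a sketch of the `--models-dir` override mentioned above (the cache path here is purely illustrative):

```bash
# keep downloaded Whisper models in a shared cache instead of ./models
transcribe-cli --model small --models-dir ~/.cache/whisper-models audio.mp3
```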