walrus-model 0.0.2

Walrus LLM provider implementations
# walrus-model

LLM provider implementations for Walrus.

Supports DeepSeek, OpenAI-compatible APIs, Claude, and local inference (via
mistral.rs). Includes `ProviderManager` for routing across multiple providers
and `ProviderConfig` for per-provider configuration.

## Features

- `local` (default) — Local model inference via mistral.rs
- `cuda` — NVIDIA CUDA GPU acceleration
- `metal` — Apple Metal GPU acceleration
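As a sketch, the features above can be selected in `Cargo.toml` using standard Cargo feature syntax (the crate name, version, and feature names are taken from this page; exact defaults may differ):

```toml
[dependencies]
# Default features include `local` (mistral.rs inference).
walrus-model = "0.0.2"

# Or enable GPU acceleration explicitly, e.g. CUDA:
# walrus-model = { version = "0.0.2", features = ["cuda"] }

# Or Apple Metal:
# walrus-model = { version = "0.0.2", features = ["metal"] }
```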

## License

GPL-3.0