litellm-rs 0.4.16

A high-performance AI gateway written in Rust, providing OpenAI-compatible APIs with intelligent routing, load balancing, and enterprise features.
Documentation
//! Petals Distributed LLM Provider
//!
//! Petals allows running large language models collaboratively through distributed inference.
//! This implementation provides access to chat completions through Petals' OpenAI-compatible API.

mod config;
mod model_info;
mod provider;

pub use config::PetalsConfig;
pub use model_info::{PetalsModel, get_model_info};
pub use provider::PetalsProvider;