atomr-infer-runtime-litellm 0.8.0

LiteLLM proxy provider for atomr-infer: implements the ModelRunner interface against a LiteLLM proxy, whose unified, OpenAI-compatible API gateway fronts any provider LiteLLM supports.
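The package's actual ModelRunner interface is not shown here, but a runner targeting a LiteLLM proxy typically posts OpenAI-compatible `/chat/completions` requests and routes on the model name. A minimal, network-free sketch under those assumptions (the class name, `run` method, and injected transport are all illustrative, not this package's real API):

```python
from typing import Callable

class LiteLLMRunner:
    """Illustrative runner against a LiteLLM proxy's OpenAI-compatible
    /chat/completions endpoint. The HTTP transport is injected so the
    sketch stays self-contained and testable without a live proxy."""

    def __init__(self, base_url: str, model: str,
                 transport: Callable[[str, dict], dict]):
        self.url = base_url.rstrip("/") + "/chat/completions"
        self.model = model
        self.transport = transport  # e.g. a thin wrapper over requests.post

    def run(self, prompt: str, **params) -> str:
        # LiteLLM routes on the model string (e.g. "openai/gpt-4o-mini"),
        # so the payload is plain OpenAI chat-completions JSON.
        payload = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            **params,
        }
        response = self.transport(self.url, payload)
        return response["choices"][0]["message"]["content"]

# Stub transport standing in for an HTTP POST to the proxy.
def fake_transport(url: str, payload: dict) -> dict:
    user_text = payload["messages"][0]["content"]
    return {"choices": [{"message": {"role": "assistant",
                                     "content": f"echo: {user_text}"}}]}

runner = LiteLLMRunner("http://localhost:4000",
                       "openai/gpt-4o-mini", fake_transport)
print(runner.run("ping"))  # → echo: ping
```

Swapping `fake_transport` for a real HTTP client pointed at a running LiteLLM proxy is the only change needed to go live; the payload shape stays the same across every backend LiteLLM fronts.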