# genai - Multiprovider Generative AI Client
```toml
# cargo.toml
genai = { version = "=0.0.3", features = ["with-all-providers"] }
```
The goal of this library is to provide a common and ergonomic single API to many generative AI providers, such as OpenAI and Ollama.
- **IMPORTANT 1** `0.0.x` is still in heavy development. Cherry-pick code; don't depend on it. `0.0.x` releases will be yanked.
- **IMPORTANT 2** `0.1.x` will still have some breaking changes in patch releases, so make sure to lock your version, e.g., `genai = "=0.1.0"`. In short, `0.1.x` can be considered a "beta release."
- **IMPORTANT 3** This is not intended to be a replacement for `async-openai` and `ollama-rs`, but rather to tackle the simpler, lowest common denominator of chat generation use cases, where API depth is less of a priority than API commonality.
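To give a feel for the "single API across providers" goal, here is a minimal chat sketch. The names used (`Client`, `ChatRequest`, `ChatMessage`, `exec_chat`) are assumptions based on the crate's examples and may not match this `0.0.x` release exactly; treat it as a shape sketch, not the confirmed API.

```rust
// Minimal chat sketch. NOTE: Client, ChatRequest, ChatMessage, and exec_chat
// are assumed names; the 0.0.x API may differ. Requires the tokio runtime.
use genai::chat::{ChatMessage, ChatRequest};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One client, many providers; the model name selects the adapter.
    let client = Client::default();

    let chat_req = ChatRequest::new(vec![
        ChatMessage::system("Answer in one concise sentence."),
        ChatMessage::user("Why is the sky blue?"),
    ]);

    // The call shape stays the same whether the model is served by
    // OpenAI or by a local Ollama instance.
    let chat_res = client.exec_chat("gpt-3.5-turbo", chat_req, None).await?;
    println!("{}", chat_res.content_text_as_str().unwrap_or("NO ANSWER"));

    Ok(())
}
```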
## Library Focus
- Focuses on ergonomics and commonality first, and depth second. (If you need client API completeness, use `async-openai` and `ollama-rs`; they are awesome and relatively simple to use.)
- Initially, this library will mostly focus on the text chat API (no simple generation, images, or even function calling in the first stage).
- The `0.1.x` versions will work, but the APIs will change in patch releases, not strictly following semver.
- For now, it focuses on OpenAI and Ollama, using `async-openai` and `ollama-rs`.
- Version `0.2.x` will follow semver more strictly.
## Notes on Possible Direction
- Function calling will probably come before image support. The challenge is to normalize it between the OpenAI function API, which is relatively mature, and the open-model ones, which are a little more ad hoc but still relatively well supported by some open models. (See the first sketch after this list.)
- One of the goals is serverless support for open models (i.e., without an Ollama server). floneum seems very promising but probably heavy, hence the feature approach for that provider. (See the cargo.toml sketch after this list.)
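As a thought experiment on the normalization challenge above, one plausible shape is a provider-agnostic tool definition that each adapter lowers into its native form: OpenAI's structured `tools` JSON on one side, a prompt-injected spec for open models on the other. Everything below is hypothetical; none of these types exist in the crate.

```rust
// Purely hypothetical sketch -- NOT part of genai's API.
use serde_json::{json, Value};

#[derive(Debug, Clone)]
pub struct ToolSpec {
    pub name: String,
    pub description: String,
    /// JSON Schema describing the tool's arguments.
    pub parameters: Value,
}

impl ToolSpec {
    /// Lower into an OpenAI-style `tools` entry (mature, well-specified).
    pub fn to_openai_json(&self) -> Value {
        json!({
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters,
            }
        })
    }

    /// Lower into a plain-text spec for open models, which typically
    /// receive tool descriptions through the prompt itself.
    pub fn to_prompt_spec(&self) -> String {
        format!(
            "Tool `{}`: {}\nArguments (JSON Schema): {}",
            self.name, self.description, self.parameters
        )
    }
}

fn main() {
    let tool = ToolSpec {
        name: "get_weather".into(),
        description: "Get the current weather for a city".into(),
        parameters: json!({
            "type": "object",
            "properties": { "city": { "type": "string" } },
            "required": ["city"]
        }),
    };
    println!("{}", tool.to_openai_json());
    println!("{}", tool.to_prompt_spec());
}
```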
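On the feature approach for heavy providers: the idea would be that consumers opt into only the backends they need, so a heavy dependency like floneum stays out of the default build. `with-all-providers` is the feature shown in the install snippet above; the per-provider feature name below is an assumption for illustration.

```toml
# cargo.toml -- hypothetical per-provider feature selection; only
# "with-all-providers" is confirmed by the snippet above.
genai = { version = "=0.0.3", default-features = false, features = ["with-ollama"] }
```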
## Dev Commands
Here are some quick dev commands.
```sh
# Watch and re-run each example on change (requires cargo-watch;
# the exact watch flags below are a common default, adjust as needed)
cargo watch -q -c -x "run -q --example c01-simple"
cargo watch -q -c -x "run -q --example c02-stream"
cargo watch -q -c -x "run -q --example c03-conv"
```
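For reference, here is a rough sketch of what an example like `c02-stream` might exercise: printing chat deltas as they arrive. As with the first snippet, `exec_chat_stream` and `ChatStreamEvent` are assumed names, not the confirmed `0.0.x` API.

```rust
// Streaming sketch. NOTE: exec_chat_stream and ChatStreamEvent are assumed
// names; the real 0.0.x API may differ.
use futures::StreamExt;
use genai::chat::{ChatMessage, ChatRequest, ChatStreamEvent};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::default();
    let req = ChatRequest::new(vec![ChatMessage::user("Tell me a two-line story.")]);

    // Print each text chunk as the provider emits it.
    let mut stream = client.exec_chat_stream("gpt-3.5-turbo", req, None).await?;
    while let Some(event) = stream.next().await {
        if let ChatStreamEvent::Chunk(chunk) = event? {
            print!("{}", chunk.content);
        }
    }

    Ok(())
}
```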
## Links

- crates.io: https://crates.io/crates/genai
- GitHub: https://github.com/jeremychone/rust-genai