§AnyLM - Universal API for Every AI
Tired of juggling a separate API for every AI model, each with its own quirky syntax and endless docs?
I was too. That's why I built AnyLM: learn one intuitive API once, then use it with any service: LLMs, embeddings, vision models, you name it. Seamless, powerful, done.
§Supported:
- Standards: Supports the OpenAI and Anthropic API standards (which cover roughly 90% of AI services).
- Services: LM Studio, ChatGPT, Cerebras, OpenRouter, Perplexity, Claude and Voyage.
- Stream Response: Read the LM response in parts without waiting for the full completion.
- Context Control: Automatic trimming of the dialog context when exceeding the token limit.
- Image View: Image analysis support, reading images from files or directly via base64 URLs.
- Structured Output: Structured AI responses in JSON format.
- Tool Calls: Calling handlers with arguments for smart AI agents.
- Embeddings: Text embeddings support for fast text analysis.
- Proxy Support: Proxy/VPN request tunneling.
- Is something missing?: Write to me and I will add it too (Telegram: @fuderis).
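To picture what the Context Control feature does, here is a minimal, self-contained sketch of the general idea: estimate tokens roughly (a common ~4-characters-per-token heuristic) and drop the oldest messages until the dialog fits the limit. This is an illustration only, not AnyLM's internal implementation, and the function names here are hypothetical:

```rust
/// Rough token estimate: ~4 characters per token (a common heuristic).
fn estimate_tokens(text: &str) -> usize {
    text.chars().count() / 4 + 1
}

/// Drop the oldest messages until the total estimate fits the limit,
/// always keeping at least the most recent message.
fn trim_context(messages: &mut Vec<String>, token_limit: usize) {
    while messages.len() > 1
        && messages.iter().map(|m| estimate_tokens(m)).sum::<usize>() > token_limit
    {
        messages.remove(0);
    }
}

fn main() {
    let mut dialog = vec![
        "old message ".repeat(100), // ~300 estimated tokens
        "recent message".to_string(),
    ];
    trim_context(&mut dialog, 50);
    assert_eq!(dialog.len(), 1); // only the recent message survives
    println!("{} message(s) kept", dialog.len());
}
```

Real implementations use the model's own tokenizer rather than a character heuristic, but the trimming logic is the same shape.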
§Examples:
§Cerebras:
```rust
use anylm::{Chunk, Completions, Proxy, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    let api_key = std::env::var("CEREBRAS_API_KEY")?;

    // send request:
    let mut response = Completions::cerebras(api_key, "llama3.1-8b")
        //.proxy(Proxy::all("socks5://127.0.0.1:1080")?)
        .user_message(vec!["Hello, how are you doing?".into()])
        .send()
        .await?;

    // read response stream:
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}
```
§Claude:
```rust
use anylm::{Chunk, Completions, Proxy, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    let api_key = std::env::var("ANTHROPIC_API_KEY")?;

    // send request:
    let mut response = Completions::claude(api_key, "claude-opus-4-6")
        //.proxy(Proxy::all("socks5://127.0.0.1:1080")?)
        .user_message(vec!["Hello, how are you doing?".into()])
        .send()
        .await?;

    // read response stream:
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}
```
§Image View:
```rust
use anylm::{Chunk, Completions, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    // send request:
    let mut response = Completions::lmstudio("", "qwen/qwen3-vl-4b")
        .server("http://localhost:1234")
        .user_message(vec![
            Path::new("test-image.png").into(),
            "What's on the picture?".into(),
        ])
        .send()
        .await?;

    // read response stream:
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}
```
§Structured Output (JSON):
```rust
use anylm::{Chunk, Completions, Schema, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    /// The person structure
    #[derive(Debug, serde::Deserialize)]
    struct Person {
        first_name: String,
        last_name: Option<String>,
        age: u8,
    }

    // send request:
    let mut response = Completions::lmstudio("", "mistralai/ministral-3-3b")
        .user_message(vec!["John Smith, 30 years old".into()])
        .schema(
            Schema::object("The user structure")
                .required_property("first_name", Schema::string("The user first name"))
                .optional_property("last_name", Schema::string("The user last name"))
                .required_property("age", Schema::integer("The user age")),
        )
        .send()
        .await?;

    // read response stream:
    let mut json_str = String::new();
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            json_str.push_str(&text);
        }
    }

    // parse response as JSON:
    let person: Person = serde_json::from_str(&json_str)?;
    println!("{person:#?}");

    Ok(())
}
```
§Tool Calls:
```rust
use anylm::{Chunk, Completions, Schema, Tool, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    /// The weather tool data
    #[derive(Debug, serde::Deserialize)]
    struct LocationData {
        location: String,
    }

    // send request:
    let mut response = Completions::lmstudio("", "mistralai/ministral-3-3b")
        .user_message(vec!["What's the weather like in London?".into()])
        .tool(Tool::new(
            "weather",
            "Search weather by location",
            Schema::object("Location data")
                .required_property("location", Schema::string("The location")),
        ))
        .send()
        .await?;

    // read response stream:
    let mut tool_calls = vec![];
    while let Some(chunk) = response.next().await {
        match chunk? {
            Chunk::Text(text) => {
                eprint!("{text}");
            }
            Chunk::Tool(name, json_str) => {
                tool_calls.push((name, json_str));
            }
        }
    }
    println!();

    // handle tool calls:
    for (name, json_str) in tool_calls {
        match name.as_ref() {
            "weather" => {
                let location: LocationData = serde_json::from_str(&json_str)?;
                println!("{location:#?}");
            }
            _ => {}
        }
    }

    Ok(())
}
```
§Embeddings:
```rust
use anylm::{Embeddings, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    // send request:
    let response = Embeddings::lmstudio("", "nomic-ai/nomic-embed-text-v1.5")
        .input("Hello, how are you doing?")
        .send()
        .await?;

    // print response:
    println!("Embeddings: {:?}", response.data);

    Ok(())
}
```
And so on: every service follows the same logic.
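Once you have embedding vectors, a typical next step is comparing them, e.g. for semantic search. Here is a self-contained cosine-similarity sketch using only the standard library (the toy vectors stand in for real embedding data; no AnyLM API is assumed beyond what the example above returns):

```rust
/// Cosine similarity between two embedding vectors of equal length.
/// Returns a value in [-1.0, 1.0]; 1.0 means identical direction.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    // toy vectors standing in for real embeddings:
    let a = [1.0, 0.0, 1.0];
    let b = [1.0, 0.0, 1.0];
    let c = [0.0, 1.0, 0.0];
    assert!((cosine_similarity(&a, &b) - 1.0).abs() < 1e-6); // identical vectors
    assert!(cosine_similarity(&a, &c).abs() < 1e-6);         // orthogonal vectors
    println!("similarity checks passed");
}
```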
§License & Credits:
- License: Distributed under the MIT license.
- Contacts: GitHub, Behance, Telegram, Telegram Channel, VKontakte.
- Donation:
TON: UQB_r6IFgMYTJUKkhZNgjXcgp4aIJYwB6Gfiiukzg2lIC_Kc
Thank you for your support! Don't forget to check out my other projects as well: GitHub.
P.S.: This software is actively evolving, and your suggestions and feedback are always welcome!
§Re-exports:
pub use error::Error;
pub use error::Result;
pub use error::StdResult;
pub use api::ApiKind;
pub use api::Chunk;
pub use api::Completions;
pub use api::Content;
pub use api::Embedding;
pub use api::Embeddings;
pub use api::EmbeddingsData;
pub use api::Message;
pub use api::Role;
pub use api::Schema;
pub use api::SchemaKind;
pub use api::Stream;
pub use api::Tool;
pub use api::Usage;
pub use bytes;
pub use reqwest;
pub use utils::*;