
AnyLM - Universal API for Every AI
Sick of juggling separate APIs for each AI model—wrestling with their quirky syntax and endless docs?
I was too. That's why I built AnyLM: learn one intuitive API once, then unleash it across any service—LLMs, embeddings, vision models, you name it. Seamless, powerful, done.
Supported:
- Standards: OpenAI and Anthropic API standards (the formats most AI services follow).
- Services: LM Studio, ChatGPT, Cerebras, OpenRouter, Perplexity, Claude, and Voyage.
- Stream Response: Allows you to read the LM response in parts without waiting for the full completion.
- Context Control: Automatic trimming of the dialog context when it exceeds the token limit.
- Image View: Image analysis support, reading images from files or passing them directly as base64 URLs.
- Structured Output: Structured AI responses in JSON format.
- Tool Calls: Calling handlers with arguments for smart AI agents.
- Embeddings: Text embeddings support for fast text analysis.
- Proxy Support: Tunnel requests through a proxy or VPN (see the proxy sketch under Examples).
- Is something missing? Write to me and I'll add it (Telegram: @fuderis).
Examples:
Cerebras:
use anylm::{Chunk, Completions, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    let api_key = std::env::var("CEREBRAS_API_KEY")?;

    let mut response = Completions::cerebras(api_key, "llama3.1-8b")
        .user_message(vec!["Hello, how are you doing?".into()])
        .send()
        .await?;

    // Print the reply as it streams in, chunk by chunk.
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}
Claude:
use anylm::{Chunk, Completions, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    let api_key = std::env::var("ANTHROPIC_API_KEY")?;

    let mut response = Completions::claude(api_key, "claude-opus-4-6")
        .user_message(vec!["Hello, how are you doing?".into()])
        .send()
        .await?;

    // Print the reply as it streams in, chunk by chunk.
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}
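Proxy (sketch):
AnyLM exports a Proxy type for request tunneling. Below is a minimal sketch of proxied usage; note that the .proxy(..) builder method and the Proxy::new(..) constructor are illustrative assumptions, not confirmed API, so check the crate docs for the real names and signatures.
use anylm::{Chunk, Completions, Proxy, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    let api_key = std::env::var("CEREBRAS_API_KEY")?;

    // NOTE: `.proxy(..)` and `Proxy::new(..)` are assumptions made for
    // illustration; see the crate documentation for the exact API.
    let mut response = Completions::cerebras(api_key, "llama3.1-8b")
        .proxy(Proxy::new("http://127.0.0.1:8080"))
        .user_message(vec!["Hello!".into()])
        .send()
        .await?;

    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}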
Image View:
use anylm::{Chunk, Completions, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    // A user message can mix image and text parts in one vector.
    let mut response = Completions::lmstudio("", "qwen/qwen3-vl-4b")
        .server("http://localhost:1234")
        .user_message(vec![
            Path::new("test-image.png").into(),
            "What's in the picture?".into(),
        ])
        .send()
        .await?;

    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            eprint!("{text}");
        }
    }
    println!();

    Ok(())
}
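As noted in the feature list, images can also be passed directly as a base64 URL instead of a file path. Here is a minimal sketch of building such a data URL from raw bytes with the base64 crate; how the resulting string is then attached to the message is up to the crate's conversion traits, and the data:image/png prefix assumes a PNG file.
use base64::{engine::general_purpose::STANDARD, Engine as _};

fn main() -> std::io::Result<()> {
    // Read the raw image bytes and wrap them in a `data:` URL.
    let bytes = std::fs::read("test-image.png")?;
    let data_url = format!("data:image/png;base64,{}", STANDARD.encode(&bytes));
    println!("{data_url}");
    Ok(())
}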
Structured Output (JSON):
use anylm::{Chunk, Completions, Schema, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    // The target structure the JSON response will be parsed into.
    #[derive(Debug, serde::Deserialize)]
    struct Person {
        first_name: String,
        last_name: Option<String>,
        age: u8,
    }

    let mut response = Completions::lmstudio("", "mistralai/ministral-3-3b")
        .user_message(vec!["John Smith, 30 years old".into()])
        .schema(
            Schema::object("The user structure")
                .required_property("first_name", Schema::string("The user's first name"))
                .optional_property("last_name", Schema::string("The user's last name"))
                .required_property("age", Schema::integer("The user's age")),
        )
        .send()
        .await?;

    // Collect the streamed chunks into one JSON string.
    let mut json_str = String::new();
    while let Some(chunk) = response.next().await {
        if let Chunk::Text(text) = chunk? {
            json_str.push_str(&text);
        }
    }

    let person: Person = serde_json::from_str(&json_str)?;
    println!("{person:#?}");

    Ok(())
}
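Note that last_name is declared with optional_property, which is why the matching struct field is Option<String>, while the required properties map to plain fields.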
Tool Calls:
use anylm::{Chunk, Completions, Schema, Tool, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    // Arguments the model will pass to the `weather` tool.
    #[derive(Debug, serde::Deserialize)]
    struct LocationData {
        location: String,
    }

    let mut response = Completions::lmstudio("", "mistralai/ministral-3-3b")
        .user_message(vec!["What's the weather like in London?".into()])
        .tool(Tool::new(
            "weather",
            "Search weather by location",
            Schema::object("Location data")
                .required_property("location", Schema::string("The location")),
        ))
        .send()
        .await?;

    // Stream the text and collect any tool calls the model makes.
    let mut tool_calls = vec![];
    while let Some(chunk) = response.next().await {
        match chunk? {
            Chunk::Text(text) => {
                eprint!("{text}");
            }
            Chunk::Tool(name, json_str) => {
                tool_calls.push((name, json_str));
            }
        }
    }
    println!();

    // Deserialize each call's arguments and hand them to the handler.
    for (name, json_str) in tool_calls {
        match name.as_ref() {
            "weather" => {
                let location: LocationData = serde_json::from_str(&json_str)?;
                println!("{location:#?}");
            }
            _ => {}
        }
    }

    Ok(())
}
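The loop above only collects calls and prints their arguments. A plain-Rust dispatcher pattern for routing them to handlers is sketched below; it is independent of the crate, and the fetch_weather handler is a stand-in for a real lookup.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct LocationData {
    location: String,
}

// Stand-in handler: a real one would query a weather service.
fn fetch_weather(data: &LocationData) -> String {
    format!("(weather lookup for {} goes here)", data.location)
}

// Route a collected (name, json) tool call to the matching handler.
fn dispatch(name: &str, json_str: &str) -> serde_json::Result<Option<String>> {
    Ok(match name {
        "weather" => {
            let args: LocationData = serde_json::from_str(json_str)?;
            Some(fetch_weather(&args))
        }
        _ => None, // unknown tool: ignore or log it
    })
}

fn main() -> serde_json::Result<()> {
    // A call shaped like those collected by the streaming loop above.
    if let Some(answer) = dispatch("weather", r#"{"location":"London"}"#)? {
        println!("{answer}");
    }
    Ok(())
}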
Embeddings:
use anylm::{Embeddings, prelude::*};

#[tokio::main]
async fn main() -> Result<()> {
    let response = Embeddings::lmstudio("", "nomic-ai/nomic-embed-text-v1.5")
        .input("Hello, how are you doing?")
        .send()
        .await?;

    println!("Embeddings: {:?}", response.data);

    Ok(())
}
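A common next step with embeddings is comparing texts by cosine similarity. Here is a minimal sketch over plain f32 slices; the exact shape of response.data depends on the crate, so the vectors below are stand-ins.
/// Cosine similarity between two equal-length embedding vectors:
/// 1.0 means identical direction, 0.0 means unrelated.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Stand-in vectors; in practice these come from `response.data`.
    let a = [0.1_f32, 0.8, 0.3];
    let b = [0.2_f32, 0.7, 0.4];
    println!("similarity: {:.3}", cosine_similarity(&a, &b));
}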
And so on: every other service follows the same logic.
License & Credits:
Thank you for your support! Don't forget to check out my other projects on GitHub as well.
P.S.: This software is actively evolving, and your suggestions and feedback are always welcome!