
# LM Studio API (UNOFFICIAL)

A high-performance, user-friendly Rust library for interacting with locally running Llama-based language models via LM Studio. It lets you send requests to models, receive responses either in full or in streaming mode, and manage model parameters.
## Key Features

- Support for both regular and streaming response modes.
- Context management and system prompt customization.
- Flexible configuration of request and model parameters.
- Support for structured response schemas in JSON format.
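
The structured-response feature maps onto LM Studio's OpenAI-compatible `response_format` field. The exact Rust types this crate exposes for it are not shown here, but the underlying JSON request payload looks roughly like this (field names follow LM Studio's REST API; the model name and schema are illustrative only):

```json
{
  "model": "gemma-3-4b",
  "messages": [
    { "role": "user", "content": "Describe the Rust logo." }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "logo_description",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "subject": { "type": "string" },
          "colors":  { "type": "array", "items": { "type": "string" } }
        },
        "required": ["subject", "colors"]
      }
    }
  }
}
```

With a payload like this, the server constrains the model's output to a JSON object matching the given schema instead of free-form text.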
## Examples
```rust
use lm_studio_api::prelude::*;

// A system prompt that is regenerated before each request,
// so dynamic data (date, location, ...) stays up to date.
struct SystemPrompt;

impl SystemInfo for SystemPrompt {
    fn new() -> Box<Self> {
        Box::new(Self {})
    }

    fn update(&mut self) -> String {
        format!(r##"
            You're Jarvis, a personal assistant created by the best programmer 'Fuderis'.
            Respond briefly and clearly.
            Response language: English.

            Actual system info:
            * datetime: 1969-10-29 22:30:00.
            * location: Russian Federation, Moscow.
        "##)
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Connect to the local LM Studio server on port 9090,
    // with an 8192-token context window.
    let mut chat = Chat::new(
        Model::Gemma3_4b,
        Context::new(SystemPrompt::new(), 8192),
        9090,
    );

    // A multimodal request: text plus an image, streamed and kept in context.
    let request = Messages {
        messages: vec![
            Message {
                role: Role::User,
                content: vec![
                    Content::Text { text: "What is shown in the picture?".into() },
                    Content::Image { image_url: Image::from_file("rust-logo.png").unwrap() }
                ]
            }
        ],
        context: true,
        stream: true,
        ..Default::default()
    };

    let _ = chat.send(request.into()).await?;

    // Print streamed chunks as they arrive.
    while let Some(result) = chat.next().await {
        match result {
            Ok(r) => if let Some(text) = r.text() { eprint!("{text}"); },
            Err(e) => eprintln!("Error: {e}"),
        }
    }

    Ok(())
}
```
## Licensing

Distributed under the MIT license.
## Feedback

You can find me here; also see my channel.
I welcome your suggestions and feedback!
Copyright (c) 2025 Bulat Sh. (fuderis)