# LM Studio API

## Introduction
This library provides an API for interacting with LM Studio. It allows you to send requests to locally running models, receive results, and manage model parameters. The API uses JSON for data exchange.
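For intuition, here is a rough sketch of the kind of JSON a chat request reduces to. This assumes LM Studio's OpenAI-compatible local server; the field names follow that convention and may not match this crate's internal types exactly:

```json
{
  "model": "gemma-3-4b",
  "messages": [
    { "role": "system", "content": "You're Jarvis - my personal assistant. Call me master" },
    { "role": "user", "content": "Hi, what's your name?" }
  ],
  "stream": false
}
```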
## Examples

### Non-streaming usage

```rust
use lm_studio_api::{ Model, Context, Chat, Request };

let mut chat = Chat::new(
    Model::Gemma3_4b,
    Context::new("You're Jarvis - my personal assistant. Call me master", 4090),
    "9090"
);

let request = Request {
    messages: vec!["Hi, what's your name?".into()],
    context: true,
    stream: false,
    ..Request::default()
};

let result = chat.send(request).await;

match result {
    Ok(Some(response)) => println!("{}", response.text()),
    Err(e) => eprintln!("Error: {e}"),
    _ => {}
}
```
### Streaming usage (real-time generation)

```rust
use lm_studio_api::{ Model, Context, Chat, Request };

let mut chat = Chat::new(
    Model::Gemma3_4b,
    Context::new("You're Jarvis - my personal assistant. Call me master", 4090),
    "9090"
);

loop {
    // read the next user prompt from stdin
    eprint!("\n>> ");
    let mut buf = String::new();
    std::io::stdin().read_line(&mut buf).unwrap();
    eprint!("<< ");

    let request = Request {
        messages: vec![buf.into()],
        context: true,
        stream: true,
        ..Request::default()
    };

    // start generation, then poll the stream for incoming tokens
    let _ = chat.send(request).await.unwrap();

    while let Some(result) = chat.next().await {
        match result {
            Ok(text) if !text.is_empty() => eprint!("{text}"),
            Err(e) => {
                eprintln!("Error: {e}");
                break;
            },
            _ => {}
        }
    }
}
```
## Licensing
Distributed under the MIT license.
## Feedback
You can contact me via GitHub or send a message to my Telegram @fuderis.
This library is constantly evolving, and I welcome your suggestions and feedback.