# llam

`llam` is the single-crate Rust binding for the LLAM C runtime.
- Rust crate repository: <https://github.com/Feralthedogg/LLAM-rs>
- C runtime repository: <https://github.com/Feralthedogg/LLAM>
- Crate: <https://crates.io/crates/llam>
- Documentation: <https://docs.rs/llam>

LLAM-rs gives Rust code Go-style synchronous concurrency on top of LLAM's
stackful task scheduler. You write ordinary blocking-looking Rust code with
tasks, channels, `select!`, sleeps, socket I/O, mutexes, condition variables,
blocking offload, and diagnostics. Underneath, managed LLAM tasks park
cooperatively so the runtime can keep OS worker threads busy.
No `async`, no `await`, no executor handles.
## Install
Add the crate:
```bash
cargo add llam@0.1.2
```
Or edit `Cargo.toml`:
```toml
[dependencies]
llam = "0.1.2"
```
Build normally:
```bash
cargo build
```
By default, the build script downloads and installs the LLAM C SDK into Cargo's
private build output, then links `libllam_runtime` statically.
Use a preinstalled LLAM SDK instead:
```bash
LLAM_SYS_PREFIX="$HOME/.local/llam" cargo build
```
Disable automatic installation in locked-down CI:
```bash
LLAM_SYS_NO_INSTALL=1 \
LLAM_SYS_PREFIX="$HOME/.local/llam" \
cargo test
```
## Platform Support
| Platform | I/O backend | Status |
| --- | --- | --- |
| Linux x86_64 | io_uring/liburing | Supported |
| Linux aarch64 | io_uring/liburing | Supported |
| macOS arm64 | kqueue | Supported |
| macOS x86_64 | kqueue | Supported |
| Windows x86_64 | IOCP | Supported |

Unix file wrappers are Unix-only. Windows sockets are Winsock `SOCKET`s, not
POSIX file descriptors.
## Runtime Model
Every LLAM-aware operation should run inside `llam::run(...)` or
`llam::run_with_profile(...)`.
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        println!("running inside a managed LLAM task");
        Ok(())
    })
}
```
For I/O-heavy applications:
```rust
fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        println!("I/O latency profile enabled");
        Ok(())
    })
}
```
The closure passed to `run` becomes the first managed task. Tasks spawned from
that closure are scheduled by the LLAM runtime.
## Why There Is No `#[llam::main]`
LLAM-rs intentionally publishes one crate only: `llam`.
Rust attribute macros require a separate `proc-macro` crate. Publishing a
`#[llam::main]` attribute would force an extra crates.io package, so the public
entrypoint is explicit:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        Ok(())
    })
}
```
This keeps install and import simple:
```rust
use llam;
```
## Key Features
- Stackful LLAM tasks from Rust closures.
- Go-like `spawn!` and `try_spawn!` macros.
- `JoinHandle<T>` with typed task results.
- `TaskGroup` and `TaskBatch` helpers for related work.
- Typed bounded channels over LLAM's C channel primitive.
- `select!` over receive, send, close, timeout, and default arms.
- LLAM-aware `Mutex<T>` and `Condvar`.
- Cooperative sleep and monotonic deadlines.
- Blocking offload for filesystem, CPU, and foreign blocking work.
- TCP, UDP, Unix socket, raw descriptor, and owned-buffer I/O wrappers.
- Task-local storage.
- ABI/runtime diagnostics.
- Raw C ABI access through `llam::sys` for advanced integration.
## Minimal Example
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let handle = llam::spawn!({
            21 * 2
        });
        let value = handle.join().expect("task failed");
        assert_eq!(value, 42);
        Ok(())
    })
}
```
## Tasks
Use `spawn!` when spawn failure should panic:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let task = llam::spawn!(move {
            "hello from LLAM"
        });
        assert_eq!(task.join().expect("join failed"), "hello from LLAM");
        Ok(())
    })
}
```
Use `try_spawn!` when spawn failure should be returned:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let task = llam::try_spawn!(move {
            7usize
        })?;
        assert_eq!(task.join().expect("join failed"), 7);
        Ok(())
    })
}
```
Tune task class and stack size:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let token = llam::CancelToken::new()?;
        let task = llam::try_spawn!(
            class = llam::TaskClass::Latency,
            stack = llam::StackClass::Large,
            cancel = token,
            move {
                "done"
            }
        )?;
        assert_eq!(task.join().expect("join failed"), "done");
        Ok(())
    })
}
```
Detach a fire-and-forget task:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let task = llam::try_spawn!(move {
            let _ = llam::time::sleep(std::time::Duration::from_millis(10));
        })?;
        task.detach()?;
        Ok(())
    })
}
```
Yield explicitly:
```rust
llam::task::yield_now();
```
Inspect the current managed task:
```rust
println!("task id = {:?}", llam::task::current_id());
println!("task class = {:?}", llam::task::current_class());
println!("state = {:?}", llam::task::current_state_name());
```
## Task Groups
Use `TaskGroup` when all child tasks must finish before the scope continues:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let mut group = llam::TaskGroup::new()?;
        for i in 0..8 {
            group.spawn(move || {
                println!("group task {i}");
            })?;
        }
        group.join()?;
        Ok(())
    })
}
```
Use `TaskBatch<T>` when each task returns a typed result:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let mut batch = llam::TaskBatch::new();
        for i in 0..4 {
            batch.spawn(move || i * i)?;
        }
        let values = batch.join().expect("batch failed");
        assert_eq!(values, vec![0, 1, 4, 9]);
        Ok(())
    })
}
```
## Channels
Channels are typed and bounded. Values are moved into LLAM and moved back out on
receive.
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<String>(32)?;
        llam::spawn!(move {
            tx.send("ping".to_string()).expect("send failed");
        });
        assert_eq!(rx.recv()?, "ping");
        Ok(())
    })
}
```
Close-aware receive:
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<u64>(4)?;
        tx.send(1).expect("send failed");
        drop(tx);
        assert_eq!(rx.recv_option()?, Some(1));
        assert_eq!(rx.recv_option()?, None);
        Ok(())
    })
}
```
Timeouts and nonblocking attempts:
```rust
use std::time::Duration;

fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<u32>(1)?;
        tx.try_send(10).expect("send failed");
        assert!(tx.try_send(20).is_err());
        assert_eq!(rx.recv_timeout(Duration::from_millis(50))?, 10);
        assert!(rx.try_recv().is_err());
        Ok(())
    })
}
```
Useful channel methods:
```rust
tx.send(value)
tx.try_send(value)
tx.send_timeout(value, Duration::from_millis(10))
tx.close()
rx.recv()
rx.try_recv()
rx.recv_timeout(Duration::from_millis(10))
rx.recv_option()
rx.try_recv_option()
rx.recv_timeout_option(Duration::from_millis(10))
```
`recv()` returns `EPIPE` on close. `recv_option()` returns `Ok(None)` on close.
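For intuition, the two receive flavors split a channel close into an error path and a value path. As a rough std-only analogy (this uses `std::sync::mpsc`, not the llam API, where "closed" surfaces as a disconnect error):

```rust
use std::sync::mpsc;

// Analogy only: std's `recv()` reports disconnect as an error, and a small
// wrapper folds "closed" into `None`, the way `recv_option()` does for llam.
fn recv_option<T>(rx: &mpsc::Receiver<T>) -> Option<T> {
    match rx.recv() {
        Ok(value) => Some(value),
        Err(mpsc::RecvError) => None, // channel closed: all senders dropped
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();
    tx.send(1).expect("send failed");
    drop(tx); // close the channel
    assert_eq!(recv_option(&rx), Some(1)); // buffered value still delivered
    assert_eq!(recv_option(&rx), None); // closed and drained
}
```

Pick `recv()` when a closed channel is an error in your protocol, and `recv_option()` when close is the normal end-of-stream signal.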
## Select
`llam::select!` waits on multiple channel operations with Go-like syntax.
Receive with timeout:
```rust
use std::time::Duration;

fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<u32>(1)?;
        llam::spawn!(move {
            tx.send(42).expect("send failed");
        });
        let value = llam::select! {
            recv(rx) -> msg => {
                msg.expect("recv failed")
            },
            after(Duration::from_secs(1)) => {
                0
            },
        };
        assert_eq!(value, 42);
        Ok(())
    })
}
```
Closed-channel arm:
```rust
let done = llam::select! {
    recv(rx) -> msg => {
        println!("value = {:?}", msg);
        false
    },
    closed(rx) => {
        true
    },
};
```
Send arm:
```rust
let selected = llam::select! {
    send(tx, 7u32) => {
        "sent"
    },
    after(Duration::from_millis(10)) => {
        "timeout"
    },
};
println!("{selected}");
```
Default arm:
```rust
let ready = llam::select! {
    recv(rx) -> value => {
        let _ = value;
        true
    },
    default => {
        false
    },
};
```
`select!` accepts either one `after(...)` arm or one `default` arm, not both.
## Time
Sleep cooperatively:
```rust
llam::time::sleep(std::time::Duration::from_millis(10))?;
```
Use absolute LLAM deadlines:
```rust
let deadline = llam::time::deadline_after(std::time::Duration::from_secs(1));
llam::time::sleep_until(deadline)?;
```
Read the monotonic nanosecond clock:
```rust
let now = llam::time::now_ns();
println!("now = {now}");
```
## Mutex And Condvar
Use LLAM-aware synchronization primitives for waits inside managed tasks.
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let value = llam::sync::Mutex::new(0usize)?;
        {
            let mut guard = value.lock()?;
            *guard += 1;
        }
        assert_eq!(*value.lock()?, 1);
        Ok(())
    })
}
}
```
Condition variable with a predicate loop:
```rust
use std::sync::Arc;

fn main() -> llam::Result<()> {
    llam::run(|| {
        let ready = Arc::new(llam::sync::Mutex::new(false)?);
        let cond = Arc::new(llam::sync::Condvar::new()?);
        let worker_ready = Arc::clone(&ready);
        let worker_cond = Arc::clone(&cond);
        llam::spawn!(move {
            let mut guard = worker_ready.lock().expect("lock failed");
            *guard = true;
            worker_cond.notify_one().expect("notify failed");
        });
        let mut guard = ready.lock()?;
        while !*guard {
            guard = cond.wait(guard)?;
        }
        Ok(())
    })
}
```
As with standard condition variables, always re-check the predicate in a loop: wakeups can be spurious, and the condition may have changed again between notify and wake.
## Blocking Work
Use `llam::blocking::call` for work that should run outside the cooperative
scheduler path.
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let result = llam::blocking::call(|| {
            std::fs::read_to_string("Cargo.toml")
        })?;
        let text = result.expect("blocking closure panicked")?;
        println!("{} bytes", text.len());
        Ok(())
    })
}
```
Use `BlockingRegion` for manual enter/leave around foreign blocking code:
```rust
let _region = llam::blocking::BlockingRegion::enter()?;
// Call a blocking C API here.
```
## TCP Echo Server
This is the typical LLAM-rs shape: synchronous-looking `accept`, `read`, and
`write_all`, with one LLAM task per connection.
```rust
use std::io::{Read, Write};

fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let listener = llam::net::TcpListener::bind("127.0.0.1:9090")?;
        println!("echo server listening on {}", listener.local_addr()?);
        loop {
            let (mut stream, peer) = listener.accept()?;
            println!("accepted {peer:?}");
            llam::spawn!(move {
                let mut buf = [0u8; 4096];
                loop {
                    let n = match stream.read(&mut buf) {
                        Ok(0) => break,
                        Ok(n) => n,
                        Err(error) => {
                            eprintln!("read failed: {error}");
                            break;
                        }
                    };
                    if let Err(error) = stream.write_all(&buf[..n]) {
                        eprintln!("write failed: {error}");
                        break;
                    }
                }
            });
        }
    })
}
```
## TCP Client
```rust
use std::io::{Read, Write};

fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let mut stream = llam::net::TcpStream::connect("127.0.0.1:9090")?;
        stream.write_all(b"ping")?;
        let mut buf = [0u8; 4];
        stream.read_exact(&mut buf)?;
        assert_eq!(&buf, b"ping");
        Ok(())
    })
}
```
## UDP
Connected UDP sockets use LLAM's read/write path.
```rust
fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let a = llam::net::UdpSocket::bind("127.0.0.1:0")?;
        let b = llam::net::UdpSocket::bind("127.0.0.1:0")?;
        a.connect(b.local_addr()?)?;
        b.connect(a.local_addr()?)?;
        a.send(b"pong")?;
        let mut buf = [0u8; 4];
        let n = b.recv(&mut buf)?;
        assert_eq!(&buf[..n], b"pong");
        Ok(())
    })
}
```
## Raw I/O
`llam::io` exposes LLAM-aware descriptor/socket operations for integrations
that do not want the higher-level network wrappers.
```rust
let mut buf = [0u8; 1024];
let n = llam::io::read(fd, &mut buf)?;
let written = llam::io::write(fd, &buf[..n])?;
let revents = llam::io::poll_fd(fd, llam::io::READABLE, 1000)?;
println!("written = {written}, revents = {revents}");
```
Owned buffers:
```rust
if let Some(buf) = llam::io::read_owned(fd, 4096)? {
    println!("{} bytes", buf.as_slice().len());
}
```
EOF or zero-byte reads return `Ok(None)` for owned-buffer reads.
## Files
Unix file wrappers are available through `llam::fs::File`.
```rust
#[cfg(unix)]
fn read_file() -> std::io::Result<String> {
    use std::io::Read;

    let mut file = llam::fs::File::open("Cargo.toml")?;
    let mut text = String::new();
    file.read_to_string(&mut text)?;
    Ok(text)
}
```
## Task-Local Storage
Task-local values are scoped to the current managed LLAM task.
```rust
fn main() -> llam::Result<()> {
    llam::run(|| {
        let key = llam::TaskLocalKey::<String>::new()?;
        key.set("root".to_string())?;
        assert_eq!(key.get_cloned()?.as_deref(), Some("root"));
        let value = key.with("scoped".to_string(), || {
            key.get_cloned().unwrap().unwrap()
        })?;
        assert_eq!(value, "scoped");
        assert_eq!(key.get_cloned()?.as_deref(), Some("root"));
        Ok(())
    })
}
```
If task-local values own resources, call `take()` or `clear()` before task exit.
The C runtime stores raw pointers and does not provide a destructor hook.
## Diagnostics
ABI information:
```rust
let abi = llam::AbiInfo::load()?;
println!("LLAM ABI {}.{}", abi.abi_major(), abi.abi_minor());
println!("runtime {}", abi.runtime_name());
println!("version {}", abi.version_string());
println!("platform {}", abi.platform_name());
```
Runtime stats:
```rust
fn main() -> llam::Result<()> {
    let runtime = llam::Runtime::builder()
        .profile(llam::Profile::Balanced)
        .init()?;
    let task = llam::try_spawn!({
        for _ in 0..10 {
            llam::task::yield_now();
        }
    })?;
    unsafe {
        llam::sys::llam_run();
    }
    task.join().expect("task failed");
    let stats = runtime.stats()?;
    println!("ctx switches = {}", stats.ctx_switches());
    println!("yields = {}", stats.yields());
    runtime.shutdown();
    Ok(())
}
```
Write stats JSON on Unix:
```rust
#[cfg(unix)]
{
    let file = std::fs::File::create("llam-stats.json")?;
    llam::diagnostics::write_stats_json(&file)?;
}
```
## Build Environment
| Variable | Purpose |
| --- | --- |
| `LLAM_SYS_PREFIX` | Installed LLAM SDK prefix. Must contain `include/` and `lib/`. |
| `LLAM_SYS_LIB_DIR` | Directory containing `libllam_runtime`. |
| `LLAM_SYS_INCLUDE_DIR` | Include directory to use with `LLAM_SYS_LIB_DIR`. |
| `LLAM_SYS_LIB_NAME` | Link name. Default: `llam_runtime`. |
| `LLAM_SYS_LINK_KIND` | Cargo link kind. Default: `static`. |
| `LLAM_SYS_INSTALL_PREFIX` | Override the automatic install prefix. |
| `LLAM_SYS_INSTALL_VERSION` | LLAM release version. Default: `1.0.0`. |
| `LLAM_SYS_INSTALL_TARGET` | Explicit release target. Examples: `macos-aarch64`, `macos-x86_64`, `linux-x86_64`, `linux-aarch64`, `windows-x86_64`. |
| `LLAM_SYS_INSTALL_BASE_URL` | Release asset base URL. |
| `LLAM_SYS_INSTALL_SCRIPT` | Local path or URL for `install.sh` / `install.ps1`. |
| `LLAM_SYS_FORCE_INSTALL=1` | Reinstall even if the build prefix already looks valid. |
| `LLAM_SYS_NO_INSTALL=1` | Do not run the installer. Require `LLAM_SYS_PREFIX` or `LLAM_SYS_LIB_DIR`. |

## Examples
Examples are Cargo targets, not standalone `rustc` inputs:
```bash
cargo run -p llam --example hello
cargo run -p llam --example tcp_echo
cargo run -p llam --example chat_server -- 7777
cargo run -p llam --example chat_server -- --public 7777
LLAM_CHAT_QUIET=1 cargo run -p llam --example chat_server -- 7777
```
The chat server mirrors the C LLAM chat server shape: each client gets a bounded
outbox channel, reader/writer tasks run per connection, full input lines are
broadcast as `[client N] ...`, and slow receivers shed queued messages instead
of blocking global fanout.
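The shed-instead-of-block behavior can be sketched with std's bounded channel. This is a pattern illustration using `std::sync::mpsc::sync_channel` and `try_send`, not the chat server's actual llam code; the real example uses llam's bounded channels and per-connection tasks:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

// Pattern sketch: broadcast to bounded per-client outboxes, dropping
// messages for clients whose queue is full instead of blocking the fanout.
fn main() {
    // Two clients with tiny outboxes; neither is drained during the fanout,
    // so the smaller one sheds more.
    let (tx_a, rx_a) = sync_channel::<String>(2);
    let (tx_b, _rx_b) = sync_channel::<String>(1);
    let outboxes = vec![tx_a, tx_b];

    let mut dropped = 0usize;
    for i in 0..4 {
        let line = format!("[client 0] message {i}");
        for tx in &outboxes {
            match tx.try_send(line.clone()) {
                Ok(()) => {}
                // Full outbox: shed the message rather than stall everyone.
                Err(TrySendError::Full(_)) => dropped += 1,
                // Disconnected client: its reader went away; just skip it.
                Err(TrySendError::Disconnected(_)) => {}
            }
        }
    }

    // Client A kept its first two messages; in total five sends were shed.
    assert_eq!(rx_a.try_iter().count(), 2);
    assert_eq!(dropped, 5);
}
```

The key property is that a slow or stuck client only loses its own messages; the broadcasting task never parks on a full outbox.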
## Checks
From a LLAM-rs checkout:
```bash
cargo fmt --all -- --check
cargo test -p llam
cargo check -p llam --examples
cargo clippy -p llam --examples --tests -- -D warnings
```
Bench and stress helpers:
```bash
cargo run -p llam --bin llam-rs-bench -- 50000
cargo run -p llam --bin llam-rs-stress -- 10 128
cargo bench -p llam --bench runtime_bench
```
## Troubleshooting
### The build downloads LLAM every time
Set a stable install prefix:
```bash
LLAM_SYS_INSTALL_PREFIX="$HOME/.local/llam" cargo build
```
Or install LLAM once and reuse it:
```bash
LLAM_SYS_PREFIX="$HOME/.local/llam" cargo build
```
### Automatic install is not allowed in CI
Provide an SDK and disable automatic install:
```bash
LLAM_SYS_NO_INSTALL=1 \
LLAM_SYS_PREFIX="$HOME/.local/llam" \
cargo test -p llam
```
### Cross-building
Automatic installation is host-oriented. For cross builds, provide a matching
prebuilt SDK:
```bash
LLAM_SYS_INCLUDE_DIR="/path/to/target/include" \
LLAM_SYS_LIB_DIR="/path/to/target/lib" \
cargo build --target <target-triple>
```
### Dynamic linking
Static linking is the default:
```bash
LLAM_SYS_LINK_KIND=static cargo build
```
Dynamic linking requires a matching dynamic LLAM runtime discoverable by the
loader at runtime:
```bash
LLAM_SYS_LINK_KIND=dylib LLAM_SYS_PREFIX="$HOME/.local/llam" cargo build
```
## Safety
`llam::sys` is raw and unsafe by design. The safe `llam` layer owns heap
payloads across channels, task trampolines, task-local values, and owned I/O
buffers. Direct C handles are hidden behind RAII wrappers where the C API
provides a lifetime contract.
See the repository `SAFETY.md` for the unsafe boundary audit.
## License
Apache-2.0