
llam


llam is the safe Rust binding for the LLAM C runtime.

LLAM-rs gives Rust code Go-style synchronous concurrency on top of LLAM's stackful task scheduler. You write ordinary blocking-looking Rust code with tasks, channels, select!, sleeps, socket I/O, mutexes, condition variables, blocking offload, and diagnostics. Underneath, managed LLAM tasks park cooperatively so the runtime can keep OS worker threads busy.

llam 0.1.4 requires LLAM C SDK 1.0.1 or newer. The build script installs 1.0.1 by default unless LLAM_SYS_INSTALL_VERSION, LLAM_SYS_PREFIX, or LLAM_SYS_LIB_DIR is provided.

No async, no await, no executor handles.

Install

Add the crate:

cargo add llam@0.1.4

Or edit Cargo.toml:

[dependencies]
llam = "0.1.4"

Build normally:

cargo build

By default, the build script downloads and installs the LLAM C SDK into Cargo's private build output, then links libllam_runtime statically.

Use a preinstalled LLAM SDK instead:

LLAM_SYS_PREFIX="$HOME/.local/llam" cargo build

Disable automatic installation in locked-down CI:

LLAM_SYS_NO_INSTALL=1 \
LLAM_SYS_PREFIX="$HOME/.local/llam" \
cargo test

Platform Support

| Platform | C runtime backend | Rust support |
| --- | --- | --- |
| Linux x86_64 | io_uring/liburing | Supported |
| Linux aarch64 | io_uring/liburing | Supported |
| macOS arm64 | kqueue | Supported |
| macOS x86_64 | kqueue | Supported |
| Windows x86_64 | IOCP | Supported |

Unix-domain sockets are Unix-only. Windows sockets are Winsock SOCKETs, not POSIX file descriptors. llam::fs::File is available on Windows through LLAM's blocking offload path for ordinary sequential files. Raw overlapped pipe/device/custom HANDLE integrations can use llam::io::Handle plus read_handle, write_handle, and poll_handle.

Runtime Model

Every LLAM-aware operation should run inside #[llam::main], llam::run(...), or llam::run_with_profile(...).

#[llam::main]
fn main() {
    println!("running inside a managed LLAM task");
}

For I/O-heavy applications:

#[llam::main(profile = "io_latency")]
fn main() -> llam::Result<()> {
    println!("I/O latency profile enabled");
    Ok(())
}

The #[llam::main] function body, or the closure passed to run, becomes the first managed task. Tasks spawned from there are scheduled by the LLAM runtime.

Runtime::builder().create_handle() exposes LLAM's low-level runtime handle API for experiments and future multi-runtime work. The safe high-level APIs still target the active/default LLAM runtime, so application code should prefer run, run_with_profile, or Runtime::builder().run(...).

Runtime Entry Macros

llam re-exports procedural macros from llam-macros, so users only need to depend on llam.

Application entrypoint:

#[llam::main]
fn main() {
    println!("hello from LLAM");
}

Fallible entrypoint:

#[llam::main(profile = "io_latency")]
fn main() -> llam::Result<()> {
    Ok(())
}

Supported profile strings are balanced, release_fast, debug_safe, and io_latency. Passing a Rust expression is also supported:

#[llam::main(profile = llam::Profile::IoLatency)]
fn main() -> llam::Result<()> {
    Ok(())
}

Test entrypoint:

#[llam::test(profile = "debug_safe")]
fn channel_roundtrip() -> llam::Result<()> {
    let (tx, rx) = llam::channel::bounded::<u32>(1)?;
    tx.send(42).expect("send failed");
    assert_eq!(rx.recv()?, 42);
    Ok(())
}

The explicit runtime API remains available:

fn main() -> llam::Result<()> {
    llam::run(|| {
        Ok(())
    })
}

Runtime Profiles

Profiles are high-level runtime presets. They map to the C runtime's llam_runtime_profile_t values and can be selected with #[llam::main], #[llam::test], llam::run_with_profile, or Runtime::builder().profile(...).

| Rust value | Attribute string | Intended use |
| --- | --- | --- |
| llam::Profile::Balanced | "balanced" | Default profile. Good general-purpose scheduling and I/O behavior. |
| llam::Profile::ReleaseFast | "release_fast" | Fewer diagnostic checks and lower overhead for benchmark/release-style runs. |
| llam::Profile::DebugSafe | "debug_safe" | More conservative debug/safety behavior for tests and failure investigation. |
| llam::Profile::IoLatency | "io_latency" | I/O-oriented server/client workloads where wakeup and completion latency matter. |

Attribute form:

#[llam::main(profile = "io_latency")]
fn main() -> llam::Result<()> {
    Ok(())
}

Expression form:

#[llam::main(profile = llam::Profile::ReleaseFast)]
fn main() -> llam::Result<()> {
    Ok(())
}

Builder form:

fn main() -> llam::Result<()> {
    llam::Runtime::builder()
        .profile(llam::Profile::IoLatency)
        .dynamic_workers(true)
        .lockfree_normq(true)
        .run(|| Ok(()))
}

The builder also exposes lower-level knobs such as deterministic mode, forced yield intervals, idle spin timing, SQPOLL CPU selection, worker rings, dynamic workers, huge allocation, and raw experimental flags.
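A combined sketch of those knobs. Only `profile`, `dynamic_workers`, `lockfree_normq`, and `run` appear elsewhere in this README; `deterministic` and `force_yield_interval` are hypothetical names for the deterministic-mode and forced-yield knobs and may not match the real builder API.

```rust
// Sketch only: `deterministic` and `force_yield_interval` are guessed names
// for knobs this README mentions but does not spell out.
fn main() -> llam::Result<()> {
    llam::Runtime::builder()
        .profile(llam::Profile::DebugSafe)
        .dynamic_workers(false)
        .deterministic(true)                                          // hypothetical
        .force_yield_interval(std::time::Duration::from_micros(250))  // hypothetical
        .run(|| Ok(()))
}
```

Check the `Runtime::builder()` API docs for the exact method names before relying on this shape.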

Key Features

  • Stackful LLAM tasks from Rust closures.
  • #[llam::main] and #[llam::test] runtime entry macros.
  • Go-like spawn! and try_spawn! macros.
  • JoinHandle<T> with typed task results.
  • TaskGroup and TaskBatch helpers for related work.
  • Typed bounded channels over LLAM's C channel primitive.
  • select! over receive, send, close, timeout, and default arms.
  • LLAM-aware Mutex<T> and Condvar.
  • Cooperative sleep and monotonic deadlines.
  • Blocking offload for filesystem, CPU, and foreign blocking work.
  • TCP, UDP, Unix socket, raw descriptor, and owned-buffer I/O wrappers.
  • Task-local storage.
  • ABI/runtime diagnostics.
  • Raw C ABI access through llam::sys for advanced integration.

Minimal Example

fn main() -> llam::Result<()> {
    llam::run(|| {
        let handle = llam::spawn!({
            21 * 2
        });

        let value = handle.join().expect("task failed");
        assert_eq!(value, 42);
        Ok(())
    })
}

Tasks

Use spawn! when spawn failure should panic:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let task = llam::spawn!(move {
            "hello from LLAM"
        });

        assert_eq!(task.join().expect("join failed"), "hello from LLAM");
        Ok(())
    })
}

Use try_spawn! when spawn failure should be returned:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let task = llam::try_spawn!(move {
            7usize
        })?;

        assert_eq!(task.join().expect("join failed"), 7);
        Ok(())
    })
}

Tune task class and stack size:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let token = llam::CancelToken::new()?;

        let task = llam::try_spawn!(
            class = llam::TaskClass::Latency,
            stack = llam::StackClass::Large,
            cancel = token,
            move {
                "done"
            }
        )?;

        assert_eq!(task.join().expect("join failed"), "done");
        Ok(())
    })
}

Use CancelScope when several tasks should share a cancellation token:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let scope = llam::CancelScope::new()?;
        let token = scope.token();

        let task = scope.try_spawn(move || {
            while !token.is_cancelled() {
                llam::task::yield_now();
            }
            "stopped"
        })?;

        scope.cancel()?;
        assert_eq!(task.join().expect("join failed"), "stopped");
        Ok(())
    })
}

CancelScope::new() cancels on drop. CancelScope::detached() creates a scope that only cancels when cancel() is called. The scope is a cancellation helper, not a nursery: keep and join JoinHandles when task completion matters. If try_spawn_with receives options that already contain a cancellation token, the scope token is used.

Use scope or try_scope for structured tasks that must finish before the caller continues. Scoped tasks may borrow from the caller:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let name = String::from("llam");
        let len = llam::try_scope(|scope| {
            let task = scope.spawn(|| name.len());
            task.join().expect("scoped task failed")
        })?;
        assert_eq!(len, 4);
        Ok(())
    })
}

Use nursery when the group should also have a shared cancellation token:

fn main() -> llam::Result<()> {
    llam::run(|| {
        llam::nursery(|nursery| {
            let token = nursery.token();
            let task = nursery.try_spawn(move || {
                while !token.is_cancelled() {
                    llam::task::yield_now();
                }
                "stopped"
            })?;

            nursery.cancel()?;
            assert_eq!(task.join().expect("join failed"), "stopped");
            Ok(())
        })?;
        Ok(())
    })
}

Unjoined scoped or nursery tasks are joined automatically before the scope returns. nursery requests cancellation before joining children when the closure returns Err or panics. Call nursery.cancel() explicitly for long-running workers that should stop on normal scope exit.

Detach a fire-and-forget task:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let task = llam::try_spawn!(move {
            let _ = llam::time::sleep(std::time::Duration::from_millis(10));
        })?;

        task.detach()?;
        Ok(())
    })
}

Yield explicitly:

llam::task::yield_now();

Inspect the current managed task:

println!("task id = {:?}", llam::task::current_id());
println!("task class = {:?}", llam::task::current_class());
println!("state = {:?}", llam::task::current_state_name());

Task Groups

Use TaskGroup when all child tasks must finish before the scope continues:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let mut group = llam::TaskGroup::new()?;

        for i in 0..8 {
            group.spawn(move || {
                println!("group task {i}");
            })?;
        }

        group.join()?;
        Ok(())
    })
}

Use TaskBatch<T> when each task returns a typed result:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let mut batch = llam::TaskBatch::new();

        for i in 0..4 {
            batch.spawn(move || i * i)?;
        }

        let values = batch.join().expect("batch failed");
        assert_eq!(values, vec![0, 1, 4, 9]);
        Ok(())
    })
}

TaskBatch::join() returns the typed values and stops on the first task error. Use TaskBatch::join_results() when each task result should be inspected independently.

Channels

Channels are typed and bounded. Values are moved into LLAM and moved back out on receive.

fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<String>(32)?;

        llam::spawn!(move {
            tx.send("ping".to_string()).expect("send failed");
        });

        assert_eq!(rx.recv()?, "ping");
        Ok(())
    })
}

Close-aware receive:

fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<u64>(4)?;

        tx.send(1).expect("send failed");
        drop(tx);

        assert_eq!(rx.recv_option()?, Some(1));
        assert_eq!(rx.recv_option()?, None);
        Ok(())
    })
}

Timeouts and nonblocking attempts:

use std::time::Duration;

fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<u32>(1)?;

        tx.try_send(10).expect("send failed");
        assert!(tx.try_send(20).is_err());

        assert_eq!(rx.recv_timeout(Duration::from_millis(50))?, 10);
        assert!(rx.try_recv().is_err());
        Ok(())
    })
}

Useful channel methods:

tx.send(value)
tx.try_send(value)
tx.send_timeout(value, Duration::from_millis(10))
tx.close()

rx.recv()
rx.try_recv()
rx.recv_timeout(Duration::from_millis(10))
rx.recv_option()
rx.try_recv_option()
rx.recv_timeout_option(Duration::from_millis(10))

recv() returns EPIPE on close. recv_option() returns Ok(None) on close.

Select

llam::select! waits on multiple channel operations with Go-like syntax.

Receive with timeout:

use std::time::Duration;

fn main() -> llam::Result<()> {
    llam::run(|| {
        let (tx, rx) = llam::channel::bounded::<u32>(1)?;

        llam::spawn!(move {
            tx.send(42).expect("send failed");
        });

        let value = llam::select! {
            recv(rx) -> msg => {
                msg.expect("recv failed")
            },
            after(Duration::from_secs(1)) => {
                0
            },
        };

        assert_eq!(value, 42);
        Ok(())
    })
}

Closed-channel arm:

let done = llam::select! {
    recv(rx) -> msg => {
        println!("value = {:?}", msg);
        false
    },
    closed(rx) => {
        true
    },
};

Send arm:

let selected = llam::select! {
    send(tx, 7u32) => {
        "sent"
    },
    after(Duration::from_millis(10)) => {
        "timeout"
    },
};

println!("{selected}");

Default arm:

let ready = llam::select! {
    recv(rx) -> value => {
        let _ = value;
        true
    },
    default => {
        false
    },
};

select! accepts either one after(...) arm or one default arm, not both. Use try_select! when backend errors should be returned instead of panicking:

let value = llam::try_select! {
    recv(rx) -> value => value?,
    default => 0,
}?;

Low-level integrations can call the raw C select surface through llam::channel::select_raw. This is unsafe because the caller owns all raw pointers in llam::sys::llam_select_op_t.

let raw = unsafe { llam::sys::llam_channel_create(1) };
assert!(!raw.is_null());

let value = Box::into_raw(Box::new(42u32)).cast();
unsafe {
    assert_eq!(llam::sys::llam_channel_send(raw, value), 0);
}

let mut out = std::ptr::null_mut();
let mut ops = [llam::sys::llam_select_op_t {
    kind: llam::sys::LLAM_SELECT_OP_RECV,
    reserved0: 0,
    channel: raw,
    send_value: std::ptr::null_mut(),
    recv_out: &mut out,
    result_errno: 0,
}];

let selected = unsafe { llam::channel::select_raw(&mut ops, u64::MAX)? };
assert_eq!(selected.selected, 0);
assert_eq!(selected.result_errno, 0);
assert_eq!(unsafe { *Box::from_raw(out.cast::<u32>()) }, 42);

unsafe {
    let _ = llam::sys::llam_channel_destroy(raw);
}

Time

Sleep cooperatively:

llam::time::sleep(std::time::Duration::from_millis(10))?;

Use absolute LLAM deadlines:

let deadline = llam::time::deadline_after(std::time::Duration::from_secs(1));
llam::time::sleep_until(deadline)?;

Read the monotonic nanosecond clock:

let now = llam::time::now_ns();
println!("now = {now}");

Mutex And Condvar

Use LLAM-aware synchronization primitives for waits inside managed tasks.

fn main() -> llam::Result<()> {
    llam::run(|| {
        let value = llam::sync::Mutex::new(0usize)?;

        {
            let mut guard = value.lock()?;
            *guard += 1;
        }

        assert_eq!(*value.lock()?, 1);
        Ok(())
    })
}

Condition variable with a predicate loop:

use std::sync::Arc;

fn main() -> llam::Result<()> {
    llam::run(|| {
        let ready = Arc::new(llam::sync::Mutex::new(false)?);
        let cond = Arc::new(llam::sync::Condvar::new()?);

        let worker_ready = Arc::clone(&ready);
        let worker_cond = Arc::clone(&cond);

        llam::spawn!(move {
            let mut guard = worker_ready.lock().expect("lock failed");
            *guard = true;
            worker_cond.notify_one().expect("notify failed");
        });

        let mut guard = ready.lock()?;
        while !*guard {
            guard = cond.wait(guard)?;
        }

        Ok(())
    })
}

As with standard library condition variables, always wait inside a predicate loop so spurious wakeups are handled correctly.

Blocking Work

Use llam::blocking::call for work that should run outside the cooperative scheduler path.

fn main() -> llam::Result<()> {
    llam::run(|| {
        let result = llam::blocking::call(|| {
            std::fs::read_to_string("Cargo.toml")
        })?;

        let text = result.expect("blocking closure panicked")?;
        println!("{} bytes", text.len());
        Ok(())
    })
}

Use BlockingRegion for manual enter/leave around foreign blocking code:

let _region = llam::blocking::BlockingRegion::enter()?;
// Call a blocking C API here.

TCP Echo Server

This is the typical LLAM-rs shape: synchronous-looking accept, read, and write_all, with one LLAM task per connection.

use std::io::{Read, Write};

fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let listener = llam::net::TcpListener::bind("127.0.0.1:9090")?;
        println!("echo server listening on {}", listener.local_addr()?);

        loop {
            let (mut stream, peer) = listener.accept()?;
            println!("accepted {peer:?}");

            llam::spawn!(move {
                let mut buf = [0u8; 4096];
                loop {
                    let n = match stream.read(&mut buf) {
                        Ok(0) => break,
                        Ok(n) => n,
                        Err(error) => {
                            eprintln!("read failed: {error}");
                            break;
                        }
                    };

                    if let Err(error) = stream.write_all(&buf[..n]) {
                        eprintln!("write failed: {error}");
                        break;
                    }
                }
            });
        }
    })
}

TCP Client

use std::io::{Read, Write};

fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let mut stream = llam::net::TcpStream::connect("127.0.0.1:9090")?;

        stream.write_all(b"ping")?;

        let mut buf = [0u8; 4];
        stream.read_exact(&mut buf)?;

        assert_eq!(&buf, b"ping");
        Ok(())
    })
}

Wrapping Existing Standard Sockets

LLAM-rs can take ownership of existing standard sockets and switch them to nonblocking mode. This is useful when another library performs bind/connect but you still want LLAM-aware I/O afterwards.

let std_listener = std::net::TcpListener::bind("127.0.0.1:0")?;
let listener = llam::net::TcpListener::from_std(std_listener)?;

let std_stream = std::net::TcpStream::connect(listener.local_addr()?)?;
let mut stream = llam::net::TcpStream::from_std(std_stream)?;

UDP and Unix sockets have the same shape:

let std_udp = std::net::UdpSocket::bind("127.0.0.1:0")?;
let udp = llam::net::UdpSocket::from_std(std_udp)?;
#[cfg(unix)]
{
    let std_listener = std::os::unix::net::UnixListener::bind("/tmp/llam.sock")?;
    let listener = llam::net::UnixListener::from_std(std_listener)?;
    drop(listener);
}

UDP

Connected UDP sockets use LLAM's read/write path.

fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let a = llam::net::UdpSocket::bind("127.0.0.1:0")?;
        let b = llam::net::UdpSocket::bind("127.0.0.1:0")?;

        a.connect(b.local_addr()?)?;
        b.connect(a.local_addr()?)?;

        a.send(b"pong")?;

        let mut buf = [0u8; 4];
        let n = b.recv(&mut buf)?;

        assert_eq!(&buf[..n], b"pong");
        Ok(())
    })
}

Unconnected UDP uses send_to and recv_from. The wrapper waits for readiness through LLAM when the socket would otherwise block.

fn main() -> llam::Result<()> {
    llam::run_with_profile(llam::Profile::IoLatency, || {
        let a = llam::net::UdpSocket::bind("127.0.0.1:0")?;
        let b = llam::net::UdpSocket::bind("127.0.0.1:0")?;

        a.send_to(b"datagram", b.local_addr()?)?;

        let mut buf = [0u8; 64];
        let (n, peer) = b.recv_from(&mut buf)?;

        println!("received {:?} from {peer}", &buf[..n]);
        Ok(())
    })
}

Raw I/O

llam::io exposes LLAM-aware descriptor/socket operations for integrations that do not want the higher-level network wrappers.

let mut buf = [0u8; 1024];

let n = llam::io::read(fd, &mut buf)?;
let written = llam::io::write(fd, &buf[..n])?;
let revents = llam::io::poll_fd(fd, llam::io::READABLE, 1000)?;

println!("written = {written}, revents = {revents}");

Accept with peer address:

let accepted = llam::io::accept_with_addr(listener_fd)?;
println!("accepted fd={:?} peer={:?}", accepted.fd, accepted.addr);

Owned buffers:

if let Some(buf) = llam::io::read_owned(fd, 4096)? {
    println!("{} bytes", buf.as_slice().len());
}

EOF or zero-byte reads return Ok(None) for owned-buffer reads.

Files

File wrappers are available through llam::fs::File.

fn read_file() -> std::io::Result<String> {
    use std::io::Read;

    let mut file = llam::fs::File::open("Cargo.toml")?;
    let mut text = String::new();
    file.read_to_string(&mut text)?;
    Ok(text)
}

Unix file wrappers use LLAM fd operations. Windows file wrappers use llam::blocking::call around standard file handles so Rust Read/Write offset semantics remain predictable. Use raw llam::io::{read_handle, write_handle, poll_handle} for explicitly overlapped HANDLE integrations.

Raw HANDLE integration:

#[cfg(windows)]
fn probe_handle(file: &std::fs::File) -> llam::Result<()> {
    use std::os::windows::io::AsRawHandle;

    let handle = file.as_raw_handle() as llam::io::Handle;
    let ready = llam::io::poll_handle(handle, llam::io::READABLE, 0)?;
    println!("handle ready={}", ready.revents);
    Ok(())
}

Run the Windows-only raw HANDLE example with:

cargo run -p llam --example windows_handle_io

Task-Local Storage

Task-local values are scoped to the current managed LLAM task.

fn main() -> llam::Result<()> {
    llam::run(|| {
        let key = llam::TaskLocalKey::<String>::new()?;

        key.set("root".to_string())?;
        assert_eq!(key.get_cloned()?.as_deref(), Some("root"));

        let value = key.with("scoped".to_string(), || {
            key.get_cloned().unwrap().unwrap()
        })?;

        assert_eq!(value, "scoped");
        assert_eq!(key.get_cloned()?.as_deref(), Some("root"));

        assert_eq!(key.replace("next".to_string())?.as_deref(), Some("root"));
        assert!(key.is_set()?);
        assert_eq!(key.get_cloned_or_default()?, "next");
        Ok(())
    })
}

with() and bind() are nest-safe: they restore the previous value when the temporary value leaves scope. Use replace() when the previous value should be recovered instead of dropped. A TaskLocalGuard can also be restored explicitly with restore() or cleared with clear().

If task-local values own resources, call take() or clear() before task exit. The C runtime stores raw pointers and does not provide a destructor hook.

Diagnostics

ABI information:

let abi = llam::AbiInfo::load()?;

println!("LLAM ABI {}.{}", abi.abi_major(), abi.abi_minor());
println!("runtime {}", abi.runtime_name());
println!("version {}", abi.version_string());
println!("platform {}", abi.platform_name());

Runtime stats:

fn main() -> llam::Result<()> {
    let runtime = llam::Runtime::builder()
        .profile(llam::Profile::Balanced)
        .init()?;

    let task = llam::try_spawn!({
        for _ in 0..10 {
            llam::task::yield_now();
        }
    })?;

    unsafe {
        llam::sys::llam_run();
    }

    task.join().expect("task failed");

    let stats = runtime.stats()?;
    println!("ctx switches = {}", stats.ctx_switches());
    println!("yields = {}", stats.yields());
    println!("io submits = {}", stats.io_submits());
    println!("active workers = {}", stats.active_workers());

    runtime.shutdown();
    Ok(())
}

The safe RuntimeStats wrapper exposes getters for the full LLAM 1.0 stats struct:

| Group | Getters |
| --- | --- |
| Scheduling | ctx_switches, yields, parks, wakes, steals, migrations |
| Blocking | blocking_calls, blocking_completions |
| I/O | io_submits, io_submit_calls, io_submit_syscalls, io_completions |
| Idle spin | idle_polls, idle_spin_loops, idle_spin_hits, idle_spin_fallbacks, idle_spin_ns |
| Workers | active_workers, online_workers, online_workers_floor, online_workers_min, online_workers_max, active_nodes |
| Runtime flags | dynamic_workers, worker_rings, worker_rings_multishot, lockfree_normq, huge_alloc, sqpoll |
| Queues | queue_overflows, overflow_depth |
| Opaque blocking | opaque_block_ns, opaque_block_samples, opaque_block_max_ns, opaque_enter_wait_ns, opaque_enter_wait_samples, opaque_enter_wait_max_ns, opaque_leave_wait_ns, opaque_leave_wait_samples, opaque_leave_wait_max_ns |
| Direct yield handoff | yield_direct_attempts, yield_direct_fast_hits, yield_direct_locked_hits, yield_direct_fail_context, yield_direct_fail_policy, yield_direct_fail_no_work, yield_direct_fail_self, yield_direct_fail_push |

stats.raw() returns the underlying llam::sys::llam_runtime_stats_t for advanced consumers that need exact ABI field access.

Write stats JSON on Unix:

#[cfg(unix)]
{
    let file = std::fs::File::create("llam-stats.json")?;
    llam::diagnostics::write_stats_json(&file)?;
}

Build Environment

| Variable | Meaning |
| --- | --- |
| LLAM_SYS_PREFIX | Installed LLAM SDK prefix. Must contain include/ and lib/. |
| LLAM_SYS_LIB_DIR | Directory containing libllam_runtime. |
| LLAM_SYS_INCLUDE_DIR | Include directory to use with LLAM_SYS_LIB_DIR. |
| LLAM_SYS_LIB_NAME | Link name. Default: llam_runtime. |
| LLAM_SYS_LINK_KIND | Cargo link kind. Default: static. |
| LLAM_SYS_INSTALL_PREFIX | Override the automatic install prefix. |
| LLAM_SYS_INSTALL_VERSION | LLAM release version. Default: 1.0.1. |
| LLAM_SYS_INSTALL_TARGET | Explicit release target. Examples: macos-aarch64, macos-x86_64, linux-x86_64, linux-aarch64, windows-x86_64. |
| LLAM_SYS_INSTALL_BASE_URL | Release asset base URL. |
| LLAM_SYS_INSTALL_SCRIPT | Local path or URL for install.sh / install.ps1. |
| LLAM_SYS_FORCE_INSTALL=1 | Reinstall even if the build prefix already looks valid. |
| LLAM_SYS_NO_INSTALL=1 | Do not run the installer. Require LLAM_SYS_PREFIX or LLAM_SYS_LIB_DIR. |

Examples

Examples are Cargo targets, not standalone rustc inputs:

cargo run -p llam --example hello
cargo run -p llam --example attribute_main
cargo run -p llam --example tcp_echo
cargo run -p llam --example chat_server -- 7777
cargo run -p llam --example chat_server -- --public 7777
LLAM_CHAT_QUIET=1 cargo run -p llam --example chat_server -- 7777

The chat server mirrors the C LLAM chat server shape: each client gets a bounded outbox channel, reader/writer tasks run per connection, full input lines are broadcast as [client N] ..., and slow receivers shed queued messages instead of blocking global fanout.

Checks

From a LLAM-rs checkout:

cargo fmt --all -- --check
cargo test -p llam
cargo check -p llam --examples
cargo clippy -p llam --examples --tests -- -D warnings

Bench and stress helpers:

cargo run -p llam --bin llam-rs-bench -- 50000
cargo run -p llam --bin llam-rs-stress -- 10 128
cargo bench -p llam --bench runtime_bench

Troubleshooting

The build downloads LLAM every time

Set a stable install prefix:

LLAM_SYS_INSTALL_PREFIX="$HOME/.local/llam" cargo build

Or install LLAM once and reuse it:

LLAM_SYS_PREFIX="$HOME/.local/llam" cargo build

Automatic install is not allowed in CI

Provide an SDK and disable automatic install:

LLAM_SYS_NO_INSTALL=1 \
LLAM_SYS_PREFIX="$HOME/.local/llam" \
cargo test -p llam

Cross-building

Automatic installation is host-oriented. For cross builds, provide a matching prebuilt SDK:

LLAM_SYS_INCLUDE_DIR="/path/to/target/include" \
LLAM_SYS_LIB_DIR="/path/to/target/lib" \
cargo build --target <target-triple>

Dynamic linking

Static linking is the default:

LLAM_SYS_LINK_KIND=static cargo build

Dynamic linking requires a matching dynamic LLAM runtime discoverable by the loader at runtime:

LLAM_SYS_LINK_KIND=dylib LLAM_SYS_PREFIX="$HOME/.local/llam" cargo build

Safety

llam::sys is raw and unsafe by design. The safe llam layer owns heap payloads across channels, task trampolines, task-local values, and owned I/O buffers. Direct C handles are hidden behind RAII wrappers where the C API provides a lifetime contract.

See the repository SAFETY.md for the unsafe boundary audit.

License

Apache-2.0