llam
llam is the safe Rust binding for the LLAM C runtime.
- Rust crate repository: https://github.com/Feralthedogg/LLAM-rs
- C runtime repository: https://github.com/Feralthedogg/LLAM
- Crate: https://crates.io/crates/llam
- Documentation: https://docs.rs/llam
LLAM-rs gives Rust code Go-style synchronous concurrency on top of LLAM's
stackful task scheduler. You write ordinary blocking-looking Rust code with
tasks, channels, select!, sleeps, socket I/O, mutexes, condition variables,
blocking offload, and diagnostics. Underneath, managed LLAM tasks park
cooperatively so the runtime can keep OS worker threads busy.
llam 0.1.4 requires LLAM C SDK 1.0.1 or newer. The build script installs
1.0.1 by default unless LLAM_SYS_INSTALL_VERSION, LLAM_SYS_PREFIX, or
LLAM_SYS_LIB_DIR is provided.
No async, no await, no executor handles.
Install
Add the crate with `cargo add llam`, or edit Cargo.toml:

```toml
[dependencies]
llam = "0.1.4"
```
Build normally with `cargo build`. By default, the build script downloads and
installs the LLAM C SDK into Cargo's private build output, then links
libllam_runtime statically.
Use a preinstalled LLAM SDK instead:

```bash
LLAM_SYS_PREFIX="/.local/llam" cargo build
```
Disable automatic installation in locked-down CI:

```bash
LLAM_SYS_NO_INSTALL=1 \
LLAM_SYS_PREFIX="/.local/llam" \
cargo build
```
Platform Support
| Platform | C runtime backend | Rust support |
|---|---|---|
| Linux x86_64 | io_uring/liburing | Supported |
| Linux aarch64 | io_uring/liburing | Supported |
| macOS arm64 | kqueue | Supported |
| macOS x86_64 | kqueue | Supported |
| Windows x86_64 | IOCP | Supported |
Unix-domain sockets are Unix-only. Windows sockets are Winsock SOCKETs, not
POSIX file descriptors. llam::fs::File is available on Windows through LLAM's
blocking offload path for ordinary sequential files. Raw overlapped
pipe/device/custom HANDLE integrations can use llam::io::Handle plus
read_handle, write_handle, and poll_handle.
Runtime Model
Every LLAM-aware operation should run inside #[llam::main],
llam::run(...), or llam::run_with_profile(...).
For I/O-heavy applications:
The #[llam::main] function body, or the closure passed to run, becomes the
first managed task. Tasks spawned from there are scheduled by the LLAM runtime.
Runtime::builder().create_handle() exposes LLAM's low-level runtime handle API
for experiments and future multi-runtime work. The safe high-level APIs still
target the active/default LLAM runtime, so application code should prefer
run, run_with_profile, or Runtime::builder().run(...).
Runtime Entry Macros
llam re-exports procedural macros from llam-macros, so users only need to
depend on llam.
Application entrypoint:
Fallible entrypoint:
Supported profile strings are balanced, release_fast, debug_safe, and
io_latency. Passing a Rust expression is also supported:
Test entrypoint:
The explicit runtime API remains available:
Runtime Profiles
Profiles are high-level runtime presets. They map to the C runtime's
llam_runtime_profile_t values and can be selected with #[llam::main],
#[llam::test], llam::run_with_profile, or Runtime::builder().profile(...).
| Rust value | Attribute string | Intended use |
|---|---|---|
| llam::Profile::Balanced | "balanced" | Default profile. Good general-purpose scheduling and I/O behavior. |
| llam::Profile::ReleaseFast | "release_fast" | Fewer diagnostic checks and lower overhead for benchmark/release-style runs. |
| llam::Profile::DebugSafe | "debug_safe" | More conservative debug/safety behavior for tests and failure investigation. |
| llam::Profile::IoLatency | "io_latency" | I/O-oriented server/client workloads where wakeup and completion latency matter. |
Attribute form:
Expression form:
Builder form:
The builder also exposes lower-level knobs such as deterministic mode, forced yield intervals, idle spin timing, SQPOLL CPU selection, worker rings, dynamic workers, huge allocation, and raw experimental flags.
Key Features
- Stackful LLAM tasks from Rust closures.
- #[llam::main] and #[llam::test] runtime entry macros.
- Go-like spawn! and try_spawn! macros.
- JoinHandle<T> with typed task results.
- TaskGroup and TaskBatch helpers for related work.
- Typed bounded channels over LLAM's C channel primitive.
- select! over receive, send, close, timeout, and default arms.
- LLAM-aware Mutex<T> and Condvar.
- Cooperative sleep and monotonic deadlines.
- Blocking offload for filesystem, CPU, and foreign blocking work.
- TCP, UDP, Unix socket, raw descriptor, and owned-buffer I/O wrappers.
- Task-local storage.
- ABI/runtime diagnostics.
- Raw C ABI access through llam::sys for advanced integration.
Minimal Example
Tasks
Use spawn! when spawn failure should panic:
Use try_spawn! when spawn failure should be returned:
Tune task class and stack size:
Use CancelScope when several tasks should share a cancellation token:
CancelScope::new() cancels on drop. CancelScope::detached() creates a scope
that only cancels when cancel() is called. The scope is a cancellation helper,
not a nursery: keep and join JoinHandles when task completion matters. If
try_spawn_with receives options that already contain a cancellation token, the
scope token is used.
Use scope or try_scope for structured tasks that must finish before the
caller continues. Scoped tasks may borrow from the caller:
Use nursery when the group should also have a shared cancellation token:
Unjoined scoped or nursery tasks are joined automatically before the scope
returns. nursery requests cancellation before joining children when the
closure returns Err or panics. Call nursery.cancel() explicitly for
long-running workers that should stop on normal scope exit.
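The borrow-and-join behavior of scoped tasks has a direct analogue in std::thread::scope, which can stand in as a runnable illustration (plain OS threads here, not LLAM tasks):

```rust
use std::thread;

// Scoped spawns may borrow local data because every scoped thread is
// guaranteed to be joined before the scope returns.
fn scoped_sum() -> i32 {
    let data = vec![1, 2, 3];
    thread::scope(|s| {
        let h1 = s.spawn(|| data.iter().sum::<i32>()); // borrows `data`
        let h2 = s.spawn(|| data.len() as i32);        // borrows `data`
        h1.join().unwrap() + h2.join().unwrap()
    })
}
```

As with llam's scope, unjoined children are joined automatically before the closure's result is returned to the caller.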
Detach a fire-and-forget task:
Yield explicitly:
yield_now();
Inspect the current managed task:
Task Groups
Use TaskGroup when all child tasks must finish before the scope continues:
Use TaskBatch<T> when each task returns a typed result:
TaskBatch::join() returns the typed values and stops on the first task error.
Use TaskBatch::join_results() when each task result should be inspected
independently.
Channels
Channels are typed and bounded. Values are moved into LLAM and moved back out on receive.
Close-aware receive:
Timeouts and nonblocking attempts:
use std::time::Duration;
Useful channel methods:
- tx.send
- tx.try_send
- tx.send_timeout
- tx.close
- rx.recv
- rx.try_recv
- rx.recv_timeout
- rx.recv_option
- rx.try_recv_option
- rx.recv_timeout_option
recv() returns EPIPE on close. recv_option() returns Ok(None) on close.
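The same move-in/move-out and close semantics can be illustrated with std's bounded channel (an analogy only; llam's channels ride on the LLAM C primitive):

```rust
use std::sync::mpsc::sync_channel;

// Bounded channel: values are moved in on send and moved back out on
// receive. Dropping the last sender closes the channel, after which
// receives fail (llam surfaces this as an EPIPE error or Ok(None)).
fn channel_demo() -> (String, bool) {
    let (tx, rx) = sync_channel::<String>(4);
    tx.send("hello".to_string()).unwrap();
    let v = rx.recv().unwrap();
    drop(tx); // close the channel
    let closed = rx.recv().is_err();
    (v, closed)
}
```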
Select
llam::select! waits on multiple channel operations with Go-like syntax.
Receive with timeout:
use std::time::Duration;
Closed-channel arm:
let done = select! ;
Send arm:
let selected = select! ;
Default arm:
let ready = select! ;
select! accepts either one after(...) arm or one default arm, not both.
Use try_select! when backend errors should be returned instead of panicking:
let value = try_select! ?;
Low-level integrations can call the raw C select surface through
llam::channel::select_raw. This is unsafe because the caller owns all raw
pointers in llam::sys::llam_select_op_t.
Time
Sleep cooperatively:
sleep?;
Use absolute LLAM deadlines:
let deadline = deadline_after;
sleep_until?;
Read the monotonic nanosecond clock:
let now = now_ns;
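As a runnable stand-in, std's Instant offers the same monotonic-clock pattern (llam's now_ns reads LLAM's clock instead):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Measure elapsed time against a monotonic clock; the reading never
// goes backwards, so the elapsed duration is at least the sleep time.
fn slept_at_least(d: Duration) -> bool {
    let start = Instant::now();
    thread::sleep(d);
    start.elapsed() >= d
}
```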
Mutex And Condvar
Use LLAM-aware synchronization primitives for waits inside managed tasks.
Condition variable with a predicate loop:
use std::sync::Arc;
Like standard condition variables, wait in a predicate loop.
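A runnable sketch of that predicate loop, using std's Mutex and Condvar (llam's LLAM-aware primitives follow the same shape inside managed tasks):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn wait_for_flag() -> bool {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let setter = Arc::clone(&pair);
    thread::spawn(move || {
        let (lock, cvar) = &*setter;
        *lock.lock().unwrap() = true; // publish the state change
        cvar.notify_one();
    });
    let (lock, cvar) = &*pair;
    let mut ready = lock.lock().unwrap();
    // Predicate loop: re-check the condition after every wakeup to
    // tolerate spurious wakeups and missed-notify races.
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
    *ready
}
```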
Blocking Work
Use llam::blocking::call for work that should run outside the cooperative
scheduler path.
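The general shape behind a blocking-offload call can be sketched with plain std threads and a channel: run the blocking work elsewhere, then wait for its result (llam::blocking::call does this through LLAM's offload pool rather than a fresh thread):

```rust
use std::sync::mpsc::channel;
use std::thread;

// Offload a blocking computation to a dedicated thread and wait for
// the result, so the caller's scheduler path is not blocked by it.
fn offload_blocking() -> u64 {
    let (tx, rx) = channel();
    thread::spawn(move || {
        let sum: u64 = (1..=1_000u64).sum(); // stand-in for blocking work
        tx.send(sum).unwrap();
    });
    rx.recv().unwrap()
}
```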
Use BlockingRegion for manual enter/leave around foreign blocking code:
let _region = enter?;
// Call a blocking C API here.
TCP Echo Server
This is the typical LLAM-rs shape: synchronous-looking accept, read, and
write_all, with one LLAM task per connection.
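For comparison, here is the same accept/read/write_all shape with std threads and blocking sockets; llam swaps the OS thread per connection for a cheap LLAM task:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Synchronous-looking echo: accept one connection, read a chunk, and
// write it straight back.
fn echo_once() -> std::io::Result<String> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let server = thread::spawn(move || -> std::io::Result<()> {
        let (mut conn, _) = listener.accept()?;
        let mut buf = [0u8; 64];
        let n = conn.read(&mut buf)?;
        conn.write_all(&buf[..n])?; // echo back
        Ok(())
    });
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"ping")?;
    let mut out = [0u8; 64];
    let n = client.read(&mut out)?;
    server.join().unwrap()?;
    Ok(String::from_utf8_lossy(&out[..n]).into_owned())
}
```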
TCP Client
Wrapping Existing Standard Sockets
LLAM-rs can take ownership of existing standard sockets and switch them to nonblocking mode. This is useful when another library performs bind/connect but you still want LLAM-aware I/O afterwards.
let std_listener = bind?;
let listener = from_std?;
let std_stream = connect?;
let mut stream = from_std?;
UDP and Unix sockets have the same shape:
let std_udp = bind?;
let udp = from_std?;
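The std half of that handoff looks like this; from_std would then take ownership of the nonblocking socket (the llam calls themselves are omitted here because they need a running LLAM runtime):

```rust
use std::net::{TcpListener, TcpStream, UdpSocket};

// Create sockets with std (another library could do the bind/connect),
// then switch them to nonblocking mode before handing ownership to an
// event-driven runtime.
fn prepare_nonblocking() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let stream = TcpStream::connect(addr)?;
    listener.set_nonblocking(true)?;
    stream.set_nonblocking(true)?;
    let udp = UdpSocket::bind("127.0.0.1:0")?;
    udp.set_nonblocking(true)?;
    Ok(())
}
```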
UDP
Connected UDP sockets use LLAM's read/write path.
Unconnected UDP uses send_to and recv_from. The wrapper waits for readiness
through LLAM when the socket would otherwise block.
Raw I/O
llam::io exposes LLAM-aware descriptor/socket operations for integrations
that do not want the higher-level network wrappers.
let mut buf = [0u8; 1024];
let n = read?;
let written = write?;
let revents = poll_fd?;
Accept with peer address:
let accepted = accept_with_addr?;
Owned buffers:
if let Some = read_owned?
EOF or zero-byte reads return Ok(None) for owned-buffer reads.
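That convention maps directly onto std's Read, where EOF is a zero-byte read; a small adapter shows the Ok(None) translation (eof_as_none is a hypothetical helper for illustration, not part of llam):

```rust
use std::io::Read;

// Map std's Ok(0)-on-EOF convention to Ok(None), mirroring the
// owned-buffer read semantics described above.
fn eof_as_none(reader: &mut impl Read) -> std::io::Result<Option<Vec<u8>>> {
    let mut buf = vec![0u8; 8];
    let n = reader.read(&mut buf)?;
    if n == 0 {
        return Ok(None); // zero-byte read: end of stream
    }
    buf.truncate(n);
    Ok(Some(buf))
}
```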
Files
File wrappers are available through llam::fs::File.
Unix file wrappers use LLAM fd operations. Windows file wrappers use
llam::blocking::call around standard file handles so Rust Read/Write
offset semantics remain predictable. Use raw llam::io::{read_handle, write_handle, poll_handle} for explicitly overlapped HANDLE integrations.
Raw HANDLE integration:
Run the Windows-only raw HANDLE example with:
Task-Local Storage
Task-local values are scoped to the current managed LLAM task.
with() and bind() are nest-safe: they restore the previous value when the
temporary value leaves scope. Use replace() when the previous value should be
recovered instead of dropped. A TaskLocalGuard can also be restored explicitly
with restore() or cleared with clear().
If task-local values own resources, call take() or clear() before task exit.
The C runtime stores raw pointers and does not provide a destructor hook.
Diagnostics
ABI information:
let abi = load?;
Runtime stats:
The safe RuntimeStats wrapper exposes getters for the full LLAM 1.0 stats
struct:
| Group | Getters |
|---|---|
| Scheduling | ctx_switches, yields, parks, wakes, steals, migrations |
| Blocking | blocking_calls, blocking_completions |
| I/O | io_submits, io_submit_calls, io_submit_syscalls, io_completions |
| Idle spin | idle_polls, idle_spin_loops, idle_spin_hits, idle_spin_fallbacks, idle_spin_ns |
| Workers | active_workers, online_workers, online_workers_floor, online_workers_min, online_workers_max, active_nodes |
| Runtime flags | dynamic_workers, worker_rings, worker_rings_multishot, lockfree_normq, huge_alloc, sqpoll |
| Queues | queue_overflows, overflow_depth |
| Opaque blocking | opaque_block_ns, opaque_block_samples, opaque_block_max_ns, opaque_enter_wait_ns, opaque_enter_wait_samples, opaque_enter_wait_max_ns, opaque_leave_wait_ns, opaque_leave_wait_samples, opaque_leave_wait_max_ns |
| Direct yield handoff | yield_direct_attempts, yield_direct_fast_hits, yield_direct_locked_hits, yield_direct_fail_context, yield_direct_fail_policy, yield_direct_fail_no_work, yield_direct_fail_self, yield_direct_fail_push |
stats.raw() returns the underlying llam::sys::llam_runtime_stats_t for
advanced consumers that need exact ABI field access.
Write stats JSON on Unix:
Build Environment
| Variable | Meaning |
|---|---|
| LLAM_SYS_PREFIX | Installed LLAM SDK prefix. Must contain include/ and lib/. |
| LLAM_SYS_LIB_DIR | Directory containing libllam_runtime. |
| LLAM_SYS_INCLUDE_DIR | Include directory to use with LLAM_SYS_LIB_DIR. |
| LLAM_SYS_LIB_NAME | Link name. Default: llam_runtime. |
| LLAM_SYS_LINK_KIND | Cargo link kind. Default: static. |
| LLAM_SYS_INSTALL_PREFIX | Override the automatic install prefix. |
| LLAM_SYS_INSTALL_VERSION | LLAM release version. Default: 1.0.1. |
| LLAM_SYS_INSTALL_TARGET | Explicit release target. Examples: macos-aarch64, macos-x86_64, linux-x86_64, linux-aarch64, windows-x86_64. |
| LLAM_SYS_INSTALL_BASE_URL | Release asset base URL. |
| LLAM_SYS_INSTALL_SCRIPT | Local path or URL for install.sh / install.ps1. |
| LLAM_SYS_FORCE_INSTALL=1 | Reinstall even if the build prefix already looks valid. |
| LLAM_SYS_NO_INSTALL=1 | Do not run the installer. Require LLAM_SYS_PREFIX or LLAM_SYS_LIB_DIR. |
Examples
Examples are Cargo targets, not standalone rustc inputs:
```bash
LLAM_CHAT_QUIET=1
```
The chat server mirrors the C LLAM chat server shape: each client gets a bounded
outbox channel, reader/writer tasks run per connection, full input lines are
broadcast as [client N] ..., and slow receivers shed queued messages instead
of blocking global fanout.
Checks
From a LLAM-rs checkout:
Bench and stress helpers:
Troubleshooting
The build downloads LLAM every time
Set a stable install prefix:

```bash
LLAM_SYS_INSTALL_PREFIX="/.local/llam" cargo build
```
Or install LLAM once and reuse it:

```bash
LLAM_SYS_PREFIX="/.local/llam" cargo build
```
Automatic install is not allowed in CI
Provide an SDK and disable automatic install:

```bash
LLAM_SYS_NO_INSTALL=1 \
LLAM_SYS_PREFIX="/.local/llam" \
cargo build
```
Cross-building
Automatic installation is host-oriented. For cross builds, provide a matching prebuilt SDK:

```bash
LLAM_SYS_INCLUDE_DIR="/path/to/target/include" \
LLAM_SYS_LIB_DIR="/path/to/target/lib" \
cargo build --target <target-triple>
```
Dynamic linking
Static linking is the default:

```bash
LLAM_SYS_LINK_KIND=static
```
Dynamic linking requires a matching dynamic LLAM runtime discoverable by the loader at runtime:

```bash
LLAM_SYS_LINK_KIND=dylib LLAM_SYS_PREFIX="/.local/llam" cargo build
```
Safety
llam::sys is raw and unsafe by design. The safe llam layer owns heap
payloads across channels, task trampolines, task-local values, and owned I/O
buffers. Direct C handles are hidden behind RAII wrappers where the C API
provides a lifetime contract.
See the repository SAFETY.md for the unsafe boundary audit.
License
Apache-2.0