supermachine

Run any OCI/Docker image as a hardware-isolated microVM, embedded directly in your Rust app: a single library API, zero flags for the common case, and sub-100 ms cold-restore from snapshot. macOS Apple Silicon (HVF) today; Linux KVM and Windows WHP in progress.

Using Node.js, Bun, or Deno? Install @supermachine/core instead — same Rust core, full API parity, ~10–20 µs of napi-rs binding overhead per call (0.3–0.5 % of total cost on the typical acquire/exec/release path).

[dependencies]
supermachine = "0.4"

supermachine-kernel is pulled in transitively (it ships the bundled Linux kernel + init shim) — you don't need to list it explicitly.

use std::time::Duration;
use supermachine::{Image, VmConfig};

let image = Image::from_snapshot("path/to/snapshot")?;
let vm = image.start(&VmConfig::new())?;

let out = vm.exec_builder()
    .stage_file("/tmp/main.rs", b"fn main() { println!(\"hi\"); }".to_vec())
    .argv(["rustc", "/tmp/main.rs", "-o", "/tmp/m"])
    .chain(["/tmp/m"])
    .timeout(Duration::from_secs(30))
    .output()?;

assert!(out.success());
println!("ran in {:?}, stdout: {:?}", out.duration, out.stdout);
vm.stop()?;
# Ok::<(), supermachine::Error>(())

VM start (snapshot restore + first byte) is ~10–50 ms on Apple Silicon. Steady-state cycle for a Rust hello-world (compile + run) is ~44 ms with the recommended config; sustained throughput is ~125 cycles/s at 8 concurrent workers.

What's new in 0.4

  • Auto-scaling pool with min/max/idle_timeout/acquire_timeout — image.pool().min(0).max(64).build() replaces the legacy fixed warm_pool(n). Lazy spawn, auto-grow, idle eviction. See Image::pool.
  • Skip-restore mode for warm-cache workloads — pool().restore_on_release(false) keeps the guest's page cache hot across cycles instead of restoring after every drop. ~7× faster on rustc-class workloads where the guest re-reads its sysroot every invocation.
  • Batched stage_file + chain on ExecBuilder — fold a small file write and a multi-step &&-style command sequence into one vsock RPC, no shell wrapper. Cleaner than write_file() + argv(["sh","-c","..."]).
  • with_warmup snapshots are sound now — pre-bake a workload's warm cache into the snapshot via OciImageBuilder::with_warmup. Gives you cold-start parity with steady-state warm cycles.
  • Multi-vCPU verified — bake with --vcpus N, get linear scaling on parallel workloads (3.2× on 4 vCPUs for 4-way parallel rustc).

If you're on 0.3.x, replace image.warm_pool(n)? with image.pool().min(n).max(n).idle_timeout(Duration::MAX).build()?.
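
For example, a lazily-populated pool with a bounded acquire wait (a minimal sketch; every builder knob here is named under What you get below):

# use std::time::Duration;
# use supermachine::Image;
let image = Image::from_snapshot("path/to/rust-slim")?;
let pool = image.pool()
    .min(0)                                   // spawn nothing until first acquire
    .max(64)                                  // hard cap on live workers
    .idle_timeout(Duration::from_secs(30))    // evict workers idle this long
    .acquire_timeout(Duration::from_secs(5))  // error out instead of queueing forever
    .build()?;

let vm = pool.acquire()?;                     // lazily spawns / grows up to max
vm.exec_builder().argv(["true"]).output()?;
# Ok::<(), supermachine::Error>(())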

What you get

  • Image — identifies a baked snapshot. start(), acquire(), acquire_with(), pool().
  • Vm — one running microVM. exec, exec_builder, write_file, read_file, connect, expose_tcp, workload_signal, snapshot, stop.
  • PooledVm — Deref<Target=Vm>; auto-returned to the subprocess pool on Drop.
  • VmConfig — builder for memory, vCPUs, asset paths, restore timeout.
  • Pool / PoolBuilder — new in 0.4. min, max, idle_timeout, acquire_timeout, restore_on_release, stats().
  • ExecBuilder — argv, env, cwd, tty, winsize, timeout, stage_file, chain, spawn, output.
  • ExecOutcome — status, stdout, stderr, duration, timed_out, peak_rss_kib.
  • AssetPaths — where to find the kernel + init shim. Auto-discovers; override for .app bundles.
  • Error — #[non_exhaustive] typed variants with source chain.
  • async_::* (feature tokio) — AsyncImage / AsyncVm / AsyncPooledVm mirror the sync types via spawn_blocking; sketched below.
  • OciImageBuilder — bake an OCI image into a snapshot from your build process. with_warmup pre-warms the guest cache.
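
With the tokio feature enabled, the async_ mirrors follow the same shapes. A hypothetical sketch, assuming the mirrors keep the sync method names behind .await:

# use supermachine::{async_::AsyncImage, VmConfig};
# async fn demo() -> Result<(), supermachine::Error> {
let image = AsyncImage::from_snapshot("path/to/snapshot").await?;
let vm = image.start(&VmConfig::new()).await?;
let out = vm.exec_builder().argv(["uname", "-a"]).output().await?;
println!("{:?}", out.stdout);
vm.stop().await?;
# Ok(())
# }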

Common patterns

Verifier loop (compile + run user code, repeat). For maximum throughput, combine warm-baked snapshot + skip-restore + batched exec — see Performance recipes below.

# use std::time::Duration;
# use supermachine::Image;
let image = Image::from_snapshot("path/to/rust-slim")?;
let pool = image.pool()
    .min(4).max(8)
    .restore_on_release(false)              // keep guest cache warm
    .build()?;

# let candidate_sources: Vec<String> = vec![];
for src in candidate_sources {
    let vm = pool.acquire()?;               // ~µs when warm
    let out = vm.exec_builder()
        .stage_file("/tmp/main.rs", src.as_bytes().to_vec())
        .argv(["rustc", "-O", "/tmp/main.rs", "-o", "/tmp/m"])
        .chain(["/tmp/m"])
        .timeout(Duration::from_secs(30))
        .output()?;
    // vm dropped → returned to pool
}
# Ok::<(), supermachine::Error>(())

Long-lived service (start once, serve traffic until shutdown):

# use supermachine::{Image, VmConfig};
let image = Image::from_snapshot("path/to/nginx")?;
let vm = image.start(&VmConfig::new())?;
let _forwarder = vm.expose_tcp(8080, 80)?;              // host:8080 → guest:80
println!("listening on http://127.0.0.1:8080/");
// ... wait for shutdown signal ...
vm.stop()?;
# Ok::<(), supermachine::Error>(())
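
If you don't need a host-visible port, vm.connect (listed under What you get) dials the guest service directly over vsock. A hypothetical sketch; the connect(port) signature and its Read + Write stream are assumptions, not documented API:

# use std::io::{Read, Write};
# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path/to/nginx")?;
# let vm = image.start(&VmConfig::new())?;
let mut conn = vm.connect(80)?;                  // assumed: guest port in, bidirectional stream out
conn.write_all(b"GET / HTTP/1.0\r\n\r\n")?;
let mut resp = String::new();
conn.read_to_string(&mut resp)?;
# vm.stop()?;
# Ok::<(), Box<dyn std::error::Error>>(())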

Live host-directory mount (dev loop / workspace sandbox):

# use supermachine::{Image, MountSpec, SymlinkPolicy};
let image = Image::builder("node:20-alpine")
    .with_mount(
        MountSpec::new("/Users/me/my-app", "workspace")
            .with_symlinks(SymlinkPolicy::Opaque),   // default
    )
    .build()?;

// Inside the guest:
//   mount -t virtiofs workspace /work
//   cd /work && npm install
# Ok::<(), supermachine::Error>(())

SymlinkPolicy picks the trust posture for guest-writable mounts (a Deny example follows the list):

  • Deny — guest cannot create symlinks (EPERM); existing host symlinks pointing outside the mount root are rejected at LOOKUP. Paranoid mounts.
  • Opaque (default) — guest can create symlinks (npm/pnpm/yarn workspace tools rely on this); targets are stored verbatim per POSIX symlink(2) and the host never resolves them. Existing external host symlinks still blocked at LOOKUP. Safe for hostile multi-tenant: a guest planting escape -> /etc/passwd gets a readlink-able symlink but the host won't follow it.
  • Follow — guest can create symlinks AND existing external host symlinks are followed unconditionally. Trusted single-tenant only.
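
For example, the same mount as above under the Deny posture (a minimal sketch of the builder already shown):

# use supermachine::{Image, MountSpec, SymlinkPolicy};
let image = Image::builder("node:20-alpine")
    .with_mount(
        MountSpec::new("/Users/me/my-app", "workspace")
            .with_symlinks(SymlinkPolicy::Deny),   // symlink(2) in the guest fails with EPERM
    )
    .build()?;
# Ok::<(), supermachine::Error>(())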

Streaming exec (when output()'s collect-everything model doesn't fit — long-running processes, interactive shells):

# use std::io::{Read, Write};
# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path")?;
# let vm = image.start(&VmConfig::new())?;
let mut child = vm.exec_builder().argv(["cat"]).spawn()?;
let mut stdin = child.stdin().unwrap();
stdin.write_all(b"hello\n")?;
stdin.close()?;
let mut buf = String::new();
child.stdout().unwrap().read_to_string(&mut buf)?;
let status = child.wait()?;
# Ok::<(), Box<dyn std::error::Error>>(())

Performance recipes

For an embedder running a hot, repeated workload (verifier, sandbox, code playground), three knobs matter. Combining all three brings a Rust hello-world cycle from ~330 ms to ~44 ms median, with per-fresh-VM cold-start of ~50 ms (vs ~280 ms without):

  1. Bake a warm snapshot via with_warmup — runs your workload once during the bake, captures a snapshot whose guest page cache is already populated. Even brand-new pool workers start "warm".

    # use std::time::Duration;
    # use supermachine::Image;
    let image = Image::builder("rust:1-slim")
        .name("rust_warm")
        .with_warmup(|vm| {
            vm.exec_builder()
                .stage_file(
                    "/tmp/seed.rs",
                    b"fn main() { println!(\"warm\"); }".to_vec(),
                )
                .argv(["rustc", "-O", "/tmp/seed.rs", "-o", "/tmp/seed"])
                .chain(["/tmp/seed"])
                .timeout(Duration::from_secs(60))
                .output()?;
            Ok(())
        })
        .with_warmup_tag("v1")  // bump to invalidate cached warm bake
        .build()?;
    # Ok::<(), supermachine::Error>(())
    
  2. Use restore_on_release(false) — drop pushes the worker directly back to idle without restoring snapshot state. Guest page cache stays hot across cycles. Safe for workloads that overwrite their own outputs (e.g. rustc -o /tmp/m && /tmp/m).

  3. Batched stage_file + chain on ExecBuilder — eliminates the separate write_file round-trip and the sh -c fork. Cleaner code, faster per cycle.

End-to-end best-config example:

# use std::time::Duration;
# use supermachine::Image;
let image = Image::from_snapshot("path/to/rust_warm")?;
let pool = image.pool()
    .min(2).max(8)
    .idle_timeout(Duration::MAX)
    .restore_on_release(false)
    .build()?;

# let user_inputs: Vec<Vec<u8>> = vec![];
for src in user_inputs {
    let vm = pool.acquire()?;
    let out = vm.exec_builder()
        .stage_file("/tmp/main.rs", src)
        .argv(["rustc", "-O", "/tmp/main.rs", "-o", "/tmp/m"])
        .chain(["/tmp/m"])
        .timeout(Duration::from_secs(30))
        .output()?;
    // ...
}
# Ok::<(), supermachine::Error>(())

For genuinely parallel workloads (cargo build, parallel test runners, lld linking), bake with --vcpus N — multi-vCPU scales near-linearly. Hello-world rustc is single-threaded; more vCPUs won't help.
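
To sanity-check a multi-vCPU snapshot, ask the guest itself. A minimal sketch using the rust_4v bake from Baking snapshots below:

# use supermachine::{Image, VmConfig};
let image = Image::from_snapshot("path/to/rust_4v")?;   // baked with --vcpus 4
let vm = image.start(&VmConfig::new())?;
let out = vm.exec_builder().argv(["nproc"]).output()?;
println!("guest vCPUs: {:?}", out.stdout);              // expect "4"
vm.stop()?;
# Ok::<(), supermachine::Error>(())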

Sizing the pool against host RAM. Each warm worker holds a CoW-mapped snapshot of the guest. Resident set is the working set after warmup, not the full --memory cap: rust:1-slim runs ~250-300 MiB per worker post-rustc-warmup, idle nginx ~50-80 MiB. Default min=4 max=8 for rust verifier sits at ~1.5 GiB; comfy on 16 GiB hosts, but drop to min=2 max=4 on 8 GiB machines that are also running an IDE / browser. See docs/perf-guide.md for measurement guidance.
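
When tuning min/max against host RAM, pool.stats() (listed under What you get) gives a point-in-time view to log next to your memory measurements. A minimal sketch, assuming only that the returned stats type implements Debug:

# use supermachine::Image;
# let image = Image::from_snapshot("path/to/rust_warm")?;
let pool = image.pool().min(2).max(4).build()?;
println!("pool: {:?}", pool.stats());    // log alongside per-worker RSS readings
# Ok::<(), supermachine::Error>(())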

First-bake vs. warm-cycle timing. The benchmark numbers above are runtime with the snapshot already baked. A fresh bake (pull + build delta + boot + run with_warmup + capture snapshot) is 3-10 seconds, paid once per (image, version) combo and cached on disk. Subsequent process starts use the cached snapshot.

Codesigning

macOS HVF requires the com.apple.security.hypervisor entitlement on whatever process calls hv_vm_create — in our architecture that's supermachine-worker, not your app. The CLI auto-signs the worker on first run; no manual setup. If you only depend on this library and let supermachine spawn its own worker process, you don't need to codesign your own binary at all.

If you embed the library so the VM runs inside your own process (e.g. via Vm::start from your own thread), your binary itself does need the entitlement. The bundled cargo-supermachine plugin handles it:

cargo install supermachine          # one-time; gets the plugin
cargo supermachine build --release   # = cargo build + codesign
./target/release/your-app

cargo supermachine run, cargo supermachine test, and cargo supermachine check all wrap the equivalent cargo commands with the codesign step. For a distributable .app, pass --identity "Developer ID Application: ..." to enable Hardened Runtime; ad-hoc signing is the default for local dev.

Manual flow without the plugin:

cargo build --release
supermachine codesign target/release/your-app

Baking snapshots

Image::from_snapshot needs a snapshot dir. Bake with the CLI:

supermachine pull nginx:1.27-alpine --name my-nginx
# snapshot lands in ~/.local/supermachine-snapshots/my-nginx/

# multi-vCPU bake:
supermachine pull rust:1-slim --name rust_4v --memory 2048 --vcpus 4

Then in your app:

let home = std::env::var("HOME").expect("HOME not set");   // runtime lookup, not compile-time env!
let image = Image::from_snapshot(
    format!("{home}/.local/supermachine-snapshots/my-nginx")
)?;

Or programmatically via OciImageBuilder (Image::builder).
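
A minimal sketch, reusing only the builder calls from the with_warmup example above:

# use supermachine::Image;
let image = Image::builder("nginx:1.27-alpine")
    .name("my-nginx")    // snapshot is cached on disk under this name
    .build()?;           // first call bakes (3-10 s); later calls reuse the cache
# Ok::<(), supermachine::Error>(())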

Bundling for distribution

For a self-contained .app, stage the kernel + init shim into Contents/Resources/ at build time:

// build.rs
fn main() -> std::io::Result<()> {
    let resources = std::path::PathBuf::from(
        std::env::var("OUT_DIR").unwrap()
    ).join("../../../bundle-resources");
    std::fs::create_dir_all(&resources)?;
    supermachine_kernel::extract_kernel_to(&resources.join("kernel"))?;
    supermachine_kernel::extract_init_oci_to(&resources.join("init-oci"))?;
    Ok(())
}

Or extract once at runtime:

let scratch = std::env::temp_dir().join("supermachine-assets");
std::fs::create_dir_all(&scratch)?;
supermachine_kernel::extract_kernel_to(&scratch.join("kernel"))?;
supermachine_kernel::extract_init_oci_to(&scratch.join("init-oci"))?;

let assets = supermachine::AssetPaths::from_dir(&scratch);
let vm = Vm::start(&image, &VmConfig::new().with_assets(assets))?;

End users do nothing: drag the .dmg, run the app, microVMs work.

Status

  • ✅ macOS Apple Silicon (HVF). 100/100 cold-restore reliability at --vcpus 1..=16. Nginx, httpd, redis, memcached, python, node, postgres tested.
  • 🚧 Linux KVM backend. Tracked.
  • 🚧 Windows WHP backend. Tracked.
  • ⚠️ TSI socket family handles AF_INET TCP/UDP transparently; workloads using AF_NETLINK, raw sockets, multicast, TUN/TAP, or ICMP are unsupported. virtio-net opt-in is on the roadmap.

License

This crate (the supermachine library + CLI) is licensed under Apache-2.0; see LICENSE-APACHE.

The hard transitive dep on supermachine-kernel brings in additional components (Linux kernel image under GPL-2.0-only, musl libc inside the init shim under MIT). Redistributors of binaries built against this crate must comply with those licenses on their respective components — see the supermachine-kernel crate's NOTICE for source-availability details. Your own code's license is unaffected: the kernel runs as guest data inside an isolated VM, not as linked host code.