supermachine

Run any OCI/Docker image as a hardware-isolated microVM, embedded directly in your Rust app. macOS Apple Silicon (HVF) today; Linux KVM and Windows WHP in progress. Single library API, zero flags for the common case, sub-100 ms cold restore from snapshot.

[dependencies]
supermachine = "=0.3"

supermachine-kernel is pulled in transitively (it ships the bundled Linux kernel + init shim) — you don't need to list it explicitly.

use std::time::Duration;
use supermachine::{Image, VmConfig};

let image = Image::from_snapshot("path/to/snapshot")?;
let vm = image.start(&VmConfig::new())?;

vm.write_file("/tmp/main.rs", b"fn main() { println!(\"hi\"); }")?;
let out = vm.exec_builder()
    .argv(["sh", "-c", "rustc /tmp/main.rs -o /tmp/m && /tmp/m"])
    .timeout(Duration::from_secs(30))
    .output()?;

assert!(out.success());
println!("ran in {:?}, stdout: {:?}", out.duration, out.stdout);
vm.stop()?;
# Ok::<(), supermachine::Error>(())
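
read_file is the mirror of write_file; a short round-trip sketch, assuming it returns the file's contents as raw bytes:

# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path/to/snapshot")?;
# let vm = image.start(&VmConfig::new())?;
vm.write_file("/tmp/data", b"round trip")?;
let bytes = vm.read_file("/tmp/data")?;   // assumption: returns the raw bytes
assert_eq!(&bytes[..], b"round trip");
# vm.stop()?;
# Ok::<(), supermachine::Error>(())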

VM start (snapshot restore through first byte of output) takes ~10–50 ms on Apple Silicon, roughly 1.8× faster than docker run and 3× faster than krunvm.
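
To check that number on your own machine, a quick timing sketch using only the API shown above:

# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path/to/snapshot")?;
let t = std::time::Instant::now();
let vm = image.start(&VmConfig::new())?;   // snapshot restore
println!("restore took {:?}", t.elapsed());
# vm.stop()?;
# Ok::<(), supermachine::Error>(())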

What you get

  • Image: identifies a baked snapshot. start(), acquire(), acquire_with().
  • Vm: one running microVM. exec, exec_builder, write_file, read_file, connect, expose_tcp, workload_signal, snapshot, stop.
  • PooledVm: Deref<Target=Vm>; auto-returned to the subprocess pool on Drop.
  • VmConfig: builder for memory, vCPUs, asset paths, restore timeout, pool_warm (concurrent VM count).
  • ExecBuilder: argv, env, cwd, tty, winsize, timeout, spawn, output.
  • ExecOutcome: collected status, stdout, stderr, duration, timed_out, peak_rss_kib.
  • AssetPaths: where to find the kernel + init shim. Auto-discovers; override for .app bundles.
  • Error: #[non_exhaustive] struct-variants with a typed source chain via std::error::Error::source().
  • async_::* (feature tokio): AsyncImage / AsyncVm / AsyncPooledVm mirror the sync types via spawn_blocking.
  • OciImageBuilder: bake an OCI image into a snapshot from your build process.
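
A minimal async sketch under the tokio feature, assuming the async types mirror the sync signatures above with each call awaited (only the type names and the spawn_blocking bridging are confirmed by the list):

# use supermachine::{async_::AsyncImage, VmConfig};
# async fn demo() -> Result<(), supermachine::Error> {
// Cargo.toml: supermachine = { version = "0.3", features = ["tokio"] }
// Assumption: mirrored methods take the same arguments and return futures.
let image = AsyncImage::from_snapshot("path/to/snapshot").await?;
let vm = image.start(&VmConfig::new()).await?;
vm.write_file("/tmp/hello", b"hi").await?;
vm.stop().await?;
# Ok(())
# }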

Common patterns

Eval / verifier loop (drop a file in, run a command, check output):

# use std::time::Duration;
# use std::sync::Arc;
# use supermachine::{Image, VmConfig};
// Concurrent eval: 5 candidate programs, all evaluated in parallel.
let image = Arc::new(Image::from_snapshot("path/to/rust-slim")?);
{ // Prime the pool to 5 warm workers.
    let _ = image.acquire_with(&VmConfig::new().with_pool_warm(5))?;
}
let candidates: Vec<&str> = vec![/* ... */];
let handles: Vec<_> = candidates.into_iter().map(|src| {
    let image = Arc::clone(&image);
    std::thread::spawn(move || {
        let vm = image.acquire()?;                      // ~ns when warm
        vm.write_file("/tmp/main.rs", src.as_bytes())?;
        vm.exec_builder()
            .argv(["sh", "-c", "rustc /tmp/main.rs -o /tmp/m && /tmp/m"])
            .timeout(Duration::from_secs(30))
            .output()
            .map_err(supermachine::Error::from)
        // vm dropped → returned to pool → fresh worker spawned
    })
}).collect();
let results: Vec<_> = handles.into_iter().map(|h| h.join().unwrap()).collect();
# Ok::<(), supermachine::Error>(())

Long-lived service (start once, serve traffic until shutdown):

# use supermachine::{Image, VmConfig};
let image = Image::from_snapshot("path/to/nginx")?;
let vm = image.start(&VmConfig::new())?;
let _forwarder = vm.expose_tcp(8080, 80)?;              // host:8080 → guest:80
println!("listening on http://127.0.0.1:8080/");
// ... wait for shutdown signal ...
vm.stop()?;
# Ok::<(), supermachine::Error>(())
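
When the client lives in the same process, Vm::connect (listed under Vm above) avoids binding a host port; a sketch assuming connect takes the guest port and returns a bidirectional std::io stream:

# use std::io::{Read, Write};
# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path/to/nginx")?;
# let vm = image.start(&VmConfig::new())?;
// Assumption: connect(guest_port) yields a Read + Write stream into the guest.
let mut stream = vm.connect(80)?;
stream.write_all(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")?;
let mut resp = String::new();
stream.read_to_string(&mut resp)?;
assert!(resp.starts_with("HTTP/1."));
# vm.stop()?;
# Ok::<(), Box<dyn std::error::Error>>(())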

Streaming exec (when output()'s collect-everything model doesn't fit — long-running processes, interactive shells):

# use std::io::{Read, Write};
# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path")?;
# let vm = image.start(&VmConfig::new())?;
let mut child = vm.exec_builder().argv(["cat"]).spawn()?;
let mut stdin = child.stdin().unwrap();
stdin.write_all(b"hello\n")?;
stdin.close()?;
let mut buf = String::new();
child.stdout().unwrap().read_to_string(&mut buf)?;
let status = child.wait()?;
# Ok::<(), Box<dyn std::error::Error>>(())
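
For the interactive-shell case, ExecBuilder's tty and winsize knobs apply; a sketch assuming tty takes a bool and winsize takes columns then rows (the argument order is an assumption):

# use std::io::Write;
# use supermachine::{Image, VmConfig};
# let image = Image::from_snapshot("path")?;
# let vm = image.start(&VmConfig::new())?;
// Assumptions: tty(true) allocates a pty; winsize(cols, rows) sets its size.
let mut shell = vm.exec_builder()
    .argv(["sh", "-i"])
    .tty(true)
    .winsize(120, 40)
    .spawn()?;
shell.stdin().unwrap().write_all(b"echo hello\nexit\n")?;
let _status = shell.wait()?;
# Ok::<(), Box<dyn std::error::Error>>(())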

Codesigning

macOS HVF requires the com.apple.security.hypervisor entitlement on whatever process calls hv_vm_create — in our architecture that's supermachine-worker, not your app. The CLI auto-signs the worker on first run; no manual setup. If you only depend on this library and let supermachine spawn its own worker process, you don't need to codesign your own binary at all.

If you embed the library so the worker runs inside your own process (e.g. via Vm::start from your own thread), your binary itself does need the entitlement. The bundled cargo-supermachine plugin handles it:

cargo install supermachine          # one-time; gets the plugin
cargo supermachine build --release   # = cargo build + codesign
./target/release/your-app

cargo supermachine run, cargo supermachine test, and cargo supermachine check all wrap the equivalent cargo commands with the codesign step. For a distributable .app, pass --identity "Developer ID Application: ..." to enable Hardened Runtime; ad-hoc signing is the default for local dev.

Manual flow without the plugin:

cargo build --release
supermachine codesign target/release/your-app

Baking snapshots

Image::from_snapshot needs a snapshot dir. Bake with the CLI:

supermachine pull nginx:1.27-alpine --name my-nginx
# snapshot lands in ~/.local/supermachine-snapshots/my-nginx/

Then in your app:

let home = std::env::var("HOME").expect("HOME not set"); // read at runtime; env!("HOME") would bake in the build machine's value
let image = Image::from_snapshot(
    format!("{home}/.local/supermachine-snapshots/my-nginx")
)?;

Or programmatically via OciImageBuilder — requires the kernel build pipeline to be set up; most users go through the CLI for this step.
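
For orientation only, a hypothetical shape of that programmatic path; OciImageBuilder is the real entry point (see the list above), but the method names here are illustrative assumptions, not confirmed API:

# use supermachine::OciImageBuilder;
# fn demo() -> Result<(), supermachine::Error> {
// Hypothetical sketch: every name below except OciImageBuilder is an assumption.
let builder = OciImageBuilder::new("nginx:1.27-alpine");   // assumed constructor
builder.bake("path/to/snapshot-out")?;                     // assumed bake method
# Ok(())
# }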

Bundling for distribution

For a self-contained .app, stage the kernel + init shim into Contents/Resources/ at build time:

// build.rs
fn main() -> std::io::Result<()> {
    // OUT_DIR is target/<profile>/build/<pkg>-<hash>/out;
    // climbing three levels up lands in target/<profile>/.
    let resources = std::path::PathBuf::from(
        std::env::var("OUT_DIR").unwrap()
    ).join("../../../bundle-resources");
    std::fs::create_dir_all(&resources)?;
    // Unpack the bundled kernel and init shim next to the build output
    // so a packaging step can copy them into Contents/Resources/.
    supermachine_kernel::extract_kernel_to(&resources.join("kernel"))?;
    supermachine_kernel::extract_init_oci_to(&resources.join("init-oci"))?;
    Ok(())
}
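
At runtime the packaged app then points AssetPaths at the bundle; a sketch assuming the standard .app layout, where the executable sits in Contents/MacOS next to Contents/Resources (only AssetPaths::from_dir is confirmed above):

// Locate Contents/Resources relative to the running executable.
let exe = std::env::current_exe()?;
let resources = exe
    .parent().unwrap()              // Contents/MacOS
    .parent().unwrap()              // Contents
    .join("Resources");
let assets = supermachine::AssetPaths::from_dir(&resources);
# Ok::<(), std::io::Error>(())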

Or extract once at runtime:

let scratch = std::env::temp_dir().join("supermachine-assets");
std::fs::create_dir_all(&scratch)?;
supermachine_kernel::extract_kernel_to(&scratch.join("kernel"))?;
supermachine_kernel::extract_init_oci_to(&scratch.join("init-oci"))?;

let assets = supermachine::AssetPaths::from_dir(&scratch);
let vm = Vm::start(&image, &VmConfig::new().with_assets(assets))?;

End users do nothing: drag the .dmg, run the app, microVMs work.

Status

  • ✅ macOS Apple Silicon (HVF). 100/100 cold-restore reliability at --vcpus 1..=16. Nginx, httpd, redis, memcached, python, node, postgres tested.
  • 🚧 Linux KVM backend. Tracked.
  • 🚧 Windows WHP backend. Tracked.
  • ⚠️ TSI socket family handles AF_INET TCP/UDP transparently; workloads using AF_NETLINK, raw sockets, multicast, TUN/TAP, or ICMP are unsupported. virtio-net opt-in is on the roadmap.

The full snapshot-format and API-stability contract, plus the three integration patterns (CLI, daemon, embed), live in the project's design docs (published alongside the source repo once it is public).

License

This crate (the supermachine library + CLI) is licensed under Apache-2.0; see LICENSE-APACHE.

The hard transitive dependency on supermachine-kernel brings in additional components: a Linux kernel image under GPL-2.0-only and musl libc inside the init shim under MIT. Redistributors of binaries built against this crate must comply with those licenses for their respective components; see the supermachine-kernel crate's NOTICE for source-availability details. Your own code's license is unaffected: the kernel runs as guest data inside an isolated VM, not as linked host code.