supermachine
Run any OCI/Docker image as a hardware-isolated microVM, embedded directly in your Rust app. macOS Apple Silicon (HVF) today; Linux KVM and Windows WHP in progress.
```toml
[dependencies]
supermachine = "0.4"
```
supermachine-kernel is pulled in transitively (it ships the
bundled Linux kernel + init shim) — you don't need to list it
explicitly.
```rust
use std::time::Duration;
use supermachine::Image;

// Snapshot path, guest commands, and field accessors below are illustrative.
let image = Image::from_snapshot("path/to/snapshot")?;
let vm = image.start()?;

let out = vm.exec_builder()
    .stage_file("/tmp/main.rs", "fn main() { println!(\"hello\"); }")
    .argv(["rustc", "/tmp/main.rs", "-o", "/tmp/main"])
    .chain(["/tmp/main"])
    .timeout(Duration::from_secs(10))
    .output()?;

assert!(out.status.success());
println!("{}", String::from_utf8_lossy(&out.stdout));
vm.stop()?;
# Ok::<(), supermachine::Error>(())
```
VM start (snapshot restore + first byte) is ~10–50 ms on Apple Silicon. Steady-state cycle for a Rust hello-world (compile + run) is ~44 ms with the recommended config; sustained throughput is ~125 cycles/s at 8 concurrent workers.
What's new in 0.4
- Auto-scaling pool with `min`/`max`/`idle_timeout`/`acquire_timeout`: `image.pool().min(0).max(64).build()` replaces the legacy fixed `warm_pool(n)`. Lazy spawn, auto-grow, idle eviction. See `Image::pool`.
- Skip-restore mode for warm-cache workloads: `pool().restore_on_release(false)` keeps the guest's page cache hot across cycles instead of restoring after every drop. ~7× faster on rustc-class workloads where the guest re-reads its sysroot every invocation.
- Batched `stage_file` + `chain` on `ExecBuilder`: fold a small file write and a multi-step `&&`-style command sequence into one vsock RPC, no shell wrapper. Cleaner than `write_file()` + `argv(["sh","-c","..."])`.
- `with_warmup` snapshots are sound now: pre-bake a workload's warm cache into the snapshot via `OciImageBuilder::with_warmup`. Gives you cold-start parity with steady-state warm cycles.
- Multi-vCPU verified: bake with `--vcpus N`, get linear scaling on parallel workloads (3.2× on 4 vCPUs for 4-way parallel rustc).
If you're on 0.3.x, replace `image.warm_pool(n)?` with
`image.pool().min(n).max(n).idle_timeout(Duration::MAX).build()?`.
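Spelled out as a sketch (the pool size and snapshot path are placeholders):

```rust
# use std::time::Duration;
# use supermachine::Image;
# let image = Image::from_snapshot("path/to/snapshot")?;
// 0.3.x: let pool = image.warm_pool(8)?;
// 0.4 equivalent: a fixed-size pool that never evicts idle workers.
let pool = image.pool()
    .min(8)
    .max(8)
    .idle_timeout(Duration::MAX)
    .build()?;
# Ok::<(), supermachine::Error>(())
```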
What you get
| Type | Stable | Purpose |
|---|---|---|
| `Image` | ✅ | Identifies a baked snapshot. `start()`, `acquire()`, `acquire_with()`, `pool()`. |
| `Vm` | ✅ | One running microVM. `exec`, `exec_builder`, `write_file`, `read_file`, `connect`, `expose_tcp`, `workload_signal`, `snapshot`, `stop`. |
| `PooledVm` | ✅ | `Deref<Target=Vm>`; auto-returned to subprocess pool on `Drop`. |
| `VmConfig` | ✅ | Builder for memory, vCPUs, asset paths, restore timeout. |
| `Pool` / `PoolBuilder` | ✅ | New in 0.4. `min`, `max`, `idle_timeout`, `acquire_timeout`, `restore_on_release`, `stats()`. |
| `ExecBuilder` | ✅ | `argv`, `env`, `cwd`, `tty`, `winsize`, `timeout`, `stage_file`, `chain`, `spawn`, `output`. |
| `ExecOutcome` | ✅ | `status`, `stdout`, `stderr`, `duration`, `timed_out`, `peak_rss_kib`. |
| `AssetPaths` | ✅ | Where to find the kernel + init shim. Auto-discovers; override for `.app` bundles. |
| `Error` | ✅ | `#[non_exhaustive]` typed variants with source chain. |
| `async_::*` (feature `tokio`) | ✅ | `AsyncImage` / `AsyncVm` / `AsyncPooledVm` mirror the sync types via `spawn_blocking`. |
| `OciImageBuilder` | ✅ | Bake an OCI image into a snapshot from your build process. `with_warmup` pre-warms the guest cache. |
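A minimal sketch of the tokio-backed mirror, assuming the `async_` types expose the same method names as their sync counterparts with each call awaitable:

```rust
// Requires the `tokio` feature. Paths and commands are placeholders; the
// async API is assumed to mirror the sync one one-to-one (via spawn_blocking).
use supermachine::async_::AsyncImage;

#[tokio::main]
async fn main() -> Result<(), supermachine::Error> {
    let image = AsyncImage::from_snapshot("path/to/snapshot").await?;
    let vm = image.start().await?;
    let out = vm.exec_builder().argv(["uname", "-a"]).output().await?;
    println!("{}", String::from_utf8_lossy(&out.stdout));
    vm.stop().await?;
    Ok(())
}
```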
Common patterns
Verifier loop (compile + run user code, repeat). For maximum throughput, combine warm-baked snapshot + skip-restore + batched exec — see Performance recipes below.
```rust
# use std::time::Duration;
# use supermachine::Image;
let image = Image::from_snapshot("path/to/snapshot")?;
let pool = image.pool()
    .min(2).max(8)                 // pool sizes are illustrative
    .restore_on_release(false)     // keep guest cache warm
    .build()?;
for src in candidate_sources {
    let vm = pool.acquire()?;      // handed back to the pool when `vm` drops
    let out = vm.exec_builder()
        .stage_file("/tmp/main.rs", src)
        .argv(["rustc", "/tmp/main.rs", "-o", "/tmp/main"])
        .chain(["/tmp/main"])
        .timeout(Duration::from_secs(10))
        .output()?;
    // ... record `out.status`, `out.stdout`, `out.duration` ...
}
# Ok::<(), supermachine::Error>(())
```
Long-lived service (start once, serve traffic until shutdown):
```rust
# use supermachine::Image;
let image = Image::from_snapshot("path/to/snapshot")?;
let vm = image.start()?;
let _forwarder = vm.expose_tcp(8080, 80)?; // host:8080 → guest:80 (argument order assumed)
println!("listening on http://127.0.0.1:8080");
// ... wait for shutdown signal ...
vm.stop()?;
# Ok::<(), supermachine::Error>(())
```
Streaming exec (when output()'s collect-everything model
doesn't fit — long-running processes, interactive shells):
```rust
# use std::io::{Read, Write};
# use supermachine::Image;
# let image = Image::from_snapshot("path/to/snapshot")?;
# let vm = image.start()?;
let mut child = vm.exec_builder().argv(["cat"]).spawn()?; // guest command is illustrative
let mut stdin = child.stdin.take().unwrap();
stdin.write_all(b"hello from the host\n")?;
stdin.close()?;
let mut buf = String::new();
child.stdout.take().unwrap().read_to_string(&mut buf)?;
let status = child.wait()?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
Performance recipes
For an embedder running a hot, repeated workload (verifier, sandbox, code playground), three knobs matter. Combining all three brings a Rust hello-world cycle from ~330 ms to ~44 ms median, with per-fresh-VM cold-start of ~50 ms (vs ~280 ms without):
- Bake a warm snapshot via `with_warmup`: it runs your workload once during the bake and captures a snapshot whose guest page cache is already populated. Even brand-new pool workers start "warm".

  ```rust
  # use supermachine::Image;
  // Base image, names, and the warmup command are illustrative.
  let image = Image::builder("rust:1.80")
      .name("rust-warm")
      .with_warmup(["sh", "-c", "echo 'fn main(){}' > /tmp/w.rs && rustc /tmp/w.rs -o /tmp/w"])
      .with_warmup_tag("v1") // bump to invalidate cached warm bake
      .build()?;
  # Ok::<(), supermachine::Error>(())
  ```

- Use `restore_on_release(false)`: drop pushes the worker directly back to idle without restoring snapshot state. Guest page cache stays hot across cycles. Safe for workloads that overwrite their own outputs (e.g. `rustc -o /tmp/m && /tmp/m`).

- Batch `stage_file` + `chain` on `ExecBuilder`: eliminates the separate `write_file` round-trip and the `sh -c` fork. Cleaner code, faster per cycle (see the sketch just below).
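As a rough before/after sketch of that last knob (the `write_file` and `stage_file` argument shapes are assumptions):

```rust
# use supermachine::Image;
# let image = Image::from_snapshot("path/to/snapshot")?;
# let vm = image.start()?;
# let src = "fn main() {}";
// Before: a separate write RPC plus a shell fork.
vm.write_file("/tmp/main.rs", src)?;
vm.exec_builder()
    .argv(["sh", "-c", "rustc /tmp/main.rs -o /tmp/m && /tmp/m"])
    .output()?;

// After: one vsock RPC, no shell wrapper.
vm.exec_builder()
    .stage_file("/tmp/main.rs", src)
    .argv(["rustc", "/tmp/main.rs", "-o", "/tmp/m"])
    .chain(["/tmp/m"])
    .output()?;
# Ok::<(), supermachine::Error>(())
```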
End-to-end best-config example:
```rust
# use std::time::Duration;
# use supermachine::Image;
let image = Image::from_snapshot("path/to/snapshot")?;
let pool = image.pool()
    .min(4).max(8)                        // pool sizes are illustrative
    .idle_timeout(Duration::from_secs(60))
    .restore_on_release(false)
    .build()?;
for src in user_inputs {
    let vm = pool.acquire()?;
    let out = vm.exec_builder()
        .stage_file("/tmp/main.rs", src)
        .argv(["rustc", "/tmp/main.rs", "-o", "/tmp/m"])
        .chain(["/tmp/m"])
        .output()?;
    // ... use `out` ...
}
# Ok::<(), supermachine::Error>(())
```
For genuinely parallel workloads (cargo build, parallel test
runners, lld linking), bake with --vcpus N — multi-vCPU scales
near-linearly. Hello-world rustc is single-threaded; more vCPUs
won't help.
Codesigning
macOS HVF requires the com.apple.security.hypervisor entitlement
on whatever process calls hv_vm_create — in our architecture
that's supermachine-worker, not your app. The CLI auto-signs the
worker on first run; no manual setup. If you only depend on
this library and let supermachine spawn its own worker process,
you don't need to codesign your own binary at all.
If you embed the library so your binary spawns workers
in-process (e.g. via Vm::start from your own thread), your
binary itself does need the entitlement. The bundled
cargo-supermachine plugin handles it:
cargo supermachine run, cargo supermachine test, and
cargo supermachine check all wrap the equivalent cargo commands
with the codesign step. For a distributable .app, pass
--identity "Developer ID Application: ..." to enable Hardened
Runtime; ad-hoc signing is the default for local dev.
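For example (flag placement is an assumption):

```bash
# Local dev: ad-hoc signing, no identity needed.
cargo supermachine run
cargo supermachine test

# Distributable .app: real identity + Hardened Runtime.
cargo supermachine run --identity "Developer ID Application: ..."
```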
Manual flow without the plugin:
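Roughly, assuming the plugin only automates an entitlements file plus a `codesign` pass, the manual steps look like this:

```bash
# 1. Write an entitlements plist granting the Hypervisor entitlement.
cat > hv.entitlements <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
  <key>com.apple.security.hypervisor</key><true/>
</dict></plist>
EOF

# 2. Build, then re-sign the binary that calls hv_vm_create (ad-hoc for local dev).
cargo build
codesign --force --sign - --entitlements hv.entitlements target/debug/your-app

# 3. For distribution, sign with a real identity and Hardened Runtime instead:
#    codesign --force --options runtime --sign "Developer ID Application: ..." \
#        --entitlements hv.entitlements YourApp.app
```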
Baking snapshots
Image::from_snapshot needs a snapshot dir. Bake with the CLI:
```bash
# Subcommand name is illustrative; see the supermachine CLI help for exact syntax.
# snapshot lands in ~/.local/supermachine-snapshots/my-nginx/
supermachine bake nginx:latest --name my-nginx

# multi-vCPU bake:
supermachine bake nginx:latest --name my-nginx --vcpus 4
```
Then in your app:
```rust
let image = Image::from_snapshot("~/.local/supermachine-snapshots/my-nginx")?;
```
Or programmatically via OciImageBuilder (Image::builder).
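A minimal sketch of that path (builder argument shapes are assumptions; the `with_warmup` variant is shown under Performance recipes above):

```rust
# use supermachine::Image;
// Bake an OCI image into a snapshot from your own process instead of the CLI.
let image = Image::builder("nginx:latest")
    .name("my-nginx")
    .build()?;
# Ok::<(), supermachine::Error>(())
```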
Bundling for distribution
For a self-contained .app, stage the kernel + init shim into
Contents/Resources/ at build time:
```rust
// build.rs
// Sketch only: stage the bundled kernel + init shim into the .app's Resources
// dir at build time. The extract_* helpers and their module path are assumed
// to match the runtime example below; adjust the destination to your bundle layout.
fn main() {
    let resources = std::path::Path::new("target/MyApp.app/Contents/Resources");
    std::fs::create_dir_all(resources).unwrap();
    supermachine_kernel::extract_kernel_to(resources).unwrap();
    supermachine_kernel::extract_init_oci_to(resources).unwrap();
}
```
Or extract once at runtime:
```rust
# use supermachine::{AssetPaths, Image, VmConfig};
# let image = Image::from_snapshot("path/to/snapshot")?;
// Module path of the extract_* helpers and the config plumbing are assumed.
let scratch = std::env::temp_dir().join("supermachine-assets");
std::fs::create_dir_all(&scratch)?;
supermachine_kernel::extract_kernel_to(&scratch)?;
supermachine_kernel::extract_init_oci_to(&scratch)?;
let assets = AssetPaths::from_dir(&scratch);
let config = VmConfig::default().asset_paths(assets);
let vm = image.acquire_with(config)?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
End users do nothing: drag the .dmg, run the app, microVMs work.
Status
- ✅ macOS Apple Silicon (HVF). 100/100 cold-restore reliability at `--vcpus 1..=16`. Nginx, httpd, redis, memcached, python, node, postgres tested.
- 🚧 Linux KVM backend. Tracked.
- 🚧 Windows WHP backend. Tracked.
- ⚠️ TSI socket family handles AF_INET TCP/UDP transparently; workloads using AF_NETLINK, raw sockets, multicast, TUN/TAP, or ICMP are unsupported. virtio-net opt-in is on the roadmap.
License
This crate (the supermachine library + CLI) is licensed under
Apache-2.0; see LICENSE-APACHE.
The hard transitive dep on
supermachine-kernel
brings in additional components (Linux kernel image under
GPL-2.0-only, musl libc inside the init shim under MIT).
Redistributors of binaries built against this crate must comply
with those licenses on their respective components — see the
supermachine-kernel crate's NOTICE for source-availability
details. Your own code's license is unaffected: the kernel runs
as guest data inside an isolated VM, not as linked host code.