# supermachine
Run any OCI/Docker image as a hardware-isolated microVM, embedded directly in your Rust app. macOS Apple Silicon (HVF) today; Linux KVM and Windows WHP in progress.
```toml
[dependencies]
supermachine = "=0.3"
```

`supermachine-kernel` is pulled in transitively (it ships the bundled Linux kernel + init shim); you don't need to list it explicitly.
```rust
use std::time::Duration;
use supermachine::Image;

// Snapshot path, file contents, and argv below are illustrative.
let image = Image::from_snapshot("/path/to/snapshot")?;
let vm = image.start()?;
vm.write_file("/tmp/hello.txt", b"hello from the host")?;
let out = vm.exec_builder()
    .argv(["cat", "/tmp/hello.txt"])
    .timeout(Duration::from_secs(5))
    .output()?;
assert!(!out.timed_out);
println!("{}", String::from_utf8_lossy(&out.stdout));
vm.stop()?;
# Ok::<(), supermachine::Error>(())
```
VM start (snapshot restore + first byte) is ~10–50 ms on Apple Silicon: about 1.8× faster than `docker run` and 3× faster than `krunvm`.
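To sanity-check numbers like these in your own environment, a stdlib-only timing helper can wrap any start path. Nothing below is supermachine API; the closure contents are whatever you want to measure:

```rust
use std::time::{Duration, Instant};

/// Median wall-clock time of `f` over `n` runs; wrap e.g. a VM start call.
fn median_latency<F: FnMut()>(mut f: F, n: usize) -> Duration {
    let mut samples: Vec<Duration> = (0..n)
        .map(|_| {
            let t = Instant::now();
            f(); // the operation under test
            t.elapsed()
        })
        .collect();
    samples.sort();
    samples[n / 2]
}
```

The median (rather than the mean) keeps one cold first run from skewing the number.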
## What you get
| Type | Stable | Purpose |
|---|---|---|
| `Image` | ✅ | Identifies a baked snapshot. `start()`, `acquire()`, `acquire_with()`. |
| `Vm` | ✅ | One running microVM. `exec`, `exec_builder`, `write_file`, `read_file`, `connect`, `expose_tcp`, `workload_signal`, `snapshot`, `stop`. |
| `PooledVm` | ✅ | `Deref<Target = Vm>`; auto-returned to the subprocess pool on `Drop`. |
| `VmConfig` | ✅ | Builder for memory, vCPUs, asset paths, restore timeout, `pool_warm` (concurrent VM count). |
| `ExecBuilder` | ✅ | `argv`, `env`, `cwd`, `tty`, `winsize`, `timeout`, `spawn`, `output`. |
| `ExecOutcome` | ✅ | Collected `status`, `stdout`, `stderr`, `duration`, `timed_out`, `peak_rss_kib`. |
| `AssetPaths` | ✅ | Where to find the kernel + init shim. Auto-discovers; override for `.app` bundles. |
| `Error` | ✅ | `#[non_exhaustive]` struct-variants with a typed source chain via `std::error::Error::source()`. |
| `async_::*` (feature `tokio`) | ✅ | `AsyncImage` / `AsyncVm` / `AsyncPooledVm` mirror the sync types via `spawn_blocking`. |
| `OciImageBuilder` | ✅ | Bake an OCI image into a snapshot from your build process. |
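The typed source chain mentioned for `Error` is the standard `std::error::Error::source()` mechanism. A stdlib-only sketch of defining and walking such a chain; `VmError` and `IoCause` here are hypothetical stand-ins, not this crate's types:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical inner cause.
#[derive(Debug)]
struct IoCause(String);
impl fmt::Display for IoCause {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "io: {}", self.0)
    }
}
impl Error for IoCause {}

// Hypothetical #[non_exhaustive]-style struct-variant error.
#[derive(Debug)]
enum VmError {
    StartFailed { source: IoCause },
}
impl fmt::Display for VmError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "vm failed to start")
    }
}
impl Error for VmError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            VmError::StartFailed { source } => Some(source),
        }
    }
}

/// Collect every message in the chain, outermost first.
fn chain(err: &dyn Error) -> Vec<String> {
    let mut out = vec![err.to_string()];
    let mut cur = err.source();
    while let Some(e) = cur {
        out.push(e.to_string());
        cur = e.source();
    }
    out
}
```

Because the chain is typed, callers can also `downcast_ref` a specific cause instead of string-matching messages.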
## Common patterns
Eval / verifier loop (drop a file in, run a command, check output):
```rust
# use std::time::Duration;
# use std::sync::Arc;
# use supermachine::{ExecOutcome, Image};
// Concurrent eval: 5 candidate programs, all evaluated in parallel.
// (Snapshot path, guest paths, and argv are illustrative.)
let image = Arc::new(Image::from_snapshot("/path/to/snapshot")?);
let candidates: Vec<String> = vec![/* 5 candidate programs */];
let handles: Vec<_> = candidates.into_iter().map(|code| {
    let image = Arc::clone(&image);
    std::thread::spawn(move || -> Result<ExecOutcome, supermachine::Error> {
        let vm = image.acquire()?; // PooledVm, auto-returned on Drop
        vm.write_file("/tmp/candidate.py", code.as_bytes())?;
        vm.exec_builder().argv(["python3", "/tmp/candidate.py"])
            .timeout(Duration::from_secs(10)).output()
    })
}).collect();
let results: Vec<_> = handles.into_iter().map(|h| h.join().unwrap()).collect();
# Ok::<(), supermachine::Error>(())
```
Long-lived service (start once, serve traffic until shutdown):
```rust
# use supermachine::Image;
// Snapshot path and port numbers are illustrative.
let image = Image::from_snapshot("/path/to/snapshot")?;
let vm = image.start()?;
let _forwarder = vm.expose_tcp(8080, 80)?; // host:8080 → guest:80
println!("serving on http://127.0.0.1:8080");
// ... wait for shutdown signal ...
vm.stop()?;
# Ok::<(), supermachine::Error>(())
```
Streaming exec (when `output()`'s collect-everything model doesn't fit: long-running processes, interactive shells):
```rust
# use std::io::{Read, Write};
# use supermachine::Image;
# let image = Image::from_snapshot("/path/to/snapshot")?;
# let vm = image.start()?;
// argv and the input line are illustrative.
let mut child = vm.exec_builder().argv(["sh"]).spawn()?;
let mut stdin = child.stdin.take().unwrap();
stdin.write_all(b"echo hello\n")?;
stdin.close()?;
let mut buf = String::new();
child.stdout.take().unwrap().read_to_string(&mut buf)?;
let status = child.wait()?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Codesigning
macOS HVF requires the com.apple.security.hypervisor entitlement
on whatever process calls hv_vm_create — in our architecture
that's supermachine-worker, not your app. The CLI auto-signs the
worker on first run; no manual setup. If you only depend on
this library and let supermachine spawn its own worker process,
you don't need to codesign your own binary at all.
If you embed the library so your binary spawns workers
in-process (e.g. via Vm::start from your own thread), your
binary itself does need the entitlement. The bundled
cargo-supermachine plugin handles it:
cargo supermachine run, cargo supermachine test, and
cargo supermachine check all wrap the equivalent cargo commands
with the codesign step. For a distributable .app, pass
--identity "Developer ID Application: ..." to enable Hardened
Runtime; ad-hoc signing is the default for local dev.
Manual flow without the plugin:
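A sketch of what that manual flow typically looks like on macOS. The entitlement key comes from the text above; the file names, binary path, and ad-hoc `-` identity are illustrative:

```shell
# entitlements.plist: grant the hypervisor entitlement to your binary
cat > entitlements.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>com.apple.security.hypervisor</key>
  <true/>
</dict>
</plist>
EOF

cargo build
# Ad-hoc sign for local dev; swap "-" for a Developer ID identity to distribute.
codesign --force --entitlements entitlements.plist --sign - target/debug/my-app
```

Note that re-building replaces the binary, so the `codesign` step has to run after every `cargo build`, which is exactly what the plugin automates.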
## Baking snapshots
`Image::from_snapshot` needs a snapshot dir. Bake with the CLI:

```shell
# snapshot lands in ~/.local/supermachine-snapshots/my-nginx/
```
Then in your app:

```rust
# use supermachine::Image;
// Rust does not expand "~", so build the path explicitly.
let home = std::env::var("HOME").unwrap();
let image = Image::from_snapshot(format!("{home}/.local/supermachine-snapshots/my-nginx"))?;
# Ok::<(), supermachine::Error>(())
```
Or programmatically via `OciImageBuilder` (this requires the kernel build pipeline to be set up; most users go through the CLI for this step).
## Bundling for distribution
For a self-contained `.app`, stage the kernel + init shim into `Contents/Resources/` at build time:

```rust
// build.rs
```
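A minimal `build.rs` sketch under stated assumptions: the asset file names (`vmlinux`, `init.oci`), the source `assets/` dir, and the bundle path are all illustrative, and the supermachine-kernel crate may expose its own staging helpers instead:

```rust
// build.rs: copy VM assets into the .app bundle (names illustrative)
use std::{fs, io, path::Path};

/// Copy each named asset from `src` into `dst`, creating `dst` if needed.
fn stage_assets(src: &Path, dst: &Path, names: &[&str]) -> io::Result<()> {
    fs::create_dir_all(dst)?;
    for name in names {
        fs::copy(src.join(name), dst.join(name))?;
    }
    Ok(())
}

fn main() {
    let src = Path::new("assets");
    if src.exists() {
        stage_assets(src, Path::new("MyApp.app/Contents/Resources"), &["vmlinux", "init.oci"])
            .expect("failed to stage VM assets");
    }
    println!("cargo:rerun-if-changed=assets");
}
```

Staging at build time keeps the `.app` self-contained, at the cost of a larger bundle; the runtime-extraction variant below trades that for a first-run cost.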
Or extract once at runtime:
```rust
# use supermachine::{AssetPaths, Image};
// Scratch dir name and snapshot path are illustrative; the extract_*
// helpers ship with the bundled kernel crate (exact path assumed).
let scratch = std::env::temp_dir().join("supermachine-assets");
std::fs::create_dir_all(&scratch)?;
supermachine_kernel::extract_kernel_to(&scratch)?;
supermachine_kernel::extract_init_oci_to(&scratch)?;
let assets = AssetPaths::from_dir(&scratch);
// wire `assets` in via VmConfig (see the table above), then:
let vm = Image::from_snapshot("/path/to/snapshot")?.start()?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
End users do nothing: drag the .dmg, run the app, microVMs work.
## Status
- ✅ macOS Apple Silicon (HVF). 100/100 cold-restore reliability at `--vcpus 1..=16`. Nginx, httpd, redis, memcached, python, node, postgres tested.
- 🚧 Linux KVM backend. Tracked.
- 🚧 Windows WHP backend. Tracked.
- ⚠️ TSI socket family handles AF_INET TCP/UDP transparently; workloads using AF_NETLINK, raw sockets, multicast, TUN/TAP, or ICMP are unsupported. virtio-net opt-in is on the roadmap.
Full snapshot-format / API stability contract + the three integration patterns (CLI, daemon, embed) live in the project's design docs (published alongside the source repo when public).
## License
This crate (the supermachine library + CLI) is licensed under
Apache-2.0; see LICENSE-APACHE.
The hard transitive dep on `supermachine-kernel` brings in additional components (Linux kernel image under GPL-2.0-only, musl libc inside the init shim under MIT).
Redistributors of binaries built against this crate must comply
with those licenses on their respective components — see the
supermachine-kernel crate's NOTICE for source-availability
details. Your own code's license is unaffected: the kernel runs
as guest data inside an isolated VM, not as linked host code.