# supermachine

Run any OCI/Docker image as a hardware-isolated microVM, embedded
directly in your Rust app. macOS Apple Silicon (HVF) today; Linux
KVM and Windows WHP in progress.

```toml
[dependencies]
supermachine        = "=0.1"
supermachine-kernel = "=0.1"
```

```rust
use std::io::{Read, Write};
use supermachine::{Image, Vm, VmConfig};

let image = Image::from_snapshot("snapshots/nginx_1_27-alpine")?;
let vm = Vm::start(&image, &VmConfig::new())?;

let mut sock = vm.connect()?;
sock.write_all(b"GET / HTTP/1.0\r\n\r\n")?;
let mut response = Vec::new();
sock.read_to_end(&mut response)?;

vm.stop()?;
# Ok::<(), supermachine::Error>(())
```

VM start (snapshot restore + first byte) is **~10–50 ms** on Apple
Silicon. That is roughly 1.8× faster than `docker run` and 3×
faster than `krunvm`.
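Those numbers are easy to reproduce locally with a stdlib-only timing wrapper. A sketch (`time_ms` is not part of the supermachine API; wrap `Vm::start` plus the first read in the closure):

```rust
use std::time::Instant;

/// Time a closure; return its result plus elapsed milliseconds.
fn time_ms<T>(f: impl FnOnce() -> T) -> (T, f64) {
    let t0 = Instant::now();
    let out = f();
    (out, t0.elapsed().as_secs_f64() * 1_000.0)
}

// e.g. let (vm, ms) = time_ms(|| Vm::start(&image, &VmConfig::new()));
```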

## What you get

| Type | Stable | Purpose |
| --- | :---: | --- |
| [`Image`] | ✅ | Identifies a baked snapshot on disk. |
| [`Vm`] | ✅ | One running microVM. Owns the worker process. |
| [`VmConfig`] | ✅ | Builder for memory, vCPUs, asset paths, restore timeout. |
| [`AssetPaths`] | ✅ | Where to find the kernel + init shim. Auto-discovers; override for `.app` bundles. |
| [`Error`] | ✅ | `#[non_exhaustive]` so adding variants doesn't break consumers. |
| [`OciImageBuilder`] | ✅ | Bake an OCI image into a snapshot from your build process. |
| `internal::*` | ⚠️ | Lower-level VMM, devices, snapshot format. Use only if the stable surface doesn't cover you, and pin `=0.1.0`. |

[`Image`]: https://docs.rs/supermachine/latest/supermachine/struct.Image.html
[`Vm`]: https://docs.rs/supermachine/latest/supermachine/struct.Vm.html
[`VmConfig`]: https://docs.rs/supermachine/latest/supermachine/struct.VmConfig.html
[`AssetPaths`]: https://docs.rs/supermachine/latest/supermachine/struct.AssetPaths.html
[`Error`]: https://docs.rs/supermachine/latest/supermachine/enum.Error.html
[`OciImageBuilder`]: https://docs.rs/supermachine/latest/supermachine/struct.OciImageBuilder.html

## Codesigning

macOS HVF requires the `com.apple.security.hypervisor` entitlement
on whatever process calls `hv_vm_create` — in our architecture
that's `supermachine-worker`, not your app. The CLI auto-signs the
worker on first run; no manual setup. **If you only depend on
this library** and let supermachine spawn its own worker process,
you don't need to codesign your own binary at all.

**If you embed the library so the VMM runs inside your own
process** rather than in a spawned `supermachine-worker` (e.g.
running `Vm::start` in-process from your own thread), your binary
itself needs the entitlement. The bundled `cargo-supermachine`
plugin handles it:

```bash
cargo install supermachine           # one-time; gets the plugin
cargo supermachine build --release   # = cargo build + codesign
./target/release/your-app
```

`cargo supermachine run`, `cargo supermachine test`, and
`cargo supermachine check` all wrap the equivalent cargo commands
with the codesign step. For a distributable `.app`, pass
`--identity "Developer ID Application: ..."` to enable Hardened
Runtime; ad-hoc signing is the default for local dev.

Manual flow without the plugin:

```bash
cargo build --release
supermachine codesign target/release/your-app
```
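For reference, the entitlement is a standard plist, so it can also be applied with Apple's `codesign` directly, with no supermachine CLI involved: `codesign --force --sign - --entitlements hv.entitlements target/release/your-app` (`--sign -` is ad-hoc signing), where `hv.entitlements` contains:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.hypervisor</key>
    <true/>
</dict>
</plist>
```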

## Baking snapshots

`Image::from_snapshot` needs a snapshot dir. Bake with the CLI:

```bash
supermachine pull nginx:1.27-alpine --name my-nginx
# snapshot lands in ~/.local/supermachine-snapshots/my-nginx/
```

Then in your app:

```rust
// Read HOME at run time; env!("HOME") would bake the build
// machine's home directory in at compile time.
let home = std::env::var("HOME").expect("HOME not set");
let image = Image::from_snapshot(
    format!("{home}/.local/supermachine-snapshots/my-nginx")
)?;
```
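If that default location is resolved in more than one place, a small stdlib helper keeps it in one spot. Illustrative only: `default_snapshot_dir` is not part of the supermachine API, the `~/.local/supermachine-snapshots` layout is just the CLI default shown above, and the usage comment assumes `from_snapshot` accepts anything path-like:

```rust
use std::path::PathBuf;

/// Default location of a CLI-baked snapshot:
/// ~/.local/supermachine-snapshots/<name>
fn default_snapshot_dir(name: &str) -> PathBuf {
    let home = std::env::var("HOME").expect("HOME not set");
    PathBuf::from(home)
        .join(".local/supermachine-snapshots")
        .join(name)
}

// let image = Image::from_snapshot(default_snapshot_dir("my-nginx"))?;
```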

Or programmatically via [`OciImageBuilder`] — requires the kernel
build pipeline to be set up; most users go through the CLI for
this step.

## Bundling for distribution

For a self-contained `.app`, stage the kernel + init shim into
`Contents/Resources/` at build time:

```rust
// build.rs
fn main() -> std::io::Result<()> {
    // OUT_DIR is target/<profile>/build/<pkg>-<hash>/out, so three
    // `..` components land the staging dir in target/<profile>/.
    let resources = std::path::PathBuf::from(
        std::env::var("OUT_DIR").unwrap()
    ).join("../../../bundle-resources");
    std::fs::create_dir_all(&resources)?;
    supermachine_kernel::extract_kernel_to(&resources.join("kernel"))?;
    supermachine_kernel::extract_init_oci_to(&resources.join("init-oci"))?;
    Ok(())
}
```
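At run time the staged files can be found relative to the executable: in a `.app` bundle the binary lives at `YourApp.app/Contents/MacOS/<binary>`, with `Contents/Resources/` as a sibling directory. A stdlib-only sketch (`app_resources_dir` is a hypothetical helper; pass its result to `AssetPaths::from_dir`):

```rust
use std::path::PathBuf;

/// Contents/Resources/ of the enclosing .app bundle: the executable
/// sits at .../Contents/MacOS/<binary>, so go up two levels.
fn app_resources_dir() -> Option<PathBuf> {
    let exe = std::env::current_exe().ok()?;
    Some(exe.parent()?.parent()?.join("Resources"))
}
```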

Or extract once at runtime:

```rust
let scratch = std::env::temp_dir().join("supermachine-assets");
std::fs::create_dir_all(&scratch)?;
supermachine_kernel::extract_kernel_to(&scratch.join("kernel"))?;
supermachine_kernel::extract_init_oci_to(&scratch.join("init-oci"))?;

let assets = supermachine::AssetPaths::from_dir(&scratch);
let vm = Vm::start(&image, &VmConfig::new().with_assets(assets))?;
```
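Extracting on every launch is unnecessary work. A small guard (illustrative, stdlib-only; `extract` stands in for the `extract_*_to` calls above) skips it when the asset is already on disk:

```rust
use std::path::Path;

/// Run `extract` only when `path` does not exist yet.
fn ensure_asset(
    path: &Path,
    extract: impl FnOnce(&Path) -> std::io::Result<()>,
) -> std::io::Result<()> {
    if path.exists() {
        return Ok(());
    }
    extract(path)
}

// ensure_asset(&scratch.join("kernel"), supermachine_kernel::extract_kernel_to)?;
```

In real use you would also want to key the scratch directory by crate version so an upgrade re-extracts fresh assets instead of reusing stale ones.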

End users do nothing special: drag the app out of the `.dmg`, run
it, microVMs work.

## Status

- ✅ macOS Apple Silicon (HVF). 100/100 cold-restore reliability
  across `--vcpus 1..=16`. Tested with nginx, httpd, redis,
  memcached, python, node, and postgres.
- 🚧 Linux KVM backend. Tracked.
- 🚧 Windows WHP backend. Tracked.
- ⚠️ TSI socket family handles AF_INET TCP/UDP transparently;
  workloads using AF_NETLINK, raw sockets, multicast, TUN/TAP, or
  ICMP are unsupported. virtio-net opt-in is on the roadmap.

The full snapshot-format / API stability contract and the three
integration patterns (CLI, daemon, embed) live in the project's
design docs, published alongside the source repo once it is public.

## License

This crate (the `supermachine` library + CLI) is licensed under
**Apache-2.0**; see [`LICENSE-APACHE`](./LICENSE-APACHE).

The hard transitive dep on
[`supermachine-kernel`](https://crates.io/crates/supermachine-kernel)
brings in additional components (Linux kernel image under
**GPL-2.0-only**, musl libc inside the init shim under **MIT**).
Redistributors of binaries built against this crate must comply
with those licenses on their respective components — see the
`supermachine-kernel` crate's `NOTICE` for source-availability
details. Your own code's license is unaffected: the kernel runs
as guest data inside an isolated VM, not as linked host code.