#![recursion_limit = "256"]
//! supermachine 0.5.0 — run any OCI/Docker image as a hardware-isolated
//! microVM on macOS HVF (Linux KVM and Windows WHP in progress). Single
//! library API, zero flags for the common case, sub-100 ms cold-restore
//! from snapshot.
//!
//! ## Quick start
//!
//! ```no_run
//! use supermachine::{Image, Vm, VmConfig};
//!
//! // Load an already-baked snapshot (use the `supermachine` CLI to
//! // bake one once: `supermachine run nginx:1.27-alpine --no-push`).
//! let image = Image::from_snapshot("snapshots/nginx/restore.snap")?;
//!
//! // Spin up a microVM. Default VmConfig: 256 MiB, 1 vCPU,
//! // auto-discovered kernel + init shim.
//! let vm = Vm::start(&image, &VmConfig::new())?;
//!
//! // Talk to the guest via the host-side vsock-mux Unix socket.
//! // Bytes written here are proxied through to the first TSI
//! // listener inside the guest (typically the workload's `:80`).
//! use std::io::Write;
//! let mut sock = vm.connect()?;
//! sock.write_all(b"GET / HTTP/1.0\r\nHost: workload\r\n\r\n")?;
//!
//! vm.stop()?;
//! # Ok::<(), supermachine::Error>(())
//! ```
//!
//! ## Three integration patterns
//!
//! 1. **Shell out to the CLI** — exec `supermachine run IMAGE`.
//! 2. **Long-lived router daemon** — start `supermachine-router`,
//!    talk HTTP to it. Process-isolated from your app.
//! 3. **Embed this library directly** (this crate) — lowest
//!    latency, in-process VMM. Requires codesigning your binary;
//!    see [`assets::ENTITLEMENTS_PLIST`] and the
//!    `cargo-supermachine` plugin (`cargo install supermachine`).
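//!
//! A minimal sketch of pattern 1, assuming the `supermachine` binary
//! is on `PATH` and reusing the image tag from the quick start above:
//!
//! ```no_run
//! use std::process::Command;
//!
//! // Spawn the CLI and block until the workload exits.
//! let status = Command::new("supermachine")
//!     .args(["run", "nginx:1.27-alpine"])
//!     .status()
//!     .expect("failed to spawn the supermachine CLI");
//! assert!(status.success());
//! ```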

#![allow(dead_code)]

// ---------- public API ----------

pub mod assets;
#[cfg(target_os = "macos")]
pub mod codesign;
pub mod exec;
#[cfg(feature = "tokio")]
#[path = "async.rs"]
pub mod async_;

mod api;
pub use api::{
    Error, Image, OciImageBuilder, Pool, PoolBuilder, PoolStats, PooledVm, PullPolicy,
    TcpForwarder, Vm, VmConfig,
};

/// Internal wire-format helpers exposed for sibling crates that
/// need to talk to the in-guest agent directly (specifically the
/// napi binding in `npm/supermachine-core/`). The action JSON shape
/// these helpers speak is internal; embedders outside this
/// workspace should prefer [`Vm::write_file`] / [`Vm::read_file`] /
/// [`Vm::exec`], which call these for you.
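/// A sketch of that preferred route (hedged: the exact `write_file` /
/// `read_file` argument and return shapes are assumed here for
/// illustration, not taken from the stable API docs):
///
/// ```no_run
/// # use supermachine::{Image, Vm, VmConfig};
/// # let image = Image::from_snapshot("snapshots/nginx/restore.snap")?;
/// let vm = Vm::start(&image, &VmConfig::new())?;
/// // Round-trip a file through the in-guest agent instead of
/// // hand-rolling the action JSON via `wire`.
/// vm.write_file("/etc/motd", b"hello from the host")?;
/// let motd = vm.read_file("/etc/motd")?;
/// assert_eq!(motd, b"hello from the host");
/// vm.stop()?;
/// # Ok::<(), supermachine::Error>(())
/// ```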
pub mod wire {
    pub use crate::api::{b64_decode, b64_encode};
    pub use crate::exec::send_control_with_ack;
}
pub use assets::AssetPaths;
pub use exec::{ExecBuilder, ExecChild, ExecOutcome, ExecStderr, ExecStdin, ExecStdout};

// ---------- internal modules used by the in-tree binaries ----------
//
// These are implementation modules for the CLI, router, and
// bench-compare crates that ship inside this same workspace. They
// aren't a stable public API: their shape changes as production
// hardening lands (multi-vCPU, vsock auth tokens, snapshot format
// revisions, KVM/WHP backends, …).
//
// We expose them through a single `internal` namespace so any
// embedder reading the docs sees a clear "not for you" boundary.
// If you reach into `supermachine::internal::*` from your own
// crate, that's a deliberate choice; pin a specific git commit or
// be ready for breakage on every minor.

/// **Unstable internals.** Subject to breaking changes on every
/// minor. The stable embedder API is at the crate root: [`Image`],
/// [`Vm`], [`VmConfig`], [`AssetPaths`], [`Error`].
///
/// Exposed for the in-tree binaries (`supermachine-router`,
/// `supermachine-bench-compare`) and for rare cases where the
/// stable surface doesn't yet cover a need (raise an issue if so;
/// the high-level types are designed to grow additive methods).
pub mod internal {
    /// VMM modules: device emulation, HVF wrappers, kernel loader,
    /// VM lifecycle. The public API at the crate root sits on top
    /// of these.
    pub mod vmm {
        pub use crate::vmm::*;
    }
    pub use crate::arch;
    pub use crate::bake;
    pub use crate::devices;
    pub use crate::hvf;
    pub use crate::kernel;
    pub use crate::utils;

    pub use crate::vmm::pool::{
        warm_pool, PoolClientError, PoolHandle, PoolRestoreResult, PoolWorker, WarmPool,
        WarmPoolError, WarmRestoreTimings,
    };
    pub use crate::vmm::resources::{
        EndpointResources, ResourceError, SnapshotResources, VmProfile, VmResources,
    };
    pub use crate::vmm::runner::{run, RunError, RunOptions, RunReport};
    pub use crate::vmm::tls::TlsConfig;
}

// ---------- module declarations ----------
//
// We need the underlying modules `pub` so the `internal` re-exports
// above resolve, and so the in-tree binaries that live in
// `src/bin/` can use them across the lib boundary. Each is hidden
// from rustdoc; consumers should reach for them only through
// `supermachine::internal::*`.

#[doc(hidden)]
pub mod arch;
#[doc(hidden)]
pub mod devices;
/// FUSE-over-virtio wire protocol types. Shared between the virtio-fs
/// device emulation and the host-side FUSE server. No logic — just the
/// struct definitions matching `include/uapi/linux/fuse.h`. Will be
/// promoted out of `doc(hidden)` once the mount API stabilizes.
#[doc(hidden)]
pub mod fuse;
#[doc(hidden)]
pub mod hvf;
#[doc(hidden)]
pub mod kernel;
#[doc(hidden)]
pub mod utils;
#[doc(hidden)]
pub mod vmm;
#[doc(hidden)]
pub mod bake;