//! supermachine — run any OCI/Docker image as a hardware-isolated
//! microVM on macOS HVF (Linux KVM and Windows WHP in progress).
//!
//! ## Quick start
//!
//! ```no_run
//! use supermachine::{Image, Vm, VmConfig};
//!
//! // Load an already-baked snapshot (use the `supermachine` CLI to
//! // bake one once: `supermachine run nginx:1.27-alpine --no-push`).
//! let image = Image::from_snapshot("snapshots/nginx/restore.snap")?;
//!
//! // Spin up a microVM. Default VmConfig: 256 MiB, 1 vCPU,
//! // auto-discovered kernel + init shim.
//! let vm = Vm::start(&image, &VmConfig::new())?;
//!
//! // Talk to the guest via the host-side vsock-mux unix socket.
//! // Bytes you write here proxy through to the first TSI listener
//! // inside the guest (typically the workload's `:80`).
//! let mut sock = vm.connect()?;
//! use std::io::Write;
//! sock.write_all(b"GET / HTTP/1.0\r\nHost: workload\r\n\r\n")?;
//!
//! vm.stop()?;
//! # Ok::<(), supermachine::Error>(())
//! ```
//!
//! ## Three integration patterns
//!
//! 1. **Shell out to the CLI** — exec `supermachine run IMAGE`.
//! 2. **Long-lived router daemon** — start `supermachine-router`,
//! talk HTTP to it. Process-isolated from your app.
//! 3. **Embed this library directly** (this crate) — lowest
//! latency, in-process VMM. Requires codesigning your binary;
//! see [`assets::ENTITLEMENTS_PLIST`] and the
//! `cargo-supermachine` plugin (`cargo install supermachine`).
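//!
//! Pattern 1 can be as small as one `std::process::Command` call.
//! A sketch (assumes only that the `supermachine` CLI is on `PATH`
//! and accepts `run IMAGE` as shown in the quick start; any other
//! flags are not covered here):
//!
//! ```no_run
//! use std::process::Command;
//!
//! // Blocks until the VM exits; stdio is inherited from the parent.
//! let status = Command::new("supermachine")
//!     .args(["run", "nginx:1.27-alpine"])
//!     .status()?;
//! assert!(status.success());
//! # Ok::<(), std::io::Error>(())
//! ```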
// ---------- public API ----------
pub use assets::AssetPaths;
pub use error::Error;
pub use image::Image;
pub use vm::{Vm, VmConfig};
/// Internal wire-format helpers exposed for sibling crates that
/// need to talk to the in-guest agent directly (specifically the
/// napi binding in `npm/supermachine-core/`). The action JSON shape
/// these support is internal; outside embedders should prefer
/// [`Vm::write_file`] / [`Vm::read_file`] / [`Vm::exec`], which call
/// these for you.
pub use agent::wire;
// ---------- internal modules used by the in-tree binaries ----------
//
// These are implementation modules for the CLI, router, and
// bench-compare crates that ship inside this same workspace. They
// aren't a stable public API: their shape changes as production
// hardening lands (multi-vCPU, vsock auth tokens, snapshot format
// revisions, KVM/WHP backends, …).
//
// We expose them through a single `internal` namespace so any
// embedder reading the docs sees a clear "not for you" boundary.
// If you reach into `supermachine::internal::*` from your own
// crate, that's a deliberate choice; pin a specific git commit or
// be ready for breakage on every minor.
/// **Unstable internals.** Subject to breaking changes on every
/// minor release. The stable embedder API is at the crate root:
/// [`Image`], [`Vm`], [`VmConfig`], [`AssetPaths`], [`Error`].
///
/// Exposed for the in-tree binaries (`supermachine-router`,
/// `supermachine-bench-compare`) and for rare cases where the
/// stable surface doesn't yet cover a need (raise an issue if so;
/// the high-level types are designed to grow additive methods).
pub mod internal;
// ---------- module declarations ----------
//
// We need the underlying modules `pub` so the `internal` re-exports
// above resolve, and so the in-tree binaries that live in
// `src/bin/` can use them across the lib boundary. Each is hidden
// from rustdoc; consumers should reach for them only through
// `supermachine::internal::*`.
/// FUSE-over-virtio wire protocol types. Shared between the virtio-fs
/// device emulation and the host-side FUSE server. No logic — just the
/// struct definitions matching `include/uapi/linux/fuse.h`. Will be
/// promoted out of `doc(hidden)` once the mount API stabilizes.