soroban-fork 0.9.2

crates.io docs.rs License: MIT OR Apache-2.0

Lazy-loading mainnet/testnet fork for Soroban tests. Think Foundry's Anvil, but for Stellar Soroban.

When a test reads a ledger entry that isn't in the local cache, soroban-fork fetches it from the Soroban RPC on the fly. No need to pre-snapshot every contract your test might touch.

Install

[dev-dependencies]
soroban-fork = "0.9"

Usage

use soroban_fork::ForkConfig;
use soroban_sdk::{Address, String, Symbol, vec};

#[test]
fn test_against_real_state() {
    let env = ForkConfig::new("https://soroban-testnet.stellar.org:443")
        .cache_file("test_cache.json")   // optional: persist for faster reruns
        .build()
        .expect("fork setup");

    env.mock_all_auths();

    let contract = Address::from_string(&String::from_str(
        &env,
        "CABC...YOUR_CONTRACT_ID",
    ));

    // This lazily fetches the contract's instance, WASM code,
    // and any storage entries from the real network.
    let result: i128 = env.invoke_contract(
        &contract,
        &Symbol::new(&env, "total_assets"),
        vec![&env],
    );

    assert!(result >= 0);

    // env.fetch_count() tells you how many RPC calls were made.
    // Cache is auto-saved on drop (includes lazy-fetched entries).
}

How it works

Your test calls contract.total_assets()
         |
         v
  Soroban VM needs a ledger entry
         |
         v
  RpcSnapshotSource.get(key)
         |
    +----+----+
    |         |
  Cache     Cache miss
  hit         |
    |         v
    |    getLedgerEntries RPC call
    |         |
    |         v
    |    Cache result locally
    |         |
    +----+----+
         |
         v
  Return entry to VM
  • First run: entries are fetched from the Soroban RPC as needed. Each unique entry = one HTTP call (batched in chunks of 200 if pre-fetching).
  • Subsequent runs: if cache_file is set, entries are loaded from disk. Only new entries trigger RPC calls.
  • State changes are local: the real network is never modified. Deposits, transfers, and other mutations happen in memory only.
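The cache-or-fetch path above can be sketched as a tiny generic helper. This is an illustration of the pattern, not the crate's implementation; names like LazyCache and fetch_remote are invented here:

```rust
use std::collections::HashMap;

/// Sketch of the lazy-fetch pattern: consult a local cache, fall back
/// to a (hypothetical) remote fetch on miss, and memoize the result so
/// the next lookup is served locally.
struct LazyCache<F: FnMut(&str) -> Option<Vec<u8>>> {
    entries: HashMap<String, Vec<u8>>,
    fetch_remote: F, // stands in for the getLedgerEntries RPC call
    fetch_count: usize,
}

impl<F: FnMut(&str) -> Option<Vec<u8>>> LazyCache<F> {
    fn new(fetch_remote: F) -> Self {
        Self { entries: HashMap::new(), fetch_remote, fetch_count: 0 }
    }

    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(v) = self.entries.get(key) {
            return Some(v.clone()); // cache hit: no RPC
        }
        self.fetch_count += 1; // cache miss: one upstream round-trip
        let fetched = (self.fetch_remote)(key)?;
        self.entries.insert(key.to_string(), fetched.clone());
        Some(fetched)
    }
}

fn main() {
    let mut cache = LazyCache::new(|_key| Some(vec![0x42]));
    cache.get("contract_instance");
    cache.get("contract_instance"); // second read is served locally
    assert_eq!(cache.fetch_count, 1);
    println!("fetches: {}", cache.fetch_count);
}
```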

API

ForkConfig

ForkConfig::new(rpc_url)           // Soroban RPC endpoint
    .cache_file("cache.json")             // optional: disk persistence + auto-save on drop
    .network_id(bytes)                    // optional: override the SHA-256 network id
    .fetch_mode(FetchMode::Strict)        // optional: Strict (default) or Lenient
    .at_ledger(1_234_567)                 // optional: pin the Env's reported sequence
    .pinned_timestamp(1_700_000_000)      // optional: pin the Env's close time
    .max_protocol_version(25)             // optional: cap the protocol the VM reports
    .tracing(true)                        // optional: capture cross-contract call tree
    .rpc_config(RpcConfig { retries: 5, ..RpcConfig::default() })
    .build()?                             // returns Result<ForkedEnv, ForkError>

Network metadata (passphrase + SHA-256 id) is fetched from the RPC's getNetwork method at build time — no URL heuristics, no silent defaults. Override with .network_id(bytes) only if you actually need to.

The Env's reported timestamp defaults to the close time of the latest ledger, fetched via getLedgers at build time. Tests are reproducible across runs out of the box — pin an explicit value via .pinned_timestamp(...) only when you need to anchor to a specific moment (e.g. reproducing a known historical scenario).

ForkedEnv

Returned by ForkConfig::build(). Implements Deref<Target = Env> so all SDK methods work transparently. Adds fork-specific capabilities:

let env = ForkConfig::new(rpc_url).cache_file("cache.json").build()?;

// Use like a regular Env (via Deref)
env.mock_all_auths();
let result: i128 = env.invoke_contract(&addr, &symbol, vec![&env]);

// Fork-specific methods
env.fetch_count();                 // number of RPC calls made
env.save_cache()?;                 // explicit save (also called automatically on drop)
env.warp_time(86_400);             // advance ledger timestamp + sequence
env.deal_token(&usdc, &who, amt);  // Foundry-style balance deal
env.env();                         // &Env (for edge cases where Deref doesn't suffice)

FetchMode

Controls behavior when the RPC fails from inside the VM loop (where the SnapshotSource trait can't return a typed error):

  • Strict (default): panic. Best for tests — a fetch failure means the test setup is wrong, and you want the stack trace.
  • Lenient: log at warn! level and return None. Useful when partial state is acceptable.
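As a behavioral sketch (not the crate's internals), the two modes reduce to a small match; eprintln! stands in for the warn! log:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum FetchMode { Strict, Lenient }

/// Sketch of how a fetch failure surfaces from inside the VM loop,
/// where the SnapshotSource trait cannot return a typed error.
fn handle_fetch_failure(mode: FetchMode, key: &str) -> Option<Vec<u8>> {
    match mode {
        // Strict: abort with a panic so the test fails at the fetch
        // site with a stack trace.
        FetchMode::Strict => panic!("soroban-fork: failed to fetch {key}"),
        // Lenient: log and report the entry as absent.
        FetchMode::Lenient => {
            eprintln!("warn: failed to fetch {key}, returning None");
            None
        }
    }
}

fn main() {
    // Lenient mode degrades to "entry not found".
    assert_eq!(handle_fetch_failure(FetchMode::Lenient, "balance"), None);
    // Strict mode panics; caught here only for demonstration.
    let strict = std::panic::catch_unwind(|| {
        handle_fetch_failure(FetchMode::Strict, "balance")
    });
    assert!(strict.is_err());
}
```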

RpcConfig

Transport tunables. Defaults: 3 retries with 300 ms exponential backoff plus full jitter (so concurrent test runners don't synchronise their retries into a thundering herd), 30 s per-request timeout, 200-key batch size (Soroban RPC cap). Customize via .rpc_config(RpcConfig { .. }) on the builder. HTTP 408, 425, 429, and 5xx responses are retried; other 4xx codes fail fast and include the response body for diagnostics.
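The retry policy described, exponential backoff with full jitter, can be sketched like this. The constants mirror the documented defaults; the helper names and the unit_random parameter are invented for illustration:

```rust
/// Upper bound for attempt n: base * 2^n (capped to avoid overflow).
fn backoff_upper_ms(base_ms: u64, attempt: u32) -> u64 {
    base_ms.saturating_mul(1u64 << attempt.min(16))
}

/// Full jitter: the actual delay is drawn uniformly from [0, upper],
/// so concurrent retriers spread out instead of retrying in lockstep.
/// `unit_random` is a stand-in for a uniform sample in [0, 1).
fn jittered_delay_ms(base_ms: u64, attempt: u32, unit_random: f64) -> u64 {
    (backoff_upper_ms(base_ms, attempt) as f64 * unit_random) as u64
}

fn main() {
    // Documented defaults: 300 ms base, 3 retries.
    for attempt in 0..3 {
        let cap = backoff_upper_ms(300, attempt);
        let delay = jittered_delay_ms(300, attempt, 0.5);
        assert!(delay <= cap);
        println!("attempt {attempt}: delay in [0, {cap}] ms");
    }
}
```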

Tracing — Foundry-style call trees

Set .tracing(true) on the builder to capture cross-contract call trees. The host runs in DiagnosticLevel::Debug, every fn_call/fn_return emits a diagnostic event, and env.trace() reconstructs the tree:

let env = ForkConfig::new(rpc_url)
    .tracing(true)
    .build()?;

env.invoke_contract::<i128>(&vault, &Symbol::new(&env, "deposit"), args);
env.print_trace();
[TRACE]
  [CABC…XYZ1] deposit(GACC…QRST, 1000000)
    [CCDE…UVW2] transfer_from(GACC…QRST, CABC…XYZ1, 1000000)
      ← ()
    [CFGH…IJK3] invest(1000000)
      ← 1010000
    ← 1010000

Programmatic access via env.trace() returns a Trace with structured TraceFrames — useful for asserting call structure or balances inside a test. Failed calls render as [rolled back]; WASM traps show as TRAPPED (no fn_return).

Per-invocation scoping. The host's InvocationMeter clears the events buffer at the start of every top-level invoke_contract, so each trace() reflects only the most recent top-level call. Capture before the next call if you need history. See the trace module docs for wire-format details and caveats (single-Vec-arg ambiguity).

Auth introspection — print_auth_tree

When debugging cross-contract authorization (Error(Auth, InvalidAction), unexpected require_auth panics, "did Alice actually have to sign this?"), read out the recording auth manager's payload set:

let env = ForkConfig::new(rpc_url).build()?;
env.mock_all_auths();

env.invoke_contract::<()>(&usdc, &Symbol::new(&env, "transfer"), args);

env.print_auth_tree();
[AUTH]
  payload #0  signer=GA62…J3CT  nonce=5541220902715666415
    [CCW6…MI75] transfer(GA62…J3CT, GB7O…7AMM, 250000000)

Programmatic access via env.auth_tree() returns an AuthTree with payload_count() / invocation_count() / is_empty() accessors — useful for asserting that a multi-hop call demanded exactly the expected number of require_auths. Raw Vec<RecordedAuthPayload> is available via env.auth_payloads().

Per-invocation scoping, like trace(): only payloads from the most recent top-level invoke_contract are visible.

Limits inherited from the upstream host crate, documented honestly:

  • Error(Auth, InvalidAction) carries only the address — the failed contract / function / expected authorizer are constructed locally inside the host and not persisted to any accessor we can read out. After a failed call, auth_tree() reflects whatever payload set the host left in its previous_authorization_manager; the exact contents after a panic mid-invocation are an implementation detail of soroban-env-host. A structured last_auth_failure() awaits an upstream change.
  • Whether mock_all_auths_allowing_non_root_auth was used is not exposed by the host. The README's "Common pitfalls" section documents the trap; an enforceable strict_auth mode awaits an upstream change.

JSON-RPC server mode

Library mode (everything above) is for Rust tests. Server mode turns soroban-fork into a Stellar Soroban RPC drop-in that any tooling — JS, Python, Go SDKs, Stellar Lab, Freighter, custom clients — can point at:

cargo install soroban-fork --features server
soroban-fork serve --rpc https://soroban-rpc.mainnet.stellar.gateway.fm
# → serving JSON-RPC on http://127.0.0.1:8000

Then any client speaking the Stellar RPC dialect:

import { SorobanRpc } from "@stellar/stellar-sdk";
const server = new SorobanRpc.Server("http://localhost:8000");
const account = await server.getAccount("GA5...");
const result = await server.simulateTransaction(tx);  // hits the fork

Or via the Rust stellar-rpc-client, raw curl, or anything that understands the spec.

Pre-funded test accounts (new in v0.7)

The fork mints 10 deterministic test accounts at build time, each with 100K XLM and a USDC trustline ready to receive. Same seed produces the same accounts every run, so test code can hard-code addresses by index. The CLI prints them on startup:

soroban-fork v0.7
Listening on http://127.0.0.1:8000

Available test accounts:
(0) GBXXX...AB12  (100000.0000000 XLM)  ->  SAXXX...CD34
(1) GCYYY...EF56  (100000.0000000 XLM)  ->  SAXXX...GH78
...

Pass them to the JS SDK's Keypair.fromSecret(...) to sign envelopes. After every successful sendTransaction, the source account's sequence number auto-increments, so chained getAccount → TransactionBuilder → sendTransaction loops just work.

Real DEX flow works end-to-end. A test account can swap XLM → USDC against the live Phoenix DEX (or Soroswap, Aquarius, …) and the USDC actually lands in its trustline. Smoke-tested: 1000 XLM → 167.4020548 USDC at the live mainnet pool reserves.

No hidden hardcode. The trustline default targets the mainnet USDC issuer (Circle); for testnet, futurenet, or a custom fork, override via ForkConfig::test_account_trustlines(vec![...]). The trustlines are written with flags = AUTHORIZED_FLAG, limit = i64::MAX — shape-equivalent to running ChangeTrust then having the issuer authorize, just bootstrapped at build time. Auth runs in trust mode (Recording(false)) so unsigned envelopes from test code apply without ceremony.

Override count via --accounts N (set to 0 to disable). For library users, ForkConfig::test_account_count(n).build() exposes the same machinery; read accounts back with env.test_accounts().

Deploy your own contracts onto the fork (new in v0.7)

The same sendTransaction accepts HostFunction::UploadContractWasm and HostFunction::CreateContract, so you can deploy custom contracts straight onto the forked mainnet state and have them call live production contracts. The test suite's server_deploy_and_invoke_custom_contract covers the full loop:

  1. Upload a tiny add(i32, i32) -> i32 WASM
  2. Create the contract instance from the uploaded hash
  3. Invoke add(2, 3) on the deployed contract — returns 5

Cross-protocol scenarios (your contract calls Blend, Phoenix, Soroswap, etc.) follow the same pattern: dependencies the deployed contract reaches into get lazy-fetched from mainnet and cached locally.

The headline showcase: cheatcode-only deploy

What makes the toolset matter, in one test:

  1. fork_setCode installs your WASM bytes (no UploadContractWasm envelope)
  2. fork_setStorage installs the contract instance entry pointing at that WASM, at a synthetic contract address (no CreateContract envelope, no source-account juggling, no salt)
  3. simulateTransaction invokes your cheatcode-deployed contract, returns the result
  4. The same fork still serves live mainnet contracts — XLM SAC.decimals() returns 7, your synth_contract.add(2,3) returns 5, both in the same simulation context

Two cheatcode calls and a contract is callable. That's the Foundry-vm.etch-equivalent — the headline reason this toolset exists. Live in server_cheatcode_only_deploy_coexists_with_mainnet; end-to-end against mainnet, ~70 LoC.

Methods supported in v0.9.2

  • getHealth — fork status + latest ledger
  • getVersionInfo — server version + protocol version
  • getNetwork — passphrase + protocol version + network ID (proxied from the upstream RPC at fork-build time, then served locally)
  • getLatestLedger — fork's reported ledger sequence + protocol
  • getLedgers — single-element page describing the fork point with real ledgerCloseTime (Unix-seconds string, per Stellar convention)
  • getLedgerEntries — base64-XDR LedgerKey array → array of entries; routed through the fork's lazy-fetch cache, so first hit proxies upstream and subsequent hits are local
  • simulateTransaction — accepts a base64-XDR TransactionEnvelope with one InvokeHostFunctionOp, runs it via the host's recording-mode primitive, returns:
    • results[0].xdr — the function's return value (ScVal)
    • results[0].auth — auth entries sendTransaction would need
    • transactionData — SorobanTransactionData with the recorded footprint and a resourceFee matching minResourceFee
    • events — diagnostic events emitted during simulation
    • cost.cpuInsns / cost.memBytes — real numbers from the host's Budget, not a write_bytes proxy
    • minResourceFee — derived from the live on-chain Soroban fee schedule via compute_transaction_resource_fee (since v0.5.2)
    • latestLedger — fork's reported ledger
  • sendTransaction (new in v0.6) — applies the host invocation's writes back to the snapshot source so subsequent reads see them. Auth runs in trust mode (Recording(false)) so unsigned envelopes from test code apply without ceremony. Returns status ("SUCCESS" / "ERROR"), hash (sha256 of the envelope), appliedChanges (number of LedgerEntryChanges written), and the original envelope echo.
  • getTransaction (new in v0.6) — receipt lookup by hash. Returns "SUCCESS" / "FAILED" / "NOT_FOUND", plus the original envelope, the host function's ScVal return value, and the applied-changes count when found.

Fork-mode extensions (fork_*)

Non-standard methods, only available against soroban-fork. The fork_ prefix marks the namespace boundary explicitly so a client can distinguish "this works against any Stellar RPC" from "this only works against the fork."

  • fork_setLedgerEntry (new in v0.8, renamed from anvil_setLedgerEntry in v0.8.1) — force-write a base64-XDR LedgerEntry to any LedgerKey directly in the snapshot source, bypassing host-level checks. Load-bearing primitive for stress-test scenarios — oracle price manipulation, force-set token balances, replace contract code, all reduce to this one entry write.
  • fork_setStorage (new in v0.8.2) — sugar over fork_setLedgerEntry for the common case of writing into a contract's storage. Takes contract (strkey), key (base64 ScVal), value (base64 ScVal), optional durability ("persistent" (default) / "temporary"), and optional liveUntilLedgerSeq. The handler builds the ContractData XDR server-side so clients don't have to assemble the multi-level enum nesting themselves. Use this for oracle price overrides and contract-storage scenarios.
  • fork_setCode (new in v0.8.3) — upload WASM bytes as a ContractCode ledger entry, keyed by sha256 of those bytes. Takes wasm (base64) and optional liveUntilLedgerSeq. Returns { ok, hash, latestLedger } — the hash is server-derived (the host computes it the same way), so callers can wire a follow-up CreateContract (or fork_setStorage over a ContractInstance ScVal) to point at the uploaded code without any host invocation.
  • fork_setBalance (new in v0.8.4, Soroban-token path added in v0.8.7) — Foundry's deal()-equivalent for Stellar. Three asset shapes:
    • "native" (default) — XLM, balance lives on AccountEntry. Auto-creates the account with master threshold 1 if missing.
    • { code, issuer } — Classic credit asset (USDC, EURC, …), balance lives on TrustLineEntry. Auto-creates the trustline with flags = AUTHORIZED, limit = i64::MAX — equivalent to having run ChangeTrust and the issuer authorising.
    • { contract } (v0.8.7) — any SEP-41-shaped Soroban token (the SAC for Classic assets, custom Soroban tokens like BLND). Handler simulates balance(to), computes the delta, and invokes mint(to, delta) or burn(to, |delta|) with trust-mode auth bypassing admin checks.
    • amount is a decimal string. For Classic paths it's i64 stroops; for the contract path it's i128.
    • Takes account (G-strkey) for the recipient and the asset discriminant above. Returns { ok, latestLedger }.
  • fork_etch (new in v0.8.6) — Foundry's vm.etch-equivalent. Hot-swap the WASM under any contract address in one wire call. Takes contract (strkey), wasm (base64 bytes), optional liveUntilLedgerSeq. Internally: install ContractCode, then read-modify-write the contract's instance entry to point at the new code hash. Storage is preserved verbatim — if the existing instance carries contract state, swapping code keeps that state intact (the hotfix scenario). Auto-creates the instance entry if the target address has none yet — works on any address regardless of prior state, just like Anvil. One wire call replaces the fork_setCode + fork_setStorage dance the v0.8.5 showcase uses.
  • fork_closeLedgers (new in v0.8, renamed from anvil_mine in v0.8.1) — closes ledgers ledgers (default 1) and bumps close-time by timestampAdvanceSeconds (default ledgers * 5, matching Stellar's average close rate). Stellar's verb for this is closing a ledger; it pushes time-sensitive contract logic (vesting cliffs, oracle staleness) past thresholds without orchestrating real transactions.
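On the wire, the fork_* extensions use ordinary JSON-RPC 2.0 framing. A sketch of a fork_setStorage call using the parameter names listed above (whether params is a named object or positional should be confirmed against the method docs; the strkey and base64 values are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "fork_setStorage",
  "params": {
    "contract": "C...TARGET_CONTRACT_STRKEY",
    "key": "<base64 ScVal>",
    "value": "<base64 ScVal>",
    "durability": "persistent"
  }
}
```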

What v0.9.2 server does NOT support

Listed up front so nothing surprises you:

  • getEvents — historical event filtering. Diagnostic events emitted during simulation are reachable via simulateTransaction's response.
  • Ergonomic fork_* wrappers — setNonce, impersonate. The primitive fork_setLedgerEntry covers all of these once the client constructs the right XDR; fork_setStorage (v0.8.2), fork_setCode (v0.8.3), fork_setBalance (v0.8.4 + v0.8.7), and fork_etch (v0.8.6) are the sugar wrappers landed so far.
  • fork_snapshot / fork_revert — saved-state checkpoints. Scoped to v0.9 (the Rc<HostImpl> snapshot model needs its own design pass — either a journaling layer over RpcSnapshotSource or a clone-on-snapshot of the entire cache map).
  • Ledger close as a sendTransaction side-effect — each send applies its writes and bumps the source's seq_num, but does not automatically advance env.ledger().sequence_number(). Use fork_closeLedgers (or env.warp(...) from lib mode) to push the ledger forward. Auto-close on send is a v0.8.x ergonomic followup.
  • resultMetaXdr on getTransaction — Stellar's TransactionMeta::V3 carries state-change deltas in a Stellar-core-XDR-heavy shape; v0.6 returns returnValueXdr and appliedChanges instead. Full meta XDR is a v0.6.x followup.

Architecture: single-threaded actor

axum HTTP handlers run on a multi-thread tokio runtime; commands flow through a bounded mpsc channel to one OS thread that owns the ForkedEnv. The SDK's Env contains Rc<HostImpl> and is !Send, so it can't live behind Arc<RwLock> — single-thread ownership with explicit messaging is the load-bearing constraint of this design.

[HTTP handler 1]──┐
[HTTP handler 2]──┼──mpsc::channel──→ [worker thread] owns ForkedEnv
[HTTP handler N]──┘                           │
                                              └─→ snapshot_source.get()
                                                  └─→ on cache miss → upstream RPC

Cache misses on getLedgerEntries block the worker for one upstream round-trip. Steady state (after first contact) is local.
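The pattern, one thread owning a !Send value and serving requests over a channel, can be sketched with std primitives. Worker and the Command set here are invented stand-ins, not the server's actual types:

```rust
use std::rc::Rc;
use std::sync::mpsc;
use std::thread;

// Stand-in for ForkedEnv: holding an Rc makes it !Send, so it can
// never cross a thread boundary.
struct Worker { state: Rc<Vec<u8>> }

enum Command {
    FetchCount(mpsc::Sender<usize>), // request/reply over a channel
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Command>();

    // One OS thread constructs and owns the !Send value for its
    // whole lifetime; only Commands cross the boundary.
    let worker = thread::spawn(move || {
        let env = Worker { state: Rc::new(vec![1, 2, 3]) };
        while let Ok(cmd) = rx.recv() {
            match cmd {
                Command::FetchCount(reply) => {
                    let _ = reply.send(env.state.len());
                }
                Command::Shutdown => break,
            }
        }
    });

    // "HTTP handlers" send commands and await the reply.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Command::FetchCount(reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 3);

    tx.send(Command::Shutdown).unwrap();
    worker.join().unwrap();
}
```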

Library API for server mode

If you'd rather embed the server in your own Rust process (CI test harness, custom Stellar tooling), use the library API:

use soroban_fork::{ForkConfig, server::Server};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ForkConfig::new("https://soroban-rpc.mainnet.stellar.gateway.fm");

    Server::builder(config)
        .listen("127.0.0.1:8000".parse().unwrap())
        .serve()        // runs until SIGINT/SIGTERM
        .await?;

    Ok(())
}

For tests that need to bind ephemeral ports and shut down programmatically:

let running = Server::builder(config)
    .listen("127.0.0.1:0".parse().unwrap())   // OS-assigned port
    .start()
    .await?;
let url = format!("http://{}", running.local_addr());
// ... drive the server with a real client ...
running.shutdown().await?;

RpcSnapshotSource

The core primitive. Implements soroban_env_host::storage::SnapshotSource:

use std::sync::Arc;
use soroban_fork::{RpcSnapshotSource, RpcConfig};
use soroban_fork::RpcClient; // re-exported

let client = Arc::new(RpcClient::new("https://soroban-testnet.stellar.org:443", RpcConfig::default())?);
let source = Arc::new(RpcSnapshotSource::new(client));
source.preload(entries);          // pre-load entries from a snapshot file
let all_entries = source.entries();  // export for persistence

RpcSnapshotSource is Send + Sync, so it can be wrapped in Arc and shared across threads — useful for parallel test runners and the RPC-server mode. Internally the cache stores XDR-encoded bytes and parses to LedgerEntry only at the SDK boundary, so no Rc ever crosses threads.

Errors

Every public fallible API returns Result<T, ForkError>. The error enum discriminates transport failures, RPC-level errors, XDR codec failures, cache I/O, and protocol-violation cases — no string-typed errors.
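For illustration, a structurally similar enum (variant names here are invented; see docs.rs for the real ForkError):

```rust
use std::fmt;

/// Illustrative shape of a discriminated fork error — NOT the crate's
/// actual ForkError; consult docs.rs for the real variant names.
#[derive(Debug)]
enum FakeForkError {
    Transport(String),                  // HTTP / connection failures
    Rpc { code: i64, message: String }, // JSON-RPC level errors
    Xdr(String),                        // XDR codec failures
    CacheIo(std::io::Error),            // cache file I/O
    Protocol(String),                   // protocol-violation cases
}

impl fmt::Display for FakeForkError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::Transport(e) => write!(f, "transport error: {e}"),
            Self::Rpc { code, message } => write!(f, "rpc error {code}: {message}"),
            Self::Xdr(e) => write!(f, "xdr error: {e}"),
            Self::CacheIo(e) => write!(f, "cache i/o error: {e}"),
            Self::Protocol(e) => write!(f, "protocol error: {e}"),
        }
    }
}

fn main() {
    let err = FakeForkError::Rpc { code: -32602, message: "invalid params".into() };
    // Callers match on the variant instead of parsing a string.
    assert!(matches!(err, FakeForkError::Rpc { .. }));
    println!("{err}");
}
```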

Logging

Uses the log facade — no output unless a logger is initialized in the test binary. Typical setup:

RUST_LOG=soroban_fork=info cargo test -- --ignored

Examples

Runnable demos against live Stellar mainnet. Each one targets a real contract — Blend lending, Phoenix DEX — to show where lazy-fork pays off compared to fabricated reserves in a snapshot test.

# What does my 50K USDC deposit do to the Blend Fixed pool?
cargo run --release --example blend_lending

# What's my fill price market-selling 1M XLM into Phoenix?
cargo run --release --example phoenix_slippage

# Phoenix vs Soroswap on the same XLM/USDC trade — how big is the
# cross-DEX price gap right now?
cargo run --release --example cross_dex_arbitrage

MAINNET_RPC_URL overrides the upstream RPC. Each example prints the forked ledger sequence and the number of RPC fetches it triggered.

For server-mode tooling (@stellar/stellar-sdk, Stellar Lab, Freighter), examples/server_demo.mjs shows the JSON-RPC dialect working from Node — no npm install, no XDR dance, just fetch():

# shell A — start the fork server
cargo run --release --features server --bin soroban-fork -- \
    serve --rpc https://soroban-rpc.mainnet.stellar.gateway.fm

# shell B — drive it from Node
node examples/server_demo.mjs

Combining with stellar snapshot create

For maximum speed, pre-snapshot known contracts and let soroban-fork handle the rest lazily:

# Snapshot the main contracts you know about
stellar snapshot create \
  --address $VAULT_CONTRACT \
  --network testnet --output json --out vault_state.json

stellar snapshot create \
  --address $STRATEGY_CONTRACT \
  --network testnet --output json --out strategy_state.json

stellar snapshot merge \
  --input vault_state.json --input strategy_state.json \
  --output merged.json

let env = ForkConfig::new("https://soroban-testnet.stellar.org:443")
    .cache_file("merged.json")  // pre-loaded entries skip RPC
    .build()?;

// Calls to vault/strategy use cached entries (fast).
// Calls to USDC token or other dependencies are fetched lazily from RPC.

Diagnostics

Every lazy fetch is logged to stderr with human-readable key types:

[soroban-fork] forked at ledger 2070078 (protocol 25)
[soroban-fork] fetch #1: ContractData(instance)
[soroban-fork] fetch #2: ContractCode(dee2d494...)
[soroban-fork] fetch #3: ContractData(persistent)
[soroban-fork] saved 3 entries to test_cache.json

env.fetch_count() returns the total number of RPC calls for programmatic assertions.

Cache format

The cache file uses the same JSON format as stellar snapshot create (LedgerSnapshot). You can:

  • Use a stellar snapshot create output as the cache input
  • Share cache files between team members for reproducible tests
  • Inspect cached entries with stellar xdr decode

Cache is saved automatically when ForkedEnv is dropped, including all entries that were lazy-fetched during the test. This means the second run of a test with cache_file set will be fully local: zero RPC calls.

Common pitfalls

Real things that have tripped up integrators wiring soroban-fork into Blend-style mainnet tests. Read these before opening an issue.

mock_all_auths() vs mock_all_auths_allowing_non_root_auth()

Reach for plain env.mock_all_auths(). Don't use mock_all_auths_allowing_non_root_auth() unless you understand exactly which authorisations it disables.

The _allowing_non_root_auth variant skips authorisation checks for non-root (cross-contract callee) frames. If your contract makes a self-call (to=self) or relies on authorize_as_current_contract declarations, the relaxed mode silently masks missing auth declarations during fork tests — they pass locally and only fail on testnet with Error(Auth, InvalidAction). By the time you see the testnet error, you've lost the trail back to the missing declaration.

Rule of thumb: start with plain mock_all_auths(). Only switch to the relaxed variant when you can name which non-root authorisations you're intentionally bypassing — and document why in a comment.

A ForkConfig::strict_auth(true) switch that refuses the relaxed variant on the env was scoped for v0.9.0 but dropped when research showed the host's disable_non_root_auth flag has no public getter — any implementation we shipped would have been a half-measure with no real enforcement. Will revisit if rs-soroban-env adds an accessor.

include_bytes!("…wasm") does not rebuild the wasm

If your test loads a contract via include_bytes!("../target/wasm32v1-none/release/my_contract.wasm"), Cargo will not rebuild the wasm when you edit the contract's .rs files. cargo test rebuilds the test binary, which sees only whatever wasm bytes were on disk the last time stellar contract build ran.

You will see tests pass against stale wasm. The symptom is "I fixed the contract but the test still observes the old behaviour."

The fix shipped in v0.9.1 is soroban_fork::workspace_wasm:

let wasm = soroban_fork::workspace_wasm("my_contract")
    .expect("build my_contract for wasm32v1-none");
// Pass `wasm` to whatever needs the bytes — Env::register, an
// UploadContractWasm envelope, fork_setCode over JSON-RPC, etc.

It runs cargo build -p my_contract --target wasm32v1-none --release at test runtime and reads the resulting .wasm file. Cargo's incremental compilation keeps the rebuild cheap when the source hasn't actually changed (sub-second on small crates). Because the build runs every time you call workspace_wasm, the bytes are always in sync with the source — the trap closes.

Layout assumption: my_contract is a member of the same Cargo workspace as the test, with a cdylib target. The workspace root is located via cargo metadata, so CARGO_TARGET_DIR overrides are honored automatically. See workspace module docs for workspace_wasm_with(crate_name, target, profile) if you need to override the default target/profile.

For test suites that load wasm in many places, the cheaper alternative is a build.rs in the test crate that runs cargo build -p <contract> once per cargo test invocation (and emits cargo:rerun-if-changed= directives on the contract's source files). That keeps the compile cost out of every individual test. Pick whichever fits — workspace_wasm is the smaller diff for most projects.

cache_file for cheap CI

A typical Blend-style mainnet fork test fetches 30–40 ledger entries on the first run. Without cache_file, every CI run pays that round-trip cost — five tests × ~35 fetches ≈ 175 live RPC calls per CI build, gated by the upstream RPC's rate limits.

Set .cache_file("…") once and check the file into git. Subsequent runs (local and CI) read entirely from the cache: zero RPC calls, full repro of the captured state. See Cache format above.

let env = ForkConfig::new(rpc_url)
    .cache_file("tests/fixtures/blend_pool_cache.json")
    .build()?;

Debugging cross-contract auth chains

When you see Error(Auth, InvalidAction) or unexpected require_auth panics, two complementary tools cover most cases:

  • env.print_auth_tree() (new in v0.9.0) — dumps the recording auth manager's payload set as a Foundry-style indented tree. Names every signer, nonce, contract, function, and arg list that require_auth was demanded for during the most recent top-level invocation. See Auth introspection above.
  • env.print_trace() (with .tracing(true) on the builder) — dumps the cross-contract call tree. The rolled-back frame surfaces which contract refused which authorisation. See Tracing above.

A structured env.last_auth_failure() accessor — exact contract, function, expected authorizer that fired the InvalidAction — was scoped for v0.9.0 but dropped when research showed the host constructs the failure with only the address in its diagnostic args and discards the rest. Will revisit if rs-soroban-env persists the failure context.

into_val(&env) and ForkedEnv

ForkedEnv derefs to &Env, but the IntoVal blanket impls don't follow the deref. If you write something.into_val(&env) in a test body where env: ForkedEnv, the compiler will reject it. Either pass &*env explicitly, or extract the conversion into a helper that takes env: &Env. A direct IntoVal<ForkedEnv, Val> impl is being evaluated for v0.9.x.
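The failure is generic Rust rather than anything Soroban-specific: the blanket impl's type parameter is inferred from the argument before deref coercion can apply. A toy reproduction with invented types:

```rust
use std::ops::Deref;

struct Env;                // stands in for soroban_sdk::Env
struct ForkedEnvLike(Env); // stands in for ForkedEnv

impl Deref for ForkedEnvLike {
    type Target = Env;
    fn deref(&self) -> &Env { &self.0 }
}

trait EnvBase {}           // stands in for the SDK's internal Env trait
impl EnvBase for Env {}

trait IntoValLike<E: EnvBase> { fn into_val(self, env: &E) -> u64; }

// Blanket impl generic over E, shaped like the SDK's IntoVal<E, T>.
impl<E: EnvBase> IntoValLike<E> for u64 {
    fn into_val(self, _env: &E) -> u64 { self }
}

fn main() {
    let env = ForkedEnvLike(Env);
    // `7u64.into_val(&env)` does not compile: E is inferred as
    // ForkedEnvLike, which has no EnvBase impl, and deref coercion
    // never gets a chance to run.
    let v = 7u64.into_val(&*env); // &*env is concretely &Env, so E = Env
    assert_eq!(v, 7);
}
```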

Limitations

What soroban-fork does NOT yet do — listed up front so nothing surprises you in production:

  • No sendTransaction / state mutation through RPC. (closed in v0.6.) Server-mode sendTransaction applies writes back to the snapshot source so subsequent reads see them; getTransaction retrieves receipts by hash. (closed in v0.7:) the fork now mints 10 pre-funded test accounts at build, auto-increments seq_num after every successful send, and accepts UploadContractWasm + CreateContract host functions — full deploy-then-call workflow against forked mainnet works. (closed in v0.8:) fork_setLedgerEntry and fork_closeLedgers extensions land — force-write any LedgerEntry and advance the reported ledger directly. Ergonomic fork_* wrappers (impersonate / setBalance / setCode / setStorage) are a v0.8.x followup; fork_snapshot / fork_revert are scoped to v0.9.
  • No TTL / archival simulation. Soroban entries carry a live_until_ledger_seq; on real mainnet they become archived past that ledger and need a RestoreFootprint operation. We track live_until in the cache but do not yet model expiry — bumping env.ledger() past an entry's live_until will not flip it to archived. Tests that depend on TTL-expiry semantics will see false-positives.
  • No historical state. at_ledger(N) shifts only what env.ledger().sequence_number() reports; the actual ledger entries are always fetched at the RPC's current latest. Pin to a specific ledger only when paired with cache_file for reproducibility, not when expecting historical state.
  • Tracing renders structure, not metering. env.trace() captures the call tree with decoded args and return values. It does not yet render per-frame gas / cost units, contract events, or decoded HostError reasons. (Diagnostic events from the host carry call structure but not metering numbers; metering is planned. Server-mode simulateTransaction does return real cost.cpuInsns separately.)
  • Server simulateTransaction fee fields are stubbed. (closed in v0.5.2.) minResourceFee is now derived from the live on-chain fee schedule via compute_transaction_resource_fee, and cost.memBytes reads Budget::get_mem_bytes_consumed directly. Bandwidth + historical-data fees use the actual envelope size received over the wire.
  • Footprint discovery. Soroban requires declaring the transaction footprint before execution. The fork tool handles this transparently via the recording-mode footprint in the test environment.

Requirements

  • Rust 1.91+ (the Soroban SDK 25.3.1 floor)
  • soroban-sdk 25.x (with testutils feature)
  • Network access to a Soroban RPC endpoint

Why this exists

The Stellar SDK supports snapshot-based fork testing via stellar snapshot create + Env::from_snapshot_file(). But you must know every contract address your test will touch in advance. Miss one dependency and the test fails.

This tool adds the missing piece: lazy loading on cache miss. It implements SnapshotSource (the trait that feeds ledger entries to the Soroban VM) with an RPC fallback. The standard soroban_sdk::Env works unchanged.

See stellar/rs-soroban-sdk#1440 for the upstream issue tracking this gap.

License

MIT OR Apache-2.0