pub struct OptimizationFlags {
pub llvm_passes: bool,
pub peephole: bool,
pub register_cache: bool,
pub icmp_branch_fusion: bool,
pub shrink_wrap_callee_saves: bool,
pub dead_store_elimination: bool,
pub constant_propagation: bool,
pub inlining: bool,
pub cross_block_cache: bool,
pub register_allocation: bool,
pub dead_function_elimination: bool,
pub fallthrough_jumps: bool,
pub aggressive_register_allocation: bool,
pub allocate_scratch_regs: bool,
pub allocate_caller_saved_regs: bool,
pub lazy_spill: bool,
pub inline_threshold: Option<u32>,
}
Flags to enable/disable individual compiler optimizations. All optimizations are enabled by default.
Fields
llvm_passes: bool
Run LLVM optimization passes (mem2reg, instcombine, simplifycfg, gvn, dce). When false, inlining is also disabled, since all LLVM passes are skipped.

peephole: bool
Run the peephole optimizer (fallthrough removal, dead code elimination).

register_cache: bool
Enable the per-block register cache (store-load forwarding).

icmp_branch_fusion: bool
Fuse ICmp + Branch into a single PVM branch instruction.

shrink_wrap_callee_saves: bool
Only save/restore the callee-saved registers (r9-r12) that are actually used.

dead_store_elimination: bool
Eliminate SP-relative stores whose target offset is never loaded from.

constant_propagation: bool
Skip redundant LoadImm/LoadImm64 instructions when the register already holds the constant.

inlining: bool
Inline small functions at the LLVM IR level to eliminate call overhead.

cross_block_cache: bool
Propagate the register cache across single-predecessor block boundaries.

register_allocation: bool
Allocate long-lived SSA values to physical registers (r5, r6) across block boundaries.

dead_function_elimination: bool
Eliminate unreachable functions that are not called from entry points or the function table.

fallthrough_jumps: bool
Eliminate unconditional jumps to the immediately following block (fallthrough).

aggressive_register_allocation: bool
Lower the minimum-use threshold for register-allocation candidates from 2 to 1. Captures more values (e.g. two-branch if-else patterns) at the cost of slightly more MoveReg traffic in small leaf functions.

allocate_scratch_regs: bool
Allocate r5/r6 (abi::SCRATCH1/SCRATCH2) in all functions that don’t clobber them (no bulk memory ops, no funnel shifts). In non-leaf functions, spill/reload around calls is handled automatically.

allocate_caller_saved_regs: bool
Allocate r7/r8 (RETURN_VALUE_REG/ARGS_LEN_REG) in all functions. These are caller-saved and idle after the prologue; in non-leaf functions, they are invalidated after calls via an arity-aware predicate.

lazy_spill: bool
Skip stack stores at definition for register-allocated values (lazy spill). Values are only written to the stack when required (call clobber, return, phi reads, eviction). Requires register_allocation to be effective.

inline_threshold: Option<u32>
Maximum number of LLVM IR instructions a function may contain and still be inlineable. Functions exceeding this are marked noinline. None uses LLVM’s default (225). Default: Some(5), so only tiny helpers are inlined. Only effective when inlining is true.
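The inline_threshold rule described above can be sketched as a small predicate. `should_mark_noinline` is a hypothetical helper for illustration, not part of this crate's API; it assumes the documented semantics: a function whose LLVM IR instruction count exceeds the threshold is marked noinline, and None falls back to LLVM's default of 225.

```rust
// Hypothetical helper: returns true when a function should be marked
// noinline under the inline_threshold rule. None means "use LLVM's
// default threshold of 225".
fn should_mark_noinline(ir_len: u32, inline_threshold: Option<u32>) -> bool {
    ir_len > inline_threshold.unwrap_or(225)
}

fn main() {
    assert!(should_mark_noinline(6, Some(5))); // exceeds Some(5): noinline
    assert!(!should_mark_noinline(5, Some(5))); // at the threshold: still inlineable
    assert!(!should_mark_noinline(200, None)); // under LLVM's default of 225
    assert!(should_mark_noinline(300, None)); // over LLVM's default
}
```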
Trait Implementations

impl Clone for OptimizationFlags
    fn clone(&self) -> OptimizationFlags
    fn clone_from(&mut self, source: &Self)