cranelift_codegen/machinst/buffer.rs
1//! In-memory representation of compiled machine code, with labels and fixups to
2//! refer to those labels. Handles constant-pool island insertion and also
3//! veneer insertion for out-of-range jumps.
4//!
5//! This code exists to solve three problems:
6//!
7//! - Branch targets for forward branches are not known until later, when we
8//! emit code in a single pass through the instruction structs.
9//!
10//! - On many architectures, address references or offsets have limited range.
11//! For example, on AArch64, conditional branches can only target code +/- 1MB
12//! from the branch itself.
13//!
14//! - The lowering of control flow from the CFG-with-edges produced by
15//! [BlockLoweringOrder](super::BlockLoweringOrder), combined with many empty
16//! edge blocks when the register allocator does not need to insert any
17//! spills/reloads/moves in edge blocks, results in many suboptimal branch
18//! patterns. The lowering also pays no attention to block order, and so
19//! two-target conditional forms (cond-br followed by uncond-br) can often be
20//! avoided because one of the targets is the fallthrough. There are several
21//! cases here where we can simplify to use fewer branches.
22//!
23//! This "buffer" implements a single-pass code emission strategy (with a later
24//! "fixup" pass, but only through recorded fixups, not all instructions). The
25//! basic idea is:
26//!
27//! - Emit branches as they are, including two-target (cond/uncond) compound
28//! forms, but with zero offsets and optimistically assuming the target will be
29//! in range. Record the "fixup" for later. Targets are denoted instead by
30//! symbolic "labels" that are then bound to certain offsets in the buffer as
31//! we emit code. (Nominally, there is a label at the start of every basic
32//! block.)
33//!
34//! - As we do this, track the offset in the buffer at which the first label
35//! reference "goes out of range". We call this the "deadline". If we reach the
36//! deadline and we still have not bound the label to which an unresolved branch
37//! refers, we have a problem!
38//!
39//! - To solve this problem, we emit "islands" full of "veneers". An island is
40//! simply a chunk of code inserted in the middle of the code actually produced
41//! by the emitter (e.g., vcode iterating over instruction structs). The emitter
42//! has some awareness of this: it either asks for an island between blocks, so
43//! it is not accidentally executed, or else it emits a branch around the island
44//! when all other options fail (see `Inst::EmitIsland` meta-instruction).
45//!
46//! - A "veneer" is an instruction (or sequence of instructions) in an "island"
47//! that implements a longer-range reference to a label. The idea is that, for
48//! example, a branch with a limited range can branch to a "veneer" instead,
49//! which is simply a branch in a form that can use a longer-range reference. On
50//! AArch64, for example, conditionals have a +/- 1 MB range, but a conditional
51//! can branch to an unconditional branch which has a +/- 128 MB range. Hence, a
52//! conditional branch's label reference can be fixed up with a "veneer" to
53//! achieve a longer range.
54//!
55//! - To implement all of this, we require the backend to provide a `LabelUse`
56//! type that implements a trait. This is nominally an enum that records one of
57//! several kinds of references to an offset in code -- basically, a relocation
58//! type -- and will usually correspond to different instruction formats. The
59//! `LabelUse` implementation specifies the maximum range, how to patch in the
60//! actual label location when known, and how to generate a veneer to extend the
61//! range.
62//!
63//! That satisfies label references, but we still may have suboptimal branch
64//! patterns. To clean up the branches, we do a simple "peephole"-style
65//! optimization on the fly. To do so, the emitter (e.g., `Inst::emit()`)
66//! informs the buffer of branches in the code and, in the case of conditionals,
67//! the code that would have been emitted to invert this branch's condition. We
68//! track the "latest branches": these are branches that are contiguous up to
69//! the current offset. (If any code is emitted after a branch, that branch or
70//! run of contiguous branches is no longer "latest".) The latest branches are
71//! those that we can edit by simply truncating the buffer and doing something
72//! else instead.
73//!
74//! To optimize branches, we implement several simple rules, and try to apply
75//! them to the "latest branches" when possible:
76//!
77//! - A branch with a label target, when that label is bound to the ending
78//! offset of the branch (the fallthrough location), can be removed altogether,
79//! because the branch would have no effect).
80//!
81//! - An unconditional branch that starts at a label location, and branches to
82//! another label, results in a "label alias": all references to the label bound
83//! *to* this branch instruction are instead resolved to the *target* of the
84//! branch instruction. This effectively removes empty blocks that just
85//! unconditionally branch to the next block. We call this "branch threading".
86//!
87//! - A conditional followed by an unconditional, when the conditional branches
88//! to the unconditional's fallthrough, results in (i) the truncation of the
89//! unconditional, (ii) the inversion of the conditional's condition, and (iii)
90//! replacement of the conditional's target (using the original target of the
91//! unconditional). This is a fancy way of saying "we can flip a two-target
92//! conditional branch's taken/not-taken targets if it works better with our
93//! fallthrough". To make this work, the emitter actually gives the buffer
94//! *both* forms of every conditional branch: the true form is emitted into the
95//! buffer, and the "inverted" machine-code bytes are provided as part of the
96//! branch-fixup metadata.
97//!
98//! - An unconditional branch B preceded by another unconditional branch P, when B's label(s) have
99//! been redirected to target(B), can be removed entirely. This is an extension
100//! of the branch-threading optimization, and is valid because if we know there
101//! will be no fallthrough into this branch instruction (the prior instruction
102//! is an unconditional jump), and if we know we have successfully redirected
103//! all labels, then this branch instruction is unreachable. Note that this
104//! works because the redirection happens before the label is ever resolved
105//! (fixups happen at island emission time, at which point latest-branches are
106//! cleared, or at the end of emission), so we are sure to catch and redirect
107//! all possible paths to this instruction.
108//!
109//! # Branch-optimization Correctness
110//!
111//! The branch-optimization mechanism depends on a few data structures with
112//! invariants, which are always held outside the scope of top-level public
113//! methods:
114//!
115//! - The latest-branches list. Each entry describes a span of the buffer
116//! (start/end offsets), the label target, the corresponding fixup-list entry
117//! index, and the bytes (must be the same length) for the inverted form, if
118//! conditional. The list of labels that are bound to the start-offset of this
119//! branch is *complete* (if any label has a resolved offset equal to `start`
120//! and is not an alias, it must appear in this list) and *precise* (no label
121//! in this list can be bound to another offset). No label in this list should
122//! be an alias. No two branch ranges can overlap, and branches are in
123//! ascending-offset order.
124//!
125//! - The labels-at-tail list. This contains all MachLabels that have been bound
126//! to (whose resolved offsets are equal to) the tail offset of the buffer.
127//! No label in this list should be an alias.
128//!
129//! - The label_offsets array, containing the bound offset of a label or
130//! UNKNOWN. No label can be bound at an offset greater than the current
131//! buffer tail.
132//!
133//! - The label_aliases array, containing another label to which a label is
134//! bound or UNKNOWN. A label's resolved offset is the resolved offset
135//! of the label it is aliased to, if this is set.
136//!
137//! We argue below, at each method, how the invariants in these data structures
138//! are maintained (grep for "Post-invariant").
139//!
140//! Given these invariants, we argue why each optimization preserves execution
141//! semantics below (grep for "Preserves execution semantics").
142//!
143//! # Avoiding Quadratic Behavior
144//!
145//! There are two cases where we've had to take some care to avoid
146//! quadratic worst-case behavior:
147//!
148//! - The "labels at this branch" list can grow unboundedly if the
149//! code generator binds many labels at one location. If the count
150//! gets too high (defined by the `LABEL_LIST_THRESHOLD` constant), we
151//! simply abort an optimization early in a way that is always correct
152//! but is conservative.
153//!
154//! - The fixup list can interact with island emission to create
155//! "quadratic island behavior". In a little more detail, one can hit
156//! this behavior by having some pending fixups (forward label
157//! references) with long-range label-use kinds, and some others
158//! with shorter-range references that nonetheless still are pending
159//! long enough to trigger island generation. In such a case, we
160//! process the fixup list, generate veneers to extend some forward
161//! references' ranges, but leave the other (longer-range) ones
162//! alone. The way this was implemented put them back on a list and
163//! resulted in quadratic behavior.
164//!
165//! To avoid this, fixups are split into two lists: one "pending" list and one
166//! final list. The pending list is kept around for handling fixups related to
167//! branches so it can be edited/truncated. When an island is reached, which
168//! starts processing fixups, all pending fixups are flushed into the final
169//! list. The final list is a `BinaryHeap` which enables fixup processing to
170//! only process those which are required during island emission, deferring
171//! all longer-range fixups to later.
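//!
//! Putting the pieces together, a typical client drives the buffer roughly as
//! in the sketch below. `MyInst`, `blocks`, `emit_block`, and
//! `worst_case_block_size` are placeholders standing in for the VCode emission
//! machinery; the buffer methods shown are real:
//!
//! ```ignore
//! let mut buf = MachBuffer::<MyInst>::new();
//! let mut ctrl_plane = ControlPlane::default();
//! buf.reserve_labels_for_blocks(blocks.len());
//!
//! for (i, block) in blocks.iter().enumerate() {
//!     let upcoming = worst_case_block_size(block);
//!     // Between blocks is a safe place for an island: it cannot be
//!     // executed accidentally.
//!     if buf.island_needed(upcoming) {
//!         buf.emit_island(upcoming, &mut ctrl_plane);
//!     }
//!     buf.bind_label(MachLabel::from_block(BlockIndex::new(i)), &mut ctrl_plane);
//!     // Calls `put*`, `use_label_at_offset`, `add_{cond,uncond}_branch`, ...
//!     emit_block(&mut buf, block);
//! }
//! // Finalizing the buffer (applying remaining fixups and emitting deferred
//! // constants/traps) is not shown here.
//! ```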
172
173use crate::binemit::{Addend, CodeOffset, Reloc};
174use crate::ir::function::FunctionParameters;
175use crate::ir::{DebugTag, ExceptionTag, ExternalName, RelSourceLoc, SourceLoc, TrapCode};
176use crate::isa::unwind::UnwindInst;
177use crate::machinst::{
178 BlockIndex, MachInstLabelUse, TextSectionBuilder, VCodeConstant, VCodeConstants, VCodeInst,
179};
180use crate::trace;
181use crate::{MachInstEmitState, ir};
182use crate::{VCodeConstantData, timing};
183use core::ops::Range;
184use cranelift_control::ControlPlane;
185use cranelift_entity::{PrimaryMap, SecondaryMap, entity_impl};
186use smallvec::SmallVec;
187use std::cmp::Ordering;
188use std::collections::BinaryHeap;
189use std::mem;
190use std::string::String;
191use std::vec::Vec;
192
193#[cfg(feature = "enable-serde")]
194use serde::{Deserialize, Serialize};
195
196#[cfg(feature = "enable-serde")]
197pub trait CompilePhase {
198 type MachSrcLocType: for<'a> Deserialize<'a> + Serialize + core::fmt::Debug + PartialEq + Clone;
199 type SourceLocType: for<'a> Deserialize<'a> + Serialize + core::fmt::Debug + PartialEq + Clone;
200}
201
202#[cfg(not(feature = "enable-serde"))]
203pub trait CompilePhase {
204 type MachSrcLocType: core::fmt::Debug + PartialEq + Clone;
205 type SourceLocType: core::fmt::Debug + PartialEq + Clone;
206}
207
208/// Status of a compiled artifact that needs patching before being used.
209#[derive(Clone, Debug, PartialEq)]
210#[cfg_attr(feature = "enable-serde", derive(Serialize, Deserialize))]
211pub struct Stencil;
212
213/// Status of a compiled artifact ready to use.
214#[derive(Clone, Debug, PartialEq)]
215pub struct Final;
216
217impl CompilePhase for Stencil {
218 type MachSrcLocType = MachSrcLoc<Stencil>;
219 type SourceLocType = RelSourceLoc;
220}
221
222impl CompilePhase for Final {
223 type MachSrcLocType = MachSrcLoc<Final>;
224 type SourceLocType = SourceLoc;
225}
226
227#[derive(Clone, Copy, Debug, PartialEq, Eq)]
228enum ForceVeneers {
229 Yes,
230 No,
231}
232
233/// A buffer of output to be produced, fixed up, and then emitted to a CodeSink
234/// in bulk.
235///
236/// This struct uses `SmallVec`s to support small-ish function bodies without
237/// any heap allocation. As such, it will be several kilobytes large. This is
238/// likely fine as long as it is stack-allocated for function emission then
239/// thrown away; but beware if many buffer objects are retained persistently.
240pub struct MachBuffer<I: VCodeInst> {
241 /// The buffer contents, as raw bytes.
242 data: SmallVec<[u8; 1024]>,
243 /// The required alignment of this buffer.
244 min_alignment: u32,
245 /// Any relocations referring to this code. Note that only *external*
246 /// relocations are tracked here; references to labels within the buffer are
247 /// resolved before emission.
248 relocs: SmallVec<[MachReloc; 16]>,
249 /// Any trap records referring to this code.
250 traps: SmallVec<[MachTrap; 16]>,
251 /// Any call site records referring to this code.
252 call_sites: SmallVec<[MachCallSite; 16]>,
253 /// Any patchable call site locations.
254 patchable_call_sites: SmallVec<[MachPatchableCallSite; 16]>,
255 /// Any exception-handler records referred to at call sites.
256 exception_handlers: SmallVec<[MachExceptionHandler; 16]>,
257 /// Any source location mappings referring to this code.
258 srclocs: SmallVec<[MachSrcLoc<Stencil>; 64]>,
259 /// Any debug tags referring to this code.
260 debug_tags: Vec<MachDebugTags>,
261 /// Pool of debug tags referenced by `MachDebugTags` entries.
262 debug_tag_pool: Vec<DebugTag>,
263 /// Any user stack maps for this code.
264 ///
265 /// Each entry is an `(offset, span, stack_map)` triple. Entries are sorted
266 /// by code offset, and each stack map covers `span` bytes on the stack.
267 user_stack_maps: SmallVec<[(CodeOffset, u32, ir::UserStackMap); 8]>,
268 /// Any unwind info at a given location.
269 unwind_info: SmallVec<[(CodeOffset, UnwindInst); 8]>,
270 /// The current source location in progress (after `start_srcloc()` and
271 /// before `end_srcloc()`). This is a (start_offset, src_loc) tuple.
272 cur_srcloc: Option<(CodeOffset, RelSourceLoc)>,
273 /// Known label offsets; `UNKNOWN_LABEL_OFFSET` if unknown.
274 label_offsets: SmallVec<[CodeOffset; 16]>,
275 /// Label aliases: when one label points to an unconditional jump, and that
276 /// jump points to another label, we can redirect references to the first
277 /// label immediately to the second.
278 ///
279 /// Invariant: we don't have label-alias cycles. We ensure this by,
280 /// before setting label A to alias label B, resolving B's alias
281 /// target (iteratively until a non-aliased label); if B is already
282 /// aliased to A, then we cannot alias A back to B.
283 label_aliases: SmallVec<[MachLabel; 16]>,
284 /// Constants that must be emitted at some point.
285 pending_constants: SmallVec<[VCodeConstant; 16]>,
286 /// Byte size of all constants in `pending_constants`.
287 pending_constants_size: CodeOffset,
288 /// Traps that must be emitted at some point.
289 pending_traps: SmallVec<[MachLabelTrap; 16]>,
290 /// Fixups that haven't yet been flushed into `fixup_records` below and may
291 /// be related to branches that are chomped. These all get added to
292 /// `fixup_records` during island emission.
293 pending_fixup_records: SmallVec<[MachLabelFixup<I>; 16]>,
294 /// The nearest upcoming deadline for entries in `pending_fixup_records`.
295 pending_fixup_deadline: CodeOffset,
296 /// Fixups that must be performed after all code is emitted.
297 fixup_records: BinaryHeap<MachLabelFixup<I>>,
298 /// Latest branches, to facilitate in-place editing for better fallthrough
299 /// behavior and empty-block removal.
300 latest_branches: SmallVec<[MachBranch; 4]>,
301 /// All labels at the current offset (emission tail). This is lazily
302 /// cleared: it is actually accurate as long as the current offset is
303 /// `labels_at_tail_off`, but if `cur_offset()` has grown larger, it should
304 /// be considered as empty.
305 ///
306 /// For correctness, this *must* be complete (i.e., the vector must contain
307 /// all labels whose offsets are resolved to the current tail), because we
308 /// rely on it to update labels when we truncate branches.
309 labels_at_tail: SmallVec<[MachLabel; 4]>,
310 /// The last offset at which `labels_at_tail` is valid. It is conceptually
311 /// always describing the tail of the buffer, but we do not clear
312 /// `labels_at_tail` eagerly when the tail grows, rather we lazily clear it
313 /// when the offset has grown past this (`labels_at_tail_off`) point.
314 /// Always <= `cur_offset()`.
315 labels_at_tail_off: CodeOffset,
316 /// Metadata about all constants that this function has access to.
317 ///
318 /// This records the size/alignment of all constants (not the actual data)
319 /// along with the last available label generated for the constant. This map
320 /// is consulted when constants are referred to and the label assigned to a
321 /// constant may change over time as well.
322 constants: PrimaryMap<VCodeConstant, MachBufferConstant>,
323 /// All recorded usages of constants as pairs of the constant and where the
324 /// constant needs to be placed within `self.data`. Note that the same
325 /// constant may appear in this array multiple times if it was emitted
326 /// multiple times.
327 used_constants: SmallVec<[(VCodeConstant, CodeOffset); 4]>,
328 /// Indicates when a patchable region is currently open, to guard that it's
329 /// not possible to nest patchable regions.
330 open_patchable: bool,
331 /// Stack frame layout metadata. If provided for a MachBuffer
332 /// containing a function body, this allows interpretation of
333 /// runtime state given a view of an active stack frame.
334 frame_layout: Option<MachBufferFrameLayout>,
335}
336
337impl MachBufferFinalized<Stencil> {
338 /// Get a finalized machine buffer by applying the function's base source location.
339 pub fn apply_base_srcloc(self, base_srcloc: SourceLoc) -> MachBufferFinalized<Final> {
340 MachBufferFinalized {
341 data: self.data,
342 relocs: self.relocs,
343 traps: self.traps,
344 call_sites: self.call_sites,
345 patchable_call_sites: self.patchable_call_sites,
346 exception_handlers: self.exception_handlers,
347 srclocs: self
348 .srclocs
349 .into_iter()
350 .map(|srcloc| srcloc.apply_base_srcloc(base_srcloc))
351 .collect(),
352 debug_tags: self.debug_tags,
353 debug_tag_pool: self.debug_tag_pool,
354 user_stack_maps: self.user_stack_maps,
355 unwind_info: self.unwind_info,
356 alignment: self.alignment,
357 frame_layout: self.frame_layout,
358 nop: self.nop,
359 }
360 }
361}
362
363/// A `MachBuffer` once emission is completed: holds generated code and records,
364/// without fixups. This allows the type to be independent of the backend.
365#[derive(PartialEq, Debug, Clone)]
366#[cfg_attr(
367 feature = "enable-serde",
368 derive(serde_derive::Serialize, serde_derive::Deserialize)
369)]
370pub struct MachBufferFinalized<T: CompilePhase> {
371 /// The buffer contents, as raw bytes.
372 pub(crate) data: SmallVec<[u8; 1024]>,
373 /// Any relocations referring to this code. Note that only *external*
374 /// relocations are tracked here; references to labels within the buffer are
375 /// resolved before emission.
376 pub(crate) relocs: SmallVec<[FinalizedMachReloc; 16]>,
377 /// Any trap records referring to this code.
378 pub(crate) traps: SmallVec<[MachTrap; 16]>,
379 /// Any call site records referring to this code.
380 pub(crate) call_sites: SmallVec<[MachCallSite; 16]>,
381 /// Any patchable call site locations referring to this code.
382 pub(crate) patchable_call_sites: SmallVec<[MachPatchableCallSite; 16]>,
383 /// Any exception-handler records referred to at call sites.
384 pub(crate) exception_handlers: SmallVec<[FinalizedMachExceptionHandler; 16]>,
385 /// Any source location mappings referring to this code.
386 pub(crate) srclocs: SmallVec<[T::MachSrcLocType; 64]>,
387 /// Any debug tags referring to this code.
388 pub(crate) debug_tags: Vec<MachDebugTags>,
389 /// Pool of debug tags referenced by `MachDebugTags` entries.
390 pub(crate) debug_tag_pool: Vec<DebugTag>,
391 /// Any user stack maps for this code.
392 ///
393 /// Each entry is an `(offset, span, stack_map)` triple. Entries are sorted
394 /// by code offset, and each stack map covers `span` bytes on the stack.
395 pub(crate) user_stack_maps: SmallVec<[(CodeOffset, u32, ir::UserStackMap); 8]>,
396 /// Stack frame layout metadata. If provided for a MachBuffer
397 /// containing a function body, this allows interpretation of
398 /// runtime state given a view of an active stack frame.
399 pub(crate) frame_layout: Option<MachBufferFrameLayout>,
400 /// Any unwind info at a given location.
401 pub unwind_info: SmallVec<[(CodeOffset, UnwindInst); 8]>,
402 /// The required alignment of this buffer.
403 pub alignment: u32,
404 /// The means by which to NOP out patchable call sites.
405 ///
406 /// This allows a consumer of a `MachBufferFinalized` to disable
407 /// patchable call sites (which are enabled by default) without
408 /// specific knowledge of the target ISA.
409 pub nop: SmallVec<[u8; 8]>,
410}
411
412const UNKNOWN_LABEL_OFFSET: CodeOffset = 0xffff_ffff;
413const UNKNOWN_LABEL: MachLabel = MachLabel(0xffff_ffff);
414
415/// Threshold on max length of `labels_at_this_branch` list to avoid
416/// unbounded quadratic behavior (see comment below at use-site).
417const LABEL_LIST_THRESHOLD: usize = 100;
418
419/// A label refers to some offset in a `MachBuffer`. It may not be resolved at
420/// the point at which it is used by emitted code; the buffer records "fixups"
421/// for references to the label, and will come back and patch the code
422/// appropriately when the label's location is eventually known.
423#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
424pub struct MachLabel(u32);
425entity_impl!(MachLabel);
426
427impl MachLabel {
428 /// Get a label for a block. (The first N MachLabels are always reserved for
429 /// the N blocks in the vcode.)
430 pub fn from_block(bindex: BlockIndex) -> MachLabel {
431 MachLabel(bindex.index() as u32)
432 }
433
434 /// Creates a string representing this label, for convenience.
435 pub fn to_string(&self) -> String {
436 format!("label{}", self.0)
437 }
438}
439
440impl Default for MachLabel {
441 fn default() -> Self {
442 UNKNOWN_LABEL
443 }
444}
445
446/// Represents the beginning of an editable region in the [`MachBuffer`], while code emission is
447/// still occurring. An [`OpenPatchRegion`] is closed by [`MachBuffer::end_patchable`], consuming
448/// the [`OpenPatchRegion`] token in the process.
449pub struct OpenPatchRegion(usize);
450
451/// A region in the [`MachBuffer`] code buffer that can be edited prior to finalization. An example
452/// of where you might want to use this is for patching instructions that mention constants that
453/// won't be known until later: [`MachBuffer::start_patchable`] can be used to begin the patchable
454/// region, instructions can be emitted with placeholder constants, and the [`PatchRegion`] token
455/// can be produced by [`MachBuffer::end_patchable`]. Once the values of those constants are known,
456/// the [`PatchRegion::patch`] function can be used to get a mutable buffer to the instruction
457/// bytes, and the constant uses can be updated directly.
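///
/// A minimal sketch, assuming a 4-byte placeholder whose final value `value`
/// (a `u32`) becomes known only after emission has moved on:
///
/// ```ignore
/// let open = buf.start_patchable();
/// buf.put4(0); // placeholder constant
/// let region = buf.end_patchable(open);
/// // ... further emission ...
/// // Once the real value is known, rewrite the placeholder in place:
/// region.patch(&mut buf).copy_from_slice(&value.to_le_bytes());
/// ```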
458pub struct PatchRegion {
459 range: Range<usize>,
460}
461
462impl PatchRegion {
463 /// Consume the patch region to yield a mutable slice of the [`MachBuffer`] data buffer.
464 pub fn patch<I: VCodeInst>(self, buffer: &mut MachBuffer<I>) -> &mut [u8] {
465 &mut buffer.data[self.range]
466 }
467}
468
469impl<I: VCodeInst> MachBuffer<I> {
470 /// Create a new, empty `MachBuffer`.
472 pub fn new() -> MachBuffer<I> {
473 MachBuffer {
474 data: SmallVec::new(),
475 min_alignment: I::function_alignment().minimum,
476 relocs: SmallVec::new(),
477 traps: SmallVec::new(),
478 call_sites: SmallVec::new(),
479 patchable_call_sites: SmallVec::new(),
480 exception_handlers: SmallVec::new(),
481 srclocs: SmallVec::new(),
482 debug_tags: vec![],
483 debug_tag_pool: vec![],
484 user_stack_maps: SmallVec::new(),
485 unwind_info: SmallVec::new(),
486 cur_srcloc: None,
487 label_offsets: SmallVec::new(),
488 label_aliases: SmallVec::new(),
489 pending_constants: SmallVec::new(),
490 pending_constants_size: 0,
491 pending_traps: SmallVec::new(),
492 pending_fixup_records: SmallVec::new(),
493 pending_fixup_deadline: u32::MAX,
494 fixup_records: Default::default(),
495 latest_branches: SmallVec::new(),
496 labels_at_tail: SmallVec::new(),
497 labels_at_tail_off: 0,
498 constants: Default::default(),
499 used_constants: Default::default(),
500 open_patchable: false,
501 frame_layout: None,
502 }
503 }
504
505 /// Current offset from start of buffer.
506 pub fn cur_offset(&self) -> CodeOffset {
507 self.data.len() as CodeOffset
508 }
509
510 /// Add a byte.
511 pub fn put1(&mut self, value: u8) {
512 self.data.push(value);
513
514 // Post-invariant: conceptual-labels_at_tail contains a complete and
515 // precise list of labels bound at `cur_offset()`. We have advanced
516 // `cur_offset()`, hence if it had been equal to `labels_at_tail_off`
517 // before, it is not anymore (and it cannot become equal, because
518 // `labels_at_tail_off` is always <= `cur_offset()`). Thus the list is
519 // conceptually empty (even though it is only lazily cleared). No labels
520 // can be bound at this new offset (by invariant on `label_offsets`).
521 // Hence the invariant holds.
522 }
523
524 /// Add 2 bytes.
525 pub fn put2(&mut self, value: u16) {
526 let bytes = value.to_le_bytes();
527 self.data.extend_from_slice(&bytes[..]);
528
529 // Post-invariant: as for `put1()`.
530 }
531
532 /// Add 4 bytes.
533 pub fn put4(&mut self, value: u32) {
534 let bytes = value.to_le_bytes();
535 self.data.extend_from_slice(&bytes[..]);
536
537 // Post-invariant: as for `put1()`.
538 }
539
540 /// Add 8 bytes.
541 pub fn put8(&mut self, value: u64) {
542 let bytes = value.to_le_bytes();
543 self.data.extend_from_slice(&bytes[..]);
544
545 // Post-invariant: as for `put1()`.
546 }
547
548 /// Add a slice of bytes.
549 pub fn put_data(&mut self, data: &[u8]) {
550 self.data.extend_from_slice(data);
551
552 // Post-invariant: as for `put1()`.
553 }
554
555 /// Reserve appended space and return a mutable slice referring to it.
556 pub fn get_appended_space(&mut self, len: usize) -> &mut [u8] {
557 let off = self.data.len();
558 let new_len = self.data.len() + len;
559 self.data.resize(new_len, 0);
560 &mut self.data[off..]
561
562 // Post-invariant: as for `put1()`.
563 }
564
565 /// Align up to the given alignment.
566 pub fn align_to(&mut self, align_to: CodeOffset) {
567 trace!("MachBuffer: align to {}", align_to);
568 assert!(
569 align_to.is_power_of_two(),
570 "{align_to} is not a power of two"
571 );
572 while self.cur_offset() & (align_to - 1) != 0 {
573 self.put1(0);
574 }
575
576 // Post-invariant: as for `put1()`.
577 }
578
579 /// Begin a region of patchable code. There is one requirement for the
580 /// code that is emitted: It must not introduce any instructions that
581 /// could be chomped (branches are an example of this). In other words,
582 /// you must not call [`MachBuffer::add_cond_branch`] or
583 /// [`MachBuffer::add_uncond_branch`] between calls to this method and
584 /// [`MachBuffer::end_patchable`].
585 pub fn start_patchable(&mut self) -> OpenPatchRegion {
586 assert!(!self.open_patchable, "Patchable regions may not be nested");
587 self.open_patchable = true;
588 OpenPatchRegion(usize::try_from(self.cur_offset()).unwrap())
589 }
590
591 /// End a region of patchable code, yielding a [`PatchRegion`] value that
592 /// can be consumed later to produce a one-off mutable slice to the
593 /// associated region of the data buffer.
594 pub fn end_patchable(&mut self, open: OpenPatchRegion) -> PatchRegion {
595 // No need to assert the state of `open_patchable` here, as we take
596 // ownership of the only `OpenPatchRegion` value.
597 self.open_patchable = false;
598 let end = usize::try_from(self.cur_offset()).unwrap();
599 PatchRegion { range: open.0..end }
600 }
601
602 /// Allocate a `Label` to refer to some offset. May not be bound to a fixed
603 /// offset yet.
604 pub fn get_label(&mut self) -> MachLabel {
605 let l = self.label_offsets.len() as u32;
606 self.label_offsets.push(UNKNOWN_LABEL_OFFSET);
607 self.label_aliases.push(UNKNOWN_LABEL);
608 trace!("MachBuffer: new label -> {:?}", MachLabel(l));
609 MachLabel(l)
610
611 // Post-invariant: the only mutation is to add a new label; it has no
612 // bound offset yet, so it trivially satisfies all invariants.
613 }
614
615 /// Reserve the first N MachLabels for blocks.
616 pub fn reserve_labels_for_blocks(&mut self, blocks: usize) {
617 trace!("MachBuffer: first {} labels are for blocks", blocks);
618 debug_assert!(self.label_offsets.is_empty());
619 self.label_offsets.resize(blocks, UNKNOWN_LABEL_OFFSET);
620 self.label_aliases.resize(blocks, UNKNOWN_LABEL);
621
622 // Post-invariant: as for `get_label()`.
623 }
624
625 /// Registers metadata in this `MachBuffer` about the `constants` provided.
626 ///
627 /// This will record the size/alignment of all constants which will prepare
628 /// them for emission later on.
629 pub fn register_constants(&mut self, constants: &VCodeConstants) {
630 for (c, val) in constants.iter() {
631 self.register_constant(&c, val);
632 }
633 }
634
635 /// Similar to [`MachBuffer::register_constants`] but registers a
636 /// single constant metadata. This function is useful in
637 /// situations where not all constants are known at the time of
638 /// emission.
639 pub fn register_constant(&mut self, constant: &VCodeConstant, data: &VCodeConstantData) {
640 let c2 = self.constants.push(MachBufferConstant {
641 upcoming_label: None,
642 align: data.alignment(),
643 size: data.as_slice().len(),
644 });
645 assert_eq!(*constant, c2);
646 }
647
648 /// Completes constant emission by iterating over `self.used_constants` and
649 /// filling in the "holes" with the constant values provided by `constants`.
650 ///
651 /// Returns the alignment required for this entire buffer. Alignment starts
652 /// at the ISA's minimum function alignment and can be increased due to
653 /// constant requirements.
654 fn finish_constants(&mut self, constants: &VCodeConstants) -> u32 {
655 let mut alignment = self.min_alignment;
656 for (constant, offset) in mem::take(&mut self.used_constants) {
657 let constant = constants.get(constant);
658 let data = constant.as_slice();
659 self.data[offset as usize..][..data.len()].copy_from_slice(data);
660 alignment = constant.alignment().max(alignment);
661 }
662 alignment
663 }
664
665 /// Returns a label that can be used to refer to the `constant` provided.
666 ///
667 /// This will automatically defer a new constant to be emitted for
668 /// `constant` if it has not been previously emitted. Note that this
669 /// function may return a different label for the same constant at
670 /// different points in time. The label is valid to use only from the
671 /// current location; the MachBuffer takes care to emit the same constant
672 /// multiple times if needed so the constant is always in range.
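    ///
    /// A hedged sketch of typical use; the `Ldr19` label-use kind and the
    /// `enc_ldr_literal` encoding helper are illustrative placeholders, not
    /// part of this module:
    ///
    /// ```ignore
    /// // Emit a PC-relative load of a constant. If the constant has not yet
    /// // been emitted, this defers it to the next island.
    /// let label = sink.get_label_for_constant(constant);
    /// let off = sink.cur_offset();
    /// sink.use_label_at_offset(off, label, LabelUse::Ldr19);
    /// sink.put4(enc_ldr_literal(rd, 0));
    /// ```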
673 pub fn get_label_for_constant(&mut self, constant: VCodeConstant) -> MachLabel {
674 let MachBufferConstant {
675 align,
676 size,
677 upcoming_label,
678 } = self.constants[constant];
679 if let Some(label) = upcoming_label {
680 return label;
681 }
682
683 let label = self.get_label();
684 trace!(
685 "defer constant: eventually emit {size} bytes aligned \
686 to {align} at label {label:?}",
687 );
688 self.pending_constants.push(constant);
689 self.pending_constants_size += size as u32;
690 self.constants[constant].upcoming_label = Some(label);
691 label
692 }
693
694 /// Bind a label to the current offset. A label can only be bound once.
695 pub fn bind_label(&mut self, label: MachLabel, ctrl_plane: &mut ControlPlane) {
696 trace!(
697 "MachBuffer: bind label {:?} at offset {}",
698 label,
699 self.cur_offset()
700 );
701 debug_assert_eq!(self.label_offsets[label.0 as usize], UNKNOWN_LABEL_OFFSET);
702 debug_assert_eq!(self.label_aliases[label.0 as usize], UNKNOWN_LABEL);
703 let offset = self.cur_offset();
704 self.label_offsets[label.0 as usize] = offset;
705 self.lazily_clear_labels_at_tail();
706 self.labels_at_tail.push(label);
707
708 // Invariants hold: bound offset of label is <= cur_offset (in fact it
709 // is equal). If the `labels_at_tail` list was complete and precise
710 // before, it is still, because we have bound this label to the current
711 // offset and added it to the list (which contains all labels at the
712 // current offset).
713
714 self.optimize_branches(ctrl_plane);
715
716 // Post-invariant: by `optimize_branches()` (see argument there).
717 }
718
719 /// Lazily clear `labels_at_tail` if the tail offset has moved beyond the
720 /// offset that it applies to.
721 fn lazily_clear_labels_at_tail(&mut self) {
722 let offset = self.cur_offset();
723 if offset > self.labels_at_tail_off {
724 self.labels_at_tail_off = offset;
725 self.labels_at_tail.clear();
726 }
727
728 // Post-invariant: either labels_at_tail_off was at cur_offset, and
729 // state is untouched, or was less than cur_offset, in which case the
730 // labels_at_tail list was conceptually empty, and is now actually
731 // empty.
732 }
733
734 /// Resolve a label to an offset, if known. May return `UNKNOWN_LABEL_OFFSET`.
735 pub(crate) fn resolve_label_offset(&self, mut label: MachLabel) -> CodeOffset {
736 let mut iters = 0;
737 while self.label_aliases[label.0 as usize] != UNKNOWN_LABEL {
738 label = self.label_aliases[label.0 as usize];
739 // To protect against an infinite loop (despite our assurances to
740 // ourselves that the invariants make this impossible), assert out
741 // after 1M iterations. The number of basic blocks is limited
742 // in most contexts anyway so this should be impossible to hit with
743 // a legitimate input.
744 iters += 1;
745 assert!(iters < 1_000_000, "Unexpected cycle in label aliases");
746 }
747 self.label_offsets[label.0 as usize]
748
749 // Post-invariant: no mutations.
750 }
751
752 /// Emit a reference to the given label with the given reference type (i.e.,
753 /// branch-instruction format) at the current offset. This is like a
754 /// relocation, but handled internally.
755 ///
756 /// This can be called before the branch is actually emitted; fixups will
757 /// not happen until an island is emitted or the buffer is finished.
758 pub fn use_label_at_offset(&mut self, offset: CodeOffset, label: MachLabel, kind: I::LabelUse) {
759 trace!(
760 "MachBuffer: use_label_at_offset: offset {} label {:?} kind {:?}",
761 offset, label, kind
762 );
763
764 // Add the fixup, and update the worst-case island size based on a
765 // veneer for this label use.
766 let fixup = MachLabelFixup {
767 label,
768 offset,
769 kind,
770 };
771 self.pending_fixup_deadline = self.pending_fixup_deadline.min(fixup.deadline());
772 self.pending_fixup_records.push(fixup);
773
774 // Post-invariant: no mutations to branches/labels data structures.
775 }
776
777 /// Inform the buffer of an unconditional branch at the given offset,
778 /// targeting the given label. May be used to optimize branches.
779 /// The last added label-use must correspond to this branch.
780 /// This must be called when the current offset is equal to `start`; i.e.,
781 /// before actually emitting the branch. This implies that for a branch that
782 /// uses a label and is eligible for optimizations by the MachBuffer, the
783 /// proper sequence is:
784 ///
785 /// - Call `use_label_at_offset()` to emit the fixup record.
786 /// - Call `add_uncond_branch()` to make note of the branch.
787 /// - Emit the bytes for the branch's machine code.
788 ///
789 /// Additional requirement: no labels may be bound between `start` and `end`
790 /// (exclusive on both ends).
791 pub fn add_uncond_branch(&mut self, start: CodeOffset, end: CodeOffset, target: MachLabel) {
792 debug_assert!(
793 !self.open_patchable,
794 "Branch instruction inserted within a patchable region"
795 );
796 assert!(self.cur_offset() == start);
797 debug_assert!(end > start);
798 assert!(!self.pending_fixup_records.is_empty());
799 let fixup = self.pending_fixup_records.len() - 1;
800 self.lazily_clear_labels_at_tail();
801 self.latest_branches.push(MachBranch {
802 start,
803 end,
804 target,
805 fixup,
806 inverted: None,
807 labels_at_this_branch: self.labels_at_tail.clone(),
808 });
809
810 // Post-invariant: we asserted branch start is current tail; the list of
811 // labels at branch is cloned from list of labels at current tail.
812 }
813
814 /// Inform the buffer of a conditional branch at the given offset,
815 /// targeting the given label. May be used to optimize branches.
816 /// The last added label-use must correspond to this branch.
817 ///
818 /// Additional requirement: no labels may be bound between `start` and `end`
819 /// (exclusive on both ends).
820 pub fn add_cond_branch(
821 &mut self,
822 start: CodeOffset,
823 end: CodeOffset,
824 target: MachLabel,
825 inverted: &[u8],
826 ) {
827 debug_assert!(
828 !self.open_patchable,
829 "Branch instruction inserted within a patchable region"
830 );
831 assert!(self.cur_offset() == start);
832 debug_assert!(end > start);
833 assert!(!self.pending_fixup_records.is_empty());
834 debug_assert!(
835 inverted.len() == (end - start) as usize,
836 "branch length = {}, but inverted length = {}",
837 end - start,
838 inverted.len()
839 );
840 let fixup = self.pending_fixup_records.len() - 1;
841 let inverted = Some(SmallVec::from(inverted));
842 self.lazily_clear_labels_at_tail();
843 self.latest_branches.push(MachBranch {
844 start,
845 end,
846 target,
847 fixup,
848 inverted,
849 labels_at_this_branch: self.labels_at_tail.clone(),
850 });
851
852 // Post-invariant: we asserted branch start is current tail; labels at
853 // branch list is cloned from list of labels at current tail.
854 }
855
856 fn truncate_last_branch(&mut self) {
857 debug_assert!(
858 !self.open_patchable,
859 "Branch instruction truncated within a patchable region"
860 );
861
862 self.lazily_clear_labels_at_tail();
863 // Invariants hold at this point.
864
865 let b = self.latest_branches.pop().unwrap();
866 assert!(b.end == self.cur_offset());
867
868 // State:
869 // [PRE CODE]
870 // Offset b.start, b.labels_at_this_branch:
871 // [BRANCH CODE]
872 // cur_off, self.labels_at_tail -->
873 // (end of buffer)
874 self.data.truncate(b.start as usize);
875 self.pending_fixup_records.truncate(b.fixup);
876
877 // Trim srclocs and debug tags now past the end of the buffer.
878 while let Some(last_srcloc) = self.srclocs.last_mut() {
879 if last_srcloc.end <= b.start {
880 break;
881 }
882 if last_srcloc.start < b.start {
883 last_srcloc.end = b.start;
884 break;
885 }
886 self.srclocs.pop();
887 }
888 while let Some(last_debug_tag) = self.debug_tags.last() {
889 if last_debug_tag.offset <= b.start {
890 break;
891 }
892 self.debug_tags.pop();
893 }
894
895 // State:
896 // [PRE CODE]
897 // cur_off, Offset b.start, b.labels_at_this_branch:
898 // (end of buffer)
899 //
900 // self.labels_at_tail --> (past end of buffer)
901 let cur_off = self.cur_offset();
902 self.labels_at_tail_off = cur_off;
903 // State:
904 // [PRE CODE]
905 // cur_off, Offset b.start, b.labels_at_this_branch,
906 // self.labels_at_tail:
907 // (end of buffer)
908 //
909 // resolve_label_offset(l) for l in labels_at_tail:
910 // (past end of buffer)
911
912 trace!(
913 "truncate_last_branch: truncated {:?}; off now {}",
914 b, cur_off
915 );
916
917 // Fix up resolved label offsets for labels at tail.
918 for &l in &self.labels_at_tail {
919 self.label_offsets[l.0 as usize] = cur_off;
920 }
921 // Old labels_at_this_branch are now at cur_off.
922 self.labels_at_tail.extend(b.labels_at_this_branch);
923
924 // Post-invariant: this operation is defined to truncate the buffer,
925 // which moves cur_off backward, and to move labels at the end of the
926 // buffer back to the start-of-branch offset.
927 //
928 // latest_branches satisfies all invariants:
929 // - it has no branches past the end of the buffer (branches are in
930 // order, we removed the last one, and we truncated the buffer to just
931 // before the start of that branch)
932 // - no labels were moved to lower offsets than the (new) cur_off, so
933 // the labels_at_this_branch list for any other branch need not change.
934 //
935 // labels_at_tail satisfies all invariants:
936 // - all labels that were at the tail after the truncated branch are
937 // moved backward to just before the branch, which becomes the new tail;
938 // thus every element in the list should remain (ensured by `.extend()`
939 // above).
940 // - all labels that refer to the new tail, which is the start-offset of
941 // the truncated branch, must be present. The `labels_at_this_branch`
942 // list in the truncated branch's record is a complete and precise list
943 // of exactly these labels; we append these to labels_at_tail.
944 // - labels_at_tail_off is at cur_off after truncation occurs, so the
945 // list is valid (not to be lazily cleared).
946 //
947 // The stated operation was performed:
948 // - For each label at the end of the buffer prior to this method, it
949 // now resolves to the new (truncated) end of the buffer: it must have
950 // been in `labels_at_tail` (this list is precise and complete, and
951 // the tail was at the end of the truncated branch on entry), and we
952 // iterate over this list and set `label_offsets` to the new tail.
953 // None of these labels could have been an alias (by invariant), so
954 // `label_offsets` is authoritative for each.
955 // - No other labels will be past the end of the buffer, because of the
956 // requirement that no labels be bound to the middle of branch ranges
957 // (see comments to `add_{cond,uncond}_branch()`).
958 // - The buffer is truncated to just before the last branch, and the
959 // fixup record referring to that last branch is removed.
960 }
961
962 /// Performs various optimizations on branches pointing at the current label.
963 pub fn optimize_branches(&mut self, ctrl_plane: &mut ControlPlane) {
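        // The control plane may (e.g., under chaos-mode testing) ask us to
        // skip branch optimization entirely; skipping is always sound, just
        // potentially suboptimal.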
964 if ctrl_plane.get_decision() {
965 return;
966 }
967
968 self.lazily_clear_labels_at_tail();
969 // Invariants valid at this point.
970
971 trace!(
972 "enter optimize_branches:\n b = {:?}\n l = {:?}\n f = {:?}",
973 self.latest_branches, self.labels_at_tail, self.pending_fixup_records
974 );
975
976 // We continue to munch on branches at the tail of the buffer until no
977 // more rules apply. Note that the loop only continues if a branch is
978 // actually truncated (or if labels are redirected away from a branch),
979 // so this always makes progress.
980 while let Some(b) = self.latest_branches.last() {
981 let cur_off = self.cur_offset();
982 trace!("optimize_branches: last branch {:?} at off {}", b, cur_off);
983 // If there has been any code emission since the end of the last branch or
984 // label definition, then there's nothing we can edit (because we
985 // don't move code once placed, only back up and overwrite), so
986 // clear the records and finish.
987 if b.end < cur_off {
988 break;
989 }
990
991 // If the "labels at this branch" list on this branch is
992 // longer than a threshold, don't do any simplification,
993 // and let the branch remain to separate those labels from
994 // the current tail. This avoids quadratic behavior (see
995 // #3468): otherwise, if a long string of "goto next;
996 // next:" patterns are emitted, all of the labels will
997 // coalesce into a long list of aliases for the current
998 // buffer tail. We must track all aliases of the current
999 // tail for correctness, but we are also allowed to skip
1000 // optimization (removal) of any branch, so we take the
1001 // escape hatch here and let it stand. In effect this
1002 // "spreads" the many thousands of labels in the
1003 // pathological case among an actual (harmless but
1004 // suboptimal) instruction once per N labels.
1005 if b.labels_at_this_branch.len() > LABEL_LIST_THRESHOLD {
1006 break;
1007 }
1008
1009 // Invariant: we are looking at a branch that ends at the tail of
1010 // the buffer.
1011
1012 // For any branch, conditional or unconditional:
1013 // - If the target is a label at the current offset, then remove
1014 // the branch, and reset all labels that targeted
1015 // the current offset (end of branch) to the truncated
1016 // end-of-code.
1017 //
1018 // Preserves execution semantics: a branch to its own fallthrough
1019 // address is equivalent to a no-op; in both cases, nextPC is the
1020 // fallthrough.
1021 if self.resolve_label_offset(b.target) == cur_off {
1022 trace!("branch with target == cur off; truncating");
1023 self.truncate_last_branch();
1024 continue;
1025 }
1026
1027 // If latest is an unconditional branch:
1028 //
1029 // - If the branch's target is not its own start address, then for
1030 // each label at the start of branch, make the label an alias of the
1031 // branch target, and remove the label from the "labels at this
1032 // branch" list.
1033 //
1034 // - Preserves execution semantics: an unconditional branch's
1035 // only effect is to set PC to a new PC; this change simply
1036 // collapses one step in the step-semantics.
1037 //
1038 // - Post-invariant: the labels that were bound to the start of
1039 // this branch become aliases, so they must not be present in any
1040 // labels-at-this-branch list or the labels-at-tail list. The
1041 // labels are removed from the latest-branch record's
1042 // labels-at-this-branch list, and are never placed in the
1043 // labels-at-tail list. Furthermore, it is correct that they are
1044 // not in either list, because they are now aliases, and labels
1045 // that are aliases remain aliases forever.
1046 //
1047 // - If there is a prior unconditional branch that ends just before
1048 // this one begins, and this branch has no labels bound to its
1049 // start, then we can truncate this branch, because it is entirely
1050 // unreachable (we have redirected all labels that make it
1051 // reachable otherwise). Do so and continue around the loop.
1052 //
1053 // - Preserves execution semantics: the branch is unreachable,
1054 // because execution can only flow into an instruction from the
1055 // prior instruction's fallthrough or from a branch bound to that
1056 // instruction's start offset. Unconditional branches have no
1057 // fallthrough, so if the prior instruction is an unconditional
1058 // branch, no fallthrough entry can happen. The
1059 // labels-at-this-branch list is complete (by invariant), so if it
1060 // is empty, then the instruction is entirely unreachable. Thus,
1061 // it can be removed.
1062 //
1063 // - Post-invariant: ensured by truncate_last_branch().
1064 //
1065 // - If there is a prior conditional branch whose target label
1066 // resolves to the current offset (branches around the
1067 // unconditional branch), then remove the unconditional branch,
1068 // and make the target of the unconditional the target of the
1069 // conditional instead.
1070 //
1071 // - Preserves execution semantics: previously we had:
1072 //
1073 // L1:
1074 // cond_br L2
1075 // br L3
1076 // L2:
1077 // (end of buffer)
1078 //
1079 // by removing the last branch, we have:
1080 //
1081 // L1:
1082 // cond_br L2
1083 // L2:
1084 // (end of buffer)
1085 //
1086 // we then fix up the records for the conditional branch to
1087 // have:
1088 //
1089 // L1:
1090 // cond_br.inverted L3
1091 // L2:
1092 //
1093 // In the original code, control flow reaches L2 when the
1094 // conditional branch's predicate is true, and L3 otherwise. In
1095 // the optimized code, the same is true.
1096 //
1097 // - Post-invariant: all edits to latest_branches and
1098 // labels_at_tail are performed by `truncate_last_branch()`,
1099 // which maintains the invariants at each step.
1100
1101 if b.is_uncond() {
1102 // Set any label equal to current branch's start as an alias of
1103 // the branch's target, if the target is not the branch itself
1104 // (i.e., an infinite loop).
1105 //
1106 // We cannot perform this aliasing if the target of this branch
1107 // ultimately aliases back here; if so, we need to keep this
1108 // branch, so break out of this loop entirely (and clear the
1109 // latest-branches list below).
1110 //
1111 // Note that this check is what prevents cycles from forming in
1112 // `self.label_aliases`. To see why, consider an arbitrary start
1113 // state:
1114 //
1115 // label_aliases[L1] = L2, label_aliases[L2] = L3, ..., up to
1116 // Ln, which is not aliased.
1117 //
1118 // We would create a cycle if we assigned label_aliases[Ln]
1119 // = L1. Note that the below assignment is the only write
1120 // to label_aliases.
1121 //
1122 // By our other invariants, we have that Ln (`l` below)
1123 // resolves to the offset `b.start`, because it is in the
1124 // set `b.labels_at_this_branch`.
1125 //
1126 // If L1 were already aliased, through some arbitrarily deep
1127 // chain, to Ln, then it must also resolve to this offset
1128 // `b.start`.
1129 //
1130 // By checking the resolution of `L1` against this offset,
1131 // and aborting this branch-simplification if they are
1132 // equal, we prevent the below assignment from ever creating
1133 // a cycle.
1134 if self.resolve_label_offset(b.target) != b.start {
1135 let redirected = b.labels_at_this_branch.len();
1136 for &l in &b.labels_at_this_branch {
1137 trace!(
1138 " -> label at start of branch {:?} redirected to target {:?}",
1139 l, b.target
1140 );
1141 self.label_aliases[l.0 as usize] = b.target;
1142 // NOTE: we continue to ensure the invariant that labels
1143 // pointing to tail of buffer are in `labels_at_tail`
1144 // because we already ensured above that the last branch
1145 // cannot have a target of `cur_off`; so we never have
1146 // to put the label into `labels_at_tail` when moving it
1147 // here.
1148 }
1149 // Maintain invariant: all branches have been redirected
1150 // and are no longer pointing at the start of this branch.
1151 let mut_b = self.latest_branches.last_mut().unwrap();
1152 mut_b.labels_at_this_branch.clear();
1153
1154 if redirected > 0 {
1155 trace!(" -> after label redirects, restarting loop");
1156 continue;
1157 }
1158 } else {
1159 break;
1160 }
1161
1162 let b = self.latest_branches.last().unwrap();
1163
1164 // Examine any immediately preceding branch.
1165 if self.latest_branches.len() > 1 {
1166 let prev_b = &self.latest_branches[self.latest_branches.len() - 2];
1167 trace!(" -> more than one branch; prev_b = {:?}", prev_b);
1168 // This uncond is immediately after another uncond; we
1169 // should have already redirected labels away from this uncond
1170 // (but check to be sure); so we can truncate it.
1171 if prev_b.is_uncond()
1172 && prev_b.end == b.start
1173 && b.labels_at_this_branch.is_empty()
1174 {
1175 trace!(" -> uncond follows another uncond; truncating");
1176 self.truncate_last_branch();
1177 continue;
1178 }
1179
1180 // This uncond is immediately after a conditional, and the
1181 // conditional's target is the end of this uncond, and we've
1182 // already redirected labels away from this uncond; so we can
1183 // truncate this uncond, flip the sense of the conditional, and
1184 // set the conditional's target (in `latest_branches` and in
1185 // `fixup_records`) to the uncond's target.
1186 if prev_b.is_cond()
1187 && prev_b.end == b.start
1188 && self.resolve_label_offset(prev_b.target) == cur_off
1189 {
1190 trace!(
1191 " -> uncond follows a conditional, and conditional's target resolves to current offset"
1192 );
1193 // Save the target of the uncond (this becomes the
1194 // target of the cond), and truncate the uncond.
1195 let target = b.target;
1196 let data = prev_b.inverted.clone().unwrap();
1197 self.truncate_last_branch();
1198
1199 // Mutate the code and cond branch.
1200 let off_before_edit = self.cur_offset();
1201 let prev_b = self.latest_branches.last_mut().unwrap();
1202 let not_inverted = SmallVec::from(
1203 &self.data[(prev_b.start as usize)..(prev_b.end as usize)],
1204 );
1205
1206 // Low-level edit: replaces bytes of branch with
1207 // inverted form. cur_off remains the same afterward, so
1208 // we do not need to modify label data structures.
1209 self.data.truncate(prev_b.start as usize);
1210 self.data.extend_from_slice(&data[..]);
1211
1212 // Save the original code as the inversion of the
1213 // inverted branch, in case we later edit this branch
1214 // again.
1215 prev_b.inverted = Some(not_inverted);
1216 self.pending_fixup_records[prev_b.fixup].label = target;
1217 trace!(" -> reassigning target of condbr to {:?}", target);
1218 prev_b.target = target;
1219 debug_assert_eq!(off_before_edit, self.cur_offset());
1220 continue;
1221 }
1222 }
1223 }
1224
1225 // If we couldn't do anything with the last branch, then break.
1226 break;
1227 }
1228
1229 self.purge_latest_branches();
1230
1231 trace!(
1232 "leave optimize_branches:\n b = {:?}\n l = {:?}\n f = {:?}",
1233 self.latest_branches, self.labels_at_tail, self.pending_fixup_records
1234 );
1235 }
1236
1237 fn purge_latest_branches(&mut self) {
1238 // All of our branch simplification rules work only if a branch ends at
1239 // the tail of the buffer, with no following code; and branches are in
1240 // order in latest_branches; so if the last entry ends prior to
1241 // cur_offset, then clear all entries.
1242 let cur_off = self.cur_offset();
1243 if let Some(l) = self.latest_branches.last() {
1244 if l.end < cur_off {
1245 trace!("purge_latest_branches: removing branch {:?}", l);
1246 self.latest_branches.clear();
1247 }
1248 }
1249
1250 // Post-invariant: no invariant requires any branch to appear in
1251 // `latest_branches`; it is always optional. The list-clear above thus
1252 // preserves all semantics.
1253 }
1254
1255 /// Emit a trap at some point in the future with the specified code and
1256 /// stack map.
1257 ///
1258 /// This function returns a [`MachLabel`] which will be the future address
1259 /// of the trap. Jumps should refer to this label, likely by using the
1260 /// [`MachBuffer::use_label_at_offset`] method, to get a relocation
1261 /// patched in once the address of the trap is known.
1262 ///
1263 /// This will batch all traps into the end of the function.
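    ///
    /// A hedged sketch of typical use for a guarded operation; the `Branch19`
    /// kind, `enc_cond_br`, and `overflow_cond` are illustrative placeholders:
    ///
    /// ```ignore
    /// // Branch to a deferred trap when the check fails; the trap body itself
    /// // is emitted later, in an island or at the end of the function.
    /// let trap_label = sink.defer_trap(trap_code);
    /// let off = sink.cur_offset();
    /// sink.use_label_at_offset(off, trap_label, LabelUse::Branch19);
    /// sink.put4(enc_cond_br(overflow_cond, 0));
    /// ```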
1264 pub fn defer_trap(&mut self, code: TrapCode) -> MachLabel {
1265 let label = self.get_label();
1266 self.pending_traps.push(MachLabelTrap {
1267 label,
1268 code,
1269 loc: self.cur_srcloc.map(|(_start, loc)| loc),
1270 });
1271 label
1272 }
1273
1274 /// Is an island needed within the next N bytes?
1275 pub fn island_needed(&self, distance: CodeOffset) -> bool {
1276 let deadline = match self.fixup_records.peek() {
1277 Some(fixup) => fixup.deadline().min(self.pending_fixup_deadline),
1278 None => self.pending_fixup_deadline,
1279 };
1280 deadline < u32::MAX && self.worst_case_end_of_island(distance) > deadline
1281 }
1282
1283 /// Returns the maximal offset that islands can reach if `distance` more
1284 /// bytes are appended.
1285 ///
1286 /// This is used to determine if veneers need insertions since jumps that
1287 /// can't reach past this point must get a veneer of some form.
1288 fn worst_case_end_of_island(&self, distance: CodeOffset) -> CodeOffset {
1289 // Assume that all fixups will require veneers and that the veneers are
1290 // the worst-case size for each platform. This is an over-generalization
1291 // to avoid iterating over the `fixup_records` list or maintaining
1292 // information about it as we go along.
1293 let island_worst_case_size = ((self.fixup_records.len() + self.pending_fixup_records.len())
1294 as u32)
1295 * (I::LabelUse::worst_case_veneer_size())
1296 + self.pending_constants_size
1297 + (self.pending_traps.len() * I::TRAP_OPCODE.len()) as u32;
1298 self.cur_offset()
1299 .saturating_add(distance)
1300 .saturating_add(island_worst_case_size)
1301 }
1302
1303 /// Emit all pending constants and required pending veneers.
1304 ///
1305 /// Should only be called if `island_needed()` returns true, i.e., if we
1306 /// actually reach a deadline. It's not necessarily a problem to do so
1307 /// otherwise but it may result in unnecessary work during emission.
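    ///
    /// A minimal sketch of the expected call pattern between blocks (where the
    /// island cannot be executed accidentally); `upcoming_size` is a
    /// caller-supplied estimate of how much code will be emitted before the
    /// next opportunity to emit an island:
    ///
    /// ```ignore
    /// if buffer.island_needed(upcoming_size) {
    ///     buffer.emit_island(upcoming_size, ctrl_plane);
    /// }
    /// ```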
1308 pub fn emit_island(&mut self, distance: CodeOffset, ctrl_plane: &mut ControlPlane) {
1309 self.emit_island_maybe_forced(ForceVeneers::No, distance, ctrl_plane);
1310 }
1311
1312 /// Same as `emit_island`, but an internal API with a `force_veneers`
1313 /// argument to force all veneers to always get emitted for debugging.
1314 fn emit_island_maybe_forced(
1315 &mut self,
1316 force_veneers: ForceVeneers,
1317 distance: CodeOffset,
1318 ctrl_plane: &mut ControlPlane,
1319 ) {
1320 // We're going to purge fixups, so no latest-branch editing can happen
1321 // anymore.
1322 self.latest_branches.clear();
1323
1324 // End the current location tracking since anything emitted during this
1325 // function shouldn't be attributed to whatever the current source
1326 // location is.
1327 //
1328 // Note that the current source location, if it's set right now, will be
1329 // restored at the end of this island emission.
1330 let cur_loc = self.cur_srcloc.map(|(_, loc)| loc);
1331 if cur_loc.is_some() {
1332 self.end_srcloc();
1333 }
1334
1335 let forced_threshold = self.worst_case_end_of_island(distance);
1336
1337 // First flush out all traps/constants so we have more labels in case
1338 // fixups are applied against these labels.
1339 //
1340 // Note that traps are placed first since this typically happens at the
1341 // end of the function and for disassemblers we try to keep all of the
1342 // code contiguous.
1343 for MachLabelTrap { label, code, loc } in mem::take(&mut self.pending_traps) {
1344 // If this trap has source information associated with it then
1345 // emit this information for the trap instruction going out now too.
1346 if let Some(loc) = loc {
1347 self.start_srcloc(loc);
1348 }
1349 self.align_to(I::LabelUse::ALIGN);
1350 self.bind_label(label, ctrl_plane);
1351 self.add_trap(code);
1352 self.put_data(I::TRAP_OPCODE);
1353 if loc.is_some() {
1354 self.end_srcloc();
1355 }
1356 }
1357
1358 for constant in mem::take(&mut self.pending_constants) {
1359 let MachBufferConstant { align, size, .. } = self.constants[constant];
1360 let label = self.constants[constant].upcoming_label.take().unwrap();
1361 self.align_to(align);
1362 self.bind_label(label, ctrl_plane);
1363 self.used_constants.push((constant, self.cur_offset()));
1364 self.get_appended_space(size);
1365 }
1366
1367 // Either handle all pending fixups because they're ready or move them
1368 // onto the `BinaryHeap` tracking all pending fixups if they aren't
1369 // ready.
1370 assert!(self.latest_branches.is_empty());
1371 for fixup in mem::take(&mut self.pending_fixup_records) {
1372 if self.should_apply_fixup(&fixup, forced_threshold) {
1373 self.handle_fixup(fixup, force_veneers, forced_threshold);
1374 } else {
1375 self.fixup_records.push(fixup);
1376 }
1377 }
1378 self.pending_fixup_deadline = u32::MAX;
1379 while let Some(fixup) = self.fixup_records.peek() {
1380 trace!("emit_island: fixup {:?}", fixup);
1381
1382 // If this fixup shouldn't be applied, that means its label isn't
1383 // defined yet and there'll be remaining space to apply a veneer if
1384 // necessary in the future after this island. In that situation,
1385 // because `fixup_records` is sorted by deadline, this loop can
1386 // exit.
1387 if !self.should_apply_fixup(fixup, forced_threshold) {
1388 break;
1389 }
1390
1391 let fixup = self.fixup_records.pop().unwrap();
1392 self.handle_fixup(fixup, force_veneers, forced_threshold);
1393 }
1394
1395 if let Some(loc) = cur_loc {
1396 self.start_srcloc(loc);
1397 }
1398 }
1399
1400 fn should_apply_fixup(&self, fixup: &MachLabelFixup<I>, forced_threshold: CodeOffset) -> bool {
1401 let label_offset = self.resolve_label_offset(fixup.label);
1402 label_offset != UNKNOWN_LABEL_OFFSET || fixup.deadline() < forced_threshold
1403 }
1404
1405 fn handle_fixup(
1406 &mut self,
1407 fixup: MachLabelFixup<I>,
1408 force_veneers: ForceVeneers,
1409 forced_threshold: CodeOffset,
1410 ) {
1411 let MachLabelFixup {
1412 label,
1413 offset,
1414 kind,
1415 } = fixup;
1416 let start = offset as usize;
1417 let end = (offset + kind.patch_size()) as usize;
1418 let label_offset = self.resolve_label_offset(label);
1419
1420 if label_offset != UNKNOWN_LABEL_OFFSET {
1421 // If the offset of the label for this fixup is known then
1422 // we're going to do something here-and-now. We're either going
1423 // to patch the original offset because it's an in-bounds jump,
1424 // or we're going to generate a veneer, patch the fixup to jump
1425 // to the veneer, and then keep going.
1426 //
1427 // If the label comes after the original fixup, then we should
1428 // be guaranteed that the jump is in-bounds. Otherwise there's
1429 // a bug somewhere because this method wasn't called soon
1430 // enough. All forward-jumps are tracked and should get veneers
1431 // before their deadline comes and they're unable to jump
1432 // further.
1433 //
1434 // Otherwise if the label is before the fixup, then that's a
1435 // backwards jump. If it's past the maximum negative range
1436 // then we'll emit a veneer that we jump forward to, which can
1437 // then jump backwards to the actual target.
1438 let veneer_required = if label_offset >= offset {
1439 assert!((label_offset - offset) <= kind.max_pos_range());
1440 false
1441 } else {
1442 (offset - label_offset) > kind.max_neg_range()
1443 };
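// E.g. (illustrative numbers): a fixup at offset 0x20_0100 referring back
// to a label at offset 0x100 spans 0x20_0000 bytes (2 MiB); for a kind
// whose `max_neg_range` is 1 MiB this is out of range, so a forward
// veneer is emitted instead.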
1444 trace!(
1445 " -> label_offset = {}, known, required = {} (pos {} neg {})",
1446 label_offset,
1447 veneer_required,
1448 kind.max_pos_range(),
1449 kind.max_neg_range()
1450 );
1451
1452 if (force_veneers == ForceVeneers::Yes && kind.supports_veneer()) || veneer_required {
1453 self.emit_veneer(label, offset, kind);
1454 } else {
1455 let slice = &mut self.data[start..end];
1456 trace!(
1457 "patching in-range! slice = {slice:?}; offset = {offset:#x}; label_offset = {label_offset:#x}"
1458 );
1459 kind.patch(slice, offset, label_offset);
1460 }
1461 } else {
1462 // If the offset of this label is not known at this time then
1463 // that means that a veneer is required, because after this
1464 // island the original fixup can no longer reach its target.
1465 assert!(forced_threshold - offset > kind.max_pos_range());
1466 self.emit_veneer(label, offset, kind);
1467 }
1468 }
1469
1470 /// Emits a "veneer" so that the `kind` fixup at `offset` can reach `label`.
1471 ///
1472 /// This will generate extra machine code, using `kind`, to synthesize a
1473 /// longer-range jump than `kind` itself allows. The code at `offset` is then
1474 /// patched to jump to our new code, and then the new code is enqueued for
1475 /// a fixup to get processed at some later time.
1476 fn emit_veneer(&mut self, label: MachLabel, offset: CodeOffset, kind: I::LabelUse) {
1477 // If this `kind` doesn't support a veneer then that's a bug in the
1478 // backend because we need to implement support for such a veneer.
1479 assert!(
1480 kind.supports_veneer(),
1481 "jump beyond the range of {kind:?} but a veneer isn't supported",
1482 );
1483
1484 // Allocate space for a veneer in the island.
1485 self.align_to(I::LabelUse::ALIGN);
1486 let veneer_offset = self.cur_offset();
1487 trace!("making a veneer at {}", veneer_offset);
1488 let start = offset as usize;
1489 let end = (offset + kind.patch_size()) as usize;
1490 let slice = &mut self.data[start..end];
1491 // Patch the original label use to refer to the veneer.
1492 trace!(
1493 "patching original at offset {} to veneer offset {}",
1494 offset, veneer_offset
1495 );
1496 kind.patch(slice, offset, veneer_offset);
1497 // Generate the veneer.
1498 let veneer_slice = self.get_appended_space(kind.veneer_size() as usize);
1499 let (veneer_fixup_off, veneer_label_use) =
1500 kind.generate_veneer(veneer_slice, veneer_offset);
1501 trace!(
1502 "generated veneer; fixup offset {}, label_use {:?}",
1503 veneer_fixup_off, veneer_label_use
1504 );
1505 // Register a new use of `label` with our new veneer fixup and
1506 // offset. This'll recalculate deadlines accordingly and
1507 // enqueue this fixup to get processed at some later
1508 // time.
1509 self.use_label_at_offset(veneer_fixup_off, label, veneer_label_use);
1510 }
1511
1512 fn finish_emission_maybe_forcing_veneers(
1513 &mut self,
1514 force_veneers: ForceVeneers,
1515 ctrl_plane: &mut ControlPlane,
1516 ) {
1517 while !self.pending_constants.is_empty()
1518 || !self.pending_traps.is_empty()
1519 || !self.fixup_records.is_empty()
1520 || !self.pending_fixup_records.is_empty()
1521 {
1522 // `emit_island()` will emit any pending veneers and constants, and
1523 // as a side-effect, will also take care of any fixups with resolved
1524 // labels eagerly.
1525 self.emit_island_maybe_forced(force_veneers, u32::MAX, ctrl_plane);
1526 }
1527
1528 // Ensure that all labels have been fixed up after the last island is emitted. This is a
1529 // full (release-mode) assert because an unresolved label means the emitted code is
1530 // incorrect.
1531 assert!(self.fixup_records.is_empty());
1532 assert!(self.pending_fixup_records.is_empty());
1533 }
1534
1535 /// Finish any deferred emissions and/or fixups.
1536 pub fn finish(
1537 mut self,
1538 constants: &VCodeConstants,
1539 ctrl_plane: &mut ControlPlane,
1540 ) -> MachBufferFinalized<Stencil> {
1541 let _tt = timing::vcode_emit_finish();
1542
1543 self.finish_emission_maybe_forcing_veneers(ForceVeneers::No, ctrl_plane);
1544
1545 let alignment = self.finish_constants(constants);
1546
1547 // Resolve all labels to their offsets.
1548 let finalized_relocs = self
1549 .relocs
1550 .iter()
1551 .map(|reloc| FinalizedMachReloc {
1552 offset: reloc.offset,
1553 kind: reloc.kind,
1554 addend: reloc.addend,
1555 target: match &reloc.target {
1556 RelocTarget::ExternalName(name) => {
1557 FinalizedRelocTarget::ExternalName(name.clone())
1558 }
1559 RelocTarget::Label(label) => {
1560 FinalizedRelocTarget::Func(self.resolve_label_offset(*label))
1561 }
1562 },
1563 })
1564 .collect();
1565
1566 let finalized_exception_handlers = self
1567 .exception_handlers
1568 .iter()
1569 .map(|handler| handler.finalize(|label| self.resolve_label_offset(label)))
1570 .collect();
1571
1572 let mut srclocs = self.srclocs;
1573 srclocs.sort_by_key(|entry| entry.start);
1574
1575 MachBufferFinalized {
1576 data: self.data,
1577 relocs: finalized_relocs,
1578 traps: self.traps,
1579 call_sites: self.call_sites,
1580 patchable_call_sites: self.patchable_call_sites,
1581 exception_handlers: finalized_exception_handlers,
1582 srclocs,
1583 debug_tags: self.debug_tags,
1584 debug_tag_pool: self.debug_tag_pool,
1585 user_stack_maps: self.user_stack_maps,
1586 unwind_info: self.unwind_info,
1587 alignment,
1588 frame_layout: self.frame_layout,
1589 nop: I::gen_nop_unit(),
1590 }
1591 }
1592
1593 /// Add an external relocation at the given offset.
1594 pub fn add_reloc_at_offset<T: Into<RelocTarget> + Clone>(
1595 &mut self,
1596 offset: CodeOffset,
1597 kind: Reloc,
1598 target: &T,
1599 addend: Addend,
1600 ) {
1601 let target: RelocTarget = target.clone().into();
1602 // FIXME(#3277): This should use `I::LabelUse::from_reloc` to optionally
1603 // generate a label-use statement to track whether an island is possibly
1604 // needed to escape this function to actually get to the external name.
1605 // This is most likely to come up on AArch64 where calls between
1606 // functions use a 26-bit signed offset which gives +/- 64MB. This means
1607 // that if a function is 128MB in size and there's a call in the middle
1608 // it's impossible to reach the actual target. Also, while it's
1609 // technically possible to jump to the start of a function and then jump
1610 // further, island insertion below always inserts islands after
1611 // previously appended code, so for Cranelift's own implementation this
1612 // is also a problem for 64MB AArch64 functions that start with a call
1613 // instruction: such calls won't be able to escape.
1614 //
1615 // Ideally what needs to happen here is that a `LabelUse` is
1616 // transparently generated (or call-sites of this function are audited
1617 // to generate a `LabelUse` instead) and tracked internally. The actual
1618 // relocation would then change over time if and when a veneer is
1619 // inserted, where the relocation here would be patched by this
1620 // `MachBuffer` to jump to the veneer. The problem, though, is that all
1621 // this still needs to end up, in the case of a singular function,
1622 // generating a final relocation pointing either to this particular
1623 // relocation or to the veneer inserted. Additionally
1624 // `MachBuffer` needs the concept of a label which will never be
1625 // resolved, so `emit_island` doesn't trip over not actually ever
1626 // knowing what some labels are. Currently the loop in
1627 // `finish_emission_maybe_forcing_veneers` would otherwise infinitely
1628 // loop.
1629 //
1630 // For now this means that, because relocs aren't tracked at all,
1631 // AArch64 functions have a rough size limit of 64MB. That's
1632 // somewhat reasonable and the failure mode is a panic in `MachBuffer`
1633 // when a relocation can't otherwise be resolved later, so it shouldn't
1634 // actually result in any memory unsafety or anything like that.
1635 self.relocs.push(MachReloc {
1636 offset,
1637 kind,
1638 target,
1639 addend,
1640 });
1641 }
1642
1643 /// Add an external relocation at the current offset.
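///
/// A sketch mirroring the unit test below (the external-name value and
/// addend are arbitrary):
///
/// ```ignore
/// buf.add_reloc(
///     Reloc::Abs8,
///     &ExternalName::User(UserExternalNameRef::new(0)),
///     /* addend = */ 0,
/// );
/// ```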
1644 pub fn add_reloc<T: Into<RelocTarget> + Clone>(
1645 &mut self,
1646 kind: Reloc,
1647 target: &T,
1648 addend: Addend,
1649 ) {
1650 self.add_reloc_at_offset(self.data.len() as CodeOffset, kind, target, addend);
1651 }
1652
1653 /// Add a trap record at the current offset.
1654 pub fn add_trap(&mut self, code: TrapCode) {
1655 self.traps.push(MachTrap {
1656 offset: self.data.len() as CodeOffset,
1657 code,
1658 });
1659 }
1660
1661 /// Add a call-site record at the current offset.
1662 pub fn add_call_site(&mut self) {
1663 self.add_try_call_site(None, core::iter::empty());
1664 }
1665
1666 /// Add a call-site record at the current offset with exception
1667 /// handlers.
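///
/// A sketch mirroring the unit test below (`frame_offset`, the labels, and
/// the tag value are placeholders):
///
/// ```ignore
/// buf.add_try_call_site(
///     Some(frame_offset),
///     [
///         MachExceptionHandler::Tag(ExceptionTag::new(42), handler_label),
///         MachExceptionHandler::Default(fallback_label),
///     ]
///     .into_iter(),
/// );
/// ```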
1668 pub fn add_try_call_site(
1669 &mut self,
1670 frame_offset: Option<u32>,
1671 exception_handlers: impl Iterator<Item = MachExceptionHandler>,
1672 ) {
1673 let start = u32::try_from(self.exception_handlers.len()).unwrap();
1674 self.exception_handlers.extend(exception_handlers);
1675 let end = u32::try_from(self.exception_handlers.len()).unwrap();
1676 let exception_handler_range = start..end;
1677
1678 self.call_sites.push(MachCallSite {
1679 ret_addr: self.data.len() as CodeOffset,
1680 frame_offset,
1681 exception_handler_range,
1682 });
1683 }
1684
1685 /// Add a patchable call record at the current offset. The actual
1686 /// call is expected to have been emitted; the `VCodeInst` trait
1687 /// specifies how to NOP it out, and we carry that information to
1688 /// the finalized `MachBuffer`.
1689 pub fn add_patchable_call_site(&mut self, len: u32) {
1690 self.patchable_call_sites.push(MachPatchableCallSite {
1691 ret_addr: self.cur_offset(),
1692 len,
1693 });
1694 }
1695
1696 /// Add an unwind record at the current offset.
1697 pub fn add_unwind(&mut self, unwind: UnwindInst) {
1698 self.unwind_info.push((self.cur_offset(), unwind));
1699 }
1700
1701 /// Set the `SourceLoc` for code from this offset until the offset at the
1702 /// next call to `end_srcloc()`.
1703 /// Returns the current [CodeOffset] and [RelSourceLoc].
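///
/// A sketch of the intended pairing (`rel_loc`, `inst`, `info`, and `state`
/// are placeholders):
///
/// ```ignore
/// buf.start_srcloc(rel_loc);
/// inst.emit(&mut buf, &info, &mut state);
/// buf.end_srcloc(); // all bytes emitted in between map to `rel_loc`
/// ```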
1704 pub fn start_srcloc(&mut self, loc: RelSourceLoc) -> (CodeOffset, RelSourceLoc) {
1705 let cur = (self.cur_offset(), loc);
1706 self.cur_srcloc = Some(cur);
1707 cur
1708 }
1709
1710 /// Mark the end of the `SourceLoc` segment started at the last
1711 /// `start_srcloc()` call.
1712 pub fn end_srcloc(&mut self) {
1713 let (start, loc) = self
1714 .cur_srcloc
1715 .take()
1716 .expect("end_srcloc() called without start_srcloc()");
1717 let end = self.cur_offset();
1718 // Skip zero-length ranges.
1719 debug_assert!(end >= start);
1720 if end > start {
1721 self.srclocs.push(MachSrcLoc { start, end, loc });
1722 }
1723 }
1724
1725 /// Push a user stack map onto this buffer.
1726 ///
1727 /// The stack map is associated with the given `return_addr` code
1728 /// offset. This must be the PC for the instruction just *after* this stack
1729 /// map's associated instruction. For example in the sequence `call $foo;
1730 /// add r8, rax`, the `return_addr` must be the offset of the start of the
1731 /// `add` instruction.
1732 ///
1733 /// Stack maps must be pushed in sorted `return_addr` order.
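///
/// A sketch (assumes the call instruction was just emitted; `state` and
/// `stack_map` are placeholders):
///
/// ```ignore
/// // `cur_offset()` is now the return address of the call just emitted.
/// let return_addr = buf.cur_offset();
/// buf.push_user_stack_map(&state, return_addr, stack_map);
/// ```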
1734 pub fn push_user_stack_map(
1735 &mut self,
1736 emit_state: &I::State,
1737 return_addr: CodeOffset,
1738 mut stack_map: ir::UserStackMap,
1739 ) {
1740 let span = emit_state.frame_layout().active_size();
1741 trace!("Adding user stack map @ {return_addr:#x} spanning {span} bytes: {stack_map:?}");
1742
1743 debug_assert!(
1744 self.user_stack_maps
1745 .last()
1746 .map_or(true, |(prev_addr, _, _)| *prev_addr < return_addr),
1747 "pushed stack maps out of order: {} is not less than {}",
1748 self.user_stack_maps.last().unwrap().0,
1749 return_addr,
1750 );
1751
1752 stack_map.finalize(emit_state.frame_layout().sp_to_sized_stack_slots());
1753 self.user_stack_maps.push((return_addr, span, stack_map));
1754 }
1755
1756 /// Push a debug tag associated with the current buffer offset.
1757 pub fn push_debug_tags(&mut self, pos: MachDebugTagPos, tags: &[DebugTag]) {
1758 trace!("debug tags at offset {}: {tags:?}", self.cur_offset());
1759 let start = u32::try_from(self.debug_tag_pool.len()).unwrap();
1760 self.debug_tag_pool.extend(tags.iter().cloned());
1761 let end = u32::try_from(self.debug_tag_pool.len()).unwrap();
1762 self.debug_tags.push(MachDebugTags {
1763 offset: self.cur_offset(),
1764 pos,
1765 range: start..end,
1766 });
1767 }
1768
1769 /// Increase the buffer's minimum alignment to `2^align_to` bytes, if that
1770 /// is larger than the current minimum alignment.
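///
/// For example, `set_log2_min_function_alignment(4)` requests at least
/// 16-byte (`1 << 4`) alignment for the finished function.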
1771 pub fn set_log2_min_function_alignment(&mut self, align_to: u8) {
1772 self.min_alignment = self.min_alignment.max(
1773 1u32.checked_shl(u32::from(align_to))
1774 .expect("log2_min_function_alignment too large"),
1775 );
1776 }
1777
1778 /// Set the frame layout metadata.
1779 pub fn set_frame_layout(&mut self, frame_layout: MachBufferFrameLayout) {
1780 debug_assert!(self.frame_layout.is_none());
1781 self.frame_layout = Some(frame_layout);
1782 }
1783}
1784
1785impl<I: VCodeInst> Extend<u8> for MachBuffer<I> {
1786 fn extend<T: IntoIterator<Item = u8>>(&mut self, iter: T) {
1787 for b in iter {
1788 self.put1(b);
1789 }
1790 }
1791}
1792
1793impl<T: CompilePhase> MachBufferFinalized<T> {
1794 /// Get a list of source location mapping tuples in sorted-by-start-offset order.
1795 pub fn get_srclocs_sorted(&self) -> &[T::MachSrcLocType] {
1796 &self.srclocs[..]
1797 }
1798
1799 /// Get all debug tags, sorted by associated offset.
1800 pub fn debug_tags(&self) -> impl Iterator<Item = MachBufferDebugTagList<'_>> {
1801 self.debug_tags.iter().map(|tags| {
1802 let start = usize::try_from(tags.range.start).unwrap();
1803 let end = usize::try_from(tags.range.end).unwrap();
1804 MachBufferDebugTagList {
1805 offset: tags.offset,
1806 pos: tags.pos,
1807 tags: &self.debug_tag_pool[start..end],
1808 }
1809 })
1810 }
1811
1812 /// Get the total required size for the code.
1813 pub fn total_size(&self) -> CodeOffset {
1814 self.data.len() as CodeOffset
1815 }
1816
1817 /// Return the code in this mach buffer as a hex string for testing purposes.
1818 pub fn stringify_code_bytes(&self) -> String {
1819 // This is pretty lame, but whatever ..
1820 use std::fmt::Write;
1821 let mut s = String::with_capacity(self.data.len() * 2);
1822 for b in &self.data {
1823 write!(&mut s, "{b:02X}").unwrap();
1824 }
1825 s
1826 }
1827
1828 /// Get the code bytes.
1829 pub fn data(&self) -> &[u8] {
1830 // N.B.: we emit every section into the .text section as far as
1831 // the `CodeSink` is concerned; we do not bother to segregate
1832 // the contents into the actual program text, the jumptable and the
1833 // rodata (constant pool). This allows us to generate code assuming
1834 // that these will not be relocated relative to each other, and avoids
1835 // having to designate each section as belonging in one of the three
1836 // fixed categories defined by `CodeSink`. If this becomes a problem
1837 // later (e.g. because of memory permissions or similar), we can
1838 // add this designation and segregate the output; take care, however,
1839 // to add the appropriate relocations in this case.
1840
1841 &self.data[..]
1842 }
1843
1844 /// Get the list of external relocations for this code.
1845 pub fn relocs(&self) -> &[FinalizedMachReloc] {
1846 &self.relocs[..]
1847 }
1848
1849 /// Get the list of trap records for this code.
1850 pub fn traps(&self) -> &[MachTrap] {
1851 &self.traps[..]
1852 }
1853
1854 /// Get the user stack map metadata for this code.
1855 pub fn user_stack_maps(&self) -> &[(CodeOffset, u32, ir::UserStackMap)] {
1856 &self.user_stack_maps
1857 }
1858
1859 /// Take this buffer's user stack map metadata.
1860 pub fn take_user_stack_maps(&mut self) -> SmallVec<[(CodeOffset, u32, ir::UserStackMap); 8]> {
1861 mem::take(&mut self.user_stack_maps)
1862 }
1863
1864 /// Get the list of call sites for this code, along with
1865 /// associated exception handlers.
1866 ///
1867 /// Each item yielded by the returned iterator is a struct with:
1868 ///
1869 /// - The call site metadata record, with a `ret_addr` field
1870 /// directly accessible and denoting the offset of the return
1871 /// address into this buffer's code.
1872 /// - The slice of pairs of exception tags and code offsets
1873 /// denoting exception-handler entry points associated with this
1874 /// call site.
1875 pub fn call_sites(&self) -> impl Iterator<Item = FinalizedMachCallSite<'_>> + '_ {
1876 self.call_sites.iter().map(|call_site| {
1877 let handler_range = call_site.exception_handler_range.clone();
1878 let handler_range = usize::try_from(handler_range.start).unwrap()
1879 ..usize::try_from(handler_range.end).unwrap();
1880 FinalizedMachCallSite {
1881 ret_addr: call_site.ret_addr,
1882 frame_offset: call_site.frame_offset,
1883 exception_handlers: &self.exception_handlers[handler_range],
1884 }
1885 })
1886 }
1887
1888 /// Get the frame layout, if known.
1889 pub fn frame_layout(&self) -> Option<&MachBufferFrameLayout> {
1890 self.frame_layout.as_ref()
1891 }
1892
1893 /// Get the list of patchable call sites for this code.
1894 ///
1895 /// Each location in the buffer contains the bytes for a call
1896 /// instruction to the specified target. If the call is to be
1897 /// patched out, the bytes in the region should be replaced with
1898 /// those given in the `MachBufferFinalized::nop` array, repeated
1899 /// as many times as necessary. (The length of the patchable
1900 /// region is guaranteed to be an integer multiple of that NOP
1901 /// unit size.)
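///
/// A sketch of how a runtime might patch such a call out with NOPs (assumes
/// the finalized bytes have been copied into a writable `code` buffer and
/// that the `nop` field is accessible to the caller):
///
/// ```ignore
/// for site in finalized.patchable_call_sites() {
///     // `ret_addr` is the end of the patchable region, so it starts
///     // `len` bytes earlier.
///     let start = (site.ret_addr - site.len) as usize;
///     let region = &mut code[start..start + site.len as usize];
///     // The region's length is a multiple of the NOP unit size.
///     for chunk in region.chunks_mut(finalized.nop.len()) {
///         chunk.copy_from_slice(&finalized.nop);
///     }
/// }
/// ```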
1902 pub fn patchable_call_sites(&self) -> impl Iterator<Item = &MachPatchableCallSite> + '_ {
1903 self.patchable_call_sites.iter().map(|call_site| {
1904 debug_assert!(call_site.len as usize % self.nop.len() == 0);
1905 call_site
1906 })
1907 }
1908}
1909
1910/// An item in the exception-handler list for a callsite, with label
1911/// references. Items are interpreted in left-to-right order and the
1912/// first match wins.
1913#[derive(Clone, Copy, Debug, PartialEq, Eq)]
1914pub enum MachExceptionHandler {
1915 /// A specific tag (in the current dynamic context) should be
1916 /// handled by the code at the given offset.
1917 Tag(ExceptionTag, MachLabel),
1918 /// All exceptions should be handled by the code at the given
1919 /// offset.
1920 Default(MachLabel),
1921 /// The dynamic context for interpreting tags is updated to the
1922 /// value stored in the given machine location (in this frame's
1923 /// context).
1924 Context(ExceptionContextLoc),
1925}
1926
1927impl MachExceptionHandler {
1928 fn finalize<F: Fn(MachLabel) -> CodeOffset>(self, f: F) -> FinalizedMachExceptionHandler {
1929 match self {
1930 Self::Tag(tag, label) => FinalizedMachExceptionHandler::Tag(tag, f(label)),
1931 Self::Default(label) => FinalizedMachExceptionHandler::Default(f(label)),
1932 Self::Context(loc) => FinalizedMachExceptionHandler::Context(loc),
1933 }
1934 }
1935}
1936
1937/// An item in the exception-handler list for a callsite, with final
1938/// (lowered) code offsets. Items are interpreted in left-to-right
1939/// order and the first match wins.
1940#[derive(Clone, Copy, Debug, PartialEq, Eq)]
1941#[cfg_attr(
1942 feature = "enable-serde",
1943 derive(serde_derive::Serialize, serde_derive::Deserialize)
1944)]
1945pub enum FinalizedMachExceptionHandler {
1946 /// A specific tag (in the current dynamic context) should be
1947 /// handled by the code at the given offset.
1948 Tag(ExceptionTag, CodeOffset),
1949 /// All exceptions should be handled by the code at the given
1950 /// offset.
1951 Default(CodeOffset),
1952 /// The dynamic context for interpreting tags is updated to the
1953 /// value stored in the given machine location (in this frame's
1954 /// context).
1955 Context(ExceptionContextLoc),
1956}
1957
1958/// A location for a dynamic exception context value.
1959#[derive(Clone, Copy, Debug, PartialEq, Eq)]
1960#[cfg_attr(
1961 feature = "enable-serde",
1962 derive(serde_derive::Serialize, serde_derive::Deserialize)
1963)]
1964pub enum ExceptionContextLoc {
1965 /// An offset from SP at the callsite.
1966 SPOffset(u32),
1967 /// A GPR at the callsite. The physical register number for the
1968 /// GPR register file on the target architecture is used.
1969 GPR(u8),
1970}
1971
1972/// Metadata about a constant.
1973struct MachBufferConstant {
1974 /// A label which has not yet been bound which can be used for this
1975 /// constant.
1976 ///
1977 /// This is lazily created when a label is requested for a constant and is
1978 /// cleared when a constant is emitted.
1979 upcoming_label: Option<MachLabel>,
1980 /// Required alignment.
1981 align: CodeOffset,
1982 /// The byte size of this constant.
1983 size: usize,
1984}
1985
1986/// A trap that is deferred to the next time an island is emitted for either
1987/// traps, constants, or fixups.
1988struct MachLabelTrap {
1989 /// This label will refer to the trap's offset.
1990 label: MachLabel,
1991 /// The code associated with this trap.
1992 code: TrapCode,
1993 /// An optional source location to assign for this trap.
1994 loc: Option<RelSourceLoc>,
1995}
1996
1997/// A fixup to perform on the buffer once code is emitted. Fixups always refer
1998/// to labels and patch the code based on label offsets. Hence, they are like
1999/// relocations, but internal to one buffer.
2000#[derive(Debug)]
2001struct MachLabelFixup<I: VCodeInst> {
2002 /// The label whose offset controls this fixup.
2003 label: MachLabel,
2004 /// The offset to fix up / patch to refer to this label.
2005 offset: CodeOffset,
2006 /// The kind of fixup. This is architecture-specific; each architecture may have,
2007 /// e.g., several types of branch instructions, each with differently-sized
2008 /// offset fields and different places within the instruction to place the
2009 /// bits.
2010 kind: I::LabelUse,
2011}
2012
2013impl<I: VCodeInst> MachLabelFixup<I> {
2014 fn deadline(&self) -> CodeOffset {
2015 self.offset.saturating_add(self.kind.max_pos_range())
2016 }
2017}
2018
2019impl<I: VCodeInst> PartialEq for MachLabelFixup<I> {
2020 fn eq(&self, other: &Self) -> bool {
2021 self.deadline() == other.deadline()
2022 }
2023}
2024
2025impl<I: VCodeInst> Eq for MachLabelFixup<I> {}
2026
2027impl<I: VCodeInst> PartialOrd for MachLabelFixup<I> {
2028 fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
2029 Some(self.cmp(other))
2030 }
2031}
2032
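// Note: the comparison below is intentionally reversed (`other` vs. `self`) so
// that `BinaryHeap<MachLabelFixup<I>>`, which is a max-heap, pops the fixup
// with the *smallest* deadline first, i.e. it behaves as a min-heap keyed on
// the deadline.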
2033impl<I: VCodeInst> Ord for MachLabelFixup<I> {
2034 fn cmp(&self, other: &Self) -> Ordering {
2035 other.deadline().cmp(&self.deadline())
2036 }
2037}
2038
2039/// A relocation resulting from a compilation.
2040#[derive(Clone, Debug, PartialEq)]
2041#[cfg_attr(
2042 feature = "enable-serde",
2043 derive(serde_derive::Serialize, serde_derive::Deserialize)
2044)]
2045pub struct MachRelocBase<T> {
2046 /// The offset at which the relocation applies, *relative to the
2047 /// containing section*.
2048 pub offset: CodeOffset,
2049 /// The kind of relocation.
2050 pub kind: Reloc,
2051 /// The external symbol / name to which this relocation refers.
2052 pub target: T,
2053 /// The addend to add to the symbol value.
2054 pub addend: i64,
2055}
2056
2057type MachReloc = MachRelocBase<RelocTarget>;
2058
2059/// A relocation resulting from a compilation.
2060pub type FinalizedMachReloc = MachRelocBase<FinalizedRelocTarget>;
2061
2062/// A Relocation target
2063#[derive(Debug, Clone, PartialEq, Eq, Hash)]
2064pub enum RelocTarget {
2065 /// Points to an [ExternalName] outside the current function.
2066 ExternalName(ExternalName),
2067 /// Points to a [MachLabel] inside this function.
2068 /// This is different from [MachLabelFixup] in that both the relocation and the
2069 /// label will be emitted and are only resolved at link time.
2070 ///
2071 /// There is no reason to prefer this over [MachLabelFixup] unless the ABI requires it.
2072 Label(MachLabel),
2073}
2074
2075impl From<ExternalName> for RelocTarget {
2076 fn from(name: ExternalName) -> Self {
2077 Self::ExternalName(name)
2078 }
2079}
2080
2081impl From<MachLabel> for RelocTarget {
2082 fn from(label: MachLabel) -> Self {
2083 Self::Label(label)
2084 }
2085}
2086
2087/// A Relocation target
2088#[derive(Debug, Clone, PartialEq, Eq, Hash)]
2089#[cfg_attr(
2090 feature = "enable-serde",
2091 derive(serde_derive::Serialize, serde_derive::Deserialize)
2092)]
2093pub enum FinalizedRelocTarget {
2094 /// Points to an [ExternalName] outside the current function.
2095 ExternalName(ExternalName),
2096 /// Points to a [CodeOffset] from the start of the current function.
2097 Func(CodeOffset),
2098}
2099
2100impl FinalizedRelocTarget {
2101 /// Returns a display for the current [FinalizedRelocTarget], with extra context to prettify the
2102 /// output.
2103 pub fn display<'a>(&'a self, params: Option<&'a FunctionParameters>) -> String {
2104 match self {
2105 FinalizedRelocTarget::ExternalName(name) => format!("{}", name.display(params)),
2106 FinalizedRelocTarget::Func(offset) => format!("func+{offset}"),
2107 }
2108 }
2109}
2110
2111/// A trap record resulting from a compilation.
2112#[derive(Clone, Debug, PartialEq)]
2113#[cfg_attr(
2114 feature = "enable-serde",
2115 derive(serde_derive::Serialize, serde_derive::Deserialize)
2116)]
2117pub struct MachTrap {
2118 /// The offset at which the trap instruction occurs, *relative to the
2119 /// containing section*.
2120 pub offset: CodeOffset,
2121 /// The trap code.
2122 pub code: TrapCode,
2123}
2124
2125/// A call site record resulting from a compilation.
2126#[derive(Clone, Debug, PartialEq)]
2127#[cfg_attr(
2128 feature = "enable-serde",
2129 derive(serde_derive::Serialize, serde_derive::Deserialize)
2130)]
2131pub struct MachCallSite {
2132 /// The offset of the call's return address, *relative to the
2133 /// start of the buffer*.
2134 pub ret_addr: CodeOffset,
2135
2136 /// The offset from the FP at this callsite down to the SP when
2137 /// the call occurs, if known. In other words, the size of the
2138 /// stack frame up to the saved FP slot. Useful to recover the
2139 /// start of the stack frame and to look up dynamic contexts
2140 /// stored in [`ExceptionContextLoc::SPOffset`].
2141 ///
2142 /// If `None`, the compiler backend did not specify a frame
2143 /// offset. The runtime in use with the compiled code may require
2144 /// the frame offset if exception handlers are present or dynamic
2145 /// context is used, but that is not Cranelift's concern: the
2146 /// frame offset is optional at this level.
2147 pub frame_offset: Option<u32>,
2148
2149 /// Range in `exception_handlers` corresponding to the exception
2150 /// handlers for this callsite.
2151 exception_handler_range: Range<u32>,
2152}
2153
2154/// A call site record resulting from a compilation.
2155#[derive(Clone, Debug, PartialEq)]
2156pub struct FinalizedMachCallSite<'a> {
2157 /// The offset of the call's return address, *relative to the
2158 /// start of the buffer*.
2159 pub ret_addr: CodeOffset,
2160
2161 /// The offset from the FP at this callsite down to the SP when
2162 /// the call occurs, if known. In other words, the size of the
2163 /// stack frame up to the saved FP slot. Useful to recover the
2164 /// start of the stack frame and to look up dynamic contexts
2165 /// stored in [`ExceptionContextLoc::SPOffset`].
2166 ///
2167 /// If `None`, the compiler backend did not specify a frame
2168 /// offset. The runtime in use with the compiled code may require
2169 /// the frame offset if exception handlers are present or dynamic
2170 /// context is used, but that is not Cranelift's concern: the
2171 /// frame offset is optional at this level.
2172 pub frame_offset: Option<u32>,
2173
2174 /// Exception handlers at this callsite, with target offsets
2175 /// *relative to the start of the buffer*.
2176 pub exception_handlers: &'a [FinalizedMachExceptionHandler],
2177}
2178
2179/// A patchable call site record resulting from a compilation.
2180#[derive(Clone, Debug, PartialEq)]
2181#[cfg_attr(
2182 feature = "enable-serde",
2183 derive(serde_derive::Serialize, serde_derive::Deserialize)
2184)]
2185pub struct MachPatchableCallSite {
2186 /// The offset of the call's return address (i.e., the address
2187 /// after the end of the patchable region), *relative to the start
2188 /// of the buffer*.
2189 pub ret_addr: CodeOffset,
2190
2191 /// The length of the region to be patched by NOP bytes.
2192 pub len: u32,
2193}
2194
2195/// A source-location mapping resulting from a compilation.
2196#[derive(PartialEq, Debug, Clone)]
2197#[cfg_attr(
2198 feature = "enable-serde",
2199 derive(serde_derive::Serialize, serde_derive::Deserialize)
2200)]
2201pub struct MachSrcLoc<T: CompilePhase> {
2202 /// The start of the region of code corresponding to a source location.
2203 /// This is relative to the start of the function, not to the start of the
2204 /// section.
2205 pub start: CodeOffset,
2206 /// The end of the region of code corresponding to a source location.
2207 /// This is relative to the start of the function, not to the start of the
2208 /// section.
2209 pub end: CodeOffset,
2210 /// The source location.
2211 pub loc: T::SourceLocType,
2212}
2213
2214impl MachSrcLoc<Stencil> {
2215 fn apply_base_srcloc(self, base_srcloc: SourceLoc) -> MachSrcLoc<Final> {
2216 MachSrcLoc {
2217 start: self.start,
2218 end: self.end,
2219 loc: self.loc.expand(base_srcloc),
2220 }
2221 }
2222}
2223
2224/// Record of branch instruction in the buffer, to facilitate editing.
2225#[derive(Clone, Debug)]
2226struct MachBranch {
2227 start: CodeOffset,
2228 end: CodeOffset,
2229 target: MachLabel,
2230 fixup: usize,
2231 inverted: Option<SmallVec<[u8; 8]>>,
2232 /// All labels pointing to the start of this branch. For correctness, this
2233 /// *must* be complete (i.e., must contain all labels whose resolved offsets
2234 /// are at the start of this branch): we rely on being able to redirect all
2235 /// labels that could jump to this branch before removing it, if it is
2236 /// otherwise unreachable.
2237 labels_at_this_branch: SmallVec<[MachLabel; 4]>,
2238}
2239
2240impl MachBranch {
2241 fn is_cond(&self) -> bool {
2242 self.inverted.is_some()
2243 }
2244 fn is_uncond(&self) -> bool {
2245 self.inverted.is_none()
2246 }
2247}
2248
2249/// Stack-frame layout information carried through to machine
2250/// code. This provides sufficient information to interpret an active
2251/// stack frame from a running function, if provided.
2252#[derive(Clone, Debug, PartialEq)]
2253#[cfg_attr(
2254 feature = "enable-serde",
2255 derive(serde_derive::Serialize, serde_derive::Deserialize)
2256)]
2257pub struct MachBufferFrameLayout {
2258 /// Offset from bottom of frame to FP (near top of frame). This
2259 /// allows reading the frame given only FP.
2260 pub frame_to_fp_offset: u32,
2261 /// Offset from the bottom of the frame for each StackSlot.
2262 pub stackslots: SecondaryMap<ir::StackSlot, MachBufferStackSlot>,
2263}
2264
2265/// Descriptor for a single stack slot in the compiled function.
2266#[derive(Clone, Debug, PartialEq, Default)]
2267#[cfg_attr(
2268 feature = "enable-serde",
2269 derive(serde_derive::Serialize, serde_derive::Deserialize)
2270)]
2271pub struct MachBufferStackSlot {
2272 /// Offset from the bottom of the stack frame.
2273 pub offset: u32,
2274
2275 /// User-provided key to describe this stack slot.
2276 pub key: Option<ir::StackSlotKey>,
2277}
2278
2279/// Debug tags: a sequence of references to a stack slot, or a
2280/// user-defined value, at a particular PC.
2281#[derive(Clone, Debug, PartialEq)]
2282#[cfg_attr(
2283 feature = "enable-serde",
2284 derive(serde_derive::Serialize, serde_derive::Deserialize)
2285)]
2286pub(crate) struct MachDebugTags {
2287 /// Offset at which this tag applies.
2288 pub offset: CodeOffset,
2289
2290 /// Position on the attached instruction. This indicates whether
2291 /// the tags attach to the prior instruction (i.e., as a return
2292 /// point from a call) or the current instruction (i.e., as a PC
2293 /// seen during a trap).
2294 pub pos: MachDebugTagPos,
2295
2296 /// The range in the tag pool.
2297 pub range: Range<u32>,
2298}
2299
2300/// Debug tag position on an instruction.
2301///
2302/// We need to distinguish position on an instruction, and not just
2303/// use offsets, because of the following case:
2304///
2305/// ```plain
2306/// <tag1, tag2> call ...
2307/// <tag3, tag4> trapping_store ...
2308/// ```
2309///
2310/// If the stack is walked and interpreted with debug tags while
2311/// within the call, the PC seen will be the return point, i.e. the
2312/// address after the call. If the stack is walked and interpreted
2313/// with debug tags upon a trap of the following instruction, it will
2314/// be the PC of that instruction -- which is the same PC! Thus to
2315/// disambiguate which tags we want, we attach a "pre/post" flag to
2316/// every group of tags at an offset; and when we look up tags, we
2317/// look them up for an offset and "position" at that offset.
2318///
2319/// Thus there are logically two positions at every offset -- so the
2320/// above will be emitted as
2321///
2322/// ```plain
2323/// 0: call ...
2324/// 4, post: <tag1, tag2>
2325/// 4, pre: <tag3, tag4>
2326/// 4: trapping_store ...
2327/// ```
2328#[derive(Clone, Copy, Debug, PartialEq, Eq)]
2329#[cfg_attr(
2330 feature = "enable-serde",
2331 derive(serde_derive::Serialize, serde_derive::Deserialize)
2332)]
2333pub enum MachDebugTagPos {
2334 /// Tags attached after the instruction that ends at this offset.
2335 ///
2336 /// This is used to attach tags to a call, because the PC we see
2337 /// when walking the stack is the *return point*.
2338 Post,
2339 /// Tags attached before the instruction that starts at this offset.
2340 ///
2341 /// This is used to attach tags to every other kind of
2342 /// instruction, because the PC we see when processing a trap of
2343 /// that instruction is the PC of that instruction, not the
2344 /// following one.
2345 Pre,
2346}
2347
2348/// Iterator item for visiting debug tags.
2349pub struct MachBufferDebugTagList<'a> {
2350 /// Offset at which this tag applies.
2351 pub offset: CodeOffset,
2352
2353 /// Position at this offset ("post", attaching to prior
2354 /// instruction, or "pre", attaching to next instruction).
2355 pub pos: MachDebugTagPos,
2356
2357 /// The underlying tags.
2358 pub tags: &'a [DebugTag],
2359}
2360
2361/// Implementation of the `TextSectionBuilder` trait backed by `MachBuffer`.
2362///
2363/// Note that `MachBuffer` was primarily written for intra-function references
2364/// of jumps between basic blocks, but it's also quite usable for entire text
2365/// sections and resolving references between functions themselves. This
2366/// builder interprets "blocks" as labeled functions for the purposes of
2367/// resolving labels internally in the buffer.
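///
/// A sketch of the intended call sequence (names are placeholders and the
/// `TextSectionBuilder` trait must be in scope):
///
/// ```ignore
/// let mut builder = MachTextSectionBuilder::<Inst>::new(funcs.len());
/// for body in &funcs {
///     let offset = builder.append(true, body, 16, ctrl_plane);
///     // record `offset` as this function's start in the text section ...
/// }
/// // Cross-function references recorded as relocations can be resolved
/// // in place via `builder.resolve_reloc(...)` before finishing.
/// let text: Vec<u8> = builder.finish(ctrl_plane);
/// ```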
2368pub struct MachTextSectionBuilder<I: VCodeInst> {
2369 buf: MachBuffer<I>,
2370 next_func: usize,
2371 force_veneers: ForceVeneers,
2372}
2373
2374impl<I: VCodeInst> MachTextSectionBuilder<I> {
2375 /// Creates a new text section builder which will have `num_funcs` functions
2376 /// pushed into it.
2377 pub fn new(num_funcs: usize) -> MachTextSectionBuilder<I> {
2378 let mut buf = MachBuffer::new();
2379 buf.reserve_labels_for_blocks(num_funcs);
2380 MachTextSectionBuilder {
2381 buf,
2382 next_func: 0,
2383 force_veneers: ForceVeneers::No,
2384 }
2385 }
2386}
2387
2388impl<I: VCodeInst> TextSectionBuilder for MachTextSectionBuilder<I> {
2389 fn append(
2390 &mut self,
2391 labeled: bool,
2392 func: &[u8],
2393 align: u32,
2394 ctrl_plane: &mut ControlPlane,
2395 ) -> u64 {
2396 // Conditionally emit an island if it's necessary to resolve jumps
2397 // between functions which are too far away.
2398 let size = func.len() as u32;
2399 if self.force_veneers == ForceVeneers::Yes || self.buf.island_needed(size) {
2400 self.buf
2401 .emit_island_maybe_forced(self.force_veneers, size, ctrl_plane);
2402 }
2403
2404 self.buf.align_to(align);
2405 let pos = self.buf.cur_offset();
2406 if labeled {
2407 self.buf.bind_label(
2408 MachLabel::from_block(BlockIndex::new(self.next_func)),
2409 ctrl_plane,
2410 );
2411 self.next_func += 1;
2412 }
2413 self.buf.put_data(func);
2414 u64::from(pos)
2415 }
2416
2417 fn resolve_reloc(&mut self, offset: u64, reloc: Reloc, addend: Addend, target: usize) -> bool {
2418 crate::trace!(
2419 "Resolving relocation @ {offset:#x} + {addend:#x} to target {target} of kind {reloc:?}"
2420 );
2421 let label = MachLabel::from_block(BlockIndex::new(target));
2422 let offset = u32::try_from(offset).unwrap();
2423 match I::LabelUse::from_reloc(reloc, addend) {
2424 Some(label_use) => {
2425 self.buf.use_label_at_offset(offset, label, label_use);
2426 true
2427 }
2428 None => false,
2429 }
2430 }
2431
2432 fn force_veneers(&mut self) {
2433 self.force_veneers = ForceVeneers::Yes;
2434 }
2435
2436 fn write(&mut self, offset: u64, data: &[u8]) {
2437 self.buf.data[offset.try_into().unwrap()..][..data.len()].copy_from_slice(data);
2438 }
2439
2440 fn finish(&mut self, ctrl_plane: &mut ControlPlane) -> Vec<u8> {
2441 // Double-check all functions were pushed.
2442 assert_eq!(self.next_func, self.buf.label_offsets.len());
2443
2444 // Finish up any veneers, if necessary.
2445 self.buf
2446 .finish_emission_maybe_forcing_veneers(self.force_veneers, ctrl_plane);
2447
2448 // We don't need the data any more, so return it to the caller.
2449 mem::take(&mut self.buf.data).into_vec()
2450 }
2451}
2452
2453// We use an actual instruction definition to do tests, so we depend on the `arm64` feature here.
2454#[cfg(all(test, feature = "arm64"))]
2455mod test {
2456 use cranelift_entity::EntityRef as _;
2457
2458 use super::*;
2459 use crate::ir::UserExternalNameRef;
2460 use crate::isa::aarch64::inst::{BranchTarget, CondBrKind, EmitInfo, Inst};
2461 use crate::isa::aarch64::inst::{OperandSize, xreg};
2462 use crate::machinst::{MachInstEmit, MachInstEmitState};
2463 use crate::settings;
2464
2465 fn label(n: u32) -> MachLabel {
2466 MachLabel::from_block(BlockIndex::new(n as usize))
2467 }
2468 fn target(n: u32) -> BranchTarget {
2469 BranchTarget::Label(label(n))
2470 }
2471
2472 #[test]
2473 fn test_elide_jump_to_next() {
2474 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2475 let mut buf = MachBuffer::new();
2476 let mut state = <Inst as MachInstEmit>::State::default();
2477 let constants = Default::default();
2478
2479 buf.reserve_labels_for_blocks(2);
2480 buf.bind_label(label(0), state.ctrl_plane_mut());
2481 let inst = Inst::Jump { dest: target(1) };
2482 inst.emit(&mut buf, &info, &mut state);
2483 buf.bind_label(label(1), state.ctrl_plane_mut());
2484 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2485 assert_eq!(0, buf.total_size());
2486 }
2487
2488 #[test]
2489 fn test_elide_trivial_jump_blocks() {
2490 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2491 let mut buf = MachBuffer::new();
2492 let mut state = <Inst as MachInstEmit>::State::default();
2493 let constants = Default::default();
2494
2495 buf.reserve_labels_for_blocks(4);
2496
2497 buf.bind_label(label(0), state.ctrl_plane_mut());
2498 let inst = Inst::CondBr {
2499 kind: CondBrKind::NotZero(xreg(0), OperandSize::Size64),
2500 taken: target(1),
2501 not_taken: target(2),
2502 };
2503 inst.emit(&mut buf, &info, &mut state);
2504
2505 buf.bind_label(label(1), state.ctrl_plane_mut());
2506 let inst = Inst::Jump { dest: target(3) };
2507 inst.emit(&mut buf, &info, &mut state);
2508
2509 buf.bind_label(label(2), state.ctrl_plane_mut());
2510 let inst = Inst::Jump { dest: target(3) };
2511 inst.emit(&mut buf, &info, &mut state);
2512
2513 buf.bind_label(label(3), state.ctrl_plane_mut());
2514
2515 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2516 assert_eq!(0, buf.total_size());
2517 }
2518
2519 #[test]
2520 fn test_flip_cond() {
2521 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2522 let mut buf = MachBuffer::new();
2523 let mut state = <Inst as MachInstEmit>::State::default();
2524 let constants = Default::default();
2525
2526 buf.reserve_labels_for_blocks(4);
2527
2528 buf.bind_label(label(0), state.ctrl_plane_mut());
2529 let inst = Inst::CondBr {
2530 kind: CondBrKind::Zero(xreg(0), OperandSize::Size64),
2531 taken: target(1),
2532 not_taken: target(2),
2533 };
2534 inst.emit(&mut buf, &info, &mut state);
2535
2536 buf.bind_label(label(1), state.ctrl_plane_mut());
2537 let inst = Inst::Nop4;
2538 inst.emit(&mut buf, &info, &mut state);
2539
2540 buf.bind_label(label(2), state.ctrl_plane_mut());
2541 let inst = Inst::Udf {
2542 trap_code: TrapCode::STACK_OVERFLOW,
2543 };
2544 inst.emit(&mut buf, &info, &mut state);
2545
2546 buf.bind_label(label(3), state.ctrl_plane_mut());
2547
2548 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2549
2550 let mut buf2 = MachBuffer::new();
2551 let mut state = Default::default();
2552 let inst = Inst::TrapIf {
2553 kind: CondBrKind::NotZero(xreg(0), OperandSize::Size64),
2554 trap_code: TrapCode::STACK_OVERFLOW,
2555 };
2556 inst.emit(&mut buf2, &info, &mut state);
2557 let inst = Inst::Nop4;
2558 inst.emit(&mut buf2, &info, &mut state);
2559
2560 let buf2 = buf2.finish(&constants, state.ctrl_plane_mut());
2561
2562 assert_eq!(buf.data, buf2.data);
2563 }
2564
2565 #[test]
2566 fn test_island() {
2567 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2568 let mut buf = MachBuffer::new();
2569 let mut state = <Inst as MachInstEmit>::State::default();
2570 let constants = Default::default();
2571
2572 buf.reserve_labels_for_blocks(4);
2573
2574 buf.bind_label(label(0), state.ctrl_plane_mut());
2575 let inst = Inst::CondBr {
2576 kind: CondBrKind::NotZero(xreg(0), OperandSize::Size64),
2577 taken: target(2),
2578 not_taken: target(3),
2579 };
2580 inst.emit(&mut buf, &info, &mut state);
2581
2582 buf.bind_label(label(1), state.ctrl_plane_mut());
2583 while buf.cur_offset() < 2000000 {
2584 if buf.island_needed(0) {
2585 buf.emit_island(0, state.ctrl_plane_mut());
2586 }
2587 let inst = Inst::Nop4;
2588 inst.emit(&mut buf, &info, &mut state);
2589 }
2590
2591 buf.bind_label(label(2), state.ctrl_plane_mut());
2592 let inst = Inst::Nop4;
2593 inst.emit(&mut buf, &info, &mut state);
2594
2595 buf.bind_label(label(3), state.ctrl_plane_mut());
2596 let inst = Inst::Nop4;
2597 inst.emit(&mut buf, &info, &mut state);
2598
2599 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2600
2601 assert_eq!(2000000 + 8, buf.total_size());
2602
2603 let mut buf2 = MachBuffer::new();
2604 let mut state = Default::default();
2605 let inst = Inst::CondBr {
2606 kind: CondBrKind::NotZero(xreg(0), OperandSize::Size64),
2607
2608 // This conditionally taken branch has a 19-bit constant, shifted
2609 // to the left by two, giving us a 21-bit range in total. Half of
2610 // this range is positive, so we should be around 1 << 20 bytes
2611 // away from our jump target.
2612 //
2613 // There are two pending fixups by the time we reach this point,
2614 // one for this 19-bit jump and one for the unconditional 26-bit
2615 // jump below. A 19-bit veneer is 4 bytes large and the 26-bit
2616 // veneer is 20 bytes large, so we pessimistically assume that two
2617 // veneers will be needed. Currently each veneer is pessimistically
2618 // assumed to be the maximal size, which means we need 40 bytes of
2619 // extra space, meaning that the actual island should come 40 bytes
2620 // before the deadline.
2621 taken: BranchTarget::ResolvedOffset((1 << 20) - 20 - 20),
2622
2623 // This branch is in-range so no veneers should be needed, it should
2624 // go directly to the target.
2625 not_taken: BranchTarget::ResolvedOffset(2000000 + 4 - 4),
2626 };
2627 inst.emit(&mut buf2, &info, &mut state);
2628
2629 let buf2 = buf2.finish(&constants, state.ctrl_plane_mut());
2630
2631 assert_eq!(&buf.data[0..8], &buf2.data[..]);
2632 }
2633
2634 #[test]
2635 fn test_island_backward() {
2636 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2637 let mut buf = MachBuffer::new();
2638 let mut state = <Inst as MachInstEmit>::State::default();
2639 let constants = Default::default();
2640
2641 buf.reserve_labels_for_blocks(4);
2642
2643 buf.bind_label(label(0), state.ctrl_plane_mut());
2644 let inst = Inst::Nop4;
2645 inst.emit(&mut buf, &info, &mut state);
2646
2647 buf.bind_label(label(1), state.ctrl_plane_mut());
2648 let inst = Inst::Nop4;
2649 inst.emit(&mut buf, &info, &mut state);
2650
2651 buf.bind_label(label(2), state.ctrl_plane_mut());
2652 while buf.cur_offset() < 2000000 {
2653 let inst = Inst::Nop4;
2654 inst.emit(&mut buf, &info, &mut state);
2655 }
2656
2657 buf.bind_label(label(3), state.ctrl_plane_mut());
2658 let inst = Inst::CondBr {
2659 kind: CondBrKind::NotZero(xreg(0), OperandSize::Size64),
2660 taken: target(0),
2661 not_taken: target(1),
2662 };
2663 inst.emit(&mut buf, &info, &mut state);
2664
2665 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2666
2667 assert_eq!(2000000 + 12, buf.total_size());
2668
2669 let mut buf2 = MachBuffer::new();
2670 let mut state = Default::default();
2671 let inst = Inst::CondBr {
2672 kind: CondBrKind::NotZero(xreg(0), OperandSize::Size64),
2673 taken: BranchTarget::ResolvedOffset(8),
2674 not_taken: BranchTarget::ResolvedOffset(4 - (2000000 + 4)),
2675 };
2676 inst.emit(&mut buf2, &info, &mut state);
2677 let inst = Inst::Jump {
2678 dest: BranchTarget::ResolvedOffset(-(2000000 + 8)),
2679 };
2680 inst.emit(&mut buf2, &info, &mut state);
2681
2682 let buf2 = buf2.finish(&constants, state.ctrl_plane_mut());
2683
2684 assert_eq!(&buf.data[2000000..], &buf2.data[..]);
2685 }
2686
2687 #[test]
2688 fn test_multiple_redirect() {
2689 // label0:
2690 // cbz x0, label1
2691 // b label2
2692 // label1:
2693 // b label3
2694 // label2:
2695 // nop
2696 // nop
2697 // b label0
2698 // label3:
2699 // b label4
2700 // label4:
2701 // b label5
2702 // label5:
2703 // b label7
2704 // label6:
2705 // nop
2706 // label7:
2707 // ret
2708 //
2709 // -- should become:
2710 //
2711 // label0:
2712 // cbz x0, label7
2713 // label2:
2714 // nop
2715 // nop
2716 // b label0
2717 // label6:
2718 // nop
2719 // label7:
2720 // ret
2721
2722 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2723 let mut buf = MachBuffer::new();
2724 let mut state = <Inst as MachInstEmit>::State::default();
2725 let constants = Default::default();
2726
2727 buf.reserve_labels_for_blocks(8);
2728
2729 buf.bind_label(label(0), state.ctrl_plane_mut());
2730 let inst = Inst::CondBr {
2731 kind: CondBrKind::Zero(xreg(0), OperandSize::Size64),
2732 taken: target(1),
2733 not_taken: target(2),
2734 };
2735 inst.emit(&mut buf, &info, &mut state);
2736
2737 buf.bind_label(label(1), state.ctrl_plane_mut());
2738 let inst = Inst::Jump { dest: target(3) };
2739 inst.emit(&mut buf, &info, &mut state);
2740
2741 buf.bind_label(label(2), state.ctrl_plane_mut());
2742 let inst = Inst::Nop4;
2743 inst.emit(&mut buf, &info, &mut state);
2744 inst.emit(&mut buf, &info, &mut state);
2745 let inst = Inst::Jump { dest: target(0) };
2746 inst.emit(&mut buf, &info, &mut state);
2747
2748 buf.bind_label(label(3), state.ctrl_plane_mut());
2749 let inst = Inst::Jump { dest: target(4) };
2750 inst.emit(&mut buf, &info, &mut state);
2751
2752 buf.bind_label(label(4), state.ctrl_plane_mut());
2753 let inst = Inst::Jump { dest: target(5) };
2754 inst.emit(&mut buf, &info, &mut state);
2755
2756 buf.bind_label(label(5), state.ctrl_plane_mut());
2757 let inst = Inst::Jump { dest: target(7) };
2758 inst.emit(&mut buf, &info, &mut state);
2759
2760 buf.bind_label(label(6), state.ctrl_plane_mut());
2761 let inst = Inst::Nop4;
2762 inst.emit(&mut buf, &info, &mut state);
2763
2764 buf.bind_label(label(7), state.ctrl_plane_mut());
2765 let inst = Inst::Ret {};
2766 inst.emit(&mut buf, &info, &mut state);
2767
2768 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2769
2770 let golden_data = vec![
2771 0xa0, 0x00, 0x00, 0xb4, // cbz x0, 0x14
2772 0x1f, 0x20, 0x03, 0xd5, // nop
2773 0x1f, 0x20, 0x03, 0xd5, // nop
2774 0xfd, 0xff, 0xff, 0x17, // b 0
2775 0x1f, 0x20, 0x03, 0xd5, // nop
2776 0xc0, 0x03, 0x5f, 0xd6, // ret
2777 ];
2778
2779 assert_eq!(&golden_data[..], &buf.data[..]);
2780 }
2781
2782 #[test]
2783 fn test_handle_branch_cycle() {
2784 // label0:
2785 // b label1
2786 // label1:
2787 // b label2
2788 // label2:
2789 // b label3
2790 // label3:
2791 // b label4
2792 // label4:
2793 // b label1 // note: not label0 (to make it interesting).
2794 //
2795 // -- should become:
2796 //
2797 // label0, label1, ..., label4:
2798 // b label0
2799 let info = EmitInfo::new(settings::Flags::new(settings::builder()));
2800 let mut buf = MachBuffer::new();
2801 let mut state = <Inst as MachInstEmit>::State::default();
2802 let constants = Default::default();
2803
2804 buf.reserve_labels_for_blocks(5);
2805
2806 buf.bind_label(label(0), state.ctrl_plane_mut());
2807 let inst = Inst::Jump { dest: target(1) };
2808 inst.emit(&mut buf, &info, &mut state);
2809
2810 buf.bind_label(label(1), state.ctrl_plane_mut());
2811 let inst = Inst::Jump { dest: target(2) };
2812 inst.emit(&mut buf, &info, &mut state);
2813
2814 buf.bind_label(label(2), state.ctrl_plane_mut());
2815 let inst = Inst::Jump { dest: target(3) };
2816 inst.emit(&mut buf, &info, &mut state);
2817
2818 buf.bind_label(label(3), state.ctrl_plane_mut());
2819 let inst = Inst::Jump { dest: target(4) };
2820 inst.emit(&mut buf, &info, &mut state);
2821
2822 buf.bind_label(label(4), state.ctrl_plane_mut());
2823 let inst = Inst::Jump { dest: target(1) };
2824 inst.emit(&mut buf, &info, &mut state);
2825
2826 let buf = buf.finish(&constants, state.ctrl_plane_mut());
2827
2828 let golden_data = vec![
2829 0x00, 0x00, 0x00, 0x14, // b 0
2830 ];
2831
2832 assert_eq!(&golden_data[..], &buf.data[..]);
2833 }
2834
2835 #[test]
2836 fn metadata_records() {
2837 let mut buf = MachBuffer::<Inst>::new();
2838 let ctrl_plane = &mut Default::default();
2839 let constants = Default::default();
2840
2841 buf.reserve_labels_for_blocks(3);
2842
2843 buf.bind_label(label(0), ctrl_plane);
2844 buf.put1(1);
2845 buf.add_trap(TrapCode::HEAP_OUT_OF_BOUNDS);
2846 buf.put1(2);
2847 buf.add_trap(TrapCode::INTEGER_OVERFLOW);
2848 buf.add_trap(TrapCode::INTEGER_DIVISION_BY_ZERO);
2849 buf.add_try_call_site(
2850 Some(0x10),
2851 [
2852 MachExceptionHandler::Tag(ExceptionTag::new(42), label(2)),
2853 MachExceptionHandler::Default(label(1)),
2854 ]
2855 .into_iter(),
2856 );
2857 buf.add_reloc(
2858 Reloc::Abs4,
2859 &ExternalName::User(UserExternalNameRef::new(0)),
2860 0,
2861 );
2862 buf.put1(3);
2863 buf.add_reloc(
2864 Reloc::Abs8,
2865 &ExternalName::User(UserExternalNameRef::new(1)),
2866 1,
2867 );
2868 buf.put1(4);
2869 buf.bind_label(label(1), ctrl_plane);
2870 buf.put1(0xff);
2871 buf.bind_label(label(2), ctrl_plane);
2872 buf.put1(0xff);
2873
2874 let buf = buf.finish(&constants, ctrl_plane);
2875
2876 assert_eq!(buf.data(), &[1, 2, 3, 4, 0xff, 0xff]);
2877 assert_eq!(
2878 buf.traps()
2879 .iter()
2880 .map(|trap| (trap.offset, trap.code))
2881 .collect::<Vec<_>>(),
2882 vec![
2883 (1, TrapCode::HEAP_OUT_OF_BOUNDS),
2884 (2, TrapCode::INTEGER_OVERFLOW),
2885 (2, TrapCode::INTEGER_DIVISION_BY_ZERO)
2886 ]
2887 );
2888 let call_sites: Vec<_> = buf.call_sites().collect();
2889 assert_eq!(call_sites[0].ret_addr, 2);
2890 assert_eq!(call_sites[0].frame_offset, Some(0x10));
2891 assert_eq!(
2892 call_sites[0].exception_handlers,
2893 &[
2894 FinalizedMachExceptionHandler::Tag(ExceptionTag::new(42), 5),
2895 FinalizedMachExceptionHandler::Default(4)
2896 ],
2897 );
2898 assert_eq!(
2899 buf.relocs()
2900 .iter()
2901 .map(|reloc| (reloc.offset, reloc.kind))
2902 .collect::<Vec<_>>(),
2903 vec![(2, Reloc::Abs4), (3, Reloc::Abs8)]
2904 );
2905 }
2906}