Struct wasmtime_runtime::VMExternRef

#[repr(transparent)]
pub struct VMExternRef(_);
An external reference to some opaque data.

`VMExternRef`s dereference to their underlying opaque data as `dyn Any`.

Unlike the `externref` in the Wasm spec, `VMExternRef`s are non-nullable, and always point to a valid value. You may use `Option<VMExternRef>` to represent nullable references, and `Option<VMExternRef>` is guaranteed to have the same size and alignment as a raw pointer, with `None` represented by the null pointer.
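That layout guarantee can be checked directly; a minimal sketch:

use wasmtime_runtime::VMExternRef;

// `Option<VMExternRef>` uses the null-pointer niche, so it is pointer-sized
// and pointer-aligned, with `None` encoded as null.
assert_eq!(
    std::mem::size_of::<Option<VMExternRef>>(),
    std::mem::size_of::<*mut u8>(),
);
assert_eq!(
    std::mem::align_of::<Option<VMExternRef>>(),
    std::mem::align_of::<*mut u8>(),
);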
`VMExternRef`s are reference counted, so cloning is a cheap, shallow operation. It also means they are inherently shared, so you may not get a mutable, exclusive reference to their inner contents, only a shared, immutable reference. You may use interior mutability with `RefCell` or `Mutex` to work around this restriction, if necessary.
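A minimal sketch of interior mutability through a shared `VMExternRef` (using `Mutex` rather than `RefCell`, since `VMExternRef::new` below requires `T: Send + Sync`):

use std::sync::Mutex;
use wasmtime_runtime::VMExternRef;

// The inner contents are only reachable through a shared `dyn Any` view, so
// wrap mutable state in a `Mutex` and mutate via interior mutability.
let counter = VMExternRef::new(Mutex::new(0_u32));
let alias = counter.clone(); // shallow, reference-counted copy

if let Some(m) = alias.downcast_ref::<Mutex<u32>>() {
    *m.lock().unwrap() += 1;
}
assert_eq!(*counter.downcast_ref::<Mutex<u32>>().unwrap().lock().unwrap(), 1);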
`VMExternRef`s have pointer-equality semantics, not structural-equality semantics. Given two `VMExternRef`s `a` and `b`, `a == b` only if `a` and `b` point to the same allocation. Even if `a` and `b` are two identical copies of the same data, they are considered unequal when they live in two different allocations. The hashing and ordering implementations likewise operate only on the pointer.
Example

use wasmtime_runtime::VMExternRef;

// Open a file. Wasm doesn't know about files, but we can let Wasm instances
// work with files via opaque `externref` handles.
let file = std::fs::File::create("some/file/path")?;

// Wrap the file up as a `VMExternRef` that can be passed to Wasm.
let extern_ref_to_file = VMExternRef::new(file);

// `VMExternRef`s dereference to `dyn Any`, so you can use `Any` methods to
// perform runtime type checks and downcasts.
assert!(extern_ref_to_file.is::<std::fs::File>());
assert!(!extern_ref_to_file.is::<String>());

if let Some(mut file) = extern_ref_to_file.downcast_ref::<std::fs::File>() {
    use std::io::Write;
    writeln!(&mut file, "Hello, `VMExternRef`!")?;
}
Implementations

impl VMExternRef

pub fn new<T>(value: T) -> VMExternRef
where
    T: 'static + Any + Send + Sync,

Wrap the given value inside a `VMExternRef`.
pub fn new_with<T>(make_value: impl FnOnce() -> T) -> VMExternRef
where
    T: 'static + Any + Send + Sync,

Construct a new `VMExternRef` in place by invoking `make_value`.
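A minimal usage sketch:

use wasmtime_runtime::VMExternRef;

// `make_value` is invoked once to initialize the allocation in place.
let r = VMExternRef::new_with(|| vec![1_u8, 2, 3]);
assert!(r.is::<Vec<u8>>());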
pub fn as_raw(&self) -> *mut u8

Turn this `VMExternRef` into a raw, untyped pointer.

Unlike `into_raw`, this does not consume and forget `self`. It is not safe to use `from_raw` on pointers returned from this method; only use `clone_from_raw`!

Nor does this method increment the reference count. You must ensure that `self` (or some other clone of `self`) stays alive until `clone_from_raw` is called.
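A sketch of the intended `as_raw`/`clone_from_raw` pairing, assuming the original reference is kept alive across the round trip:

use wasmtime_runtime::VMExternRef;

let r = VMExternRef::new(42_u32);
assert_eq!(r.strong_count(), 1);

// `as_raw` neither consumes `r` nor bumps the reference count, so `r` must
// outlive every use of `raw`.
let raw = r.as_raw();

// Resurrect the pointer with `clone_from_raw` (not `from_raw`), which does
// increment the reference count.
let r2 = unsafe { VMExternRef::clone_from_raw(raw) };
assert_eq!(r.strong_count(), 2);
assert!(VMExternRef::eq(&r, &r2));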
Examples found in repository
unsafe fn externref_global_get(vmctx: *mut VMContext, index: u32) -> *mut u8 {
    let index = GlobalIndex::from_u32(index);
    let instance = (*vmctx).instance();
    let global = instance.defined_or_imported_global_ptr(index);
    match (*global).as_externref().clone() {
        None => ptr::null_mut(),
        Some(externref) => {
            let raw = externref.as_raw();
            let (activations_table, module_info_lookup) =
                (*instance.store()).externref_activations_table();
            activations_table.insert_with_gc(externref, module_info_lookup);
            raw
        }
    }
}
More examples
pub unsafe fn gc(
    module_info_lookup: &dyn ModuleInfoLookup,
    externref_activations_table: &mut VMExternRefActivationsTable,
) {
    log::debug!("start GC");

    #[cfg(debug_assertions)]
    assert!(externref_activations_table.gc_okay);

    debug_assert!({
        // This set is only non-empty within this function. It is built up when
        // walking the stack and interpreting stack maps, and then drained back
        // into the activations table's bump-allocated space at the
        // end. Therefore, it should always be empty upon entering this
        // function.
        externref_activations_table.precise_stack_roots.is_empty()
    });

    // This function proceeds by:
    //
    // * walking the stack,
    //
    // * finding the precise set of roots inside Wasm frames via our stack maps,
    //   and
    //
    // * resetting our bump-allocated table's over-approximation to the
    //   newly-discovered precise set.

    // The `activations_table_set` is used for `debug_assert!`s checking that
    // every reference we read out from the stack via stack maps is actually in
    // the table. If that weren't true, then either we forgot to insert a
    // reference in the table when passing it into Wasm (a bug) or we are
    // reading invalid references from the stack (another bug).
    let mut activations_table_set: DebugOnly<HashSet<_>> = Default::default();
    if cfg!(debug_assertions) {
        externref_activations_table.elements(|elem| {
            activations_table_set.insert(elem.as_raw() as *mut VMExternData);
        });
    }

    log::trace!("begin GC trace");
    Backtrace::trace(|frame| {
        let pc = frame.pc();
        debug_assert!(pc != 0, "we should always get a valid PC for Wasm frames");

        let fp = frame.fp();
        debug_assert!(
            fp != 0,
            "we should always get a valid frame pointer for Wasm frames"
        );

        let module_info = module_info_lookup
            .lookup(pc)
            .expect("should have module info for Wasm frame");

        let stack_map = match module_info.lookup_stack_map(pc) {
            Some(sm) => sm,
            None => {
                log::trace!("No stack map for this Wasm frame");
                return std::ops::ControlFlow::Continue(());
            }
        };
        log::trace!(
            "We have a stack map that maps {} words in this Wasm frame",
            stack_map.mapped_words()
        );

        let sp = fp - stack_map.mapped_words() as usize * mem::size_of::<usize>();

        for i in 0..(stack_map.mapped_words() as usize) {
            // Stack maps have one bit per word in the frame, and the
            // zero^th bit is the *lowest* addressed word in the frame,
            // i.e. the closest to the SP. So to get the `i`^th word in
            // this frame, we add `i * sizeof(word)` to the SP.
            let stack_slot = sp + i * mem::size_of::<usize>();

            if !stack_map.get_bit(i) {
                log::trace!(
                    "Stack slot @ {:p} does not contain externrefs",
                    stack_slot as *const (),
                );
                continue;
            }

            let stack_slot = stack_slot as *const *mut VMExternData;
            let r = std::ptr::read(stack_slot);
            log::trace!("Stack slot @ {:p} = {:p}", stack_slot, r);

            debug_assert!(
                r.is_null() || activations_table_set.contains(&r),
                "every on-stack externref inside a Wasm frame should \
                 have an entry in the VMExternRefActivationsTable; \
                 {:?} is not in the table",
                r
            );

            if let Some(r) = NonNull::new(r) {
                VMExternRefActivationsTable::insert_precise_stack_root(
                    &mut externref_activations_table.precise_stack_roots,
                    r,
                );
            }
        }

        std::ops::ControlFlow::Continue(())
    });
    log::trace!("end GC trace");

    externref_activations_table.sweep();

    log::debug!("end GC");
}
pub unsafe fn into_raw(self) -> *mut u8

Consume this `VMExternRef` into a raw, untyped pointer.

Safety

This method forgets `self`, so it is possible to leak the underlying reference-counted data if not used carefully.

Use `from_raw` to recreate the `VMExternRef`.
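A sketch of the `into_raw`/`from_raw` round trip, which transfers ownership through the raw pointer without touching the reference count:

use wasmtime_runtime::VMExternRef;

let r = VMExternRef::new(String::from("hi"));

// `into_raw` consumes and forgets `r`; the single strong reference is now
// "owned" by the raw pointer.
let raw = unsafe { r.into_raw() };

// `from_raw` takes that ownership back without incrementing the count. The
// raw pointer must not be used again afterwards.
let r = unsafe { VMExternRef::from_raw(raw) };
assert_eq!(r.strong_count(), 1);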
Examples found in repository
unsafe fn into_table_value(self) -> usize {
    match self {
        Self::UninitFunc => 0,
        Self::FuncRef(e) => (e as usize) | FUNCREF_INIT_BIT,
        Self::ExternRef(e) => e.map_or(0, |e| e.into_raw() as usize),
    }
}

/// Consumes a table element into a pointer/reference, as it
/// exists outside the table itself. This strips off any tag bits
/// or other information that only lives inside the table.
///
/// Can only be done to an initialized table element; lazy init
/// must occur first. (In other words, lazy values do not survive
/// beyond the table, as every table read path initializes them.)
///
/// # Safety
///
/// The same warnings as for `into_table_values()` apply.
pub(crate) unsafe fn into_ref_asserting_initialized(self) -> usize {
    match self {
        Self::FuncRef(e) => e as usize,
        Self::ExternRef(e) => e.map_or(0, |e| e.into_raw() as usize),
        Self::UninitFunc => panic!("Uninitialized table element value outside of table slot"),
    }
}
pub unsafe fn from_raw(ptr: *mut u8) -> Self

Recreate a `VMExternRef` from a pointer returned from a previous call to `into_raw`.

Safety

Unlike `clone_from_raw`, this does not increment the reference count of the underlying data. It is not safe to continue to use the pointer passed to this function.
pub unsafe fn clone_from_raw(ptr: *mut u8) -> Self

Recreate a `VMExternRef` from a pointer returned from a previous call to `as_raw`.

Safety

Wildly unsafe to use with anything other than the result of a previous `as_raw` call!

Additionally, it is your responsibility to ensure that this raw `VMExternRef`'s reference count has not dropped to zero. Failure to do so will result in use after free!
Examples found in repository
unsafe fn table_grow(
    vmctx: *mut VMContext,
    table_index: u32,
    delta: u32,
    // NB: we don't know whether this is a pointer to a `VMCallerCheckedAnyfunc`
    // or is a `VMExternRef` until we look at the table type.
    init_value: *mut u8,
) -> Result<u32> {
    let instance = (*vmctx).instance_mut();
    let table_index = TableIndex::from_u32(table_index);
    let element = match instance.table_element_type(table_index) {
        TableElementType::Func => (init_value as *mut VMCallerCheckedAnyfunc).into(),
        TableElementType::Extern => {
            let init_value = if init_value.is_null() {
                None
            } else {
                Some(VMExternRef::clone_from_raw(init_value))
            };
            init_value.into()
        }
    };
    Ok(match instance.table_grow(table_index, delta, element)? {
        Some(r) => r,
        None => -1_i32 as u32,
    })
}

use table_grow as table_grow_funcref;
use table_grow as table_grow_externref;

// Implementation of `table.fill`.
unsafe fn table_fill(
    vmctx: *mut VMContext,
    table_index: u32,
    dst: u32,
    // NB: we don't know whether this is a `VMExternRef` or a pointer to a
    // `VMCallerCheckedAnyfunc` until we look at the table's element type.
    val: *mut u8,
    len: u32,
) -> Result<(), Trap> {
    let instance = (*vmctx).instance_mut();
    let table_index = TableIndex::from_u32(table_index);
    let table = &mut *instance.get_table(table_index);
    match table.element_type() {
        TableElementType::Func => {
            let val = val as *mut VMCallerCheckedAnyfunc;
            table.fill(dst, val.into(), len)
        }
        TableElementType::Extern => {
            let val = if val.is_null() {
                None
            } else {
                Some(VMExternRef::clone_from_raw(val))
            };
            table.fill(dst, val.into(), len)
        }
    }
}

use table_fill as table_fill_funcref;
use table_fill as table_fill_externref;

// Implementation of `table.copy`.
unsafe fn table_copy(
    vmctx: *mut VMContext,
    dst_table_index: u32,
    src_table_index: u32,
    dst: u32,
    src: u32,
    len: u32,
) -> Result<(), Trap> {
    let dst_table_index = TableIndex::from_u32(dst_table_index);
    let src_table_index = TableIndex::from_u32(src_table_index);
    let instance = (*vmctx).instance_mut();
    let dst_table = instance.get_table(dst_table_index);
    // Lazy-initialize the whole range in the source table first.
    let src_range = src..(src.checked_add(len).unwrap_or(u32::MAX));
    let src_table = instance.get_table_with_lazy_init(src_table_index, src_range);
    Table::copy(dst_table, src_table, dst, src, len)
}

// Implementation of `table.init`.
unsafe fn table_init(
    vmctx: *mut VMContext,
    table_index: u32,
    elem_index: u32,
    dst: u32,
    src: u32,
    len: u32,
) -> Result<(), Trap> {
    let table_index = TableIndex::from_u32(table_index);
    let elem_index = ElemIndex::from_u32(elem_index);
    let instance = (*vmctx).instance_mut();
    instance.table_init(table_index, elem_index, dst, src, len)
}

// Implementation of `elem.drop`.
unsafe fn elem_drop(vmctx: *mut VMContext, elem_index: u32) {
    let elem_index = ElemIndex::from_u32(elem_index);
    let instance = (*vmctx).instance_mut();
    instance.elem_drop(elem_index);
}

// Implementation of `memory.copy` for locally defined memories.
unsafe fn memory_copy(
    vmctx: *mut VMContext,
    dst_index: u32,
    dst: u64,
    src_index: u32,
    src: u64,
    len: u64,
) -> Result<(), Trap> {
    let src_index = MemoryIndex::from_u32(src_index);
    let dst_index = MemoryIndex::from_u32(dst_index);
    let instance = (*vmctx).instance_mut();
    instance.memory_copy(dst_index, dst, src_index, src, len)
}

// Implementation of `memory.fill` for locally defined memories.
unsafe fn memory_fill(
    vmctx: *mut VMContext,
    memory_index: u32,
    dst: u64,
    val: u32,
    len: u64,
) -> Result<(), Trap> {
    let memory_index = MemoryIndex::from_u32(memory_index);
    let instance = (*vmctx).instance_mut();
    instance.memory_fill(memory_index, dst, val as u8, len)
}

// Implementation of `memory.init`.
unsafe fn memory_init(
    vmctx: *mut VMContext,
    memory_index: u32,
    data_index: u32,
    dst: u64,
    src: u32,
    len: u32,
) -> Result<(), Trap> {
    let memory_index = MemoryIndex::from_u32(memory_index);
    let data_index = DataIndex::from_u32(data_index);
    let instance = (*vmctx).instance_mut();
    instance.memory_init(memory_index, data_index, dst, src, len)
}

// Implementation of `ref.func`.
unsafe fn ref_func(vmctx: *mut VMContext, func_index: u32) -> *mut u8 {
    let instance = (*vmctx).instance_mut();
    let anyfunc = instance
        .get_caller_checked_anyfunc(FuncIndex::from_u32(func_index))
        .expect("ref_func: caller_checked_anyfunc should always be available for given func index");
    anyfunc as *mut _
}

// Implementation of `data.drop`.
unsafe fn data_drop(vmctx: *mut VMContext, data_index: u32) {
    let data_index = DataIndex::from_u32(data_index);
    let instance = (*vmctx).instance_mut();
    instance.data_drop(data_index)
}

// Returns a table entry after lazily initializing it.
unsafe fn table_get_lazy_init_funcref(
    vmctx: *mut VMContext,
    table_index: u32,
    index: u32,
) -> *mut u8 {
    let instance = (*vmctx).instance_mut();
    let table_index = TableIndex::from_u32(table_index);
    let table = instance.get_table_with_lazy_init(table_index, std::iter::once(index));
    let elem = (*table)
        .get(index)
        .expect("table access already bounds-checked");
    elem.into_ref_asserting_initialized() as *mut _
}

// Drop a `VMExternRef`.
unsafe fn drop_externref(_vmctx: *mut VMContext, externref: *mut u8) {
    let externref = externref as *mut crate::externref::VMExternData;
    let externref = NonNull::new(externref).unwrap();
    crate::externref::VMExternData::drop_and_dealloc(externref);
}

// Do a GC and insert the given `externref` into the
// `VMExternRefActivationsTable`.
unsafe fn activations_table_insert_with_gc(vmctx: *mut VMContext, externref: *mut u8) {
    let externref = VMExternRef::clone_from_raw(externref);
    let instance = (*vmctx).instance();
    let (activations_table, module_info_lookup) = (*instance.store()).externref_activations_table();

    // Invariant: all `externref`s on the stack have an entry in the activations
    // table. So we need to ensure that this `externref` is in the table
    // *before* we GC, even though `insert_with_gc` will ensure that it is in
    // the table *after* the GC. This technically results in one more hash table
    // look up than is strictly necessary -- which we could avoid by having an
    // additional GC method that is aware of these GC-triggering references --
    // but it isn't really a concern because this is already a slow path.
    activations_table.insert_without_gc(externref.clone());

    activations_table.insert_with_gc(externref, module_info_lookup);
}

// Perform a Wasm `global.get` for `externref` globals.
unsafe fn externref_global_get(vmctx: *mut VMContext, index: u32) -> *mut u8 {
    let index = GlobalIndex::from_u32(index);
    let instance = (*vmctx).instance();
    let global = instance.defined_or_imported_global_ptr(index);
    match (*global).as_externref().clone() {
        None => ptr::null_mut(),
        Some(externref) => {
            let raw = externref.as_raw();
            let (activations_table, module_info_lookup) =
                (*instance.store()).externref_activations_table();
            activations_table.insert_with_gc(externref, module_info_lookup);
            raw
        }
    }
}

// Perform a Wasm `global.set` for `externref` globals.
unsafe fn externref_global_set(vmctx: *mut VMContext, index: u32, externref: *mut u8) {
    let externref = if externref.is_null() {
        None
    } else {
        Some(VMExternRef::clone_from_raw(externref))
    };
    let index = GlobalIndex::from_u32(index);
    let instance = (*vmctx).instance();
    let global = instance.defined_or_imported_global_ptr(index);

    // Swap the new `externref` value into the global before we drop the old
    // value. This protects against an `externref` with a `Drop` implementation
    // that calls back into Wasm and touches this global again (we want to avoid
    // it observing a halfway-deinitialized value).
    let old = mem::replace((*global).as_externref_mut(), externref);
    drop(old);
}
pub fn strong_count(&self) -> usize

Get the strong reference count for this `VMExternRef`.

Note that this loads with a `SeqCst` ordering to synchronize with other threads.
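A minimal sketch of how the count is expected to track clones and drops:

use wasmtime_runtime::VMExternRef;

let r = VMExternRef::new(0_u8);
assert_eq!(r.strong_count(), 1);

let r2 = r.clone(); // shallow copy bumps the count
assert_eq!(r.strong_count(), 2);

drop(r2); // dropping a clone decrements it again
assert_eq!(r.strong_count(), 1);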
impl VMExternRef

Methods that would normally be trait implementations, but aren't, to avoid potential footguns around `VMExternRef`'s pointer-equality semantics.

Note that none of these methods are invoked on `&self`; they all require a fully-qualified `VMExternRef::foo(my_ref)` invocation.
pub fn eq(a: &Self, b: &Self) -> bool

Check whether two `VMExternRef`s point to the same inner allocation.

Note that this uses pointer-equality semantics, not structural-equality semantics: only the pointers are compared, without using any `Eq` or `PartialEq` implementation of the pointed-to values.
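A sketch of the pointer-equality behavior described above:

use wasmtime_runtime::VMExternRef;

let a = VMExternRef::new(String::from("hello"));
let b = VMExternRef::new(String::from("hello"));
let c = a.clone();

// Identical contents in two different allocations: not equal.
assert!(!VMExternRef::eq(&a, &b));
// A clone shares the allocation with the original: equal.
assert!(VMExternRef::eq(&a, &c));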
Trait Implementations

impl Clone for VMExternRef

fn clone(&self) -> VMExternRef

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.