Crate zeroize

Securely zero memory using core or OS intrinsics. This crate wraps facilities specifically designed to securely zero memory in a common, safe API: Zeroize.

Usage

extern crate zeroize;
use zeroize::Zeroize;

fn main() {
    // Protip: don't embed secrets in your source code.
    // This is just an example.
    let mut secret = b"Air shield password: 1,2,3,4,5".to_vec();
    // [ ... ] open the air shield here

    // Now that we're done using the secret, zero it out.
    secret.zeroize();
}

The Zeroize trait is impl’d on all of Rust’s core scalar types including integers, floats, bool, and char.

Additionally, it’s implemented on slices and IterMuts of the above types.

When the std feature is enabled (which it is by default), it’s also impl’d for Vecs of the above types as well as String, where it provides Vec::clear() / String::clear()-like behavior (truncating to zero-length) but ensures the backing memory is securely zeroed.
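That documented behavior can be sketched in pure Rust. The `zeroize_vec` helper below is a hypothetical illustration (not the crate's actual impl): the used portion of the buffer is zeroed with volatile writes, then the collection is truncated like `Vec::clear()`:

```rust
use core::ptr;
use core::sync::atomic::{compiler_fence, Ordering};

/// Illustrative sketch only: zero the occupied bytes with volatile
/// writes, fence, then truncate to zero length (capacity is retained).
fn zeroize_vec(v: &mut Vec<u8>) {
    for byte in v.iter_mut() {
        // Volatile write: the store cannot be optimized away.
        unsafe { ptr::write_volatile(byte, 0) };
    }
    compiler_fence(Ordering::SeqCst);
    v.clear(); // len becomes 0; the zeroed backing memory stays allocated
}

fn main() {
    let mut secret = b"1,2,3,4,5".to_vec();
    let cap = secret.capacity();
    zeroize_vec(&mut secret);
    assert!(secret.is_empty());
    assert_eq!(secret.capacity(), cap);
}
```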

The ZeroizeWithDefault marker trait can be impl’d on types which also impl Default; it provides a Zeroize impl that overwrites the value with Default::default().
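The relationship between the two traits can be sketched with local stand-ins (the trait definitions and the `Nonce` type below are illustrative, not the crate's exact code):

```rust
use core::ptr;
use core::sync::atomic::{compiler_fence, Ordering};

// Local stand-ins for the crate's traits, to show the mechanism.
trait Zeroize {
    fn zeroize(&mut self);
}
trait ZeroizeWithDefault: Default {}

// Blanket impl: any ZeroizeWithDefault type is zeroized by
// volatile-writing its Default value over the old one. (Sketch only;
// this skips Drop of the old value, so it suits plain-data types.)
impl<T: ZeroizeWithDefault> Zeroize for T {
    fn zeroize(&mut self) {
        unsafe { ptr::write_volatile(self, T::default()) };
        compiler_fence(Ordering::SeqCst);
    }
}

#[derive(Default, Debug, PartialEq)]
struct Nonce(u64);
impl ZeroizeWithDefault for Nonce {}

fn main() {
    let mut nonce = Nonce(0x1234_5678);
    nonce.zeroize();
    assert_eq!(nonce, Nonce::default());
}
```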

About

Zeroing memory securely is hard - compilers optimize for performance, and in doing so they love to “optimize away” unnecessary zeroing calls. There are many documented “tricks” to attempt to avoid these optimizations and ensure that a zeroing routine is performed reliably.

This crate isn’t about tricks: it uses core::ptr::write_volatile and core::sync::atomic memory fences to provide easy-to-use, portable zeroing behavior which works on all of Rust’s core number types and slices thereof, implemented in pure Rust with no usage of FFI or assembly.

  • No insecure fallbacks!
  • No dependencies!
  • No FFI or inline assembly!
  • #![no_std] i.e. embedded-friendly!
  • No functionality besides securely zeroing memory!
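The write_volatile-plus-fence approach described above can be sketched in pure Rust; `zeroize_slice` below is a hypothetical illustration of the pattern, not the crate's exact implementation:

```rust
use core::ptr;
use core::sync::atomic::{compiler_fence, Ordering};

/// Zero a byte slice with volatile writes so the compiler cannot elide
/// the stores, then fence so later accesses are not reordered ahead of
/// the zeroing.
fn zeroize_slice(bytes: &mut [u8]) {
    for byte in bytes.iter_mut() {
        unsafe { ptr::write_volatile(byte, 0) };
    }
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut secret = *b"hunter2";
    zeroize_slice(&mut secret);
    assert!(secret.iter().all(|&b| b == 0));
}
```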

What guarantees does this crate provide?

Ideally a secure memory-zeroing function would guarantee the following:

  1. Ensure the zeroing operation can’t be “optimized away” by the compiler.
  2. Ensure all subsequent reads to the memory following the zeroing operation will always see zeroes.

This crate guarantees #1 is true: LLVM’s volatile semantics ensure it.

The story around #2 is much more complicated. In brief, it should be true that LLVM’s current implementation does not attempt to perform optimizations which would allow a subsequent (non-volatile) read to see the original value prior to zeroization. However, this is not a guarantee, but rather an LLVM implementation detail.

For more background, we can look to the core::ptr::write_volatile documentation:

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

Memory accessed with read_volatile or write_volatile should not be accessed with non-volatile operations.

Uhoh! This crate does not guarantee all reads to the memory it operates on are volatile, and the documentation for core::ptr::write_volatile explicitly warns against mixing volatile and non-volatile operations. Perhaps we’d be better off with something like a VolatileCell type which owns the associated data and ensures all reads and writes are volatile so we don’t have to worry about the semantics of mixing volatile and non-volatile accesses.

While that’s a strategy worth pursuing (and something we may investigate separately from this crate), it comes with an onerous API requirement: any data we might ever desire to zero must be owned by a VolatileCell. Such an approach cannot act on references, which severely limits its applicability. In fact, a VolatileCell can only act on values: to read a value out of it, we’d need to make a copy, which is literally the opposite of what we want.

It’s worth asking what the precise semantics of mixing volatile and non-volatile reads actually are, and whether a less obtrusive API which can act entirely on mutable references is possible, safe, and provides the desired behavior.

Unfortunately, that’s a tricky question, because Rust does not have a formally defined memory model, and the behavior of mixing volatile and non-volatile memory accesses is therefore not rigorously specified and winds up being an LLVM implementation detail. The semantics were discussed extensively in this thread, specifically in the context of zeroing secrets from memory:

https://internals.rust-lang.org/t/volatile-and-sensitive-memory/3188/24

Some notable details from this thread:

  • Rust/LLVM’s notion of “volatile” is centered around data accesses, not the data itself. Specifically it maps to flags in LLVM IR which control the behavior of the optimizer, and is therefore a bit different from the typical C notion of “volatile”.
  • As mentioned earlier, LLVM does not presently perform optimizations which would reorder a non-volatile read to occur before a volatile write when the original code has the opposite ordering. However, nothing precludes such optimizations from being added. The current implementation appears to exhibit the desired behavior for both points #1 and #2 above, but nothing prevents future versions of Rust and/or LLVM from changing that.

To help mitigate concerns about reordering potentially exposing secrets after they have been zeroed, this crate leverages the core::sync::atomic memory fence functions including compiler_fence and fence (which uses the CPU’s native fence instructions). These fences are leveraged with the strictest ordering guarantees, Ordering::SeqCst, which ensures no accesses are reordered. Without a formally defined memory model we can’t guarantee these will be effective, but we hope they will cover most cases.
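Both fence flavors are available in core::sync::atomic; a minimal sketch of using them after a volatile write:

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, fence, Ordering};

fn main() {
    let mut secret: u64 = 0xDEAD_BEEF;
    // Volatile write: the store itself cannot be elided.
    unsafe { ptr::write_volatile(&mut secret, 0) };
    // Constrain reordering by the optimizer...
    compiler_fence(Ordering::SeqCst);
    // ...and also emit the CPU's native fence instruction.
    fence(Ordering::SeqCst);
    assert_eq!(secret, 0);
}
```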

Concretely, the threat of leaking “zeroized” secrets (via reordering by LLVM and/or by the CPU via out-of-order or speculative execution) would require a non-volatile access to be reordered ahead of all of the following:

  1. an Ordering::SeqCst compiler fence
  2. an Ordering::SeqCst runtime fence
  3. a volatile write

This seems unlikely, but our usage of mixed non-volatile and volatile accesses is technically undefined behavior, at least until guarantees about this particular mixture of operations are formally defined in a Rust memory model.

Furthermore, given the recent history of microarchitectural attacks (Spectre, Meltdown, etc.), there is also potential for “zeroized” secrets to be leaked through covert channels (e.g. memory fences have been used as a covert channel), so we are wary of making guarantees unless they can be made firmly in terms of both a formal Rust memory model and the generated code for a particular CPU architecture.

In conclusion, this crate guarantees that the zeroize operation will not be elided or “optimized away”, and it makes a “best effort” to ensure that memory accesses will not be reordered ahead of the “zeroize” operation, but it cannot yet guarantee that such reordering will never occur.

Stack/Heap Zeroing Notes

This crate can be used to zero values from either the stack or the heap.

However, be aware that Rust’s current memory semantics (e.g. Copy types) can leave copies of data in memory, and there isn’t presently a good solution for ensuring all copies of data on the stack are properly cleared.

The Pin RFC proposes a method for avoiding this.

What about: clearing registers, mlock, mprotect, etc?

This crate is laser-focused on being a simple, unobtrusive crate for zeroing memory in as reliable a manner as is possible on stable Rust.

Clearing registers is a difficult problem that can’t easily be solved by something like a crate, and requires either inline ASM or rustc support. See https://github.com/rust-lang/rust/issues/17046 for background on this particular problem.

Other memory protection mechanisms are interesting and useful, but often overkill (e.g. defending against RAM scraping or attackers with swap access). In as much as there may be merit to these approaches, there are also many other crates that already implement more sophisticated memory protections. Such protections are explicitly out-of-scope for this crate.

Zeroing memory is good cryptographic hygiene and this crate seeks to promote it in the most unobtrusive manner possible. This includes omitting complex unsafe memory protection systems and just trying to make the best memory zeroing crate available.

Traits

Zeroize
Trait for securely erasing types from memory.
ZeroizeWithDefault
Marker trait for types which can be zeroized with the Default value.
