Trait proptest::strategy::Strategy

pub trait Strategy: Debug {
    type Value: ValueTree;

    fn new_value(&self, runner: &mut TestRunner) -> Result<Self::Value, String>;

    fn prop_map<O: Debug, F: Fn(<Self::Value as ValueTree>::Value) -> O>(
        self,
        fun: F
    ) -> Map<Self, F>
        where Self: Sized
    { ... }
    fn prop_flat_map<S: Strategy, F: Fn(<Self::Value as ValueTree>::Value) -> S>(
        self,
        fun: F
    ) -> Flatten<Map<Self, F>>
        where Self: Sized
    { ... }
    fn prop_ind_flat_map<S: Strategy, F: Fn(<Self::Value as ValueTree>::Value) -> S>(
        self,
        fun: F
    ) -> IndFlatten<Map<Self, F>>
        where Self: Sized
    { ... }
    fn prop_ind_flat_map2<S: Strategy, F: Fn(<Self::Value as ValueTree>::Value) -> S>(
        self,
        fun: F
    ) -> IndFlattenMap<Self, F>
        where Self: Sized
    { ... }
    fn prop_filter<F: Fn(&<Self::Value as ValueTree>::Value) -> bool>(
        self,
        whence: String,
        fun: F
    ) -> Filter<Self, F>
        where Self: Sized
    { ... }
    fn prop_union(self, other: Self) -> Union<Self>
        where Self: Sized
    { ... }
    fn prop_recursive<F: Fn(Arc<BoxedStrategy<<Self::Value as ValueTree>::Value>>) -> BoxedStrategy<<Self::Value as ValueTree>::Value>>(
        self,
        depth: u32,
        desired_size: u32,
        expected_branch_size: u32,
        recurse: F
    ) -> Recursive<BoxedStrategy<<Self::Value as ValueTree>::Value>, F>
        where Self: Sized + 'static
    { ... }
    fn boxed(self) -> BoxedStrategy<<Self::Value as ValueTree>::Value>
        where Self: Sized + 'static
    { ... }
    fn no_shrink(self) -> NoShrink<Self>
        where Self: Sized
    { ... }
}

A strategy for producing arbitrary values of a given type.

fmt::Debug is a hard requirement for all strategies currently due to prop_flat_map(). This constraint will be removed when specialisation becomes stable.

Associated Types

The value tree generated by this Strategy.

This also implicitly describes the ultimate value type governed by the Strategy.

Required Methods

Generate a new value tree from the given runner.

This may fail if there are constraints on the generated value and the generator is unable to produce anything that satisfies them. Any failure is wrapped in TestError::Abort.

Provided Methods

Returns a strategy which produces values transformed by the function fun.

There is no need (or possibility, for that matter) to define how the output is to be shrunken. Shrinking continues to take place in terms of the source value.
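To make the "shrinking stays in terms of the source value" point concrete, here is a toy model in plain Rust. The `MappedTree` type and its simplify-by-halving rule are illustrative inventions, not proptest's real `Map` implementation: the key idea is simply that the tree stores the *source* value and re-applies the mapping function after each simplification step, so the output is never shrunk directly.

```rust
// Toy model of prop_map's shrinking behaviour (illustrative only,
// not proptest's real types): the mapped tree keeps the *source*
// value and re-applies the mapping function after every
// simplification step, so shrinking never operates on the output.
struct MappedTree<F: Fn(u32) -> String> {
    source: u32, // shrinking operates on this
    fun: F,      // the map is merely re-applied
}

impl<F: Fn(u32) -> String> MappedTree<F> {
    fn current(&self) -> String {
        (self.fun)(self.source)
    }

    // Simplify the source; the mapped output follows automatically.
    fn simplify(&mut self) -> bool {
        if self.source == 0 {
            return false;
        }
        self.source /= 2;
        true
    }
}

fn main() {
    let mut tree = MappedTree { source: 100, fun: |n| format!("value={}", n) };
    assert_eq!(tree.current(), "value=100");
    tree.simplify(); // shrinks the source 100 -> 50
    assert_eq!(tree.current(), "value=50");
}
```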

Maps values produced by this strategy into new strategies and picks values from those strategies.

fun is used to transform the values produced by this strategy into other strategies. Values are then chosen from the derived strategies. Shrinking proceeds by shrinking individual values as well as shrinking the input used to generate the internal strategies.


In the case of test failure, shrinking will not only shrink the output from the combinator itself, but also the input, i.e., the strategy used to generate the output itself. Doing this requires searching the new derived strategy for a new failing input. The combinator will generate up to Config::cases values for this search.

As a result, nested prop_flat_map/Flatten combinators risk exponential run time on this search for new failing values. To ensure that test failures occur within a reasonable amount of time, all of these combinators share a single "flat map regen" counter, and will stop generating new values if it exceeds Config::max_flat_map_regens.


Generate two integers, where the second is always less than the first, without using filtering:

#[macro_use] extern crate proptest;

use proptest::prelude::*;

proptest! {
  fn test_two(
    // Pick integers in the 1..65536 range, and derive a strategy
    // which emits a tuple of that integer and another one which is
    // some value less than it.
    (a, b) in (1..65536).prop_flat_map(|a| (Just(a), 0..a))
  ) {
    prop_assert!(b < a);
  }
}

Choosing the right flat-map

Strategy has three "flat-map" combinators. They look very similar at first, and can be used to produce superficially identical test results. For example, the following three expressions all produce inputs which are 2-tuples (a,b) where the b component is less than a.

use proptest::prelude::*;

let flat_map = (1..10).prop_flat_map(|a| (Just(a), 0..a));
let ind_flat_map = (1..10).prop_ind_flat_map(|a| (Just(a), 0..a));
let ind_flat_map2 = (1..10).prop_ind_flat_map2(|a| 0..a);

The three do differ however in terms of how they shrink.

For flat_map, both a and b will shrink, and the invariant that b < a is maintained. This is a "dependent" or "higher-order" strategy in that it remembers that the strategy for choosing b is dependent on the value chosen for a.

For ind_flat_map, the invariant b < a is maintained, but only because a does not shrink. This is due to the fact that the dependency between the strategies is not tracked; a is simply seen as a constant.

Finally, for ind_flat_map2, the invariant b < a is not maintained, because a can shrink independently of b, again because the dependency between the two variables is not tracked, but in this case the derivation of a is still exposed to the shrinking system.

The use-cases for the independent flat-map variants are fairly narrow. For the majority of cases where invariants need to be maintained and you want all components to shrink, prop_flat_map is the way to go. prop_ind_flat_map makes the most sense when the input to the map function is not exposed in the output and shrinking across strategies is not expected to be useful. prop_ind_flat_map2 is useful for using related values as starting points while not constraining them to that relation.
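The shrinking rules described above can be caricatured in plain Rust. The three `shrink_*` functions below are a deliberately simplified toy simulation (halve-the-value shrinking, single step), not proptest's real shrinking algorithm; they only encode which values are allowed to move and whether the dependency between `a` and `b` is tracked.

```rust
// Toy, single-step simulation of how the three flat-map variants
// shrink the pair (a, b) with b < a initially. Not proptest's real
// algorithm; only the dependency-tracking rules are modelled.

fn shrink_flat_map(a: i32, b: i32) -> (i32, i32) {
    // Dependent: when `a` shrinks, `b` is re-drawn from 0..a,
    // so b < a is preserved by construction.
    let a2 = a / 2;
    (a2, b.min(a2.saturating_sub(1)).max(0))
}

fn shrink_ind_flat_map(a: i32, b: i32) -> (i32, i32) {
    // Independent: `a` is treated as a constant; only `b` shrinks,
    // so b < a survives only because `a` never moves.
    (a, b / 2)
}

fn shrink_ind_flat_map2(a: i32, b: i32) -> (i32, i32) {
    // Independent, but `a` is exposed to shrinking too; it can move
    // without `b` following, so b < a may be violated.
    (a / 2, b)
}

fn main() {
    let (a, b) = (9, 7);
    let (a1, b1) = shrink_flat_map(a, b);
    assert!(b1 < a1); // invariant kept: (4, 3)
    let (a2, b2) = shrink_ind_flat_map(a, b);
    assert_eq!(a2, 9); // `a` never shrinks
    assert!(b2 < a2);
    let (a3, b3) = shrink_ind_flat_map2(a, b);
    assert!(b3 >= a3); // (4, 7): invariant broken
}
```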

Maps values produced by this strategy into new strategies and picks values from those strategies while considering the new strategies to be independent.

This is very similar to prop_flat_map(), but shrinking will not attempt to shrink the input that produces the derived strategies. This is appropriate for when the derived strategies already fully shrink in the desired way.

In most cases, you want prop_flat_map().

See prop_flat_map() for a more detailed explanation on how the three flat-map combinators differ.

Similar to prop_ind_flat_map(), but produces 2-tuples with the input generated from self in slot 0 and the derived strategy in slot 1.

See prop_flat_map() for a more detailed explanation on how the three flat-map combinators differ.

Returns a strategy which only produces values accepted by fun.

This results in a very naïve form of rejection sampling and should only be used if (a) relatively few values will actually be rejected; (b) it isn't easy to express what you want by using another strategy and/or map().

There are a lot of downsides to this form of filtering. It slows testing down, since values must be generated but then discarded. Proptest only allows a limited number of rejects this way (across the entire TestRunner). Rejection can interfere with shrinking; particularly, complex filters may largely or entirely prevent shrinking from substantially altering the original value.

Local rejection sampling is still preferable to rejecting the entire input to a test (via TestCaseError::Reject), however, and the default number of local rejections allowed is much higher than the number of whole-input rejections.

whence is used to record where and why the rejection occurred.
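To illustrate the cost of naïve rejection sampling, here is a self-contained sketch in plain Rust (not proptest internals; the `lcg` PRNG and the multiple-of-10 filter are invented for the example). It counts how many generated values get thrown away when a filter only accepts about 10% of inputs: the work grows roughly in proportion to 1 / P(filter passes).

```rust
// Sketch of why naive rejection sampling is costly (illustrative,
// not proptest internals): count discarded draws when filtering a
// uniform 0..100 draw down to multiples of 10.

// Small deterministic PRNG so the example is reproducible.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

// Draw values until `n_accept` pass the filter; return
// (accepted, rejected) counts.
fn sample_until(n_accept: u32) -> (u32, u32) {
    let mut state = 42u64;
    let (mut accepted, mut rejected) = (0, 0);
    while accepted < n_accept {
        let v = lcg(&mut state) % 100;
        if v % 10 == 0 {
            accepted += 1; // passes the filter
        } else {
            rejected += 1; // generated, then thrown away
        }
    }
    (accepted, rejected)
}

fn main() {
    let (accepted, rejected) = sample_until(100);
    // With a ~10% pass rate, roughly 9 values are discarded per
    // accepted value on average.
    assert_eq!(accepted, 100);
    assert!(rejected > 300);
}
```

This is why the documentation recommends filtering only when few values are rejected; a constructive strategy (e.g. via prop_map or prop_flat_map) does the same job without the wasted generation.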

Returns a strategy which picks uniformly from self and other.

When shrinking, if a value from other was originally chosen but that value can be shrunken no further, it switches to a value from self and starts shrinking that.

Be aware that chaining prop_union calls will result in a very right-skewed distribution. If this is not what you want, you can call the .or() method on the Union to add more values to the same union, or directly call Union::new().

Both self and other must be of the same type. To combine heterogeneous strategies, call the boxed() method on both self and other to erase the type differences before calling prop_union().
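The right-skew from chaining can be computed directly: each prop_union call splits probability 50/50 between everything accumulated so far and the newcomer. The helper below is an illustrative calculation (not part of proptest's API) of the resulting weights for n strategies chained through binary unions.

```rust
// Weights of n strategies combined by chained binary unions, i.e.
// a.prop_union(b).prop_union(c)... Each union halves the weight of
// everything before it and gives the newcomer 50%.
// (Illustrative helper, not a proptest API.)
fn chained_weights(n: usize) -> Vec<f64> {
    let mut w = vec![1.0];
    for _ in 1..n {
        for x in &mut w {
            *x *= 0.5;
        }
        w.push(0.5);
    }
    w
}

fn main() {
    // Four chained strategies: 12.5%, 12.5%, 25%, 50% -- strongly
    // right-skewed, whereas Union::new over the same four strategies
    // would pick each with equal 25% probability.
    assert_eq!(chained_weights(4), vec![0.125, 0.125, 0.25, 0.5]);
}
```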

Generate a recursive structure with self items as leaves.

recurse is applied to various strategies that produce the same type as self with nesting depth n to create a strategy that produces the same type with nesting depth n+1. Generated structures will have a depth between 0 and depth and will usually have up to desired_size total elements, though they may have more. expected_branch_size gives the expected maximum size for any collection which may contain recursive elements and is used to control branch probability to achieve the desired size. Passing too small a value can result in trees vastly larger than desired.

Note that depth only counts branches; i.e., depth = 0 is a single leaf, and depth = 1 is a leaf or a branch containing only leaves.

In practice, generated values usually have a lower depth than depth (but depth is a hard limit) and almost always fewer elements than desired_size (though it is not a hard limit) since the underlying code underestimates probabilities.
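Why a too-small expected_branch_size blows up tree size can be seen from a back-of-the-envelope model. The recurrence below is an illustrative simplification (not proptest's actual bookkeeping): if each node branches with probability p into up to b children, the expected element count stays bounded while p*b < 1 and grows rapidly once p*b > 1. Underestimating the branch size pushes the computed p up, which is exactly the failure mode warned about above.

```rust
// Illustrative model (not proptest's actual bookkeeping) of how
// branch probability controls tree size: a node contributes 1
// element and, with probability p, branches into b subtrees.
fn expected_size(depth: u32, p: f64, b: f64) -> f64 {
    if depth == 0 {
        1.0 // a single leaf
    } else {
        1.0 + p * b * expected_size(depth - 1, p, b)
    }
}

fn main() {
    // With b = 16: p*b < 1 keeps the expected size small even at
    // depth 4, while p*b > 1 makes it grow geometrically -- hence
    // trees vastly larger than desired when p is set too high.
    assert!(expected_size(4, 0.05, 16.0) < 10.0);  // p*b = 0.8
    assert!(expected_size(4, 0.2, 16.0) > 100.0);  // p*b = 3.2
}
```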


use std::collections::HashMap;

#[macro_use] extern crate proptest;
use proptest::prelude::*;

/// Define our own JSON AST type
#[derive(Debug, Clone)]
enum JsonNode {
  Null,
  Bool(bool),
  Number(f64),
  String(String),
  Array(Vec<JsonNode>),
  Map(HashMap<String, JsonNode>),
}

// Define a strategy for generating leaf nodes of the AST
let json_leaf = prop_oneof![
  Just(JsonNode::Null),
  prop::bool::ANY.prop_map(JsonNode::Bool),
  prop::num::f64::ANY.prop_map(JsonNode::Number),
  ".*".prop_map(JsonNode::String),
];

// Now define a strategy for a whole tree
let json_tree = json_leaf.prop_recursive(
  4, // No more than 4 branch levels deep
  64, // Target around 64 total elements
  16, // Each collection is up to 16 elements long
  |element| prop_oneof![
    // NB `element` is an `Arc` and we'll need to reference it twice,
    // so we clone it the first time.
    prop::collection::vec(element.clone(), 0..16)
        .prop_map(JsonNode::Array),
    prop::collection::hash_map(".*", element, 0..16)
        .prop_map(JsonNode::Map),
  ]);

Erases the type of this Strategy so it can be passed around as a simple trait object.

Wraps this strategy to prevent values from being subject to shrinking.

Suppressing shrinking is useful when testing things like linear approximation functions. Ordinarily, proptest will tend to shrink the input to the function until the result is just barely outside the acceptable range whereas the original input may have produced a result far outside of it. Since this makes it harder to see what the actual problem is, making the input NoShrink allows learning about inputs that produce more incorrect results.