pub struct Simulation<S, T> { /* private fields */ }

Simulation is a struct for a cached Markov chain simulation.

Simulation has two generic parameters:

  • S: The type of the states in the Markov chain.
  • T: The type of the transitions in the Markov chain, usually a description.

To do anything useful, both have to be Hash + Clone + Send + Sync + PartialEq + Eq + Debug.

It primarily consists of an initial state S and a StateTransitionGenerator. This generator is a function that takes a state and returns a list of the next states in the Markov chain with their respective relative probabilities.

This function must be fully deterministic. Since the generator is cached, it is only called once per state.

Example

// This is a simple one-dimensional random walk
use entromatica::prelude::*;
use std::sync::Arc;

// The initial state. It has to be Hash + Clone + Send + Sync + PartialEq + Eq + Debug
let initial_state: i32 = 0;

// The state transition generator. The simulation panics if the probabilities don't sum to 1.0
let state_transition_generator =
    Arc::new(|state: i32| vec![(state + 1, "next", 0.5), (state - 1, "previous", 0.5)]);

let mut simulation = Simulation::new(initial_state, state_transition_generator);

// The Shannon entropy at the given time
assert_eq!(simulation.entropy(0), 0.0);
simulation.next_step();
assert_eq!(simulation.entropy(1), 1.0);
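The entropy values above follow directly from the Shannon formula. A minimal, standalone sketch of that computation (the `shannon_entropy` helper is illustrative and not part of entromatica):

```rust
// Shannon entropy in bits: H = -sum(p_i * log2(p_i)), skipping zero entries.
fn shannon_entropy(probabilities: &[f64]) -> f64 {
    probabilities
        .iter()
        .filter(|p| **p > 0.0)
        .map(|p| -p * p.log2())
        .sum()
}

fn main() {
    // Time 0: the walker is certainly at 0, so the entropy is 0 bits.
    assert_eq!(shannon_entropy(&[1.0]), 0.0);
    // Time 1: {1: 0.5, -1: 0.5} is a fair coin flip, i.e. exactly 1 bit.
    assert!((shannon_entropy(&[0.5, 0.5]) - 1.0).abs() < 1e-12);
}
```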

Implementations

impl<S, T> Simulation<S, T>
where
    S: Hash + Clone + Send + Sync + PartialEq + Eq + Debug,
    T: Hash + Clone + Send + Sync + PartialEq + Eq + Debug,

pub fn new(initial_state: S, state_transition_generator: StateTransitionGenerator<S, T>) -> Self

Create a new Simulation with the given initial state and state transition generator.

The resulting initial distribution consists of a single state with a probability of 1.0.

pub fn new_with_distribution(probabilities: StateProbabilityDistribution<S>, state_transition_generator: StateTransitionGenerator<S, T>) -> Self

Create a new Simulation with the given initial state distribution and state transition generator.

The initial state distribution is a HashMap from states to their respective probabilities.

pub fn state_transition_graph(&self) -> Graph<S, (T, Probability)>

The state transition graph of the Markov chain.

The nodes of the graph are the states of the Markov chain; each edge is a tuple of the transition and its probability.

The graph is directed, so the direction of the edge indicates the direction of the transition.

This method uses all information available to the Markov chain, regardless of the current timeline. This means the graph will also include nodes and transitions generated by e.g. the [full_traversal](#method.full_traversal) method.

pub fn probability_distributions(&self) -> HashMap<Time, StateProbabilityDistribution<S>>

Get a HashMap of the probability distributions indexed by time.

Each probability distribution is a HashMap from states to their probabilities. The time starts at zero and increases by one for each step.

pub fn state_probability(&self, state: S, time: Time) -> f64

Get the probability of a specific state for the given time.

If the state is not known at the given time, the probability is zero.

pub fn initial_distribution(&self) -> StateProbabilityDistribution<S>

Get the initial probability distribution.

pub fn probability_distribution(&self, time: Time) -> StateProbabilityDistribution<S>

Get the probability distribution for the given time.

If the time is not known, the method panics.

pub fn known_states(&self) -> Vec<S>

Gets a list of all known states.

States are known when they have been returned at some point by the state transition generator. The ordering is arbitrary, not necessarily consistent over multiple calls and can change at any time in the future.

pub fn known_transitions(&self) -> Vec<T>

Gets a list of all known transitions.

Transitions are known when they have been returned at some point by the state transition generator. The ordering is arbitrary, not necessarily consistent over multiple calls and can change at any time in the future.

pub fn entropy(&self, time: Time) -> f64

Get the Shannon entropy of the Markov chain at the given time.

pub fn time(&self) -> Time

Get the current time of the Markov chain.

The time starts at zero and increases by one for each step. This method returns the time of the newest probability distribution.

pub fn next_step(&mut self) -> StateProbabilityDistribution<S>

Update the Markov chain by one step.

This method returns the new probability distribution. It calls the state transition generator for each state in the current probability distribution, weights the returned probabilities by the probability of the originating state, and combines the resulting distributions into the new one.

Panics

This method panics if the probabilities returned by the state transition generator do not sum to 1.0.
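The weighting-and-combining step described above can be sketched in plain Rust (a simplified model of the semantics, not the crate's actual implementation):

```rust
use std::collections::HashMap;

// One step: call the generator for every current state, weight the returned
// probabilities by the probability of the originating state, and merge.
fn step(
    distribution: &HashMap<i32, f64>,
    generator: impl Fn(i32) -> Vec<(i32, f64)>,
) -> HashMap<i32, f64> {
    let mut next = HashMap::new();
    for (&state, &probability) in distribution {
        for (successor, transition_probability) in generator(state) {
            *next.entry(successor).or_insert(0.0) += probability * transition_probability;
        }
    }
    next
}

fn main() {
    // The random walk from the example above.
    let walk = |state: i32| vec![(state + 1, 0.5), (state - 1, 0.5)];
    let t0 = HashMap::from([(0, 1.0)]);
    let t1 = step(&t0, walk);
    let t2 = step(&t1, walk);
    // After two steps: {-2: 0.25, 0: 0.5, 2: 0.25}.
    assert_eq!(t2[&0], 0.5);
    assert_eq!(t2[&2], 0.25);
}
```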

pub fn full_traversal(&mut self, modify_cache_only: bool)

Update the markov chain until all states are known.

This method calls the [next_step](#method.next_step) method until the list of known states stops changing, i.e. until all states reachable by the Markov chain have been traversed. If modify_cache_only is set to true, the list of probability distributions is not updated; things like state_transition_graph or known_states are still affected by the traversal.

If the number of states is infinite, this method will never return.
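The fixed point that this traversal reaches is the set of all reachable states. For a finite chain the idea can be sketched as a plain worklist search (an illustration of the concept, not the crate's implementation):

```rust
use std::collections::HashSet;

// Collect every state reachable from `start` by repeatedly calling the
// generator until no new states appear.
fn reachable_states(start: i32, generator: impl Fn(i32) -> Vec<i32>) -> HashSet<i32> {
    let mut known: HashSet<i32> = HashSet::from([start]);
    let mut frontier = vec![start];
    while let Some(state) = frontier.pop() {
        for successor in generator(state) {
            if known.insert(successor) {
                frontier.push(successor);
            }
        }
    }
    known
}

fn main() {
    // A 5-state circular walk, as in the transition_rate_matrix example
    // below: every position is eventually reachable.
    let circle = |state: i32| vec![(state + 1).rem_euclid(5), (state - 1).rem_euclid(5)];
    assert_eq!(reachable_states(0, circle).len(), 5);
}
```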

pub fn uniform_distribution_is_steady(&mut self) -> bool

Check if the uniform distribution is steady.

This method checks whether the uniform distribution is steady, i.e. whether it no longer changes once set as the current distribution.

If the number of states is infinite, this method will never return. It modifies the cache of the Markov chain, so e.g. state_transition_graph will afterwards show the full Markov chain.
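For a finite chain with transition matrix P, the uniform distribution is steady exactly when P is doubly stochastic, i.e. when every column (not just every row) sums to 1. A standalone sketch of that criterion (illustrative only, not the crate's implementation):

```rust
// The uniform distribution u satisfies uP = u iff each column of P sums to 1.
fn uniform_is_steady(p: &[Vec<f64>]) -> bool {
    let n = p.len();
    (0..n).all(|j| {
        let column_sum: f64 = (0..n).map(|i| p[i][j]).sum();
        (column_sum - 1.0).abs() < 1e-12
    })
}

fn main() {
    // A circular random walk is doubly stochastic...
    let circle = vec![
        vec![0.0, 0.5, 0.5],
        vec![0.5, 0.0, 0.5],
        vec![0.5, 0.5, 0.0],
    ];
    assert!(uniform_is_steady(&circle));

    // ...while a chain with an absorbing state is not.
    let absorbing = vec![vec![1.0, 0.0], vec![0.5, 0.5]];
    assert!(!uniform_is_steady(&absorbing));
}
```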

pub fn transition_rate_matrix(&mut self) -> (Array2<Probability>, Vec<S>)

Get the transition rate matrix of the Markov chain.

This method returns the transition rate matrix of the state transition graph. This is an ndarray where the value at index (i, j) is the probability that the Markov chain transitions from state i to state j. To compute it, the method performs a cache-only full traversal.

The second part of the return value is the ordering: if a state is at the nth position in the vector, it corresponds to the nth row and the nth column. The ordering is arbitrary, not necessarily consistent over multiple calls, and can change at any time in the future.

If the number of states is infinite, this method will never return.

Example
use entromatica::prelude::*;
use std::sync::Arc;

// A simple random walk in a circle with NUM_STATES positions
let initial_state = 0;
const NUM_STATES: i32 = 5;
let state_transition_generator = Arc::new(|state: i32| -> OutgoingTransitions<i32, &str> {
    vec![
        // wrap around the circle in both directions
        ((state + 1).rem_euclid(NUM_STATES), "forward", 0.5),
        ((state - 1).rem_euclid(NUM_STATES), "backward", 0.5),
    ]
});
let mut simulation = Simulation::new(initial_state, state_transition_generator);
let (transition_rate_matrix, ordering) = simulation.transition_rate_matrix();
// The transition rate matrix is a square matrix with a size equal to the number of states
assert_eq!(transition_rate_matrix.nrows(), NUM_STATES as usize);
assert_eq!(transition_rate_matrix.ncols(), NUM_STATES as usize);

// The probability of transitioning from state 0 to 1 is 0.5
let from = ordering.iter().position(|state| *state == 0).unwrap();
let to = ordering.iter().position(|state| *state == 1).unwrap();
assert_eq!(transition_rate_matrix.get((from, to)), Some(&0.5));

pub fn stationary_distribution(&mut self, rounds: u64) -> StateProbabilityDistribution<S>

Get the stationary distribution of the Markov chain.

This method computes the stationary distribution of the Markov chain by exponentiating the transition rate matrix. rounds is the number of times the transition rate matrix is multiplied by itself; it thus determines both the accuracy and the runtime of this method.
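The role of rounds can be illustrated with plain repeated squaring: after rounds squarings the matrix is P^(2^rounds), whose rows converge to the stationary distribution for an ergodic chain. A minimal sketch on a hypothetical 2-state chain (illustrative only, not the crate's implementation):

```rust
// Multiply two 2x2 row-stochastic matrices.
fn matmul(a: [[f64; 2]; 2], b: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

fn main() {
    // A 2-state chain; solving pi * P = pi by hand gives pi = (5/6, 1/6).
    let mut p = [[0.9, 0.1], [0.5, 0.5]];
    // Each squaring doubles the exponent: after 10 rounds this is P^1024.
    for _ in 0..10 {
        p = matmul(p, p);
    }
    // Every row has converged to the stationary distribution.
    assert!((p[0][0] - 5.0 / 6.0).abs() < 1e-9);
    assert!((p[1][0] - 5.0 / 6.0).abs() < 1e-9);
}
```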

Trait Implementations

impl<S: Clone, T: Clone> Clone for Simulation<S, T>

fn clone(&self) -> Simulation<S, T>

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl<S, T> Debug for Simulation<S, T>
where
    S: Hash + Clone + Send + Sync + PartialEq + Debug,
    T: Hash + Clone + Send + Sync + PartialEq + Debug,

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations

impl<S, T> !RefUnwindSafe for Simulation<S, T>

impl<S, T> Send for Simulation<S, T> where S: Send, T: Send

impl<S, T> Sync for Simulation<S, T> where S: Sync, T: Sync

impl<S, T> Unpin for Simulation<S, T> where S: Unpin, T: Unpin

impl<S, T> !UnwindSafe for Simulation<S, T>
