
Trait ObservationEncoder
pub trait ObservationEncoder<O, B: Backend> {
    // Required methods
    fn obs_size(&self) -> usize;
    fn encode(&self, obs: &O, device: &B::Device) -> Tensor<B, 1>;

    // Provided method
    fn encode_batch(&self, obs: &[O], device: &B::Device) -> Tensor<B, 2> { ... }
}

Converts environment observations into Burn tensors.

This is the primary bridge between rl-traits’ generic world and Burn’s tensor world. Users implement it for their specific observation type: for CartPole that means stacking four floats into a 1D tensor; for Atari it would mean image preprocessing.

§Why this is separate from Environment

rl-traits deliberately knows nothing about tensors or ML backends. This trait lives in ember-rl as the adapter layer. A user can implement the same Environment for both headless training (with this encoder) and Bevy visualisation (with no encoder at all).

§Batching

encode_batch has a default implementation that calls encode in a loop, which is correct but slow. Override it with a vectorised implementation if your observation type allows it; simple flat observations (like CartPole’s) always do.
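For a flat observation type, a vectorised override can flatten the whole batch into one buffer and build a single tensor, paying one device transfer instead of one per observation. A sketch of such an override, as a fragment inside the impl block for the hypothetical `CartPoleObs` encoder (assuming Burn’s `TensorData::new` / `Tensor::from_data` API):

```rust
// Inside `impl<B: Backend> ObservationEncoder<CartPoleObs, B> for CartPoleEncoder { ... }`:
fn encode_batch(&self, obs: &[CartPoleObs], device: &B::Device) -> Tensor<B, 2> {
    // Flatten every observation into one contiguous f32 buffer...
    let flat: Vec<f32> = obs
        .iter()
        .flat_map(|o| {
            [
                o.cart_position,
                o.cart_velocity,
                o.pole_angle,
                o.pole_angular_velocity,
            ]
        })
        .collect();
    // ...then build a single [batch, obs_size] tensor in one call,
    // rather than encoding and stacking observation by observation.
    Tensor::from_data(TensorData::new(flat, [obs.len(), self.obs_size()]), device)
}
```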

Required Methods§


fn obs_size(&self) -> usize

The number of features in the encoded observation vector.

Used to determine the Q-network’s input layer size automatically.


fn encode(&self, obs: &O, device: &B::Device) -> Tensor<B, 1>

Encode a single observation into a 1D tensor of shape [obs_size].

Provided Methods§


fn encode_batch(&self, obs: &[O], device: &B::Device) -> Tensor<B, 2>

Encode a batch of observations into a 2D tensor of shape [batch, obs_size].

The default implementation calls encode in a loop and stacks results. Override with a vectorised implementation for performance.

Implementors§