Trait relearn::envs::EnvStructure
pub trait EnvStructure {
type ObservationSpace: Space;
type ActionSpace: Space;
type FeedbackSpace: Space;
fn observation_space(&self) -> Self::ObservationSpace;
fn action_space(&self) -> Self::ActionSpace;
fn feedback_space(&self) -> Self::FeedbackSpace;
fn discount_factor(&self) -> f64;
}
The external structure of a reinforcement learning environment.
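As a sketch of how this trait might be implemented, consider a hypothetical 5×5 grid world with four movement actions and scalar rewards in [-1, 1]. The `Space` stand-ins (`IndexSpace`, `IntervalSpace`) and the `GridWorld` type below are illustrative assumptions, not the crate's actual types; the real `Space` trait in relearn has additional methods and bounds.

```rust
// Minimal stand-in for relearn's `Space` trait (assumed; the real trait
// has more requirements).
trait Space {
    type Element;
    fn contains(&self, elem: &Self::Element) -> bool;
}

// A finite set {0, 1, ..., size - 1}: a common discrete space.
struct IndexSpace {
    size: usize,
}
impl Space for IndexSpace {
    type Element = usize;
    fn contains(&self, elem: &usize) -> bool {
        *elem < self.size
    }
}

// A closed real interval [low, high], e.g. for scalar rewards.
struct IntervalSpace {
    low: f64,
    high: f64,
}
impl Space for IntervalSpace {
    type Element = f64;
    fn contains(&self, elem: &f64) -> bool {
        self.low <= *elem && *elem <= self.high
    }
}

// The trait from this page, reproduced so the sketch is self-contained.
trait EnvStructure {
    type ObservationSpace: Space;
    type ActionSpace: Space;
    type FeedbackSpace: Space;
    fn observation_space(&self) -> Self::ObservationSpace;
    fn action_space(&self) -> Self::ActionSpace;
    fn feedback_space(&self) -> Self::FeedbackSpace;
    fn discount_factor(&self) -> f64;
}

// Hypothetical environment: 25 cells, 4 actions, rewards in [-1, 1].
struct GridWorld;
impl EnvStructure for GridWorld {
    type ObservationSpace = IndexSpace;
    type ActionSpace = IndexSpace;
    type FeedbackSpace = IntervalSpace;
    fn observation_space(&self) -> IndexSpace {
        IndexSpace { size: 25 }
    }
    fn action_space(&self) -> IndexSpace {
        IndexSpace { size: 4 }
    }
    fn feedback_space(&self) -> IntervalSpace {
        IntervalSpace { low: -1.0, high: 1.0 }
    }
    fn discount_factor(&self) -> f64 {
        0.95
    }
}

fn main() {
    let env = GridWorld;
    // Action 3 is valid; action 4 lies outside the action space.
    assert!(env.action_space().contains(&3));
    assert!(!env.action_space().contains(&4));
    assert!(env.feedback_space().contains(&0.5));
    assert_eq!(env.discount_factor(), 0.95);
}
```

Note that the structure methods return spaces describing the environment's interface; they do not step the environment itself.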
Required Associated Types
Required Methods
fn observation_space(&self) -> Self::ObservationSpace
Space containing all possible observations.
This is not required to be tight: the space may contain elements that can never be produced as a state observation.
fn action_space(&self) -> Self::ActionSpace
The space of all possible actions.
Every element in this space must be a valid action in all environment states (although immediately ending the episode with negative reward is a possible outcome). The environment may misbehave or panic for actions outside of this action space.
fn feedback_space(&self) -> Self::FeedbackSpace
The space of all possible feedback.
This is not required to be tight: the space may contain elements that can never be produced as a feedback signal.
fn discount_factor(&self) -> f64
A discount factor applied to future feedback.
A value between 0 and 1, inclusive.
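To illustrate how a discount factor weights future feedback, the helper below computes the discounted return Σₜ γᵗ·rₜ for a reward sequence. This is a generic RL formula, not a function provided by relearn.

```rust
// Discounted return: r_0 + gamma * r_1 + gamma^2 * r_2 + ...
// Folding from the back (Horner's method) avoids computing powers explicitly.
fn discounted_return(rewards: &[f64], gamma: f64) -> f64 {
    rewards.iter().rev().fold(0.0, |acc, r| r + gamma * acc)
}

fn main() {
    // With gamma = 0.5: 1.0 + 0.5 * 1.0 + 0.25 * 1.0 = 1.75
    assert_eq!(discounted_return(&[1.0, 1.0, 1.0], 0.5), 1.75);
    // With gamma = 0, only the immediate reward counts.
    assert_eq!(discounted_return(&[2.0, 9.0, 9.0], 0.0), 2.0);
}
```

A factor of 1 weights all future feedback equally; values below 1 prioritize near-term feedback and keep infinite-horizon returns finite.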