Trait border_core::Agent

pub trait Agent<E: Env, R: ReplayBufferBase>: Policy<E> {
    // Required methods
    fn train(&mut self);
    fn eval(&mut self);
    fn is_train(&self) -> bool;
    fn opt(&mut self, buffer: &mut R) -> Option<Record>;
    fn save<T: AsRef<Path>>(&self, path: T) -> Result<()>;
    fn load<T: AsRef<Path>>(&mut self, path: T) -> Result<()>;
}

Represents a trainable policy on an environment.
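The trait surface above can be exercised generically: any code written against `Agent` can toggle training mode, run optimization steps, and checkpoint without knowing the concrete agent. The following is a self-contained sketch using stand-in traits that mirror the documented signatures; the real `Env`, `ReplayBufferBase`, `Policy`, and `Record` types in border_core are richer than these placeholders, and `DummyAgent` is a made-up implementor for illustration only.

```rust
use std::path::Path;

// Stand-ins mirroring the documented signatures (not the real border_core types).
type Record = Vec<(String, f32)>;
type Result<T> = std::result::Result<T, std::io::Error>;

trait Env {}
trait ReplayBufferBase {}
trait Policy<E: Env> {}

trait Agent<E: Env, R: ReplayBufferBase>: Policy<E> {
    fn train(&mut self);
    fn eval(&mut self);
    fn is_train(&self) -> bool;
    fn opt(&mut self, buffer: &mut R) -> Option<Record>;
    fn save<T: AsRef<Path>>(&self, path: T) -> Result<()>;
    fn load<T: AsRef<Path>>(&mut self, path: T) -> Result<()>;
}

// A generic optimization loop written purely against the trait surface.
fn run_opt_steps<E: Env, R: ReplayBufferBase, A: Agent<E, R>>(
    agent: &mut A,
    buffer: &mut R,
    steps: usize,
) -> usize {
    agent.train(); // switch to training mode before optimizing
    let mut updates = 0;
    for _ in 0..steps {
        // `opt` yields a record only when an update actually happened
        if agent.opt(buffer).is_some() {
            updates += 1;
        }
    }
    agent.eval(); // return to evaluation mode
    updates
}

// A toy implementor that skips updates until the buffer is warm enough.
struct DummyEnv;
impl Env for DummyEnv {}

struct DummyBuffer {
    len: usize,
}
impl ReplayBufferBase for DummyBuffer {}

struct DummyAgent {
    training: bool,
    min_samples: usize,
}
impl Policy<DummyEnv> for DummyAgent {}
impl Agent<DummyEnv, DummyBuffer> for DummyAgent {
    fn train(&mut self) { self.training = true; }
    fn eval(&mut self) { self.training = false; }
    fn is_train(&self) -> bool { self.training }
    fn opt(&mut self, buffer: &mut DummyBuffer) -> Option<Record> {
        if buffer.len < self.min_samples {
            return None; // not enough samples yet: skip the update
        }
        Some(vec![("loss".to_string(), 0.5)])
    }
    fn save<T: AsRef<Path>>(&self, _path: T) -> Result<()> { Ok(()) }
    fn load<T: AsRef<Path>>(&mut self, _path: T) -> Result<()> { Ok(()) }
}

fn main() {
    let mut agent = DummyAgent { training: false, min_samples: 10 };
    let mut full = DummyBuffer { len: 100 };
    assert_eq!(run_opt_steps::<DummyEnv, _, _>(&mut agent, &mut full, 5), 5);
    assert!(!agent.is_train()); // the loop left the agent in evaluation mode

    let mut empty = DummyBuffer { len: 0 };
    assert_eq!(run_opt_steps::<DummyEnv, _, _>(&mut agent, &mut empty, 5), 0);
}
```

Writing driver code against the trait like this is what lets border's trainers work with any agent implementation, whether backed by tch, candle, or a toy like the one above.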

Required Methods

fn train(&mut self)

Sets the policy to training mode.

fn eval(&mut self)

Sets the policy to evaluation mode.

fn is_train(&self) -> bool

Returns `true` if the agent is in training mode.

fn opt(&mut self, buffer: &mut R) -> Option<Record>

Performs an optimization step using samples from the given replay buffer. Returns a `Record` of training metrics when an update is performed, or `None` when the step is skipped.

fn save<T: AsRef<Path>>(&self, path: T) -> Result<()>

Saves the agent into the given directory. Implementations typically create several files in that directory, each holding part of the agent's state. For example, the DQN agent in the border_tch_agent crate saves two Q-networks, corresponding to the original and target networks.
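As a sketch of this one-directory-per-agent convention, a hypothetical save routine might persist each network as its own file inside the target directory. The helper name, file names, and byte-blob parameters below are all made up for illustration; they are not the actual names or serialization format used by border_tch_agent.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Hypothetical helper: writes two serialized parameter blobs as
// separate files inside `path`, mirroring the convention of storing
// an agent as a directory of files. File names are illustrative only.
fn save_two_networks<P: AsRef<Path>>(
    path: P,
    qnet: &[u8],        // stand-in bytes for the original Q-network
    qnet_target: &[u8], // stand-in bytes for the target Q-network
) -> io::Result<()> {
    let dir = path.as_ref();
    fs::create_dir_all(dir)?; // the directory may not exist yet
    fs::write(dir.join("qnet.bin"), qnet)?;
    fs::write(dir.join("qnet_target.bin"), qnet_target)?;
    Ok(())
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("agent_ckpt_demo");
    save_two_networks(&dir, b"qnet-params", b"target-params")?;
    assert!(dir.join("qnet.bin").exists());
    assert!(dir.join("qnet_target.bin").exists());
    Ok(())
}
```

A matching `load` would read the same files back from the directory, which is why `save` and `load` both take a directory path rather than a single file path.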

fn load<T: AsRef<Path>>(&mut self, path: T) -> Result<()>

Loads the agent from the given directory.

Implementors