Crate ember_rl

Reinforcement learning algorithms powered by Burn, built on rl-traits.

ember-rl is the algorithm layer in the stack:

rl-traits     →  core traits (Environment, Agent, Policy, ...)
ember-rl      →  algorithm implementations using Burn (this crate)
bevy-gym      →  Bevy ECS plugin for visualisation and parallelisation
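As a rough sketch, consuming this stack from an application crate might look like the following Cargo.toml fragment (the version numbers are assumptions, not taken from the source):

```toml
[dependencies]
# Algorithm layer (this crate); pulls in rl-traits transitively.
ember-rl = "0.1"
# Core traits, needed directly if you implement your own Environment.
rl-traits = "0.1"
# Optional: Bevy ECS plugin for visualisation and parallelisation.
bevy-gym = "0.1"
```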

Quick start

use burn::backend::NdArray;
use ember_rl::{
    algorithms::dqn::{DqnAgent, DqnConfig},
    encoding::{VecEncoder, UsizeActionMapper},
    training::DqnRunner,
};

type B = NdArray;

// `CartPoleEnv` is any `Environment` implementation (e.g. from your own
// crate or an environment library); it is not defined by ember-rl itself.
let env = CartPoleEnv::new();
let config = DqnConfig::default();
let encoder = VecEncoder::new(4);
let action_mapper = UsizeActionMapper::new(2);
let device = Default::default();

let agent = DqnAgent::<_, _, _, B>::new(encoder, action_mapper, config, device);
let mut runner = DqnRunner::new(env, agent, 42);

for step in runner.steps().take(50_000) {
    if step.episode_done {
        println!("Episode {} | reward: {:.1}", step.episode, step.episode_reward);
    }
}
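The quick start assumes a `CartPoleEnv` that implements the stack's `Environment` trait. As a minimal, self-contained sketch, the environment side might look like the code below. Note that the `Environment` trait shape shown here is illustrative and hedged, not the actual `rl-traits` definition, and the dynamics are a simplified Euler-integrated cart-pole.

```rust
/// Illustrative stand-in for the `rl-traits` `Environment` trait
/// (the real trait's associated types and method signatures may differ).
pub trait Environment {
    type State;
    type Action;
    fn reset(&mut self) -> Self::State;
    /// Returns (next_state, reward, episode_done).
    fn step(&mut self, action: Self::Action) -> (Self::State, f64, bool);
}

pub struct CartPoleEnv {
    state: [f64; 4], // [x, x_dot, theta, theta_dot]
    steps: u32,
}

impl CartPoleEnv {
    pub fn new() -> Self {
        Self { state: [0.0; 4], steps: 0 }
    }
}

impl Environment for CartPoleEnv {
    type State = [f64; 4];
    type Action = usize; // 0 = push left, 1 = push right

    fn reset(&mut self) -> Self::State {
        self.state = [0.0; 4];
        self.steps = 0;
        self.state
    }

    fn step(&mut self, action: Self::Action) -> (Self::State, f64, bool) {
        // Simplified cart-pole dynamics, integrated with a single Euler step.
        let force = if action == 1 { 10.0 } else { -10.0 };
        let tau = 0.02; // seconds per step
        let (g, m_cart, m_pole, len) = (9.8, 1.0, 0.1, 0.5);
        let [x, x_dot, theta, theta_dot] = self.state;
        let total = m_cart + m_pole;
        let temp = (force + m_pole * len * theta_dot * theta_dot * theta.sin()) / total;
        let theta_acc = (g * theta.sin() - theta.cos() * temp)
            / (len * (4.0 / 3.0 - m_pole * theta.cos() * theta.cos() / total));
        let x_acc = temp - m_pole * len * theta_acc * theta.cos() / total;
        self.state = [
            x + tau * x_dot,
            x_dot + tau * x_acc,
            theta + tau * theta_dot,
            theta_dot + tau * theta_acc,
        ];
        self.steps += 1;
        // Episode ends if the cart leaves the track, the pole falls,
        // or the step limit is reached.
        let done = self.state[0].abs() > 2.4
            || self.state[2].abs() > 0.209
            || self.steps >= 500;
        (self.state, 1.0, done)
    }
}
```

With an environment like this in scope, the `DqnRunner` loop in the quick start drives `reset`/`step` internally and surfaces per-step and per-episode statistics.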

Modules

algorithms
encoding
stats
training
traits
    Core training traits for ember-rl.