kindle: a continually self-training RL agent built on meganeura.
The agent starts from a cold network, trains perpetually from experience, and derives reward from four frozen primitives: surprise, novelty, homeostatic balance, and order. To kindle is to start a fire from nothing — this crate is the ignition.
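The description says reward is derived from four frozen primitives: surprise, novelty, homeostatic balance, and order. As a minimal sketch of what "frozen" means here — weights fixed at construction and never trained — the following self-contained illustration combines the four primitives into a scalar reward. The `FrozenReward` struct, its field names, and the weighted-sum formula are assumptions for illustration only; the crate's real circuit lives in the `reward` module as `RewardCircuit` and its internals are not shown on this page.

```rust
/// Illustrative only: assumes each primitive is a scalar in [0, 1] and
/// that the circuit combines them with fixed (frozen) weights. This is
/// NOT the crate's `RewardCircuit`; it is a hypothetical stand-in.
struct FrozenReward {
    w_surprise: f32,
    w_novelty: f32,
    w_homeostasis: f32,
    w_order: f32,
}

impl FrozenReward {
    /// Weighted sum of the four primitives. The weights are set once at
    /// construction and never updated by gradient descent, so the reward
    /// signal stays stable while the network trains against it.
    fn reward(&self, surprise: f32, novelty: f32, homeostasis: f32, order: f32) -> f32 {
        self.w_surprise * surprise
            + self.w_novelty * novelty
            + self.w_homeostasis * homeostasis
            + self.w_order * order
    }
}

fn main() {
    let circuit = FrozenReward {
        w_surprise: 0.4,
        w_novelty: 0.3,
        w_homeostasis: 0.2,
        w_order: 0.1,
    };
    // A moderately surprising, moderately novel, fully balanced, disordered step.
    let r = circuit.reward(0.5, 0.5, 1.0, 0.0);
    println!("{r}");
}
```

Because the weights never change, the agent cannot "reward-hack" by reshaping the signal it is optimizing; all adaptation happens in the network, not the circuit.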
Re-exports§
- pub use adapter::EnvAdapter;
- pub use adapter::GenericAdapter;
- pub use adapter::MAX_ACTION_DIM;
- pub use adapter::OBS_TOKEN_DIM;
- pub use agent::Agent;
- pub use agent::AgentConfig;
- pub use buffer::ExperienceBuffer;
- pub use env::Action;
- pub use env::ActionKind;
- pub use env::Environment;
- pub use env::HomeostaticProvider;
- pub use env::Observation;
- pub use reward::RewardCircuit;
Modules§
- adapter
- Universal action / observation adapters for cross-environment training.
- agent
- Top-level Agent struct and training loop.
- buffer
- Circular experience buffer for continual learning.
- credit
- Credit Assigner: attributes reward to past actions via causal attention.
- encoder
- Encoder: converts raw observations into a compact latent representation z_t.
- env
- Environment traits defining the boundary between kindle and any world.
- policy
- Policy and Value Head.
- reward
- Frozen Reward Circuit.
- world_model
- World Model: forward dynamics predictor.
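The `env` module is described as the trait boundary between kindle and any world. A minimal sketch of what such a boundary might look like is below. The type and trait names mirror the re-exports (`Environment`, `Observation`, `Action`), but every signature, field, and the `LineWorld` example are assumptions for illustration, not the crate's actual API.

```rust
/// Hypothetical stand-ins for the crate's `env` types, for illustration.
/// A raw observation: a flat feature vector the encoder would compress.
pub struct Observation(pub Vec<f32>);

/// A discrete action index.
pub struct Action(pub usize);

/// Assumed shape of the boundary between the agent and any world.
pub trait Environment {
    /// Produce the current observation of the world's state.
    fn observe(&self) -> Observation;
    /// Apply an action, mutating the world's state.
    fn act(&mut self, action: Action);
}

/// A toy one-dimensional world: action 1 moves right, anything else left.
pub struct LineWorld {
    pos: i32,
}

impl Environment for LineWorld {
    fn observe(&self) -> Observation {
        Observation(vec![self.pos as f32])
    }
    fn act(&mut self, action: Action) {
        self.pos += if action.0 == 1 { 1 } else { -1 };
    }
}

fn main() {
    let mut world = LineWorld { pos: 0 };
    world.act(Action(1));
    world.act(Action(1));
    world.act(Action(0));
    let obs = world.observe();
    println!("{}", obs.0[0]); // position after right, right, left
}
```

Keeping the world behind a trait like this is what lets the `adapter` module's `GenericAdapter` train one agent across environments with different observation and action shapes.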
Enums§
- OptLevel
- Controls whether meganeura’s e-graph optimizer is used.