//! Evaluation interface for reinforcement learning agents.
//!
//! This module provides interfaces and implementations for evaluating the performance
//! of reinforcement learning agents. Evaluation is a crucial step in the development
//! of reinforcement learning systems, allowing developers to:
//! - Measure the effectiveness of trained agents
//! - Compare different algorithms or hyperparameters
//! - Monitor training progress
//! - Validate the generalization of learned policies
mod default_evaluator;

use crate::{record::Record, Agent, Env, ReplayBufferBase};
use anyhow::Result;
pub use default_evaluator::DefaultEvaluator;
/// Interface for evaluating reinforcement learning agents.
///
/// This trait defines the standard interface for evaluating agents in different
/// environments. Implementations of this trait should:
/// - Run the agent in the environment for a specified number of episodes
/// - Collect performance metrics (e.g., average return, success rate)
/// - Return the results in a standardized format
///
/// # Type Parameters
///
/// * `E` - The environment type that the agent operates in
///
/// # Examples
///
/// ```ignore
/// struct CustomEvaluator<E: Env> {
///     env: E,
///     n_episodes: usize,
/// }
///
/// impl<E: Env> Evaluator<E> for CustomEvaluator<E> {
///     fn evaluate<R>(&mut self, agent: &mut Box<dyn Agent<E, R>>) -> Result<Record>
///     where
///         R: ReplayBufferBase,
///     {
///         // Custom evaluation logic
///         // ...
///     }
/// }
/// ```
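///
/// Calling such an evaluator might look like the following sketch. This is
/// illustrative only: `MyEnv`, `env_config`, and the agent construction are
/// placeholders, not items provided by this crate.
///
/// ```ignore
/// // Build the evaluator with a fresh environment instance and a fixed
/// // number of evaluation episodes.
/// let mut evaluator = CustomEvaluator {
///     env: MyEnv::build(&env_config, 0)?,
///     n_episodes: 5,
/// };
///
/// // Run evaluation and inspect the returned Record of metrics
/// // (e.g., average return over the evaluation episodes).
/// let record = evaluator.evaluate(&mut agent)?;
/// ```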