pub struct LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
where
    B: AutodiffBackend,
    M: AutodiffModule<B> + TrainStep<TI, TO> + Display + 'static,
    M::InnerModule: ValidStep<VI, VO>,
    O: Optimizer<M, B>,
    S: LrScheduler,
    TI: Send + 'static,
    VI: Send + 'static,
    TO: ItemLazy + 'static,
    VO: ItemLazy + 'static,
{ /* private fields */ }
Struct to configure and create a learner.
The generic parameters of the builder should generally not be set manually, as they are designed to be resolved by Rust type inference.
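Examples

A minimal sketch, assuming `model`, `optim`, `lr_scheduler`, and the train/valid dataloaders are built elsewhere, and that `LossMetric` and `CompactRecorder` live at their usual `burn` paths (names may differ across versions):

```rust
use burn::record::CompactRecorder;
use burn::train::LearnerBuilder;
use burn::train::metric::LossMetric;

// Sketch only: `model`, `optim`, `lr_scheduler`, `dataloader_train`,
// and `dataloader_valid` are assumed to be defined by the caller.
let learner = LearnerBuilder::new("/tmp/experiment")
    // Track the loss as a numeric metric on both splits.
    .metric_train_numeric(LossMetric::new())
    .metric_valid_numeric(LossMetric::new())
    // Persist model/optimizer/scheduler records between epochs.
    .with_file_checkpointer(CompactRecorder::new())
    .num_epochs(10)
    .summary()
    .build(model, optim, lr_scheduler);

// Train; the summary is displayed once the renderer is dropped.
let trained_model = learner.fit(dataloader_train, dataloader_valid);
```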
Implementations
impl<B, M, O, S, TI, VI, TO, VO> LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
where
    B: AutodiffBackend,
    M: AutodiffModule<B> + TrainStep<TI, TO> + Display + 'static,
    M::InnerModule: ValidStep<VI, VO>,
    O: Optimizer<M, B>,
    S: LrScheduler,
    TI: Send + 'static,
    VI: Send + 'static,
    TO: ItemLazy + 'static,
    VO: ItemLazy + 'static,
pub fn metric_loggers<MT, MV>(self, logger_train: MT, logger_valid: MV) -> Self
where
    MT: MetricLogger + 'static,
    MV: MetricLogger + 'static,
Replace the default metric loggers with the provided ones.
Arguments

- logger_train - The training logger.
- logger_valid - The validation logger.
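For example, to write metrics to a custom location, both loggers can be swapped out. This sketch assumes a `FileMetricLogger` type in the crate's logger module and a `builder` already in scope; any `MetricLogger` implementor works:

```rust
use burn::train::logger::FileMetricLogger;

// Assumption: FileMetricLogger::new takes the directory for log files.
let builder = builder.metric_loggers(
    FileMetricLogger::new("/tmp/experiment/train"),
    FileMetricLogger::new("/tmp/experiment/valid"),
);
```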
pub fn with_checkpointing_strategy<CS>(self, strategy: CS) -> Self
where
    CS: CheckpointingStrategy + 'static,
Update the checkpointing strategy.
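A sketch of supplying a custom strategy, assuming a `KeepLastNCheckpoints` type in the crate's checkpoint module and a `builder` in scope (any `CheckpointingStrategy` implementor works):

```rust
use burn::train::checkpoint::KeepLastNCheckpoints;

// Assumption: KeepLastNCheckpoints::new(n) keeps only the n most
// recent checkpoints on disk.
let builder = builder.with_checkpointing_strategy(KeepLastNCheckpoints::new(2));
```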
pub fn renderer<MR>(self, renderer: MR) -> Self
where
    MR: MetricsRenderer + 'static,

Override the default metrics renderer.
pub fn metrics<Me: MetricRegistration<B, M, O, S, TI, VI, TO, VO>>(
    self,
    metrics: Me,
) -> Self
Register all metrics as numeric for the training and validation sets.
pub fn metrics_text<Me: TextMetricRegistration<B, M, O, S, TI, VI, TO, VO>>(
    self,
    metrics: Me,
) -> Self
Register all metrics as text for the training and validation sets.
pub fn metric_train<Me: Metric + 'static>(self, metric: Me) -> Self
Register a training metric.
pub fn metric_valid<Me: Metric + 'static>(self, metric: Me) -> Self
Register a validation metric.
pub fn grads_accumulation(self, accumulation: usize) -> Self
Enable gradient accumulation.

Notes

When gradient accumulation is enabled, the gradients object passed to the optimizer is the sum of the gradients produced by each backward pass. It may be a good idea to reduce the learning rate to compensate.
The effect is similar to multiplying both the batch size and the learning rate by the accumulation factor.
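For illustration, with a dataloader batch size of 8, accumulating over 4 steps gives the optimizer gradients equivalent to a batch size of 32 (`builder` assumed in scope):

```rust
// Sum gradients over 4 backward passes before each optimizer step;
// consider lowering the learning rate to compensate.
let builder = builder.grads_accumulation(4);
```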
pub fn metric_train_numeric<Me: Metric + Numeric + 'static>(
    self,
    metric: Me,
) -> Self

Register a numeric training metric.
pub fn metric_valid_numeric<Me: Metric + Numeric + 'static>(
    self,
    metric: Me,
) -> Self

Register a numeric validation metric.
pub fn num_epochs(self, num_epochs: usize) -> Self
The number of epochs the training should last.
pub fn learning_strategy(self, learning_strategy: LearningStrategy<B>) -> Self

Select the strategy used to run the training loop.
pub fn checkpoint(self, checkpoint: usize) -> Self
The epoch from which the training must resume.
pub fn interrupter(&self) -> Interrupter
Provides a handle that can be used to interrupt training.
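A sketch of wiring the handle to a watchdog thread; it assumes the handle exposes a `stop` method, as the surrounding API suggests:

```rust
let interrupter = builder.interrupter();

// Stop training after one hour, from outside the training loop.
std::thread::spawn(move || {
    std::thread::sleep(std::time::Duration::from_secs(3600));
    interrupter.stop(); // assumed method; see the Interrupter docs
});
```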
pub fn with_interrupter(self, interrupter: Interrupter) -> Self

Override the handle for stopping training with an externally provided handle.
pub fn early_stopping<Strategy>(self, strategy: Strategy) -> Self

Register an early stopping strategy to stop the training when the conditions are met.
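One possible shape, assuming the crate's `MetricEarlyStoppingStrategy` and its companion enums; the exact constructor signature should be checked against the version in use:

```rust
use burn::train::metric::LossMetric;
use burn::train::metric::store::{Aggregate, Direction, Split};
use burn::train::{MetricEarlyStoppingStrategy, StoppingCondition};

// Assumption: stop when the mean validation loss has not improved
// for 3 consecutive epochs. `B` is the backend type in scope.
let builder = builder.early_stopping(MetricEarlyStoppingStrategy::new(
    &LossMetric::<B>::new(),
    Aggregate::Mean,
    Direction::Lowest,
    Split::Valid,
    StoppingCondition::NoImprovementSince { n_epochs: 3 },
));
```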
pub fn with_application_logger(
    self,
    logger: Option<Box<dyn ApplicationLoggerInstaller>>,
) -> Self

By default, Rust logs are captured and written to experiment.log. If disabled (by passing None), standard Rust log handling will apply.
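Passing None disables the capture entirely (`builder` assumed in scope):

```rust
// Keep standard Rust log handling instead of writing experiment.log.
let builder = builder.with_application_logger(None);
```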
pub fn with_file_checkpointer<FR>(self, recorder: FR) -> Self
where
    FR: FileRecorder<B> + 'static + FileRecorder<B::InnerBackend>,
    O::Record: 'static,
    M::Record: 'static,
    S::Record<B>: 'static,

Register a file checkpointer that uses the provided recorder to save and load the model, optimizer, and scheduler records.
pub fn summary(self) -> Self
Enable the training summary report.
The summary will be displayed after .fit(), when the renderer is dropped.
pub fn build(
    self,
    model: M,
    optim: O,
    lr_scheduler: S,
) -> Learner<LearnerComponentsMarker<B, S, M, O, AsyncCheckpointer<M::Record, B>, AsyncCheckpointer<O::Record, B>, AsyncCheckpointer<S::Record<B>, B>, AsyncProcessorTraining<FullEventProcessorTraining<TO, VO>>, Box<dyn CheckpointingStrategy>, LearningDataMarker<TI, VI, TO, VO>>>
Create the learner from a model and an optimizer. The learning rate scheduler can also be a simple learning rate.
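Since a plain learning rate satisfies the scheduler bound, a constant value can be passed directly (`builder`, `model`, and `optim` assumed in scope):

```rust
// A constant learning rate doubles as the scheduler here.
let learner = builder.build(model, optim, 1e-4);
```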
Auto Trait Implementations
impl<B, M, O, S, TI, VI, TO, VO> Freeze for LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
impl<B, M, O, S, TI, VI, TO, VO> !RefUnwindSafe for LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
impl<B, M, O, S, TI, VI, TO, VO> !Send for LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
impl<B, M, O, S, TI, VI, TO, VO> !Sync for LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
impl<B, M, O, S, TI, VI, TO, VO> Unpin for LearnerBuilder<B, M, O, S, TI, VI, TO, VO>
impl<B, M, O, S, TI, VI, TO, VO> !UnwindSafe for LearnerBuilder<B, M, O, S, TI, VI, TO, VO>