Module training

FastGRNN training pipeline with knowledge distillation

This module provides a complete training infrastructure for the FastGRNN model:

  • Adam optimizer implementation
  • Binary Cross-Entropy loss with gradient computation
  • Backpropagation Through Time (BPTT)
  • Mini-batch training with validation split
  • Early stopping and learning rate scheduling
  • Knowledge distillation from teacher models
  • Progress reporting and metrics tracking
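Of the pieces above, the Adam optimizer is the most self-contained. A minimal sketch of a single per-element Adam update step, with the standard bias correction for the zero-initialized moment estimates (the function name and scalar signature here are illustrative, not the crate's actual API, which operates on the FastGRNN weight matrices):

```rust
/// One Adam update for a single parameter. `m` and `v` are the running
/// first- and second-moment estimates; `t` is the 1-based step count.
fn adam_step(
    param: &mut f32,
    grad: f32,
    m: &mut f32,
    v: &mut f32,
    t: u32,
    lr: f32,
) {
    let (beta1, beta2, eps) = (0.9f32, 0.999f32, 1e-8f32);
    // Update biased moment estimates.
    *m = beta1 * *m + (1.0 - beta1) * grad;
    *v = beta2 * *v + (1.0 - beta2) * grad * grad;
    // Correct the bias introduced by zero initialization.
    let m_hat = *m / (1.0 - beta1.powi(t as i32));
    let v_hat = *v / (1.0 - beta2.powi(t as i32));
    // Scale the step by the inverse root of the second moment.
    *param -= lr * m_hat / (v_hat.sqrt() + eps);
}
```
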

Structs§

BatchIterator
    Batch iterator for training
Trainer
    FastGRNN trainer
TrainingConfig
    Training hyperparameters
TrainingDataset
    Training dataset with features and labels
TrainingMetrics
    Training metrics
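The Binary Cross-Entropy loss with gradient computation mentioned in the module description can be sketched as follows, assuming the prediction is already a sigmoid output in (0, 1); the clamping guards against `ln(0)`. Names are illustrative, not the crate's actual API:

```rust
/// Binary cross-entropy for one example, returning (loss, dL/dpred).
fn bce_with_grad(pred: f32, target: f32) -> (f32, f32) {
    let eps = 1e-7f32;
    // Clamp away from 0 and 1 so the logs and the gradient stay finite.
    let p = pred.clamp(eps, 1.0 - eps);
    let loss = -(target * p.ln() + (1.0 - target) * (1.0 - p).ln());
    // Gradient of the loss with respect to the (clamped) prediction.
    let grad = (p - target) / (p * (1.0 - p));
    (loss, grad)
}
```
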

Functions§

generate_teacher_predictions
    Generate teacher predictions for knowledge distillation
temperature_softmax
    Temperature-scaled softmax for knowledge distillation with numerical stability
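A temperature-scaled softmax of this kind is typically made numerically stable by subtracting the maximum logit before exponentiating, which leaves the result unchanged but prevents overflow. A hedged sketch of that pattern (the crate's `temperature_softmax` may differ in signature and details):

```rust
/// Softmax over `logits` softened by `temperature` (T > 1 flattens the
/// distribution). Subtracting the max logit keeps exp() from overflowing.
fn temperature_softmax(logits: &[f32], temperature: f32) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits
        .iter()
        .map(|&z| ((z - max) / temperature).exp())
        .collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}
```

At high temperature the teacher's distribution approaches uniform, which is what lets the student learn from the relative ranking of the non-target classes during distillation.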