#[non_exhaustive]
pub struct MLPClassifier {
    pub loss_curve: Vec<f64>,
    /* private fields */
}
Multi-layer perceptron classifier.
Trains a feedforward neural network for classification using backpropagation with configurable optimizers and activations.
Defaults match sklearn: hidden_layers=[100], Adam, lr=0.001,
max_iter=200, batch_size=200, alpha=0.0001.
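The setters below form a consuming builder: each takes `self` and returns `Self`, so configuration is chained onto the defaults. As a rough, self-contained sketch of that pattern (using a local stand-in struct with a subset of the fields, not the crate's actual `MLPClassifier`):

```rust
// Hypothetical stand-in mirroring the builder defaults listed above;
// the real MLPClassifier lives in the crate, not here.
#[derive(Debug)]
struct MlpConfig {
    hidden_layers: Vec<usize>,
    learning_rate: f64,
    max_iter: usize,
    batch_size: usize,
}

impl Default for MlpConfig {
    // Defaults chosen to match the sklearn-style defaults above.
    fn default() -> Self {
        MlpConfig {
            hidden_layers: vec![100],
            learning_rate: 0.001,
            max_iter: 200,
            batch_size: 200,
        }
    }
}

impl MlpConfig {
    // Consuming setters, in the style of the crate's builder methods.
    fn learning_rate(mut self, lr: f64) -> Self { self.learning_rate = lr; self }
    fn max_iter(mut self, n: usize) -> Self { self.max_iter = n; self }
}

fn main() {
    let cfg = MlpConfig::default().learning_rate(0.01).max_iter(500);
    assert_eq!(cfg.learning_rate, 0.01);
    assert_eq!(cfg.max_iter, 500);
    // Untouched defaults survive chaining.
    assert_eq!(cfg.hidden_layers, vec![100]);
}
```

Because each setter consumes and returns the builder, unset fields keep their defaults no matter how the chain is ordered.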
Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be matched against without a wildcard `..`, and struct update syntax will not work.

loss_curve: Vec<f64>
Training loss curve (one entry per epoch).
Implementations
impl MLPClassifier
Set hidden layer sizes. Default: &[100].
pub fn activation(self, activation: Activation) -> Self
Set activation function for hidden layers. Default: ReLU.
pub fn optimizer(self, kind: OptimizerKind) -> Self
Set optimizer algorithm. Default: Adam.
pub fn learning_rate(self, lr: f64) -> Self
Set learning rate. Default: 0.001.
pub fn max_iter(self, n: usize) -> Self
Set maximum training iterations (epochs). Default: 200.
pub fn batch_size(self, n: usize) -> Self
Set mini-batch size. Default: 200.
pub fn early_stopping(self, enable: bool) -> Self
Enable early stopping with validation split. Default: false.
pub fn validation_fraction(self, frac: f64) -> Self
Set validation fraction for early stopping. Default: 0.1.
pub fn n_iter_no_change(self, n: usize) -> Self
Set patience for early stopping. Default: 10.
pub fn learning_rate_schedule(self, schedule: LearningRateSchedule) -> Self
Set learning rate schedule. Default: LearningRateSchedule::Constant.
Use LearningRateSchedule::adaptive() for reduce-on-plateau behavior.
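The exact behavior of `LearningRateSchedule::adaptive()` is internal to the crate, but reduce-on-plateau schedules generally track the best loss seen so far and shrink the learning rate once it stops improving. A self-contained sketch of that idea (the patience and decay factor here are illustrative, not the crate's values):

```rust
// Generic reduce-on-plateau logic: halve the learning rate after
// `patience` consecutive epochs without a new best loss.
struct Plateau {
    lr: f64,
    best_loss: f64,
    stale_epochs: usize,
    patience: usize,
    factor: f64,
}

impl Plateau {
    fn step(&mut self, loss: f64) -> f64 {
        if loss < self.best_loss {
            self.best_loss = loss;
            self.stale_epochs = 0;
        } else {
            self.stale_epochs += 1;
            if self.stale_epochs >= self.patience {
                self.lr *= self.factor; // reduce lr once loss stops improving
                self.stale_epochs = 0;
            }
        }
        self.lr
    }
}

fn main() {
    let mut s = Plateau {
        lr: 0.001,
        best_loss: f64::INFINITY,
        stale_epochs: 0,
        patience: 2,
        factor: 0.5,
    };
    // Loss improves twice, then plateaus for two epochs.
    for &loss in &[1.0, 0.9, 0.9, 0.9] {
        s.step(loss);
    }
    assert!((s.lr - 0.0005).abs() < 1e-12); // halved after 2 stale epochs
}
```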
pub fn dropout(self, p: f64) -> Self
Set dropout probability applied between hidden layers.
p is the fraction of activations to zero out (e.g. 0.5 for 50%).
Applied only during training; inference is unaffected.
Default: 0.0 (no dropout).
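Mechanically, dropout zeroes each hidden activation independently with probability `p` during training. A minimal self-contained sketch (using a tiny LCG so no external RNG crate is needed; whether the crate additionally rescales surviving activations, i.e. inverted dropout, is not stated above):

```rust
// Zero each activation with probability p, using a simple LCG for
// reproducible pseudo-random draws.
fn dropout(activations: &mut [f64], p: f64, seed: &mut u64) {
    for a in activations.iter_mut() {
        // LCG step; the high 53 bits give a rough uniform in [0, 1).
        *seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let u = (*seed >> 11) as f64 / (1u64 << 53) as f64;
        if u < p {
            *a = 0.0;
        }
    }
}

fn main() {
    let mut acts = vec![1.0; 8];
    let mut seed = 42u64;

    dropout(&mut acts, 0.0, &mut seed); // p = 0.0: nothing is dropped
    assert!(acts.iter().all(|&a| a == 1.0));

    dropout(&mut acts, 1.0, &mut seed); // p = 1.0: everything is dropped
    assert!(acts.iter().all(|&a| a == 0.0));
}
```

At inference time this function would simply not be called, matching the "applied only during training" behavior above.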
pub fn callback(self, cb: Box<dyn TrainingCallback>) -> Self
Add a training callback (invoked after each epoch).
pub fn predict(&self, features: &[Vec<f64>]) -> Result<Vec<f64>>
Predict class labels for input samples.
features is &[Vec<f64>] where each inner vec is one sample (row-major).
pub fn predict_proba(&self, features: &[Vec<f64>]) -> Result<Vec<f64>>
Predict class probabilities (softmax output).
Returns a flat [batch * n_classes] row-major probability matrix.
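In a flat row-major layout, the probability for sample `i` and class `j` lives at index `i * n_classes + j`. A self-contained sketch of indexing that layout and recovering the predicted label via argmax (helper names here are illustrative, not part of the crate):

```rust
// Probability for one (sample, class) pair in a flat row-major matrix,
// as returned by predict_proba.
fn prob(probs: &[f64], n_classes: usize, sample: usize, class: usize) -> f64 {
    probs[sample * n_classes + class]
}

// Argmax over one sample's row recovers the predicted class index.
fn argmax_class(probs: &[f64], n_classes: usize, sample: usize) -> usize {
    let row = &probs[sample * n_classes..(sample + 1) * n_classes];
    row.iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(j, _)| j)
        .unwrap()
}

fn main() {
    // 2 samples x 3 classes, row-major.
    let probs = vec![0.1, 0.7, 0.2, 0.8, 0.1, 0.1];
    assert_eq!(prob(&probs, 3, 1, 0), 0.8);
    assert_eq!(argmax_class(&probs, 3, 0), 1);
    assert_eq!(argmax_class(&probs, 3, 1), 0);
}
```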
pub fn n_features(&self) -> usize
Number of features the model was trained on.
pub fn loss_curve(&self) -> &[f64]
Training loss per epoch.
pub fn history(&self) -> Option<&TrainingHistory>
Structured training history with per-epoch metrics.
Returns None if the model has not been fitted yet.
pub fn layer_dims(&self) -> &[(usize, usize)]
Layer dimensions (for visualization).
pub fn activation_fn(&self) -> Activation
Hidden-layer activation function.
Trait Implementations
impl Clone for MLPClassifier
impl Debug for MLPClassifier
impl Default for MLPClassifier
impl PartialFit for MLPClassifier
fn partial_fit(&mut self, data: &Dataset) -> Result<()>
Run one epoch of mini-batch SGD on the given data.
On the first call, initializes the network architecture from the data dimensions. Subsequent calls preserve network weights and continue training.
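This call pattern supports incremental or streaming training: call `partial_fit` once per epoch and the network keeps learning from where it left off. A self-contained sketch of the pattern (the `Dataset` type and toy model below are local stand-ins, not the crate's; only the call pattern mirrors the documented behavior):

```rust
// Local stand-ins illustrating the incremental-training pattern.
struct Dataset; // placeholder for the crate's Dataset

trait PartialFit {
    fn partial_fit(&mut self, data: &Dataset) -> Result<(), String>;
    fn is_initialized(&self) -> bool;
}

struct ToyModel {
    epochs_seen: usize,
}

impl PartialFit for ToyModel {
    fn partial_fit(&mut self, _data: &Dataset) -> Result<(), String> {
        // The first call would size the network from the data; later
        // calls keep existing weights and run one more epoch.
        self.epochs_seen += 1;
        Ok(())
    }
    fn is_initialized(&self) -> bool {
        self.epochs_seen > 0
    }
}

fn main() {
    let mut model = ToyModel { epochs_seen: 0 };
    assert!(!model.is_initialized());
    // e.g. streaming data: one epoch per batch arrival.
    for _ in 0..5 {
        model.partial_fit(&Dataset).unwrap();
    }
    assert!(model.is_initialized());
    assert_eq!(model.epochs_seen, 5);
}
```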
fn is_initialized(&self) -> bool
Whether the network has been initialized (i.e. at least one partial_fit call has been made).

impl PipelineModel for MLPClassifier
Auto Trait Implementations
impl Freeze for MLPClassifier
impl !RefUnwindSafe for MLPClassifier
impl Send for MLPClassifier
impl Sync for MLPClassifier
impl Unpin for MLPClassifier
impl UnsafeUnpin for MLPClassifier
impl !UnwindSafe for MLPClassifier
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, or into a Right variant otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where
    F: FnOnce(&Self) -> bool,
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, or into a Right variant otherwise.