// SPDX-License-Identifier: MIT OR Apache-2.0
//! Incremental (online) learning trait.
//!
//! Models implementing [`PartialFit`] can be trained on data that arrives
//! in batches, without requiring all data in memory at once.
//!
//! # Example
//!
//! ```ignore
//! use scry_learn::prelude::*;
//!
//! let mut model = LogisticRegression::new()
//!     .solver(Solver::GradientDescent);
//! for batch in data_stream.chunks(10_000) {
//!     model.partial_fit(&batch)?;
//! }
//! let preds = model.predict(&test_features)?;
//! ```

use crate::{Dataset, Result};

/// Trait for models that support incremental (online) learning.
///
/// State from previous `partial_fit` calls is preserved and updated —
/// the model does **not** restart from scratch.
///
/// # Supported models
///
/// | Model | How it works |
/// |-------|-------------|
/// | `LogisticRegression` (GD) | One epoch of gradient descent per batch |
/// | `GaussianNb` | Accumulates sufficient statistics |
/// | `MiniBatchKMeans` | Streaming centroid updates |
/// | `MLPClassifier` | One epoch of mini-batch SGD |
/// | `MLPRegressor` | One epoch of mini-batch SGD |
///
/// Decision trees, Random Forest, and gradient-boosted trees (GBT) are
/// inherently batch algorithms and do **not** support `partial_fit`.
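// NOTE: the doc comment above needs a trait declaration to attach to. The
// sketch below is inferred from the imports (`crate::Dataset`, `crate::Result`)
// and the `model.partial_fit(&batch)?` call in the module example; the exact
// method signature is an assumption and should be adjusted to the crate's
// real API.
pub trait PartialFit {
    /// Update the model in place with one additional batch of data,
    /// preserving state accumulated by previous `partial_fit` calls.
    fn partial_fit(&mut self, dataset: &Dataset) -> Result<()>;
}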