Streaming machine learning in Rust.
irithyll is a streaming ML library for the case where data arrives in order and never stops. There is no training set. There is no batch loop. Every sample updates the model and is then released – no buffer, no replay.
All models implement StreamingLearner, a two-method contract:
train_one(features, target, weight) and predict(features) -> f64. A
Box<dyn StreamingLearner> is a fully typed model. Anything that
implements the trait slots into a pipeline, an MoE expert, an AutoML
candidate, a projection wrapper, or a classification head.
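A minimal sketch of what satisfying that contract looks like. The `&[f64]` feature type and the exact method signatures here are assumptions inferred from the examples below, not quoted from the trait definition:

```rust
use irithyll::StreamingLearner;

// Toy model for illustration only: predicts the running weighted mean of targets.
struct RunningMean {
    sum: f64,
    weight: f64,
}

impl StreamingLearner for RunningMean {
    fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        self.sum += weight * target;
        self.weight += weight;
    }

    fn predict(&self, _features: &[f64]) -> f64 {
        if self.weight > 0.0 { self.sum / self.weight } else { 0.0 }
    }
}

// Object safety means even a toy model boxes like any other:
let _model: Box<dyn StreamingLearner> = Box::new(RunningMean { sum: 0.0, weight: 0.0 });
```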
§Model Survey
- Gradient-boosted trees – SGBT is the flagship: sequential gradient boosting over streaming Hoeffding trees with automatic drift replacement (Gunasekara et al., 2024). Variants: BaggedSGBT, DistributionalSGBT, MulticlassSGBT, QuantileRegressorSGBT, MultiTargetSGBT, AdaptiveRandomForest, ParallelSGBT (requires the parallel feature).
- Linear and kernel models – RecursiveLeastSquares with prediction intervals, StreamingLinearModel with SGD, KRLS with RBF / polynomial / linear kernels and ALD sparsification, LocallyWeightedRegression, MondrianForest.
- Neural streaming architectures – StreamingMamba (selective SSM, BD-LRU block-diagonal variant), StreamingTTT (test-time training with Titans-style momentum), StreamingKAN (Kolmogorov-Arnold networks with B-spline basis), StreamingsLSTM (exponential-gated stabilized LSTM), StreamingMGrade (minimal recurrent gating with delay convolutions), SpikeNet (spiking neural network with e-prop), EchoStateNetwork, NextGenRC, LogLinearAttention (hierarchical Fenwick state, Han Guo et al., ICLR 2026), StreamingAttentionModel (GLA, DeltaNet, Hawk, RetNet, RWKV-7 variants), NeuralMoE.
- AutoML – AutoTuner races model families under champion-challenger promotion, using empirical Bernstein bounds (Maurer & Pontil, 2009) as the statistical gate – no fixed elimination thresholds. AdaptationBus composes per-arm adaptation policies (drift reracing, plasticity, meta adaptation) in a Lipschitz-product framework.
- Preprocessing and pipelines – IncrementalNormalizer, CCIPCA (O(kd) streaming PCA), MinMaxScaler, OneHotEncoder, PolynomialFeatures, FeatureHasher, OnlineFeatureSelector. Chain with Pipeline::builder() or the pipe factory.
- Evaluation and drift – PrequentialEvaluator, AdaptiveConformalInterval, StreamingAUC, EwmaRegressionMetrics, drift detectors (Page-Hinkley, ADWIN, DDM) via DriftDetector; see the detector sketch after this survey.
- Bandits – EpsilonGreedy, UCB1, UCBTuned, ThompsonSampling, LinUCB, DiscountedThompsonSampling.
- Clustering – StreamingKMeans, CluStream, DBStream.
- Anomaly detection – HalfSpaceTree.
- Projection – ProjectedLearner via PAST subspace tracking (Yang, 1995), supervised or PCA mode.
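The detector contract is easiest to picture with Page-Hinkley, the simplest of the three named above. This is a self-contained textbook sketch, not irithyll's DriftDetector API; the delta and lambda values you would pass are problem-dependent:

```rust
/// Textbook Page-Hinkley test: flags drift when the cumulative deviation of
/// a monitored value (e.g. absolute error) from its running mean exceeds a
/// threshold `lambda`. `delta` is the magnitude of change tolerated as noise.
struct PageHinkley {
    delta: f64,
    lambda: f64,
    mean: f64,
    cumulative: f64,
    min_cumulative: f64,
    n: u64,
}

impl PageHinkley {
    fn new(delta: f64, lambda: f64) -> Self {
        Self { delta, lambda, mean: 0.0, cumulative: 0.0, min_cumulative: 0.0, n: 0 }
    }

    /// Returns true when an upward drift in the monitored stream is detected.
    fn observe(&mut self, value: f64) -> bool {
        self.n += 1;
        self.mean += (value - self.mean) / self.n as f64; // incremental mean
        self.cumulative += value - self.mean - self.delta; // Page-Hinkley sum
        self.min_cumulative = self.min_cumulative.min(self.cumulative);
        self.cumulative - self.min_cumulative > self.lambda
    }
}
```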
§Embedded deployment
The companion crate irithyll-core is #![no_std] and exports trained
trees as 12-byte packed nodes (PackedNode) that traverse branch-free on
Cortex-M0+. Train in the cloud, export with export_embedded, deploy on
bare metal. The boundary is hard and tested against thumbv6m-none-eabi.
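For intuition, a sketch of what a 12-byte node and its descent loop can look like – the field layout and leaf sentinel below are illustrative assumptions, not irithyll-core's actual PackedNode definition:

```rust
/// Illustrative 12-byte node: 4 + 2 + 2 + 4 bytes, 4-byte aligned.
#[repr(C)]
struct Node {
    threshold: f32, // split value; doubles as the leaf output when feature is the sentinel
    feature: u16,   // feature index; u16::MAX marks a leaf (illustrative convention)
    _pad: u16,      // explicit padding to keep the layout stable
    children: u32,  // index of the left child; right child sits at children + 1
}

/// Descent with arithmetic child selection: the comparison result (0 or 1)
/// is added to the child index, so the hot step has no data-dependent branch.
fn traverse(nodes: &[Node], x: &[f32]) -> f32 {
    let mut i = 0usize;
    loop {
        let n = &nodes[i];
        if n.feature == u16::MAX {
            return n.threshold; // leaf: stored value is the prediction
        }
        let go_right = (x[n.feature as usize] > n.threshold) as u32;
        i = (n.children + go_right) as usize;
    }
}
```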
§Feature Flags
| Feature | Default | Description |
|---|---|---|
| serde-json | Yes | JSON model checkpoint / restore |
| serde-bincode | No | Bincode serialization (compact, fast) |
| parallel | No | Rayon-parallel tree training via ParallelSGBT |
| simd | No | AVX2 histogram acceleration |
| simd-avx2 | No | Explicit AVX2 SIMD intrinsics |
| kmeans-binning | No | K-means histogram binning strategy |
| arrow | No | Apache Arrow RecordBatch integration |
| parquet | No | Parquet file I/O |
| onnx | No | ONNX model export |
| neural-leaves | No | Experimental MLP leaf models |
| distill | No | Knowledge distillation for AutoTuner racing |
| full | No | All of the above |
§Quick Start
The smallest useful pipeline – normalize, boost, predict:
```rust
use irithyll::{pipe, normalizer, sgbt, StreamingLearner};

let mut model = pipe(normalizer()).learner(sgbt(50, 0.01));
model.train(&[100.0, 0.5, 42.0], 3.14);
let pred = model.predict(&[100.0, 0.5, 42.0]);
```

Race three model families against each other – let the data choose:
```rust
use irithyll::{automl::{AutoTuner, Factory}, StreamingLearner};

let mut tuner = AutoTuner::builder()
    .add_factory(Factory::sgbt(5))
    .add_factory(Factory::mamba(5))
    .add_factory(Factory::esn())
    .build()
    .unwrap();
tuner.train(&[1.0, 2.0, 3.0, 4.0, 5.0], 6.0);
let pred = tuner.predict(&[1.0, 2.0, 3.0, 4.0, 5.0]);
```

Wrap any regressor for binary classification:
```rust
use irithyll::{sgbt, binary_classifier, StreamingLearner};

let mut clf = binary_classifier(sgbt(50, 0.05));
clf.train(&[1.5, -0.3, 2.1], 1.0);
let label = clf.predict(&[1.5, -0.3, 2.1]);
```

For the extended ergonomics guide – pipeline composition, AutoML tournaments, drift wiring, embedded deployment – see docs/USAGE.md and MODELS.md.
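All three snippets share one loop shape: predict on the sample first, score the prediction, then train on the revealed target. That test-then-train ordering is the prequential protocol; a hand-rolled sketch of it (not the PrequentialEvaluator API – the synthetic stream and MSE bookkeeping are illustrative):

```rust
use irithyll::{pipe, normalizer, sgbt, StreamingLearner};

let mut model = pipe(normalizer()).learner(sgbt(50, 0.01));
let (mut sq_err_sum, mut n) = (0.0_f64, 0_u64);

// Stand-in stream: any iterator of (features, target) pairs works here.
let stream = (0..1000).map(|t| (vec![t as f64, (t % 7) as f64], (2 * t) as f64));

for (x, y) in stream {
    let pred = model.predict(&x);     // 1. predict before seeing the target
    sq_err_sum += (pred - y).powi(2); // 2. score the prediction
    n += 1;
    model.train(&x, y);               // 3. only then train on the sample
}
let prequential_mse = sq_err_sum / n as f64;
```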
§Design Principles
One sample at a time, every time. No mini-batches hidden inside
train_one. Architectures that originally required offline training (TTT,
KAN, Mamba) are reimplemented with online updates that converge
sample-by-sample – and tested for it.
O(1) memory per model. State size is a function of the model, not the data seen. Drift detectors are bounded ring buffers; histograms have fixed bin counts; subspace trackers carry rank-k projections, not covariance matrices.
Bounded readouts before linear heads. Every neural model that feeds a
recursive least squares head bounds its features first – tanh, sigmoid,
L2-normalize, clamp. Unbounded features explode the RLS weights silently.
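A sketch of that guard in isolation – the squashing choices mirror the list above, but the function names are illustrative, not the library's internal wiring:

```rust
/// Squash an unbounded readout into (-1, 1) before it reaches an RLS head.
fn tanh_bound(raw: &[f64]) -> Vec<f64> {
    raw.iter().map(|v| v.tanh()).collect()
}

/// Alternative: L2-normalize so the feature vector lies on the unit sphere;
/// the epsilon floor avoids dividing by zero on an all-zero readout.
fn l2_bound(raw: &[f64]) -> Vec<f64> {
    let norm = raw.iter().map(|v| v * v).sum::<f64>().sqrt().max(1e-12);
    raw.iter().map(|v| v / norm).collect()
}
```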
Every threshold derives from a paper or the data. Bernstein bounds over fixed elimination thresholds. Information-decay matching over grid-searched half-lives. Magic numbers are technical debt.
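Concretely, the empirical Bernstein bound behind the AutoML gate: for n observations in [0, 1] with sample variance V_n, with probability at least 1 − δ, μ ≤ x̄_n + √(2 V_n ln(2/δ)/n) + 7 ln(2/δ)/(3(n − 1)) (Maurer & Pontil 2009, Theorem 4). A sketch of the halfwidth – not necessarily the constants or signature of the exported bernstein_halfwidth:

```rust
/// Empirical Bernstein confidence halfwidth for n > 1 observations in
/// [0, 1] with sample variance `variance` (Maurer & Pontil 2009, Thm 4).
fn eb_halfwidth(variance: f64, n: u64, delta: f64) -> f64 {
    let n = n as f64;
    let log_term = (2.0 / delta).ln();
    (2.0 * variance * log_term / n).sqrt() + 7.0 * log_term / (3.0 * (n - 1.0))
}

// Champion-challenger gate on an error metric: promote only when the
// challenger's worst case beats the champion's best case.
let (champ_mean, champ_hw) = (0.42, eb_halfwidth(0.03, 500, 0.05));
let (chal_mean, chal_hw) = (0.31, eb_halfwidth(0.02, 500, 0.05));
let promote = chal_mean + chal_hw < champ_mean - champ_hw;
```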
Re-exports§
pub use common::{PlasticityConfig, PlasticityConfigBuilder};
pub use ensemble::{DynSGBT, SGBT};
pub use ensemble::adaptive::AdaptiveSGBT;
pub use ensemble::adaptive_forest::AdaptiveRandomForest;
pub use ensemble::bagged::BaggedSGBT;
pub use ensemble::config::{SGBTConfig, ScaleMode};
pub use ensemble::diagnostics::{DistributionalDiagnostics, EnsembleDiagnostics, TreeDiagnostics};
pub use ensemble::distributional::{DecomposedPrediction, DistributionalSGBT, DistributionalTreeDiagnostic, GaussianPrediction, ModelDiagnostics};
pub use ensemble::lr_schedule::LRScheduler;
pub use ensemble::moe_distributional::MoEDistributionalSGBT;
pub use ensemble::multi_target::MultiTargetSGBT;
pub use ensemble::multiclass::MulticlassSGBT;
pub use ensemble::parallel::ParallelSGBT; (parallel feature)
pub use ensemble::quantile_regressor::QuantileRegressorSGBT;
pub use error::IrithyllError;
pub use sample::Sample;
pub use explain::importance_drift::ImportanceDriftMonitor;
pub use explain::streaming::StreamingShap;
pub use explain::treeshap::ShapValues;
pub use stream::{AsyncSGBT, Prediction, PredictionStream, Predictor, SampleSender};
pub use metrics::auc::StreamingAUC;
pub use metrics::conformal::AdaptiveConformalInterval;
pub use metrics::ewma::{EwmaClassificationMetrics, EwmaRegressionMetrics};
pub use metrics::kappa::{CohenKappa, KappaM, KappaT};
pub use metrics::rolling::{RollingClassificationMetrics, RollingRegressionMetrics};
pub use metrics::{Accuracy, ClassificationMetrics, FeatureImportance, LogLoss, MAE, MSE, MetricSet, MetricUnion, OnlineTemperatureScaling, Pinball, R2, RMSE, RegressionMetrics, StreamingMetric};
pub use evaluation::{HoldoutStrategy, PrequentialConfig, PrequentialEvaluator, ProgressiveValidator};
pub use clustering::{CluStream, CluStreamConfig, ClusterFeature, DBStream, DBStreamConfig, MicroCluster, StreamingKMeans, StreamingKMeansConfig};
pub use learners::{BernoulliNB, ClassificationMode, ClassificationWrapper, GaussianNB, KRLS, Kernel, LinearKernel, LocallyWeightedRegression, MondrianForest, MultinomialNB, PolynomialKernel, RBFKernel, RecursiveLeastSquares, StreamingLinearModel, StreamingPolynomialRegression};
pub use anomaly::hst::{AnomalyScore, HSTConfig, HalfSpaceTree};
pub use learner::SGBTLearner;
pub use continual::ContinualLearner;
pub use pipeline::{Pipeline, PipelineBuilder, StreamingPreprocessor};
pub use preprocessing::{CCIPCA, FeatureHasher, IncrementalNormalizer, MinMaxScaler, OneHotEncoder, OnlineFeatureSelector, PolynomialFeatures, StreamingTargetPreprocessor, TargetEncoder, TargetEncoderPreprocessor, TargetLog1pTransform, TargetScaler};
pub use time_series::{DecomposedPoint, DecompositionConfig, HoltWinters, HoltWintersConfig, SNARIMAX, SNARIMAXCoefficients, SNARIMAXConfig, Seasonality, StreamingDecomposition};
pub use bandits::{Bandit, ContextualBandit, DiscountedThompsonSampling, EpsilonGreedy, LinUCB, ThompsonSampling, UCB1, UCBTuned};
pub use reservoir::{ESNConfig, ESNConfigBuilder, ESNPreprocessor, EchoStateNetwork, NGRCConfig, NGRCConfigBuilder, NextGenRC, StreamingESN};
pub use ssm::{MambaConfig, MambaConfigBuilder, MambaPreprocessor, StreamingMamba};
pub use snn::{SpikeNet, SpikeNetConfig, SpikeNetConfigBuilder, SpikePreprocessor, StreamingSpikeNet};
pub use ttt::{StreamingTTT, TTTConfig, TTTConfigBuilder};
pub use lstm::{SLSTMConfig, SLSTMConfigBuilder, StreamingLSTM, StreamingsLSTM};
pub use mgrade::{MGradeConfig, MGradeConfigBuilder, StreamingMGrade};
pub use kan::{KANConfig, KANConfigBuilder, StreamingKAN};
pub use attention::{AttentionPreprocessor, StreamingAttentionConfig, StreamingAttentionConfigBuilder, StreamingAttentionModel};
pub use moe::{NeuralMoE, NeuralMoEBuilder, NeuralMoEConfig};
pub use projection::{ProjectedLearner, ProjectionConfig, ProjectionConfigBuilder};
pub use automl::{AdaptContext, AdaptationBus, Algorithm, ArmBudget, ArmStats, AutoMetric, AutoTuner, AutoTunerBuilder, AutoTunerConfig, BERNSTEIN_DELTA, BudgetLedger, BudgetStatus, BusError, COHORT_K, Category, ChampionCohort, CohortMember, CohortMemberSnapshot, CohortWeight, ComplexityClass, Condition, ConfigDiagnostics, Constraint, CriticalGuard, DiagnosticAdaptor, DiagnosticLearner, DiagnosticSource, DriftRateAdapter, EwmaWelfordTracker, Factory, FactoryError, FactoryMetaLearner, FeasibleRegion, MIN_SAMPLES_FOR_BERNSTEIN, MetaAdapter, MetaLearner, MetaObjective, MetaScore, MetaSearch, ModelFactory, NoOpAdapter, NoOpMetaLearner, Objective, ParamDef, ParamMap, ParamValue, PlasticityAdapter, PromotionVerdict, RewardNormalizer, SamplerError, Scale, SearchSpace, SgbtClassificationMetaLearner, SgbtMetaLearner, SpaceError, TerminateAfter, ThetaDelta, WelfordRace, WelfordTracker, bernstein_compare, bernstein_halfwidth, bernstein_promotion_test, categorical, empirical_bernstein_ci, ewma_bernstein_ci, int_range, linear_range, log_range, when};
pub use automl::{ConfigSampler, ConfigSpace, HyperConfig, HyperParam}; (deprecated)
pub use irithyll_core;
Modules§
- anomaly - Streaming anomaly detection algorithms.
- arrow_support (arrow feature) - Arrow and Parquet integration for zero-copy data ingestion.
- attention - Streaming linear attention models.
- automl - Streaming AutoML: champion-challenger racing with bandit-guided hyperparameter search.
- bandits - Multi-armed bandit algorithms for online decision-making.
- clustering - Streaming clustering algorithms.
- common - Shared configuration types used across multiple streaming models.
- continual - Continual learning wrappers for streaming neural models.
- drift - Concept drift detection algorithms.
- ensemble - SGBT ensemble orchestrator – the core boosting loop.
- error - Error types for Irithyll.
- evaluation - Streaming evaluation protocols for online machine learning.
- explain - TreeSHAP explanations for streaming gradient boosted trees.
- export_embedded - Export trained SGBT models to the irithyll-core packed binary format.
- generators - Canonical synthetic stream generators for benchmarking streaming ML algorithms.
- histogram - Histogram binning and accumulation for streaming tree construction.
- kan - Streaming Kolmogorov-Arnold Networks (KAN).
- learner - Unified streaming learner trait for polymorphic model composition.
- learners - Streaming learner implementations for polymorphic model composition.
- loss - Loss functions for gradient boosting.
- lstm - Streaming sLSTM (stabilized LSTM) with exponential gating.
- metrics - Online metric tracking for streaming model evaluation.
- mgrade - Streaming mGRADE (Minimal Recurrent Gating with Delay Convolutions).
- moe - Streaming Neural Mixture of Experts.
- onnx_export (onnx feature) - Export trained SGBT models to ONNX format.
- pipeline - Composable streaming pipelines for preprocessing → learning chains.
- preprocessing - Streaming preprocessing utilities for feature transformation.
- projection - Online projection learning for streaming models.
- reservoir - Reservoir computing models for streaming temporal learning.
- sample - Core data types for streaming samples.
- serde_support - Model serialization and deserialization support.
- snn - Spiking Neural Networks for streaming machine learning.
- ssm - Streaming Mamba (selective state space model) for temporal ML pipelines.
- stream - Async streaming infrastructure for tokio-native sample ingestion.
- time_series - Time series models for streaming forecasting.
- tree - Streaming decision trees with Hoeffding-bound split decisions.
- ttt - Streaming Test-Time Training (TTT) layers with prediction-directed fast weights.
Macros§
- make_pipeline - River-style ergonomic pipeline construction macro.
Structs§
- EnsembleView - Zero-copy view over a packed ensemble binary.
- HoeffdingTreeClassifier - A streaming decision tree classifier based on the VFDT algorithm.
- LogLinearAttention - Wrap any inner linear-attention update rule with a hierarchical Fenwick-tree state.
- LogLinearState - Hierarchical stack of matrix states, one per active Fenwick level.
- PackedNode - 12-byte packed decision tree node. AoS layout for cache-optimal inference.
- PackedNodeI16 - 8-byte quantized decision tree node. Integer-only traversal for FPU-less targets.
- QuantizedEnsembleHeader - Header for quantized ensemble binary. 16 bytes, 4-byte aligned.
- QuantizedEnsembleView - Zero-copy view over a quantized (int16) ensemble binary.
- SampleRef - A borrowed observation that avoids Vec<f64> allocation.
- TurboQuantized - Quantized weight vector (owned).
- TurboQuantizedView - Zero-copy view over a TurboQuant packed binary.
Enums§
- BinnerKind - Concrete binning strategy enum, eliminating Box<dyn BinningStrategy> heap allocations per feature per leaf.
- ConfigError - Structured error for configuration validation failures.
- DriftSignal - Signal emitted by a drift detector after observing a value.
- FeatureType - Declares whether a feature is continuous (default) or categorical.
- FormatError - Errors that can occur when parsing or validating a packed ensemble binary.
- LeafModelType - Describes which leaf model architecture to use.
- LossType - Tag identifying a loss function for serialization and reconstruction.
- QuantMode - Quantization bit depth. Controls the quality/compression tradeoff.
Constants§
- DEFAULT_MAX_LEVELS - Default max_levels for AttentionMode::LogLinear. ⌊log₂(2³²)⌋ + 1 = 33 is the paper-specified bound for T_max = 2³². The default of 32 is one short to match power-of-two thinking while still covering streams up to 2³² ≈ 4 G tokens with the capacity-overflow fold semantic in LogLinearState::push_leaf. Source: Han Guo et al. 2026 §3, R1 §3.5.
- DEFAULT_TAU - Default temperature for the softplus-softmax mix. τ = 1.0 is the canonical softmax limit – no extra smoothing beyond softplus non-negativity. Source: paper §3.2 / streaming_primitives bounded_mix reference suite.
Traits§
- BinningStrategy - A strategy for computing histogram bin edges from a stream of values.
- DriftDetector - A sequential drift detector that monitors a stream of values.
- HasReadout - Models that expose a linear readout weight vector.
- Loss - A differentiable loss function for gradient boosting.
- Observation - Trait for anything that can be used as a training observation.
- StreamingLearner - Object-safe trait for any streaming (online) machine learning model.
- StreamingTree - A streaming decision tree that trains incrementally.
- Structural - Models whose internal capacity can grow or shrink at runtime.
- Tunable - Models that expose diagnostics and accept smooth hyperparameter adjustments.
Functions§
- adaptive_sgbt - Create an adaptive SGBT with a learning rate scheduler.
- auto_regressor - Across-family auto-regressor preset.
- auto_tune - Create an auto-tuning streaming learner with default settings.
- auto_tuner - Wrap any streaming learner with champion-challenger auto-tuning.
- binary_classifier - Wrap any streaming learner for binary classification.
- ccipca - Create a CCIPCA preprocessor for streaming dimensionality reduction.
- default_lambda_init - Default initial λ for AttentionMode::LogLinear. With Σ λ ≤ 1 after softplus-softmax mixing, an init of 1/max_levels makes the un-trained mixture uniform – every level contributes equally. Paper §3.3 (R1 §5.3) notes: in the streaming setting without backprop, the λ projection is fixed at init time, so a uniform mixture is the principled choice when no information about which levels are useful is available.
- delta_net - Create a Gated DeltaNet model (strongest retrieval, NVIDIA 2024).
- drift_aware - Wrap any streaming learner with drift-detected continual adaptation.
- epsilon_greedy - Create an epsilon-greedy bandit with the given number of arms and exploration rate.
- esn - Create an Echo State Network with cycle topology.
- esn_preprocessor - Create an ESN preprocessor for pipeline composition.
- feature_hasher - Create a feature hasher for fixed-size dimensionality reduction.
- gaussian_nb - Create a Gaussian Naive Bayes classifier.
- gla - Create a Gated Linear Attention model (SOTA streaming attention).
- hawk - Create a Hawk model (lightest streaming attention, vector state).
- krls - Create a kernel recursive least squares model with an RBF kernel.
- lin_ucb - Create a LinUCB contextual bandit.
- linear - Create a streaming linear model with the given learning rate.
- log_linear - Create a Log-Linear Attention model (Han Guo et al., ICLR 2026 – v10 headline).
- mamba - Create a streaming Mamba (selective SSM) model.
- mamba_bd - Create a streaming Mamba with BD-LRU block-diagonal recurrence.
- mamba_preprocessor - Create a Mamba preprocessor for pipeline composition.
- mgrade - Create a streaming mGRADE (minimal recurrent gating with delay convolutions).
- min_max_scaler - Create a min-max scaler that normalizes features to [0, 1].
- mondrian - Create a Mondrian forest with the given number of trees.
- multiclass - Wrap any streaming learner for multiclass classification.
- multiclass_classifier - Wrap any streaming learner for multiclass classification.
- ngrc - Create a Next Generation Reservoir Computer.
- normalizer - Create an incremental normalizer for streaming standardization.
- one_hot - Create a one-hot encoder for the given categorical feature indices.
- online_regressor - Production-default streaming regressor.
- pipe - Start building a pipeline with the first preprocessor.
- polynomial_features - Create a degree-2 polynomial feature generator (interactions + squares).
- projected - Wrap any streaming learner with online projection learning (PAST algorithm).
- quantize - Quantize a weight vector with explicit mode and seed.
- quantize_f32 - Quantize f32 weights with explicit mode. Uses the default seed.
- quantize_i16 - Quantize i16 weights with a dequantization scale and explicit mode.
- quantize_weights - Quantize a weight vector to 3.5-bit TurboQuant format.
- quantize_weights_with_seed - Quantize with an explicit seed for the Hadamard rotation (3.5-bit mode).
- ret_net - Create a RetNet model (simplest, fixed decay).
- rls - Create a recursive least squares model with the given forgetting factor.
- sgbt - Create an SGBT learner with squared loss from minimal parameters.
- simd_exp - SIMD-accelerated element-wise exp with runtime feature detection.
- simd_sigmoid - SIMD-accelerated element-wise sigmoid with runtime feature detection.
- simd_silu - SIMD-accelerated element-wise SiLU (Sigmoid Linear Unit) with runtime feature detection.
- simd_tanh - SIMD-accelerated element-wise tanh with runtime feature detection.
- spikenet - Create a spiking neural network with e-prop learning.
- streaming_attention - Create a streaming attention model with any mode.
- streaming_kan - Create a streaming KAN with the given layer sizes and learning rate.
- streaming_slstm - Create a streaming sLSTM (stabilized LSTM with exponential gating).
- streaming_ttt - Create a streaming TTT (Test-Time Training) model.
- target_encoder - Create a target encoder with Bayesian smoothing for categorical features.
- thompson - Create a Thompson Sampling bandit with Beta(1,1) prior.
- tuned_sgbt - Auto-tuned SGBT preset.
- ucb1 - Create a UCB1 bandit with the given number of arms.
- ucb_tuned - Create a UCB-Tuned bandit with the given number of arms.