Auto-generated module
🤖 Generated with SplitRS
Functions
- adaboost_training_error_ty - AdaBoost training error: ≤ exp(-2 Σ γ_t²) after T rounds. Type: ∀ (T : Nat), Prop
- adaboost_ty - AdaBoost: adaptive boosting with exponential loss, T weak learners. Type: Nat → Type → Type
- app
- app2
- app3
- arrow
- backdoor_adjustment_ty - BackdoorAdjustment: P(Y|do(X)) = Σ_z P(Y|X,Z=z) P(Z=z). Type: ∀ (X Y Z : Type), BackdoorCriterion X Y Z → Prop
- backdoor_criterion_ty - BackdoorCriterion: set Z satisfies the backdoor criterion for (X, Y). Type: Type → Type → Type → Prop (X, Y, Z → criterion)
- bandit_algorithm_ty - BanditAlgorithm: protocol with only loss feedback (no gradient). Type: Nat → Nat → Type (n arms, T rounds)
- barron_approximation_ty - BarronApproximation: shallow nets approximate Barron functions at rate 1/√m. Type: ∀ (B : Real) (m : Nat), Prop (B = norm bound, m = neurons)
- barron_space_ty - BarronSpace: functions with finite Barron norm (representable by shallow nets). Type: Real → Type (radius B → function class)
- bayesian_regret_bound_ty - BayesianRegretBound: Bayesian regret for Thompson sampling, O(√(n T)). Type: ∀ (n T : Nat), Prop
- benign_overfitting_ty - BenignOverfitting: interpolating solution still generalizes well. Type: ∀ (n d : Nat), Prop (n samples, d features)
- bias_variance_decomposition_ty - Bias-variance decomposition: E[(ŷ - y)²] = Bias²(ŷ) + Var(ŷ) + σ². Type: Prop
- bias_variance_tradeoff_ty - BiasVarianceTradeoff: MSE = Bias² + Variance + Noise. Type: Real → Real → Real → Prop
- bool_ty
- build_env - Register all statistical learning theory axioms and theorems in the kernel environment.
- build_env_extended - Register the extended set of statistical learning theory axioms (§7–§17).
- bvar
- byzantine_fault_tolerance_ty - ByzantineFaultTolerance: learning with f Byzantine clients out of m. Type: Nat → Nat → Type (m total, f Byzantine → robust protocol)
- calibration_error_ty - CalibrationError: ECE = E[|P(Y=1|score=p) - p|]. Type: (List Real) → (List Real) → Real (predictions, labels → ECE)
- catoni_pac_bayes_ty - CatoniPACBayes: Catoni's PAC-Bayes bound with tighter constants. Type: ∀ (n : Nat) (δ : Real), Prop
- chain_rule_mi_ty - Chain rule of mutual information: I(X;Y,Z) = I(X;Y) + I(X;Z|Y). Type: Prop
- communication_complexity_ty - CommunicationComplexity: total bits communicated to reach ε accuracy. Type: Real → Nat → Nat (ε, m → bits)
- confounding_bias_ty - ConfoundingBias: bias from ignoring confounders. Type: Real → Prop (bias magnitude → bounded)
- covariate_shift_ty - CovariateShift: p_S(x) ≠ p_T(x) but p(y|x) the same. Type: Type → Prop
- cramer_rao_bound_ty - Cramér-Rao bound: Var(θ̂) ≥ 1/I(θ) for any unbiased estimator. Type: ∀ (p : Real → Real) (θ : Real), Prop
- cst
- data_deletion_ty - DataDeletion: privacy-adjacent model update after removing one point. Type: Nat → Real → Type (n, budget → deletion mechanism)
- data_dependent_bound_ty - DataDependentBound: a bound that depends on the observed dataset S. Type: Nat → Real → Prop (n samples, bound value)
- data_processing_inequality_ty - Data processing inequality: I(X;Z) ≤ I(X;Y) for the Markov chain X→Y→Z. Type: Prop
- depth_separation_ty - DepthSeparation: a function requiring an exponentially wide shallow net. Type: Nat → Prop (depth d → separation result)
- do_calculus_ty - DoCalculus: causal interventional distribution P(Y | do(X=x)). Type: Real → Real → Real (x, context → P(Y | do(X=x)))
- domain_adaptation_bound_ty - DomainAdaptationBound: generalization bound under covariate shift. Type: ∀ (n : Nat) (delta : Real), Prop
- domain_adaptation_ty - DomainAdaptation: learning when source and target domains differ. Type: Type → Type → Nat → Type (source, target, n → adapted model)
- dot_ext
- double_descent_ty - DoubleDescent: test error as a function of model complexity. Type: Nat → Real (n_params → test error curve)
- double_rademacher_ty - DoubleRademacher: two-sided Rademacher bound (two-sided uniform convergence). Type: Type → Nat → Real → Prop
- dp_generalization_bound_ty - DPGeneralizationBound: utility bound for DP learning. Type: ∀ (n : Nat) (eps_priv delta_priv : Real), Prop
- dp_sample_complexity_ty - DPSampleComplexity: extra samples needed for privacy. Type: Real → Real → Real → Real → Nat (ε_priv, δ_priv, ε_learn, δ_learn → m)
- dp_sgd_algorithm_ty - DPSGDAlgorithm: DP-SGD with noise σ, clipping C, epochs T. Type: Real → Real → Nat → Type (σ, C, T → algorithm)
- early_stopping_reg_ty - EarlyStoppingReg: implicit regularization via iteration count T. Type: Nat → Type
- elbo_ty - ELBO: evidence lower bound ℒ(q) = E_q[log p(x,z)] - E_q[log q(z)]. Type: (Real → Real) → (Real → Real) → Real
- ewa_algorithm_ty - ExponentialWeightsAlgorithm: EWA (Hedge) distribution over n experts. Type: Nat → Real → Type (n experts, learning rate η)
- ewa_regret_bound_ty - EWARegretBound: R_T(EWA) ≤ ln(n)/η + η T/2. Type: ∀ (n : Nat) (T : Nat) (η : Real), Prop
- feature_map_ty - FeatureMap: φ: X → H_k with k(x,x′) = ⟨φ(x),φ(x′)⟩. Type: Type → Type → Type
- fedavg_convergence_ty - FedAvgConvergence: FedAvg converges at rate O(1/√(T m)). Type: ∀ (T m : Nat), Prop
- federated_learning_ty - FederatedLearning: distributed optimization across m clients. Type: Nat → Nat → Type (m clients, T rounds → protocol)
- fisher_information_ty - FisherInformation: I(θ) = E[(∂/∂θ log p(x;θ))²]. Type: (Real → Real) → Real → Real
- fundamental_theorem_pac_ty - Fundamental theorem of PAC learning: finite VC dimension ⇒ PAC learnability. Type: ∀ (H : Type), VCDimension H < ∞ → PACLearnability H
- gaussian_complexity_ty - GaussianComplexity: G_n(H) = E_{g,S}[sup_{h∈H} (1/n) Σ g_i h(x_i)]. Type: Type → Nat → Real
- growth_function_ty - GrowthFunction: Π_H(m) = max_{S,|S|=m} |{h|_S : h ∈ H}|. Type: Type → Nat → Nat
- heterogeneity_measure_ty - HeterogeneityMeasure: degree of statistical heterogeneity across clients. Type: Nat → Real → Prop (m clients, heterogeneity measure → bound)
- implicit_regularization_ty - ImplicitRegularization: GD with zero init converges to the min-norm solution. Type: Nat → Real → Prop (steps T, step size η)
- importance_weighting_ty - ImportanceWeighting: reweight source samples by p_T(x)/p_S(x). Type: (Real → Real) → (Real → Real) → Real → Real (p_T, p_S, x → weight)
- interventional_dist_ty - InterventionalDist: P(Y | do(X)) in a structural causal model. Type: Type → Type → Type (X space, Y space → dist)
- kernel_function_ty - KernelFunction: k: X × X → ℝ, a positive-definite symmetric function. Type: Type → Type (representing a kernel on X)
- kernel_matrix_ty - KernelMatrix: Gram matrix K_{ij} = k(x_i, x_j) ∈ ℝ^{n×n}. Type: (Type → Type) → Nat → Type
- kernel_pca_projection_ty - KernelPCAProjection: projection onto the top-k kernel PCA components. Type: (Type → Type) → Nat → Nat → Type (kernel, n, k → projector)
- kernel_pca_ty - Kernel PCA: principal components in feature space via eigendecomposition of K. Type: (Type → Type) → Nat → Nat → Type
- kernel_svm_ty - KernelSVM: support vector machine with the kernel trick. Type: (Type → Type) → Real → Type
- lasso_reg_ty - LassoReg: ℓ₁ regularization λ‖h‖₁ (sparsity-promoting). Type: Real → Type
- list_ty
- localized_bound_ty - LocalizedBound: generalization bound via localized Rademacher complexity. Type: ∀ (H : Type) (n : Nat) (δ : Real), Prop
- localized_rademacher_ty - LocalizedRademacher: local Rademacher complexity around the minimizer. Type: Type → Nat → Real → Real (H, n, radius → complexity)
- mercer_theorem_ty - Mercer's theorem: k is p.d. ⇒ ∃ feature map φ with k(x,x′) = ⟨φ(x),φ(x′)⟩. Type: ∀ (k : Type → Type), isPositiveDefinite k → ∃ feature map, Prop
- min_norm_interpolation_ty - MinNormInterpolation: min-‖w‖ solution that fits the training data exactly. Type: Nat → Nat → Type (n samples, d features → solution)
- ml_kl_divergence_ty - MLKLDivergence: D_KL(P‖Q) = Σ P log(P/Q), used in PAC-Bayes bounds. Type: (List Real) → (List Real) → Real
- ml_mutual_information_ty - MLMutualInformation: I(X;Y) = H(X) - H(X|Y) for learning-theoretic analysis. Type: (List Real) → (List Real) → Real
- multiplicative_weights_update_ty - MultiplicativeWeightsUpdate: MW update step at round t. Type: Nat → Real → Type (n, η)
- nat_ty
- neural_network_ty - NeuralNetwork: a feedforward network of given depth and width. Type: Nat → Nat → Type (depth, width → network)
- nn_expressivity_ty - NNExpressivity: VC dimension / capacity of a neural net class. Type: Nat → Nat → Nat (depth, width → VC dim)
- nn_generalization_bound_ty - NNGeneralizationBound: Rademacher-based bound for neural networks. Type: ∀ (depth width n : Nat) (δ : Real), Prop
- ogd_regret_bound_ty - OGD regret bound: R_T ≤ D²/(2η) + η Σ ‖∇_t‖² ≤ D·G·√(2T). Type: ∀ (T : Nat) (eta : Real), Prop
- on_average_stability_ty - OnAverageStability: E_S[E_z|L(A(S),z) - L(A(S^{(i)}),z)|] ≤ β. Type: Real → Prop
- online_algorithm_ty - OnlineAlgorithm: sequential prediction protocol over T rounds. Type: Nat → Type
- online_gradient_descent_ty - OnlineGradientDescent: OGD with regret O(√T). Type: Real → Nat → Type (learning rate η, rounds T)
- pac_bayes_bound_ty - PAC-Bayes bound: L_D(Q) ≤ L_S(Q) + √((D_KL(Q‖P) + log(n/δ))/(2n)). Type: ∀ (n : Nat) (δ : Real), Prop
- pac_bayes_mcallester_ty - PACBayesBound: McAllester's PAC-Bayes bound. Type: ∀ (n : Nat) (δ : Real), (List Real) → (List Real) → Prop
- pac_learnability_ty - PACLearnability: a hypothesis class is PAC learnable. Type: Type → Prop
- pac_learner_ty - PACLearner: a learning algorithm that for any ε, δ > 0 returns h with L_D(h) ≤ ε. Type: Real → Real → Nat → Type
- perceptron_mistake_bound_ty - Perceptron mistake bound: M ≤ (R/γ)², where R = max ‖x‖ and γ = margin. Type: ∀ (R γ : Real), mistakes ≤ (R/γ)²
- perceptron_ty - Perceptron: online linear classifier with a mistake bound. Type: Nat → Type (dimension → perceptron)
- pi
- private_pac_learning_ty - PrivatePACLearning: PAC learning with (ε, δ)-differential privacy. Type: Real → Real → Type (ε_priv, δ_priv → learner)
- private_query_mechanism_ty - PrivateQueryMechanism: answering statistical queries with DP. Type: Real → Real → Type (ε, δ → mechanism)
- prop
- proper_scoring_rule_ty - ProperScoringRule: S(f, y) is proper if E[S(f,Y)] is maximized by the true distribution. Type: (Real → Real) → Real → Real (forecast function, outcome → score)
- rademacher_bound_ty - Rademacher generalization bound: L_D(h) ≤ L_S(h) + 2 R_n(H) + O(√(log(1/δ)/n)). Type: ∀ (H : Type) (n : Nat) (δ : Real), Prop
- rademacher_complexity_ty - RademacherComplexity: R_n(H) = E_{σ,S}[sup_{h∈H} (1/n) Σ σ_i h(x_i)]. Type: Type → Nat → Real
- real_ty
- regret_bound_ty - RegretBound: R_T = Σ L_t(h_t) - min_h Σ L_t(h) ≤ O(√T). Type: Nat → Real → Prop (rounds T, bound ε)
- regularized_objective_ty - RegularizedObjective: L(h) + λ Ω(h). Type: Real → Type (regularization weight → regularized problem)
- reliability_diagram_ty - ReliabilityDiagram: calibration curve P(Y=1 | score ∈ bin) vs. score. Type: Nat → Type (n bins → diagram)
- representer_theorem_ty - Representer theorem: the optimal solution in an RKHS lies in the span of kernel evaluations. Type: ∀ (k : Type → Type) (n : Nat), Prop
- ridge_regression_solution_ty - Ridge regression solution: (X^T X + λI)^{-1} X^T y. Type: ∀ (n d : Nat) (λ : Real), List (List Real) → List Real → List Real
- rkhs_norm_ty - RKHSNorm: ‖f‖²_{H_k} = Σ_{i,j} α_i α_j k(x_i, x_j). Type: (Type → Type) → Real (kernel → norm²)
- rkhs_ty - RKHS: reproducing kernel Hilbert space H_k associated to kernel k. Type: (Type → Type) → Type
- sample_complexity_bound_ty - Sample complexity upper bound for PAC learning. Type: ∀ (ε δ : Real) (d : Nat), m ≥ SampleComplexity ε δ d → Prop
- sample_complexity_ty - SampleComplexity: m = O((d log(d/ε) + log(1/δ)) / ε). Type: Real → Real → Nat → Nat
- sauer_shelah_ty - Sauer-Shelah lemma: Π_H(m) ≤ Σ_{i=0}^{d} C(m,i). Type: ∀ (H : Type) (m : Nat), GrowthFunction H m ≤ Sauer_bound (VCDimension H) m
- sharpness_measure_ty - SharpnessMeasure: variance of the forecast distribution. Type: (List Real) → Real (forecasts → sharpness)
- stability_generalization_bound_ty - StabilityGeneralizationBound: β-stable ⇒ generalization error ≤ 2β + O(1/√n). Type: ∀ (β : Real) (n : Nat) (δ : Real), Prop
- svm_generalization_bound_ty - SVMGeneralizationBound: margin-based bound ≤ R²/(γ² n). Type: ∀ (n : Nat) (R γ : Real), Prop
- symmetrization_lemma_ty - Symmetrization lemma: relates population risk to Rademacher complexity. Type: ∀ (H : Type) (n : Nat), Prop
- thompson_sampling_ty - ThompsonSampling: Bayesian bandit via posterior sampling. Type: Nat → Type (n arms)
- tikhonov_reg_ty - TikhonovReg: Tikhonov regularization λ‖h‖²_{H_k}. Type: Real → (Type → Type) → Type
- type0
- ucb_algorithm_ty - UCBAlgorithm: upper confidence bound algorithm. Type: Nat → Real → Type (n arms, exploration param c)
- ucb_regret_bound_ty - UCBRegretBound: UCB1 regret O(√(n T ln T)). Type: ∀ (n T : Nat), Prop
- uniform_convergence_ty - UniformConvergence: sup_{h∈H} |L_D(h) - L_S(h)| ≤ ε w.p. ≥ 1-δ. Type: Type → Real → Real → Prop
- uniform_stability_ty - UniformStability: |L(A(S), z) - L(A(S′), z)| ≤ β for any S, S′ differing in 1 point. Type: Real → Prop (β → stability predicate)
- vc_dimension_ty - VCDimension: maximal shattered set size for a hypothesis class H. Type: Type → Nat
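Entries such as vc_dimension_ty, growth_function_ty, and sauer_shelah_ty give the shape of the statements that build_env registers in the kernel environment. A minimal Lean 4 sketch of that shape, stated under explicit assumptions: the identifiers VCDimension, GrowthFunction, sauerBound, and sauer_shelah below are illustrative stand-ins, not the names this crate actually emits.

```lean
-- Hypothetical declarations mirroring the listed signatures.
-- Type → Nat matches vc_dimension_ty; Type → Nat → Nat matches growth_function_ty.
axiom VCDimension : Type → Nat
axiom GrowthFunction : Type → Nat → Nat

-- Σ_{i=0}^{d} C(m, i), the right-hand side of the Sauer-Shelah lemma.
def sauerBound (d m : Nat) : Nat :=
  (List.range (d + 1)).foldl (fun acc i => acc + Nat.choose m i) 0

-- Sauer-Shelah as an axiom: Π_H(m) ≤ Σ_{i=0}^{d} C(m, i),
-- matching the Type shown for sauer_shelah_ty.
axiom sauer_shelah :
  ∀ (H : Type) (m : Nat),
    GrowthFunction H m ≤ sauerBound (VCDimension H) m
```

Registering such statements as axioms (rather than proving them) is consistent with the build_env description above: the environment gains the types and propositions of the theory without carrying their proofs.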