//! Kolmogorov-Arnold Networks (KAN).
//!
//! Replaces fixed activation functions with learnable univariate functions on
//! each network edge. Each edge carries a B-spline or rational activation that
//! is learned during training.
//!
//! # Architecture
//!
//! A KAN layer with `n_in` inputs and `n_out` outputs stores `n_in × n_out`
//! independent activation functions `phi_{i,j}`. The output is:
//!
//! ```text
//! y_j = Σ_i phi_{i,j}(x_i)
//! ```
//!
//! Stacking multiple KAN layers forms a `KanNetwork`.
//!
//! # Supported activation families
//!
//! | Variant                | Description                                             |
//! |------------------------|---------------------------------------------------------|
//! | [`BSplineActivation`]  | Piecewise polynomial (cubic by default); compact support |
//! | [`RationalActivation`] | Padé-type rational; globally smooth                      |
//!
//! # Reference
//!
//! Liu et al. (2024) *"KAN: Kolmogorov-Arnold Networks"*
//! <https://arxiv.org/abs/2404.19756>
// NOTE: the re-exported items below are taken from this module's doc links
// ([`BSplineActivation`], [`RationalActivation`]) and the `KanNetwork` type it
// describes; the submodule names are assumptions.
pub use bspline::BSplineActivation;
pub use network::KanNetwork;
pub use rational::RationalActivation;
use crate::NeuralError;
/// Convenience result type for KAN operations.
pub type KanResult<T> = Result<T, NeuralError>;
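// Illustrative sketch of the per-layer sum `y_j = Σ_i phi_{i,j}(x_i)` from the
// module docs, with edge functions modeled as plain function pointers. The
// names and types here are hypothetical, not this crate's API.

```rust
/// Each edge (i, j) carries its own univariate function `phi[i][j]`;
/// output `j` is the sum of those functions applied to the inputs.
fn kan_layer_forward(x: &[f64], phi: &[Vec<fn(f64) -> f64>]) -> Vec<f64> {
    let n_out = phi[0].len();
    (0..n_out)
        .map(|j| x.iter().enumerate().map(|(i, &xi)| phi[i][j](xi)).sum())
        .collect()
}

fn main() {
    // n_in = 2 inputs, n_out = 1 output; both edges "learned" x^2.
    let square: fn(f64) -> f64 = |x| x * x;
    let phi = vec![vec![square], vec![square]];
    let y = kan_layer_forward(&[1.0, 2.0], &phi);
    assert_eq!(y, vec![5.0]); // 1^2 + 2^2 = 5
}
```

// In a real `KanLayer` the `fn` pointers would be replaced by trainable
// B-spline or rational activations with learnable coefficients.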