//! # nexcore-cognition
//!
//! Typed cognitive engine — the transformer algorithm as strict Rust.
//!
//! ## Meta-cognitive origin
//!
//! This crate captures the fundamental algorithm behind large language model
//! cognition: attention selects, transformation processes, generation builds.
//! Each module maps to an observable pattern in how neural networks process
//! information, translated faithfully into Rust's type system.
//!
//! ## Architecture (bottom-up)
//!
//! ```text
//! pipeline ──► generator ──► block ──► attention + feed_forward
//!     │             │           │
//!     ▼             ▼           ▼
//! normalize       mask       tensor
//! residual                      │
//! embedding                     ▼
//!                             error
//! ```
//!
//! ## T1 Primitive grounding
//!
//! | Module | Primitives | Cognitive role |
//! |-------------|----------------------------------|--------------------------|
//! | tensor | N, Σ, ×, ∂, κ | Numerical substrate |
//! | embedding | μ, λ, N | Symbol → vector |
//! | attention | κ, →, N, μ, Σ | Relevance selection |
//! | feed_forward| μ, ς | Nonlinear transformation |
//! | residual | π, Σ | Context preservation |
//! | normalize | ∂, N | Signal stability |
//! | block | σ, ∃ | Composable unit |
//! | mask | ∂, →, ∝ | Causal constraint |
//! | generator | σ, ρ, ∝, →, ∂ | Autoregressive output |
//! | sample | N, ∂, ν, κ | Stochastic selection |
//! | metrics | κ, N, ν, μ | Self-measurement |
//! | pipeline | σ, →, Σ, κ | Full cognitive flow |
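As a concrete illustration of the roles the table assigns to `attention` (relevance selection) and `mask` (causal constraint), the following is a minimal, dependency-free sketch of scaled dot-product attention under a causal mask. It is illustrative only: the function names, flat row-major layout, and `f32` types are assumptions, not this crate's actual API.

```rust
// Illustrative only -- not this crate's API. Scaled dot-product attention
// with a causal mask, in plain std Rust.

/// Numerically stable in-place softmax over one score row.
fn softmax(row: &mut [f32]) {
    let max = row.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let mut sum = 0.0;
    for x in row.iter_mut() {
        *x = (*x - max).exp();
        sum += *x;
    }
    for x in row.iter_mut() {
        *x /= sum;
    }
}

/// `q`, `k`, `v`: `seq_len` rows of `dim` columns, flattened row-major.
fn causal_attention(q: &[f32], k: &[f32], v: &[f32], seq_len: usize, dim: usize) -> Vec<f32> {
    let scale = 1.0 / (dim as f32).sqrt();
    let mut out = vec![0.0; seq_len * dim];
    for i in 0..seq_len {
        // The causal mask: position i scores only keys 0..=i; future
        // positions stay at negative infinity and vanish under softmax.
        let mut scores = vec![f32::NEG_INFINITY; seq_len];
        for j in 0..=i {
            let dot: f32 = (0..dim).map(|d| q[i * dim + d] * k[j * dim + d]).sum();
            scores[j] = dot * scale;
        }
        softmax(&mut scores);
        // Relevance selection: the output row is a weighted sum of values.
        for j in 0..=i {
            for d in 0..dim {
                out[i * dim + d] += scores[j] * v[j * dim + d];
            }
        }
    }
    out
}

fn main() {
    // Two tokens, dim 2. Position 0 can only attend to itself, so its
    // output row is exactly the first value row.
    let q = [1.0, 0.0, 0.0, 1.0];
    let v = [1.0, 2.0, 3.0, 4.0];
    let out = causal_attention(&q, &q, &v, 2, 2);
    assert_eq!(&out[0..2], &[1.0, 2.0]);
    println!("{out:?}");
}
```

Because masked scores are set to negative infinity before the softmax, each position mixes only past and present values, which is the autoregressive constraint the `mask` and `generator` modules encode.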
// Module declarations grouped by dependency layer; the grouping is inferred
// from the architecture diagram and table above.

// Layer 1: the numerical substrate
pub mod error;
pub mod tensor;

// Layer 2: modules that depend only on tensor
pub mod embedding;
pub mod mask;
pub mod normalize;
pub mod residual;

// Layer 3: the cognitive core
pub mod attention;
pub mod block;
pub mod feed_forward;
pub mod metrics;
pub mod sample;

// Layer 4: composition — the complete engine
pub mod generator;
pub mod pipeline;
/// Create a seeded or OS-random `StdRng` for use with the cognitive engine.
///
/// Downstream crates (e.g., nexcore-mcp) call this instead of depending on `rand` directly.
// NOTE: the function name and signature below are a reconstruction (only the
// doc comment above is original). `StdRng::from_entropy` is the rand 0.8 API;
// rand 0.9 renames it to `from_os_rng`.
pub fn make_rng(seed: Option<u64>) -> rand::rngs::StdRng {
    use rand::SeedableRng;
    match seed {
        // A fixed seed yields a reproducible stream for tests and benchmarks.
        Some(s) => rand::rngs::StdRng::seed_from_u64(s),
        // Otherwise seed from the operating system's entropy source.
        None => rand::rngs::StdRng::from_entropy(),
    }
}
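The only property a seeded RNG must guarantee downstream is reproducibility: the same seed produces the same stream. As a dependency-free illustration of that contract (independent of the `rand` crate, and not this crate's actual generator), here is the well-known splitmix64 step function:

```rust
// Illustrative only -- not this crate's RNG. splitmix64: a tiny seedable
// PRNG demonstrating the reproducibility contract of a seeded generator.
fn splitmix64(state: &mut u64) -> u64 {
    // The constants below are the published splitmix64 parameters.
    *state = state.wrapping_add(0x9E37_79B9_7F4A_7C15);
    let mut z = *state;
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    z ^ (z >> 31)
}

fn main() {
    // Same seed, same stream: the property seeded sampling relies on.
    let (mut a, mut b) = (42u64, 42u64);
    assert_eq!(splitmix64(&mut a), splitmix64(&mut b));
    assert_eq!(splitmix64(&mut a), splitmix64(&mut b));
    // Different seeds diverge on the first draw (each step is a bijection
    // of the 64-bit state, so distinct states map to distinct outputs).
    let (mut c, mut d) = (1u64, 2u64);
    assert_ne!(splitmix64(&mut c), splitmix64(&mut d));
    println!("deterministic: ok");
}
```

Passing `Some(seed)` to a helper like the one above buys exactly this determinism for tests, while `None` defers to OS entropy for production sampling.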