# ternsig

TernarySignal foundation for CPU-only neural networks.
## The Equation

```text
w = s * m

s in {-1, 0, +1}   (polarity)
m in {0..255}      (magnitude)
```

Two bytes per weight. Integer arithmetic only. No floats. No GPU.
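The encoding can be sketched as a two-byte struct (illustrative field names, not the crate's actual definitions): one signed byte of polarity, one unsigned byte of magnitude, so the effective weight stays in -255..=255 and everything is integer arithmetic.

```rust
// Illustrative TernarySignal layout; field names are assumptions.
#[derive(Clone, Copy)]
struct TernarySignal {
    polarity: i8,  // -1, 0, or +1
    magnitude: u8, // 0..=255
}

impl TernarySignal {
    /// Effective weight w = s * m, computed in pure integer arithmetic.
    fn effective(self) -> i16 {
        self.polarity as i16 * self.magnitude as i16
    }
}

fn main() {
    let w = TernarySignal { polarity: -1, magnitude: 200 };
    assert_eq!(w.effective(), -200);
    // Both fields are byte-aligned, so the struct really is two bytes.
    assert_eq!(std::mem::size_of::<TernarySignal>(), 2);
}
```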
## What This Changes

Traditional neural networks use 32-bit floats (4 bytes per weight). Ternsig uses TernarySignal (2 bytes per weight):
| Property | Float | TernarySignal |
|---|---|---|
| Size | 4 bytes | 2 bytes |
| Arithmetic | FP multiply-add | Integer multiply-add |
| Hardware | GPU preferred | CPU native |
| Training | Gradient descent | Mastery learning |
## Core Components

### TernarySignal

Effective weight = polarity * magnitude. Range: -255 to +255.
### TensorISA

Hot-reloadable neural network definitions as `.tisa.asm` files:

```asm
.registers
  H0: i32[12]                          ; input activations
  H1: i32[32]                          ; hidden layer
  C0: ternary[32, 12] key="chip.audio.w1"

.program
  load_input     H0
  ternary_matmul H1, C0, H0
  relu           H1, H1
  store_output   H1
  halt
```
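The `ternary_matmul` op above reduces to integer multiply-adds. A sketch of its semantics under the shapes in the example (`C0` is `ternary[32, 12]`, `H0` is `i32[12]`); this is an assumption about the interpreter, not its actual code:

```rust
// Sketch of ternary_matmul: out[i] = sum_j (s[i][j] * m[i][j]) * x[j],
// all in integer arithmetic. Signature is illustrative, not crate API.
fn ternary_matmul(polarity: &[Vec<i8>], magnitude: &[Vec<u8>], input: &[i32]) -> Vec<i32> {
    polarity
        .iter()
        .zip(magnitude)
        .map(|(srow, mrow)| {
            srow.iter()
                .zip(mrow)
                .zip(input)
                .map(|((&s, &m), &x)| s as i32 * m as i32 * x)
                .sum()
        })
        .collect()
}

fn main() {
    // One output neuron over a 2-element input: 1*2*4 + (-1)*3*5 = -7
    let out = ternary_matmul(&[vec![1, -1]], &[vec![2, 3]], &[4, 5]);
    assert_eq!(out, vec![-7]);
}
```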
### Mastery Learning

Pure integer adaptive learning. 23ms to 90% accuracy.

```rust
// Reconstruction sketch: the `ternsig::mastery` path and the type names
// are assumptions; only `init_random_structure`, `new`, and `default`
// survive from the original snippet.
use ternsig::mastery::{MasteryConfig, MasteryState};

let mut weights = init_random_structure();
let mut state = MasteryState::new();
let config = MasteryConfig::default();

// Learning loop
for sample in samples {
    // apply mastery updates to `weights` here
}
```
Key principles:
- Participation threshold: Only top 25% active neurons update
- Sustained pressure: Changes require accumulated evidence
- Weaken before flip: Magnitude depletes before polarity changes
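The weaken-before-flip principle can be sketched as a pure-integer update rule (names and the fixed step size are illustrative assumptions, not the crate's actual update):

```rust
// Weaken-before-flip sketch: pressure against the current polarity first
// drains magnitude; only at zero magnitude does the polarity reset, after
// which fresh pressure sets the new sign.
struct TernarySignal { polarity: i8, magnitude: u8 }

fn apply_pressure(w: &mut TernarySignal, pressure: i8, step: u8) {
    if pressure == 0 { return; }
    if w.polarity == 0 || pressure.signum() == w.polarity.signum() {
        // Pressure agrees with (or sets) the polarity: strengthen.
        w.polarity = pressure.signum();
        w.magnitude = w.magnitude.saturating_add(step);
    } else if w.magnitude > step {
        // Disagreement: weaken the magnitude first.
        w.magnitude -= step;
    } else {
        // Magnitude depleted: only now does the polarity give way.
        w.magnitude = 0;
        w.polarity = 0;
    }
}

fn main() {
    let mut w = TernarySignal { polarity: 1, magnitude: 10 };
    for _ in 0..3 { apply_pressure(&mut w, -1, 4); } // 10 -> 6 -> 2 -> depleted
    assert_eq!((w.polarity, w.magnitude), (0, 0));
    apply_pressure(&mut w, -1, 4); // only now does the sign flip
    assert_eq!((w.polarity, w.magnitude), (-1, 4));
}
```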
### Thermogram Integration

Persistent weight storage with a temperature lifecycle:
- HOT: Actively learning, high plasticity
- WARM: Recently learned, moderate plasticity
- COOL: Stable, low plasticity
- COLD: Long-term memory, frozen
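One way to read the lifecycle: temperature gates how much of a proposed update a stored weight accepts. A hypothetical mapping (the percentages are illustrative, not documented crate behavior):

```rust
// Hypothetical temperature -> plasticity mapping; values are assumptions.
#[derive(Clone, Copy)]
enum Temperature { Hot, Warm, Cool, Cold }

/// Percent of a proposed update a weight at this temperature accepts.
fn plasticity_pct(t: Temperature) -> u8 {
    match t {
        Temperature::Hot => 100, // actively learning
        Temperature::Warm => 50, // recently learned
        Temperature::Cool => 10, // stable
        Temperature::Cold => 0,  // frozen long-term memory: never updates
    }
}

fn main() {
    assert_eq!(plasticity_pct(Temperature::Hot), 100);
    assert_eq!(plasticity_pct(Temperature::Cold), 0);
}
```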
## Usage

```toml
[dependencies]
ternsig = "0.2"
```

```rust
// Module paths, type names, and the file name are assumptions; only
// `assemble`, `from_program`, and `forward` come from the original snippet.
use ternsig::{assemble, Interpreter};

// Load chip definition
let program = assemble(include_str!("chip.tisa.asm"))?;
let mut interpreter = Interpreter::from_program(program);

// Forward pass
let input: Vec<i32> = /* your input */;
let output = interpreter.forward(&input)?;
```
## License

MIT OR Apache-2.0