# Anomaly Grid
```
 █████╗ ███╗   ██╗ ██████╗ ███╗   ███╗ █████╗ ██╗     ██╗   ██╗
██╔══██╗████╗  ██║██╔═══██╗████╗ ████║██╔══██╗██║     ╚██╗ ██╔╝
███████║██╔██╗ ██║██║   ██║██╔████╔██║███████║██║      ╚████╔╝
██╔══██║██║╚██╗██║██║   ██║██║╚██╔╝██║██╔══██║██║       ╚██╔╝
██║  ██║██║ ╚████║╚██████╔╝██║ ╚═╝ ██║██║  ██║███████╗   ██║
╚═╝  ╚═╝╚═╝  ╚═══╝ ╚═════╝ ╚═╝     ╚═╝╚═╝  ╚═╝╚══════╝   ╚═╝

[ANOMALY-GRID v0.4.2] - SEQUENCE ANOMALY DETECTION ENGINE
```
A Rust library implementing variable-order Markov chains for sequence anomaly detection in finite alphabets.
For a Python wrapper of this library, see my other repository: https://github.com/Abimael10/anomaly-grid-py
## Quick Start
Add the crate to your `Cargo.toml`:

```toml
[dependencies]
anomaly-grid = "0.4.2"
```

Bring the crate's items into scope with `use anomaly_grid::*;`, then train and score as sketched below.
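A minimal sketch of the flow: train on a repeating A→B→C pattern, then score a test sequence containing an X/Y detour. `AnomalyDetector::new`, `train`, `detect`, and the result fields used here are assumed names for illustration, not verified against the crate's API:

```rust
use anomaly_grid::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Training data: a clean, repeating A -> B -> C pattern.
    let training: Vec<String> = ["A", "B", "C"]
        .iter()
        .cycle()
        .take(300)
        .map(|s| s.to_string())
        .collect();

    // Test data: the same pattern with an X/Y detour injected.
    let test: Vec<String> = ["A", "B", "C", "X", "Y", "C", "A", "B", "C"]
        .iter()
        .map(|s| s.to_string())
        .collect();

    // NOTE: `new`, `train`, `detect`, `window`, and `strength` are assumed
    // names for illustration; check the crate docs for the real API.
    let mut detector = AnomalyDetector::new(3)?; // max order 3 => 4-symbol windows
    detector.train(&training)?;

    for anomaly in detector.detect(&test, 0.2)? {
        println!("{:?} (strength {:.2})", anomaly.window, anomaly.strength);
    }
    Ok(())
}
```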
Expected output with the above data:

- Two anomaly windows flagged: `["B","C","X","Y"]` (strength ~0.27) and `["C","X","Y","C"]` (strength ~0.39).
- No other windows reported; the rest of the test sequence matches the trained ABC pattern.
## What This Library Does
- Variable-order Markov modeling for finite alphabets (order 1..max_order with fallback).
- On-the-fly scoring: likelihood + information score, combined into an anomaly strength (a weighted combination is sketched after this list).
- Memory-conscious storage: string interning, trie-based contexts, SmallVec for small counts.
- Batch processing: detect anomalies across many sequences in parallel (Rayon).
- Tunable config: smoothing, weights, memory limit, and optimization helpers for pruning.
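How the two scores combine is not spelled out here; a minimal sketch of one plausible weighted-sum reading, assuming the weights are the ones set via `with_weights` (not confirmed against the crate):

```rust
// Assumed combination: a weighted sum of the two per-window scores, with the
// weights configured via `with_weights`. Illustrative only; the real formula
// may normalize or transform the scores differently.
fn anomaly_strength(likelihood: f64, information: f64, w_lik: f64, w_info: f64) -> f64 {
    w_lik * likelihood + w_info * information
}
```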
## Configuration
```rust
// The builder methods are the ones documented here; `AnomalyConfig`, the
// argument values, and the `with_weights` signature are illustrative.
let config = AnomalyConfig::default()
    .with_max_order(4)?                     // Higher order = more memory, better accuracy
    .with_smoothing_alpha(0.05)?            // Lower = more sensitive to training data
    .with_weights(0.7, 0.3)?                // Likelihood vs information weight
    .with_memory_limit(100 * 1024 * 1024)?; // 100MB memory limit

let detector = AnomalyDetector::with_config(config)?;
```
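A note on the trade-off: raising `max_order` grows the context trie, in the worst case on the order of |alphabet|^order contexts, which is exactly what the memory limit and pruning helpers are meant to bound.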
## Use Cases (with context)
Markov chains are not state of the art for anomaly detection. Modern systems favor deep sequence, probabilistic, and graph-based models. This library remains useful when you need:
- Discrete, low-dimensional states with short contexts.
- Predictable workflows where interpretability matters.
- Ultra-low-latency or resource-constrained inference.
### Practical fits
- Network/Protocol flows: Finite state machines, handshake/order violations (see the sketch after this list).
- Small structured workflows: Ops runbooks, CLI/session macros, simple ETL steps.
- Device/state telemetry: Low-cardinality IoT states, embedded controllers.
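For instance, a protocol stream can be reduced to a small, finite alphabet before detection. A minimal sketch; the event names and the mapping are illustrative, not tied to any protocol spec:

```rust
// Map raw protocol events onto a small, finite alphabet of state symbols.
fn to_symbol(event: &str) -> Option<String> {
    match event {
        "SYN" | "SYN_ACK" | "ACK" => Some(event.to_string()), // handshake states
        "FIN" | "RST" => Some(event.to_string()),             // teardown states
        _ => None, // drop payload/noise events to keep the alphabet small
    }
}

fn main() {
    let raw = ["SYN", "SYN_ACK", "ACK", "DATA", "FIN"];
    let sequence: Vec<String> = raw.iter().filter_map(|e| to_symbol(e)).collect();
    // `sequence` is now a finite-alphabet stream suitable for training/scoring.
    println!("{:?}", sequence); // ["SYN", "SYN_ACK", "ACK", "FIN"]
}
```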
### Not a fit without heavy preprocessing
- High-dimensional logs/sensors or complex user behavior with long-range dependencies.
- Large alphabets or non-stationary patterns.
- Continuous/unstructured data (images, audio, raw text) without discretization (a simple binning sketch follows this list).
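If you do want to feed a continuous signal into a finite-alphabet model, equal-width binning is the simplest discretization. A sketch; the bin count and value range are arbitrary choices, not library parameters:

```rust
// Equal-width binning: turn a continuous value into one of `bins` symbols.
fn discretize(x: f64, min: f64, max: f64, bins: usize) -> String {
    let clamped = x.clamp(min, max);
    let idx = (((clamped - min) / (max - min)) * bins as f64) as usize;
    format!("B{}", idx.min(bins - 1)) // clamp the top edge into the last bin
}

fn main() {
    let signal = [0.1, 0.4, 0.35, 0.9, 0.42];
    let symbols: Vec<String> = signal.iter().map(|&x| discretize(x, 0.0, 1.0, 4)).collect();
    println!("{:?}", symbols); // ["B0", "B1", "B1", "B3", "B1"]
}
```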
### Current state-of-the-art alternatives
- Deep sequence models: LSTM/GRU, Transformers (TFT, Anomaly Transformer, TS foundation models), autoencoders/VAEs.
- Probabilistic deep models: Normalizing flows, diffusion, energy-based models.
- Graph/representation learning: GNNs, dynamic graph embeddings, contrastive methods.
- Classical statistical baselines: HMMs (strong Markovian baseline), GMMs/Bayesian changepoint, ARIMA/VAR/Kalman for continuous signals.
- TS foundation models (2023–2025): TimeGPT, Chronos, MOIRAI, DeepTime.
## Testing
```bash
# Run all tests
cargo test

# Run specific test suites
cargo test --test <suite_name>

# Run examples
cargo run --example <example_name>
```
## Documentation
- Complete Documentation - Comprehensive guides and API reference
- API Reference - Online API documentation
- Examples
- Changelog - Version history and changes
## License
MIT License - see LICENSE file.