# OxiGDAL ML Foundation

Deep learning training infrastructure and model architectures for geospatial machine learning.
## Features
- Training Infrastructure: Complete training loops, optimizers (SGD, Adam, AdamW), loss functions (MSE, Cross-Entropy, Dice, Focal), learning rate schedulers, early stopping, and checkpointing
- Model Architectures: UNet for segmentation, ResNet for classification, with flexible configurations
- Transfer Learning: Pre-trained model loading, layer freezing strategies, fine-tuning procedures
- Data Augmentation: Geometric (flip, rotate, crop), color (brightness, contrast, gamma), noise, and geospatial-specific augmentations
- Evaluation Metrics: Accuracy, precision, recall, F1-score, IoU, confusion matrix
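As an illustration of the evaluation metrics above, Intersection-over-Union (IoU) for a single class can be sketched as follows. This is a minimal standalone example of the formula, not the crate's metrics API:

```rust
// Illustrative IoU computation for one class from predicted and
// ground-truth binary masks: |intersection| / |union|.
fn iou(pred: &[bool], truth: &[bool]) -> f64 {
    let intersection = pred.iter().zip(truth).filter(|(p, t)| **p && **t).count();
    let union = pred.iter().zip(truth).filter(|(p, t)| **p || **t).count();
    // Convention: two empty masks agree perfectly.
    if union == 0 { 1.0 } else { intersection as f64 / union as f64 }
}

fn main() {
    let pred = [true, true, false, false];
    let truth = [true, false, true, false];
    // intersection = 1 pixel, union = 3 pixels
    println!("{:.3}", iou(&pred, &truth)); // prints 0.333
}
```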
## COOLJAPAN Compliance
- ✅ Pure Rust implementation (PyTorch bindings are feature-gated)
- ✅ No `unwrap()` calls in production code
- ✅ All files under 2000 lines
- ✅ Uses workspace dependencies
- ✅ Uses SciRS2-Core for numerical operations
## Usage

```rust
// NOTE: import paths and type names below are illustrative,
// reconstructed from the module layout described under Architecture;
// consult the crate documentation for the exact API.
use oxigdal_ml::prelude::*;

// Create a UNet model for segmentation
let model = UNet::standard()?;

// Configure training
let config = TrainingConfig::default();

// Set up the augmentation pipeline
let mut pipeline = AugmentationPipeline::new();
pipeline.add(RandomFlip::horizontal());

// Create optimizer and loss
let optimizer = Adam::new(1e-3)?;
let loss_fn = CrossEntropyLoss::new();
```
## Cargo Features

- `std` (default): Standard library support
- `pytorch`: PyTorch backend for training (requires libtorch)
- `onnx`: ONNX export support
- `cuda`: GPU acceleration (requires CUDA)
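Optional backends are enabled through Cargo features in the usual way. A hypothetical `Cargo.toml` entry (the crate name here is an assumption; check the published package name):

```toml
[dependencies]
# "oxigdal-ml" is illustrative — substitute the actual crate name.
oxigdal-ml = { version = "0.1", features = ["pytorch", "onnx"] }
```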
## Architecture

### Training Module (`training/`)

- `mod.rs`: Training configuration and history
- `training_loop.rs`: Core training loop implementation
- `losses.rs`: Loss functions (MSE, Cross-Entropy, Dice, Focal, Combined)
- `optimizers.rs`: Optimization algorithms (SGD, Adam, AdamW)
- `schedulers.rs`: Learning rate schedulers (Step, Exponential, Cosine, OneCycle)
- `early_stopping.rs`: Early stopping logic
- `checkpointing.rs`: Model checkpoint management
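The cosine scheduler listed above follows the standard cosine-annealing formula. A minimal sketch of the idea (the function name and signature are illustrative, not the crate's API):

```rust
use std::f64::consts::PI;

// Cosine annealing: decay the learning rate from `max_lr` to `min_lr`
// over `total_steps`, following 0.5 * (1 + cos(pi * t / T)).
fn cosine_lr(step: usize, total_steps: usize, max_lr: f64, min_lr: f64) -> f64 {
    let t = step as f64 / total_steps as f64;
    min_lr + 0.5 * (max_lr - min_lr) * (1.0 + (PI * t).cos())
}

fn main() {
    println!("{:.4}", cosine_lr(0, 100, 0.1, 0.0));   // max_lr at the start
    println!("{:.4}", cosine_lr(100, 100, 0.1, 0.0)); // min_lr at the end
}
```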
### Models Module (`models/`)

- `unet.rs`: UNet architecture for segmentation
- `resnet.rs`: ResNet variants (18, 34, 50, 101, 152)
- `layers.rs`: Common layers (Conv2D, BatchNorm, Pooling, Residual blocks)
### Transfer Learning Module (`transfer/`)

- `pretrained.rs`: Pre-trained model loading
- `freezing.rs`: Layer freezing strategies
- `finetuning.rs`: Fine-tuning procedures
- `feature_extraction.rs`: Feature extraction utilities
### Augmentation Module (`augmentation/`)

- `geometric.rs`: Flip, rotate, crop transformations
- `color.rs`: Brightness, contrast, gamma adjustments
- `noise.rs`: Gaussian noise, channel dropout
- `geospatial.rs`: Band selection, spectral normalization
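The flip transform in `geometric.rs` is conceptually simple for a single-band, row-major raster. A minimal sketch (names are illustrative, not the crate's real API):

```rust
// Horizontally flip a single-band raster stored row-major:
// reverse each row of `width` pixels in place.
fn flip_horizontal(pixels: &mut [u8], width: usize) {
    for row in pixels.chunks_mut(width) {
        row.reverse();
    }
}

fn main() {
    let mut img = vec![1u8, 2, 3, 4, 5, 6]; // 2 rows x 3 cols
    flip_horizontal(&mut img, 3);
    assert_eq!(img, vec![3, 2, 1, 6, 5, 4]);
}
```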
## Examples

### Training Configuration

```rust
// Field names are illustrative; see TrainingConfig in training/mod.rs
// for the actual set of options.
let config = TrainingConfig {
    epochs: 100,
    batch_size: 16,
    learning_rate: 1e-3,
    ..Default::default()
};
```
### Model Creation

```rust
// Constructor names follow the variants listed under Architecture;
// exact paths may differ in the published API.

// UNet variants
let small_unet = UNet::small()?;
let standard_unet = UNet::standard()?;
let deep_unet = UNet::deep()?;

// ResNet variants
let resnet18 = ResNet::resnet18()?;
let resnet50 = ResNet::resnet50()?;
```
### Data Augmentation Pipeline

```rust
// Transform names are illustrative; the builder-style `add` calls
// and final `apply` follow the original example.
let mut pipeline = AugmentationPipeline::new();
pipeline
    .add(RandomFlip::horizontal())
    .add(RandomRotation::new(15.0))
    .add(BrightnessAdjust::new(0.2))
    .add(GaussianNoise::new(0.01));

let augmented = pipeline.apply(&image)?;
```
### Transfer Learning

```rust
// Type names are illustrative; `fine_tune_top` and the layer-freezing
// types come from the transfer/ module described above.
let config = FreezingStrategy::fine_tune_top(2);
let freezer = LayerFreezer::new(config)?;

// Check which layers are trainable
for idx in 0..total_layers {
    println!("layer {idx}: trainable = {}", freezer.is_trainable(idx));
}
```
## Testing

Run tests with `cargo test`; run benchmarks with `cargo bench`.
## License
Apache-2.0
## Authors
COOLJAPAN OU (Team Kitasan)