# DeOldify

## About
DeOldify is an EDLA project.
The purpose of edla.org is to promote the state of the art in various domains.
A Rust tool that colorizes grayscale and black-and-white images using the DeOldify neural network via ONNX Runtime inference. Available as both a CLI and a graphical interface.
The tool preserves the original image sharpness (luminance channel) and only adds color information from the model, producing natural-looking results.
## Requirements
### System
- Linux x86_64 (the pre-built ONNX Runtime binary targets Linux)
- glibc 2.38+ (Debian 13, Ubuntu 24.04, Fedora 39, or newer)
- Rust 1.85+ (edition 2024)
### Hardware
#### CPU (default)
Inference runs on CPU using all available cores via ONNX Runtime's multi-threaded executor.
| Component | Requirement |
|---|---|
| RAM for model loading | ~500 MB (243 MB model + ORT session overhead) |
| RAM for inference (256x256 fixed) | ~50 MB (input/output tensors + intermediate buffers) |
| RAM for pre/postprocessing | Depends on input image size (see table below) |
| Inference time (256x256, 4 cores) | ~4 s |
| Inference time (256x256, 8 cores) | ~2 s |
The model always operates at a fixed 256x256 resolution internally. Input images of any size are resized to 256x256 before inference, and the colorized result is then scaled back to the original dimensions. This means inference cost is constant regardless of input image size.
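The fixed-resolution pipeline can be sketched as follows. Nearest-neighbor resampling keeps the example dependency-free; the actual tool presumably uses a higher-quality filter, and `resize_nearest` is a name invented for this illustration.

```rust
// Sketch of the fixed-resolution pipeline: any input is resized to 256x256
// for the network, and the result is scaled back up to the original size.
fn resize_nearest(src: &[u8], sw: usize, sh: usize, dw: usize, dh: usize) -> Vec<u8> {
    let mut dst = vec![0u8; dw * dh];
    for y in 0..dh {
        for x in 0..dw {
            // Map each destination pixel back to its nearest source pixel.
            let sx = x * sw / dw;
            let sy = y * sh / dh;
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
    dst
}

fn main() {
    // 1024x768 grayscale input -> 256x256 model input -> back to 1024x768.
    let input = vec![127u8; 1024 * 768];
    let model_in = resize_nearest(&input, 1024, 768, 256, 256);
    assert_eq!(model_in.len(), 256 * 256); // inference always sees 256x256
    let restored = resize_nearest(&model_in, 256, 256, 1024, 768);
    assert_eq!(restored.len(), 1024 * 768);
}
```

Whatever the input size, the tensor handed to the network is always 256x256, which is why the inference timings in the table above do not depend on the image.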
Pre/postprocessing RAM usage scales with input image size:
| Input image size | Approximate additional RAM |
|---|---|
| 1 MP (e.g. 1000x1000) | ~30 MB |
| 4 MP (e.g. 2000x2000) | ~120 MB |
| 12 MP (e.g. 4000x3000) | ~350 MB |
| 24 MP (e.g. 6000x4000) | ~700 MB |
| 50 MP (e.g. 8192x6144) | ~1.5 GB |
This RAM is used for loading the original image, extracting LAB channels, resizing, blurring, and merging the final result. The four long-lived buffers (original RGB, LAB luminance, colorized RGB, result RGB) account for about 15 bytes per pixel; the table's totals work out to roughly 30 bytes per pixel, likely because transient buffers created during resizing and blurring are also alive at peak.
Total RAM recommendation: 1 GB minimum, 2 GB for images up to 12 MP, 4 GB for very large images.
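The table's figures work out to roughly 30 bytes per pixel (e.g. ~30 MB for 1 MP), so a rough estimate for an arbitrary image size can be computed directly. The function name and the 30 bytes/pixel constant are taken from that observation, not from the tool's internals:

```rust
// Rough pre/postprocessing RAM estimate, assuming the ~30 bytes per pixel
// implied by the table above.
fn estimate_preprocessing_ram_mb(width: u64, height: u64) -> u64 {
    const BYTES_PER_PIXEL: u64 = 30;
    width * height * BYTES_PER_PIXEL / (1024 * 1024)
}

fn main() {
    // 12 MP image (4000x3000) -> roughly 343 MB, in line with the ~350 MB row.
    println!("{} MB", estimate_preprocessing_ram_mb(4000, 3000));
}
```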
#### GPU (not yet supported)
GPU acceleration via CUDA execution provider is planned but not yet implemented. When available, it will require:
- NVIDIA GPU with CUDA support
- CUDA 11.8+ and cuDNN 8.x+
- VRAM: ~500 MB for model + ~10 MB for 256x256 inference tensors. Since inference is at fixed 256x256, VRAM usage is constant regardless of input image size. Pre/postprocessing always runs on CPU/RAM.
- Any GPU with 1 GB+ VRAM will be sufficient
## Installation
### 1. Download the ONNX model
This downloads the pre-converted DeOldify Artistic model (~243 MB).
### 2. Build from source
## Usage
### GUI (graphical interface)
The GUI opens a 1000x700 window with a dark theme. The workflow is:
- Select Model — click the button in the top bar to pick your `.onnx` model file
- Open Image — choose an input image (JPEG, PNG, BMP, TIFF, or WebP)
- Colorize — runs inference and displays the result side-by-side with the original
- Save Result — export the colorized image as PNG or JPEG
The status bar at the bottom shows progress and any errors.
### CLI (command-line)
#### Options
| Flag | Description |
|---|---|
| `-i, --input` | Path to the input image (any format: JPEG, PNG, BMP, TIFF, WebP, ...) |
| `-o, --output` | Path to save the colorized output image (format inferred from extension) |
| `-m, --model` | Path to the ONNX model file |
#### Examples
# Colorize a single photo
# Use PNG for lossless output
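Concretely, the two examples above might look like this. The binary name `deoldify` and the model path `models/deoldify_artistic.onnx` are assumptions for illustration; the real binary name comes from the crate's `Cargo.toml`:

```shell
# Colorize a single photo (binary name and model path are assumptions)
./target/release/deoldify -i old_photo.jpg -o colorized.jpg -m models/deoldify_artistic.onnx

# Use PNG for lossless output
./target/release/deoldify -i old_photo.jpg -o colorized.png -m models/deoldify_artistic.onnx
```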
#### Output
Progress is printed to stderr:
```text
Colorizing old_photo.jpg (model input: 256x256)
Loading and preprocessing image...
Preparing input tensor...
Running inference...
Model loaded in 0.5s (4 threads)
Starting inference...
Inference completed in 4.4s
Postprocessing...
Colorized image saved to colorized.jpg
```
## How it works
- Load the input image and extract its L (luminance) channel in CIE L*a*b* color space
- Convert to grayscale 3-channel RGB and resize to 256x256
- Run inference through the DeOldify ONNX model, which predicts RGB color
- Postprocess: resize the model output back to original dimensions, apply a light blur to smooth color artifacts
- Merge the original L channel (sharpness/detail) with the predicted a* and b* channels (color) in LAB space
- Save the final colorized image
This LAB-space merge is key: the original image's detail and contrast are fully preserved, while only the chrominance (color) comes from the neural network.
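The merge step can be sketched as below: the L* value of each pixel comes from the original image, the a*/b* values from the model output. The sRGB ↔ CIE L*a*b* conversions use standard D65 constants; this is an illustration of the technique, not the tool's actual code, and all function names are invented here.

```rust
// Minimal sketch of the LAB-space merge: luminance from the original pixel,
// chrominance from the colorized pixel.
fn srgb_to_linear(c: f64) -> f64 {
    if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
}

fn linear_to_srgb(c: f64) -> f64 {
    if c <= 0.0031308 { 12.92 * c } else { 1.055 * c.powf(1.0 / 2.4) - 0.055 }
}

const WHITE: [f64; 3] = [0.95047, 1.0, 1.08883]; // D65 reference white

fn f(t: f64) -> f64 {
    const D: f64 = 6.0 / 29.0;
    if t > D * D * D { t.cbrt() } else { t / (3.0 * D * D) + 4.0 / 29.0 }
}

fn f_inv(t: f64) -> f64 {
    const D: f64 = 6.0 / 29.0;
    if t > D { t * t * t } else { 3.0 * D * D * (t - 4.0 / 29.0) }
}

fn rgb_to_lab(rgb: [u8; 3]) -> [f64; 3] {
    let [r, g, b] = rgb.map(|c| srgb_to_linear(c as f64 / 255.0));
    // Linear sRGB -> XYZ (D65)
    let x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    let y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    let z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
    let (fx, fy, fz) = (f(x / WHITE[0]), f(y / WHITE[1]), f(z / WHITE[2]));
    [116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)]
}

fn lab_to_rgb(lab: [f64; 3]) -> [u8; 3] {
    let fy = (lab[0] + 16.0) / 116.0;
    let (fx, fz) = (fy + lab[1] / 500.0, fy - lab[2] / 200.0);
    let (x, y, z) = (WHITE[0] * f_inv(fx), WHITE[1] * f_inv(fy), WHITE[2] * f_inv(fz));
    // XYZ -> linear sRGB (D65)
    let r = 3.2406 * x - 1.5372 * y - 0.4986 * z;
    let g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
    let b = 0.0557 * x - 0.2040 * y + 1.0570 * z;
    [r, g, b].map(|c| (linear_to_srgb(c.clamp(0.0, 1.0)) * 255.0).round() as u8)
}

/// Keep the original pixel's L*, take a*/b* from the model output.
fn merge_luminance(original: [u8; 3], colorized: [u8; 3]) -> [u8; 3] {
    let l = rgb_to_lab(original)[0];
    let [_, a, b] = rgb_to_lab(colorized);
    lab_to_rgb([l, a, b])
}

fn main() {
    // A mid-gray original merged with a reddish model prediction keeps its
    // brightness but gains the red chrominance.
    let merged = merge_luminance([128, 128, 128], [200, 80, 60]);
    println!("{:?}", merged);
}
```

Because L* is untouched, the merged image has exactly the original's sharpness and contrast; any blur in the model output only affects the color planes, where the eye is far less sensitive.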
## Converting your own model
If you have DeOldify PyTorch weights (.pth), you can convert them to ONNX:
# Place .pth weights in ./models/
The converter bakes in ImageNet normalization so the Rust tool can pass raw 0-255 pixel values directly.
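"Baking in" normalization means the exported graph itself performs the per-channel scaling, so callers pass raw 0-255 values. Assuming the standard ImageNet statistics (mean 0.485/0.456/0.406, std 0.229/0.224/0.225 — an assumption, since the README does not state them), the computation folded into the graph is equivalent to:

```rust
// Per-channel normalization folded into the exported ONNX graph (assumed to
// use the standard ImageNet statistics): (pixel / 255 - mean) / std.
fn imagenet_normalize(raw: [f32; 3]) -> [f32; 3] {
    const MEAN: [f32; 3] = [0.485, 0.456, 0.406];
    const STD: [f32; 3] = [0.229, 0.224, 0.225];
    [0, 1, 2].map(|c| (raw[c] / 255.0 - MEAN[c]) / STD[c])
}

fn main() {
    // A mid-gray pixel (128, 128, 128) after normalization:
    println!("{:?}", imagenet_normalize([128.0, 128.0, 128.0]));
}
```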
## Limitations
- Fixed 256x256 internal resolution (the pre-converted model does not support dynamic sizes)
- CPU-only inference (GPU support planned)
- Color quality depends on the model; works best on natural scenes and portraits
- Very large images need proportionally more RAM for pre/postprocessing (see table above)
## License
DeOldify model weights are subject to their original license. See the DeOldify repository for details.