Docker Image Pusher
A memory-optimized Docker image transfer tool designed to handle large Docker images without excessive memory usage. This tool addresses the common problem of memory exhaustion when pulling or pushing multi-gigabyte Docker images.
Problem Statement
Traditional Docker image tools often load entire layers into memory, which can cause:
- Memory exhaustion with large images (>1GB)
- System instability when processing multiple large layers
- Failed transfers due to insufficient RAM
- Poor performance on resource-constrained systems
Solution
This tool implements streaming-based layer processing using the OCI client library:
- **Streaming Downloads**: Layers are streamed directly to disk without being loaded into memory
- **Sequential Processing**: Processes one layer at a time to minimize memory footprint
- **Chunked Uploads**: Large layers (>100MB) are read in 50MB chunks during upload
- **Local Caching**: Efficient caching system for faster subsequent operations
- **Progress Monitoring**: Real-time feedback on transfer progress and layer sizes
What's New in 0.5.2
- **Push workflow orchestrator** (new): the `PushWorkflow` struct now coordinates input analysis, target inference, credential lookup, cache hydration, and the layer/config upload sequence. Each stage is its own method, making the CLI easier to extend while keeping the streaming guarantees the project is known for.
- **Smarter destination & credential inference**: history- and tar-metadata-based target suggestions now live inside the workflow. The confirmation prompt remembers previously accepted registries, and credential lookup cleanly falls back to stored logins before asking for overrides.
- **Large-layer telemetry**: chunked uploads for 1GB+ layers emit richer progress, ETA, and throughput stats. We only keep a single chunk in memory and back off between medium-sized layers to stay friendly to registries with aggressive rate limits.
- **Tar importer refactor**: a dedicated `TarImporter` groups manifest parsing, layer extraction, digest calculation, and cache persistence. Extraction progress for oversized layers mirrors the push progress bars so you can see streaming speeds end-to-end.
- **Vendor cleanup**: removed the old vendored OCI client copy and its tests; the workspace now relies solely on the published crates, which simplifies audits and shrinks the source tree.
Prerequisites
- Rust: Version 1.70 or later
- Network Access: To source and target registries
- Disk Space: Sufficient space for caching large images
Installation
Download Pre-built Binaries (Recommended)
Download the latest compiled binaries from GitHub Releases:
Available Platforms:
- `docker-image-pusher-linux-x86_64` - Linux 64-bit
- `docker-image-pusher-macos-x86_64` - macOS Intel
- `docker-image-pusher-macos-aarch64` - macOS Apple Silicon (M1/M2)
- `docker-image-pusher-windows-x86_64.exe` - Windows 64-bit
Installation Steps:
- Visit the Releases page
- Download the binary for your platform from the latest release
- Make it executable and add to PATH:
```bash
# Linux/macOS
chmod +x docker-image-pusher-linux-x86_64   # or the macOS binary for your platform
sudo mv docker-image-pusher-linux-x86_64 /usr/local/bin/docker-image-pusher

# Windows
# Move docker-image-pusher-windows-x86_64.exe to a directory in your PATH
# Rename to docker-image-pusher.exe if desired
```
Install from Crates.io
Install directly using Cargo from the official Rust package registry:
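A one-line sketch, assuming the package is published on crates.io under the same name as the binary, `docker-image-pusher`:

```bash
# Install the latest published release from crates.io
cargo install docker-image-pusher
```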
This will compile and install the latest published version from crates.io.
From Source
For development or customization:
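A typical build, assuming you have already cloned the repository into a local directory:

```bash
# From the repository root, build an optimized release binary
cargo build --release
```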
The compiled binary will be available at target/release/docker-image-pusher (or .exe on Windows)
Usage
Quick Start (two commands)
1. **Login (once per registry)** - credentials are saved under `.cache/credentials.json` for reuse.
2. **Push a docker save tar directly** (example commands for both steps are shown right after this list)
   - Create the tar with `docker save nginx:latest -o nginx.tar` (or any image you like).
   - During `push`, the tool automatically combines the RepoTag inside the tar with the registry you just logged into (or the last five registries you pushed to) and prints something like `Target image resolved as: registry.example.com/tools/nginx:latest` before uploading. Pass `--registry other.example.com` if you need to override the destination host.
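A minimal end-to-end sketch of those two steps; the registry host, account name, and password are placeholders:

```bash
# 1) Save credentials for the target registry (stored in .cache/credentials.json)
docker-image-pusher login registry.example.com --username bot --password 's3cret'

# 2) Create a docker save tar and push it; the destination is inferred from the
#    RepoTag in the tar plus the registry you just logged into
docker save nginx:latest -o nginx.tar
docker-image-pusher push ./nginx.tar
```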
Need to cache an image first? Run `pull` or `import` (see the table below) and then call `push <image>`; the flow is identical once the image is in `.cache/`.
Command Reference
| Command | When to use | Key flags |
|---|---|---|
| `pull <image>` | Cache an image from any registry | (none) |
| `import <tar> <name>` | Convert `docker save` output into the cache | (none) |
| `push <input>` | Upload a cached image or tar; `<input>` can be `nginx:latest` or `./file.tar` | `-t` target override, `--registry` host override, `--username`/`--password` credential override |
| `login <registry>` | Save credentials for future pushes | `--username`, `--password` |
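For illustration, typical invocations of the caching commands (image names and tar path are placeholders):

```bash
docker-image-pusher pull nginx:latest                 # cache an image from a source registry
docker-image-pusher import ./nginx.tar nginx:latest   # convert a docker save tar into the cache
docker-image-pusher push nginx:latest                 # upload the cached image to the target registry
```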
The push command now handles most of the bookkeeping automatically:
- infers a sensible destination from tar metadata, the last five targets, or stored credentials
- prompts once when switching registries (or auto-confirms if you accepted it before)
- imports `docker save` archives on the fly before uploading
- reuses saved logins unless you pass explicit `--username`/`--password`
Tips
- Need a different account temporarily? Pass `--username`/`--password` (or use env vars such as `DOCKER_USERNAME`) and they override stored credentials for that run only; see the example after these tips.
- Prefer scripting? Keep everything declarative: `login` once inside CI, then run `pull`, `push`, done.
- Unsure what target was used last time? Run `push` without `-t`; the history-based inference will suggest a sane default and print it before uploading.
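As a sketch of the override tips above; the target repository, username, and password variable are placeholders:

```bash
# One-off push that overrides both the inferred target and the stored credentials;
# env vars such as DOCKER_USERNAME can stand in for the flags
docker-image-pusher push nginx:latest \
  -t registry.example.com/tools/nginx:latest \
  --username ci-bot --password "$REGISTRY_PASSWORD"
```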
Architecture
Memory Optimization Strategy
Traditional Approach (High Memory):

```
[Registry] -> [Full Image in Memory] -> [Local Storage]

  ✗ Memory usage scales with image size
  ✗ Can exceed available RAM with large images
```

Optimized Approach (Low Memory):

```
[Registry] -> [Stream Layer by Layer] -> [Local Storage]

  ✓ Constant memory usage regardless of image size
  ✓ Handles multi-GB images efficiently
```
Cache Structure
Images are cached in the `.cache/` directory with the following structure:

```
.cache/
└── {sanitized_image_name}/
    ├── index.json            # Metadata and layer list
    ├── manifest.json         # OCI image manifest
    ├── config_{digest}.json  # Image configuration
    ├── {layer_digest_1}      # Layer file 1
    ├── {layer_digest_2}      # Layer file 2
    └── ...                   # Additional layers
```
Processing Flow
Pull Operation:
- Fetch Manifest - Download image metadata (~1-5KB)
- Create Cache Structure - Set up local directories
- Stream Layers - Download each layer directly to disk
- Cache Metadata - Store manifest and configuration
- Create Index - Generate lookup metadata
Push Operation:
- Authenticate - Connect to target registry
- Read Cache - Load cached image metadata
- Upload Layers - Transfer layers with size-based optimization
- Upload Config - Transfer image configuration
- Push Manifest - Complete the image transfer
Layer Processing Strategies
| Layer Size | Strategy | Memory Usage | Description |
|---|---|---|---|
| < 100MB | Direct Read | ~Layer Size | Read entire layer into memory |
| > 100MB | Chunked Read | ~50MB | Read in 50MB chunks with delays |
| Any Size | Streaming | ~Buffer Size | Direct stream to/from disk |
Configuration
Client Configuration
The tool uses these default settings:
```
// Platform resolver for multi-arch images
platform_resolver = linux_amd64_resolver

// Authentication methods
- Anonymous
- Basic Auth

// Chunk size for large layers
chunk_size = 50MB

// Rate limiting delays
large_layer_delay = 200ms
chunk_delay = 10ms
```
Customization
You can modify these settings in src/main.rs:
```rust
// Adjust chunk size for very large layers
let chunk_size = 100 * 1024 * 1024; // 100MB chunks

// Modify the size threshold for chunked processing
if layer_size_mb > 50.0 { /* switch to chunked upload */ }

// Adjust rate limiting, e.g. raise the 200ms large-layer delay
tokio::time::sleep(std::time::Duration::from_millis(500)).await; // longer delay
```
Debugging OCI Traffic
Set the following environment variables to inspect the raw OCI flow without recompiling:
| Variable | Effect |
|---|---|
| `OCI_DEBUG=1` | Logs every HTTP request/response handled by the internal OCI client (method, URL, status, scope). |
| `OCI_DEBUG_UPLOAD=1` | Adds detailed tracing for blob uploads (upload session URLs, redirects, finalization). Inherits `OCI_DEBUG` when set. |
These logs run through println!, so they appear directly in the CLI output and can be piped to files for troubleshooting.
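For example, a one-liner that enables both flags for a single push and keeps a copy of the output for later inspection (the image name and log file are placeholders):

```bash
# Trace every OCI request plus blob-upload details; println! output lands on stdout
OCI_DEBUG=1 OCI_DEBUG_UPLOAD=1 docker-image-pusher push nginx:latest | tee oci-debug.log
```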
Performance Comparison
Memory Usage (Processing 5GB Image)
| Method | Peak Memory | Notes |
|---|---|---|
| Traditional Docker | ~5.2GB | Loads layers into memory |
| This Tool | ~50MB | Streams with chunked processing |
Transfer Speed
- Network bound: Performance limited by network speed
- Consistent memory: No memory-related slowdowns
- Parallel-safe: Can run multiple instances without memory conflicts
Troubleshooting
Common Issues
"Authentication failed"
Solution: Verify username/password and registry permissions
"Cache not found"
Solution: Run pull command first to cache the image
"Failed to create cache directory"
Solution: Check disk space and write permissions
Memory Issues (Still occurring)
If you're still experiencing memory issues:
- Check chunk size: Reduce chunk size in code
- Monitor disk space: Ensure sufficient space for caching
- Close other applications: Free up system memory
- Use sequential processing: Avoid concurrent operations
Debug Mode
Add debug logging by setting an environment variable:

```bash
RUST_LOG=debug docker-image-pusher pull nginx:latest
```
Contributing
Development Setup
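A minimal sketch of the usual loop, assuming a local checkout of the repository and the Rust 1.70+ toolchain listed under Prerequisites:

```bash
# Build, run the test suite, and try the CLI from the working tree
cargo build
cargo test
cargo run -- --help
```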
Code Structure
- `src/main.rs` - Lean CLI + shared constants (delegates to modules)
- `src/push.rs` - Push/import workflow, target inference, confirmation prompts
- `src/tar_import.rs` - Tar parsing, RepoTag helpers, import pipeline
- `src/cache.rs` - Pull and caching logic with streaming
- `src/state.rs` - Credential storage + push history tracking
- `PusherError` - Custom error type re-exported from `main.rs`
Adding Features
- New authentication methods: Extend `RegistryAuth` usage
- Progress bars: Add progress indication for long transfers
- Compression: Add layer compression/decompression support
- Parallel processing: Implement safe concurrent layer transfers
License
[Add your license information here]
Dependencies
- oci-client: OCI registry client with streaming support
- tokio: Async runtime for concurrent operations
- clap: Command-line argument parsing
- serde_json: JSON serialization for metadata
- thiserror: Structured error handling
Future Enhancements
- Progress bars for long transfers
- Resume interrupted transfers
- Compression optimization
- Multi-registry synchronization
- Garbage collection for cache
- Configuration file support
- Integration with CI/CD pipelines
Happy Docker image transferring!