# Docker Image Pusher
A memory-optimized Docker image transfer tool designed to handle large Docker images without excessive memory usage. This tool addresses the common problem of memory exhaustion when pulling or pushing multi-gigabyte Docker images.
## 🎯 Problem Statement
Traditional Docker image tools often load entire layers into memory, which can cause:
- Memory exhaustion with large images (>1GB)
- System instability when processing multiple large layers
- Failed transfers due to insufficient RAM
- Poor performance on resource-constrained systems
## 🚀 Solution
This tool implements streaming-based layer processing using the OCI client library:
- ✅ Streaming Downloads: Layers are streamed directly to disk without loading into memory
- ✅ Sequential Processing: Processes one layer at a time to minimize memory footprint
- ✅ Chunked Uploads: Layers ≥500MB stream in ~5MB chunks (auto-expands when registries demand larger slices)
- ✅ Local Caching: Efficient caching system for faster subsequent operations
- ✅ Progress Monitoring: Real-time feedback on transfer progress and layer sizes
## 🆕 What's New in 0.5.4

- **Pipeline lives in `oci-core`** – The concurrent extraction/upload queue, blob-existence checks, rate limiting, and telemetry now ship inside the reusable `oci-core::blobs` module. Other projects can embed the exact same uploader without copy/paste.
- **Prefetch-aware chunk uploads** – Large layers read an initial chunk into memory before network I/O begins, giving registries a steady stream immediately and honoring any server-provided chunk size hints mid-flight.
- **Tar importer emits shared `LocalLayer` structs** – `tar_import.rs` now returns the exact structs consumed by `LayerUploadPool`, eliminating adapter code and reducing memory copies during extraction.
- **Cleaner push workflow** – `src/push.rs` delegates scheduling to `LayerUploadPool`, so the CLI only worries about plan setup, manifest publishing, and user prompts. The parallelism cap and chunk sizing still respect the same CLI flags as before.
- **Docs caught up** – This README now documents the pipeline-focused architecture, the new reusable uploader, and the 0.5.4 feature set.
## OCI Core Library
The OCI functionality now lives inside `crates/oci-core`, an MIT-licensed library crate that can be embedded in other tools. It exposes:
- `reference` – a no-dependency reference parser with rich `OciError` signals
- `auth` – helpers for anonymous/basic auth negotiation
- `client` – an async `reqwest` uploader/downloader that understands chunked blobs, real-time telemetry, and registry-provided chunk hints
docker-image-pusher consumes oci-core through a normal Cargo path dependency, mirroring how
Rust itself treats the core crate. This keeps the CLI boundary clean while enabling other
projects to reuse the same stable OCI primitives without pulling in the rest of the binary.
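Since it is an ordinary Cargo dependency, embedding it takes one line in a consumer's manifest. A minimal sketch (the path is illustrative and depends on where the crate sits relative to your project):

```toml
[dependencies]
# point at a checkout of crates/oci-core; adjust the path for your layout
oci-core = { path = "../docker-image-pusher/crates/oci-core" }
```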
## 🛠️ Installation
### Download Pre-built Binaries (Recommended)
Download the latest compiled binaries from GitHub Releases:
**Available Platforms:**

- `docker-image-pusher-linux-x86_64` – Linux 64-bit
- `docker-image-pusher-macos-x86_64` – macOS Intel
- `docker-image-pusher-macos-aarch64` – macOS Apple Silicon (M1/M2)
- `docker-image-pusher-windows-x86_64.exe` – Windows 64-bit
**Installation Steps:**
1. Visit the Releases page
2. Download the binary for your platform from the latest release
3. Make it executable and add it to your PATH:
```bash
# Linux/macOS (adjust the filename for your platform)
chmod +x docker-image-pusher-linux-x86_64
sudo mv docker-image-pusher-linux-x86_64 /usr/local/bin/docker-image-pusher

# Windows
# Move docker-image-pusher-windows-x86_64.exe to a directory in your PATH
# Rename to docker-image-pusher.exe if desired
```
### Install from Crates.io
Install directly using Cargo from the official Rust package registry:
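```bash
# assumes the published crate shares the binary's name
cargo install docker-image-pusher
```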
This will compile and install the latest published version from crates.io.
### From Source
For development or customization:
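```bash
# clone this repository (URL omitted here), then from the checkout:
cargo build --release
```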
The compiled binary will be available at `target/release/docker-image-pusher` (or `docker-image-pusher.exe` on Windows).
## 📖 Usage
### Quick Start (three commands)

1. **Login** (once per registry)

   Credentials are saved under `.docker-image-pusher/credentials.json` and reused automatically.

2. **Save a local image to a tarball**

   - Detects Docker/nerdctl/Podman automatically (or pass `--runtime`).
   - Prompts for image selection if you omit arguments.
   - Produces a sanitized tar such as `./nginx_latest.tar`.

3. **Push the tar archive**

   - The RepoTag embedded in the tar is combined with the most recent registry you authenticated against (unless `--target`/`--registry` overrides it).
   - If the destination image was confirmed previously, we auto-continue after a short pause; otherwise we prompt before uploading.
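Putting the three steps together (the registry host, credentials, and image name below are placeholders):

```bash
# 1. Login once per registry
docker-image-pusher login registry.example.com --username alice --password s3cret

# 2. Save a local image to a tarball (produces ./nginx_latest.tar)
docker-image-pusher save nginx:latest

# 3. Push the tarball; the destination is inferred from the tar and saved login
docker-image-pusher push ./nginx_latest.tar
```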
### Command Reference

| Command | When to use | Key flags |
|---|---|---|
| `save [IMAGE ...]` | Export one or more local images to tar archives | `--runtime`, `--output-dir`, `--force` |
| `push <tar>` | Upload a docker-save tar archive directly to a registry | `-t`/`--target`, `--registry`, `--username`/`--password`, `--blob-chunk` |
| `login <registry>` | Persist credentials for future pushes | `--username`, `--password` |
The push command now handles most of the bookkeeping automatically:

- infers a sensible destination from tar metadata, the last five targets, or stored credentials
- prompts once when switching registries (or auto-confirms if you accepted it before)
- imports `docker save` archives on the fly before uploading
- reuses saved logins unless you pass explicit `--username`/`--password`
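When you want to bypass the inference entirely, every piece can be spelled out (host, repository, and credentials below are placeholders):

```bash
docker-image-pusher push ./nginx_latest.tar \
  -t registry.example.com/team/nginx:latest \
  --username alice --password s3cret
```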
## 🏗️ Architecture
### Memory Optimization Strategy

**Traditional Approach (High Memory):**

```
[Registry] → [Full Image in Memory] → [Local Storage]
                     ↓
❌ Memory usage scales with image size
❌ Can exceed available RAM with large images
```

**Optimized Approach (Low Memory):**

```
[Registry] → [Stream Layer by Layer] → [Local Storage]
                     ↓
✅ Constant memory usage regardless of image size
✅ Handles multi-GB images efficiently
```
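The fixed-window idea behind the diagram is straightforward to express. The sketch below is illustrative only, not the tool's actual code; the buffer size mirrors the ~5 MB chunk default mentioned earlier:

```rust
use std::io::{self, Read, Write};

/// Copy `src` to `dst` through a fixed-size buffer, so memory use stays
/// constant no matter how large the layer is.
fn stream_copy<R: Read, W: Write>(mut src: R, mut dst: W) -> io::Result<u64> {
    let mut buf = vec![0u8; 5 * 1024 * 1024]; // one ~5 MiB window, reused for every chunk
    let mut total = 0u64;
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            break; // end of stream
        }
        dst.write_all(&buf[..n])?;
        total += n as u64;
    }
    Ok(total)
}
```

`std::io::copy` does the same job in one call; the explicit loop just makes the constant-memory property visible.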
### State Directory

Credential material and push history are stored under `.docker-image-pusher/`:
```
.docker-image-pusher/
├── credentials.json   # registry → username/password pairs from `login`
└── push_history.json  # most recent destinations (used for inference/prompts)
```
Tar archives produced by `save` live wherever you choose to write them (the current directory by default). They remain ordinary `docker save` outputs, so you can transfer them, scan them, or delete them independently of the CLI state.
### Processing Flow
**Save Operation (runtime → tar):**

1. **Runtime detection** – locate Docker, nerdctl, or Podman (or honor `--runtime`).
2. **Image selection** – parse JSON output from `images --format '{{json .}}'` and optionally prompt.
3. **Tar export** – call `<runtime> save image -o file.tar`, sanitizing filenames and warning before overwrites.
**Push Operation (tar → registry):**

1. **Authenticate** – load stored credentials or prompt for overrides.
2. **Tar analysis** – extract RepoTags + manifest to infer the final destination.
3. **Layer extraction** – stream each layer from the tar into temporary files while hashing and reporting progress.
4. **Layer/config upload** – reuse existing blobs when present, otherwise stream in fixed-size chunks with telemetry.
5. **Manifest publish** – rebuild the OCI manifest and push it once all blobs are present.
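Step 3 is what keeps memory flat: each layer is hashed while it streams out of the tar. A minimal sketch of that technique, assuming the `tar` and `sha2` crates; the real implementation lives in `tar_import.rs` and emits `LocalLayer` structs instead:

```rust
use std::fs::File;
use std::io::{self, Read, Write};

use sha2::{Digest, Sha256};
use tar::Archive;

/// Stream one tar entry to disk while computing its sha256 digest,
/// without ever holding the whole layer in memory.
fn extract_and_hash<R: Read>(entry: &mut R, dest: &mut File) -> io::Result<String> {
    let mut hasher = Sha256::new();
    let mut buf = [0u8; 64 * 1024]; // fixed 64 KiB window (size is illustrative)
    loop {
        let n = entry.read(&mut buf)?;
        if n == 0 {
            break;
        }
        hasher.update(&buf[..n]);
        dest.write_all(&buf[..n])?;
    }
    Ok(format!("sha256:{:x}", hasher.finalize()))
}

fn main() -> io::Result<()> {
    let mut archive = Archive::new(File::open("nginx_latest.tar")?);
    for entry in archive.entries()? {
        let mut entry = entry?;
        let path = entry.path()?.into_owned();
        // legacy docker-save layout stores each layer as <id>/layer.tar
        if path.to_string_lossy().ends_with("layer.tar") {
            let mut tmp = File::create("layer.tmp")?;
            let digest = extract_and_hash(&mut entry, &mut tmp)?;
            println!("{} → {}", path.display(), digest);
        }
    }
    Ok(())
}
```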
## 🤝 Contributing

Contributions are welcome!
Happy Docker image transferring! 🐳