ic-file-uploader
ic-file-uploader is a Rust crate for efficiently uploading files larger than 2MB to the Internet Computer. It breaks large files into manageable chunks that fit within the IC's message size limit and passes them to canister update calls, which write the chunks back into files.
Features
- Chunk-based Uploads: Automatically splits large files into 2MB chunks for efficient transfer
- Parallel Uploads: Upload multiple chunks concurrently with configurable rate limiting
- Resume Support: Resume interrupted uploads from where they left off
- Retry Logic: Automatically retry failed chunks with exponential backoff
- Progress Tracking: Real-time progress reporting and upload rate monitoring
- Flexible Configuration: Customizable chunk size, retry attempts, and concurrency limits
Installation
From crates.io
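The crate provides a command-line binary, so the standard cargo install should work:

```bash
cargo install ic-file-uploader
```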
From source
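A sketch assuming a standard Cargo layout; substitute the actual repository URL:

```bash
git clone <repository-url> ic-file-uploader
cd ic-file-uploader
cargo install --path .
```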
Usage
Basic Upload
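A minimal sketch. The positional order `<canister_name> <method_name> <file_path>` is an assumption here; check `ic-file-uploader --help` for the exact syntax:

```bash
# Assumed syntax: ic-file-uploader <canister_name> <method_name> <file_path>
ic-file-uploader my_canister append_chunk ./large_file.bin
```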
Parallel Upload (Recommended for large files)
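The flags come from the Command Line Options section below; positional arguments follow the same assumed order as the basic upload:

```bash
ic-file-uploader my_canister append_parallel_chunk ./large_file.bin \
  --parallel --max-concurrent 4 --target-rate 4.0
```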
Resume an interrupted upload
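A sketch combining `--chunk-offset` and `--autoresume` (both documented below) to pick up where a previous run stopped:

```bash
# Resume from chunk 15 and retry failed chunks automatically
ic-file-uploader my_canister append_parallel_chunk ./large_file.bin \
  --parallel --chunk-offset 15 --autoresume
```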
Upload with custom network
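A sketch of targeting a specific dfx network via `--network`:

```bash
# Local replica
ic-file-uploader my_canister append_chunk ./large_file.bin --network local

# IC mainnet
ic-file-uploader my_canister append_chunk ./large_file.bin --network ic
```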
Retry specific failed chunks
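A sketch using `--retry-chunks-file`; the file format (one chunk ID per line) is an assumption:

```bash
# failed_chunks.txt is assumed to list one chunk ID per line
ic-file-uploader my_canister append_parallel_chunk ./large_file.bin \
  --parallel --retry-chunks-file failed_chunks.txt
```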
Command Line Options
- `--parallel`: Enable parallel upload mode for better performance
- `--max-concurrent <N>`: Maximum number of concurrent uploads (default: 4)
- `--target-rate <RATE>`: Target upload rate in MiB/s (default: 4.0)
- `--chunk-offset <N>`: Start uploading from chunk N (for resume)
- `--autoresume`: Enable automatic resume with retry attempts
- `--max-retries <N>`: Maximum retry attempts per chunk (default: 3)
- `--network <NETWORK>`: Specify dfx network (local, ic, etc.)
- `--retry-chunks-file <FILE>`: Retry only specific chunk IDs from file
Canister Integration
Your canister needs to implement methods that accept chunked data. The expected Candid signatures are:
```candid
// For parallel uploads
append_parallel_chunk : (nat32, blob) -> ();

// For sequential uploads
append_chunk : (blob) -> ();
```
Example Rust canister implementation:
```rust
use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    // Uploaded chunks, keyed by chunk index.
    static CHUNKS: RefCell<HashMap<u32, Vec<u8>>> = RefCell::new(HashMap::new());
}
```
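The update methods themselves might look like the following sketch, assuming the `ic-cdk` crate (Candid `nat32` and `blob` map to Rust `u32` and `Vec<u8>`):

```rust
// Sketch only: matches the Candid interface above, assuming ic-cdk.
#[ic_cdk::update]
fn append_parallel_chunk(index: u32, data: Vec<u8>) {
    // Parallel chunks can arrive out of order, so store them by index.
    CHUNKS.with(|chunks| {
        chunks.borrow_mut().insert(index, data);
    });
}

#[ic_cdk::update]
fn append_chunk(data: Vec<u8>) {
    // Sequential chunks arrive in order; append at the next free index.
    CHUNKS.with(|chunks| {
        let mut map = chunks.borrow_mut();
        let next = map.len() as u32;
        map.insert(next, data);
    });
}
```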
Examples
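The sketches below reuse the assumed `ic-file-uploader <canister> <method> <file>` argument order from the Usage section.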
Upload a large model file
```bash
# Upload a 50MB machine learning model with parallel chunks
ic-file-uploader my_canister append_parallel_chunk ./model.bin \
  --parallel --max-concurrent 4
```
Resume a failed upload
```bash
# If upload fails at chunk 15, resume from there
ic-file-uploader my_canister append_parallel_chunk ./model.bin \
  --parallel --chunk-offset 15 --autoresume
```
Production upload with rate limiting
```bash
# Upload to IC mainnet with conservative rate limiting
ic-file-uploader my_canister append_parallel_chunk ./model.bin \
  --parallel --network ic --max-concurrent 2 --target-rate 1.0
```
Performance Tips
- Use `--parallel` for files larger than 10MB
- Adjust `--max-concurrent` based on your network and canister capacity
- Use `--target-rate` to avoid overwhelming the canister
- Enable `--autoresume` for unreliable network connections
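On a fast, reliable connection the defaults can often be raised; a sketch, using the same assumed argument order:

```bash
ic-file-uploader my_canister append_parallel_chunk ./large_file.bin \
  --parallel --max-concurrent 8 --target-rate 8.0
```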
Troubleshooting
Upload hangs or doesn't complete
- Try reducing `--max-concurrent` to 1 or 2
- Lower the `--target-rate`
- Check canister logs for memory or processing limits
Chunks fail repeatedly
- Verify your canister method signature matches the interface shown in Canister Integration
- Check canister cycle balance
- Ensure sufficient canister memory for storing chunks
Resume not working
- Use `--chunk-offset` with the exact chunk number where the upload failed
- Combine with `--autoresume` for automatic retry logic
Use Cases
- Large File Handling: Upload datasets, models, and media files to IC canisters
- Bulk Data Transfer: Efficiently transfer large amounts of data with progress tracking
- Reliable Uploads: Resume interrupted transfers without starting over
- CI/CD Integration: Automated deployment of large assets to IC canisters
Requirements
- Rust 1.70+ (for building from source)
- `dfx` command-line tool installed and configured
- Internet Computer canister with appropriate upload methods
License
All original work is licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT), at your option.