Chunked Upload Server
A production-ready Rust HTTP server supporting resumable chunked uploads for large files (>10GB), designed for Cloudflare compatibility with 50MB chunk sizes.
Features
- Large File Support: Upload files of any size (10GB+)
- Resumable Uploads: Continue interrupted uploads from where they left off
- Cloudflare Compatible: 50MB default chunk size fits within Cloudflare's request limits
- JWT-based Part Authentication: Each chunk has its own secure token
- Multiple Storage Backends: Local filesystem, SMB/NAS, or S3-compatible storage
- Custom Paths: Include path in filename (e.g., `videos/2024/movie.mp4`) to organize files
- Auto Cleanup: Expired incomplete uploads are automatically cleaned up
- Progress Tracking: Real-time upload progress via SQLite persistence
Architecture
┌─────────────────────────────────────────────────────────────────┐
│                             Client                              │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 │ 1. POST /upload/init (API Key)
                                 │    filename: "videos/2024/movie.mp4"
                                 │    Returns: file_id + JWT tokens for each part
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Upload Server (Rust/Axum)                     │
├─────────────────────────────────────────────────────────────────┤
│ 2. PUT /upload/{id}/part/{n} (JWT Token per part)                │
│    - Validates token                                             │
│    - Stores chunk                                                │
│    - Updates SQLite                                              │
│                                                                   │
│ 3. GET /upload/{id}/status (API Key)                              │
│    - Returns progress: [{part: 0, status: "uploaded"}, ...]       │
│                                                                   │
│ 4. POST /upload/{id}/complete (API Key)                           │
│    - Assembles all parts                                          │
│    - Returns final file path                                      │
└─────────────────────────────────────────────────────────────────┘
                                 │
           ┌─────────────────────┼─────────────────────┐
           ▼                     ▼                     ▼
┌─────────────────────┐ ┌─────────────────┐ ┌─────────────────────┐
│    Local Storage    │ │     SMB/NAS     │ │     S3 Storage      │
│     ./uploads/      │ │ \\server\share  │ │    s3://bucket/     │
└─────────────────────┘ └─────────────────┘ └─────────────────────┘
Quick Start
1. Setup
Run `./init.sh` to generate a `.env` file with secure random keys, edit `.env` if needed (e.g., to change the storage path or port), and build the server.
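A minimal sketch of the setup commands, assuming a standard Cargo release build:

```bash
# Initialize environment (generates .env with secure random keys)
./init.sh

# Build
cargo build --release
```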
2. Deploy as Service
macOS (launchd)
Run `./deploy-mac.sh` to create the LaunchAgent and load it; manage the service with `launchctl` and check the log files referenced in the plist.
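A sketch of the launchd commands, assuming the LaunchAgent label `com.grace.chunked-uploader` that `deploy-mac.sh` creates:

```bash
# Deploy and start service (creates LaunchAgent and loads it)
./deploy-mac.sh

# Service management
launchctl list | grep com.grace.chunked-uploader
launchctl unload ~/Library/LaunchAgents/com.grace.chunked-uploader.plist
launchctl load ~/Library/LaunchAgents/com.grace.chunked-uploader.plist

# View logs: see the StandardOutPath/StandardErrorPath entries in the plist
```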
Linux (systemd)
Run `sudo ./deploy-linux.sh` to install and start the systemd service; manage it with `systemctl` and view logs with `journalctl`.
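A sketch of the systemd commands, assuming the unit name `chunked-uploader` that `deploy-linux.sh` creates:

```bash
# Deploy and start service (requires sudo)
sudo ./deploy-linux.sh

# Service management
sudo systemctl status chunked-uploader
sudo systemctl restart chunked-uploader

# View logs
journalctl -u chunked-uploader -f
# or
sudo systemctl status chunked-uploader --no-pager
```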
Manual Run (Development)
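For development the server can be run directly; a minimal sketch, assuming configuration is read from `.env` or from environment variables:

```bash
cargo run --release
# or with explicit configuration
API_KEY=xxx JWT_SECRET=yyy SERVER_PORT=3000 cargo run --release
```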
3. Initialize Upload
Send `POST /upload/init` with your API key, the filename, and the total file size; the response contains the upload ID and a JWT token for each part. With a custom path, the path is extracted from the filename: initializing with `videos/2024/december/large-video.mp4` stores the file at `videos/2024/december/<uuid>_large-video.mp4`.
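A hedged sketch of the request and a plausible response shape. The `X-API-Key` header and the JSON field names (`total_size`, `upload_id`, `chunk_size`) are assumptions; the `parts` array with `part`/`token` entries matches what the JavaScript SDK consumes.

```bash
# X-API-Key header name and JSON field names are assumptions
curl -X POST http://localhost:3000/upload/init \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"filename": "videos/2024/december/large-video.mp4", "total_size": 10737418240}'
```

```jsonc
// Illustrative response shape; field names are assumptions
{
  "upload_id": "550e8400-e29b-41d4-a716-446655440000",
  "chunk_size": 52428800,
  "parts": [
    { "part": 0, "token": "eyJhbGciOiJIUzI1NiIs..." },
    { "part": 1, "token": "eyJhbGciOiJIUzI1NiIs..." }
  ]
}
```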
4. Upload Parts
Upload each 50MB chunk with its corresponding JWT token. For each part the server validates the token, stores the chunk, and updates progress in SQLite.
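A hedged sketch; sending the raw chunk bytes as the request body with the per-part token as a Bearer header is an assumption.

```bash
# Upload part 0 (Bearer header and raw-body format are assumptions)
curl -X PUT http://localhost:3000/upload/$UPLOAD_ID/part/0 \
  -H "Authorization: Bearer $PART_0_TOKEN" \
  --data-binary @chunk_0.bin
```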
5. Check Progress (for resume)
Query the status endpoint with your API key; the response lists each part as `uploaded` or `pending`.
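A sketch of the status call; the response wrapper is an assumption, but the per-part shape matches the architecture diagram above.

```bash
# X-API-Key header name is an assumption (see the init example above)
curl http://localhost:3000/upload/$UPLOAD_ID/status \
  -H "X-API-Key: $API_KEY"
```

```jsonc
// Illustrative response; the "parts" wrapper is an assumption
{
  "parts": [
    { "part": 0, "status": "uploaded" },
    { "part": 1, "status": "pending" }
  ]
}
```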
6. Complete Upload
After all parts are uploaded, call the complete endpoint; the server assembles the parts and returns the final file path. With the S3 backend (and a path in the filename), the returned path points into the bucket (e.g., `s3://bucket/videos/2024/...`).
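A sketch of the completion call:

```bash
# X-API-Key header name is an assumption (see the init example above)
curl -X POST http://localhost:3000/upload/$UPLOAD_ID/complete \
  -H "X-API-Key: $API_KEY"
```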
7. Cancel Upload (optional)
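A sketch of cancelling an upload and cleaning up its stored chunks:

```bash
# X-API-Key header name is an assumption (see the init example above)
curl -X DELETE http://localhost:3000/upload/$UPLOAD_ID \
  -H "X-API-Key: $API_KEY"
```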
API Reference
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/upload/init` | POST | API Key | Initialize upload, get part tokens |
| `/upload/{id}/part/{n}` | PUT | JWT (per part) | Upload a single chunk |
| `/upload/{id}/status` | GET | API Key | Get upload progress |
| `/upload/{id}/complete` | POST | API Key | Assemble all parts |
| `/upload/{id}` | DELETE | API Key | Cancel and cleanup |
| `/health` | GET | None | Health check |
Configuration
| Variable | Default | Description |
|---|---|---|
| `API_KEY` | required | API key for authentication |
| `JWT_SECRET` | required | Secret for JWT token signing |
| `STORAGE_BACKEND` | `local` | `local`, `smb`, or `s3` |
| `LOCAL_STORAGE_PATH` | `./uploads` | Path for local storage |
| `TEMP_STORAGE_PATH` | system temp | Local path for temporary chunk storage (fast SSD recommended). Used by S3 and SMB backends. |
| `SMB_HOST` | `localhost` | SMB server hostname or IP |
| `SMB_PORT` | `445` | SMB server port |
| `SMB_USER` | | SMB username |
| `SMB_PASS` | | SMB password |
| `SMB_SHARE` | `share` | SMB share name |
| `SMB_PATH` | | Subdirectory within the share (optional) |
| `S3_ENDPOINT` | AWS default | S3 endpoint URL |
| `S3_BUCKET` | `uploads` | S3 bucket name |
| `S3_REGION` | `us-east-1` | S3 region |
| `CHUNK_SIZE_MB` | `50` | Chunk size in MB |
| `UPLOAD_TTL_HOURS` | `24` | Hours before incomplete uploads expire |
| `DATABASE_PATH` | `./uploads.db` | SQLite database path |
| `SERVER_PORT` | `3000` | Server port |
Resume Upload Flow
- Client starts upload with `POST /upload/init`
- Client uploads chunks in parallel or sequence
- If interrupted, client calls `GET /upload/{id}/status`
- Response shows which parts are `pending` vs `uploaded`
- Client re-uploads only `pending` parts using original tokens
- When all parts are uploaded, call `POST /upload/{id}/complete`
Example Client (Python)
A minimal client using `requests` (install with `pip install requests`):

import os
import requests

# NOTE: the auth headers and JSON field names used below (X-API-Key, Bearer part
# tokens, total_size, upload_id, parts) are assumptions; adjust to match the server.
BASE_URL = "http://localhost:3000"
API_KEY = "your-api-key"
CHUNK_SIZE = 50 * 1024 * 1024  # 50MB


def upload_file(file_path, target_path=None):
    """
    Upload a file to the chunked upload server.

    Args:
        file_path: Local path to the file
        target_path: Optional remote path (e.g., "videos/2024")
    """
    file_size = os.path.getsize(file_path)
    filename = os.path.basename(file_path)

    # Include target path in filename if specified
    if target_path:
        filename = f"{target_path}/{filename}"

    # 1. Initialize upload
    resp = requests.post(
        f"{BASE_URL}/upload/init",
        headers={"X-API-Key": API_KEY},
        json={"filename": filename, "total_size": file_size},
    )
    resp.raise_for_status()
    init = resp.json()
    upload_id = init["upload_id"]
    parts = init["parts"]  # [{"part": 0, "token": "..."}, ...]

    # 2. Upload each part
    with open(file_path, "rb") as f:
        for part in parts:
            # Read chunk
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            # Upload
            r = requests.put(
                f"{BASE_URL}/upload/{upload_id}/part/{part['part']}",
                headers={"Authorization": f"Bearer {part['token']}"},
                data=chunk,
            )
            r.raise_for_status()

    # 3. Complete upload
    done = requests.post(
        f"{BASE_URL}/upload/{upload_id}/complete",
        headers={"X-API-Key": API_KEY},
    )
    done.raise_for_status()
    return done.json()


# Simple upload (file goes to default location)
upload_file("movie.mp4")

# Upload to specific path
upload_file("movie.mp4", target_path="videos/2024")
JavaScript/TypeScript SDK
Official SDK for browser and Node.js: chunked-uploader-sdk
Features
- Large File Support: Upload files of any size (10GB+)
- Automatic Chunking: Files split into 50MB chunks (Cloudflare compatible)
- Parallel Uploads: Configurable concurrency for faster uploads
- Resumable: Continue interrupted uploads from where they left off
- Progress Tracking: Real-time progress callbacks
- Retry Logic: Automatic retry for failed chunks
- TypeScript: Full type definitions included
- Isomorphic: Works in both browser and Node.js
Installation
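Assuming the package is published under the name used in the import statements below:

```bash
npm install chunked-uploader-sdk
```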
Quick Start
import { ChunkedUploader } from 'chunked-uploader-sdk';
const uploader = new ChunkedUploader({
baseUrl: 'https://upload.example.com',
apiKey: 'your-api-key',
});
// Upload a file with progress tracking
const result = await uploader.uploadFile(file, {
onProgress: (event) => {
console.log(`Progress: ${event.overallProgress.toFixed(1)}%`);
},
});
if (result.success) {
console.log('Upload complete:', result.finalPath);
} else {
console.error('Upload failed:', result.error);
}
Browser Example
const uploader = new ChunkedUploader({
baseUrl: 'http://localhost:3000',
apiKey: 'your-api-key',
});
// Progress UI elements (selectors here are illustrative)
const progressBar = document.querySelector('#progress') as HTMLElement;
const statusText = document.querySelector('#status') as HTMLElement;

// File input handler
const input = document.querySelector('input[type="file"]') as HTMLInputElement;
input.addEventListener('change', async () => {
const file = input.files?.[0];
if (!file) return;
const result = await uploader.uploadFile(file, {
concurrency: 5, // Upload 5 parts simultaneously
onProgress: (event) => {
progressBar.style.width = `${event.overallProgress}%`;
statusText.textContent = `Uploading part ${event.uploadedParts}/${event.totalParts}`;
},
onPartComplete: (result) => {
if (!result.success) {
console.error(`Part ${result.partNumber} failed:`, result.error);
}
},
});
console.log(result);
});
Resume Interrupted Upload
// Store part tokens from initial upload
const tokenMap = new Map<number, string>();
initResponse.parts.forEach(p => tokenMap.set(p.part, p.token));
// Later, resume the upload
const result = await uploader.resumeUpload(uploadId, file, {
partTokens: tokenMap,
onProgress: (event) => console.log(`${event.overallProgress}%`),
});
Cancellable Upload
const abortController = new AbortController();
// Cancel button
cancelButton.addEventListener('click', () => {
abortController.abort();
});
const result = await uploader.uploadFile(file, {
signal: abortController.signal,
});
if (!result.success && result.error?.message === 'Upload aborted') {
console.log('Upload was cancelled');
}
Node.js Usage
import { ChunkedUploader } from 'chunked-uploader-sdk';
import { readFile } from 'fs/promises';
const uploader = new ChunkedUploader({
baseUrl: 'http://localhost:3000',
apiKey: 'your-api-key',
concurrency: 5,
});
async function uploadFromDisk(filePath: string) {
const buffer = await readFile(filePath);
const result = await uploader.uploadFile(buffer, {
onProgress: (event) => {
process.stdout.write(`\rProgress: ${event.overallProgress.toFixed(1)}%`);
},
});
console.log('\nUpload complete:', result);
}
Configuration Options
interface ChunkedUploaderConfig {
/** Base URL of the chunked upload server */
baseUrl: string;
/** API key for management endpoints */
apiKey: string;
/** Request timeout in milliseconds (default: 30000) */
timeout?: number;
/** Number of concurrent chunk uploads (default: 3) */
concurrency?: number;
/** Retry attempts for failed chunks (default: 3) */
retryAttempts?: number;
/** Delay between retries in milliseconds (default: 1000) */
retryDelay?: number;
/** Custom fetch implementation */
fetch?: typeof fetch;
}
SDK Methods
| Method | Description |
|---|---|
| `uploadFile(file, options?)` | Upload a file with automatic chunking and parallel uploads |
| `resumeUpload(uploadId, file, options?)` | Resume an interrupted upload |
| `initUpload(filename, totalSize, webhookUrl?)` | Initialize an upload session manually |
| `uploadPart(uploadId, partNumber, token, data, signal?)` | Upload a single chunk |
| `getStatus(uploadId)` | Get upload progress and status |
| `completeUpload(uploadId)` | Complete an upload (assemble all parts) |
| `cancelUpload(uploadId)` | Cancel an upload and cleanup |
| `healthCheck()` | Check server health |
Scripts Reference
| Script | Description |
|---|---|
| `init.sh` | Generates `.env` file with secure random `API_KEY` and `JWT_SECRET` |
| `deploy-mac.sh` | Creates macOS LaunchAgent and starts service (auto-restarts on reboot) |
| `deploy-linux.sh` | Creates systemd service and starts it (requires sudo, auto-restarts on reboot) |
init.sh
Initializes the environment configuration:
- Generates cryptographically secure `API_KEY` and `JWT_SECRET`
- Creates `.env` file with default settings
- Creates `uploads` directory
deploy-mac.sh
Deploys on macOS using launchd:
- Creates `~/Library/LaunchAgents/com.grace.chunked-uploader.plist`
- Waits for external volumes (e.g., `/Volumes/...`) to mount before starting
- Auto-restarts if the process crashes
- Starts automatically on login
deploy-linux.sh
Deploys on Linux using systemd:
- Creates `/etc/systemd/system/chunked-uploader.service`
- Waits for mount points (e.g., `/mnt/...`, `/media/...`) before starting
- Auto-restarts if the process crashes
- Starts automatically on boot
Building
The default build supports local storage only. SMB/NAS support and S3 support are optional Cargo features: the SMB backend is pure Rust with no external dependencies, the S3 backend requires native crypto libraries, and the two can be combined. The release binary is placed under `target/release/`, and the server is configured through environment variables at run time (e.g., `API_KEY=xxx JWT_SECRET=yyy`).
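A sketch of the build and run commands. The Cargo feature names (`smb`, `s3`) and the binary name (`chunked-uploader`) are assumptions; check `Cargo.toml` for the exact names.

```bash
# Default build (local storage only)
cargo build --release

# With SMB/NAS support (pure Rust, no external dependencies)
cargo build --release --features smb

# With S3 support (requires native crypto libs)
cargo build --release --features s3

# With both SMB and S3 support
cargo build --release --features smb,s3

# Run with custom config (binary name is an assumption)
API_KEY=xxx JWT_SECRET=yyy ./target/release/chunked-uploader
```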
Build Requirements
- Rust 1.70+
- SQLite development libraries (usually bundled)
- For S3 feature: native crypto libraries (OpenSSL or aws-lc)
Webhook Notifications
When initializing an upload, you can provide a `webhook_url`. When the upload completes, the server will POST a notification to that URL.
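The exact payload is not documented here; an illustrative shape (all field names below are assumptions):

```jsonc
// Illustrative webhook body; field names are assumptions
{
  "upload_id": "550e8400-e29b-41d4-a716-446655440000",
  "filename": "videos/2024/movie.mp4",
  "final_path": "videos/2024/550e8400_movie.mp4",
  "size": 10737418240,
  "status": "completed"
}
```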
The webhook is called asynchronously and does not block the complete response.
S3 Storage Setup
For S3-compatible storage (AWS S3, MinIO, etc.):
# .env
STORAGE_BACKEND=s3
S3_ENDPOINT=https://play.min.io # or AWS endpoint
S3_BUCKET=my-uploads
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
# Optional: Fast local storage for temporary chunks (recommended)
TEMP_STORAGE_PATH=/tmp/chunked-uploads
S3 Storage Architecture
The S3 backend uses a hybrid approach for optimal performance:
- Chunks are stored locally on fast storage (SSD) during upload
- Final assembled file is uploaded to S3 after all chunks complete
- Automatic cleanup of local temporary files
This design ensures:
- Fast chunk uploads (no network latency per chunk)
- No S3 multipart upload complexity or minimum part size restrictions
- Reliable large file transfers to S3
- Works with any S3-compatible storage (AWS S3, MinIO, Cloudflare R2, etc.)
Building with S3 Support
Build with the S3 feature enabled; see the Building section above for the full set of commands.
Running S3 Tests
Make sure `.env` has S3 credentials configured, start the server with the S3 backend, then run the integration tests from another terminal.
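A sketch of the test run; the feature name and the assumption that the integration tests run under the default `cargo test` target are unverified here.

```bash
# Start the server with the S3 backend (feature name is an assumption)
STORAGE_BACKEND=s3 cargo run --release --features s3

# In another terminal: run the integration tests
cargo test --features s3
```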
SMB/NAS Storage Setup
For SMB/CIFS network storage (NAS devices, Windows shares, Samba):
# .env
STORAGE_BACKEND=smb
SMB_HOST=192.168.1.100 # NAS IP or hostname
SMB_PORT=445 # Default SMB port
SMB_USER=admin # SMB username
SMB_PASS=your-password # SMB password
SMB_SHARE=uploads # Share name on the server
SMB_PATH=videos # Optional: subdirectory within share
# Optional: Fast local storage for temporary chunks (recommended)
TEMP_STORAGE_PATH=/tmp/chunked-uploads
SMB Storage Architecture
The SMB backend uses a hybrid approach for optimal performance:
- Chunks are stored locally on fast storage (SSD) during upload
- Final assembled file is transferred to SMB after all chunks complete
- Automatic cleanup of local temporary files
This design ensures:
- Fast chunk uploads (no network latency per chunk)
- Reliable large file transfers to NAS
- Works with any SMB 3.0+ compatible server (Synology, QNAP, TrueNAS, Windows, Samba)
Building with SMB Support
Build with the SMB feature enabled (pure Rust, no external dependencies); see the Building section above for the full set of commands.
macOS Local Network Permission
On macOS Sequoia (15.x) and later, apps need permission to access local network resources. When deploying with `deploy-mac.sh`, the service may need Local Network permission:
- Run the binary once manually to trigger the permission prompt
- If prompted, allow "Local Network" access in System Settings > Privacy & Security > Local Network
- Then deploy normally with `./deploy-mac.sh`
Troubleshooting SMB Connection
If SMB connection fails:
Check basic network connectivity to the host, confirm the SMB port is reachable, try connecting with an SMB client from the same machine, and review the server logs.
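A sketch of those checks with standard tools, using the example host and user from the configuration above:

```bash
# Test network connectivity
ping -c 3 192.168.1.100

# Test SMB port
nc -zv 192.168.1.100 445

# Test SMB connection (on macOS/Linux, requires smbclient)
smbclient -L //192.168.1.100 -U admin
```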
Common issues:
- "No route to host": Network/firewall issue or macOS Local Network permission needed
- "Access denied": Check username/password
- "Share not found": Verify share name exists on server
License
MIT