# Hyperfetch
Fast, reliable CLI downloader for HTTP(S) with multi-connection segmented downloads, resume, retries, custom headers, and a clean progress UI.
## Features
- Parallel chunked downloads (when the server supports byte ranges)
- Auto-resume from partially downloaded files (on by default)
- Adaptive chunk sizing for steadier throughput
- Retriable chunks with backoff delay
- Custom headers and User-Agent
- Progress bar or quiet/verbose console output
## Install

### Option 1: Prebuilt binaries (recommended)

Grab the latest release from the GitHub Releases page:
- Linux (x86_64): `hyperfetch`
- Windows (x86_64): `hyperfetch.exe`
Linux:

```bash
# Download the hyperfetch linux executable from Releases, then:
chmod +x hyperfetch
./hyperfetch --help
```

Windows (PowerShell):

```powershell
# Download the .exe from Releases, then:
./hyperfetch.exe --help
```
### Option 2: Build from source

Requires Rust (stable).

```bash
# In the repo root
cargo build --release
# Binary will be at:
# target/release/hyperfetch
```
## Usage
Basic:
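For example, downloading to the current directory (placeholder URL):

```bash
hyperfetch https://example.com/file.iso
```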
Specify output path:
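With `-o` (placeholder URL and filename):

```bash
hyperfetch https://example.com/file.iso -o file.iso
```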
Increase concurrency (default 16):
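For instance, doubling the default:

```bash
hyperfetch https://example.com/file.iso -c 32
```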
Set chunk size (supports K, M, G, T; default 1M):
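For example, 4 MiB chunks:

```bash
hyperfetch https://example.com/file.iso --chunk-size 4M
```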
Resume is enabled by default. To disable:
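For example:

```bash
hyperfetch https://example.com/file.iso --no-resume
```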
Adaptive chunking is enabled by default. To disable:
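For example:

```bash
hyperfetch https://example.com/file.iso --no-adaptive
```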
Custom User-Agent and headers (repeat --header as needed):
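A sketch with placeholder credentials:

```bash
hyperfetch https://example.com/file.iso \
  -u "MyAgent/1.0" \
  --header "Authorization: Bearer TOKEN" \
  --header "Cookie: session=abc123"
```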
Retries and timeouts:
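For example, 5 retries with a 60-second timeout (placeholder URL):

```bash
hyperfetch https://example.com/file.iso -r 5 -t 60
```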
Quiet and verbose modes:

```bash
# Quiet (no progress bar)
hyperfetch -q <URL>
# Verbose (extra console info)
hyperfetch -v <URL>
```
## Full CLI reference
```text
Usage: hyperfetch [OPTIONS] <URL>

Options:
  -o, --output <FILE>        Output file path
  -c, --connections <NUM>    Maximum concurrent connections [default: 16]
      --chunk-size <SIZE>    Chunk size (K/M/G/T) [default: 1M]
  -u, --user-agent <STRING>  Custom User-Agent
      --header <HEADER>      Custom HTTP header (format: 'Name: Value') [repeatable]
      --no-resume            Disable resume
      --no-adaptive          Disable adaptive chunking
  -r, --retries <NUM>        Number of retry attempts [default: 3]
  -t, --timeout <SECONDS>    Connection timeout [default: 30]
  -q, --quiet                Quiet mode (no progress bar)
  -v, --verbose              Verbose output
  -h, --help                 Print help
  -V, --version              Print version
```
## Tips and notes
- Parallel speedups require server support for range requests (`Accept-Ranges: bytes`). If not supported, Hyperfetch falls back to a single stream.
- For time-limited or authenticated URLs, you may need to pass headers (for example, `Authorization` or `Cookie`).
- On flaky networks, keep resume enabled and increase `--retries`.
- Very small files can be slower with many connections; reduce `--connections` or rely on the default.
## Examples
Download a large file with higher concurrency and custom chunk size:
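For example (placeholder URL and filename):

```bash
hyperfetch https://example.com/big.iso -c 32 --chunk-size 4M -o big.iso
```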
Resume a partially downloaded file:
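Since resume is on by default, simply rerun the original command and the download picks up where it left off:

```bash
hyperfetch https://example.com/big.iso -o big.iso
```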
Authenticated download (bearer token):
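A sketch assuming the token is available in a `$TOKEN` environment variable:

```bash
hyperfetch https://example.com/private.bin --header "Authorization: Bearer $TOKEN"
```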
## Releases automation
Tagged releases (e.g., v0.1.2) automatically build Linux and Windows archives and attach them to the GitHub Releases page.
## Troubleshooting
- "Server doesn’t support range requests": Expect single-connection performance; try again with fewer connections.
- "Permission denied" on Linux: run `chmod +x hyperfetch` before running.
- "TLS/certificate" errors: verify the URL and certificates; some private endpoints require passing custom headers.
## License
This project is open source; see the repository for license details.