find_duplicate_files
This program finds duplicate files by computing their hash values.
The hashing algorithm used is BLAKE3.
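Conceptually, the program hashes each file's contents and groups files that share the same digest. The sketch below illustrates that idea with the blake3 and walkdir crates; it is not the actual implementation of find_duplicate_files, and it simplifies traversal and error handling.

// Illustrative sketch only, not the program's actual code: hash every file
// under a directory with BLAKE3 and group paths by digest; any group with
// more than one path is a set of duplicates.
// Assumed dependencies: blake3 = "1", walkdir = "2".
use std::collections::HashMap;
use std::{fs, io, path::PathBuf};

fn main() -> io::Result<()> {
    let mut groups: HashMap<blake3::Hash, Vec<PathBuf>> = HashMap::new();

    for entry in walkdir::WalkDir::new(".").into_iter().filter_map(Result::ok) {
        if entry.file_type().is_file() {
            // Reads the whole file into memory; fine for a sketch, but large
            // files would be hashed incrementally in practice.
            let bytes = fs::read(entry.path())?;
            groups.entry(blake3::hash(&bytes)).or_default().push(entry.into_path());
        }
    }

    for (hash, paths) in groups.iter().filter(|(_, paths)| paths.len() > 1) {
        println!("{}:\n{:#?}", hash.to_hex(), paths);
    }
    Ok(())
}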
To find duplicate files in the current directory, run the command:
find_duplicate_files
Help
Run find_duplicate_files -h in the terminal
to see the help message and all available options:
find duplicate files according to their blake3 hash

Usage: find_duplicate_files [OPTIONS]

Options:
  -f, --full_path
          Prints full path of duplicate files, otherwise relative path
  -g, --generate <GENERATOR>
          If provided, outputs the completion file for given shell [possible values: bash, elvish, fish, powershell, zsh]
  -m, --max_depth <MAX_DEPTH>
          Set the maximum depth to search for duplicate files
  -o, --omit_hidden
          Omit hidden files (starts with '.'), otherwise search all files
  -p, --path <PATH>
          Set the path where to look for duplicate files, otherwise use the current directory
  -r, --result_format <RESULT_FORMAT>
          Print the result in the chosen format [default: personal] [possible values: json, yaml, personal]
  -s, --sort
          Sort result by file size, otherwise sort by number of duplicate files
  -t, --time
          Show total execution time
  -h, --help
          Print help (see more with '--help')
  -V, --version
          Print version
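For example, the options above can be combined as follows (the directory path and output file name are placeholders, not defaults):

find_duplicate_files -p ~/Documents -m 3 -o -s -t
find_duplicate_files -r json
find_duplicate_files -g bash > find_duplicate_files.bash

Where the generated completion file must be installed depends on your shell.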
Building
To build and install from source, run the following command:
cargo install find_duplicate_files
Alternatively, clone the project from GitHub, compile it, and install the executable:
git clone https://github.com/claudiofsr/find_duplicate_files.git
cd find_duplicate_files
cargo b -r && cargo install --path=.
The crate provides two mutually exclusive directory-traversal features: jwalk (the default) and walkdir.
In general, jwalk is faster than walkdir.
If you prefer walkdir, build with:
cargo b -r && cargo install --path=. --features walkdir
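Mutually exclusive cargo features like these are typically wired up with conditional compilation, so that the traversal backend is chosen at compile time. The following is a minimal sketch of that pattern; the actual sources of find_duplicate_files may organize this differently.

// Sketch: picking the traversal backend at compile time with cargo features.
// jwalk walks directories in parallel, which is why it is usually faster.
#[cfg(feature = "walkdir")]
use walkdir::WalkDir;

#[cfg(not(feature = "walkdir"))]
use jwalk::WalkDir;

fn list_files(dir: &str) -> Vec<std::path::PathBuf> {
    WalkDir::new(dir)
        .into_iter()
        .filter_map(Result::ok)                      // skip unreadable entries
        .filter(|entry| entry.file_type().is_file()) // keep regular files only
        .map(|entry| entry.path().to_path_buf())     // works with both backends
        .collect()
}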