s3find

A command line utility to walk an Amazon S3 hierarchy. An analog of find for Amazon S3.

Usage

USAGE:
    s3find [OPTIONS] <path> [SUBCOMMAND]

FLAGS:
    -h, --help
            Prints help information

    -V, --version
            Prints version information


OPTIONS:
        --aws-access-key <aws_access_key>
            AWS access key. Optional.

        --aws-region <aws_region>
            The AWS region to use [default: us-east-1]

        --aws-secret-key <aws_secret_key>
            AWS secret key. Optional.

        --size <bytes_size>...
            File size for match:
                5k - exact match 5k,
                +5k - bigger than 5k,
                -5k - smaller than 5k

            Possible file size units are as follows:
                k - kilobytes (1024 bytes)
                M - megabytes (1024 kilobytes)
                G - gigabytes (1024 megabytes)
                T - terabytes (1024 gigabytes)
                P - petabytes (1024 terabytes)

        --iname <ipattern>...
            Case-insensitive glob pattern for match; can be specified multiple times

        --limit <limit>
            Limit the number of matched keys

        --name <npattern>...
            Glob pattern for match; can be specified multiple times

        --page-size <number>
            The number of results to return in each response to a
            list operation. The default value is 1000 (the maximum
            allowed). Using a lower value may help if an operation
            times out. [default: 1000]

        --regex <rpattern>...
            Regex pattern for match; can be specified multiple times

        --mtime <time>...
            Modification time for match, a time period:
                +5d - for period from now-5d to now
                -5d - for period before now-5d

            Possible time units are as follows:
                s - seconds
                m - minutes
                h - hours
                d - days
                w - weeks

            Can be specified multiple times, but the periods should overlap

ARGS:
    <path>
            S3 path to walk through, in the form s3://bucket/path


SUBCOMMANDS:
    -copy        Copy matched keys to an S3 destination
    -delete      Delete matched keys
    -download    Download matched keys
    -exec        Exec any shell program with every key
    -ls          Print the list of matched keys
    -lstags      Print the list of matched keys with tags
    -move        Move matched keys to an S3 destination
    -print       Extended print with detailed information
    -public      Make the matched keys publicly available (read-only)
    -tags        Set the tags for the matched keys (overwrites existing tags)
    help         Prints this message or the help of the given subcommand(s)


The authentication flow tries the following sources in order:
  * credentials passed as command-line arguments
  * environment variable credentials: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  * credentials from an AWS profile file.
    The profile can be selected via the environment variable AWS_PROFILE;
    the credentials file location can be set via AWS_SHARED_CREDENTIALS_FILE
  * the AWS instance IAM profile
  * the AWS container IAM profile
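
For example, to authenticate via environment variables (the key values below are placeholders; substitute your own credentials):

export AWS_ACCESS_KEY_ID='<access-key-id>'
export AWS_SECRET_ACCESS_KEY='<secret-access-key>'
s3find 's3://example-bucket/example-path' --name '*' -ls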

Examples

Find path by glob pattern

Print

s3find 's3://example-bucket/example-path' --name '*' -print

Delete

s3find 's3://example-bucket/example-path' --name '*' -delete

List

s3find 's3://example-bucket/example-path' --name '*' -ls

List keys with tags

s3find 's3://example-bucket/example-path' --name '*' -lstags

Exec

s3find 's3://example-bucket/example-path' --name '*' -exec 'echo {}'
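
The {} placeholder is substituted with each matched key, so the command runs once per key. Any shell command should work here; for instance:

s3find 's3://example-bucket/example-path' --name '*' -exec 'echo found: {}'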

Download

s3find 's3://example-bucket/example-path' --name '*' -download

Copy files to another S3 location

s3find 's3://example-bucket/example-path' --name '*.dat' -copy -f 's3://example-bucket/example-path2'

Move files to another S3 location

s3find 's3://example-bucket/example-path' --name '*.dat' -move -f 's3://example-bucket/example-path2'

Set tags

s3find 's3://example-bucket/example-path' --name '*9*' -tags 'key:value' 'env:staging'

Make publicly available

s3find 's3://example-bucket/example-path' --name '*9*' -public

Find path by case-insensitive glob pattern

s3find 's3://example-bucket/example-path' --iname '*s*' -ls

Find path by regex pattern

s3find 's3://example-bucket/example-path' --regex '1$' -print

Find path by size

Exact match

s3find 's3://example-bucket/example-path' --size 0 -print

Larger

s3find 's3://example-bucket/example-path' --size +10M -print

Smaller

s3find 's3://example-bucket/example-path' --size -10k -print

Find path by time

Files modified within the last 10 seconds

s3find 's3://example-bucket/example-path' --mtime 10 -print

Files modified within the last 10 minutes

s3find 's3://example-bucket/example-path' --mtime +10m -print

Files modified more than 10 hours ago

s3find 's3://example-bucket/example-path' --mtime -10h -print
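
Since --mtime can be given multiple times with overlapping periods, combining a + period and a - period selects keys in their overlap. For example, following the semantics above, files modified between 10 days and 1 hour ago:

s3find 's3://example-bucket/example-path' --mtime +10d --mtime -1h -print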

Multiple filters

Same filters

Files with size between 10 and 20 bytes

s3find 's3://example-bucket/example-path' --size +10 --size -20 -print

Different filters

s3find 's3://example-bucket/example-path' --size +10 --name '*file*' -print
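
Filters of different kinds are combined, so each additional filter narrows the result. For example, mixing size, name, and time filters (the values here are illustrative):

s3find 's3://example-bucket/example-path' --size +10 --name '*file*' --mtime +1w -print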

Additional control

Select a limited number of keys

s3find 's3://example-bucket/example-path' --name '*' --limit 10

Limit page size of the request

s3find 's3://example-bucket/example-path' --name '*' --page-size 100

How to build and install

Requirements: Rust and Cargo

# Build
cargo build --release

# Install from local source
cargo install --path .

# Install latest from git
cargo install --git https://github.com/AnderEnder/s3find-rs

# Install from crate package
cargo install s3find
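
# Verify the installation (cargo installs binaries to ~/.cargo/bin by default)
s3find --version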