pub struct Config {
    pub link_to_cache: bool,
    pub overwrite: bool,
    pub block_size: Option<u64>,
    pub parallelism: Option<usize>,
    pub retries: Option<usize>,
    pub azure: AzureConfig,
    pub s3: S3Config,
    pub google: GoogleConfig,
}
Configuration used in a cloud copy operation.
Fields
link_to_cache: bool
If link_to_cache is true, then a downloaded file that is already present (and fresh) in the cache will be hard linked at the requested destination instead of copied.
If creating the hard link fails (for example, because the cache is on a different file system than the destination path), a copy to the destination is made instead.
Note that cache files are created read-only; if the destination is created as a hard link, it will also be read-only. Making the destination writable is not recommended, as writing to the destination path would corrupt the corresponding content entry in the cache.
When false, a copy to the destination is always performed.
overwrite: bool
Whether or not the destination should be overwritten.
If false and the destination is a local file that already exists, the copy operation will fail.
If false and the destination is a remote file, a network request will be made for the URL; if the request succeeds (that is, the destination already exists), the copy operation will fail.
block_size: Option<u64>
The block size to use for file transfers.
The default block size depends on the cloud storage service.
parallelism: Option<usize>
The parallelism level for network operations.
Defaults to the host’s available parallelism.
retries: Option<usize>
The number of retries to attempt for network operations.
Defaults to 5.
azure: AzureConfig
The Azure Storage configuration.
s3: S3Config
The AWS S3 configuration.
google: GoogleConfig
The Google Cloud Storage configuration.
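A minimal construction sketch for these fields. This assumes Config and its service-specific sub-configurations implement Default, which is not documented on this page:

// Hypothetical example: assumes Config implements Default, which is not
// shown here; the service configurations are left at their default values.
let config = Config {
    link_to_cache: true,      // hard link from the cache when possible
    overwrite: false,         // fail if the destination already exists
    block_size: None,         // use the service-dependent default block size
    parallelism: None,        // use the host's available parallelism
    retries: Some(3),         // override the default of 5 retries
    ..Default::default()      // azure, s3, and google defaults
};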
Implementations
impl Config
pub fn parallelism(&self) -> usize
Gets the parallelism supported for uploads and downloads.
For uploads, this is the number of blocks that may be concurrently transferred.
For downloads, this is the number of blocks that may be concurrently downloaded if the download supports ranged requests.
Defaults to the host’s available parallelism (or 1 if it cannot be determined).
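A sketch of the documented fallback using only the standard library; this illustrates the behavior described above and is not the crate's implementation:

use std::thread;

// Illustrative only: resolve an Option<usize> the way parallelism() is
// documented to behave, falling back to the host's available parallelism,
// or to 1 if that cannot be determined.
fn effective_parallelism(configured: Option<usize>) -> usize {
    configured.unwrap_or_else(|| {
        thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(1)
    })
}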
pub fn retry_durations<'a>(&self) -> impl Iterator<Item = Duration> + use<'a>
Gets an iterator over the retry durations for network operations.
Retries use exponential backoff with a base of 2, starting at 1 second and capped at a maximum duration of 10 minutes.
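For illustration, the documented schedule (powers of two starting at 1 second, capped at 10 minutes) could be reproduced as follows; the iterator returned by retry_durations may differ in detail:

use std::time::Duration;

// Sketch of the documented backoff schedule, not the crate's implementation:
// 1 s, 2 s, 4 s, ... doubling on each retry, capped at 600 s (10 minutes).
fn backoff_schedule(retries: u32) -> impl Iterator<Item = Duration> {
    (0..retries).map(|attempt| {
        let secs = 2u64.saturating_pow(attempt).min(600);
        Duration::from_secs(secs)
    })
}

Each Duration yielded by retry_durations can be used as the delay before the corresponding retry attempt.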