Module s3

AWS S3 SDK function wrappers for caching deployer artifacts

Structs

InstanceFileUrls
Result of uploading instance files to S3
Region
The region to send requests to (see the construction sketch after this list).
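
A minimal construction sketch. It assumes `Region` re-exports the AWS SDK's region type (which has a `new` constructor taking the region name) and that `create_client`, listed under Functions below, accepts it; both the re-export and that signature are assumptions.

```rust
// Sketch only: `Region` is assumed to be the AWS SDK's region type re-exported here.
use aws_sdk_s3::config::Region;

fn main() {
    let region = Region::new("us-east-1"); // the region all S3 requests will target
    // Hypothetical next step (signature assumed):
    // let client = s3::create_client(region).await;
    println!("{region:?}");
}
```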

Enums

UploadSource
Source for S3 upload

Constants

DEPLOYMENTS_PREFIX
Prefix for per-deployment data
MAX_CONCURRENT_HASHES
Maximum number of concurrent file hash operations
MAX_HASH_BUFFER_SIZE
Maximum buffer size for file hashing (32 MB)
PRESIGN_DURATION
Duration for pre-signed URLs (6 hours)
TOOLS_BINARIES_PREFIX
Prefix for tool binaries: tools/binaries/{tool}/{version}/{platform}/{filename} (key construction is sketched after this list)
TOOLS_CONFIGS_PREFIX
Prefix for tool configs: tools/configs/{deployer_version}/{component}/{file}
WGET
Common wget command prefix with retry settings for S3 downloads
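
A small illustration of the documented key layout for tool binaries. The literal prefix string and the example values below are hypothetical, since the actual value of TOOLS_BINARIES_PREFIX (including any trailing separator) is not reproduced on this page.

```rust
// Illustration of the documented layout `tools/binaries/{tool}/{version}/{platform}/{filename}`.
// In the crate, TOOLS_BINARIES_PREFIX supplies the leading path segment; it is written out
// literally here only for the sake of a self-contained example.
fn tool_binary_key(tool: &str, version: &str, platform: &str, filename: &str) -> String {
    format!("tools/binaries/{tool}/{version}/{platform}/{filename}")
}

fn main() {
    let key = tool_binary_key("mytool", "1.2.3", "linux-x86_64", "mytool.tar.gz");
    assert_eq!(key, "tools/binaries/mytool/1.2.3/linux-x86_64/mytool.tar.gz");
}
```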

Functions

cache_and_presign
Caches content to S3 if it does not already exist, then returns a pre-signed URL (see the workflow sketch at the end of this list)
create_client
Creates an S3 client for the specified AWS region
delete_bucket
Deletes a bucket (must be empty first)
delete_bucket_and_contents
Deletes all objects in a bucket and then deletes the bucket itself
delete_bucket_config
Deletes the bucket config file so a new bucket name is generated on next use.
delete_prefix
Deletes all objects under a prefix in S3 using batch delete (up to 1000 objects per request)
ensure_bucket_exists
Ensures the S3 bucket exists, creating it if necessary
get_bucket_name
Gets the bucket name, generating one if it doesn’t exist. The bucket name is stored in ~/.commonware_deployer/bucket.
hash_file
Computes the SHA256 hash of a file and returns it as a hex string. Uses spawn_blocking internally to avoid blocking the async runtime.
hash_files
Computes SHA256 hashes for multiple files concurrently. Returns a map from file path to hex-encoded digest.
is_no_such_bucket_error
Checks if an error is a “bucket does not exist” error
object_exists
Checks if an object exists in S3
presign_url
Generates a pre-signed URL for downloading an object from S3
upload_instance_files
Uploads binary and config files for instances to S3 with digest-based deduplication.
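
A rough end-to-end sketch of how these wrappers might be combined for a single artifact. The calls are written as `s3::…` to refer to this module, and every parameter list, return type, key naming scheme, and the error type below is an assumption drawn from the one-line descriptions above, not the crate's actual API; consult each function's page for the real signatures.

```rust
// Hypothetical workflow; every signature below is an assumption, not the crate's actual API.
use std::path::Path;

async fn publish_artifact(
    region: s3::Region,
    local: &Path,
) -> Result<String, Box<dyn std::error::Error>> {
    let client = s3::create_client(region).await;       // S3 client for the chosen region
    let bucket = s3::get_bucket_name()?;                 // name cached in ~/.commonware_deployer/bucket
    s3::ensure_bucket_exists(&client, &bucket).await?;   // create the bucket on first use
    let digest = s3::hash_file(local).await?;            // SHA256 hex digest, computed off the async runtime
    let key = format!("artifacts/{digest}");             // a digest-addressed key (naming scheme assumed)
    // Upload only if the object is missing, then hand back a URL valid for PRESIGN_DURATION (6 hours).
    let url = s3::cache_and_presign(&client, &bucket, &key, local).await?;
    Ok(url)
}
```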