# Crypsol Storage
Multi-cloud storage library for Rust. Upload files, process images, generate thumbnails — works with AWS S3, GCS, Azure Blob, Cloudflare R2, MinIO, and local filesystem.
## Features
- Upload any file — PDFs, videos, ZIPs, whatever you need
- Image processing with automatic resize and thumbnail generation (Lanczos3)
- Supports JPEG, PNG, GIF, WebP
- Presigned GET/PUT URLs for temporary access (S3)
- File size, image dimension, and content-type validation
- `test_connection()` on every backend — verify credentials and reachability before serving traffic
- Fully async with Tokio
- Optional serde support for API responses
## Installation

```toml
[dependencies]
crypsol-storage = "0.2"
```
Feature flags:
| Flag | Default | Description |
|---|---|---|
| `s3` | yes | AWS S3 / R2 / MinIO |
| `gcs` | no | Google Cloud Storage |
| `azure` | no | Azure Blob Storage |
| `local` | no | Local filesystem (dev/testing) |
| `serde` | no | Serialize/Deserialize on result types |
```toml
crypsol-storage = { version = "0.2", features = ["s3", "local", "serde"] }
```
## Usage

### Setting up a backend
```rust
use crypsol_storage::{S3Backend, StorageService};

let backend = S3Backend::builder()
    .region("us-east-1")
    .bucket("my-bucket")
    .credentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY") // placeholder credentials
    .build()?;

let service = StorageService::new(backend);
```
R2, MinIO, or any S3-compatible service — just pass an endpoint:
```rust
let backend = S3Backend::builder()
    .region("auto") // R2 uses the special "auto" region
    .bucket("my-bucket")
    .credentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
    .endpoint("https://<account-id>.r2.cloudflarestorage.com")
    .public_base_url("https://cdn.example.com")
    .build()?;
```
### Uploading images
Images are resized and a thumbnail is generated automatically.
```rust
use crypsol_storage::ImageUploadConfig;

// Default: 200x200 main, 50x50 thumbnail
let result = service
    .upload_image_with_config(bytes, "avatars/user-1.png", ImageUploadConfig::default())
    .await?;

println!("image: {}", result.url);
println!("thumbnail: {}", result.thumbnail_url);
```
Custom dimensions:
```rust
// Field names illustrative — see the crate docs for the exact struct.
let config = ImageUploadConfig {
    width: 800,
    height: 800,
    thumbnail_width: 100,
    thumbnail_height: 100,
    ..Default::default()
};

let result = service
    .upload_image_with_config(bytes, "avatars/user-1.png", config)
    .await?;
```
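Whether the configured sizes are treated as exact targets or as a fit-within bound is up to the crate; as a rough sketch of the dimension math a fit-within, aspect-preserving resize implies (`fit_within` is a hypothetical helper, not part of this library, and the crate's rounding and Lanczos3 filtering may differ):

```rust
/// Fit (w, h) within a `max` x `max` bound, preserving aspect ratio
/// and never upscaling.
fn fit_within(w: u32, h: u32, max: u32) -> (u32, u32) {
    if w <= max && h <= max {
        return (w, h); // already small enough; no upscaling
    }
    if w >= h {
        // landscape: clamp width, scale height proportionally
        (max, (((h as u64 * max as u64) / w as u64).max(1)) as u32)
    } else {
        // portrait: clamp height, scale width proportionally
        ((((w as u64 * max as u64) / h as u64).max(1)) as u32, max)
    }
}
```

For example, a 4000x2000 input with a 200 px bound comes out at 200x100, while a 120x80 input is left untouched.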
### Uploading files
Any file type works — PDF, MP4, ZIP, DOCX, etc. No processing, just a straight upload.
```rust
let result = service
    .upload_file(pdf_bytes, "docs/report.pdf", "application/pdf")
    .await?;

let result = service
    .upload_file(video_bytes, "media/clip.mp4", "video/mp4")
    .await?;
```
### Download and delete
```rust
let file = service.download_object("docs/report.pdf").await?;
println!("{} bytes", file.data.len());

service.delete_file("docs/report.pdf").await?;

// Images: deletes both main and thumbnail
service.delete_image_with_thumbnail("avatars/user-1.png").await?;
```
### Presigned URLs (S3 only)
```rust
use std::time::Duration;

// Temporary read access (here: one hour)
let url = service
    .presigned_get_url("docs/report.pdf", Duration::from_secs(3600))
    .await?;

// Temporary upload access (here: ten minutes)
let upload = service
    .presigned_upload_url("uploads/incoming.pdf", Duration::from_secs(600))
    .await?;
```
### Connection check
Verify that credentials are valid and the bucket is reachable before serving traffic:
```rust
service.test_connection().await?;
```
## Other backends

### Google Cloud Storage
```rust
use crypsol_storage::{GcsBackend, StorageService};

// Using Application Default Credentials (ADC)
let backend = GcsBackend::builder()
    .bucket("my-bucket")
    .build()
    .await?;

let service = StorageService::new(backend);
```
Or authenticate with a service-account JSON key directly:
```rust
let json = std::fs::read_to_string("service-account.json")?;

let backend = GcsBackend::builder()
    .bucket("my-bucket")
    .credentials_json(json)
    .build()
    .await?;
```
### Azure Blob Storage
```rust
use crypsol_storage::{AzureBackend, StorageService};

let backend = AzureBackend::builder()
    .account("myaccount")
    .container("my-container")
    .access_key("ACCOUNT_KEY") // placeholder account key
    .build()?;

let service = StorageService::new(backend);
```
### Local filesystem
Good for development and tests.
```rust
use crypsol_storage::{LocalBackend, StorageService};

// Base directory and public base URL for serving files
let backend = LocalBackend::new("./storage", "http://localhost:8080/files");
let service = StorageService::new(backend);
```
## Configuration

All limits are controlled through `StorageConfig`:
| Field | Default | Description |
|---|---|---|
| `max_file_size` | 5 MB | Max upload size |
| `max_download_size` | 100 MB | Max download size |
| `max_image_alloc` | 50 MB | Max decoded image memory |
| `max_image_dimension` | 10,000 px | Max image dimension (input and output) |
| `default_cache_control` | `max-age=31536000` | Cache-Control header for uploads |
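The `max_image_alloc` cap guards against decompression bombs: a small compressed file can decode to a huge in-memory bitmap. A back-of-the-envelope version of that check (a sketch, not the crate's actual code) estimates decoded RGBA size as width x height x 4 bytes:

```rust
/// Estimated in-memory size of a decoded RGBA image, 4 bytes per pixel.
fn decoded_bytes(width: u64, height: u64) -> u64 {
    width * height * 4
}

/// Would decoding this image stay under an allocation cap
/// such as `max_image_alloc` (50 MB by default)?
fn within_alloc_limit(width: u64, height: u64, max_alloc: u64) -> bool {
    decoded_bytes(width, height) <= max_alloc
}
```

A 10,000x10,000 image decodes to roughly 400 MB of RGBA, which is why both the dimension and allocation caps exist.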
If you want users to control upload size via env in your app:
```rust
use std::env;

// App-level env var name; pick whatever suits your application.
let max_upload_mb: usize = env::var("MAX_UPLOAD_MB")
    .ok()
    .and_then(|v| v.parse().ok())
    .unwrap_or(5);

let config = StorageConfig {
    max_file_size: max_upload_mb * 1024 * 1024,
    ..Default::default()
};
```
## Environment variables
Credentials are passed through builders rather than read from the environment directly. Most apps will still load them from the environment — see `example.env` for a template.
```rust
use std::env;

let backend = S3Backend::builder()
    .region(env::var("STORAGE_S3_REGION")?)
    .bucket(env::var("STORAGE_S3_BUCKET")?)
    .credentials(env::var("STORAGE_S3_ACCESS_KEY")?, env::var("STORAGE_S3_SECRET_KEY")?)
    .build()?;
```

| Variable | Backend | Required | Description |
|---|---|---|---|
| `STORAGE_S3_REGION` | S3 / R2 / MinIO | yes | Region (`us-east-1`; `auto` for R2) |
| `STORAGE_S3_BUCKET` | S3 / R2 / MinIO | yes | Bucket name |
| `STORAGE_S3_ACCESS_KEY` | S3 / R2 / MinIO | yes | Access key ID |
| `STORAGE_S3_SECRET_KEY` | S3 / R2 / MinIO | yes | Secret access key |
| `STORAGE_S3_ENDPOINT` | R2 / MinIO | no | Custom endpoint URL |
| `STORAGE_S3_PUBLIC_URL` | S3 / R2 / MinIO | no | Public URL override (CDN) |
| `GOOGLE_APPLICATION_CREDENTIALS` | GCS | no | Path to service-account JSON (ADC) |
| `STORAGE_GCS_CREDENTIALS_JSON` | GCS | no | Raw service-account JSON string |
| `STORAGE_GCS_BUCKET` | GCS | yes | Bucket name |
| `STORAGE_GCS_PUBLIC_URL` | GCS | no | Public URL override |
| `STORAGE_AZURE_ACCOUNT` | Azure | yes | Account name |
| `STORAGE_AZURE_CONTAINER` | Azure | yes | Container name |
| `STORAGE_AZURE_ACCESS_KEY` | Azure | yes | Account key |
| `STORAGE_AZURE_PUBLIC_URL` | Azure | no | Public URL override |
| `STORAGE_LOCAL_BASE_DIR` | Local | yes | Storage directory |
| `STORAGE_LOCAL_PUBLIC_URL` | Local | yes | Base URL for file access |
## Error handling

```rust
use crypsol_storage::Error;

match service.upload_image_with_config(bytes, "avatars/user-1.png", config).await {
    Ok(result) => println!("uploaded: {}", result.url),
    // Variant name illustrative — see the crate's Error enum for the full list.
    Err(Error::FileTooLarge { .. }) => eprintln!("exceeds max_file_size"),
    Err(e) => eprintln!("upload failed: {e}"),
}
```
## Backend parity
Not every backend supports every feature:
| Capability | S3 | GCS | Azure | Local |
|---|---|---|---|---|
| Upload headers | yes | yes | partial | no |
| Download content-type | yes | yes | no | guessed |
| Presigned URLs | yes | no | no | no |
| test_connection() | yes | yes | yes | yes |
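For the Local backend, "guessed" means the content type is derived from the object key rather than from stored metadata. A simplified sketch of that kind of extension mapping (the function and its coverage are hypothetical, not the crate's actual table):

```rust
/// Guess a MIME type from a file extension; falls back to a generic
/// binary type when the extension is unknown or missing.
fn guess_content_type(key: &str) -> &'static str {
    let ext = key.rsplit('.').next().map(str::to_ascii_lowercase);
    match ext.as_deref() {
        Some("jpg" | "jpeg") => "image/jpeg",
        Some("png") => "image/png",
        Some("gif") => "image/gif",
        Some("webp") => "image/webp",
        Some("pdf") => "application/pdf",
        Some("mp4") => "video/mp4",
        Some("zip") => "application/zip",
        _ => "application/octet-stream",
    }
}
```

Cloud backends avoid the guesswork by persisting the content type you pass at upload time and returning it on download.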
## License
MIT — see LICENSE.