# Crypsol Storage
Multi-cloud storage library for Rust. Upload files, process images, generate thumbnails — works with AWS S3, GCS, Azure Blob, Cloudflare R2, MinIO, and local filesystem.
[crates.io](https://crates.io/crates/crypsol_storage) · [docs.rs](https://docs.rs/crypsol_storage) · [MIT license](https://opensource.org/licenses/MIT)
## Features
- Upload any file — PDFs, videos, ZIPs, whatever you need
- Image processing with automatic resize and thumbnail generation (Lanczos3)
- Supports JPEG, PNG, GIF, WebP
- Presigned GET/PUT URLs for temporary access (S3)
- File size, image dimension, and content-type validation
- `test_connection()` on every backend — verify credentials and reachability before serving traffic
- Fully async with Tokio
- Optional serde support for API responses
## Installation
```toml
[dependencies]
crypsol_storage = "0.2"
```
Feature flags:
| Flag | Default | Provides |
|------|---------|----------|
| `s3` | **yes** | AWS S3 / R2 / MinIO |
| `gcs` | no | Google Cloud Storage |
| `azure` | no | Azure Blob Storage |
| `local` | no | Local filesystem (dev/testing) |
| `serde` | no | `Serialize`/`Deserialize` on result types |
```toml
crypsol_storage = { version = "0.2", features = ["s3", "local", "serde"] }
```
## Usage
### Setting up a backend
```rust
use crypsol_storage::{S3Backend, StorageConfig, StorageService};
let backend = S3Backend::builder()
    .region("us-east-1")
    .bucket("my-bucket")
    .credentials("AKID...", "secret...")
    .build()?;
let service = StorageService::new(backend, StorageConfig::default());
```
R2, MinIO, or any S3-compatible service — just pass an endpoint:
```rust
let backend = S3Backend::builder()
    .region("auto")
    .bucket("my-bucket")
    .credentials("key", "secret")
    .endpoint("https://account.r2.cloudflarestorage.com")
    .public_base_url("https://cdn.example.com")
    .build()?;
```
### Uploading images
Images are resized and a thumbnail is generated automatically.
```rust
use crypsol_storage::ImageUploadConfig;
// Default: 200x200 main, 50x50 thumbnail
let result = service
    .upload_image_with_config(&image_bytes, "image/jpeg", &ImageUploadConfig::default())
    .await?;
println!("Image: {}", result.url);
println!("Thumbnail: {}", result.thumbnail_url);
```
Custom dimensions:
```rust
let config = ImageUploadConfig {
    width: 800,
    height: 600,
    thumbnail_width: 150,
    thumbnail_height: 150,
    folder: "products".into(),
    maintain_aspect_ratio: true,
};
let result = service
    .upload_image_with_config(&data, "image/png", &config)
    .await?;
```
### Uploading files
Any file type works — PDF, MP4, ZIP, DOCX, etc. No processing, just a straight upload.
```rust
let result = service
    .upload_file(&pdf_bytes, "application/pdf", "documents", "pdf")
    .await?;
let result = service
    .upload_file(&video, "video/mp4", "media", "mp4")
    .await?;
```
### Download and delete
```rust
let file = service.download_object("docs/2026/03/28/report.pdf").await?;
println!("{} bytes", file.data.len());
service.delete_file("docs/2026/03/28/report.pdf").await?;
// Images: deletes both main and thumbnail
service.delete_image_with_thumbnail("profiles/2026/03/28/avatar.jpg").await?;
```
### Presigned URLs (S3 only)
```rust
let url = service.presigned_get_url("private/doc.pdf", 3600).await?;
let upload = service
    .presigned_upload_url("uploads/file.pdf", "application/pdf", 3600)
    .await?;
```
### Connection check
Verify that credentials are valid and the bucket is reachable before serving traffic:
```rust
service.test_connection().await?;
```
## Other backends
### Google Cloud Storage
```rust
use crypsol_storage::{GcsBackend, StorageService, StorageConfig};
// Using Application Default Credentials (ADC)
let backend = GcsBackend::builder()
    .bucket("my-gcs-bucket")
    .build()
    .await?;
let service = StorageService::new(backend, StorageConfig::default());
```
Or authenticate with a service-account JSON key directly:
```rust
let json = std::fs::read_to_string("service-account.json")?;
let backend = GcsBackend::builder()
    .bucket("my-gcs-bucket")
    .credentials_json(json)
    .build()
    .await?;
```
### Azure Blob Storage
```rust
use crypsol_storage::{AzureBackend, StorageService, StorageConfig};
let backend = AzureBackend::builder()
    .account("myaccount")
    .container("mycontainer")
    .access_key("base64key...")
    .build()?;
let service = StorageService::new(backend, StorageConfig::default());
```
### Local filesystem
Good for development and tests.
```rust
use crypsol_storage::{LocalBackend, StorageService, StorageConfig};
let backend = LocalBackend::new("/tmp/storage", "http://localhost:8080/files");
let service = StorageService::new(backend, StorageConfig::default());
```
## Configuration
All limits are controlled through `StorageConfig`:
| Field | Default | Description |
|-------|---------|-------------|
| `max_file_size` | 5 MB | Max upload size |
| `max_download_size` | 100 MB | Max download size |
| `max_image_alloc` | 50 MB | Max decoded image memory |
| `max_image_dimension` | 10,000 px | Max image dimension (input and output) |
| `default_cache_control` | `max-age=31536000` | Cache-Control header for uploads |
If you want the upload limit to be configurable via an environment variable in your app:
```rust
let max_upload_mb: usize = std::env::var("STORAGE_MAX_UPLOAD_MB")
    .ok()
    .and_then(|v| v.parse::<usize>().ok())
    .unwrap_or(5);
let config = StorageConfig {
    max_file_size: max_upload_mb * 1024 * 1024,
    ..StorageConfig::default()
};
```
## Environment variables
Credentials are passed through builders, not read from env directly. Most apps will load them from the environment though — see [`example.env`](example.env) for a template.
```rust
let backend = S3Backend::builder()
    .region(&std::env::var("STORAGE_S3_REGION").unwrap())
    .bucket(&std::env::var("STORAGE_S3_BUCKET").unwrap())
    .credentials(
        &std::env::var("STORAGE_S3_ACCESS_KEY").unwrap(),
        &std::env::var("STORAGE_S3_SECRET_KEY").unwrap(),
    )
    .build()?;
```
| Variable | Backend | Required | Description |
|----------|---------|----------|-------------|
| `STORAGE_S3_REGION` | S3 / R2 / MinIO | yes | Region (`us-east-1`, `auto` for R2) |
| `STORAGE_S3_BUCKET` | S3 / R2 / MinIO | yes | Bucket name |
| `STORAGE_S3_ACCESS_KEY` | S3 / R2 / MinIO | yes | Access key ID |
| `STORAGE_S3_SECRET_KEY` | S3 / R2 / MinIO | yes | Secret access key |
| `STORAGE_S3_ENDPOINT` | R2 / MinIO | no | Custom endpoint URL |
| `STORAGE_S3_PUBLIC_URL` | S3 / R2 / MinIO | no | Public URL override (CDN) |
| `GOOGLE_APPLICATION_CREDENTIALS` | GCS | no | Path to service-account JSON (ADC) |
| `STORAGE_GCS_CREDENTIALS_JSON` | GCS | no | Raw service-account JSON string |
| `STORAGE_GCS_BUCKET` | GCS | yes | Bucket name |
| `STORAGE_GCS_PUBLIC_URL` | GCS | no | Public URL override |
| `STORAGE_AZURE_ACCOUNT` | Azure | yes | Account name |
| `STORAGE_AZURE_CONTAINER` | Azure | yes | Container name |
| `STORAGE_AZURE_ACCESS_KEY` | Azure | yes | Account key |
| `STORAGE_AZURE_PUBLIC_URL` | Azure | no | Public URL override |
| `STORAGE_LOCAL_BASE_DIR` | Local | yes | Storage directory |
| `STORAGE_LOCAL_PUBLIC_URL` | Local | yes | Base URL for file access |
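Since the builders take plain strings and the library never reads the environment itself, a small application-side helper (not part of crypsol_storage; the function name here is illustrative) can check that every required variable is present before constructing a backend, so misconfiguration fails with one clear message instead of a panic mid-build:

```rust
use std::env;

/// Return the names of required configuration variables that are unset or
/// empty. `lookup` abstracts over `std::env::var` so the check is easy to
/// test; this is an application-side helper, not part of the library.
fn missing_vars<F>(required: &[&str], lookup: F) -> Vec<String>
where
    F: Fn(&str) -> Option<String>,
{
    required
        .iter()
        .copied()
        // Treat unset and whitespace-only values the same way.
        .filter(|name| lookup(name).map(|v| v.trim().is_empty()).unwrap_or(true))
        .map(|name| name.to_string())
        .collect()
}

fn main() {
    let required = [
        "STORAGE_S3_REGION",
        "STORAGE_S3_BUCKET",
        "STORAGE_S3_ACCESS_KEY",
        "STORAGE_S3_SECRET_KEY",
    ];
    let missing = missing_vars(&required, |name| env::var(name).ok());
    if missing.is_empty() {
        println!("storage config OK");
    } else {
        eprintln!("missing storage config: {}", missing.join(", "));
        // A real app would bail out here, e.g. std::process::exit(1).
    }
}
```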
## Error handling
```rust
use crypsol_storage::Error;
match service.upload_image_with_config(&data, ct, &cfg).await {
    Ok(result) => println!("Uploaded: {}", result.url),
    Err(Error::InvalidFileType(got, _)) => eprintln!("Bad type: {got}"),
    Err(Error::FileTooLarge(size, max)) => eprintln!("Too large: {size}/{max}"),
    Err(Error::InvalidDimensions(w, h, max)) => eprintln!("{w}x{h} exceeds {max}"),
    Err(Error::ImageProcessing(msg)) => eprintln!("Image error: {msg}"),
    Err(Error::Backend(msg)) => eprintln!("Backend: {msg}"),
    Err(e) => eprintln!("{e}"),
}
```
## Backend parity
Not every backend supports every feature:
| Feature | S3 | GCS | Azure | Local |
|---------|----|-----|-------|-------|
| Upload headers | yes | yes | partial | no |
| Download content-type | yes | yes | no | guessed |
| Presigned URLs | yes | no | no | no |
| `test_connection()` | yes | yes | yes | yes |
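If your application is generic over backends, it can consult a small capability map before attempting an operation that only some backends support. The `Backend` enum and helper below are an application-side sketch that mirrors the parity table above (only the S3-only presigned-URL fact is encoded); they are not part of the library:

```rust
/// Which backend a StorageService was built on. Application-side enum,
/// not exported by crypsol_storage.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Backend {
    S3,
    Gcs,
    Azure,
    Local,
}

/// True if the backend supports presigned GET/PUT URLs.
/// Per the parity table, only S3 (and S3-compatible services) does.
fn supports_presigned_urls(backend: Backend) -> bool {
    matches!(backend, Backend::S3)
}

fn main() {
    for b in [Backend::S3, Backend::Gcs, Backend::Azure, Backend::Local] {
        println!("{b:?}: presigned URLs = {}", supports_presigned_urls(b));
    }
}
```

Your app can then fall back to, say, proxying downloads through its own server when `supports_presigned_urls` returns `false`.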
## License
MIT — see [LICENSE](LICENSE).