# arche

arche is an opinionated backend foundation crate for building production-ready applications with Axum.
It provides a curated set of building blocks commonly required in modern backend services (cloud integrations, databases, authentication, middleware, and logging) so you can focus on business logic instead of repetitive infrastructure wiring.

arche is designed to sit around Axum, not replace it.
## Why arche?
Most backend services end up re-implementing the same infrastructure concerns:
- Cloud SDK setup and ergonomics
- Database connection management
- Authentication primitives
- Middleware patterns
- Logging and tracing configuration
- Common error handling
arche brings these pieces together into a cohesive, Rust-native foundation, built on top of well-established libraries and SDKs.
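Adding the crate to a project is a standard Cargo dependency entry; the version below matches the `2.2.0` release shown in the feature-flag example later in this document, so adjust it to the release you target:

```toml
[dependencies]
arche = "2.2.0"
```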
## What arche provides
### aws
AWS SDK integrations built on official SDKs:
- S3: Client initialization with support for IAM roles or environment-based credentials
- SES: Email sending with SES, including templated emails
- KMS: Key Management Service for encryption/decryption operations
### gcp
Google Cloud Platform integrations:
- Drive: Google Drive client with service account authentication
- Sheets: Google Sheets client with service account authentication
### database
Database connection management:
- Postgres: Connection pooling with `sqlx`, configurable credentials, health checks
- Redis: Connection pooling with `bb8`, async operations, health checks
### jwt
JWT utilities for authentication and authorization:
- Token generation and verification (HS256)
- Access/refresh token pair generation
- Token expiry helpers
- Custom claims support
### csv

Async CSV processing via a single reusable `CsvClient`:
- Batch: `read_all`, `read_file`, `write_all`, `write_file` for loading everything at once
- Streaming: `reader`/`writer` factories for memory-efficient record-by-record I/O
- Configurable delimiter, quoting, escaping, headers, and more
### error
Axum-compatible error handling:

- `AppError` enum with HTTP error variants covering 400, 401, 403, 404, 409, 422, 424, 500, and 503
- Automatic `IntoResponse` conversion with structured JSON bodies
- `InternalError` responses are sanitized by default (no leaked SQL or infrastructure details)
- Optional `verbose-errors` feature flag for dev/staging diagnostics
- `DependencyFailed` variant for upstream service failures (OpenSearch, Shopify, S3, etc.)
### utils
Common utilities for backend services:
- Timestamp validation and conversion helpers
- `OffsetDateTime` utilities (Unix, ISO 8601)
- Pagination parameter types
All components are modular and explicit—nothing is hidden or magical.
## Module Reference
### AWS (`arche::aws`)

#### S3

Initialize an S3 client with automatic credential management (the import path below is inferred from the module name and may differ):

```rust
use arche::aws::get_s3_client;

let client = get_s3_client().await;
```
Environment Variables:

- `S3_CRED_SOURCE`: `"IAM"` (default) or `"env"` for environment-based credentials
- `S3_ACCESS_KEY_ID`: Required when using the `"env"` credential source
- `S3_SECRET_ACCESS_KEY`: Required when using the `"env"` credential source
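For the `"env"` credential source, the variables above can be exported before the service starts; the values here are illustrative placeholders, not real credentials:

```shell
# Opt out of IAM role resolution and supply credentials directly
export S3_CRED_SOURCE=env
export S3_ACCESS_KEY_ID=EXAMPLEKEYID
export S3_SECRET_ACCESS_KEY=examplesecret
```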
#### KMS

Encrypt and decrypt data using AWS Key Management Service. Names and arguments lost to formatting are reconstructed below and marked as assumptions:

```rust
use arche::aws::get_kms_client; // grouped imports in the original were lost; path inferred

// Initialize with default region (ap-south-1)
let client = get_kms_client().await;
let kms = Kms::new(client); // `Kms` type name and constructor argument assumed

// Or with a specific region
let kms = Kms::new_with_region("ap-south-1").await; // region value illustrative

// Encrypt data
let plaintext = b"sensitive data";
let ciphertext = kms.encrypt(plaintext).await?; // key-id argument, if any, omitted

// Decrypt data
let decrypted = kms.decrypt(&ciphertext).await?;
```
Credentials: Uses IAM role credentials by default (recommended for EC2/ECS/Lambda).
### GCP (`arche::gcp`)

#### Drive

```rust
use arche::gcp::get_drive_client; // path inferred from the module name

let drive = get_drive_client().await?;
```
Environment Variables:

- `GCP_DRIVE_KEY`: Path to service account JSON key file
#### Sheets

```rust
use arche::gcp::get_sheets_client; // path inferred from the module name

let sheets = get_sheets_client().await?;
```
Environment Variables:

- `GCP_SHEETS_KEY`: Path to service account JSON key file
### Database (`arche::database`)

#### Postgres

```rust
use arche::database::{get_pg_pool, test_pg}; // grouped import reconstructed

let pool = get_pg_pool().await;
let is_healthy = test_pg(&pool).await; // pool argument assumed
```
Environment Variables:

- `PG_HOST`: Database host
- `PG_PORT`: Database port
- `PG_DATABASE`: Database name
- `PG_MAX_CONN`: Maximum connections in pool
- `PG_CREDENTIALS`: JSON string with `username` and `password` (alternative)
- `PG_USERNAME`: Username (if not using `PG_CREDENTIALS`)
- `PG_PASSWORD`: Password (if not using `PG_CREDENTIALS`)
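Since `PG_CREDENTIALS` takes a single JSON object, a sketch of both configuration styles may help (hostnames and secrets are placeholders):

```shell
# Separate variables...
export PG_USERNAME=app
export PG_PASSWORD=example-password

# ...or one JSON blob with `username` and `password` keys
export PG_CREDENTIALS='{"username":"app","password":"example-password"}'
```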
#### Redis

```rust
use arche::database::{get_redis_pool, test_redis}; // grouped import reconstructed

let pool = get_redis_pool().await;
let is_healthy = test_redis(&pool).await; // pool argument assumed
```
Environment Variables:

- `REDIS_HOST`: Redis host
- `REDIS_PORT`: Redis port
- `REDIS_MAX_CONN`: Maximum connections in pool
### JWT (`arche::jwt`)

The imports, claim fields, and function arguments below were lost to formatting; this reconstruction is a sketch, not the exact API:

```rust
use arche::jwt::{generate_tokens, verify_token, Claims}; // item names partly assumed

// Generate tokens
let access_claims = Claims { /* subject, expiry, custom claims (struct body lost) */ };
let refresh_claims = Claims { /* ... */ };
let tokens = generate_tokens(access_claims, refresh_claims, secret)?; // arguments assumed

// Verify token
let token_data = verify_token(&tokens.access_token, secret)?; // function name assumed
```
### CSV (`arche::csv`)

Async CSV processing powered by `csv-async`. Create one `CsvClient`, reuse it everywhere (builder arguments below are illustrative):

```rust
use arche::csv::CsvClient;

// Default config (comma-delimited, with headers)
let csv = CsvClient::new();

// Or customize
let csv = CsvClient::new()
    .delimiter(b';')
    .has_headers(true)
    .flexible(true);
```
#### Batch reading

```rust
// From bytes (the target type is an illustrative serde struct)
let data = b"name,age,city\nAlice,30,NYC\nBob,25,LA";
let records: Vec<Person> = csv.read_all(data).await?;

// From a file (path illustrative)
let records: Vec<Person> = csv.read_file("people.csv").await?;

// Raw string records (no serde)
let raw_records = csv.read_records(data).await?;
```
#### Batch writing

```rust
let records = vec![/* serde-serializable values (the original vec! body was lost) */];

// Write to in-memory bytes
let bytes: Vec<u8> = csv.write_all(&records).await?;

// Write to a file (path and argument order assumed)
csv.write_file("out.csv", &records).await?;
```
#### Streaming (memory-efficient)

```rust
// Record-by-record reading (stream item handling reconstructed)
let mut stream = csv.reader_from_file("people.csv").await?;
while let Some(record) = stream.next().await {
    // process `record` here
}

// Record-by-record writing (arguments illustrative)
let mut writer = csv.writer_to_file("out.csv").await?;
writer.serialize(&record).await?;
writer.write_fields(&["name", "age"]).await?;
writer.finish().await?;
```
### Error (`arche::error`)

Constructor arguments below are illustrative; signatures follow the variant table underneath:

```rust
use arche::error::AppError;
use axum::response::IntoResponse;

// Inside an async handler:

// 400: bad request with details
let error = AppError::bad_request(errors, "message", "description");

// 404: resource not found
let error = AppError::not_found("user");

// 409: unique constraint violation
let error = AppError::conflict("email already exists");

// 424: upstream dependency failed (retryable)
let error = AppError::dependency_failed("opensearch", "request timed out");

// 424: upstream dependency failed (permanent)
let error = AppError::dependency_failed_permanent("shopify", "invalid credentials");

// 500: internal error (response body is sanitized by default)
let error = AppError::internal_error(err, "failed to process request");
```
Error Variants:

| Variant | Status | Constructor |
|---|---|---|
| `BadRequest` | 400 | `bad_request(errors, message, description)` |
| `Unauthorized` | 401 | Direct construction |
| `Forbidden` | 403 | Direct construction |
| `NotFound` | 404 | `not_found(resource)` |
| `Conflict` | 409 | `conflict(message)` |
| `UnprocessableEntity` | 422 | `unprocessable_entity(errors, message, description)` |
| `DependencyFailed` | 424 | `dependency_failed(upstream, detail)` |
| `InternalError` | 500 | `internal_error(error, message)` |
| `Unavailable` | 503 | Direct construction |
Feature Flags:

- `verbose-errors`: When enabled, `InternalError` returns the raw error string to the client instead of a sanitized message. Intended for dev/staging only.

```toml
# In your Cargo.toml (dev/staging only)
arche = { version = "2.2.0", features = ["verbose-errors"] }
```
### Utils (`arche::utils`)

Item names and arguments below are partly reconstructed:

```rust
use arche::utils::{validate_timestamp, PaginationParams}; // grouped import reconstructed
use time::OffsetDateTime;

// Timestamp validation
let is_future = validate_timestamp(ts); // argument assumed

// DateTime conversion
let iso_string = offset_dt.to_iso_string()?;

// Pagination
let params = PaginationParams { /* page, limit, ... (struct body lost) */ };
```
## What arche is not
- ❌ A framework that replaces Axum
- ❌ A code generator or project template
- ❌ A monolithic abstraction over third-party libraries
- ❌ A "do-everything" utils crate
arche favors composition over abstraction.
## Design principles
- Explicit over implicit
- Composition over inheritance
- Thin wrappers over official SDKs
- Production-first defaults
- No global state
- Async-first