# crypsol_logger

Structured, production-grade async logger for Rust services, with CloudWatch, HTTP push (Loki / Elasticsearch / custom), file, and console backends.
## Features

- Structured JSON logging with key-value fields
- 4 backends: CloudWatch, HTTP push, local files, console
- Automatic batching with configurable size and timeout
- Loki, JSON, and NDJSON output formats
- Custom labels for log aggregation
- Thread-safe, high-performance design
- Minimal configuration: just set environment variables
## Installation

```toml
[dependencies]
crypsol_logger = "0.3.1"
```

The `Level` enum is re-exported, so there is no need to add the `log` crate separately.
## Setup & Usage

```rust
use crypsol_logger::{log, Level};

// Macro argument shapes below are illustrative sketches; see the crate
// docs for the exact syntax.
#[tokio::main]
async fn main() {
    log!(Level::Info, "service started");
    log!(Level::Warn, "cache miss rate is high");
    log!(Level::Error, "failed to connect to database");
}
```

Attach structured key-value fields with a `;` separator:

```rust
log!(Level::Info, "user logged in"; "user_id" => 42);
log!(Level::Error, "payment failed"; "order_id" => "A-1001"; "retries" => 3);
```

The fields are serialized into the JSON output alongside the message.

Custom log stream:

```rust
// Stream name argument is illustrative.
log_custom!("payments", Level::Info, "charge captured");
```
## Environment Variables

### Backend Selection (priority: CloudWatch > HTTP > File > Console)

| Variable | Default | Description |
|---|---|---|
| `LOG_TO_CLOUDWATCH` | `false` | Push logs to AWS CloudWatch |
| `LOG_TO_HTTP` | `false` | Push logs via HTTP (Loki, Elasticsearch, etc.) |
| `LOG_TO_FILE` | `false` | Write logs to local disk files |
| `LOG_SHOW_LOCATION` | `false` | Include `file:line` in output |
| `AWS_LOG_GROUP` | `default` | Log group name (CloudWatch group / HTTP job label) |

If none are enabled, logs print to the console (stdout).
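The precedence above can be sketched as a small function. `select_backend` and the `vars` slice standing in for the process environment are hypothetical; only the variable names and priority order come from the table:

```rust
// Hypothetical helper mirroring the documented precedence:
// CloudWatch > HTTP > File > Console. `vars` stands in for the
// process environment so the logic is easy to test.
fn flag(vars: &[(&str, &str)], name: &str) -> bool {
    vars.iter()
        .any(|(k, v)| *k == name && v.eq_ignore_ascii_case("true"))
}

fn select_backend(vars: &[(&str, &str)]) -> &'static str {
    if flag(vars, "LOG_TO_CLOUDWATCH") {
        "cloudwatch"
    } else if flag(vars, "LOG_TO_HTTP") {
        "http"
    } else if flag(vars, "LOG_TO_FILE") {
        "file"
    } else {
        "console" // fallback when nothing is enabled
    }
}
```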
### CloudWatch Backend (`LOG_TO_CLOUDWATCH=true`)

| Variable | Default | Required |
|---|---|---|
| `CLOUDWATCH_AWS_ACCESS_KEY` | (none) | Yes |
| `CLOUDWATCH_AWS_SECRET_KEY` | (none) | Yes |
| `CLOUDWATCH_AWS_REGION` | `us-east-1` | No |
| `AWS_LOG_GROUP` | `default` | No |
| `LOG_BATCH_SIZE` | `10` | No |
| `BATCH_TIMEOUT` | `5` (seconds) | No |
### HTTP Push Backend (`LOG_TO_HTTP=true`)

| Variable | Default | Required |
|---|---|---|
| `LOG_HTTP_ENDPOINT` | `http://localhost:3100/loki/api/v1/push` | No |
| `LOG_HTTP_FORMAT` | `loki` | No |
| `LOG_HTTP_BATCH_SIZE` | `10` | No |
| `LOG_HTTP_TIMEOUT_SECS` | `5` | No |
| `LOG_HTTP_LABELS` | (none) | No |

Supported formats:

| Format | Compatible With | Example Endpoint |
|---|---|---|
| `loki` | Grafana Loki | `http://loki:3100/loki/api/v1/push` |
| `json` | Custom APIs, Logstash | `http://logserver:8080/logs` |
| `ndjson` | Elasticsearch, OpenSearch | `http://es:9200/logs/_bulk` |

Custom labels (optional): `LOG_HTTP_LABELS=env=production,service=my-api`
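To illustrate the `ndjson` format, here is a minimal sketch of serializing a batch as newline-delimited JSON, one object per line, as NDJSON-style endpoints expect. The field names are illustrative and real serialization must escape strings; the crate's actual payload layout may differ:

```rust
// Illustrative only: one JSON object per line. Messages are assumed to
// contain no characters that need JSON escaping.
fn to_ndjson(entries: &[(&str, &str)]) -> String {
    entries
        .iter()
        .map(|(level, msg)| format!(r#"{{"level":"{level}","message":"{msg}"}}"#))
        .collect::<Vec<_>>()
        .join("\n")
}
```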
### File Backend (`LOG_TO_FILE=true`)

| Variable | Default | Description |
|---|---|---|
| `LOG_FILE_DIR` | `logs` | Directory path for log files |
| `LOG_RETENTION_DAYS` | `30` | Days to keep log files |
| `LOG_RETENTION_SIZE_MB` | `512` | Max total size before cleanup |
| `LOG_DELETE_BATCH_MB` | `100` | Amount deleted when the limit is hit |
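The size-based retention policy can be modeled as a pure function over (age, size) pairs. This is a hypothetical sketch of the documented behavior, with `LOG_RETENTION_SIZE_MB` as `limit_mb` and `LOG_DELETE_BATCH_MB` as `batch_mb`, not the crate's actual code:

```rust
// Hypothetical model: once the total size exceeds `limit_mb`, select the
// oldest files until roughly `batch_mb` worth has been chosen for
// deletion. Each file is (age_days, size_mb).
fn files_to_delete(
    mut files: Vec<(u64, u64)>,
    limit_mb: u64,
    batch_mb: u64,
) -> Vec<(u64, u64)> {
    let total: u64 = files.iter().map(|&(_, size)| size).sum();
    if total <= limit_mb {
        return Vec::new(); // under the limit: nothing to clean up
    }
    files.sort_by(|a, b| b.0.cmp(&a.0)); // oldest (largest age) first
    let mut freed = 0;
    let mut doomed = Vec::new();
    for file in files {
        if freed >= batch_mb {
            break;
        }
        freed += file.1;
        doomed.push(file);
    }
    doomed
}
```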
## Quick Start Examples

### Loki (Grafana stack)

```bash
LOG_TO_HTTP=true
LOG_HTTP_ENDPOINT=http://localhost:3100/loki/api/v1/push
LOG_HTTP_FORMAT=loki
AWS_LOG_GROUP=my_service
```

### Elasticsearch

```bash
LOG_TO_HTTP=true
LOG_HTTP_ENDPOINT=http://elasticsearch:9200/logs/_bulk
LOG_HTTP_FORMAT=ndjson
AWS_LOG_GROUP=my_service
```

### CloudWatch

```bash
LOG_TO_CLOUDWATCH=true
CLOUDWATCH_AWS_ACCESS_KEY=AKIA...
CLOUDWATCH_AWS_SECRET_KEY=JdOT...
CLOUDWATCH_AWS_REGION=us-east-1
AWS_LOG_GROUP=my_service
```

### Local File

```bash
LOG_TO_FILE=true
LOG_FILE_DIR=logs
AWS_LOG_GROUP=my_service
```
## Runtime Requirements

This crate relies on Tokio for all async backends (CloudWatch, HTTP, File). The `log!` and `log_custom!` macros call `tokio::spawn` internally, so the calling code must be running inside a Tokio runtime. In practice this means your binary needs `#[tokio::main]` or an equivalent runtime handle.

The console fallback (when no backend is enabled) does not require Tokio.
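As a sketch, a binary that cannot use `#[tokio::main]` can satisfy this requirement by entering a manually built runtime; the `log!` call and its arguments below are illustrative, not the crate's confirmed syntax:

```rust
use crypsol_logger::{log, Level};
use tokio::runtime::Runtime;

fn main() {
    // Build a Tokio runtime by hand and enter it, so the tokio::spawn
    // inside log! has a runtime to attach to.
    let rt = Runtime::new().expect("failed to build Tokio runtime");
    let _guard = rt.enter();
    log!(Level::Info, "service started"); // illustrative macro arguments
    rt.block_on(async {
        // application work
    });
}
```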
## Reliability and Delivery

All backends operate on an at-most-once delivery model: a log entry is formatted and dispatched to a bounded async channel, and if the backend fails to deliver it, the entry is lost.

Per-backend failure behavior:

- **CloudWatch** retries on AWS `ThrottlingException` (up to 3 attempts with exponential backoff) and once on `InvalidSequenceTokenException`. Other errors are logged to stderr and the entry is dropped. If the initial credential verification fails at startup, all subsequent CloudWatch log calls return immediately without sending.
- **HTTP** (Loki, Elasticsearch, custom) does not retry. Non-2xx responses and network errors are printed to stderr and the batch is discarded.
- **File** returns IO errors to the caller, but the macros discard those errors internally, so a disk-full or permission-denied condition results in silent loss.
- **Console** writes to stdout synchronously and does not go through the async channel.

Ordering is preserved within a single log stream and batch, but concurrent batches may arrive out of order at the backend.
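The at-most-once pipeline can be sketched with a std-only bounded channel; `dispatch` and its delivery callback are hypothetical, and the real crate uses a bounded Tokio MPSC channel with an async worker instead of a thread:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Std-only sketch of at-most-once dispatch: entries flow through a
// bounded channel (capacity 1000, matching the limit documented below)
// to a worker that attempts delivery exactly once; failures drop the
// entry. Returns the number of entries actually delivered.
fn dispatch(entries: Vec<String>, deliver: fn(&str) -> bool) -> usize {
    let (tx, rx) = sync_channel::<String>(1000);
    let worker = thread::spawn(move || {
        let mut delivered = 0;
        for entry in rx {
            if deliver(&entry) {
                delivered += 1;
            } // on failure the entry is simply lost
        }
        delivered
    });
    for entry in entries {
        tx.send(entry).unwrap(); // blocks when the buffer is full
    }
    drop(tx); // close the channel so the worker drains and exits
    worker.join().unwrap()
}
```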
## Operational Limits

Both the CloudWatch and HTTP backends buffer log entries through a bounded Tokio MPSC channel with a fixed capacity of 1000 entries. If the backend cannot keep up with the emission rate, the channel fills and subsequent `log!` calls await until space opens up. This means sustained logging pressure with a slow or unreachable backend can introduce latency into your application's async tasks.

Batch size and flush timeout are tunable per backend via environment variables (see above). Larger batches reduce network calls at the cost of higher per-flush latency and memory usage; smaller batches provide more frequent delivery but increase overhead.

For high-throughput services (above 1k logs/sec), consider increasing `LOG_HTTP_BATCH_SIZE` / `LOG_BATCH_SIZE` and adjusting the timeout to match your latency tolerance.
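The size-or-timeout flush rule can be sketched with a std-only batcher; `run_batcher` is hypothetical, and the crate's async backends do the equivalent with Tokio timers:

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Duration;

// Hypothetical batcher: flush when the batch reaches `batch_size`, or
// when `timeout` elapses with entries pending; a final flush drains any
// remainder when the sender side is dropped.
fn run_batcher(
    rx: Receiver<String>,
    batch_size: usize,
    timeout: Duration,
    mut flush: impl FnMut(&[String]),
) {
    let mut batch: Vec<String> = Vec::new();
    loop {
        match rx.recv_timeout(timeout) {
            Ok(entry) => {
                batch.push(entry);
                if batch.len() >= batch_size {
                    flush(&batch); // size-triggered flush
                    batch.clear();
                }
            }
            Err(RecvTimeoutError::Timeout) => {
                if !batch.is_empty() {
                    flush(&batch); // timed flush of a partial batch
                    batch.clear();
                }
            }
            Err(RecvTimeoutError::Disconnected) => {
                if !batch.is_empty() {
                    flush(&batch); // final flush on shutdown
                }
                return;
            }
        }
    }
}
```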
## License

MIT © 2025 Crypsol
## Also Available in Python

A Python version of this logger, which integrates easily with FastAPI, Flask, and other WSGI/ASGI frameworks: cloudwatchpy, a Python logger for AWS CloudWatch.