eat-rocks 0.1.1

restore a rocks backup from s3-compatible object storage

rocks has built-in backup/restore, but its restore function expects a local filesystem. bridging object storage to a filesystem works, but it's really annoying.

eat-rocks talks to object storage directly (and with high default concurrency) so it can be pretty fast at getting your database back.

cli

# restore latest from a public bucket (subdomain style)
eat-rocks --endpoint https://constellation.t3.storage.dev restore /data/rocksdb

# list available backups
eat-rocks --endpoint https://constellation.t3.storage.dev list

# restore a specific backup
eat-rocks --endpoint https://constellation.t3.storage.dev restore \
  --backup-id 3 /data/rocksdb

# authenticated access (or set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
eat-rocks \
  --endpoint https://constellation.t3.storage.dev \
  --access-key-id AKIA... \
  --secret-access-key wJal... \
  restore /data/rocksdb

# path-style (minio, localstack)
eat-rocks \
  --endpoint http://localhost:9000 \
  --bucket mybucket \
  restore /data/rocksdb

# limit concurrency (poor connection, etc)
eat-rocks --endpoint https://constellation.t3.storage.dev \
  restore --concurrency 8 /data/rocksdb

lib

use eat_rocks::{public_bucket, restore};

let store = public_bucket("https://constellation.t3.storage.dev")?;
restore(store, "", "/data/rocksdb".as_ref(), Default::default()).await?;

or bring your own ObjectStore implementation (S3, GCS, Azure, local filesystem, ...):

let store: Arc<dyn ObjectStore> = /* up to you */;
eat_rocks::restore(store, "", "/data/rocksdb".as_ref(), Default::default()).await?;

features

  • cli: enable deps to build the binary
  • easy (default): public_bucket() convenience function with aws store backend
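as a Cargo.toml dependency this might look like (a sketch; with default features you get easy and therefore public_bucket(), and disabling default features drops the aws store backend):

```toml
[dependencies]
eat-rocks = "0.1"

# lib-only, bring your own ObjectStore:
# eat-rocks = { version = "0.1", default-features = false }
```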

cli build

the cli feature flag is required to build the cli

cargo build --release --features cli

license

Dual-licensed under MIT or Apache-2.0, at your option.

SPDX-License-Identifier: MIT OR Apache-2.0