commonware_storage/archive/prunable/mod.rs
1//! A prunable key-value store for ordered data.
2//!
3//! Data is stored in [crate::journal::variable::Journal] (an append-only log) and the location of
4//! written data is stored in-memory by both index and key (via [crate::index::Index]) to enable
5//! **single-read lookups** for both query patterns over archived data.
6//!
7//! _Notably, [Archive] does not make use of compaction nor on-disk indexes (and thus has no read
8//! nor write amplification during normal operation).
9//!
10//! # Format
11//!
12//! [Archive] stores data in the following format:
13//!
14//! ```text
15//! +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
16//! | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |10 |11 |12 | ... |
17//! +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
18//! | Index(u64) | Key(Fixed Size) | Data |
19//! +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
20//! ```
21//!
22//! # Uniqueness
23//!
24//! [Archive] assumes all stored indexes and keys are unique. If the same key is associated with
25//! multiple `indices`, there is no guarantee which value will be returned. If the key is written to
26//! an existing `index`, [Archive] will return an error.
27//!
28//! ## Conflicts
29//!
30//! Because a translated representation of a key is only ever stored in memory, it is possible (and
31//! expected) that two keys will eventually be represented by the same translated key. To handle this
32//! case, [Archive] must check the persisted form of all conflicting keys to ensure data from the
33//! correct key is returned. To support efficient checks, [Archive] (via [crate::index::Index])
34//! keeps a linked list of all keys with the same translated prefix:
35//!
36//! ```rust
37//! struct Record {
38//! index: u64,
39//!
40//! next: Option<Box<Record>>,
41//! }
42//! ```
43//!
44//! _To avoid random memory reads in the common case, the in-memory index directly stores the first
45//! item in the linked list instead of a pointer to the first item._
46//!
47//! `index` is the key to the map used to serve lookups by `index` that stores the location of data
48//! in a given `Blob` (selected by `section = index & section_mask` to minimize the number of open
49//! [crate::journal::variable::Journal]s):
50//!
51//! ```rust
52//! struct Location {
53//! offset: u32,
54//! len: u32,
55//! }
56//! ```
57//!
58//! _If the [Translator] provided by the caller does not uniformly distribute keys across the key
59//! space or uses a translated representation that means keys on average have many conflicts,
60//! performance will degrade._
61//!
62//! ## Memory Overhead
63//!
64//! [Archive] uses two maps to enable lookups by both index and key. The memory used to track each
65//! index item is `8 + 4 + 4` (where `8` is the index, `4` is the offset, and `4` is the length).
66//! The memory used to track each key item is `~translated(key).len() + 16` bytes (where `16` is the
67//! size of the `Record` struct). This means that an [Archive] employing a [Translator] that uses
68//! the first `8` bytes of a key will use `~40` bytes to index each key.
69//!
70//! # Pruning
71//!
72//! [Archive] supports pruning up to a minimum `index` using the `prune` method. After `prune` is
73//! called on a `section`, all interaction with a `section` less than the pruned `section` will
74//! return an error.
75//!
76//! ## Lazy Index Cleanup
77//!
78//! Instead of performing a full iteration of the in-memory index, storing an additional in-memory
79//! index per `section`, or replaying a `section` of [crate::journal::variable::Journal], [Archive]
80//! lazily cleans up the [crate::index::Index] after pruning. When a new key is stored that overlaps
81//! (same translated value) with a pruned key, the pruned key is removed from the in-memory index.
82//!
83//! # Single Operation Reads
84//!
85//! To enable single operation reads (i.e. reading all of an item in a single call to
86//! [commonware_runtime::Blob]), [Archive] caches the length of each item in its in-memory index.
87//! While it increases the footprint per key stored, the benefit of only ever performing a single
88//! operation to read a key (when there are no conflicts) is worth the tradeoff.
89//!
90//! # Compression
91//!
92//! [Archive] supports compressing data before storing it on disk. This can be enabled by setting
93//! the `compression` field in the `Config` struct to a valid `zstd` compression level. This setting
94//! can be changed between initializations of [Archive], however, it must remain populated if any
95//! data was written with compression enabled.
96//!
97//! # Querying for Gaps
98//!
99//! [Archive] tracks gaps in the index space to enable the caller to efficiently fetch unknown keys
100//! using `next_gap`. This is a very common pattern when syncing blocks in a blockchain.
101//!
102//! # Example
103//!
104//! ```rust
105//! use commonware_runtime::{Spawner, Runner, deterministic, buffer::PoolRef};
106//! use commonware_cryptography::{Hasher as _, Sha256};
107//! use commonware_storage::{
108//! translator::FourCap,
109//! archive::{
110//! Archive as _,
111//! prunable::{Archive, Config},
112//! },
113//! };
114//! use commonware_utils::{NZUsize, NZU64};
115//!
116//! let executor = deterministic::Runner::default();
117//! executor.start(|context| async move {
118//! // Create an archive
119//! let cfg = Config {
120//! translator: FourCap,
121//! partition: "demo".into(),
122//! compression: Some(3),
123//! codec_config: (),
124//! items_per_section: NZU64!(1024),
125//! write_buffer: NZUsize!(1024 * 1024),
126//! replay_buffer: NZUsize!(4096),
127//! buffer_pool: PoolRef::new(NZUsize!(1024), NZUsize!(10)),
128//! };
129//! let mut archive = Archive::init(context, cfg).await.unwrap();
130//!
131//! // Put a key
132//! archive.put(1, Sha256::hash(b"data"), 10).await.unwrap();
133//!
134//! // Close the archive (also closes the journal)
135//! archive.close().await.unwrap();
136//! });
137//! ```
138
139use crate::translator::Translator;
140use commonware_runtime::buffer::PoolRef;
141use std::num::{NonZeroU64, NonZeroUsize};
142
143mod storage;
144pub use storage::Archive;
145
146/// Configuration for [Archive] storage.
147#[derive(Clone)]
148pub struct Config<T: Translator, C> {
149 /// Logic to transform keys into their index representation.
150 ///
151 /// [Archive] assumes that all internal keys are spread uniformly across the key space.
152 /// If that is not the case, lookups may be O(n) instead of O(1).
153 pub translator: T,
154
155 /// The partition to use for the archive's [crate::journal] storage.
156 pub partition: String,
157
158 /// The compression level to use for the archive's [crate::journal] storage.
159 pub compression: Option<u8>,
160
161 /// The [commonware_codec::Codec] configuration to use for the value stored in the archive.
162 pub codec_config: C,
163
164 /// The number of items per section (the granularity of pruning).
165 pub items_per_section: NonZeroU64,
166
167 /// The amount of bytes that can be buffered in a section before being written to a
168 /// [commonware_runtime::Blob].
169 pub write_buffer: NonZeroUsize,
170
171 /// The buffer size to use when replaying a [commonware_runtime::Blob].
172 pub replay_buffer: NonZeroUsize,
173
174 /// The buffer pool to use for the archive's [crate::journal] storage.
175 pub buffer_pool: PoolRef,
176}
177
178#[cfg(test)]
179mod tests {
180 use super::*;
181 use crate::{
182 archive::{Archive as _, Error, Identifier},
183 journal::Error as JournalError,
184 translator::{FourCap, TwoCap},
185 };
186 use commonware_codec::{varint::UInt, DecodeExt, EncodeSize, Error as CodecError};
187 use commonware_macros::test_traced;
188 use commonware_runtime::{deterministic, Blob, Metrics, Runner, Storage};
189 use commonware_utils::{sequence::FixedBytes, NZUsize, NZU64};
190 use rand::Rng;
191 use std::collections::BTreeMap;
192
193 const DEFAULT_ITEMS_PER_SECTION: u64 = 65536;
194 const DEFAULT_WRITE_BUFFER: usize = 1024;
195 const DEFAULT_REPLAY_BUFFER: usize = 4096;
196 const PAGE_SIZE: NonZeroUsize = NZUsize!(1024);
197 const PAGE_CACHE_SIZE: NonZeroUsize = NZUsize!(10);
198
199 fn test_key(key: &str) -> FixedBytes<64> {
200 let mut buf = [0u8; 64];
201 let key = key.as_bytes();
202 assert!(key.len() <= buf.len());
203 buf[..key.len()].copy_from_slice(key);
204 FixedBytes::decode(buf.as_ref()).unwrap()
205 }
206
207 #[test_traced]
208 fn test_archive_compression_then_none() {
209 // Initialize the deterministic context
210 let executor = deterministic::Runner::default();
211 executor.start(|context| async move {
212 // Initialize the archive
213 let cfg = Config {
214 partition: "test_partition".into(),
215 translator: FourCap,
216 codec_config: (),
217 compression: Some(3),
218 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
219 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
220 items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
221 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
222 };
223 let mut archive = Archive::init(context.clone(), cfg.clone())
224 .await
225 .expect("Failed to initialize archive");
226
227 // Put the key-data pair
228 let index = 1u64;
229 let key = test_key("testkey");
230 let data = 1;
231 archive
232 .put(index, key.clone(), data)
233 .await
234 .expect("Failed to put data");
235
236 // Close the archive
237 archive.close().await.expect("Failed to close archive");
238
239 // Initialize the archive again without compression
240 let cfg = Config {
241 partition: "test_partition".into(),
242 translator: FourCap,
243 codec_config: (),
244 compression: None,
245 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
246 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
247 items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
248 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
249 };
250 let result = Archive::<_, _, FixedBytes<64>, i32>::init(context, cfg.clone()).await;
251 assert!(matches!(
252 result,
253 Err(Error::Journal(JournalError::Codec(CodecError::EndOfBuffer)))
254 ));
255 });
256 }
257
258 #[test_traced]
259 fn test_archive_record_corruption() {
260 // Initialize the deterministic context
261 let executor = deterministic::Runner::default();
262 executor.start(|context| async move {
263 // Initialize the archive
264 let cfg = Config {
265 partition: "test_partition".into(),
266 translator: FourCap,
267 codec_config: (),
268 compression: None,
269 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
270 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
271 items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
272 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
273 };
274 let mut archive = Archive::init(context.clone(), cfg.clone())
275 .await
276 .expect("Failed to initialize archive");
277
278 let index = 1u64;
279 let key = test_key("testkey");
280 let data = 1;
281
282 // Put the key-data pair
283 archive
284 .put(index, key.clone(), data)
285 .await
286 .expect("Failed to put data");
287
288 // Close the archive
289 archive.close().await.expect("Failed to close archive");
290
291 // Corrupt the value
292 let section = (index / DEFAULT_ITEMS_PER_SECTION) * DEFAULT_ITEMS_PER_SECTION;
293 let (blob, _) = context
294 .open("test_partition", §ion.to_be_bytes())
295 .await
296 .unwrap();
297 let value_location = 4 /* journal size */ + UInt(1u64).encode_size() as u64 /* index */ + 64 + 4 /* value length */;
298 blob.write_at(b"testdaty".to_vec(), value_location).await.unwrap();
299 blob.sync().await.unwrap();
300
301 // Initialize the archive again
302 let archive = Archive::<_, _, FixedBytes<64>, i32>::init(
303 context,
304 Config {
305 partition: "test_partition".into(),
306 translator: FourCap,
307 codec_config: (),
308 compression: None,
309 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
310 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
311 items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
312 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
313 },
314 )
315 .await.expect("Failed to initialize archive");
316
317 // Check that the archive is empty
318 let retrieved: Option<i32> = archive
319 .get(Identifier::Index(index))
320 .await
321 .expect("Failed to get data");
322 assert!(retrieved.is_none());
323 });
324 }
325
326 #[test_traced]
327 fn test_archive_overlapping_key_basic() {
328 // Initialize the deterministic context
329 let executor = deterministic::Runner::default();
330 executor.start(|context| async move {
331 // Initialize the archive
332 let cfg = Config {
333 partition: "test_partition".into(),
334 translator: FourCap,
335 codec_config: (),
336 compression: None,
337 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
338 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
339 items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
340 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
341 };
342 let mut archive = Archive::init(context.clone(), cfg.clone())
343 .await
344 .expect("Failed to initialize archive");
345
346 let index1 = 1u64;
347 let key1 = test_key("keys1");
348 let data1 = 1;
349 let index2 = 2u64;
350 let key2 = test_key("keys2");
351 let data2 = 2;
352
353 // Put the key-data pair
354 archive
355 .put(index1, key1.clone(), data1)
356 .await
357 .expect("Failed to put data");
358
359 // Put the key-data pair
360 archive
361 .put(index2, key2.clone(), data2)
362 .await
363 .expect("Failed to put data");
364
365 // Get the data back
366 let retrieved = archive
367 .get(Identifier::Key(&key1))
368 .await
369 .expect("Failed to get data")
370 .expect("Data not found");
371 assert_eq!(retrieved, data1);
372
373 // Get the data back
374 let retrieved = archive
375 .get(Identifier::Key(&key2))
376 .await
377 .expect("Failed to get data")
378 .expect("Data not found");
379 assert_eq!(retrieved, data2);
380
381 // Check metrics
382 let buffer = context.encode();
383 assert!(buffer.contains("items_tracked 2"));
384 assert!(buffer.contains("unnecessary_reads_total 1"));
385 assert!(buffer.contains("gets_total 2"));
386 });
387 }
388
389 #[test_traced]
390 fn test_archive_overlapping_key_multiple_sections() {
391 // Initialize the deterministic context
392 let executor = deterministic::Runner::default();
393 executor.start(|context| async move {
394 // Initialize the archive
395 let cfg = Config {
396 partition: "test_partition".into(),
397 translator: FourCap,
398 codec_config: (),
399 compression: None,
400 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
401 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
402 items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
403 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
404 };
405 let mut archive = Archive::init(context.clone(), cfg.clone())
406 .await
407 .expect("Failed to initialize archive");
408
409 let index1 = 1u64;
410 let key1 = test_key("keys1");
411 let data1 = 1;
412 let index2 = 2_000_000u64;
413 let key2 = test_key("keys2");
414 let data2 = 2;
415
416 // Put the key-data pair
417 archive
418 .put(index1, key1.clone(), data1)
419 .await
420 .expect("Failed to put data");
421
422 // Put the key-data pair
423 archive
424 .put(index2, key2.clone(), data2)
425 .await
426 .expect("Failed to put data");
427
428 // Get the data back
429 let retrieved = archive
430 .get(Identifier::Key(&key1))
431 .await
432 .expect("Failed to get data")
433 .expect("Data not found");
434 assert_eq!(retrieved, data1);
435
436 // Get the data back
437 let retrieved = archive
438 .get(Identifier::Key(&key2))
439 .await
440 .expect("Failed to get data")
441 .expect("Data not found");
442 assert_eq!(retrieved, data2);
443 });
444 }
445
446 #[test_traced]
447 fn test_archive_prune_keys() {
448 // Initialize the deterministic context
449 let executor = deterministic::Runner::default();
450 executor.start(|context| async move {
451 // Initialize the archive
452 let cfg = Config {
453 partition: "test_partition".into(),
454 translator: FourCap,
455 codec_config: (),
456 compression: None,
457 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
458 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
459 items_per_section: NZU64!(1), // no mask - each item is its own section
460 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
461 };
462 let mut archive = Archive::init(context.clone(), cfg.clone())
463 .await
464 .expect("Failed to initialize archive");
465
466 // Insert multiple keys across different sections
467 let keys = vec![
468 (1u64, test_key("key1-blah"), 1),
469 (2u64, test_key("key2-blah"), 2),
470 (3u64, test_key("key3-blah"), 3),
471 (4u64, test_key("key3-bleh"), 3),
472 (5u64, test_key("key4-blah"), 4),
473 ];
474
475 for (index, key, data) in &keys {
476 archive
477 .put(*index, key.clone(), *data)
478 .await
479 .expect("Failed to put data");
480 }
481
482 // Check metrics
483 let buffer = context.encode();
484 assert!(buffer.contains("items_tracked 5"));
485
486 // Prune sections less than 3
487 archive.prune(3).await.expect("Failed to prune");
488
489 // Ensure keys 1 and 2 are no longer present
490 for (index, key, data) in keys {
491 let retrieved = archive
492 .get(Identifier::Key(&key))
493 .await
494 .expect("Failed to get data");
495 if index < 3 {
496 assert!(retrieved.is_none());
497 } else {
498 assert_eq!(retrieved.expect("Data not found"), data);
499 }
500 }
501
502 // Check metrics
503 let buffer = context.encode();
504 assert!(buffer.contains("items_tracked 3"));
505 assert!(buffer.contains("indices_pruned_total 2"));
506 assert!(buffer.contains("pruned_total 0")); // no lazy cleanup yet
507
508 // Try to prune older section
509 archive.prune(2).await.expect("Failed to prune");
510
511 // Try to prune current section again
512 archive.prune(3).await.expect("Failed to prune");
513
514 // Try to put older index
515 let result = archive.put(1, test_key("key1-blah"), 1).await;
516 assert!(matches!(result, Err(Error::AlreadyPrunedTo(3))));
517
518 // Trigger lazy removal of keys
519 archive
520 .put(6, test_key("key2-blfh"), 5)
521 .await
522 .expect("Failed to put data");
523
524 // Check metrics
525 let buffer = context.encode();
526 assert!(buffer.contains("items_tracked 4")); // lazily remove one, add one
527 assert!(buffer.contains("indices_pruned_total 2"));
528 assert!(buffer.contains("pruned_total 1"));
529 });
530 }
531
532 fn test_archive_keys_and_restart(num_keys: usize) -> String {
533 // Initialize the deterministic context
534 let executor = deterministic::Runner::default();
535 executor.start(|mut context| async move {
536 // Initialize the archive
537 let items_per_section = 256u64;
538 let cfg = Config {
539 partition: "test_partition".into(),
540 translator: TwoCap,
541 codec_config: (),
542 compression: None,
543 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
544 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
545 items_per_section: NZU64!(items_per_section),
546 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
547 };
548 let mut archive = Archive::init(context.clone(), cfg.clone())
549 .await
550 .expect("Failed to initialize archive");
551
552 // Insert multiple keys across different sections
553 let mut keys = BTreeMap::new();
554 while keys.len() < num_keys {
555 let index = keys.len() as u64;
556 let mut key = [0u8; 64];
557 context.fill(&mut key);
558 let key = FixedBytes::<64>::decode(key.as_ref()).unwrap();
559 let mut data = [0u8; 1024];
560 context.fill(&mut data);
561 let data = FixedBytes::<1024>::decode(data.as_ref()).unwrap();
562
563 archive
564 .put(index, key.clone(), data.clone())
565 .await
566 .expect("Failed to put data");
567 keys.insert(key, (index, data));
568 }
569
570 // Ensure all keys can be retrieved
571 for (key, (index, data)) in &keys {
572 let retrieved = archive
573 .get(Identifier::Index(*index))
574 .await
575 .expect("Failed to get data")
576 .expect("Data not found");
577 assert_eq!(&retrieved, data);
578 let retrieved = archive
579 .get(Identifier::Key(key))
580 .await
581 .expect("Failed to get data")
582 .expect("Data not found");
583 assert_eq!(&retrieved, data);
584 }
585
586 // Check metrics
587 let buffer = context.encode();
588 let tracked = format!("items_tracked {num_keys:?}");
589 assert!(buffer.contains(&tracked));
590 assert!(buffer.contains("pruned_total 0"));
591
592 // Close the archive
593 archive.close().await.expect("Failed to close archive");
594
595 // Reinitialize the archive
596 let cfg = Config {
597 partition: "test_partition".into(),
598 translator: TwoCap,
599 codec_config: (),
600 compression: None,
601 write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
602 replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
603 items_per_section: NZU64!(items_per_section),
604 buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
605 };
606 let mut archive =
607 Archive::<_, _, _, FixedBytes<1024>>::init(context.clone(), cfg.clone())
608 .await
609 .expect("Failed to initialize archive");
610
611 // Ensure all keys can be retrieved
612 for (key, (index, data)) in &keys {
613 let retrieved = archive
614 .get(Identifier::Index(*index))
615 .await
616 .expect("Failed to get data")
617 .expect("Data not found");
618 assert_eq!(&retrieved, data);
619 let retrieved = archive
620 .get(Identifier::Key(key))
621 .await
622 .expect("Failed to get data")
623 .expect("Data not found");
624 assert_eq!(&retrieved, data);
625 }
626
627 // Prune first half
628 let min = (keys.len() / 2) as u64;
629 archive.prune(min).await.expect("Failed to prune");
630
631 // Ensure all keys can be retrieved that haven't been pruned
632 let min = (min / items_per_section) * items_per_section;
633 let mut removed = 0;
634 for (key, (index, data)) in keys {
635 if index >= min {
636 let retrieved = archive
637 .get(Identifier::Key(&key))
638 .await
639 .expect("Failed to get data")
640 .expect("Data not found");
641 assert_eq!(retrieved, data);
642
643 // Check range
644 let (current_end, start_next) = archive.next_gap(index);
645 assert_eq!(current_end.unwrap(), num_keys as u64 - 1);
646 assert!(start_next.is_none());
647 } else {
648 let retrieved = archive
649 .get(Identifier::Key(&key))
650 .await
651 .expect("Failed to get data");
652 assert!(retrieved.is_none());
653 removed += 1;
654
655 // Check range
656 let (current_end, start_next) = archive.next_gap(index);
657 assert!(current_end.is_none());
658 assert_eq!(start_next.unwrap(), min);
659 }
660 }
661
662 // Check metrics
663 let buffer = context.encode();
664 let tracked = format!("items_tracked {:?}", num_keys - removed);
665 assert!(buffer.contains(&tracked));
666 let pruned = format!("indices_pruned_total {removed}");
667 assert!(buffer.contains(&pruned));
668 assert!(buffer.contains("pruned_total 0")); // have not lazily removed keys yet
669
670 context.auditor().state()
671 })
672 }
673
674 #[test_traced]
675 #[ignore]
676 fn test_archive_many_keys_and_restart() {
677 test_archive_keys_and_restart(100_000);
678 }
679
680 #[test_traced]
681 #[ignore]
682 fn test_determinism() {
683 let state1 = test_archive_keys_and_restart(5_000);
684 let state2 = test_archive_keys_and_restart(5_000);
685 assert_eq!(state1, state2);
686 }
687}