commonware_storage/archive/prunable/
mod.rs

//! A prunable key-value store for ordered data.
//!
//! Data is stored across two backends: [crate::journal::segmented::fixed] for fixed-size index entries and
//! [crate::journal::segmented::glob::Glob] for values (managed by [crate::journal::segmented::oversized]).
//! The location of written data is stored in-memory by both index and key (via [crate::index::unordered::Index])
//! to enable efficient lookups (on average).
//!
//! _Notably, [Archive] uses neither compaction nor on-disk indexes (and thus incurs no read
//! or write amplification during normal operation)._
//!
//! # Format
//!
//! [Archive] uses a two-journal structure for efficient buffer pool usage:
//!
//! **Index Journal (segmented/fixed)** - Fixed-size entries for fast startup replay:
//! ```text
//! +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
//! | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |10 |11 |12 |13 |14 |15 |16 |17 |18 |19 |20 |21 |22 |23 |
//! +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
//! |          Index(u64)           |Key(Fixed Size)|        val_offset(u64)        | val_size(u32) |
//! +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
//! ```
//!
//! **Value Blob** - Raw values with CRC32 checksums (direct reads, no buffer pool):
//! ```text
//! +---+---+---+---+---+---+---+---+---+---+---+---+
//! |   Value (optionally compressed)   |   CRC32   |
//! +---+---+---+---+---+---+---+---+---+---+---+---+
//! ```
//!
//! # Uniqueness
//!
//! [Archive] assumes all stored indexes and keys are unique. If the same key is associated with
//! multiple `indices`, there is no guarantee which value will be returned. If the key is written to
//! an existing `index`, [Archive] will return an error.
//!
//! ## Conflicts
//!
//! Because a translated representation of a key is only ever stored in memory, it is possible (and
//! expected) that two keys will eventually be represented by the same translated key. To handle
//! this case, [Archive] must check the persisted form of all conflicting keys to ensure data from
//! the correct key is returned. To support efficient checks, [Archive] (via
//! [crate::index::unordered::Index]) keeps a linked list of all keys with the same translated
//! prefix:
//!
//! ```rust
//! struct Record {
//!     index: u64,
//!
//!     next: Option<Box<Record>>,
//! }
//! ```
//!
//! _To avoid random memory reads in the common case, the in-memory index directly stores the first
//! item in the linked list instead of a pointer to the first item._
//!
//! Lookups by `index` are served by a separate map from each `index` to its position in the index
//! journal, where an entry's `section` is chosen as
//! `section = index / items_per_section * items_per_section` to minimize the number of open blobs:
//!
//! ```text
//! // Maps index -> position in index journal
//! indices: BTreeMap<u64, u64>
//! ```
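//!
//! For example, with `items_per_section = 1024`, index `2_500` lands in section `2_048` (a quick
//! check of the formula above):
//!
//! ```rust
//! let items_per_section = 1024u64;
//! let index = 2_500u64;
//! assert_eq!(index / items_per_section * items_per_section, 2_048);
//! ```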
//!
//! _If the [Translator] provided by the caller does not uniformly distribute keys across the key
//! space, or produces translated representations that cause many conflicts on average, performance
//! will degrade._
//!
//! ## Memory Overhead
//!
//! [Archive] uses two maps to enable lookups by both index and key. The memory used to track each
//! index item is `8 + 8` (where `8` is the index and `8` is the position in the index journal).
//! The memory used to track each key item is `~translated(key).len() + 16` bytes (where `16` is the
//! size of the `Record` struct). This means that an [Archive] employing a [Translator] that uses
//! the first `8` bytes of a key will use `~40` bytes to index each key.
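//!
//! A quick sanity check of that `~40` byte figure (ignoring allocator and map overhead):
//!
//! ```rust
//! let translated_len = 8; // a Translator that keeps the first 8 bytes of each key
//! let index_map = 8 + 8; // index and its position in the index journal
//! let key_map = translated_len + 16; // translated key plus the in-memory `Record`
//! assert_eq!(index_map + key_map, 40);
//! ```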
//!
//! # Pruning
//!
//! [Archive] supports pruning up to a minimum `index` using the `prune` method. After pruning,
//! any interaction with a `section` lower than the most recently pruned `section` will return an
//! error.
//!
//! ## Lazy Index Cleanup
//!
//! Instead of performing a full iteration of the in-memory index, storing an additional in-memory
//! index per `section`, or replaying a `section` of the value blob, [Archive] lazily cleans up the
//! [crate::index::unordered::Index] after pruning. When a new key is stored that overlaps (same
//! translated value) with a pruned key, the pruned key is removed from the in-memory index.
//!
//! # Read Path
//!
//! All reads (by index or key) first read the index entry from the index journal to get the
//! value location (offset and size), then read the value from the value blob. The index journal
//! uses a buffer pool for caching, so hot entries are served from memory. Values are read directly
//! from disk without caching to avoid polluting the buffer pool with large values.
//!
//! # Compression
//!
//! [Archive] supports compressing data before storing it on disk. This can be enabled by setting
//! the `compression` field in the `Config` struct to a valid `zstd` compression level. This setting
//! can be changed between initializations of [Archive]; however, it must remain populated if any
//! data was written with compression enabled.
//!
//! # Querying for Gaps
//!
//! [Archive] tracks gaps in the index space to enable the caller to efficiently fetch unknown keys
//! using `next_gap`. This is a very common pattern when syncing blocks in a blockchain.
//!
//! # Example
//!
//! ```rust
//! use commonware_runtime::{Spawner, Runner, deterministic, buffer::PoolRef};
//! use commonware_cryptography::{Hasher as _, Sha256};
//! use commonware_storage::{
//!     translator::FourCap,
//!     archive::{
//!         Archive as _,
//!         prunable::{Archive, Config},
//!     },
//! };
//! use commonware_utils::{NZUsize, NZU16, NZU64};
//!
//! let executor = deterministic::Runner::default();
//! executor.start(|context| async move {
//!     // Create an archive
//!     let cfg = Config {
//!         translator: FourCap,
//!         key_partition: "demo_index".into(),
//!         key_buffer_pool: PoolRef::new(NZU16!(1024), NZUsize!(10)),
//!         value_partition: "demo_value".into(),
//!         compression: Some(3),
//!         codec_config: (),
//!         items_per_section: NZU64!(1024),
//!         key_write_buffer: NZUsize!(1024 * 1024),
//!         value_write_buffer: NZUsize!(1024 * 1024),
//!         replay_buffer: NZUsize!(4096),
//!     };
//!     let mut archive = Archive::init(context, cfg).await.unwrap();
//!
//!     // Put a key
//!     archive.put(1, Sha256::hash(b"data"), 10).await.unwrap();
//!
//!     // Sync the archive
//!     archive.sync().await.unwrap();
//! });
//! ```
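//!
//! A second sketch, extending the example above to `prune` and `next_gap`. It assumes `Identifier`
//! is exported from [crate::archive] (as used by this module's tests) and that gap semantics match
//! those tests:
//!
//! ```rust
//! use commonware_runtime::{Runner, deterministic, buffer::PoolRef};
//! use commonware_cryptography::{Hasher as _, Sha256};
//! use commonware_storage::{
//!     translator::FourCap,
//!     archive::{
//!         Archive as _, Identifier,
//!         prunable::{Archive, Config},
//!     },
//! };
//! use commonware_utils::{NZUsize, NZU16, NZU64};
//!
//! let executor = deterministic::Runner::default();
//! executor.start(|context| async move {
//!     // One item per section so pruning granularity is a single index.
//!     let cfg = Config {
//!         translator: FourCap,
//!         key_partition: "demo_gap_index".into(),
//!         key_buffer_pool: PoolRef::new(NZU16!(1024), NZUsize!(10)),
//!         value_partition: "demo_gap_value".into(),
//!         compression: None,
//!         codec_config: (),
//!         items_per_section: NZU64!(1),
//!         key_write_buffer: NZUsize!(1024 * 1024),
//!         value_write_buffer: NZUsize!(1024 * 1024),
//!         replay_buffer: NZUsize!(4096),
//!     };
//!     let mut archive = Archive::init(context, cfg).await.unwrap();
//!
//!     // Store a few values at indices 1..=3.
//!     archive.put(1, Sha256::hash(b"a"), 10).await.unwrap();
//!     archive.put(2, Sha256::hash(b"b"), 20).await.unwrap();
//!     archive.put(3, Sha256::hash(b"c"), 30).await.unwrap();
//!
//!     // Read back by index or by key.
//!     assert_eq!(archive.get(Identifier::Index(2)).await.unwrap(), Some(20));
//!     let by_key = archive.get(Identifier::Key(&Sha256::hash(b"c"))).await.unwrap();
//!     assert_eq!(by_key, Some(30));
//!
//!     // Prune everything below index 3 (each section holds a single index here).
//!     archive.prune(3).await.unwrap();
//!     let pruned = archive.get(Identifier::Key(&Sha256::hash(b"a"))).await.unwrap();
//!     assert!(pruned.is_none());
//!
//!     // next_gap reports the end of the populated range containing an index and
//!     // the start of the next populated range (if any).
//!     let (current_end, start_next) = archive.next_gap(3);
//!     assert_eq!(current_end, Some(3));
//!     assert!(start_next.is_none());
//!     let (current_end, start_next) = archive.next_gap(1);
//!     assert!(current_end.is_none());
//!     assert_eq!(start_next, Some(3));
//!
//!     // Sync before shutdown.
//!     archive.sync().await.unwrap();
//! });
//! ```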

use crate::translator::Translator;
use commonware_runtime::buffer::PoolRef;
use std::num::{NonZeroU64, NonZeroUsize};

mod storage;
pub use storage::Archive;

/// Configuration for [Archive] storage.
#[derive(Clone)]
pub struct Config<T: Translator, C> {
    /// Logic to transform keys into their index representation.
    ///
    /// [Archive] assumes that all internal keys are spread uniformly across the key space.
    /// If that is not the case, lookups may be O(n) instead of O(1).
    pub translator: T,

    /// The partition to use for the key journal (stores index+key metadata).
    pub key_partition: String,

    /// The buffer pool to use for the key journal.
    pub key_buffer_pool: PoolRef,

    /// The partition to use for the value blob (stores values).
    pub value_partition: String,

    /// The compression level to use for the value blob.
    pub compression: Option<u8>,

    /// The [commonware_codec::Codec] configuration to use for the value stored in the archive.
    pub codec_config: C,

    /// The number of items per section (the granularity of pruning).
    pub items_per_section: NonZeroU64,

    /// The number of bytes that can be buffered for the key journal before being written to a
    /// [commonware_runtime::Blob].
    pub key_write_buffer: NonZeroUsize,

    /// The number of bytes that can be buffered for the value journal before being written to a
    /// [commonware_runtime::Blob].
    pub value_write_buffer: NonZeroUsize,

    /// The buffer size to use when replaying a [commonware_runtime::Blob].
    pub replay_buffer: NonZeroUsize,
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{
        archive::{Archive as _, Error, Identifier},
        journal::Error as JournalError,
        translator::{FourCap, TwoCap},
    };
    use commonware_codec::{DecodeExt, Error as CodecError};
    use commonware_macros::{test_group, test_traced};
    use commonware_runtime::{deterministic, Metrics, Runner};
    use commonware_utils::{sequence::FixedBytes, NZUsize, NZU16, NZU64};
    use rand::Rng;
    use std::{collections::BTreeMap, num::NonZeroU16};

    const DEFAULT_ITEMS_PER_SECTION: u64 = 65536;
    const DEFAULT_WRITE_BUFFER: usize = 1024;
    const DEFAULT_REPLAY_BUFFER: usize = 4096;
    const PAGE_SIZE: NonZeroU16 = NZU16!(1024);
    const PAGE_CACHE_SIZE: NonZeroUsize = NZUsize!(10);

    fn test_key(key: &str) -> FixedBytes<64> {
        let mut buf = [0u8; 64];
        let key = key.as_bytes();
        assert!(key.len() <= buf.len());
        buf[..key.len()].copy_from_slice(key);
        FixedBytes::decode(buf.as_ref()).unwrap()
    }

    #[test_traced]
    fn test_archive_compression_then_none() {
        // Initialize the deterministic context
        let executor = deterministic::Runner::default();
        executor.start(|context| async move {
            // Initialize the archive
            let cfg = Config {
                translator: FourCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: Some(3),
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
            };
            let mut archive = Archive::init(context.clone(), cfg.clone())
                .await
                .expect("Failed to initialize archive");

            // Put the key-data pair
            let index = 1u64;
            let key = test_key("testkey");
            let data = 1;
            archive
                .put(index, key.clone(), data)
                .await
                .expect("Failed to put data");

            // Sync and drop the archive
            archive.sync().await.expect("Failed to sync archive");
            drop(archive);

            // Initialize the archive again without compression.
            // Index journal replay succeeds (no compression), but value reads will fail.
            let cfg = Config {
                translator: FourCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: None,
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
            };
            let archive = Archive::<_, _, FixedBytes<64>, i32>::init(context, cfg.clone())
                .await
                .unwrap();

            // Getting the value should fail because compression settings mismatch.
            // Without compression, the codec sees extra bytes after decoding the value
            // (because the compressed data doesn't match the expected format).
            let result: Result<Option<i32>, _> = archive.get(Identifier::Index(index)).await;
            assert!(matches!(
                result,
                Err(Error::Journal(JournalError::Codec(CodecError::ExtraData(
                    _
                ))))
            ));
        });
    }

    #[test_traced]
    fn test_archive_overlapping_key_basic() {
        // Initialize the deterministic context
        let executor = deterministic::Runner::default();
        executor.start(|context| async move {
            // Initialize the archive
            let cfg = Config {
                translator: FourCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: None,
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
            };
            let mut archive = Archive::init(context.clone(), cfg.clone())
                .await
                .expect("Failed to initialize archive");

            let index1 = 1u64;
            let key1 = test_key("keys1");
            let data1 = 1;
            let index2 = 2u64;
            let key2 = test_key("keys2");
            let data2 = 2;

            // Put the key-data pair
            archive
                .put(index1, key1.clone(), data1)
                .await
                .expect("Failed to put data");

            // Put the key-data pair
            archive
                .put(index2, key2.clone(), data2)
                .await
                .expect("Failed to put data");

            // Get the data back
            let retrieved = archive
                .get(Identifier::Key(&key1))
                .await
                .expect("Failed to get data")
                .expect("Data not found");
            assert_eq!(retrieved, data1);

            // Get the data back
            let retrieved = archive
                .get(Identifier::Key(&key2))
                .await
                .expect("Failed to get data")
                .expect("Data not found");
            assert_eq!(retrieved, data2);

            // Check metrics
            let buffer = context.encode();
            assert!(buffer.contains("items_tracked 2"));
            assert!(buffer.contains("unnecessary_reads_total 1"));
            assert!(buffer.contains("gets_total 2"));
        });
    }

    #[test_traced]
    fn test_archive_overlapping_key_multiple_sections() {
        // Initialize the deterministic context
        let executor = deterministic::Runner::default();
        executor.start(|context| async move {
            // Initialize the archive
            let cfg = Config {
                translator: FourCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: None,
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(DEFAULT_ITEMS_PER_SECTION),
            };
            let mut archive = Archive::init(context.clone(), cfg.clone())
                .await
                .expect("Failed to initialize archive");

            let index1 = 1u64;
            let key1 = test_key("keys1");
            let data1 = 1;
            let index2 = 2_000_000u64;
            let key2 = test_key("keys2");
            let data2 = 2;

            // Put the key-data pair
            archive
                .put(index1, key1.clone(), data1)
                .await
                .expect("Failed to put data");

            // Put the key-data pair
            archive
                .put(index2, key2.clone(), data2)
                .await
                .expect("Failed to put data");

            // Get the data back
            let retrieved = archive
                .get(Identifier::Key(&key1))
                .await
                .expect("Failed to get data")
                .expect("Data not found");
            assert_eq!(retrieved, data1);

            // Get the data back
            let retrieved = archive
                .get(Identifier::Key(&key2))
                .await
                .expect("Failed to get data")
                .expect("Data not found");
            assert_eq!(retrieved, data2);
        });
    }

    #[test_traced]
    fn test_archive_prune_keys() {
        // Initialize the deterministic context
        let executor = deterministic::Runner::default();
        executor.start(|context| async move {
            // Initialize the archive
            let cfg = Config {
                translator: FourCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: None,
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(1), // no mask - each item is its own section
            };
            let mut archive = Archive::init(context.clone(), cfg.clone())
                .await
                .expect("Failed to initialize archive");

            // Insert multiple keys across different sections
            let keys = vec![
                (1u64, test_key("key1-blah"), 1),
                (2u64, test_key("key2-blah"), 2),
                (3u64, test_key("key3-blah"), 3),
                (4u64, test_key("key3-bleh"), 3),
                (5u64, test_key("key4-blah"), 4),
            ];

            for (index, key, data) in &keys {
                archive
                    .put(*index, key.clone(), *data)
                    .await
                    .expect("Failed to put data");
            }

            // Check metrics
            let buffer = context.encode();
            assert!(buffer.contains("items_tracked 5"));

            // Prune sections less than 3
            archive.prune(3).await.expect("Failed to prune");

            // Ensure keys 1 and 2 are no longer present
            for (index, key, data) in keys {
                let retrieved = archive
                    .get(Identifier::Key(&key))
                    .await
                    .expect("Failed to get data");
                if index < 3 {
                    assert!(retrieved.is_none());
                } else {
                    assert_eq!(retrieved.expect("Data not found"), data);
                }
            }

            // Check metrics
            let buffer = context.encode();
            assert!(buffer.contains("items_tracked 3"));
            assert!(buffer.contains("indices_pruned_total 2"));
            assert!(buffer.contains("pruned_total 0")); // no lazy cleanup yet

            // Try to prune older section
            archive.prune(2).await.expect("Failed to prune");

            // Try to prune current section again
            archive.prune(3).await.expect("Failed to prune");

            // Try to put older index
            let result = archive.put(1, test_key("key1-blah"), 1).await;
            assert!(matches!(result, Err(Error::AlreadyPrunedTo(3))));

            // Trigger lazy removal of keys
            archive
                .put(6, test_key("key2-blfh"), 5)
                .await
                .expect("Failed to put data");

            // Check metrics
            let buffer = context.encode();
            assert!(buffer.contains("items_tracked 4")); // lazily remove one, add one
            assert!(buffer.contains("indices_pruned_total 2"));
            assert!(buffer.contains("pruned_total 1"));
        });
    }

    fn test_archive_keys_and_restart(num_keys: usize) -> String {
        // Initialize the deterministic context
        let executor = deterministic::Runner::default();
        executor.start(|mut context| async move {
            // Initialize the archive
            let items_per_section = 256u64;
            let cfg = Config {
                translator: TwoCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: None,
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(items_per_section),
            };
            let mut archive = Archive::init(context.clone(), cfg.clone())
                .await
                .expect("Failed to initialize archive");

            // Insert multiple keys across different sections
            let mut keys = BTreeMap::new();
            while keys.len() < num_keys {
                let index = keys.len() as u64;
                let mut key = [0u8; 64];
                context.fill(&mut key);
                let key = FixedBytes::<64>::decode(key.as_ref()).unwrap();
                let mut data = [0u8; 1024];
                context.fill(&mut data);
                let data = FixedBytes::<1024>::decode(data.as_ref()).unwrap();

                archive
                    .put(index, key.clone(), data.clone())
                    .await
                    .expect("Failed to put data");
                keys.insert(key, (index, data));
            }

            // Ensure all keys can be retrieved
            for (key, (index, data)) in &keys {
                let retrieved = archive
                    .get(Identifier::Index(*index))
                    .await
                    .expect("Failed to get data")
                    .expect("Data not found");
                assert_eq!(&retrieved, data);
                let retrieved = archive
                    .get(Identifier::Key(key))
                    .await
                    .expect("Failed to get data")
                    .expect("Data not found");
                assert_eq!(&retrieved, data);
            }

            // Check metrics
            let buffer = context.encode();
            let tracked = format!("items_tracked {num_keys:?}");
            assert!(buffer.contains(&tracked));
            assert!(buffer.contains("pruned_total 0"));

            // Sync and drop the archive
            archive.sync().await.expect("Failed to sync archive");
            drop(archive);

            // Reinitialize the archive
            let cfg = Config {
                translator: TwoCap,
                key_partition: "test_index".into(),
                key_buffer_pool: PoolRef::new(PAGE_SIZE, PAGE_CACHE_SIZE),
                value_partition: "test_value".into(),
                codec_config: (),
                compression: None,
                key_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                value_write_buffer: NZUsize!(DEFAULT_WRITE_BUFFER),
                replay_buffer: NZUsize!(DEFAULT_REPLAY_BUFFER),
                items_per_section: NZU64!(items_per_section),
            };
            let mut archive =
                Archive::<_, _, _, FixedBytes<1024>>::init(context.clone(), cfg.clone())
                    .await
                    .expect("Failed to initialize archive");

            // Ensure all keys can be retrieved
            for (key, (index, data)) in &keys {
                let retrieved = archive
                    .get(Identifier::Index(*index))
                    .await
                    .expect("Failed to get data")
                    .expect("Data not found");
                assert_eq!(&retrieved, data);
                let retrieved = archive
                    .get(Identifier::Key(key))
                    .await
                    .expect("Failed to get data")
                    .expect("Data not found");
                assert_eq!(&retrieved, data);
            }

            // Prune first half
            let min = (keys.len() / 2) as u64;
            archive.prune(min).await.expect("Failed to prune");

            // Ensure all keys can be retrieved that haven't been pruned
            let min = (min / items_per_section) * items_per_section;
            let mut removed = 0;
            for (key, (index, data)) in keys {
                if index >= min {
                    let retrieved = archive
                        .get(Identifier::Key(&key))
                        .await
                        .expect("Failed to get data")
                        .expect("Data not found");
                    assert_eq!(retrieved, data);

                    // Check range
                    let (current_end, start_next) = archive.next_gap(index);
                    assert_eq!(current_end.unwrap(), num_keys as u64 - 1);
                    assert!(start_next.is_none());
                } else {
                    let retrieved = archive
                        .get(Identifier::Key(&key))
                        .await
                        .expect("Failed to get data");
                    assert!(retrieved.is_none());
                    removed += 1;

                    // Check range
                    let (current_end, start_next) = archive.next_gap(index);
                    assert!(current_end.is_none());
                    assert_eq!(start_next.unwrap(), min);
                }
            }

            // Check metrics
            let buffer = context.encode();
            let tracked = format!("items_tracked {:?}", num_keys - removed);
            assert!(buffer.contains(&tracked));
            let pruned = format!("indices_pruned_total {removed}");
            assert!(buffer.contains(&pruned));
            assert!(buffer.contains("pruned_total 0")); // have not lazily removed keys yet

            context.auditor().state()
        })
    }

    #[test_group("slow")]
    #[test_traced]
    fn test_archive_many_keys_and_restart() {
        test_archive_keys_and_restart(100_000);
    }

    #[test_group("slow")]
    #[test_traced]
    fn test_determinism() {
        let state1 = test_archive_keys_and_restart(5_000);
        let state2 = test_archive_keys_and_restart(5_000);
        assert_eq!(state1, state2);
    }
}