Crate akd

An implementation of an auditable key directory (AKD), also known as a verifiable registry.

⚠️ Warning: This implementation has not been audited and is not ready for use in a real system. Use at your own risk!

Overview

An auditable key directory (AKD) provides an interface to a data structure that stores key-value mappings in a database in a verifiable manner. The data structure is similar to a Python dict: directory entries are indexed by keys, a value can be stored under a key, and the value can later be retrieved given its key.

Keys can also be updated to be associated with different values. Each batch of updates to these key-value mappings is associated with an epoch, along with a commitment to the database of entries at that point in time. The server that controls the database can use this library to generate proofs of inclusion for clients that wish to query entries in the database. These proofs can be verified by a client against the corresponding commitment to the database. Intuitively, we can think of this data structure as a verifiable dictionary.

Operations

This library supports the following operations for the directory it maintains:

  • Publishing: Allows the directory server to insert new entries into the directory and to update existing ones.
  • Lookup Proofs: Handles point queries to the directory, providing proofs of validity based on the server’s public key and a root hash for an epoch.
  • History Proofs: For a given index in the directory, provides proofs for the history of updates to this entry, matched against the server’s public key and a root hash for an epoch.
  • Append-Only Proofs: For a pair of epochs, provides a proof to an auditor that the database has evolved consistently and in an append-only manner. These append-only proofs use a verifiable random function (VRF) to avoid leaking any information about the labels and their corresponding values.

Asynchronicity

Note that all of the library functions must be called asynchronously (within async { ... } blocks) and the responses must be awaited. In the following examples, the necessary async blocks are omitted for simplicity.
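
For instance, the later examples can be driven from a small async program like the following sketch. The use of tokio here is our own assumption for illustration; the crate only requires that its futures be awaited from some executor.

use akd::storage::StorageManager;
use akd::storage::memory::AsyncInMemoryDatabase;
use akd::ecvrf::HardCodedAkdVRF;
use akd::directory::Directory;

// A minimal sketch of the async scaffolding that the examples below omit.
// `tokio` is assumed purely for illustration.
#[tokio::main]
async fn main() {
    // Construct a directory exactly as in the Setup section below.
    let db = AsyncInMemoryDatabase::new();
    let storage_manager = StorageManager::new_no_cache(db);
    let vrf = HardCodedAkdVRF{};

    // Every library call returns a future that must be awaited.
    let _akd = Directory::<_, _>::new(storage_manager, vrf, false)
        .await
        .expect("Could not create a new directory");
}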

Setup

A Directory represents an AKD. To set up a Directory, we first need to pick a database, a hash function, and a VRF. For this example, we use Blake3 as the hash function, storage::memory::AsyncInMemoryDatabase as the in-memory storage, and ecvrf::HardCodedAkdVRF as the VRF. The Directory::new function also takes as input a third parameter indicating whether or not the directory is “read-only”. A read-only directory cannot be updated, so we will most likely want to leave this parameter set to false.

use akd::storage::StorageManager;
use akd::storage::memory::AsyncInMemoryDatabase;
use akd::ecvrf::HardCodedAkdVRF;
use akd::directory::Directory;

let db = AsyncInMemoryDatabase::new();
let storage_manager = StorageManager::new_no_cache(db);
let vrf = HardCodedAkdVRF{};

let mut akd = Directory::<_, _>::new(storage_manager, vrf, false)
    .await
    .expect("Could not create a new directory");

Publishing

To add label-value pairs (of type AkdLabel and AkdValue) to the directory, we can call Directory::publish with a list of the pairs. In the following example, we derive the labels and values from strings. After publishing, the new epoch number and root hash are returned.

use akd::EpochHash;
use akd::{AkdLabel, AkdValue};
use akd::Digest;

let entries = vec![
    (AkdLabel::from("first entry"), AkdValue::from("first value")),
    (AkdLabel::from("second entry"), AkdValue::from("second value")),
];

let EpochHash(epoch, root_hash) = akd.publish(entries)
    .await.expect("Error with publishing");
println!("Published epoch {} with root hash: {}", epoch, hex::encode(root_hash));

This function can be called repeatedly to add entries to the directory, with each invocation producing a new epoch and root hash for the directory.
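
As an illustration (a standalone sketch, not part of the running example that the later sections continue from), publishing a second batch advances the epoch by one and yields a fresh root hash:

// Standalone sketch: the entry below is purely illustrative.
let EpochHash(next_epoch, next_root_hash) = akd.publish(vec![
    (AkdLabel::from("another entry"), AkdValue::from("another value")),
]).await.expect("Error with publishing");

assert_eq!(next_epoch, epoch + 1);
println!("Published epoch {} with root hash: {}", next_epoch, hex::encode(next_root_hash));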

Lookup Proofs

We can call Directory::lookup to generate a LookupProof that proves the correctness of a client lookup for an existing entry. The call also returns the EpochHash for the current epoch, which we use below during verification.

let (lookup_proof, epoch_hash) = akd.lookup(
    AkdLabel::from("first entry")
).await.expect("Could not generate proof");

To verify a valid proof, we call client::lookup_verify, with respect to the root hash and the server’s public key.

let public_key = akd.get_public_key().await.expect("Could not fetch public key");

let lookup_result = akd::client::lookup_verify(
    public_key.as_bytes(),
    epoch_hash.hash(),
    AkdLabel::from("first entry"),
    lookup_proof,
).expect("Could not verify lookup proof");

assert_eq!(
    lookup_result,
    akd::VerifyResult {
        epoch: 1,
        version: 1,
        value: AkdValue::from("first value"),
    },
);

History Proofs

As mentioned above, the security of an AKD rests on clients seeing consistent views of the value for a key at any epoch. To this end, a server running an AKD needs to provide a way to check the history of a key. Note that in this case, the server is trusted to validate that a particular client is authorized to run a history check on a particular key. We can use Directory::key_history to prove the history of a key’s values at a given epoch; it also returns the EpochHash for the current epoch, which we use during verification. The HistoryParams parameter can be used to limit the history that we issue proofs for, but in this example we default to a complete history.

use akd::HistoryParams;

let EpochHash(epoch2, root_hash2) = akd.publish(
    vec![(AkdLabel::from("first entry"), AkdValue::from("updated value"))],
).await.expect("Error with publishing");
let (history_proof, epoch_hash) = akd.key_history(
    &AkdLabel::from("first entry"),
    HistoryParams::default(),
).await.expect("Could not generate proof");

To verify the above proof, we call client::key_history_verify, with respect to the latest root hash and public key, as follows. This function returns a list of values that have been associated with the specified entry, in reverse chronological order.

let public_key = akd.get_public_key().await.expect("Could not fetch public key");
let key_history_result = akd::client::key_history_verify(
    public_key.as_bytes(),
    epoch_hash.hash(),
    epoch_hash.epoch(),
    AkdLabel::from("first entry"),
    history_proof,
    akd::HistoryVerificationParams::default(),
).expect("Could not verify history");

assert_eq!(
    key_history_result,
    vec![
        akd::VerifyResult {
            epoch: 2,
            version: 2,
            value: AkdValue::from("updated value"),
        },
        akd::VerifyResult {
            epoch: 1,
            version: 1,
            value: AkdValue::from("first value"),
        },
    ],
);

Append-Only Proofs

In addition to the client API calls, the AKD also provides proofs to auditors that its commitments evolved correctly. Below we illustrate how the server responds to an audit query between two epochs.

// Publish new entries into a second epoch
let entries = vec![
    (AkdLabel::from("first entry"), AkdValue::from("new first value")),
    (AkdLabel::from("third entry"), AkdValue::from("third value")),
];
let EpochHash(epoch2, root_hash2) = akd.publish(entries)
    .await.expect("Error with publishing");

// Generate audit proof for the evolution from epoch 1 to epoch 2.
let audit_proof = akd.audit(epoch, epoch2)
    .await.expect("Error with generating proof");

The auditor then verifies the above AppendOnlyProof using auditor::audit_verify.

let audit_result = akd::auditor::audit_verify(
    vec![root_hash, root_hash2],
    audit_proof,
).await;
assert!(audit_result.is_ok());

Compilation Features

The akd crate supports multiple compilation features:

  1. serde: Enables serde serialization support on all public structs used in storage & transmission operations. This is helpful in the event you wish to directly serialize the structures to transmit between the library and the storage layer, or between the library and clients. If you’re also utilizing VRFs, it will additionally enable the serde feature in the ed25519-dalek crate. A short serialization sketch follows the Cargo.toml snippet below.

  2. public-tests: Exposes some internal sanity-testing functionality, which is often helpful so you don’t have to write all of your own unit test cases when implementing a storage layer yourself. This helps sanity-check a given storage implementation. It should be used only in unit-testing scenarios, by altering your Cargo.toml as follows:

[dependencies]
akd = { version = "0.7" }

[dev-dependencies]
akd = { version = "0.7", features = ["public-tests"] }

Re-exports

Modules

  • An implementation of an append-only zero knowledge set
  • Code for an auditor of an authenticated key directory
  • Code for a client of an auditable key directory
  • Functions for performing the core cryptographic operations for AKD
  • Implementation of an auditable key directory
  • This module contains implementations of a verifiable random function (currently only ECVRF). In this crate, VRFs are used to anonymize the user id <-> node label mapping into a one-way hash, which can be verified without the secret key but cannot be regenerated without it.
  • Errors for various data structure operations.
  • This module contains all the hashing utilities needed for the AKD directory and verification operations
  • Helper structs that are used for various data structures, to make it easier to pass arguments around.
  • This module contains all the type conversions between internal AKD & message types with the protobuf types
  • This module contains all the protobuf types for type conversion between internal and external types. NOTE: Protobuf encoding is NOT supported in nostd environments. The generated code uses Vec too heavily to be no-std compliant
  • Storage module for an auditable key directory
  • The implementation of a node for a history patricia tree
  • This module contains all of the structs which need to be constructed to verify any of the following AKD proofs
  • This module contains verification calls for different proofs contained in the AKD crate

Structs

  • The label of a particular entry in the AKD
  • The value of a particular entry in the AKD
  • Proof that no leaves were deleted from the initial epoch. This is done using a list of SingleAppendOnly proofs, one for each epoch between the initial and final epochs being audited.
  • Represents an element to be inserted into the AZKS. This is a pair consisting of a label (NodeLabel) and a value. The purpose of the directory publish is to convert an insertion set of (AkdLabel, AkdValue) tuples into a set of AzksElements, which are then inserted into the AZKS.
  • The value associated with an element of the AZKS
  • Used to denote an azks value that has been hashed together with an epoch
  • This proof is just an array of UpdateProofs.
  • Proof that a given label was at a particular state at the given epoch. This means we need to show that the state and version we are claiming for this node must have been:
  • Merkle proof of membership of a NodeLabel with a particular hash value in the tree at a given epoch
  • Represents the label of an AKD node
  • Merkle Patricia proof of non-membership for a NodeLabel in the tree at a given epoch.
  • Represents a specific level of the tree with the parental sibling and the direction of the parent for use in tree hash calculations
  • Proof that no leaves were deleted from the initial epoch. This means that unchanged_nodes should hash to the initial root hash and the vec of inserted is the set of leaves inserted between these epochs. If we built the tree using the nodes in inserted and the nodes in unchanged_nodes as the leaves with the correct epoch of insertion, it should result in the final root hash.
  • A vector of UpdateProofs is sent as the proof to a history query for a particular key. For each version of the value associated with the key, the verifier must check that:
  • The payload that is output as a result of successful verification of a LookupProof or HistoryProof. This includes the fields containing the epoch that the leaf was published in, the version corresponding to the value, and the value itself.

Enums

  • This type is used to indicate a direction for a particular node relative to its parent. We use 0 to represent “left” and 1 to represent “right”.
  • Parameters for customizing how history proof verification proceeds
  • This type is used to indicate whether or not one label is a prefix of another, and if so, whether the longer string has a 0 after the prefix, or a 1 after the prefix. If the first label is equal to the second, or not a prefix of the second, then it is considered invalid.
  • Whether or not a node is marked as stale or fresh. Stale nodes are no longer active because a newer version exists to replace them.

Constants

  • The number of children each non-leaf node has in the tree
  • The number of bytes in a digest for Blake3 hashes
  • The label used for an empty node
  • The value to be hashed every time an empty node’s hash is to be considered
  • The length of a leaf node’s label (in bits)
  • The label used for a root node
  • A “tombstone” is a false value in an AKD ValueState denoting that a real value has been removed (e.g. due to data retention policies). Should a tombstone be encountered, we have to assume that the hash of the value is correct, and we move forward without being able to verify the raw value. We utilize an empty array to save space in the storage layer.

Traits

  • Retrieve the in-memory size of a structure

Type Definitions

  • A hash digest of a specified number of bytes