Protocol for transferring content-addressed blobs over iroh
p2p QUIC connections.
§Participants
The protocol is a request/response protocol with two parties, a provider that serves blobs and a getter that requests blobs.
§Goals
- Be paranoid about data integrity.
  Data integrity is considered more important than performance. Data is validated on both the provider and the getter side. A well-behaved provider will never send invalid data, and responses to range requests contain sufficient information to validate the data.
  Note: validation using blake3 is extremely fast, so in almost all scenarios it will not be the bottleneck even when validating on both the provider and the getter side.
- Do not limit the size of blobs or collections.
  Blobs can be of arbitrary size, up to terabytes, and collections can contain an arbitrary number of links. A well-behaved implementation will not require the entire blob or collection to be in memory at once.
- Be efficient when transferring large blobs, including range requests.
  It is possible to request entire blobs or ranges of blobs, where the minimum granularity is a chunk group of 16 KiB, i.e. 16 blake3 chunks. The worst-case overhead when doing range requests is about two chunk groups per range.
- Be efficient when transferring multiple tiny blobs.
  For tiny blobs, the overhead of sending the blob hashes and the round-trip time for each blob would be prohibitive. To avoid round trips, the protocol allows grouping multiple blobs into collections. The semantic meaning of a collection is up to the application; for the purposes of this protocol, a collection is just a grouping of related blobs.
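To build intuition for the chunk-group granularity mentioned above, here is a minimal sketch, assuming the 1024-byte blake3 chunk size and 16-chunk groups; `align_to_groups` is a hypothetical helper for illustration, not part of this crate:

```rust
/// blake3 chunk size in bytes (assumption stated in this document).
const CHUNK_SIZE: u64 = 1024;
/// Chunks per chunk group, i.e. 16 KiB granularity.
const CHUNKS_PER_GROUP: u64 = 16;

/// Round a chunk range outward to chunk-group boundaries.
/// This models the coarsest unit a provider transfers.
fn align_to_groups(start_chunk: u64, end_chunk: u64) -> (u64, u64) {
    let start = start_chunk / CHUNKS_PER_GROUP * CHUNKS_PER_GROUP;
    let end = end_chunk.div_ceil(CHUNKS_PER_GROUP) * CHUNKS_PER_GROUP;
    (start, end)
}

fn main() {
    // Requesting chunks 5..20 touches two chunk groups: 0..16 and 16..32.
    let (start, end) = align_to_groups(5, 20);
    assert_eq!((start, end), (0, 32));
    // The 17 extra chunks here stay within the "about two chunk groups
    // per range" worst-case overhead described above.
    println!("aligned: {start}..{end} = {} KiB", (end - start) * CHUNK_SIZE / 1024);
}
```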
§Non-goals
- Do not attempt to be generic in terms of the hash function used.
  The protocol makes extensive use of the blake3 hash function and its special properties, such as blake3 verified streaming.
- Do not support graph traversal.
  The protocol only supports collections that directly contain blobs. If you have deeply nested graph data, you will need to either do multiple requests or flatten the graph into a single temporary collection.
- Do not support discovery.
  The protocol does not yet have a discovery mechanism for asking a provider what ranges are available for a given blob. Currently you need some out-of-band knowledge about which node has data for a given hash, or you can simply try to retrieve the data and see if it is available. A discovery protocol is planned for the future.
§Requests
§Getter defined requests
In this case the getter knows the hash of the blob it wants to retrieve and whether it wants to retrieve a single blob or a collection.
The getter needs to define exactly what it wants to retrieve and send the request to the provider.
The provider will then respond with the bao encoded bytes for the requested data and then close the connection. It will immediately close the connection in case some data is not available or invalid.
§Provider defined requests
In this case the getter sends a blob to the provider. This blob can contain some kind of query. The exact details of the query are up to the application.
The provider evaluates the query and responds with a serialized request in the same format as the getter defined requests, followed by the bao encoded data. From then on the protocol is the same as for getter defined requests.
§Specifying the required data
A GetRequest contains a hash and a specification of what data related to that hash is required. The specification uses a ChunkRangesSeq, which has a compact representation on the wire but is otherwise identical to a sequence of sets of ranges.
In the following, we describe how the GetRequest is to be created for different common scenarios. Under the hood, this uses the ChunkRangesSeq type, but the most convenient way to create a GetRequest is to use the builder API.
Ranges are always given in terms of 1024-byte blake3 chunks, not in terms of bytes or chunk groups. The reason for this is that chunks are the fundamental unit of hashing in blake3. Addressing anything smaller than a chunk is not possible, and combining multiple chunks into groups is merely an optimization to reduce metadata overhead.
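For intuition, here is how a byte range maps to the chunks that contain it, assuming the 1024-byte chunk size; `byte_range_to_chunks` is a hypothetical helper for illustration, not an API of this crate:

```rust
/// blake3 chunk size in bytes (assumption stated in this document).
const CHUNK_SIZE: u64 = 1024;

/// The chunk range a byte range touches: the start is rounded down
/// and the end is rounded up to the next chunk boundary.
fn byte_range_to_chunks(byte_start: u64, byte_end: u64) -> (u64, u64) {
    (byte_start / CHUNK_SIZE, byte_end.div_ceil(CHUNK_SIZE))
}

fn main() {
    // Bytes 0..1000 fit entirely inside chunk 0, so one chunk suffices.
    assert_eq!(byte_range_to_chunks(0, 1000), (0, 1));
    // Bytes 10000..11000 straddle the boundary between chunks 9 and 10,
    // so chunks 9..11 are needed.
    assert_eq!(byte_range_to_chunks(10_000, 11_000), (9, 11));
}
```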
§Individual blobs
In the easiest case, the getter just wants to retrieve a single blob. In this case, the getter specifies a ChunkRangesSeq that contains a single element: the set of all chunks, indicating that we want the entire blob no matter how many chunks it has.
Since this is a very common case, there is a convenience method GetRequest::blob that only requires the hash of the blob.
let request = GetRequest::blob(hash);
§Ranges of blobs
In this case, we have a (possibly large) blob and we want to retrieve only some ranges of chunks. This is useful in cases similar to HTTP range requests.
We still need just a single element in the ChunkRangesSeq, since we are still only interested in a single blob. However, this element contains all the chunk ranges we want to retrieve.
For example, if we want to retrieve chunks 0..10 of a blob, we would create a ChunkRangesSeq like this:
let request = GetRequest::builder()
.root(ChunkRanges::chunks(..10))
.build(hash);
While not that common, it is also possible to request multiple ranges of a single blob. For example, if we want to retrieve chunks 0..10 and 100..110 of a large file, we would create a GetRequest like this:
let request = GetRequest::builder()
.root(ChunkRanges::chunks(..10) | ChunkRanges::chunks(100..110))
.build(hash);
This is all great, but in most cases we are not interested in chunks but in bytes. The ChunkRanges type has a constructor that allows providing byte ranges instead of chunk ranges. These are widened out to the nearest chunk boundaries.
let request = GetRequest::builder()
.root(ChunkRanges::bytes(..1000) | ChunkRanges::bytes(10000..11000))
.build(hash);
There are also methods to request a single chunk or a single byte offset, as well as a special constructor for the last chunk of a blob.
let request = GetRequest::builder()
.root(ChunkRanges::offset(1) | ChunkRanges::last_chunk())
.build(hash);
To specify chunk ranges, we use the ChunkRanges type alias. This is actually the RangeSet type from the range_collections crate, which supports efficient boolean operations on sets of non-overlapping ranges. The RangeSet2 type is an alias for a RangeSet that can store up to 2 boundaries without allocating, which is sufficient for most use cases.
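To illustrate why the boundary representation makes boolean operations cheap, here is a simplified union over two such sets, sketched with plain slices; this is an illustration only, not the actual range_collections implementation:

```rust
/// A range set as a sorted boundary list: chunks are selected between
/// boundaries[0]..boundaries[1], boundaries[2]..boundaries[3], and so on.
/// Union is a single merge pass over both boundary lists.
fn union(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = Vec::new();
    let (mut ia, mut ib) = (0usize, 0usize);
    let mut inside = false;
    while ia < a.len() || ib < b.len() {
        // The next boundary in either set.
        let next = match (a.get(ia), b.get(ib)) {
            (Some(&x), Some(&y)) => x.min(y),
            (Some(&x), None) => x,
            (None, Some(&y)) => y,
            (None, None) => break,
        };
        // Consume that boundary from whichever set(s) it belongs to.
        if a.get(ia) == Some(&next) { ia += 1; }
        if b.get(ib) == Some(&next) { ib += 1; }
        // After an odd number of boundaries, a set is inside a selected run.
        let now_inside = ia % 2 == 1 || ib % 2 == 1;
        if now_inside != inside {
            out.push(next);
            inside = now_inside;
        }
    }
    out
}

fn main() {
    // Overlapping ranges 0..10 and 5..15 coalesce into 0..15.
    assert_eq!(union(&[0, 10], &[5, 15]), vec![0, 15]);
    // Adjacent ranges 0..10 and 10..20 coalesce into 0..20.
    assert_eq!(union(&[0, 10], &[10, 20]), vec![0, 20]);
    // Disjoint ranges stay separate.
    assert_eq!(union(&[0, 5], &[10, 20]), vec![0, 5, 10, 20]);
}
```

The pass is linear in the number of boundaries, which is what makes these sets efficient even when very fragmented.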
§Hash sequences
In this case the provider has a hash sequence that refers to multiple blobs. We want to retrieve all blobs in the hash sequence.
When used for hash sequences, the first element of a ChunkRangesSeq refers to the hash seq itself, and all subsequent elements refer to the blobs in the hash seq. When a ChunkRangesSeq specifies ranges for more than one blob, the provider will interpret this as a request for a hash seq.
One thing to note is that we might not yet know how many blobs are in the hash sequence. Therefore, it is not possible to download an entire hash seq by just specifying ChunkRanges::all() for each child explicitly. Instead, ChunkRangesSeq allows defining infinite sequences of range sets. The ChunkRangesSeq::all() method returns a ChunkRangesSeq that, when iterated over, yields ChunkRanges::all() forever.
So a get request to download a hash sequence blob and all its children would look like this:
let request = GetRequest::builder()
.root(ChunkRanges::all())
.build_open(hash); // repeats the last range forever
Downloading an entire hash seq is also a very common case, so there is a convenience method GetRequest::all that only requires the hash of the hash sequence blob.
let request = GetRequest::all(hash);
§Parts of hash sequences
The most complex common case is when we have retrieved a hash seq and its children, but were interrupted before we could retrieve all of them.
In this case we need to specify the hash seq we want to retrieve, but exclude the children and parts of children that we already have.
For example, suppose we have a hash seq with 3 children, and we already have the first child as well as the first 1000000 chunks of the second child. We would create a GetRequest like this:
let request = GetRequest::builder()
.child(1, ChunkRanges::chunks(1000000..)) // we have the first child and the start of the second
.next(ChunkRanges::all()) // we need the third and all subsequent children completely
.build_open(hash);
§Requesting chunks for each child
The ChunkRangesSeq type allows some scenarios that are not covered above. For example, you might want to request a hash seq plus the first chunk of each child blob, to do something like MIME type detection.
You do not know in advance how many children the collection has, so you need to use an infinite sequence.
let request = GetRequest::builder()
.root(ChunkRanges::all())
.next(ChunkRanges::chunk(1)) // the first chunk of each child
.build_open(hash);
§Requesting a single child
It is of course possible to request a single child of a collection. For example, the following would download the second child of a collection:
let request = GetRequest::builder()
.child(1, ChunkRanges::all()) // we need the second child completely
.build(hash);
However, if you already have the collection, you might as well locally look up the hash of the child and request it directly.
let request = GetRequest::blob(child_hash);
§Why ChunkRanges and ChunkRangesSeq?
You might wonder why we have ChunkRangesSeq, when a simple sequence of ChunkRanges might also do.
The ChunkRangesSeq type exists to provide an efficient representation of the request on the wire. In the wire encoding of a ChunkRangesSeq, ChunkRanges are encoded as alternating intervals of selected and non-selected chunks. This produces small numbers, which take few bytes on the wire with the postcard encoding format and its variable-length integers.
Likewise, the ChunkRangesSeq type uses run-length encoding to remove repeating elements. It also allows infinite sequences of ChunkRanges to be encoded, unlike a simple sequence of ChunkRanges.
ChunkRangesSeq remains efficient even in the case of very fragmented availability of chunks, such as a download from multiple providers that was frequently interrupted.
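The alternating-interval idea can be sketched as follows; this is a simplified illustration, not the actual wire format, but it shows why deltas instead of absolute offsets keep varint encodings short:

```rust
/// Encode non-overlapping, sorted chunk ranges as the widths of alternating
/// non-selected / selected intervals, starting from chunk 0.
fn to_interval_widths(ranges: &[(u64, u64)]) -> Vec<u64> {
    let mut widths = Vec::new();
    let mut pos = 0;
    for &(start, end) in ranges {
        widths.push(start - pos); // width of the non-selected gap
        widths.push(end - start); // width of the selected run
        pos = end;
    }
    widths
}

fn main() {
    // Chunks 0..10 and 100..110 become four small numbers instead of
    // large absolute offsets, so a varint format encodes them compactly.
    assert_eq!(to_interval_widths(&[(0, 10), (100, 110)]), vec![0, 10, 90, 10]);
}
```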
§Responses
The response stream contains the bao encoded bytes for the requested data. The data will be sent in the order in which it was requested, so ascending chunks for each blob, and blobs in the order in which they appear in the hash seq.
For details on the bao encoding, see the bao specification and the bao-tree crate. The bao-tree crate is identical to the bao crate, except that it allows combining multiple blake3 chunks into chunk groups for efficiency.
For a complete response, the chunks are guaranteed to completely cover the requested ranges.
There are two reasons for not receiving a complete response:
- The connection to the provider was interrupted, or the provider encountered an internal error. In this case the provider will close the entire quinn connection.
- The provider does not have the requested data, or discovered while sending that the requested data is not valid. In this case the provider will close just the stream used to send the response. The exact location of the missing data can be retrieved from the error.
§Requesting multiple unrelated blobs
Let’s say you don’t have a hash sequence on the provider side, but you nevertheless want to request multiple unrelated blobs in a single request.
For this, there is the GetManyRequest type, which also comes with a builder API.
GetManyRequest::builder()
.hash(hash1, ChunkRanges::all())
.hash(hash2, ChunkRanges::all())
.build();
If you accidentally or intentionally request ranges for the same hash multiple times, they will be merged into a single ChunkRanges.
GetManyRequest::builder()
.hash(hash1, ChunkRanges::chunk(1))
.hash(hash2, ChunkRanges::all())
.hash(hash1, ChunkRanges::last_chunk())
.build();
This is mostly useful for requesting multiple tiny blobs in a single request. For large or even medium sized blobs, multiple requests are not expensive. Multiple requests just create multiple streams on the same connection, which is very cheap in QUIC.
In case nodes are permanently exchanging data, it is somewhat valuable to keep a connection open and reuse it for multiple requests. However, creating a new connection is also very cheap, so you would only do this to optimize a large existing system that has demonstrated performance issues.
If in doubt, just use multiple requests and multiple connections.
Re-exports§
pub use crate::util::ChunkRangesExt;
Modules§
Structs§
- ChunkRangesSeq
- GetManyRequest - A GetMany request is a request to get multiple blobs via a single request.
- GetRequest - A get request
- NonEmptyRequestRangeSpecIter - An iterator over blobs in the sequence with non-empty range specs.
- ObserveItem
- ObserveRequest - A request to observe a raw blob bitfield.
- PushRequest - A push request contains a description of what to push, but will be followed by the data to push.
- RangeSpec - A chunk range specification as a sequence of chunk offsets.
- UnknownErrorCode - Unknown error_code, can not be converted into Closed.
Enums§
- Closed - Reasons to close connections or stop streams.
- Request - A request to the provider
- RequestType - This must contain the request types in the same order as the full requests
Constants§
- ALPN - The ALPN used with quic for the iroh blobs protocol.
- MAX_MESSAGE_SIZE - Maximum message size is limited to 100MiB for now.
Type Aliases§
- ChunkRanges - A set of chunk ranges