pub struct FileApi { /* private fields */ }
Handle exposing the file/data/chunk endpoints. Cheap to clone
(holds an Arc<Inner>).
Implementations

impl FileApi
pub async fn upload_file(
    &self,
    batch_id: &BatchId,
    data: impl Into<Bytes>,
    name: &str,
    content_type: &str,
    opts: Option<&FileUploadOptions>,
) -> Result<UploadResult, Error>
Upload a single file via POST /bzz. name is sent as the
name= query parameter (Bee uses it as the filename in
Content-Disposition on download). When content_type is empty
and opts does not specify one, application/octet-stream is
used.
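The content-type fallback described above can be sketched as a small pure helper. This is an illustration only: `effective_content_type` is a hypothetical name, not part of `FileApi`, and it assumes a value in `opts` takes precedence over the `content_type` argument.

```rust
// Sketch of the fallback rule: a non-empty opts value wins (assumed
// precedence), then a non-empty `content_type` argument, and finally
// application/octet-stream. `effective_content_type` is hypothetical.
fn effective_content_type(content_type: &str, opts_content_type: Option<&str>) -> String {
    opts_content_type
        .filter(|ct| !ct.is_empty())
        .or(if content_type.is_empty() { None } else { Some(content_type) })
        .unwrap_or("application/octet-stream")
        .to_string()
}

fn main() {
    // Empty argument and no opts value: fall back to octet-stream.
    println!("{}", effective_content_type("", None));
}
```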
pub async fn download_file(
    &self,
    reference: &Reference,
    opts: Option<&DownloadOptions>,
) -> Result<(Bytes, FileHeaders), Error>
Download a file via GET /bzz/{ref}. Returns the body bytes plus
the parsed FileHeaders (filename / content-type / tag UID).
pub async fn download_file_response(
    &self,
    reference: &Reference,
    opts: Option<&DownloadOptions>,
) -> Result<Response, Error>
Same as FileApi::download_file but returns the raw
reqwest::Response for streaming. The caller drives reading
from resp.bytes_stream() or resp.chunk().
pub async fn download_file_path(
    &self,
    reference: &Reference,
    path: &str,
    opts: Option<&DownloadOptions>,
) -> Result<(Bytes, FileHeaders), Error>
Download a path inside a collection via
GET /bzz/{ref}/{path}. Useful for serving individual files of a
previously uploaded site.
pub async fn upload_collection_entries(
    &self,
    batch_id: &BatchId,
    entries: &[CollectionEntry],
    opts: Option<&CollectionUploadOptions>,
) -> Result<UploadResult, Error>
Upload an in-memory collection (a slice of CollectionEntry) as a
tar stream via POST /bzz. Mirrors bee-go’s
UploadCollectionEntries and bee-js’s
makeCollectionFromFileList + bzz.uploadCollection.
pub async fn upload_collection(
    &self,
    batch_id: &BatchId,
    dir: impl AsRef<Path>,
    opts: Option<&CollectionUploadOptions>,
) -> Result<UploadResult, Error>
Walk the filesystem at dir, build a tar archive of every
regular file (relative paths preserved), and upload it via
POST /bzz. Symlinks and special files are skipped. Mirrors
bee-go’s UploadCollection.
impl FileApi
pub async fn upload_chunk(
    &self,
    batch_id: &BatchId,
    data: impl Into<Bytes>,
    opts: Option<&UploadOptions>,
) -> Result<UploadResult, Error>
Upload a single raw chunk (span || payload) via
POST /chunks.
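The span || payload wire form can be sketched as follows. This is a sketch under the assumption that the span is the payload length encoded as a little-endian u64, as in Swarm's BMT chunk format; `chunk_wire_form` is a hypothetical helper, not part of the crate.

```rust
// Sketch of the raw-chunk wire form expected by upload_chunk:
// an 8-byte span followed by the payload. The span is assumed to be
// the payload length as a little-endian u64 (Swarm BMT convention);
// verify against your Bee version. `chunk_wire_form` is hypothetical.
fn chunk_wire_form(payload: &[u8]) -> Vec<u8> {
    let mut wire = Vec::with_capacity(8 + payload.len());
    wire.extend_from_slice(&(payload.len() as u64).to_le_bytes());
    wire.extend_from_slice(payload);
    wire
}

fn main() {
    let wire = chunk_wire_form(b"hello");
    // 8-byte span + 5-byte payload = 13 bytes on the wire.
    println!("{} bytes", wire.len());
}
```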
pub async fn download_chunk(
    &self,
    reference: &Reference,
    opts: Option<&DownloadOptions>,
) -> Result<Bytes, Error>
Download a single chunk’s bytes via GET /chunks/{ref}.
impl FileApi
pub async fn upload_data(
    &self,
    batch_id: &BatchId,
    data: impl Into<Bytes>,
    opts: Option<&RedundantUploadOptions>,
) -> Result<UploadResult, Error>
Upload raw bytes via POST /bytes. The body is sent as
application/octet-stream. Returns an UploadResult with the
content reference, optional tag UID, and (when ACT was
requested) the history address.
pub async fn download_data(
    &self,
    reference: &Reference,
    opts: Option<&DownloadOptions>,
) -> Result<Bytes, Error>
Download raw bytes via GET /bytes/{ref}. Returns the full body
in memory. For streaming downloads use
FileApi::download_data_response.
pub async fn download_data_response(
    &self,
    reference: &Reference,
    opts: Option<&DownloadOptions>,
) -> Result<Response, Error>
Download raw bytes via GET /bytes/{ref} and return the raw
reqwest::Response for streaming. The caller drives reading
from resp.bytes_stream() or resp.chunk().
pub async fn probe_data(
    &self,
    reference: &Reference,
) -> Result<ReferenceInformation, Error>
Probe the size of the data behind a /bytes reference using
a HEAD request. Mirrors bee-js Bee.probeData.
impl FileApi
pub async fn create_feed_manifest(
    &self,
    batch_id: &BatchId,
    owner: &EthAddress,
    topic: &Topic,
) -> Result<Reference, Error>
POST /feeds/{owner}/{topic} — create a feed manifest for
the given pair. Returns the manifest reference.
pub async fn get_feed_lookup(
    &self,
    owner: &EthAddress,
    topic: &Topic,
) -> Result<Reference, Error>
GET /feeds/{owner}/{topic} — return the latest feed lookup.
pub async fn fetch_latest_feed_update(
    &self,
    owner: &EthAddress,
    topic: &Topic,
) -> Result<FeedUpdate, Error>
Fetch the most recent feed update.
The body is the wrapped chunk payload; the swarm-feed-index
and swarm-feed-index-next headers carry the indexes as
8-byte big-endian hex.
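Decoding those headers back to an integer can be sketched as a pure helper. `parse_feed_index_hex` is a hypothetical name, not part of the crate; it relies only on the 8-byte big-endian hex encoding stated above.

```rust
// Sketch: the swarm-feed-index / swarm-feed-index-next headers carry
// an 8-byte big-endian index as 16 hex characters. Because big-endian
// byte order matches the numeric hex representation, a plain radix-16
// parse recovers the u64. `parse_feed_index_hex` is hypothetical.
fn parse_feed_index_hex(hex: &str) -> Option<u64> {
    if hex.len() != 16 {
        return None; // not 8 bytes worth of hex
    }
    u64::from_str_radix(hex, 16).ok()
}

fn main() {
    // Index 10, as it would appear in the response header.
    println!("{:?}", parse_feed_index_hex("000000000000000a"));
}
```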
pub async fn find_next_index(
    &self,
    owner: &EthAddress,
    topic: &Topic,
) -> Result<u64, Error>
Return the index where the next feed update should be written.
Bee returns 404 / 500 when the feed is empty; this helper
translates those to 0.
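The empty-feed translation described above can be sketched as a status-code mapping. This is an assumption-laden illustration: `next_index_from_response` is a hypothetical helper, and the exact error statuses Bee returns may vary by version.

```rust
// Sketch of the translation described above: 404 / 500 from Bee is
// taken to mean "feed has no updates yet", so the next write goes to
// index 0. `next_index_from_response` is hypothetical, not crate API.
fn next_index_from_response(status: u16, next_index: Option<u64>) -> Option<u64> {
    match status {
        200 => next_index,    // Bee reported the next index itself
        404 | 500 => Some(0), // empty feed: start at index 0
        _ => None,            // a real error; surface it to the caller
    }
}

fn main() {
    println!("{:?}", next_index_from_response(404, None));
}
```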
pub async fn update_feed(
    &self,
    batch_id: &BatchId,
    signer: &PrivateKey,
    topic: &Topic,
    data: &[u8],
) -> Result<UploadResult, Error>
Update the feed at the next available index. The chunk payload
is BE-uint64(timestamp) || data. Mirrors bee-js
updateFeedWithPayload.
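Building that payload can be sketched with std only; `feed_update_payload` is a hypothetical helper, not part of the crate, and follows the BE-uint64(timestamp) || data layout stated above.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Sketch of the feed-update chunk payload described above: an 8-byte
// big-endian unix timestamp followed by the user data.
// `feed_update_payload` is hypothetical, not crate API.
fn feed_update_payload(timestamp: u64, data: &[u8]) -> Vec<u8> {
    let mut payload = Vec::with_capacity(8 + data.len());
    payload.extend_from_slice(&timestamp.to_be_bytes());
    payload.extend_from_slice(data);
    payload
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    let payload = feed_update_payload(now, b"hello feed");
    println!("{} bytes", payload.len());
}
```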
pub async fn update_feed_with_reference(
    &self,
    batch_id: &BatchId,
    signer: &PrivateKey,
    topic: &Topic,
    reference: &Reference,
    index: Option<u64>,
) -> Result<UploadResult, Error>
Update the feed to point at reference. The chunk payload is
BE-uint64(timestamp) || reference (32 or 64 bytes). If index is
None, FileApi::find_next_index is called.
pub async fn update_feed_with_index(
    &self,
    batch_id: &BatchId,
    signer: &PrivateKey,
    topic: &Topic,
    index: u64,
    data: &[u8],
) -> Result<UploadResult, Error>
Update the feed at a specific index.
The chunk identifier is keccak256(topic || BE-uint64(index));
the payload is BE-uint64(now_unix_seconds) || data. The
chunk is signed via SOC and uploaded to /soc/{owner}/{id}.
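The identifier pre-image can be sketched without any hashing dependency: the 32-byte topic concatenated with the 8-byte big-endian index, 40 bytes in total. The SOC identifier is keccak256 of these bytes; the hash itself is omitted here to keep the sketch std-only (use a keccak crate in real code). `identifier_preimage` is a hypothetical helper.

```rust
// Sketch of the keccak256 pre-image described above:
// topic (32 bytes) || BE-uint64(index). Hashing is intentionally
// left out; feed this 40-byte buffer to keccak256 in real code.
// `identifier_preimage` is hypothetical, not crate API.
fn identifier_preimage(topic: &[u8; 32], index: u64) -> [u8; 40] {
    let mut preimage = [0u8; 40];
    preimage[..32].copy_from_slice(topic);
    preimage[32..].copy_from_slice(&index.to_be_bytes());
    preimage
}

fn main() {
    let preimage = identifier_preimage(&[0u8; 32], 1);
    println!("{} bytes", preimage.len());
}
```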
pub async fn is_feed_retrievable(
    &self,
    owner: &EthAddress,
    topic: &Topic,
    index: Option<u64>,
    opts: Option<&DownloadOptions>,
) -> Result<bool, Error>
True iff the feed currently resolves on the network. If
index is None, only the latest update is checked. If
index is Some(i), every chunk from 0 through i is
checked via FileApi::are_all_sequential_feeds_update_retrievable.
pub async fn are_all_sequential_feeds_update_retrievable(
    &self,
    owner: &EthAddress,
    topic: &Topic,
    index: u64,
    opts: Option<&DownloadOptions>,
) -> Result<bool, Error>
True iff every feed-update chunk from 0 through index
(inclusive) is currently retrievable. Used to validate that a
feed can be replayed from its origin.
pub fn make_feed_reader(&self, owner: EthAddress, topic: Topic) -> FeedReader
Construct a FeedReader bound to (owner, topic). Mirrors
bee-js Bee.makeFeedReader.
pub fn make_feed_writer(
    &self,
    signer: PrivateKey,
    topic: Topic,
) -> Result<FeedWriter, Error>
Construct a FeedWriter bound to (signer, topic). Owner is
derived from signer.public_key().address(). Mirrors bee-js
Bee.makeFeedWriter.
impl FileApi
pub async fn upload_soc(
    &self,
    batch_id: &BatchId,
    owner: &EthAddress,
    identifier: &Identifier,
    signature: &Signature,
    data: impl Into<Bytes>,
    opts: Option<&UploadOptions>,
) -> Result<UploadResult, Error>
Upload a Single Owner Chunk to POST /soc/{owner}/{id}?sig=….
data must be the SOC body in wire form: span (8) || payload.
Mirrors bee-go (*Service).UploadSOC.
pub fn make_soc_reader(&self, owner: EthAddress) -> SocReader
Construct a SocReader for the given owner.
pub fn make_soc_writer(&self, signer: PrivateKey) -> Result<SocWriter, Error>
Construct a SocWriter for the given signer. Owner is
derived from signer.public_key().address().
impl FileApi
pub async fn save_manifest_recursively(
    &self,
    node: &mut MantarayNode,
    batch_id: &BatchId,
    opts: Option<&UploadOptions>,
) -> Result<UploadResult, Error>
Persist a MantarayNode tree recursively, depth-first.
Mirrors bee-js MantarayNode.saveRecursively — each child is
uploaded first (so its self_address is populated), then the
node itself is marshaled and uploaded via /bytes. The
resulting reference is stored on the node’s self_address and
returned to the caller.
pub async fn stream_directory(
    &self,
    batch_id: &BatchId,
    dir: impl AsRef<Path>,
    opts: Option<&CollectionUploadOptions>,
    on_progress: Option<OnStreamProgressFn>,
) -> Result<UploadResult, Error>
Stream a directory upload chunk-by-chunk.
Each regular file under dir is content-addressed via
FileChunker, the resulting chunks are uploaded via POST /chunks
with up to 64 concurrent in-flight uploads, and a Mantaray
manifest is assembled with one fork per file (path →
content-addressed root). Finally
FileApi::save_manifest_recursively is called to persist the
manifest, and its reference is returned.
Mirrors bee-js Bee.streamDirectory. The on_progress callback
fires once per uploaded chunk with (processed, total) counts.
Differences from bee-js:
- File contents are read fully into memory before being fed to the chunker. True file streaming (read → seal → upload as a pipeline) can be added later if a real use case lands.
- Per-file metadata (Content-Type / Filename) is not yet set on the Mantaray fork — bee-js sets Content-Type from the file extension, but bee-rs leaves manifests metadata-free for now; see Self::upload_collection for the tar-based path that lets Bee infer types server-side.
pub async fn stream_collection_entries(
    &self,
    batch_id: &BatchId,
    entries: &[CollectionEntry],
    opts: Option<&CollectionUploadOptions>,
    on_progress: Option<OnStreamProgressFn>,
) -> Result<UploadResult, Error>
Same as Self::stream_directory but takes pre-built
in-memory entries instead of walking the filesystem.