pub struct Array<TStorage: ?Sized> { /* private fields */ }
Expand description
A Zarr array.
§Initialisation
The easiest way to create a new Zarr V3 array is with an ArrayBuilder.
Alternatively, a new Zarr V2 or Zarr V3 array can be created with Array::new_with_metadata.
An existing Zarr V2 or Zarr V3 array can be initialised with Array::open or Array::open_opt with metadata read from the store.
Array initialisation will error if ArrayMetadata contains:
- unsupported extension points, including extensions which are supported by zarrs but have not been enabled with the appropriate feature gates,
- incompatible codecs (e.g. codecs in the wrong order, codecs incompatible with the data type, etc.),
- a chunk grid incompatible with the array shape,
- a fill value incompatible with the data type, or
- metadata that is invalid in some other way.
§Array Metadata
Array metadata must be explicitly stored with store_metadata or store_metadata_opt if an array is newly created or its metadata has been mutated.
The underlying metadata of an Array can be accessed with metadata or metadata_opt. The latter accepts ArrayMetadataOptions that can be used to convert array metadata from Zarr V2 to V3, for example. metadata_opt is used internally by store_metadata / store_metadata_opt.
Use serde_json::to_string or serde_json::to_string_pretty on ArrayMetadata to convert it to a JSON string.
§Immutable Array Metadata / Properties
- metadata: the underlying ArrayMetadata structure containing all array metadata
- data_type
- fill_value
- chunk_grid
- chunk_key_encoding
- codecs
- storage_transformers
- path
§Mutable Array Metadata
Do not forget to store metadata after mutation.
§zarrs
Metadata
By default, the zarrs version and a link to its source code are written to the _zarrs attribute in array metadata when calling store_metadata.
Override this behaviour globally with Config::set_include_zarrs_metadata or call store_metadata_opt with an explicit ArrayMetadataOptions.
§Array Data
Array operations are divided into several categories based on the traits implemented for the backing storage. The core array methods are:
- [Async]ReadableStorageTraits: read array data and metadata
- [Async]WritableStorageTraits: store/erase array data and metadata
- [Async]ReadableWritableStorageTraits: store operations requiring reading and writing
Many retrieve and store methods have multiple variants:
- Standard variants store or retrieve data represented as ArrayBytes (representing fixed or variable length bytes).
- _elements suffix variants can store or retrieve chunks with a known type.
- _ndarray suffix variants can store or retrieve ndarray::Arrays (requires the ndarray feature).
- _opt suffix variants have a CodecOptions parameter for fine-grained concurrency control and more. Variants without the _opt suffix use default CodecOptions.
- Experimental: async_ prefix variants can be used with async stores (requires the async feature).
Additional methods are offered by extension traits:
- ArrayShardedExt and ArrayShardedReadableExt: see Reading Sharded Arrays.
- ArrayChunkCacheExt: see Chunk Caching.
- [Async]ArrayDlPackExt: methods for DLPack tensor interop.
§Chunks and Array Subsets
Several convenience methods are available for querying the underlying chunk grid:
- chunk_origin
- chunk_shape
- chunk_subset
- chunk_subset_bounded
- chunks_subset / chunks_subset_bounded
- chunks_in_array_subset

An ArraySubset spanning the entire array can be retrieved with subset_all.
§Example: Update an Array Chunk-by-Chunk (in Parallel)
In the example below, an array is updated chunk-by-chunk in parallel.
This makes use of chunk_subset_bounded to retrieve and store only the portion of each chunk that lies within the array bounds.
Chunks can extend beyond the array bounds when a regular chunk grid does not evenly divide the array shape, for example.
// Get an iterator over the chunk indices
// The array shape must have been set (i.e. non-zero), otherwise the
// iterator will be empty
let chunk_grid_shape = array.chunk_grid_shape().unwrap();
let chunks: Indices = ArraySubset::new_with_shape(chunk_grid_shape).indices();
// Iterate over chunk indices (in parallel)
chunks.into_par_iter().try_for_each(|chunk_indices: Vec<u64>| {
// Retrieve the array subset of the chunk within the array bounds
// This partially decodes chunks that extend beyond the array end
let subset: ArraySubset = array.chunk_subset_bounded(&chunk_indices)?;
let chunk_bytes: ArrayBytes = array.retrieve_array_subset(&subset)?;
// ... Update the chunk bytes
// Write the updated chunk
// Elements beyond the array bounds in straddling chunks are left
// unmodified or set to the fill value if the chunk did not exist.
array.store_array_subset(&subset, chunk_bytes)
})?;
§Optimising Writes
For optimum write performance, an array should be written using store_chunk
or store_chunks
where possible.
store_chunk_subset
and store_array_subset
may incur decoding overhead, and they require careful usage if executed in parallel (see Parallel Writing below).
However, these methods will use a fast path and avoid decoding if the subset covers entire chunks.
§Direct IO (Linux)
If using Linux, enabling direct IO with the FilesystemStore may improve write performance.
Currently, the most performant path for uncompressed writing is to reuse page aligned buffers via store_encoded_chunk. See zarrs GitHub issue #58 for a discussion on this method.
§Parallel Writing
zarrs
does not currently offer a “synchronisation” API for locking chunks or array subsets.
It is the responsibility of zarrs
consumers to ensure that chunks are not written to concurrently.
If a chunk is written more than once, its element values depend on whichever operation wrote to the chunk last.
The store_chunk_subset
and store_array_subset
methods and their variants internally retrieve, update, and store chunks.
So do partial_encoders, which may be used internally by the above methods.
It is the responsibility of zarrs
consumers to ensure that:
- store_array_subset is not called concurrently on array subsets sharing chunks,
- store_chunk_subset is not called concurrently on the same chunk,
- partial_encoders are not created or used concurrently for the same chunk, and
- no combination of the above is called concurrently on the same chunk.
Partial writes to a chunk may be lost if these rules are not respected.
§Optimising Reads
It is fastest to load arrays using retrieve_chunk
or retrieve_chunks
where possible.
In contrast, retrieve_chunk_subset and retrieve_array_subset may use partial decoders, which can be less efficient with some codecs/stores.
Like their write counterparts, these methods will use a fast path if subsets cover entire chunks.
Standard Array
retrieve methods do not perform any caching.
For this reason, retrieving multiple subsets in a chunk with retrieve_chunk_subset
is very inefficient and strongly discouraged.
For example, consider that a compressed chunk may need to be retrieved and decoded in its entirety even if only a small part of the data is needed.
In such situations, prefer to initialise a partial decoder for a chunk with partial_decoder and then retrieve multiple chunk subsets with partial_decode.
The underlying codec chain will use a cache where efficient to optimise multiple partial decoding requests (see CodecChain
).
Another alternative is to use Chunk Caching.
§Chunk Caching
The ArrayChunkCacheExt
trait adds Array
retrieve methods that utilise chunk caching:
- retrieve_chunk_opt_cached
- retrieve_chunks_opt_cached
- retrieve_chunk_subset_opt_cached
- retrieve_array_subset_opt_cached

_elements and _ndarray variants are also available.
Each method has a cache parameter that implements the ChunkCache trait.
Several Least Recently Used (LRU) chunk caches are provided by zarrs:
- ChunkCacheDecodedLruChunkLimit: a decoded chunk cache with a fixed chunk capacity.
- ChunkCacheEncodedLruChunkLimit: an encoded chunk cache with a fixed chunk capacity.
- ChunkCacheDecodedLruSizeLimit: a decoded chunk cache with a fixed size in bytes.
- ChunkCacheEncodedLruSizeLimit: an encoded chunk cache with a fixed size in bytes.
There are also ThreadLocal
suffixed variants of all of these caches that have a per-thread cache.
zarrs consumers can create custom caches by implementing the ChunkCache trait.
- Chunk caching is likely to be effective for remote stores where redundant retrievals are costly.
- Chunk caching may not outperform disk caching with a filesystem store.
- The above caches use internal locking to support multithreading, which has a performance overhead. Prefer not to use a chunk cache if chunks are not accessed repeatedly.
- Cached retrieve methods do not use partial decoders, and any intersected chunk is fully decoded if not present in the cache.
- The encoded chunk caches may be optimal if dealing with highly compressed/sparse data with a fast codec. However, the decoded chunk caches are likely to be more performant in most cases.
For many access patterns, chunk caching may reduce performance. Benchmark your algorithm/data.
§Reading Sharded Arrays
The sharding_indexed
codec (ShardingCodec
) enables multiple sub-chunks (“inner chunks”) to be stored in a single chunk (“shard”).
With a sharded array, the chunk_grid
and chunk indices in store/retrieve methods reference the chunks (“shards”) of an array.
The ArrayShardedExt
trait provides additional methods to Array
to query if an array is sharded and retrieve the inner chunk shape.
Additionally, the inner chunk grid can be queried, which is a ChunkGrid
where chunk indices refer to inner chunks rather than shards.
The ArrayShardedReadableExt
trait adds Array
methods to conveniently and efficiently access the data in a sharded array (with _elements and _ndarray variants).
For unsharded arrays, these methods gracefully fall back to referencing standard chunks.
Each method has a cache
parameter (ArrayShardedReadableExtCache
) that stores shard indexes so that they do not have to be repeatedly retrieved and decoded.
§Parallelism and Concurrency
§Sync API
Codecs run in parallel using a dedicated threadpool.
Array store and retrieve methods will also run in parallel when they involve multiple chunks.
zarrs
will automatically choose where to prioritise parallelism between codecs/chunks based on the codecs and number of chunks.
By default, all available CPU cores will be used (where possible/efficient).
Concurrency can be limited globally with Config::set_codec_concurrent_target or as required using _opt methods with a CodecOptions manipulated with CodecOptions::set_concurrent_target.
§Async API
This crate is async runtime-agnostic. Async methods do not spawn tasks internally, so asynchronous storage calls are concurrent but not parallel. Codec encoding and decoding operations still execute in parallel (where supported) in an asynchronous context.
Due to the lack of parallelism, methods like async_retrieve_array_subset or async_retrieve_chunks do not parallelise over chunks and can be slow compared to the sync API. Parallelism over chunks can be achieved by spawning tasks outside of zarrs.
A crate like async-scoped
can enable spawning non-'static
futures.
If executing many tasks concurrently, consider reducing the codec concurrent_target
.
§Implementations
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> Array<TStorage>
pub fn open(
    storage: Arc<TStorage>,
    path: &str,
) -> Result<Self, ArrayCreateError>
Open an existing array in storage at path with default MetadataRetrieveVersion.
The metadata is read from the store.
§Errors
Returns ArrayCreateError
if there is a storage error or any metadata is invalid.
Examples found in repository:
fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    // let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
    let block_on = TokioBlockOn(tokio::runtime::Runtime::new()?);
    let mut store: ReadableStorage = match backend {
        Backend::OpenDAL => {
            let builder = opendal::services::Http::default().endpoint(HTTP_URL);
            let operator = opendal::Operator::new(builder)?.finish();
            let store = Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator));
            Arc::new(AsyncToSyncStorageAdapter::new(store, block_on))
        }
        Backend::ObjectStore => {
            let options = object_store::ClientOptions::new().with_allow_http(true);
            let store = object_store::http::HttpBuilder::new()
                .with_url(HTTP_URL)
                .with_client_options(options)
                .build()?;
            let store = Arc::new(zarrs_object_store::AsyncObjectStore::new(store));
            Arc::new(AsyncToSyncStorageAdapter::new(store, block_on))
        }
    };
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Init the existing array, reading metadata
    let array = Array::open(store, ARRAY_PATH)?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Read the whole array
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
pub fn open_opt(
    storage: Arc<TStorage>,
    path: &str,
    version: &MetadataRetrieveVersion,
) -> Result<Self, ArrayCreateError>
Open an existing array in storage at path with non-default MetadataRetrieveVersion.
The metadata is read from the store.
§Errors
Returns ArrayCreateError
if there is a storage error or any metadata is invalid.
pub fn retrieve_chunk_if_exists(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<ArrayBytes<'_>>, ArrayError>
Read and decode the chunk at chunk_indices
into its bytes if it exists with default codec options.
§Errors
Returns an ArrayError
if:
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
§Panics
Panics if the number of elements in the chunk exceeds usize::MAX.
pub fn retrieve_chunk_elements_if_exists<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<Vec<T>>, ArrayError>
Read and decode the chunk at chunk_indices
into a vector of its elements if it exists with default codec options.
§Errors
Returns an ArrayError
if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
pub fn retrieve_chunk_ndarray_if_exists<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunk at chunk_indices into an ndarray::ArrayD if it exists.
§Errors
Returns an ArrayError
if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- the chunk indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if a chunk dimension is larger than usize::MAX.
pub fn retrieve_encoded_chunk(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<Vec<u8>>, StorageError>
Retrieve the encoded bytes of a chunk.
§Errors
Returns a StorageError if there is an underlying store error.
pub fn retrieve_chunk(
    &self,
    chunk_indices: &[u64],
) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the chunk at chunk_indices
into its bytes or the fill value if it does not exist with default codec options.
§Errors
Returns an ArrayError
if:
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
§Panics
Panics if the number of elements in the chunk exceeds usize::MAX.
pub fn retrieve_chunk_elements<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
) -> Result<Vec<T>, ArrayError>
Read and decode the chunk at chunk_indices
into a vector of its elements or the fill value if it does not exist.
§Errors
Returns an ArrayError
if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
pub fn retrieve_chunk_ndarray<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunk at chunk_indices into an ndarray::ArrayD. It is filled with the fill value if it does not exist.
§Errors
Returns an ArrayError
if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- the chunk indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if a chunk dimension is larger than usize::MAX.
Examples found in repository: http_array_read (shown above under open).
More examples
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        &[123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        &[-4.0; 4],
    )?;

    // Read the whole array
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The Zarr hierarchy tree is:\n{tree}");

    Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(
    //     zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
    // );
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = array.subset_all();
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        &[
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        &[-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use zarrs::{
        array::{
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(
    //     zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
    // );
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // The array metadata is
    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Use default codec options (concurrency etc)
    let options = CodecOptions::default();

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes =
        partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
    {
        let ndarray = bytes_to_ndarray::<u16>(
            &inner_chunk_shape,
            decoded_inner_chunk.into_fixed()?.into_owned(),
        )?;
        println!("{inner_chunk_subset}\n{ndarray}\n");
    }

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The Zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(
    //     zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
    // );
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = array.subset_all();
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
pub fn retrieve_encoded_chunks(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<Option<Vec<u8>>>, StorageError>
Retrieve the encoded bytes of the chunks in chunks.
The chunks are in the order of the chunk indices returned by chunks.indices().into_iter().
§Errors
Returns a StorageError if there is an underlying store error.
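The ordering guarantee above determines which entry of the returned Vec belongs to which chunk. As a standalone sketch (no zarrs dependency), the hypothetical helper chunk_indices below reproduces the row-major (lexicographic) traversal that ArraySubset::indices is assumed to use, so entry k of the retrieved Vec corresponds to chunk_indices(...)[k]:

```rust
// Hypothetical stand-in for ArraySubset::indices: enumerate the chunk indices
// of a rectangular chunk subset, given as per-dimension ranges, in row-major
// (lexicographic) order -- the last dimension varies fastest.
fn chunk_indices(ranges: &[std::ops::Range<u64>]) -> Vec<Vec<u64>> {
    let mut out = vec![vec![]];
    for r in ranges {
        out = out
            .into_iter()
            .flat_map(|prefix| {
                r.clone().map(move |i| {
                    let mut v = prefix.clone();
                    v.push(i);
                    v
                })
            })
            .collect();
    }
    out
}

fn main() {
    // The chunk subset [0..2, 1..3] contains four chunks; a Vec returned by
    // retrieve_encoded_chunks for this subset would pair up element-wise
    // with this ordering.
    let order = chunk_indices(&[0..2, 1..3]);
    assert_eq!(order, vec![vec![0, 1], vec![0, 2], vec![1, 1], vec![1, 2]]);
    println!("{order:?}");
}
```

The helper and its name are illustrative only; in zarrs itself, iterate chunks.indices() directly rather than recomputing the order.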
pub fn retrieve_chunks(
    &self,
    chunks: &ArraySubset,
) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the chunks at chunks into their bytes.
§Errors
Returns an ArrayError if:
- any chunk indices in chunks are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
pub fn retrieve_chunks_elements<T: ElementOwned>(
    &self,
    chunks: &ArraySubset,
) -> Result<Vec<T>, ArrayError>
Read and decode the chunks at chunks into a vector of their elements.
§Errors
Returns an ArrayError if any chunk indices in chunks are invalid, or on any error condition of Array::retrieve_chunks_opt.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
pub fn retrieve_chunks_ndarray<T: ElementOwned>(
    &self,
    chunks: &ArraySubset,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunks at chunks into an ndarray::ArrayD.
§Errors
Returns an ArrayError if any chunk indices in chunks are invalid, or on any error condition of Array::retrieve_chunks_elements_opt.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
Examples found in repository
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(
    //     zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
    // );
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = array.subset_all();
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        &[
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        &[-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
pub fn retrieve_chunk_subset(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into its bytes.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if the number of elements in chunk_subset is usize::MAX or larger.
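Note that chunk_subset is expressed relative to the chunk origin, not the array origin. For a regular chunk grid, an array-relative subset can be converted by subtracting the chunk origin (chunk indices times chunk shape). A standalone sketch with a hypothetical helper (chunk_relative is not a zarrs API):

```rust
// Hypothetical helper: convert an array-relative subset (per-dimension
// ranges) into the chunk-relative subset expected by the
// retrieve_chunk_subset / store_chunk_subset family, assuming a regular
// chunk grid where chunk `i` starts at `i * chunk_shape` in each dimension.
fn chunk_relative(
    array_ranges: &[std::ops::Range<u64>],
    chunk_indices: &[u64],
    chunk_shape: &[u64],
) -> Vec<std::ops::Range<u64>> {
    array_ranges
        .iter()
        .zip(chunk_indices.iter().zip(chunk_shape))
        .map(|(r, (i, s))| (r.start - i * s)..(r.end - i * s))
        .collect()
}

fn main() {
    // Array rows 7..8, columns 4..8 fall inside chunk [1, 1] of a grid with
    // chunk shape [4, 4]; relative to that chunk's origin they are
    // [3..4, 0..4], matching the store_chunk_subset repository examples.
    let rel = chunk_relative(&[7..8, 4..8], &[1, 1], &[4, 4]);
    assert_eq!(rel, vec![3..4, 0..4]);
    println!("{rel:?}");
}
```

The caller is responsible for ensuring the array-relative ranges actually lie within the chosen chunk; otherwise the subtraction underflows.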
pub fn retrieve_chunk_subset_elements<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
) -> Result<Vec<T>, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into its elements.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
pub fn retrieve_chunk_subset_ndarray<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunk_subset of the chunk at chunk_indices into an ndarray::ArrayD.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if the number of elements in chunk_subset is usize::MAX or larger.
pub fn retrieve_array_subset(
    &self,
    array_subset: &ArraySubset,
) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the array_subset of the array into its bytes.
Out-of-bounds elements will have the fill value.
§Errors
Returns an ArrayError if:
- the array_subset dimensionality does not match the chunk grid dimensionality,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if attempting to reference a byte beyond usize::MAX.
Examples found in repository
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(
    //     zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
    // );
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![4, 4], // array shape
        DataType::String,
        vec![2, 2].try_into()?, // regular chunk shape
        FillValue::from("_"),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    array.store_chunk_ndarray(
        &[0, 0],
        ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
    )?;
    array.store_chunk_ndarray(
        &[0, 1],
        ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
    )?;
    let subset_all = array.subset_all();
    let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[1..3, 1..3]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
    println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");

    // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
    // TODO: Add a convenience function for this?
    let data_all = array.retrieve_array_subset(&subset_all)?;
    let (bytes, offsets) = data_all.into_variable()?;
    let string = String::from_utf8(bytes.into_owned())?;
    let elements = offsets
        .iter()
        .tuple_windows()
        .map(|(&curr, &next)| &string[curr..next])
        .collect::<Vec<&str>>();
    let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
    println!("ndarray::ArrayD<&str>:\n{ndarray}");

    Ok(())
}
pub fn retrieve_array_subset_elements<T: ElementOwned>(
    &self,
    array_subset: &ArraySubset,
) -> Result<Vec<T>, ArrayError>
Read and decode the array_subset of the array into a vector of its elements.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- an array subset is invalid or out of bounds of the array,
- there is a codec decoding error, or
- an underlying store error.
Examples found in repository
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let array = ArrayBuilder::new(
        vec![4, 1], // array shape
        DataType::Extension(Arc::new(CustomDataTypeVariableSize)),
        vec![3, 1].try_into().unwrap(), // regular chunk shape
        FillValue::from(vec![]),
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data = [
        CustomDataTypeVariableSizeElement::from(Some(1.0)),
        CustomDataTypeVariableSizeElement::from(None),
        CustomDataTypeVariableSizeElement::from(Some(3.0)),
    ];
    array.store_chunk_elements(&[0, 0], &data).unwrap();

    let data = array
        .retrieve_array_subset_elements::<CustomDataTypeVariableSizeElement>(&array.subset_all())
        .unwrap();

    assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
    assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
    assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
    assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));

    println!("{data:#?}");
}
More examples
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
    let array = ArrayBuilder::new(
        vec![4, 1], // array shape
        DataType::Extension(Arc::new(CustomDataTypeFixedSize)),
        vec![2, 1].try_into().unwrap(), // regular chunk shape
        FillValue::new(fill_value.to_ne_bytes().to_vec()),
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data = [
        CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
        CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
    ];
    array.store_chunk_elements(&[0, 0], &data).unwrap();

    let data = array
        .retrieve_array_subset_elements::<CustomDataTypeFixedSizeElement>(&array.subset_all())
        .unwrap();

    assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
    assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
    assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
    assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });

    println!("{data:#?}");
}
205fn main() {
206 let store = std::sync::Arc::new(MemoryStore::default());
207 let array_path = "/array";
208 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
209 let array = ArrayBuilder::new(
210 vec![4096, 1], // array shape
211 DataType::Extension(Arc::new(CustomDataTypeUInt12)),
212 vec![5, 1].try_into().unwrap(), // regular chunk shape
213 FillValue::new(fill_value.to_le_bytes().to_vec()),
214 )
215 .array_to_array_codecs(vec![
216 #[cfg(feature = "transpose")]
217 Arc::new(zarrs::array::codec::TransposeCodec::new(
218 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
219 )),
220 ])
221 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
222 .bytes_to_bytes_codecs(vec![
223 #[cfg(feature = "gzip")]
224 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
225 #[cfg(feature = "crc32c")]
226 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
227 ])
228 // .storage_transformers(vec![].into())
229 .build(store, array_path)
230 .unwrap();
231 println!("{}", array.metadata().to_string_pretty());
232
233 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
234 .into_iter()
235 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
236 .collect();
237
238 array
239 .store_array_subset_elements(&array.subset_all(), &data)
240 .unwrap();
241
242 let data = array
243 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(&array.subset_all())
244 .unwrap();
245
246 for i in 0usize..4096 {
247 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
248 assert_eq!(data[i], element);
249 let element_pd = array
250 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(
251 &ArraySubset::new_with_ranges(&[(i as u64)..i as u64 + 1, 0..1]),
252 )
253 .unwrap()[0];
254 assert_eq!(element_pd, element);
255 }
256}
217fn main() {
218 let store = std::sync::Arc::new(MemoryStore::default());
219 let array_path = "/array";
220 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
221 let array = ArrayBuilder::new(
222 vec![6, 1], // array shape
223 DataType::Extension(Arc::new(CustomDataTypeFloat8e3m4)),
224 vec![5, 1].try_into().unwrap(), // regular chunk shape
225 FillValue::new(fill_value.to_ne_bytes().to_vec()),
226 )
227 .array_to_array_codecs(vec![
228 #[cfg(feature = "transpose")]
229 Arc::new(zarrs::array::codec::TransposeCodec::new(
230 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
231 )),
232 ])
233 .bytes_to_bytes_codecs(vec![
234 #[cfg(feature = "gzip")]
235 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
236 #[cfg(feature = "crc32c")]
237 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
238 ])
239 // .storage_transformers(vec![].into())
240 .build(store, array_path)
241 .unwrap();
242 println!("{}", array.metadata().to_string_pretty());
243
244 let data = [
245 CustomDataTypeFloat8e3m4Element::from(2.34),
246 CustomDataTypeFloat8e3m4Element::from(3.45),
247 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
248 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
249 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
250 ];
251 array.store_chunk_elements(&[0, 0], &data).unwrap();
252
253 let data = array
254 .retrieve_array_subset_elements::<CustomDataTypeFloat8e3m4Element>(&array.subset_all())
255 .unwrap();
256
257 for f in &data {
258 println!(
259 "float8_e3m4: {:08b} f32: {}",
260 f.to_ne_bytes()[0],
261 f.as_f32()
262 );
263 }
264
265 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
266 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
267 assert_eq!(
268 data[2],
269 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
270 );
271 assert_eq!(
272 data[3],
273 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
274 );
275 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
276 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
277}
203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 DataType::Extension(Arc::new(CustomDataTypeUInt4)),
210 vec![5, 1].try_into().unwrap(), // regular chunk shape
211 FillValue::new(fill_value.to_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
220 .bytes_to_bytes_codecs(vec![
221 #[cfg(feature = "gzip")]
222 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
223 #[cfg(feature = "crc32c")]
224 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
225 ])
226 // .storage_transformers(vec![].into())
227 .build(store, array_path)
228 .unwrap();
229 println!("{}", array.metadata().to_string_pretty());
230
231 let data = [
232 CustomDataTypeUInt4Element::try_from(1).unwrap(),
233 CustomDataTypeUInt4Element::try_from(2).unwrap(),
234 CustomDataTypeUInt4Element::try_from(3).unwrap(),
235 CustomDataTypeUInt4Element::try_from(4).unwrap(),
236 CustomDataTypeUInt4Element::try_from(5).unwrap(),
237 ];
238 array.store_chunk_elements(&[0, 0], &data).unwrap();
239
240 let data = array
241 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(&array.subset_all())
242 .unwrap();
243
244 for f in &data {
245 println!("uint4: {:08b} u8: {}", f.as_u8(), f.as_u8());
246 }
247
248 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
249 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
250 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
251 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
252 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
253 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
254
255 let data = array
256 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(
257 &ArraySubset::new_with_ranges(&[1..3, 0..1]),
258 )
259 .unwrap();
260 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
261 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
262}
pub fn retrieve_array_subset_ndarray<T: ElementOwned>(
    &self,
    array_subset: &ArraySubset,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray
only.
Read and decode the array_subset
of an array into an ndarray::ArrayD
.
§Errors
Returns an ArrayError
if:
- the array subset is invalid or out of bounds of the array,
- there is a codec decoding error, or
- there is an underlying store error.
§Panics
Will panic if any dimension in array_subset
is usize::MAX
or larger.
Examples found in repository
30fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
31 const HTTP_URL: &str =
32 "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
33 const ARRAY_PATH: &str = "/group/array";
34
35 // Create a HTTP store
36 // let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
37 let block_on = TokioBlockOn(tokio::runtime::Runtime::new()?);
38 let mut store: ReadableStorage = match backend {
39 Backend::OpenDAL => {
40 let builder = opendal::services::Http::default().endpoint(HTTP_URL);
41 let operator = opendal::Operator::new(builder)?.finish();
42 let store = Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator));
43 Arc::new(AsyncToSyncStorageAdapter::new(store, block_on))
44 }
45 Backend::ObjectStore => {
46 let options = object_store::ClientOptions::new().with_allow_http(true);
47 let store = object_store::http::HttpBuilder::new()
48 .with_url(HTTP_URL)
49 .with_client_options(options)
50 .build()?;
51 let store = Arc::new(zarrs_object_store::AsyncObjectStore::new(store));
52 Arc::new(AsyncToSyncStorageAdapter::new(store, block_on))
53 }
54 };
55 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
56 if arg1 == "--usage-log" {
57 let log_writer = Arc::new(std::sync::Mutex::new(
58 // std::io::BufWriter::new(
59 std::io::stdout(),
60 // )
61 ));
62 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
63 chrono::Utc::now().format("[%T%.3f] ").to_string()
64 }));
65 }
66 }
67
68 // Init the existing array, reading metadata
69 let array = Array::open(store, ARRAY_PATH)?;
70
71 println!(
72 "The array metadata is:\n{}\n",
73 array.metadata().to_string_pretty()
74 );
75
76 // Read the whole array
77 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
78 println!("The whole array is:\n{data_all}\n");
79
80 // Read a chunk back from the store
81 let chunk_indices = vec![1, 0];
82 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
83 println!("Chunk [1,0] is:\n{data_chunk}\n");
84
85 // Read the central 4x2 subset of the array
86 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
87 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
88 println!("The middle 4x2 subset is:\n{data_4x2}\n");
89
90 Ok(())
91}
More examples
10fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12 use zarrs::{
13 array::{DataType, FillValue},
14 array_subset::ArraySubset,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![4, 4], // array shape
61 DataType::String,
62 vec![2, 2].try_into()?, // regular chunk shape
63 FillValue::from("_"),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 array.store_chunk_ndarray(
80 &[0, 0],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
82 )?;
83 array.store_chunk_ndarray(
84 &[0, 1],
85 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
86 )?;
87 let subset_all = array.subset_all();
88 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
90
91 // Write a subset spanning multiple chunks, including updating chunks already written
92 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
93 array.store_array_subset_ndarray(
94 ArraySubset::new_with_ranges(&[1..3, 1..3]).start(),
95 ndarray_subset,
96 )?;
97 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
98 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
99
100 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
101 // TODO: Add a convenience function for this?
102 let data_all = array.retrieve_array_subset(&subset_all)?;
103 let (bytes, offsets) = data_all.into_variable()?;
104 let string = String::from_utf8(bytes.into_owned())?;
105 let elements = offsets
106 .iter()
107 .tuple_windows()
108 .map(|(&curr, &next)| &string[curr..next])
109 .collect::<Vec<&str>>();
110 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
111 println!("ndarray::ArrayD<&str>:\n{ndarray}");
112
113 Ok(())
114}
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
11fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
12 use zarrs::{
13 array::{
14 codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
15 DataType, FillValue,
16 },
17 array_subset::ArraySubset,
18 node::Node,
19 storage::store,
20 };
21
22 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
23 use std::sync::Arc;
24
25 // Create a store
26 // let path = tempfile::TempDir::new()?;
27 // let mut store: ReadableWritableListableStorage =
28 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
29 // let mut store: ReadableWritableListableStorage = Arc::new(
30 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
31 // );
32 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
33 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
34 if arg1 == "--usage-log" {
35 let log_writer = Arc::new(std::sync::Mutex::new(
36 // std::io::BufWriter::new(
37 std::io::stdout(),
38 // )
39 ));
40 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
41 chrono::Utc::now().format("[%T%.3f] ").to_string()
42 }));
43 }
44 }
45
46 // Create the root group
47 zarrs::group::GroupBuilder::new()
48 .build(store.clone(), "/")?
49 .store_metadata()?;
50
51 // Create a group with attributes
52 let group_path = "/group";
53 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
54 group
55 .attributes_mut()
56 .insert("foo".into(), serde_json::Value::String("bar".into()));
57 group.store_metadata()?;
58
59 // Create an array
60 let array_path = "/group/array";
61 let shard_shape = vec![4, 8];
62 let inner_chunk_shape = vec![4, 4];
63 let mut sharding_codec_builder =
64 ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
65 sharding_codec_builder.bytes_to_bytes_codecs(vec![
66 #[cfg(feature = "gzip")]
67 Arc::new(codec::GzipCodec::new(5)?),
68 ]);
69 let array = zarrs::array::ArrayBuilder::new(
70 vec![8, 8], // array shape
71 DataType::UInt16,
72 shard_shape.try_into()?,
73 FillValue::from(0u16),
74 )
75 .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
76 .dimension_names(["y", "x"].into())
77 // .storage_transformers(vec![].into())
78 .build(store.clone(), array_path)?;
79
80 // Write array metadata to store
81 array.store_metadata()?;
82
83 // The array metadata is
84 println!(
85 "The array metadata is:\n{}\n",
86 array.metadata().to_string_pretty()
87 );
88
89 // Use default codec options (concurrency etc)
90 let options = CodecOptions::default();
91
92 // Write some shards (in parallel)
93 (0..2).into_par_iter().try_for_each(|s| {
94 let chunk_grid = array.chunk_grid();
95 let chunk_indices = vec![s, 0];
96 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
97 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
98 chunk_shape
99 .iter()
100 .map(|u| u.get() as usize)
101 .collect::<Vec<_>>(),
102 |ij| {
103 (s * chunk_shape[0].get() * chunk_shape[1].get()
104 + ij[0] as u64 * chunk_shape[1].get()
105 + ij[1] as u64) as u16
106 },
107 );
108 array.store_chunk_ndarray(&chunk_indices, chunk_array)
109 } else {
110 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
111 chunk_indices.to_vec(),
112 ))
113 }
114 })?;
115
116 // Read the whole array
117 let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
118 println!("The whole array is:\n{data_all}\n");
119
120 // Read a shard back from the store
121 let shard_indices = vec![1, 0];
122 let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
123 println!("Shard [1,0] is:\n{data_shard}\n");
124
125 // Read an inner chunk from the store
126 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
127 let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
128 println!("Chunk [1,0] is:\n{data_chunk}\n");
129
130 // Read the central 4x2 subset of the array
131 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
132 let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
133 println!("The middle 4x2 subset is:\n{data_4x2}\n");
134
135 // Decode inner chunks
136 // In some cases, it might be preferable to decode inner chunks in a shard directly.
137 // If using the partial decoder, then the shard index will only be read once from the store.
138 let partial_decoder = array.partial_decoder(&[0, 0])?;
139 let inner_chunks_to_decode = vec![
140 ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
141 ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
142 ];
143 let decoded_inner_chunks_bytes =
144 partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
145 println!("Decoded inner chunks:");
146 for (inner_chunk_subset, decoded_inner_chunk) in
147 std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
148 {
149 let ndarray = bytes_to_ndarray::<u16>(
150 &inner_chunk_shape,
151 decoded_inner_chunk.into_fixed()?.into_owned(),
152 )?;
153 println!("{inner_chunk_subset}\n{ndarray}\n");
154 }
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("The Zarr hierarchy tree is:\n{}", tree);
160
161 println!(
162 "The keys in the store are:\n[{}]",
163 store.list().unwrap_or_default().iter().format(", ")
164 );
165
166 Ok(())
167}
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
pub fn partial_decoder(
    &self,
    chunk_indices: &[u64],
) -> Result<Arc<dyn ArrayPartialDecoderTraits>, ArrayError>
Initialises a partial decoder for the chunk at chunk_indices.
§Errors
Returns an ArrayError if initialisation of the partial decoder fails.
Examples found in repository:
11fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
12 use zarrs::{
13 array::{
14 codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
15 DataType, FillValue,
16 },
17 array_subset::ArraySubset,
18 node::Node,
19 storage::store,
20 };
21
22 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
23 use std::sync::Arc;
24
25 // Create a store
26 // let path = tempfile::TempDir::new()?;
27 // let mut store: ReadableWritableListableStorage =
28 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
29 // let mut store: ReadableWritableListableStorage = Arc::new(
30 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
31 // );
32 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
33 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
34 if arg1 == "--usage-log" {
35 let log_writer = Arc::new(std::sync::Mutex::new(
36 // std::io::BufWriter::new(
37 std::io::stdout(),
38 // )
39 ));
40 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
41 chrono::Utc::now().format("[%T%.3f] ").to_string()
42 }));
43 }
44 }
45
46 // Create the root group
47 zarrs::group::GroupBuilder::new()
48 .build(store.clone(), "/")?
49 .store_metadata()?;
50
51 // Create a group with attributes
52 let group_path = "/group";
53 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
54 group
55 .attributes_mut()
56 .insert("foo".into(), serde_json::Value::String("bar".into()));
57 group.store_metadata()?;
58
59 // Create an array
60 let array_path = "/group/array";
61 let shard_shape = vec![4, 8];
62 let inner_chunk_shape = vec![4, 4];
63 let mut sharding_codec_builder =
64 ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
65 sharding_codec_builder.bytes_to_bytes_codecs(vec![
66 #[cfg(feature = "gzip")]
67 Arc::new(codec::GzipCodec::new(5)?),
68 ]);
69 let array = zarrs::array::ArrayBuilder::new(
70 vec![8, 8], // array shape
71 DataType::UInt16,
72 shard_shape.try_into()?,
73 FillValue::from(0u16),
74 )
75 .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
76 .dimension_names(["y", "x"].into())
77 // .storage_transformers(vec![].into())
78 .build(store.clone(), array_path)?;
79
80 // Write array metadata to store
81 array.store_metadata()?;
82
83 // The array metadata is
84 println!(
85 "The array metadata is:\n{}\n",
86 array.metadata().to_string_pretty()
87 );
88
89 // Use default codec options (concurrency etc)
90 let options = CodecOptions::default();
91
92 // Write some shards (in parallel)
93 (0..2).into_par_iter().try_for_each(|s| {
94 let chunk_grid = array.chunk_grid();
95 let chunk_indices = vec![s, 0];
96 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
97 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
98 chunk_shape
99 .iter()
100 .map(|u| u.get() as usize)
101 .collect::<Vec<_>>(),
102 |ij| {
103 (s * chunk_shape[0].get() * chunk_shape[1].get()
104 + ij[0] as u64 * chunk_shape[1].get()
105 + ij[1] as u64) as u16
106 },
107 );
108 array.store_chunk_ndarray(&chunk_indices, chunk_array)
109 } else {
110 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
111 chunk_indices.to_vec(),
112 ))
113 }
114 })?;
115
116 // Read the whole array
117 let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
118 println!("The whole array is:\n{data_all}\n");
119
120 // Read a shard back from the store
121 let shard_indices = vec![1, 0];
122 let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
123 println!("Shard [1,0] is:\n{data_shard}\n");
124
125 // Read an inner chunk from the store
126 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
127 let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
128 println!("Chunk [1,0] is:\n{data_chunk}\n");
129
130 // Read the central 4x2 subset of the array
131 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
132 let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
133 println!("The middle 4x2 subset is:\n{data_4x2}\n");
134
135 // Decode inner chunks
136 // In some cases, it might be preferable to decode inner chunks in a shard directly.
137 // If using the partial decoder, then the shard index will only be read once from the store.
138 let partial_decoder = array.partial_decoder(&[0, 0])?;
139 let inner_chunks_to_decode = vec![
140 ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
141 ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
142 ];
143 let decoded_inner_chunks_bytes =
144 partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
145 println!("Decoded inner chunks:");
146 for (inner_chunk_subset, decoded_inner_chunk) in
147 std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
148 {
149 let ndarray = bytes_to_ndarray::<u16>(
150 &inner_chunk_shape,
151 decoded_inner_chunk.into_fixed()?.into_owned(),
152 )?;
153 println!("{inner_chunk_subset}\n{ndarray}\n");
154 }
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("The Zarr hierarchy tree is:\n{}", tree);
160
161 println!(
162 "The keys in the store are:\n[{}]",
163 store.list().unwrap_or_default().iter().format(", ")
164 );
165
166 Ok(())
167}
pub fn retrieve_chunk_if_exists_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<ArrayBytes<'_>>, ArrayError>
Explicit options version of retrieve_chunk_if_exists.
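The `_opt` retrieval methods all follow the same pattern: pass an explicit `CodecOptions` alongside the usual arguments. A minimal sketch, assuming module paths and trait bounds from the zarrs API as used in the surrounding examples; `read_chunk_if_present` is a hypothetical helper, not part of the crate:

```rust
use zarrs::array::{codec::CodecOptions, Array, ArrayError};
use zarrs::storage::ReadableStorageTraits;

/// Hypothetical helper: retrieve a chunk only if it has been stored,
/// with explicit codec options.
fn read_chunk_if_present<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
    chunk_indices: &[u64],
) -> Result<bool, ArrayError> {
    // Default options; concurrency etc. can be tuned before the call.
    let options = CodecOptions::default();
    match array.retrieve_chunk_if_exists_opt(chunk_indices, &options)? {
        Some(_bytes) => Ok(true), // the chunk exists and was decoded
        None => Ok(false),        // the chunk has not been stored
    }
}
```

The same `CodecOptions` value can be reused across the other `_opt` calls below.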
pub fn retrieve_chunk_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>
Explicit options version of retrieve_chunk.
pub fn retrieve_chunk_elements_if_exists_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<Vec<T>>, ArrayError>
Explicit options version of retrieve_chunk_elements_if_exists.
pub fn retrieve_chunk_elements_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_chunk_elements.
pub fn retrieve_chunk_ndarray_if_exists_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_ndarray_if_exists.
pub fn retrieve_chunk_ndarray_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_ndarray.
pub fn retrieve_chunks_opt(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>
Explicit options version of retrieve_chunks.
pub fn retrieve_chunks_elements_opt<T: ElementOwned>(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_chunks_elements.
pub fn retrieve_chunks_ndarray_opt<T: ElementOwned>(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunks_ndarray.
pub fn retrieve_array_subset_opt(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>
Explicit options version of retrieve_array_subset.
pub fn retrieve_array_subset_elements_opt<T: ElementOwned>(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_array_subset_elements.
pub fn retrieve_array_subset_ndarray_opt<T: ElementOwned>(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_array_subset_ndarray.
pub fn retrieve_chunk_subset_opt(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>
Explicit options version of retrieve_chunk_subset.
pub fn retrieve_chunk_subset_elements_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_chunk_subset_elements.
pub fn retrieve_chunk_subset_ndarray_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_subset_ndarray.
pub fn partial_decoder_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Arc<dyn ArrayPartialDecoderTraits>, ArrayError>
Explicit options version of partial_decoder.
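As with the retrieval methods, the only difference from `partial_decoder` is the explicit `CodecOptions`. A minimal sketch, assuming an initialised readable `Array` as in the sharded example above (module paths are assumptions; `decode_two_regions` is a hypothetical helper):

```rust
use zarrs::array::{codec::CodecOptions, Array, ArrayError};
use zarrs::array_subset::ArraySubset;
use zarrs::storage::ReadableStorageTraits;

/// Hypothetical helper: decode two regions of one chunk through a single
/// partial decoder. For a sharded array, the shard index is then only
/// read from the store once.
fn decode_two_regions<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<(), ArrayError> {
    let options = CodecOptions::default();
    let decoder = array.partial_decoder_opt(&[0, 0], &options)?;
    let regions = vec![
        ArraySubset::new_with_ranges(&[0..2, 0..2]),
        ArraySubset::new_with_ranges(&[2..4, 0..2]),
    ];
    // One decoder, several regions of the same chunk.
    let _decoded = decoder.partial_decode(&regions, &options)?;
    Ok(())
}
```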
impl<TStorage: ?Sized + WritableStorageTraits + 'static> Array<TStorage>
pub fn store_metadata(&self) -> Result<(), StorageError>
Store metadata with default ArrayMetadataOptions.
The metadata is created with Array::metadata_opt.
§Errors
Returns a StorageError if there is an underlying store error.
Examples found in repository:
23fn main() -> Result<(), Box<dyn std::error::Error>> {
24 let store = Arc::new(zarrs_storage::store::MemoryStore::new());
25
26 let serde_json::Value::Object(attributes) = serde_json::json!({
27 "foo": "bar",
28 "baz": 42,
29 }) else {
30 unreachable!()
31 };
32
33 // Create a Zarr V2 group
34 let group_metadata: GroupMetadata = GroupMetadataV2::new()
35 .with_attributes(attributes.clone())
36 .into();
37 let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;
38
39 // Store the metadata as V2 and V3
40 let convert_group_metadata_to_v3 =
41 GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
42 group.store_metadata()?;
43 group.store_metadata_opt(&convert_group_metadata_to_v3)?;
44 println!(
45 "group/.zgroup (Zarr V2 group metadata):\n{}\n",
46 key_to_str(&store, "group/.zgroup")?
47 );
48 println!(
49 "group/.zattrs (Zarr V2 group attributes):\n{}\n",
50 key_to_str(&store, "group/.zattrs")?
51 );
52 println!(
53 "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
54 key_to_str(&store, "group/zarr.json")?
55 );
56 // println!(
57 // "The equivalent Zarr V3 group metadata is\n{}\n",
58 // group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
59 // );
60
61 // Create a Zarr V2 array
62 let array_metadata = ArrayMetadataV2::new(
63 vec![10, 10],
64 vec![5, 5].try_into()?,
65 ">f4".into(), // big endian float32
66 FillValueMetadataV2::NaN,
67 None,
68 None,
69 )
70 .with_dimension_separator(ChunkKeySeparator::Slash)
71 .with_order(ArrayMetadataV2Order::F)
72 .with_attributes(attributes.clone());
73 let array = zarrs::array::Array::new_with_metadata(
74 store.clone(),
75 "/group/array",
76 array_metadata.into(),
77 )?;
78
79 // Store the metadata as V2 and V3
80 let convert_array_metadata_to_v3 =
81 ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
82 array.store_metadata()?;
83 array.store_metadata_opt(&convert_array_metadata_to_v3)?;
84 println!(
85 "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
86 key_to_str(&store, "group/array/.zarray")?
87 );
88 println!(
89 "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
90 key_to_str(&store, "group/array/.zattrs")?
91 );
92 println!(
93 "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
94 key_to_str(&store, "group/array/zarr.json")?
95 );
96 // println!(
97 // "The equivalent Zarr V3 array metadata is\n{}\n",
98 // array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
99 // );
100
101 array.store_chunk_elements::<f32>(&[0, 1], &[0.0; 5 * 5])?;
102
103 // Print the keys in the store
104 println!("The store contains keys:");
105 for key in store.list()? {
106 println!(" {}", key);
107 }
108
109 Ok(())
110}
More examples
10fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12 use zarrs::{
13 array::{DataType, FillValue},
14 array_subset::ArraySubset,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![4, 4], // array shape
61 DataType::String,
62 vec![2, 2].try_into()?, // regular chunk shape
63 FillValue::from("_"),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 array.store_chunk_ndarray(
80 &[0, 0],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
82 )?;
83 array.store_chunk_ndarray(
84 &[0, 1],
85 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
86 )?;
87 let subset_all = array.subset_all();
88 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
90
91 // Write a subset spanning multiple chunks, including updating chunks already written
92 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
93 array.store_array_subset_ndarray(
94 ArraySubset::new_with_ranges(&[1..3, 1..3]).start(),
95 ndarray_subset,
96 )?;
97 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
98 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
99
100 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
101 // TODO: Add a convenience function for this?
102 let data_all = array.retrieve_array_subset(&subset_all)?;
103 let (bytes, offsets) = data_all.into_variable()?;
104 let string = String::from_utf8(bytes.into_owned())?;
105 let elements = offsets
106 .iter()
107 .tuple_windows()
108 .map(|(&curr, &next)| &string[curr..next])
109 .collect::<Vec<&str>>();
110 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
111 println!("ndarray::ArrayD<&str>:\n{ndarray}");
112
113 Ok(())
114}
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
11fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
12 use zarrs::{
13 array::{
14 codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
15 DataType, FillValue,
16 },
17 array_subset::ArraySubset,
18 node::Node,
19 storage::store,
20 };
21
22 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
23 use std::sync::Arc;
24
25 // Create a store
26 // let path = tempfile::TempDir::new()?;
27 // let mut store: ReadableWritableListableStorage =
28 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
29 // let mut store: ReadableWritableListableStorage = Arc::new(
30 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
31 // );
32 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
33 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
34 if arg1 == "--usage-log" {
35 let log_writer = Arc::new(std::sync::Mutex::new(
36 // std::io::BufWriter::new(
37 std::io::stdout(),
38 // )
39 ));
40 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
41 chrono::Utc::now().format("[%T%.3f] ").to_string()
42 }));
43 }
44 }
45
46 // Create the root group
47 zarrs::group::GroupBuilder::new()
48 .build(store.clone(), "/")?
49 .store_metadata()?;
50
51 // Create a group with attributes
52 let group_path = "/group";
53 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
54 group
55 .attributes_mut()
56 .insert("foo".into(), serde_json::Value::String("bar".into()));
57 group.store_metadata()?;
58
59 // Create an array
60 let array_path = "/group/array";
61 let shard_shape = vec![4, 8];
62 let inner_chunk_shape = vec![4, 4];
63 let mut sharding_codec_builder =
64 ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
65 sharding_codec_builder.bytes_to_bytes_codecs(vec![
66 #[cfg(feature = "gzip")]
67 Arc::new(codec::GzipCodec::new(5)?),
68 ]);
69 let array = zarrs::array::ArrayBuilder::new(
70 vec![8, 8], // array shape
71 DataType::UInt16,
72 shard_shape.try_into()?,
73 FillValue::from(0u16),
74 )
75 .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
76 .dimension_names(["y", "x"].into())
77 // .storage_transformers(vec![].into())
78 .build(store.clone(), array_path)?;
79
80 // Write array metadata to store
81 array.store_metadata()?;
82
83 // The array metadata is
84 println!(
85 "The array metadata is:\n{}\n",
86 array.metadata().to_string_pretty()
87 );
88
89 // Use default codec options (concurrency etc)
90 let options = CodecOptions::default();
91
92 // Write some shards (in parallel)
93 (0..2).into_par_iter().try_for_each(|s| {
94 let chunk_grid = array.chunk_grid();
95 let chunk_indices = vec![s, 0];
96 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
97 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
98 chunk_shape
99 .iter()
100 .map(|u| u.get() as usize)
101 .collect::<Vec<_>>(),
102 |ij| {
103 (s * chunk_shape[0].get() * chunk_shape[1].get()
104 + ij[0] as u64 * chunk_shape[1].get()
105 + ij[1] as u64) as u16
106 },
107 );
108 array.store_chunk_ndarray(&chunk_indices, chunk_array)
109 } else {
110 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
111 chunk_indices.to_vec(),
112 ))
113 }
114 })?;
115
116 // Read the whole array
117 let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
118 println!("The whole array is:\n{data_all}\n");
119
120 // Read a shard back from the store
121 let shard_indices = vec![1, 0];
122 let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
123 println!("Shard [1,0] is:\n{data_shard}\n");
124
125 // Read an inner chunk from the store
126 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
127 let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
128 println!("Chunk [1,0] is:\n{data_chunk}\n");
129
130 // Read the central 4x2 subset of the array
131 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
132 let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
133 println!("The middle 4x2 subset is:\n{data_4x2}\n");
134
135 // Decode inner chunks
136 // In some cases, it might be preferable to decode inner chunks in a shard directly.
137 // If using the partial decoder, then the shard index will only be read once from the store.
138 let partial_decoder = array.partial_decoder(&[0, 0])?;
139 let inner_chunks_to_decode = vec![
140 ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
141 ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
142 ];
143 let decoded_inner_chunks_bytes =
144 partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
145 println!("Decoded inner chunks:");
146 for (inner_chunk_subset, decoded_inner_chunk) in
147 std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
148 {
149 let ndarray = bytes_to_ndarray::<u16>(
150 &inner_chunk_shape,
151 decoded_inner_chunk.into_fixed()?.into_owned(),
152 )?;
153 println!("{inner_chunk_subset}\n{ndarray}\n");
154 }
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("The Zarr hierarchy tree is:\n{}", tree);
160
161 println!(
162 "The keys in the store are:\n[{}]",
163 store.list().unwrap_or_default().iter().format(", ")
164 );
165
166 Ok(())
167}
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
pub fn store_metadata_opt(
    &self,
    options: &ArrayMetadataOptions,
) -> Result<(), StorageError>
Store metadata with non-default ArrayMetadataOptions.
The metadata is created with Array::metadata_opt.
§Errors
Returns StorageError if there is an underlying store error.
Examples found in repository
23fn main() -> Result<(), Box<dyn std::error::Error>> {
24 let store = Arc::new(zarrs_storage::store::MemoryStore::new());
25
26 let serde_json::Value::Object(attributes) = serde_json::json!({
27 "foo": "bar",
28 "baz": 42,
29 }) else {
30 unreachable!()
31 };
32
33 // Create a Zarr V2 group
34 let group_metadata: GroupMetadata = GroupMetadataV2::new()
35 .with_attributes(attributes.clone())
36 .into();
37 let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;
38
39 // Store the metadata as V2 and V3
40 let convert_group_metadata_to_v3 =
41 GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
42 group.store_metadata()?;
43 group.store_metadata_opt(&convert_group_metadata_to_v3)?;
44 println!(
45 "group/.zgroup (Zarr V2 group metadata):\n{}\n",
46 key_to_str(&store, "group/.zgroup")?
47 );
48 println!(
49 "group/.zattrs (Zarr V2 group attributes):\n{}\n",
50 key_to_str(&store, "group/.zattrs")?
51 );
52 println!(
53 "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
54 key_to_str(&store, "group/zarr.json")?
55 );
56 // println!(
57 // "The equivalent Zarr V3 group metadata is\n{}\n",
58 // group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
59 // );
60
61 // Create a Zarr V2 array
62 let array_metadata = ArrayMetadataV2::new(
63 vec![10, 10],
64 vec![5, 5].try_into()?,
65 ">f4".into(), // big endian float32
66 FillValueMetadataV2::NaN,
67 None,
68 None,
69 )
70 .with_dimension_separator(ChunkKeySeparator::Slash)
71 .with_order(ArrayMetadataV2Order::F)
72 .with_attributes(attributes.clone());
73 let array = zarrs::array::Array::new_with_metadata(
74 store.clone(),
75 "/group/array",
76 array_metadata.into(),
77 )?;
78
79 // Store the metadata as V2 and V3
80 let convert_array_metadata_to_v3 =
81 ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
82 array.store_metadata()?;
83 array.store_metadata_opt(&convert_array_metadata_to_v3)?;
84 println!(
85 "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
86 key_to_str(&store, "group/array/.zarray")?
87 );
88 println!(
89 "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
90 key_to_str(&store, "group/array/.zattrs")?
91 );
92 println!(
93 "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
94 key_to_str(&store, "group/array/zarr.json")?
95 );
96 // println!(
97 // "The equivalent Zarr V3 array metadata is\n{}\n",
98 // array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
99 // );
100
101 array.store_chunk_elements::<f32>(&[0, 1], &[0.0; 5 * 5])?;
102
103 // Print the keys in the store
104 println!("The store contains keys:");
105 for key in store.list()? {
106 println!(" {}", key);
107 }
108
109 Ok(())
110}
pub fn store_chunk<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: impl Into<ArrayBytes<'a>>,
) -> Result<(), ArrayError>
Encode chunk_bytes and store at chunk_indices.
Use store_chunk_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- chunk_indices are invalid,
- the length of chunk_bytes is not equal to the expected length (the product of the number of elements in the chunk and the data type size in bytes),
- there is a codec encoding error, or
- an underlying store error occurs.
pub fn store_chunk_elements<T: Element>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: &[T],
) -> Result<(), ArrayError>
Encode chunk_elements and store at chunk_indices.
Use store_chunk_elements_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunk error condition is met.
Examples found in repository
153fn main() {
154 let store = std::sync::Arc::new(MemoryStore::default());
155 let array_path = "/array";
156 let array = ArrayBuilder::new(
157 vec![4, 1], // array shape
158 DataType::Extension(Arc::new(CustomDataTypeVariableSize)),
159 vec![3, 1].try_into().unwrap(), // regular chunk shape
160 FillValue::from(vec![]),
161 )
162 .array_to_array_codecs(vec![
163 #[cfg(feature = "transpose")]
164 Arc::new(zarrs::array::codec::TransposeCodec::new(
165 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
166 )),
167 ])
168 .bytes_to_bytes_codecs(vec![
169 #[cfg(feature = "gzip")]
170 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
171 #[cfg(feature = "crc32c")]
172 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
173 ])
174 // .storage_transformers(vec![].into())
175 .build(store, array_path)
176 .unwrap();
177 println!("{}", array.metadata().to_string_pretty());
178
179 let data = [
180 CustomDataTypeVariableSizeElement::from(Some(1.0)),
181 CustomDataTypeVariableSizeElement::from(None),
182 CustomDataTypeVariableSizeElement::from(Some(3.0)),
183 ];
184 array.store_chunk_elements(&[0, 0], &data).unwrap();
185
186 let data = array
187 .retrieve_array_subset_elements::<CustomDataTypeVariableSizeElement>(&array.subset_all())
188 .unwrap();
189
190 assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
191 assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
192 assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
193 assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));
194
195 println!("{data:#?}");
196}
More examples
269fn main() {
270 let store = std::sync::Arc::new(MemoryStore::default());
271 let array_path = "/array";
272 let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
273 let array = ArrayBuilder::new(
274 vec![4, 1], // array shape
275 DataType::Extension(Arc::new(CustomDataTypeFixedSize)),
276 vec![2, 1].try_into().unwrap(), // regular chunk shape
277 FillValue::new(fill_value.to_ne_bytes().to_vec()),
278 )
279 .array_to_array_codecs(vec![
280 #[cfg(feature = "transpose")]
281 Arc::new(zarrs::array::codec::TransposeCodec::new(
282 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
283 )),
284 ])
285 .bytes_to_bytes_codecs(vec![
286 #[cfg(feature = "gzip")]
287 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
288 #[cfg(feature = "crc32c")]
289 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
290 ])
291 // .storage_transformers(vec![].into())
292 .build(store, array_path)
293 .unwrap();
294 println!("{}", array.metadata().to_string_pretty());
295
296 let data = [
297 CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
298 CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
299 ];
300 array.store_chunk_elements(&[0, 0], &data).unwrap();
301
302 let data = array
303 .retrieve_array_subset_elements::<CustomDataTypeFixedSizeElement>(&array.subset_all())
304 .unwrap();
305
306 assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
307 assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
308 assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
309 assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
310
311 println!("{data:#?}");
312}
217fn main() {
218 let store = std::sync::Arc::new(MemoryStore::default());
219 let array_path = "/array";
220 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
221 let array = ArrayBuilder::new(
222 vec![6, 1], // array shape
223 DataType::Extension(Arc::new(CustomDataTypeFloat8e3m4)),
224 vec![5, 1].try_into().unwrap(), // regular chunk shape
225 FillValue::new(fill_value.to_ne_bytes().to_vec()),
226 )
227 .array_to_array_codecs(vec![
228 #[cfg(feature = "transpose")]
229 Arc::new(zarrs::array::codec::TransposeCodec::new(
230 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
231 )),
232 ])
233 .bytes_to_bytes_codecs(vec![
234 #[cfg(feature = "gzip")]
235 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
236 #[cfg(feature = "crc32c")]
237 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
238 ])
239 // .storage_transformers(vec![].into())
240 .build(store, array_path)
241 .unwrap();
242 println!("{}", array.metadata().to_string_pretty());
243
244 let data = [
245 CustomDataTypeFloat8e3m4Element::from(2.34),
246 CustomDataTypeFloat8e3m4Element::from(3.45),
247 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
248 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
249 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
250 ];
251 array.store_chunk_elements(&[0, 0], &data).unwrap();
252
253 let data = array
254 .retrieve_array_subset_elements::<CustomDataTypeFloat8e3m4Element>(&array.subset_all())
255 .unwrap();
256
257 for f in &data {
258 println!(
259 "float8_e3m4: {:08b} f32: {}",
260 f.to_ne_bytes()[0],
261 f.as_f32()
262 );
263 }
264
265 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
266 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
267 assert_eq!(
268 data[2],
269 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
270 );
271 assert_eq!(
272 data[3],
273 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
274 );
275 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
276 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
277}
203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 DataType::Extension(Arc::new(CustomDataTypeUInt4)),
210 vec![5, 1].try_into().unwrap(), // regular chunk shape
211 FillValue::new(fill_value.to_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
220 .bytes_to_bytes_codecs(vec![
221 #[cfg(feature = "gzip")]
222 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
223 #[cfg(feature = "crc32c")]
224 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
225 ])
226 // .storage_transformers(vec![].into())
227 .build(store, array_path)
228 .unwrap();
229 println!("{}", array.metadata().to_string_pretty());
230
231 let data = [
232 CustomDataTypeUInt4Element::try_from(1).unwrap(),
233 CustomDataTypeUInt4Element::try_from(2).unwrap(),
234 CustomDataTypeUInt4Element::try_from(3).unwrap(),
235 CustomDataTypeUInt4Element::try_from(4).unwrap(),
236 CustomDataTypeUInt4Element::try_from(5).unwrap(),
237 ];
238 array.store_chunk_elements(&[0, 0], &data).unwrap();
239
240 let data = array
241 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(&array.subset_all())
242 .unwrap();
243
244 for f in &data {
245 println!("uint4: {:08b} u8: {}", f.as_u8(), f.as_u8());
246 }
247
248 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
249 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
250 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
251 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
252 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
253 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
254
255 let data = array
256 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(
257 &ArraySubset::new_with_ranges(&[1..3, 0..1]),
258 )
259 .unwrap();
260 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
261 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
262}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
pub fn store_chunk_ndarray<T: Element, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: impl Into<Array<T, D>>,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode chunk_array and store at chunk_indices.
Use store_chunk_ndarray_opt to control codec options.
§Errors
Returns an ArrayError if
- the shape of the array does not match the shape of the chunk, or
- a store_chunk_elements error condition is met.
Examples found in repository
10fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12 use zarrs::{
13 array::{DataType, FillValue},
14 array_subset::ArraySubset,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![4, 4], // array shape
61 DataType::String,
62 vec![2, 2].try_into()?, // regular chunk shape
63 FillValue::from("_"),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 array.store_chunk_ndarray(
80 &[0, 0],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
82 )?;
83 array.store_chunk_ndarray(
84 &[0, 1],
85 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
86 )?;
87 let subset_all = array.subset_all();
88 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
90
91 // Write a subset spanning multiple chunks, including updating chunks already written
92 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
93 array.store_array_subset_ndarray(
94 ArraySubset::new_with_ranges(&[1..3, 1..3]).start(),
95 ndarray_subset,
96 )?;
97 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
98 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
99
100 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
101 // TODO: Add a convenience function for this?
102 let data_all = array.retrieve_array_subset(&subset_all)?;
103 let (bytes, offsets) = data_all.into_variable()?;
104 let string = String::from_utf8(bytes.into_owned())?;
105 let elements = offsets
106 .iter()
107 .tuple_windows()
108 .map(|(&curr, &next)| &string[curr..next])
109 .collect::<Vec<&str>>();
110 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
111 println!("ndarray::ArrayD<&str>:\n{ndarray}");
112
113 Ok(())
114}
More examples
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
11fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
12 use zarrs::{
13 array::{
14 codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
15 DataType, FillValue,
16 },
17 array_subset::ArraySubset,
18 node::Node,
19 storage::store,
20 };
21
22 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
23 use std::sync::Arc;
24
25 // Create a store
26 // let path = tempfile::TempDir::new()?;
27 // let mut store: ReadableWritableListableStorage =
28 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
29 // let mut store: ReadableWritableListableStorage = Arc::new(
30 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
31 // );
32 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
33 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
34 if arg1 == "--usage-log" {
35 let log_writer = Arc::new(std::sync::Mutex::new(
36 // std::io::BufWriter::new(
37 std::io::stdout(),
38 // )
39 ));
40 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
41 chrono::Utc::now().format("[%T%.3f] ").to_string()
42 }));
43 }
44 }
45
46 // Create the root group
47 zarrs::group::GroupBuilder::new()
48 .build(store.clone(), "/")?
49 .store_metadata()?;
50
51 // Create a group with attributes
52 let group_path = "/group";
53 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
54 group
55 .attributes_mut()
56 .insert("foo".into(), serde_json::Value::String("bar".into()));
57 group.store_metadata()?;
58
59 // Create an array
60 let array_path = "/group/array";
61 let shard_shape = vec![4, 8];
62 let inner_chunk_shape = vec![4, 4];
63 let mut sharding_codec_builder =
64 ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
65 sharding_codec_builder.bytes_to_bytes_codecs(vec![
66 #[cfg(feature = "gzip")]
67 Arc::new(codec::GzipCodec::new(5)?),
68 ]);
69 let array = zarrs::array::ArrayBuilder::new(
70 vec![8, 8], // array shape
71 DataType::UInt16,
72 shard_shape.try_into()?,
73 FillValue::from(0u16),
74 )
75 .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
76 .dimension_names(["y", "x"].into())
77 // .storage_transformers(vec![].into())
78 .build(store.clone(), array_path)?;
79
80 // Write array metadata to store
81 array.store_metadata()?;
82
83 // The array metadata is
84 println!(
85 "The array metadata is:\n{}\n",
86 array.metadata().to_string_pretty()
87 );
88
89 // Use default codec options (concurrency etc)
90 let options = CodecOptions::default();
91
92 // Write some shards (in parallel)
93 (0..2).into_par_iter().try_for_each(|s| {
94 let chunk_grid = array.chunk_grid();
95 let chunk_indices = vec![s, 0];
96 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
97 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
98 chunk_shape
99 .iter()
100 .map(|u| u.get() as usize)
101 .collect::<Vec<_>>(),
102 |ij| {
103 (s * chunk_shape[0].get() * chunk_shape[1].get()
104 + ij[0] as u64 * chunk_shape[1].get()
105 + ij[1] as u64) as u16
106 },
107 );
108 array.store_chunk_ndarray(&chunk_indices, chunk_array)
109 } else {
110 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
111 chunk_indices.to_vec(),
112 ))
113 }
114 })?;
115
116 // Read the whole array
117 let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
118 println!("The whole array is:\n{data_all}\n");
119
120 // Read a shard back from the store
121 let shard_indices = vec![1, 0];
122 let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
123 println!("Shard [1,0] is:\n{data_shard}\n");
124
125 // Read an inner chunk from the store
126 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
127 let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
128 println!("Chunk [1,0] is:\n{data_chunk}\n");
129
130 // Read the central 4x2 subset of the array
131 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
132 let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
133 println!("The middle 4x2 subset is:\n{data_4x2}\n");
134
135 // Decode inner chunks
136 // In some cases, it might be preferable to decode inner chunks in a shard directly.
137 // If using the partial decoder, then the shard index will only be read once from the store.
138 let partial_decoder = array.partial_decoder(&[0, 0])?;
139 let inner_chunks_to_decode = vec![
140 ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
141 ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
142 ];
143 let decoded_inner_chunks_bytes =
144 partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
145 println!("Decoded inner chunks:");
146 for (inner_chunk_subset, decoded_inner_chunk) in
147 std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
148 {
149 let ndarray = bytes_to_ndarray::<u16>(
150 &inner_chunk_shape,
151 decoded_inner_chunk.into_fixed()?.into_owned(),
152 )?;
153 println!("{inner_chunk_subset}\n{ndarray}\n");
154 }
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("The Zarr hierarchy tree is:\n{}", tree);
160
161 println!(
162 "The keys in the store are:\n[{}]",
163 store.list().unwrap_or_default().iter().format(", ")
164 );
165
166 Ok(())
167}
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
pub fn store_chunks<'a>(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: impl Into<ArrayBytes<'a>>,
) -> Result<(), ArrayError>
Encode chunks_bytes and store at the chunks with indices represented by the chunks array subset.
Use store_chunks_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- the chunks are invalid,
- the length of chunks_bytes is not equal to the expected length (the product of the number of elements in the chunks and the data type size in bytes),
- there is a codec encoding error, or
- an underlying store error.
pub fn store_chunks_elements<T: Element>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: &[T],
) -> Result<(), ArrayError>
Encode chunks_elements and store at the chunks with indices represented by the chunks array subset.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunks error condition is met.
Examples found in repository
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
pub fn store_chunks_ndarray<T: Element, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: impl Into<Array<T, D>>,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode chunks_array and store at the chunks with indices represented by the chunks array subset.
§Errors
Returns an ArrayError if
- the shape of the array does not match the shape of the chunks, or
- a store_chunks_elements error condition is met.
Examples found in repository
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
pub fn erase_metadata(&self) -> Result<(), StorageError>
Erase the metadata with default MetadataEraseVersion options.
Succeeds if the metadata does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
pub fn erase_metadata_opt(
    &self,
    options: MetadataEraseVersion,
) -> Result<(), StorageError>
Erase the metadata with non-default MetadataEraseVersion options.
Succeeds if the metadata does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
pub fn erase_chunk(&self, chunk_indices: &[u64]) -> Result<(), StorageError>
Erase the chunk at chunk_indices.
Succeeds if the chunk does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
Examples found in repository
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
More examples
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
pub fn erase_chunks(&self, chunks: &ArraySubset) -> Result<(), StorageError>
pub fn store_chunk_opt<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: impl Into<ArrayBytes<'a>>,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunk.
pub unsafe fn store_encoded_chunk(
    &self,
    chunk_indices: &[u64],
    encoded_chunk_bytes: Bytes,
) -> Result<(), ArrayError>
Store encoded_chunk_bytes at chunk_indices.
§Safety
It is the caller's responsibility to ensure the chunk is encoded correctly.
§Errors
Returns a StorageError if there is an underlying store error.
pub fn store_chunk_elements_opt<T: Element>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: &[T],
    options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunk_elements.
pub fn store_chunk_ndarray_opt<T: Element, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: impl Into<Array<T, D>>,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_chunk_ndarray.
pub fn store_chunks_opt<'a>(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: impl Into<ArrayBytes<'a>>,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunks.
pub fn store_chunks_elements_opt<T: Element>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: &[T],
    options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunks_elements.
pub fn store_chunks_ndarray_opt<T: Element, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: impl Into<Array<T, D>>,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_chunks_ndarray.
impl<TStorage: ?Sized + ReadableWritableStorageTraits + 'static> Array<TStorage>
pub fn store_chunk_subset<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: impl Into<ArrayBytes<'a>>,
) -> Result<(), ArrayError>
Encode chunk_subset_bytes and store in chunk_subset of the chunk at chunk_indices with default codec options.
Use store_chunk_subset_opt to control codec options.
Prefer to use store_chunk where possible, since this function may decode the chunk before updating and re-encoding it.
§Errors
Returns an ArrayError if:
- chunk_subset is invalid or out of bounds of the chunk,
- there is a codec encoding error, or
- there is an underlying store error.
§Panics
Panics if attempting to reference a byte beyond usize::MAX.
pub fn store_chunk_subset_elements<T: Element>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: &[T],
) -> Result<(), ArrayError>
Encode chunk_subset_elements and store in chunk_subset of the chunk at chunk_indices with default codec options.
Use store_chunk_subset_elements_opt to control codec options.
Prefer to use store_chunk_elements where possible, since this function may decode the chunk before updating and re-encoding it.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size, or
- a store_chunk_subset error condition is met.
Examples found in repository
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
pub fn store_chunk_subset_ndarray<T: Element, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: impl Into<Array<T, D>>,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode chunk_subset_array and store it in the subset of the chunk at chunk_indices starting at chunk_subset_start.
Use store_chunk_subset_ndarray_opt to control codec options.
Prefer to use store_chunk_ndarray where possible, since this function may decode the chunk before updating and re-encoding it.
§Errors
Returns an ArrayError if a store_chunk_subset_elements error condition is met.
Sourcepub fn store_array_subset<'a>(
&self,
array_subset: &ArraySubset,
subset_bytes: impl Into<ArrayBytes<'a>>,
) -> Result<(), ArrayError>
Encode subset_bytes and store in array_subset.
Use store_array_subset_opt to control codec options.
Prefer store_chunk or store_chunks where possible, since this method will decode and re-encode each chunk intersecting array_subset.
§Errors
Returns an ArrayError if:
- the dimensionality of array_subset does not match the chunk grid dimensionality,
- the length of subset_bytes does not match the expected length governed by the shape of the array subset and the data type size,
- there is a codec encoding error, or
- there is an underlying store error.
Sourcepub fn store_array_subset_elements<T: Element>(
&self,
array_subset: &ArraySubset,
subset_elements: &[T],
) -> Result<(), ArrayError>
Encode subset_elements and store in array_subset.
Use store_array_subset_elements_opt to control codec options.
Prefer store_chunk_elements or store_chunks_elements where possible, since this method will decode and re-encode each chunk intersecting array_subset.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size, or
- a store_array_subset error condition is met.
Examples found in repository?
205fn main() {
206 let store = std::sync::Arc::new(MemoryStore::default());
207 let array_path = "/array";
208 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
209 let array = ArrayBuilder::new(
210 vec![4096, 1], // array shape
211 DataType::Extension(Arc::new(CustomDataTypeUInt12)),
212 vec![5, 1].try_into().unwrap(), // regular chunk shape
213 FillValue::new(fill_value.to_le_bytes().to_vec()),
214 )
215 .array_to_array_codecs(vec![
216 #[cfg(feature = "transpose")]
217 Arc::new(zarrs::array::codec::TransposeCodec::new(
218 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
219 )),
220 ])
221 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
222 .bytes_to_bytes_codecs(vec![
223 #[cfg(feature = "gzip")]
224 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
225 #[cfg(feature = "crc32c")]
226 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
227 ])
228 // .storage_transformers(vec![].into())
229 .build(store, array_path)
230 .unwrap();
231 println!("{}", array.metadata().to_string_pretty());
232
233 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
234 .into_iter()
235 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
236 .collect();
237
238 array
239 .store_array_subset_elements(&array.subset_all(), &data)
240 .unwrap();
241
242 let data = array
243 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(&array.subset_all())
244 .unwrap();
245
246 for i in 0usize..4096 {
247 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
248 assert_eq!(data[i], element);
249 let element_pd = array
250 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(
251 &ArraySubset::new_with_ranges(&[(i as u64)..i as u64 + 1, 0..1]),
252 )
253 .unwrap()[0];
254 assert_eq!(element_pd, element);
255 }
256}
More examples
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
Sourcepub fn store_array_subset_ndarray<T: Element, D: Dimension>(
&self,
subset_start: &[u64],
subset_array: impl Into<Array<T, D>>,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode subset_array and store in the array subset starting at subset_start.
Use store_array_subset_ndarray_opt to control codec options.
Prefer store_chunk_ndarray or store_chunks_ndarray where possible, since this method will decode and re-encode each chunk intersecting the target array subset.
§Errors
Returns an ArrayError if a store_array_subset_elements error condition is met.
Examples found in repository?
10fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12 use zarrs::{
13 array::{DataType, FillValue},
14 array_subset::ArraySubset,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![4, 4], // array shape
61 DataType::String,
62 vec![2, 2].try_into()?, // regular chunk shape
63 FillValue::from("_"),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 array.store_chunk_ndarray(
80 &[0, 0],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
82 )?;
83 array.store_chunk_ndarray(
84 &[0, 1],
85 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
86 )?;
87 let subset_all = array.subset_all();
88 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
90
91 // Write a subset spanning multiple chunks, including updating chunks already written
92 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
93 array.store_array_subset_ndarray(
94 ArraySubset::new_with_ranges(&[1..3, 1..3]).start(),
95 ndarray_subset,
96 )?;
97 let data_all = array.retrieve_array_subset_ndarray::<String>(&subset_all)?;
98 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
99
100 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
101 // TODO: Add a convenience function for this?
102 let data_all = array.retrieve_array_subset(&subset_all)?;
103 let (bytes, offsets) = data_all.into_variable()?;
104 let string = String::from_utf8(bytes.into_owned())?;
105 let elements = offsets
106 .iter()
107 .tuple_windows()
108 .map(|(&curr, &next)| &string[curr..next])
109 .collect::<Vec<&str>>();
110 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
111 println!("ndarray::ArrayD<&str>:\n{ndarray}");
112
113 Ok(())
114}
More examples
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
Sourcepub fn store_chunk_subset_opt<'a>(
&self,
chunk_indices: &[u64],
chunk_subset: &ArraySubset,
chunk_subset_bytes: impl Into<ArrayBytes<'a>>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunk_subset.
Sourcepub fn store_chunk_subset_elements_opt<T: Element>(
&self,
chunk_indices: &[u64],
chunk_subset: &ArraySubset,
chunk_subset_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunk_subset_elements.
Sourcepub fn store_chunk_subset_ndarray_opt<T: Element, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_subset_start: &[u64],
chunk_subset_array: impl Into<Array<T, D>>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_chunk_subset_ndarray.
Sourcepub fn store_array_subset_opt<'a>(
&self,
array_subset: &ArraySubset,
subset_bytes: impl Into<ArrayBytes<'a>>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_array_subset.
Sourcepub fn store_array_subset_elements_opt<T: Element>(
&self,
array_subset: &ArraySubset,
subset_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_array_subset_elements.
pub fn store_array_subset_ndarray_opt<T: Element, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: impl Into<Array<T, D>>,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_array_subset_ndarray.
pub fn partial_encoder(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Arc<dyn ArrayPartialEncoderTraits>, ArrayError>
Initialises a partial encoder for the chunk at chunk_indices.
Only one partial encoder should be created for a chunk at a time because:
- partial encoders can hold internal state that may become out of sync, and
- parallel writing to the same chunk may result in data loss.
Partial encoding with ArrayPartialEncoderTraits::partial_encode will use parallelism internally where possible.
§Errors
Returns an ArrayError if initialisation of the partial encoder fails.
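The one-encoder-per-chunk rule above is something callers must enforce themselves. A plain-Rust sketch (the `ChunkLocks` type is hypothetical, not part of zarrs) of one way to serialise writers per chunk with std primitives:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical guard (not part of zarrs): hand out one shared lock per
// chunk so that concurrent writers to the same chunk serialise instead
// of clobbering each other's partial encoder state.
struct ChunkLocks {
    locks: Mutex<HashMap<Vec<u64>, Arc<Mutex<()>>>>,
}

impl ChunkLocks {
    fn new() -> Self {
        Self { locks: Mutex::new(HashMap::new()) }
    }

    // Returns the lock for a chunk, creating it on first use.
    fn lock_for(&self, chunk_indices: &[u64]) -> Arc<Mutex<()>> {
        let mut locks = self.locks.lock().unwrap();
        locks.entry(chunk_indices.to_vec()).or_default().clone()
    }
}

fn main() {
    let locks = ChunkLocks::new();
    let a = locks.lock_for(&[0, 0]);
    let b = locks.lock_for(&[0, 0]);
    // Both handles refer to the same per-chunk lock.
    assert!(Arc::ptr_eq(&a, &b));
    // A different chunk gets an independent lock.
    let c = locks.lock_for(&[0, 1]);
    assert!(!Arc::ptr_eq(&a, &c));
}
```

A writer would hold the chunk's lock for the lifetime of its partial encoder; holding it only per call would still permit interleaved partial state.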
impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> Array<TStorage>
pub async fn async_open(
    storage: Arc<TStorage>,
    path: &str,
) -> Result<Array<TStorage>, ArrayCreateError>
Available on crate feature async only.
Async variant of open.
Examples found in repository:

async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    let mut store: AsyncReadableStorage = match backend {
        Backend::OpenDAL => {
            let builder = opendal::services::Http::default().endpoint(HTTP_URL);
            let operator = opendal::Operator::new(builder)?.finish();
            Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
        }
        Backend::ObjectStore => {
            let options = object_store::ClientOptions::new().with_allow_http(true);
            let store = object_store::http::HttpBuilder::new()
                .with_url(HTTP_URL)
                .with_client_options(options)
                .build()?;
            Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
        }
    };
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Init the existing array, reading metadata
    let array = Array::async_open(store, ARRAY_PATH).await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Read the whole array
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&array.subset_all())
        .await?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_4x2)
        .await?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
pub async fn async_open_opt(
    storage: Arc<TStorage>,
    path: &str,
    version: &MetadataRetrieveVersion,
) -> Result<Array<TStorage>, ArrayCreateError>
Available on crate feature async only.
Async variant of open_opt.
pub async fn async_retrieve_chunk_if_exists(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<ArrayBytes<'_>>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_if_exists.
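The "_if_exists" variants yield None when a chunk has no stored bytes, letting the caller decide how to handle missing chunks rather than having the library materialise the fill value. A plain-Rust sketch of the idea (`chunk_or_fill` is a hypothetical helper, not the zarrs API):

```rust
// Conceptual sketch (not the zarrs API): substitute the fill value for a
// missing chunk, as the non-"_if_exists" retrieval variants do internally.
fn chunk_or_fill(stored: Option<Vec<f32>>, fill_value: f32, num_elements: usize) -> Vec<f32> {
    stored.unwrap_or_else(|| vec![fill_value; num_elements])
}

fn main() {
    // Missing chunk: every element takes the fill value.
    assert_eq!(chunk_or_fill(None, -1.0, 4), vec![-1.0; 4]);
    // Stored chunk: returned unchanged.
    assert_eq!(chunk_or_fill(Some(vec![1.0, 2.0]), -1.0, 2), vec![1.0, 2.0]);
}
```

Checking for None can also distinguish "never written" from "written entirely with the fill value", which plain retrieval cannot.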
pub async fn async_retrieve_chunk_elements_if_exists<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<Vec<T>>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_elements_if_exists.
pub async fn async_retrieve_chunk_ndarray_if_exists<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray_if_exists.
pub async fn async_retrieve_encoded_chunk(
    &self,
    chunk_indices: &[u64],
) -> Result<Option<AsyncBytes>, StorageError>
Available on crate feature async only.
Retrieve the encoded bytes of a chunk.
§Errors
Returns a StorageError if there is an underlying store error.
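Unlike the decoding retrieval methods, this returns the chunk's bytes exactly as stored, so their length generally differs from the decoded size. A plain-Rust sketch of the relationship (`decoded_size_bytes` is a hypothetical helper, assuming a fixed-size data type):

```rust
// Hypothetical calculation (not a zarrs API): the *decoded* byte length
// of a chunk is its element count times the data type size, whereas the
// *encoded* bytes may be smaller (compressed) or absent entirely (None).
fn decoded_size_bytes(chunk_shape: &[u64], bytes_per_element: u64) -> u64 {
    chunk_shape.iter().product::<u64>() * bytes_per_element
}

fn main() {
    // A 4x4 chunk of f32 always decodes to 64 bytes,
    // regardless of how small its encoded representation is.
    assert_eq!(decoded_size_bytes(&[4, 4], 4), 64);
}
```

This makes async_retrieve_encoded_chunk useful for copying or mirroring chunks between stores without a decode/encode round trip.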
pub async fn async_retrieve_chunk(
    &self,
    chunk_indices: &[u64],
) -> Result<ArrayBytes<'_>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk.
pub async fn async_retrieve_chunk_elements<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_elements.
pub async fn async_retrieve_chunk_ndarray<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray.
Examples found in repository:
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::StreamExt;
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
    };

    // Create a store
    let mut store: AsyncReadableWritableListableStorage = Arc::new(
        zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
    );
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .async_store_metadata()
        .await?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build_arc(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    let store_chunk = |i: u64| {
        let array = array.clone();
        async move {
            let chunk_indices: Vec<u64> = vec![0, i];
            let chunk_subset = array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })?;
            array
                .async_store_chunk_elements(
                    &chunk_indices,
                    &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
                )
                .await
        }
    };
    futures::stream::iter(0..2)
        .map(Ok)
        .try_for_each_concurrent(None, store_chunk)
        .await?;

    let subset_all = array.subset_all();
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            &[
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            &[-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_open(store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
pub async fn async_retrieve_chunks(
    &self,
    chunks: &ArraySubset,
) -> Result<ArrayBytes<'_>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks.
pub async fn async_retrieve_chunks_elements<T: ElementOwned + Send + Sync>(
    &self,
    chunks: &ArraySubset,
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks_elements.
pub async fn async_retrieve_chunks_ndarray<T: ElementOwned + Send + Sync>(
    &self,
    chunks: &ArraySubset,
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunks_ndarray.
pub async fn async_retrieve_chunk_subset(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
) -> Result<ArrayBytes<'_>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset.
pub async fn async_retrieve_chunk_subset_elements<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset_elements.
pub async fn async_retrieve_chunk_subset_ndarray<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_subset_ndarray.
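A chunk subset is expressed relative to the chunk's origin, not the array's. A plain-Rust sketch of the coordinate mapping (`absolute_subset_start` is a hypothetical helper, assuming a regular chunk grid, not the zarrs API):

```rust
// Hypothetical helper (not part of zarrs): for a regular chunk grid, the
// absolute array position of a chunk subset is
// chunk_indices * chunk_shape + subset_start, per dimension.
fn absolute_subset_start(
    chunk_indices: &[u64],
    chunk_shape: &[u64],
    subset_start: &[u64],
) -> Vec<u64> {
    chunk_indices
        .iter()
        .zip(chunk_shape)
        .zip(subset_start)
        .map(|((&chunk, &shape), &start)| chunk * shape + start)
        .collect()
}

fn main() {
    // A subset starting at [3, 0] within chunk [1, 1] of a 4x4 chunk grid
    // begins at array coordinates [7, 4].
    assert_eq!(absolute_subset_start(&[1, 1], &[4, 4], &[3, 0]), vec![7, 4]);
}
```

The array-subset retrieval methods below accept absolute coordinates directly and handle the chunk decomposition internally.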
pub async fn async_retrieve_array_subset(
    &self,
    array_subset: &ArraySubset,
) -> Result<ArrayBytes<'_>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset.
pub async fn async_retrieve_array_subset_elements<T: ElementOwned + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset_elements.
pub async fn async_retrieve_array_subset_ndarray<T: ElementOwned + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_array_subset_ndarray.
pub async fn async_partial_decoder(
    &self,
    chunk_indices: &[u64],
) -> Result<Arc<dyn AsyncArrayPartialDecoderTraits>, ArrayError>

Available on crate feature async only.

Async variant of partial_decoder.
pub async fn async_retrieve_chunk_if_exists_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<ArrayBytes<'_>>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_if_exists_opt.
pub async fn async_retrieve_chunk_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_opt.
pub async fn async_retrieve_chunk_elements_if_exists_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<Vec<T>>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_elements_if_exists_opt.
pub async fn async_retrieve_chunk_elements_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_elements_opt.
pub async fn async_retrieve_chunk_ndarray_if_exists_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<ArrayD<T>>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_if_exists_opt.
pub async fn async_retrieve_chunk_ndarray_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_opt.
pub async fn async_retrieve_encoded_chunks(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<Option<AsyncBytes>>, StorageError>

Available on crate feature async only.

Retrieve the encoded bytes of the chunks in chunks.

The chunks are returned in the order of the chunk indices yielded by chunks.indices().into_iter().

§Errors

Returns a StorageError if there is an underlying store error.
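The ordering guarantee above can be illustrated with a small plain-Rust sketch. This is a conceptual model, not the zarrs API: `indices_row_major` is a hypothetical helper that assumes row-major (C-order, last dimension varies fastest) iteration, which matches the chunk indexing used in the examples on this page.

```rust
use std::ops::Range;

/// Enumerate the 2D indices of a subset in row-major (C) order,
/// i.e. the last dimension varies fastest.
fn indices_row_major(ranges: &[Range<u64>; 2]) -> Vec<[u64; 2]> {
    let mut out = Vec::new();
    for i in ranges[0].clone() {
        for j in ranges[1].clone() {
            out.push([i, j]); // last dimension varies fastest
        }
    }
    out
}

fn main() {
    // Chunk subset [0..2, 1..2] yields chunk indices [0, 1] then [1, 1].
    println!("{:?}", indices_row_major(&[0..2, 1..2]));
}
```

So for `chunks = [0..2, 1..2]`, the returned `Vec<Option<AsyncBytes>>` would hold the encoded bytes of chunk `[0, 1]` followed by chunk `[1, 1]`.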
pub async fn async_retrieve_chunks_opt(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_opt.
pub async fn async_retrieve_chunks_elements_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_elements_opt.
pub async fn async_retrieve_chunks_ndarray_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunks_ndarray_opt.
pub async fn async_retrieve_array_subset_opt(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_opt.
pub async fn async_retrieve_array_subset_elements_opt<T: ElementOwned + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_elements_opt.
pub async fn async_retrieve_array_subset_ndarray_opt<T: ElementOwned + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_array_subset_ndarray_opt.
pub async fn async_retrieve_chunk_subset_opt(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayBytes<'_>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_opt.
pub async fn async_retrieve_chunk_subset_elements_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_elements_opt.
pub async fn async_retrieve_chunk_subset_ndarray_opt<T: ElementOwned + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_subset_ndarray_opt.
pub async fn async_partial_decoder_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Arc<dyn AsyncArrayPartialDecoderTraits>, ArrayError>

Available on crate feature async only.

Async variant of partial_decoder_opt.
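Partial decoders decode a region of a chunk without decoding the whole chunk. As a rough model of why this can save I/O — assuming a fixed-size, row-major (C-order) chunk layout; `byte_ranges` is a hypothetical helper for illustration, not part of zarrs — a rectangular subset of a chunk maps to one contiguous byte range per row:

```rust
use std::ops::Range;

/// Byte ranges covering rows `rows` and columns `cols` of a
/// `chunk_cols`-wide, row-major chunk whose elements are `elem_size`
/// bytes each. One contiguous range per row of the subset.
fn byte_ranges(
    rows: Range<u64>,
    cols: Range<u64>,
    chunk_cols: u64,
    elem_size: u64,
) -> Vec<Range<u64>> {
    rows.map(|r| {
        let start = (r * chunk_cols + cols.start) * elem_size;
        let end = (r * chunk_cols + cols.end) * elem_size;
        start..end
    })
    .collect()
}

fn main() {
    // f32 elements (4 bytes) in a 4x4 chunk: the chunk subset
    // [3..4, 0..4] touches only a single 16-byte range of the chunk.
    println!("{:?}", byte_ranges(3..4, 0..4, 4, 4));
}
```

A partial decoder for a store that supports byte-range reads can fetch only such ranges (codec permitting) rather than the entire encoded chunk.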
impl<TStorage: ?Sized + AsyncWritableStorageTraits + 'static> Array<TStorage>
pub async fn async_store_metadata(&self) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of store_metadata.
Examples found in repository:
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{StreamExt, TryStreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
    };

    // Create a store
    let mut store: AsyncReadableWritableListableStorage = Arc::new(
        zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
    );
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .async_store_metadata()
        .await?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build_arc(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    let store_chunk = |i: u64| {
        let array = array.clone();
        async move {
            let chunk_indices: Vec<u64> = vec![0, i];
            let chunk_subset = array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })?;
            array
                .async_store_chunk_elements(
                    &chunk_indices,
                    &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
                )
                .await
        }
    };
    futures::stream::iter(0..2)
        .map(Ok)
        .try_for_each_concurrent(None, store_chunk)
        .await?;

    let subset_all = array.subset_all();
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            &[
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            &[-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_open(store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
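The "subset spanning multiple chunks" step in the example above relies on mapping an array subset to the chunks it overlaps. For a regular chunk grid this reduces to index arithmetic: divide the subset bounds by the chunk shape, flooring the start and taking the ceiling of the end. The following is a self-contained sketch (`chunks_overlapping` is a hypothetical helper, not the zarrs chunk grid API):

```rust
use std::ops::Range;

/// For a regular chunk grid, the ranges of chunk indices overlapped by
/// an array subset: floor-divide the start, ceiling-divide the end.
fn chunks_overlapping(subset: &[Range<u64>], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    subset
        .iter()
        .zip(chunk_shape)
        .map(|(r, &c)| (r.start / c)..(r.end.div_ceil(c)))
        .collect()
}

fn main() {
    // The [3..6, 3..6] write above, with 4x4 chunks, spans chunk
    // indices [0..2, 0..2]: all four chunks of the 8x8 array.
    println!("{:?}", chunks_overlapping(&[3..6, 3..6], &[4, 4]));
}
```

This is why a small 3x3 write can still touch four chunks: each touched chunk must be read (unless entirely overwritten), updated, re-encoded, and stored.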
pub async fn async_store_metadata_opt(
    &self,
    options: &ArrayMetadataOptions,
) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of store_metadata_opt.
pub async fn async_store_chunk<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: impl Into<ArrayBytes<'a>> + Send,
) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk.
pub async fn async_store_chunk_elements<T: Element + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: &[T],
) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_elements.
Examples found in repository: see the async_array_write_read example reproduced under async_store_metadata above; it also exercises this method.
pub async fn async_store_chunk_ndarray<T: Element + Send + Sync, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: impl Into<Array<T, D>> + Send,
) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunk_ndarray.
pub async fn async_store_chunks<'a>(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: impl Into<ArrayBytes<'a>> + Send,
) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunks.
pub async fn async_store_chunks_elements<T: Element + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: &[T],
) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunks_elements.
Examples found in repository: see the async_array_write_read example reproduced under async_store_metadata above; it also exercises this method.
pub async fn async_store_chunks_ndarray<T: Element + Send + Sync, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: impl Into<Array<T, D>> + Send,
) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunks_ndarray.
pub async fn async_erase_metadata(&self) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of erase_metadata.
pub async fn async_erase_metadata_opt(
    &self,
    options: MetadataEraseVersion,
) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of erase_metadata_opt.
pub async fn async_erase_chunk(
    &self,
    chunk_indices: &[u64],
) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of erase_chunk.
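The erase-then-read behaviour seen in the async_array_write_read example can be modelled with a toy in-memory chunk store. This is a conceptual sketch only, not how zarrs is implemented: an erased chunk simply has no stored bytes, so subsequent reads of it observe the fill value.

```rust
use std::collections::HashMap;

/// Toy model of a chunked store: chunks keyed by their indices,
/// with a fill value for chunks that have no stored bytes.
struct ChunkStore {
    chunks: HashMap<Vec<u64>, Vec<f32>>,
    fill_value: f32,
    chunk_len: usize,
}

impl ChunkStore {
    /// Erasing removes the stored bytes; erasing a missing chunk is a no-op.
    fn erase_chunk(&mut self, indices: &[u64]) {
        self.chunks.remove(indices);
    }

    /// Reading a chunk with no stored bytes yields the fill value.
    fn retrieve_chunk(&self, indices: &[u64]) -> Vec<f32> {
        self.chunks
            .get(indices)
            .cloned()
            .unwrap_or_else(|| vec![self.fill_value; self.chunk_len])
    }
}

fn main() {
    let mut store = ChunkStore {
        chunks: HashMap::from([(vec![0, 0], vec![1.0; 4])]),
        fill_value: f32::NAN,
        chunk_len: 4,
    };
    store.erase_chunk(&[0, 0]);
    // The erased chunk now reads back as the fill value (NaN here,
    // mirroring the ZARR_NAN_F32 fill value in the example).
    println!("{:?}", store.retrieve_chunk(&[0, 0]));
}
```

This mirrors the example output: after `async_erase_chunk(&[0, 0])`, the top-left 4x4 region of the retrieved array reads as the NaN fill value.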
Examples found in repository: see the async_array_write_read example reproduced under async_store_metadata above; it also exercises this method.
164 .await?;
165 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
166
167 // Read a chunk
168 let chunk_indices = vec![0, 1];
169 let data_chunk = array
170 .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
171 .await?;
172 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
173
174 // Read chunks
175 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
176 let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
177 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
178
179 // Retrieve an array subset
180 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
181 let data_subset = array
182 .async_retrieve_array_subset_ndarray::<f32>(&subset)
183 .await?;
184 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
185
186 // Show the hierarchy
187 let node = Node::async_open(store, "/").await.unwrap();
188 let tree = node.hierarchy_tree();
189 println!("hierarchy_tree:\n{}", tree);
190
191 Ok(())
192}
pub async fn async_erase_chunks(
    &self,
    chunks: &ArraySubset,
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_chunks.
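The chunk-oriented methods above operate over every chunk whose indices fall in a rectangular `ArraySubset`. The sketch below is not the zarrs API: it is a plain-Rust illustration of enumerating the chunk indices covered by such a subset (half-open ranges per dimension, row-major order), which is conceptually what a method like `async_erase_chunks` iterates over; the helper name `chunk_indices` is hypothetical.

```rust
// Illustrative sketch (not the zarrs API): enumerate the chunk indices
// covered by a rectangular chunk subset. Each dimension is a half-open
// range; indices are produced in row-major order.
fn chunk_indices(ranges: &[std::ops::Range<u64>]) -> Vec<Vec<u64>> {
    let mut out = vec![vec![]];
    for r in ranges {
        let mut next = Vec::new();
        for prefix in &out {
            for i in r.clone() {
                let mut idx = prefix.clone();
                idx.push(i);
                next.push(idx);
            }
        }
        out = next;
    }
    out
}

fn main() {
    // Chunks [0..2, 1..2] cover chunk indices [0, 1] and [1, 1].
    for idx in chunk_indices(&[0..2, 1..2]) {
        println!("{idx:?}");
    }
}
```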
pub async fn async_store_chunk_opt<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: impl Into<ArrayBytes<'a>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_opt.
pub async unsafe fn async_store_encoded_chunk(
    &self,
    chunk_indices: &[u64],
    encoded_chunk_bytes: AsyncBytes,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_encoded_chunk.
pub async fn async_store_chunk_elements_opt<T: Element + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: &[T],
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_elements_opt.
pub async fn async_store_chunk_ndarray_opt<T: Element + Send + Sync, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: impl Into<Array<T, D>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunk_ndarray_opt.
pub async fn async_store_chunks_opt<'a>(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: impl Into<ArrayBytes<'a>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks_opt.
pub async fn async_store_chunks_elements_opt<T: Element + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: &[T],
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks_elements_opt.
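The `_elements` variants take a flat slice whose values fill the target region in row-major (C) order. As a plain-Rust illustration (not the zarrs API; the helper name `linear_index` is hypothetical), the flat position of an n-D coordinate within a shape can be sketched as:

```rust
// Illustrative sketch (not the zarrs API): row-major (C order) linear
// index of an n-D coordinate within a shape.
fn linear_index(coord: &[u64], shape: &[u64]) -> u64 {
    coord.iter().zip(shape).fold(0, |acc, (&c, &s)| acc * s + c)
}

fn main() {
    // In a 4x4 chunk, element (3, 0) sits at flat index 12.
    println!("{}", linear_index(&[3, 0], &[4, 4]));
}
```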
pub async fn async_store_chunks_ndarray_opt<T: Element + Send + Sync, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: impl Into<Array<T, D>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunks_ndarray_opt.
impl<TStorage: ?Sized + AsyncReadableWritableStorageTraits + 'static> Array<TStorage>
pub async fn async_store_chunk_subset<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: impl Into<ArrayBytes<'a>> + Send,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset.
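A chunk subset is expressed relative to the chunk's origin. Under a regular chunk grid, the origin of chunk `chunk_indices` is `chunk_indices[d] * chunk_shape[d]` in each dimension, so a within-chunk subset maps to absolute array coordinates as sketched below (plain Rust, not the zarrs API; the helper name is hypothetical).

```rust
// Illustrative sketch (not the zarrs API): map a subset within a chunk to
// absolute array coordinate ranges, assuming a regular chunk grid.
fn chunk_subset_to_array_ranges(
    chunk_indices: &[u64],
    chunk_shape: &[u64],
    chunk_subset: &[std::ops::Range<u64>],
) -> Vec<std::ops::Range<u64>> {
    chunk_indices
        .iter()
        .zip(chunk_shape)
        .zip(chunk_subset)
        .map(|((&idx, &len), within)| {
            let origin = idx * len; // chunk origin in this dimension
            origin + within.start..origin + within.end
        })
        .collect()
}

fn main() {
    // Subset [3..4, 0..4] of chunk [1, 1] with 4x4 chunks is array [7..8, 4..8],
    // matching the chunk-subset write in the example above.
    let ranges = chunk_subset_to_array_ranges(&[1, 1], &[4, 4], &[3..4, 0..4]);
    println!("{ranges:?}");
}
```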
pub async fn async_store_chunk_subset_elements<T: Element + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: &[T],
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_elements.
Examples found in repository
See the async_array_write_read example shown above.
pub async fn async_store_chunk_subset_ndarray<T: Element + Send + Sync, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: impl Into<Array<T, D>> + Send,
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunk_subset_ndarray.
pub async fn async_store_array_subset<'a>(
    &self,
    array_subset: &ArraySubset,
    subset_bytes: impl Into<ArrayBytes<'a>> + Send,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset.
pub async fn async_store_array_subset_elements<T: Element + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    subset_elements: &[T],
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_elements.
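An `_elements` call supplies one value per element of the target subset, so the slice length must equal the product of the subset's per-dimension extents (e.g. `[0..8, 6..7]` holds 8 elements, as in the example above). A plain-Rust sketch of that count (not the zarrs API; the helper name is hypothetical):

```rust
// Illustrative sketch (not the zarrs API): number of elements in a
// rectangular subset given as half-open ranges per dimension.
fn num_elements(ranges: &[std::ops::Range<u64>]) -> u64 {
    ranges.iter().map(|r| r.end - r.start).product()
}

fn main() {
    // [0..8, 6..7] is a single 8-element column.
    println!("{}", num_elements(&[0..8, 6..7]));
}
```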
Examples found in repository
See the async_array_write_read example shown above.
pub async fn async_store_array_subset_ndarray<T: Element + Send + Sync, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: impl Into<Array<T, D>> + Send,
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_array_subset_ndarray.
pub async fn async_store_chunk_subset_opt<'a>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: impl Into<ArrayBytes<'a>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_opt.
pub async fn async_store_chunk_subset_elements_opt<T: Element + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: &[T],
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_elements_opt.
pub async fn async_store_chunk_subset_ndarray_opt<T: Element + Send + Sync, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: impl Into<Array<T, D>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_ndarray_opt.
pub async fn async_store_array_subset_opt<'a>(
    &self,
    array_subset: &ArraySubset,
    subset_bytes: impl Into<ArrayBytes<'a>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_opt.
pub async fn async_store_array_subset_elements_opt<T: Element + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    subset_elements: &[T],
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_elements_opt.
pub async fn async_store_array_subset_ndarray_opt<T: Element + Send + Sync, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: impl Into<Array<T, D>> + Send,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_array_subset_ndarray_opt.
impl<TStorage: ?Sized> Array<TStorage>
pub fn new_with_metadata(
    storage: Arc<TStorage>,
    path: &str,
    metadata: ArrayMetadata,
) -> Result<Self, ArrayCreateError>
Create an array in storage at path with metadata.
This does not write to the store; use store_metadata to write metadata to storage.
§Errors
Returns ArrayCreateError if:
- any metadata is invalid, or
- a plugin (e.g. data type/chunk grid/chunk key encoding/codec/storage transformer) is invalid.
Examples found in repository
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let store = Arc::new(zarrs_storage::store::MemoryStore::new());

    let serde_json::Value::Object(attributes) = serde_json::json!({
        "foo": "bar",
        "baz": 42,
    }) else {
        unreachable!()
    };

    // Create a Zarr V2 group
    let group_metadata: GroupMetadata = GroupMetadataV2::new()
        .with_attributes(attributes.clone())
        .into();
    let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;

    // Store the metadata as V2 and V3
    let convert_group_metadata_to_v3 =
        GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
    group.store_metadata()?;
    group.store_metadata_opt(&convert_group_metadata_to_v3)?;
    println!(
        "group/.zgroup (Zarr V2 group metadata):\n{}\n",
        key_to_str(&store, "group/.zgroup")?
    );
    println!(
        "group/.zattrs (Zarr V2 group attributes):\n{}\n",
        key_to_str(&store, "group/.zattrs")?
    );
    println!(
        "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
        key_to_str(&store, "group/zarr.json")?
    );
    // println!(
    //     "The equivalent Zarr V3 group metadata is\n{}\n",
    //     group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
    // );

    // Create a Zarr V2 array
    let array_metadata = ArrayMetadataV2::new(
        vec![10, 10],
        vec![5, 5].try_into()?,
        ">f4".into(), // big endian float32
        FillValueMetadataV2::NaN,
        None,
        None,
    )
    .with_dimension_separator(ChunkKeySeparator::Slash)
    .with_order(ArrayMetadataV2Order::F)
    .with_attributes(attributes.clone());
    let array = zarrs::array::Array::new_with_metadata(
        store.clone(),
        "/group/array",
        array_metadata.into(),
    )?;

    // Store the metadata as V2 and V3
    let convert_array_metadata_to_v3 =
        ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
    array.store_metadata()?;
    array.store_metadata_opt(&convert_array_metadata_to_v3)?;
    println!(
        "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
        key_to_str(&store, "group/array/.zarray")?
    );
    println!(
        "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
        key_to_str(&store, "group/array/.zattrs")?
    );
    println!(
        "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
        key_to_str(&store, "group/array/zarr.json")?
    );
    // println!(
    //     "The equivalent Zarr V3 array metadata is\n{}\n",
    //     array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
    // );

    array.store_chunk_elements::<f32>(&[0, 1], &[0.0; 5 * 5])?;

    // Print the keys in the store
    println!("The store contains keys:");
    for key in store.list()? {
        println!(" {}", key);
    }

    Ok(())
}
pub const fn fill_value(&self) -> &FillValue
Get the fill value.
pub fn shape(&self) -> &[u64]
Get the array shape.
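The examples below call `chunk_grid().subset(...)` to find the region of the array a chunk covers. For a regular chunk grid this is simple arithmetic, sketched here in plain Rust (not the zarrs API; the helper name is hypothetical): the chunk's origin is index times chunk extent, and edge chunks are clamped to the array shape.

```rust
// Illustrative sketch (not the zarrs API): the array subset covered by a
// chunk under a regular chunk grid, clamped to the array shape; returns
// None for chunk indices outside the array.
fn regular_chunk_subset(
    chunk_indices: &[u64],
    chunk_shape: &[u64],
    array_shape: &[u64],
) -> Option<Vec<std::ops::Range<u64>>> {
    let mut ranges = Vec::with_capacity(chunk_indices.len());
    for ((&idx, &len), &size) in chunk_indices.iter().zip(chunk_shape).zip(array_shape) {
        let start = idx.checked_mul(len)?;
        if start >= size {
            return None; // chunk lies entirely outside the array
        }
        // Clamp edge chunks to the array extent.
        ranges.push(start..(start + len).min(size));
    }
    Some(ranges)
}

fn main() {
    // An edge chunk of a 10x10 array with 4x4 chunks is clamped to 8..10.
    println!("{:?}", regular_chunk_subset(&[2, 0], &[4, 4], &[10, 10]));
}
```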
Examples found in repository
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
        }
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        &[123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        &[-4.0; 4],
    )?;

    // Read the whole array
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The Zarr hierarchy tree is:\n{tree}");

    Ok(())
}
More examples
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10 use zarrs::{
11 array::{DataType, FillValue, ZARR_NAN_F32},
12 array_subset::ArraySubset,
13 node::Node,
14 storage::store,
15 };
16
17 // Create a store
18 // let path = tempfile::TempDir::new()?;
19 // let mut store: ReadableWritableListableStorage =
20 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
21 // let mut store: ReadableWritableListableStorage = Arc::new(
22 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
23 // );
24 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
25 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
26 if arg1 == "--usage-log" {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36 }
37
38 // Create the root group
39 zarrs::group::GroupBuilder::new()
40 .build(store.clone(), "/")?
41 .store_metadata()?;
42
43 // Create a group with attributes
44 let group_path = "/group";
45 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
46 group
47 .attributes_mut()
48 .insert("foo".into(), serde_json::Value::String("bar".into()));
49 group.store_metadata()?;
50
51 println!(
52 "The group metadata is:\n{}\n",
53 group.metadata().to_string_pretty()
54 );
55
56 // Create an array
57 let array_path = "/group/array";
58 let array = zarrs::array::ArrayBuilder::new(
59 vec![8, 8], // array shape
60 DataType::Float32,
61 vec![4, 4].try_into()?, // regular chunk shape
62 FillValue::from(ZARR_NAN_F32),
63 )
64 // .bytes_to_bytes_codecs(vec![]) // uncompressed
65 .dimension_names(["y", "x"].into())
66 // .storage_transformers(vec![].into())
67 .build(store.clone(), array_path)?;
68
69 // Write array metadata to store
70 array.store_metadata()?;
71
72 println!(
73 "The array metadata is:\n{}\n",
74 array.metadata().to_string_pretty()
75 );
76
77 // Write some chunks
78 (0..2).into_par_iter().try_for_each(|i| {
79 let chunk_indices: Vec<u64> = vec![0, i];
80 let chunk_subset = array
81 .chunk_grid()
82 .subset(&chunk_indices, array.shape())?
83 .ok_or_else(|| {
84 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
85 })?;
86 array.store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 })?;
91
92 let subset_all = array.subset_all();
93 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
94 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
95
96 // Store multiple chunks
97 array.store_chunks_elements::<f32>(
98 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
99 &[
100 //
101 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
102 //
103 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 ],
105 )?;
106 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
107 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
108
109 // Write a subset spanning multiple chunks, including updating chunks already written
110 array.store_array_subset_elements::<f32>(
111 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
112 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
113 )?;
114 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
115 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
116
117 // Store array subset
118 array.store_array_subset_elements::<f32>(
119 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
120 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
121 )?;
122 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
123 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
124
125 // Store chunk subset
126 array.store_chunk_subset_elements::<f32>(
127 // chunk indices
128 &[1, 1],
129 // subset within chunk
130 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
131 &[-7.4, -7.5, -7.6, -7.7],
132 )?;
133 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
134 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
135
136 // Erase a chunk
137 array.erase_chunk(&[0, 0])?;
138 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
139 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
140
141 // Read a chunk
142 let chunk_indices = vec![0, 1];
143 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
144 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
145
146 // Read chunks
147 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
148 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
149 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
150
151 // Retrieve an array subset
152 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
153 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
154 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("hierarchy_tree:\n{}", tree);
160
161 Ok(())
162}
11fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
12 use zarrs::{
13 array::{
14 codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
15 DataType, FillValue,
16 },
17 array_subset::ArraySubset,
18 node::Node,
19 storage::store,
20 };
21
22 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
23 use std::sync::Arc;
24
25 // Create a store
26 // let path = tempfile::TempDir::new()?;
27 // let mut store: ReadableWritableListableStorage =
28 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
29 // let mut store: ReadableWritableListableStorage = Arc::new(
30 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
31 // );
32 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
33 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
34 if arg1 == "--usage-log" {
35 let log_writer = Arc::new(std::sync::Mutex::new(
36 // std::io::BufWriter::new(
37 std::io::stdout(),
38 // )
39 ));
40 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
41 chrono::Utc::now().format("[%T%.3f] ").to_string()
42 }));
43 }
44 }
45
46 // Create the root group
47 zarrs::group::GroupBuilder::new()
48 .build(store.clone(), "/")?
49 .store_metadata()?;
50
51 // Create a group with attributes
52 let group_path = "/group";
53 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
54 group
55 .attributes_mut()
56 .insert("foo".into(), serde_json::Value::String("bar".into()));
57 group.store_metadata()?;
58
59 // Create an array
60 let array_path = "/group/array";
61 let shard_shape = vec![4, 8];
62 let inner_chunk_shape = vec![4, 4];
63 let mut sharding_codec_builder =
64 ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
65 sharding_codec_builder.bytes_to_bytes_codecs(vec![
66 #[cfg(feature = "gzip")]
67 Arc::new(codec::GzipCodec::new(5)?),
68 ]);
69 let array = zarrs::array::ArrayBuilder::new(
70 vec![8, 8], // array shape
71 DataType::UInt16,
72 shard_shape.try_into()?,
73 FillValue::from(0u16),
74 )
75 .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
76 .dimension_names(["y", "x"].into())
77 // .storage_transformers(vec![].into())
78 .build(store.clone(), array_path)?;
79
80 // Write array metadata to store
81 array.store_metadata()?;
82
83    // Print the array metadata
84 println!(
85 "The array metadata is:\n{}\n",
86 array.metadata().to_string_pretty()
87 );
88
89 // Use default codec options (concurrency etc)
90 let options = CodecOptions::default();
91
92 // Write some shards (in parallel)
93 (0..2).into_par_iter().try_for_each(|s| {
94 let chunk_grid = array.chunk_grid();
95 let chunk_indices = vec![s, 0];
96 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
97 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
98 chunk_shape
99 .iter()
100 .map(|u| u.get() as usize)
101 .collect::<Vec<_>>(),
102 |ij| {
103 (s * chunk_shape[0].get() * chunk_shape[1].get()
104 + ij[0] as u64 * chunk_shape[1].get()
105 + ij[1] as u64) as u16
106 },
107 );
108 array.store_chunk_ndarray(&chunk_indices, chunk_array)
109 } else {
110 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
111 chunk_indices.to_vec(),
112 ))
113 }
114 })?;
115
116 // Read the whole array
117 let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
118 println!("The whole array is:\n{data_all}\n");
119
120 // Read a shard back from the store
121 let shard_indices = vec![1, 0];
122 let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
123 println!("Shard [1,0] is:\n{data_shard}\n");
124
125 // Read an inner chunk from the store
126 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
127 let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
128 println!("Chunk [1,0] is:\n{data_chunk}\n");
129
130 // Read the central 4x2 subset of the array
131 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
132 let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
133 println!("The middle 4x2 subset is:\n{data_4x2}\n");
134
135 // Decode inner chunks
136 // In some cases, it might be preferable to decode inner chunks in a shard directly.
137 // If using the partial decoder, then the shard index will only be read once from the store.
138 let partial_decoder = array.partial_decoder(&[0, 0])?;
139 let inner_chunks_to_decode = vec![
140 ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
141 ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
142 ];
143 let decoded_inner_chunks_bytes =
144 partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
145 println!("Decoded inner chunks:");
146 for (inner_chunk_subset, decoded_inner_chunk) in
147 std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
148 {
149 let ndarray = bytes_to_ndarray::<u16>(
150 &inner_chunk_shape,
151 decoded_inner_chunk.into_fixed()?.into_owned(),
152 )?;
153 println!("{inner_chunk_subset}\n{ndarray}\n");
154 }
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("The Zarr hierarchy tree is:\n{}", tree);
160
161 println!(
162 "The keys in the store are:\n[{}]",
163 store.list().unwrap_or_default().iter().format(", ")
164 );
165
166 Ok(())
167}
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
8async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use futures::StreamExt;
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 };
16
17 // Create a store
18 let mut store: AsyncReadableWritableListableStorage = Arc::new(
19 zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
20 );
21 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
22 if arg1 == "--usage-log" {
23 let log_writer = Arc::new(std::sync::Mutex::new(
24 // std::io::BufWriter::new(
25 std::io::stdout(),
26 // )
27 ));
28 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
29 chrono::Utc::now().format("[%T%.3f] ").to_string()
30 }));
31 }
32 }
33
34 // Create the root group
35 zarrs::group::GroupBuilder::new()
36 .build(store.clone(), "/")?
37 .async_store_metadata()
38 .await?;
39
40 // Create a group with attributes
41 let group_path = "/group";
42 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
43 group
44 .attributes_mut()
45 .insert("foo".into(), serde_json::Value::String("bar".into()));
46 group.async_store_metadata().await?;
47
48 println!(
49 "The group metadata is:\n{}\n",
50 group.metadata().to_string_pretty()
51 );
52
53 // Create an array
54 let array_path = "/group/array";
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![8, 8], // array shape
57 DataType::Float32,
58 vec![4, 4].try_into()?, // regular chunk shape
59 FillValue::from(ZARR_NAN_F32),
60 )
61 // .bytes_to_bytes_codecs(vec![]) // uncompressed
62 .dimension_names(["y", "x"].into())
63 // .storage_transformers(vec![].into())
64 .build_arc(store.clone(), array_path)?;
65
66 // Write array metadata to store
67 array.async_store_metadata().await?;
68
69 println!(
70 "The array metadata is:\n{}\n",
71 array.metadata().to_string_pretty()
72 );
73
74 // Write some chunks
75 let store_chunk = |i: u64| {
76 let array = array.clone();
77 async move {
78 let chunk_indices: Vec<u64> = vec![0, i];
79 let chunk_subset = array
80 .chunk_grid()
81 .subset(&chunk_indices, array.shape())?
82 .ok_or_else(|| {
83 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
84 })?;
85 array
86 .async_store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 .await
91 }
92 };
93 futures::stream::iter(0..2)
94 .map(Ok)
95 .try_for_each_concurrent(None, store_chunk)
96 .await?;
97
98 let subset_all = array.subset_all();
99 let data_all = array
100 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
101 .await?;
102 println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
103
104 // Store multiple chunks
105 array
106 .async_store_chunks_elements::<f32>(
107 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
108 &[
109 //
110 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
111 //
112 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
113 ],
114 )
115 .await?;
116 let data_all = array
117 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
118 .await?;
119 println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
120
121 // Write a subset spanning multiple chunks, including updating chunks already written
122 array
123 .async_store_array_subset_elements::<f32>(
124 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
125 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
126 )
127 .await?;
128 let data_all = array
129 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
130 .await?;
131 println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
132
133 // Store array subset
134 array
135 .async_store_array_subset_elements::<f32>(
136 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
137 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
138 )
139 .await?;
140 let data_all = array
141 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
142 .await?;
143 println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
144
145 // Store chunk subset
146 array
147 .async_store_chunk_subset_elements::<f32>(
148 // chunk indices
149 &[1, 1],
150 // subset within chunk
151 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
152 &[-7.4, -7.5, -7.6, -7.7],
153 )
154 .await?;
155 let data_all = array
156 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
157 .await?;
158 println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
159
160 // Erase a chunk
161 array.async_erase_chunk(&[0, 0]).await?;
162 let data_all = array
163 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
164 .await?;
165 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
166
167 // Read a chunk
168 let chunk_indices = vec![0, 1];
169 let data_chunk = array
170 .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
171 .await?;
172 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
173
174 // Read chunks
175 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
176 let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
177 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
178
179 // Retrieve an array subset
180 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
181 let data_subset = array
182 .async_retrieve_array_subset_ndarray::<f32>(&subset)
183 .await?;
184 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
185
186 // Show the hierarchy
187 let node = Node::async_open(store, "/").await.unwrap();
188 let tree = node.hierarchy_tree();
189 println!("hierarchy_tree:\n{}", tree);
190
191 Ok(())
192}
pub fn set_shape(&mut self, shape: ArrayShape) -> &mut Self
Set the array shape.
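Because the shape is part of the array metadata, a call to set_shape only takes effect in the store once the metadata is rewritten (see "Do not forget to store metadata after mutation" above). A minimal sketch, assuming a mutable array like the ones built in the examples below; the new shape vec![16, 8] is illustrative:

```rust
// Grow the array from 8x8 to 16x8; existing chunks are untouched.
array.set_shape(vec![16, 8]);
// The shape lives in the metadata, so store it again after mutation.
array.store_metadata()?;
```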
pub fn dimensionality(&self) -> usize
Get the array dimensionality.
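As a quick illustration (a sketch assuming the 2-dimensional array of shape [8, 8] built in the examples below):

```rust
// An array built with shape vec![8, 8] is 2-dimensional.
assert_eq!(array.dimensionality(), 2);
assert_eq!(array.shape(), &[8, 8]);
```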
pub fn codecs(&self) -> &CodecChain
Get the codecs.
pub const fn chunk_grid(&self) -> &ChunkGrid
Get the chunk grid.
Examples found in repository
8fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
10 use zarrs::array::ChunkGrid;
11 use zarrs::{
12 array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
13 node::Node,
14 };
15 use zarrs::{
16 array::{DataType, ZARR_NAN_F32},
17 array_subset::ArraySubset,
18 storage::store,
19 };
20
21 // Create a store
22 // let path = tempfile::TempDir::new()?;
23 // let mut store: ReadableWritableListableStorage =
24 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 ChunkGrid::new(RectangularChunkGrid::new(&[
63 [1, 2, 3, 2].try_into()?,
64 4.try_into()?,
65 ])),
66 FillValue::from(ZARR_NAN_F32),
67 )
68 .bytes_to_bytes_codecs(vec![
69 #[cfg(feature = "gzip")]
70 Arc::new(codec::GzipCodec::new(5)?),
71 ])
72 .dimension_names(["y", "x"].into())
73 // .storage_transformers(vec![].into())
74 .build(store.clone(), array_path)?;
75
76 // Write array metadata to store
77 array.store_metadata()?;
78
79 // Write some chunks (in parallel)
80 (0..4).into_par_iter().try_for_each(|i| {
81 let chunk_grid = array.chunk_grid();
82 let chunk_indices = vec![i, 0];
83 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
84 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
85 chunk_shape
86 .iter()
87 .map(|u| u.get() as usize)
88 .collect::<Vec<_>>(),
89 i as f32,
90 );
91 array.store_chunk_ndarray(&chunk_indices, chunk_array)
92 } else {
93 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
94 chunk_indices.to_vec(),
95 ))
96 }
97 })?;
98
99 println!(
100 "The array metadata is:\n{}\n",
101 array.metadata().to_string_pretty()
102 );
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset_ndarray(
106 &[3, 3], // start
107 ndarray::ArrayD::<f32>::from_shape_vec(
108 vec![3, 3],
109 vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
110 )?,
111 )?;
112
113 // Store elements directly, in this case set the 7th column to 123.0
114 array.store_array_subset_elements::<f32>(
115 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
116 &[123.0; 8],
117 )?;
118
119 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
120 array.store_chunk_subset_elements::<f32>(
121 // chunk indices
122 &[3, 1],
123 // subset within chunk
124 &ArraySubset::new_with_ranges(&[1..2, 0..4]),
125 &[-4.0; 4],
126 )?;
127
128 // Read the whole array
129 let data_all = array.retrieve_array_subset_ndarray::<f32>(&array.subset_all())?;
130 println!("The whole array is:\n{data_all}\n");
131
132 // Read a chunk back from the store
133 let chunk_indices = vec![1, 0];
134 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
135 println!("Chunk [1,0] is:\n{data_chunk}\n");
136
137 // Read the central 4x2 subset of the array
138 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
139 let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
140 println!("The middle 4x2 subset is:\n{data_4x2}\n");
141
142 // Show the hierarchy
143 let node = Node::open(&store, "/").unwrap();
144 let tree = node.hierarchy_tree();
145 println!("The Zarr hierarchy tree is:\n{tree}");
146
147 Ok(())
148}
More examples
11fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
12 use zarrs::{
13 array::{
14 codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
15 DataType, FillValue,
16 },
17 array_subset::ArraySubset,
18 node::Node,
19 storage::store,
20 };
21
22 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
23 use std::sync::Arc;
24
25 // Create a store
26 // let path = tempfile::TempDir::new()?;
27 // let mut store: ReadableWritableListableStorage =
28 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
29 // let mut store: ReadableWritableListableStorage = Arc::new(
30 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
31 // );
32 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
33 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
34 if arg1 == "--usage-log" {
35 let log_writer = Arc::new(std::sync::Mutex::new(
36 // std::io::BufWriter::new(
37 std::io::stdout(),
38 // )
39 ));
40 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
41 chrono::Utc::now().format("[%T%.3f] ").to_string()
42 }));
43 }
44 }
45
46 // Create the root group
47 zarrs::group::GroupBuilder::new()
48 .build(store.clone(), "/")?
49 .store_metadata()?;
50
51 // Create a group with attributes
52 let group_path = "/group";
53 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
54 group
55 .attributes_mut()
56 .insert("foo".into(), serde_json::Value::String("bar".into()));
57 group.store_metadata()?;
58
59 // Create an array
60 let array_path = "/group/array";
61 let shard_shape = vec![4, 8];
62 let inner_chunk_shape = vec![4, 4];
63 let mut sharding_codec_builder =
64 ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
65 sharding_codec_builder.bytes_to_bytes_codecs(vec![
66 #[cfg(feature = "gzip")]
67 Arc::new(codec::GzipCodec::new(5)?),
68 ]);
69 let array = zarrs::array::ArrayBuilder::new(
70 vec![8, 8], // array shape
71 DataType::UInt16,
72 shard_shape.try_into()?,
73 FillValue::from(0u16),
74 )
75 .array_to_bytes_codec(Arc::new(sharding_codec_builder.build()))
76 .dimension_names(["y", "x"].into())
77 // .storage_transformers(vec![].into())
78 .build(store.clone(), array_path)?;
79
80 // Write array metadata to store
81 array.store_metadata()?;
82
83 // The array metadata is
84 println!(
85 "The array metadata is:\n{}\n",
86 array.metadata().to_string_pretty()
87 );
88
89 // Use default codec options (concurrency etc)
90 let options = CodecOptions::default();
91
92 // Write some shards (in parallel)
93 (0..2).into_par_iter().try_for_each(|s| {
94 let chunk_grid = array.chunk_grid();
95 let chunk_indices = vec![s, 0];
96 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
97 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
98 chunk_shape
99 .iter()
100 .map(|u| u.get() as usize)
101 .collect::<Vec<_>>(),
102 |ij| {
103 (s * chunk_shape[0].get() * chunk_shape[1].get()
104 + ij[0] as u64 * chunk_shape[1].get()
105 + ij[1] as u64) as u16
106 },
107 );
108 array.store_chunk_ndarray(&chunk_indices, chunk_array)
109 } else {
110 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
111 chunk_indices.to_vec(),
112 ))
113 }
114 })?;
115
116 // Read the whole array
117 let data_all = array.retrieve_array_subset_ndarray::<u16>(&array.subset_all())?;
118 println!("The whole array is:\n{data_all}\n");
119
120 // Read a shard back from the store
121 let shard_indices = vec![1, 0];
122 let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
123 println!("Shard [1,0] is:\n{data_shard}\n");
124
125 // Read an inner chunk from the store
126 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
127 let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
128     println!("Inner chunk [1,0] is:\n{data_chunk}\n");
129
130 // Read the central 4x2 subset of the array
131 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
132 let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
133 println!("The middle 4x2 subset is:\n{data_4x2}\n");
134
135 // Decode inner chunks
136 // In some cases, it might be preferable to decode inner chunks in a shard directly.
137 // If using the partial decoder, then the shard index will only be read once from the store.
138 let partial_decoder = array.partial_decoder(&[0, 0])?;
139 let inner_chunks_to_decode = vec![
140 ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
141 ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
142 ];
143 let decoded_inner_chunks_bytes =
144 partial_decoder.partial_decode(&inner_chunks_to_decode, &options)?;
145 println!("Decoded inner chunks:");
146 for (inner_chunk_subset, decoded_inner_chunk) in
147 std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_bytes)
148 {
149 let ndarray = bytes_to_ndarray::<u16>(
150 &inner_chunk_shape,
151 decoded_inner_chunk.into_fixed()?.into_owned(),
152 )?;
153 println!("{inner_chunk_subset}\n{ndarray}\n");
154 }
155
156 // Show the hierarchy
157 let node = Node::open(&store, "/").unwrap();
158 let tree = node.hierarchy_tree();
159 println!("The Zarr hierarchy tree is:\n{}", tree);
160
161 println!(
162 "The keys in the store are:\n[{}]",
163 store.list().unwrap_or_default().iter().format(", ")
164 );
165
166 Ok(())
167}
9fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 storage::store,
16 };
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
27 if arg1 == "--usage-log" {
28 let log_writer = Arc::new(std::sync::Mutex::new(
29 // std::io::BufWriter::new(
30 std::io::stdout(),
31 // )
32 ));
33 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
34 chrono::Utc::now().format("[%T%.3f] ").to_string()
35 }));
36 }
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 println!(
53 "The group metadata is:\n{}\n",
54 group.metadata().to_string_pretty()
55 );
56
57 // Create an array
58 let array_path = "/group/array";
59 let array = zarrs::array::ArrayBuilder::new(
60 vec![8, 8], // array shape
61 DataType::Float32,
62 vec![4, 4].try_into()?, // regular chunk shape
63 FillValue::from(ZARR_NAN_F32),
64 )
65 // .bytes_to_bytes_codecs(vec![]) // uncompressed
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 println!(
74 "The array metadata is:\n{}\n",
75 array.metadata().to_string_pretty()
76 );
77
78 // Write some chunks
79 (0..2).into_par_iter().try_for_each(|i| {
80 let chunk_indices: Vec<u64> = vec![0, i];
81 let chunk_subset = array
82 .chunk_grid()
83 .subset(&chunk_indices, array.shape())?
84 .ok_or_else(|| {
85 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
86 })?;
87 array.store_chunk_ndarray(
88 &chunk_indices,
89 ArrayD::<f32>::from_shape_vec(
90 chunk_subset.shape_usize(),
91 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
92 )
93 .unwrap(),
94 )
95 })?;
96
97 let subset_all = array.subset_all();
98 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
99 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
100
101 // Store multiple chunks
102 let ndarray_chunks: Array2<f32> = array![
103 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
104 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
105 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
106 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
107 ];
108 array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
109 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
110 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 let ndarray_subset: Array2<f32> =
114 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
115 array.store_array_subset_ndarray(
116 ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
117 ndarray_subset,
118 )?;
119 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
120 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 let ndarray_subset: Array2<f32> = array![
124 [-0.6],
125 [-1.6],
126 [-2.6],
127 [-3.6],
128 [-4.6],
129 [-5.6],
130 [-6.6],
131 [-7.6],
132 ];
133 array.store_array_subset_ndarray(
134 ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
135 ndarray_subset,
136 )?;
137 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
138 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
139
140 // Store chunk subset
141 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
142 array.store_chunk_subset_ndarray(
143 // chunk indices
144 &[1, 1],
145 // subset within chunk
146 ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
147 ndarray_chunk_subset,
148 )?;
149 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
150 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
151
152 // Erase a chunk
153 array.erase_chunk(&[0, 0])?;
154 let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
155 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
156
157 // Read a chunk
158 let chunk_indices = vec![0, 1];
159 let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
160 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
161
162 // Read chunks
163 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
164 let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
165 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
166
167 // Retrieve an array subset
168 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
169 let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
170 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
171
172 // Show the hierarchy
173 let node = Node::open(&store, "/").unwrap();
174 let tree = node.hierarchy_tree();
175 println!("hierarchy_tree:\n{}", tree);
176
177 Ok(())
178}
8async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use futures::StreamExt;
10 use std::sync::Arc;
11 use zarrs::{
12 array::{DataType, FillValue, ZARR_NAN_F32},
13 array_subset::ArraySubset,
14 node::Node,
15 };
16
17 // Create a store
18 let mut store: AsyncReadableWritableListableStorage = Arc::new(
19 zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
20 );
21 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
22 if arg1 == "--usage-log" {
23 let log_writer = Arc::new(std::sync::Mutex::new(
24 // std::io::BufWriter::new(
25 std::io::stdout(),
26 // )
27 ));
28 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
29 chrono::Utc::now().format("[%T%.3f] ").to_string()
30 }));
31 }
32 }
33
34 // Create the root group
35 zarrs::group::GroupBuilder::new()
36 .build(store.clone(), "/")?
37 .async_store_metadata()
38 .await?;
39
40 // Create a group with attributes
41 let group_path = "/group";
42 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
43 group
44 .attributes_mut()
45 .insert("foo".into(), serde_json::Value::String("bar".into()));
46 group.async_store_metadata().await?;
47
48 println!(
49 "The group metadata is:\n{}\n",
50 group.metadata().to_string_pretty()
51 );
52
53 // Create an array
54 let array_path = "/group/array";
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![8, 8], // array shape
57 DataType::Float32,
58 vec![4, 4].try_into()?, // regular chunk shape
59 FillValue::from(ZARR_NAN_F32),
60 )
61 // .bytes_to_bytes_codecs(vec![]) // uncompressed
62 .dimension_names(["y", "x"].into())
63 // .storage_transformers(vec![].into())
64 .build_arc(store.clone(), array_path)?;
65
66 // Write array metadata to store
67 array.async_store_metadata().await?;
68
69 println!(
70 "The array metadata is:\n{}\n",
71 array.metadata().to_string_pretty()
72 );
73
74 // Write some chunks
75 let store_chunk = |i: u64| {
76 let array = array.clone();
77 async move {
78 let chunk_indices: Vec<u64> = vec![0, i];
79 let chunk_subset = array
80 .chunk_grid()
81 .subset(&chunk_indices, array.shape())?
82 .ok_or_else(|| {
83 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
84 })?;
85 array
86 .async_store_chunk_elements(
87 &chunk_indices,
88 &vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
89 )
90 .await
91 }
92 };
93 futures::stream::iter(0..2)
94 .map(Ok)
95 .try_for_each_concurrent(None, store_chunk)
96 .await?;
97
98 let subset_all = array.subset_all();
99 let data_all = array
100 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
101 .await?;
102 println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
103
104 // Store multiple chunks
105 array
106 .async_store_chunks_elements::<f32>(
107 &ArraySubset::new_with_ranges(&[1..2, 0..2]),
108 &[
109 //
110 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
111 //
112 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
113 ],
114 )
115 .await?;
116 let data_all = array
117 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
118 .await?;
119 println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
120
121 // Write a subset spanning multiple chunks, including updating chunks already written
122 array
123 .async_store_array_subset_elements::<f32>(
124 &ArraySubset::new_with_ranges(&[3..6, 3..6]),
125 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
126 )
127 .await?;
128 let data_all = array
129 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
130 .await?;
131 println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
132
133 // Store array subset
134 array
135 .async_store_array_subset_elements::<f32>(
136 &ArraySubset::new_with_ranges(&[0..8, 6..7]),
137 &[-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
138 )
139 .await?;
140 let data_all = array
141 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
142 .await?;
143 println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
144
145 // Store chunk subset
146 array
147 .async_store_chunk_subset_elements::<f32>(
148 // chunk indices
149 &[1, 1],
150 // subset within chunk
151 &ArraySubset::new_with_ranges(&[3..4, 0..4]),
152 &[-7.4, -7.5, -7.6, -7.7],
153 )
154 .await?;
155 let data_all = array
156 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
157 .await?;
158 println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
159
160 // Erase a chunk
161 array.async_erase_chunk(&[0, 0]).await?;
162 let data_all = array
163 .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
164 .await?;
165 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
166
167 // Read a chunk
168 let chunk_indices = vec![0, 1];
169 let data_chunk = array
170 .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
171 .await?;
172 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
173
174 // Read chunks
175 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
176 let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
177 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
178
179 // Retrieve an array subset
180 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
181 let data_subset = array
182 .async_retrieve_array_subset_ndarray::<f32>(&subset)
183 .await?;
184 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
185
186 // Show the hierarchy
187 let node = Node::async_open(store, "/").await.unwrap();
188 let tree = node.hierarchy_tree();
189 println!("hierarchy_tree:\n{}", tree);
190
191 Ok(())
192}
pub const fn chunk_key_encoding(&self) -> &ChunkKeyEncoding
Get the chunk key encoding.
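For orientation, the Zarr V3 `default` chunk key encoding maps chunk indices to a store key by joining them under a `c` prefix with a configurable separator (`/` by default). The sketch below is a standalone illustration of that mapping, not the zarrs implementation; the function name is illustrative.

```rust
/// Sketch of the Zarr V3 `default` chunk key encoding:
/// indices are joined with a separator under a `c` prefix.
fn default_chunk_key(chunk_indices: &[u64], separator: char) -> String {
    let mut key = String::from("c");
    for index in chunk_indices {
        key.push(separator);
        key.push_str(&index.to_string());
    }
    key
}

fn main() {
    // Chunk [0, 1] with the default `/` separator.
    assert_eq!(default_chunk_key(&[0, 1], '/'), "c/0/1");
    // A `.` separator is also permitted by the encoding.
    assert_eq!(default_chunk_key(&[2, 3], '.'), "c.2.3");
}
```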
pub const fn storage_transformers(&self) -> &StorageTransformerChain
Get the storage transformers.
pub const fn dimension_names(&self) -> &Option<Vec<DimensionName>>
Get the dimension names.
pub fn set_dimension_names(
    &mut self,
    dimension_names: Option<Vec<DimensionName>>,
) -> &mut Self
Set the dimension names.
pub const fn attributes(&self) -> &Map<String, Value>
Get the attributes.
pub fn attributes_mut(&mut self) -> &mut Map<String, Value>
Mutably borrow the array attributes.
pub fn metadata(&self) -> &ArrayMetadata
Return the underlying array metadata.
Examples found in repository
153fn main() {
154 let store = std::sync::Arc::new(MemoryStore::default());
155 let array_path = "/array";
156 let array = ArrayBuilder::new(
157 vec![4, 1], // array shape
158 DataType::Extension(Arc::new(CustomDataTypeVariableSize)),
159 vec![3, 1].try_into().unwrap(), // regular chunk shape
160 FillValue::from(vec![]),
161 )
162 .array_to_array_codecs(vec![
163 #[cfg(feature = "transpose")]
164 Arc::new(zarrs::array::codec::TransposeCodec::new(
165 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
166 )),
167 ])
168 .bytes_to_bytes_codecs(vec![
169 #[cfg(feature = "gzip")]
170 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
171 #[cfg(feature = "crc32c")]
172 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
173 ])
174 // .storage_transformers(vec![].into())
175 .build(store, array_path)
176 .unwrap();
177 println!("{}", array.metadata().to_string_pretty());
178
179 let data = [
180 CustomDataTypeVariableSizeElement::from(Some(1.0)),
181 CustomDataTypeVariableSizeElement::from(None),
182 CustomDataTypeVariableSizeElement::from(Some(3.0)),
183 ];
184 array.store_chunk_elements(&[0, 0], &data).unwrap();
185
186 let data = array
187 .retrieve_array_subset_elements::<CustomDataTypeVariableSizeElement>(&array.subset_all())
188 .unwrap();
189
190 assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
191 assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
192 assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
193 assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));
194
195 println!("{data:#?}");
196}
More examples
269fn main() {
270 let store = std::sync::Arc::new(MemoryStore::default());
271 let array_path = "/array";
272 let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
273 let array = ArrayBuilder::new(
274 vec![4, 1], // array shape
275 DataType::Extension(Arc::new(CustomDataTypeFixedSize)),
276 vec![2, 1].try_into().unwrap(), // regular chunk shape
277 FillValue::new(fill_value.to_ne_bytes().to_vec()),
278 )
279 .array_to_array_codecs(vec![
280 #[cfg(feature = "transpose")]
281 Arc::new(zarrs::array::codec::TransposeCodec::new(
282 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
283 )),
284 ])
285 .bytes_to_bytes_codecs(vec![
286 #[cfg(feature = "gzip")]
287 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
288 #[cfg(feature = "crc32c")]
289 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
290 ])
291 // .storage_transformers(vec![].into())
292 .build(store, array_path)
293 .unwrap();
294 println!("{}", array.metadata().to_string_pretty());
295
296 let data = [
297 CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
298 CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
299 ];
300 array.store_chunk_elements(&[0, 0], &data).unwrap();
301
302 let data = array
303 .retrieve_array_subset_elements::<CustomDataTypeFixedSizeElement>(&array.subset_all())
304 .unwrap();
305
306 assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
307 assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
308 assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
309 assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
310
311 println!("{data:#?}");
312}
205fn main() {
206 let store = std::sync::Arc::new(MemoryStore::default());
207 let array_path = "/array";
208 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
209 let array = ArrayBuilder::new(
210 vec![4096, 1], // array shape
211 DataType::Extension(Arc::new(CustomDataTypeUInt12)),
212 vec![5, 1].try_into().unwrap(), // regular chunk shape
213 FillValue::new(fill_value.to_le_bytes().to_vec()),
214 )
215 .array_to_array_codecs(vec![
216 #[cfg(feature = "transpose")]
217 Arc::new(zarrs::array::codec::TransposeCodec::new(
218 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
219 )),
220 ])
221 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
222 .bytes_to_bytes_codecs(vec![
223 #[cfg(feature = "gzip")]
224 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
225 #[cfg(feature = "crc32c")]
226 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
227 ])
228 // .storage_transformers(vec![].into())
229 .build(store, array_path)
230 .unwrap();
231 println!("{}", array.metadata().to_string_pretty());
232
233 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
234 .into_iter()
235 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
236 .collect();
237
238 array
239 .store_array_subset_elements(&array.subset_all(), &data)
240 .unwrap();
241
242 let data = array
243 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(&array.subset_all())
244 .unwrap();
245
246 for i in 0usize..4096 {
247 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
248 assert_eq!(data[i], element);
249 let element_pd = array
250 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(
251 &ArraySubset::new_with_ranges(&[(i as u64)..i as u64 + 1, 0..1]),
252 )
253 .unwrap()[0];
254 assert_eq!(element_pd, element);
255 }
256}
217fn main() {
218 let store = std::sync::Arc::new(MemoryStore::default());
219 let array_path = "/array";
220 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
221 let array = ArrayBuilder::new(
222 vec![6, 1], // array shape
223 DataType::Extension(Arc::new(CustomDataTypeFloat8e3m4)),
224 vec![5, 1].try_into().unwrap(), // regular chunk shape
225 FillValue::new(fill_value.to_ne_bytes().to_vec()),
226 )
227 .array_to_array_codecs(vec![
228 #[cfg(feature = "transpose")]
229 Arc::new(zarrs::array::codec::TransposeCodec::new(
230 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
231 )),
232 ])
233 .bytes_to_bytes_codecs(vec![
234 #[cfg(feature = "gzip")]
235 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
236 #[cfg(feature = "crc32c")]
237 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
238 ])
239 // .storage_transformers(vec![].into())
240 .build(store, array_path)
241 .unwrap();
242 println!("{}", array.metadata().to_string_pretty());
243
244 let data = [
245 CustomDataTypeFloat8e3m4Element::from(2.34),
246 CustomDataTypeFloat8e3m4Element::from(3.45),
247 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
248 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
249 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
250 ];
251 array.store_chunk_elements(&[0, 0], &data).unwrap();
252
253 let data = array
254 .retrieve_array_subset_elements::<CustomDataTypeFloat8e3m4Element>(&array.subset_all())
255 .unwrap();
256
257 for f in &data {
258 println!(
259 "float8_e3m4: {:08b} f32: {}",
260 f.to_ne_bytes()[0],
261 f.as_f32()
262 );
263 }
264
265 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
266 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
267 assert_eq!(
268 data[2],
269 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
270 );
271 assert_eq!(
272 data[3],
273 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
274 );
275 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
276 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
277}
15async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
16 const HTTP_URL: &str =
17 "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
18 const ARRAY_PATH: &str = "/group/array";
19
20 // Create a HTTP store
21 let mut store: AsyncReadableStorage = match backend {
22 Backend::OpenDAL => {
23 let builder = opendal::services::Http::default().endpoint(HTTP_URL);
24 let operator = opendal::Operator::new(builder)?.finish();
25 Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
26 }
27 Backend::ObjectStore => {
28 let options = object_store::ClientOptions::new().with_allow_http(true);
29 let store = object_store::http::HttpBuilder::new()
30 .with_url(HTTP_URL)
31 .with_client_options(options)
32 .build()?;
33 Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
34 }
35 };
36 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
37 if arg1 == "--usage-log" {
38 let log_writer = Arc::new(std::sync::Mutex::new(
39 // std::io::BufWriter::new(
40 std::io::stdout(),
41 // )
42 ));
43 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
44 chrono::Utc::now().format("[%T%.3f] ").to_string()
45 }));
46 }
47 }
48
49 // Init the existing array, reading metadata
50 let array = Array::async_open(store, ARRAY_PATH).await?;
51
52 println!(
53 "The array metadata is:\n{}\n",
54 array.metadata().to_string_pretty()
55 );
56
57 // Read the whole array
58 let data_all = array
59 .async_retrieve_array_subset_ndarray::<f32>(&array.subset_all())
60 .await?;
61 println!("The whole array is:\n{data_all}\n");
62
63 // Read a chunk back from the store
64 let chunk_indices = vec![1, 0];
65 let data_chunk = array
66 .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
67 .await?;
68 println!("Chunk [1,0] is:\n{data_chunk}\n");
69
70 // Read the central 4x2 subset of the array
71 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
72 let data_4x2 = array
73 .async_retrieve_array_subset_ndarray::<f32>(&subset_4x2)
74 .await?;
75 println!("The middle 4x2 subset is:\n{data_4x2}\n");
76
77 Ok(())
78}
203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 DataType::Extension(Arc::new(CustomDataTypeUInt4)),
210 vec![5, 1].try_into().unwrap(), // regular chunk shape
211 FillValue::new(fill_value.to_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
220 .bytes_to_bytes_codecs(vec![
221 #[cfg(feature = "gzip")]
222 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
223 #[cfg(feature = "crc32c")]
224 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
225 ])
226 // .storage_transformers(vec![].into())
227 .build(store, array_path)
228 .unwrap();
229 println!("{}", array.metadata().to_string_pretty());
230
231 let data = [
232 CustomDataTypeUInt4Element::try_from(1).unwrap(),
233 CustomDataTypeUInt4Element::try_from(2).unwrap(),
234 CustomDataTypeUInt4Element::try_from(3).unwrap(),
235 CustomDataTypeUInt4Element::try_from(4).unwrap(),
236 CustomDataTypeUInt4Element::try_from(5).unwrap(),
237 ];
238 array.store_chunk_elements(&[0, 0], &data).unwrap();
239
240 let data = array
241 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(&array.subset_all())
242 .unwrap();
243
244 for f in &data {
245 println!("uint4: {:08b} u8: {}", f.as_u8(), f.as_u8());
246 }
247
248 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
249 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
250 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
251 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
252 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
253 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
254
255 let data = array
256 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(
257 &ArraySubset::new_with_ranges(&[1..3, 0..1]),
258 )
259 .unwrap();
260 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
261 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
262}
pub fn metadata_opt(&self, options: &ArrayMetadataOptions) -> ArrayMetadata
Return a new ArrayMetadata with ArrayMetadataOptions applied.
This method is used internally by Array::store_metadata and Array::store_metadata_opt.
pub fn builder(&self) -> ArrayBuilder
Create an array builder matching the parameters of this array.
pub fn chunk_grid_shape(&self) -> Option<ArrayShape>
Return the shape of the chunk grid (i.e., the number of chunks).
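For a regular chunk grid, the grid shape is the elementwise ceiling division of the array shape by the chunk shape. The following is a standalone sketch of that computation (illustrative, not the zarrs implementation):

```rust
/// Compute the chunk grid shape of a regular grid: the elementwise
/// ceiling division of the array shape by the chunk shape.
/// Returns `None` for mismatched dimensionality or a zero chunk extent.
fn regular_chunk_grid_shape(array_shape: &[u64], chunk_shape: &[u64]) -> Option<Vec<u64>> {
    if array_shape.len() != chunk_shape.len() || chunk_shape.contains(&0) {
        return None;
    }
    Some(
        array_shape
            .iter()
            .zip(chunk_shape)
            .map(|(a, c)| a.div_ceil(*c))
            .collect(),
    )
}

fn main() {
    // An 8x8 array with 4x4 chunks has a 2x2 chunk grid.
    assert_eq!(regular_chunk_grid_shape(&[8, 8], &[4, 4]), Some(vec![2, 2]));
    // A 9x8 array with 4x4 chunks has a 3x2 grid (partial edge chunks).
    assert_eq!(regular_chunk_grid_shape(&[9, 8], &[4, 4]), Some(vec![3, 2]));
}
```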
pub fn chunk_key(&self, chunk_indices: &[u64]) -> StoreKey
Return the StoreKey of the chunk at chunk_indices.
pub fn chunk_origin(
    &self,
    chunk_indices: &[u64],
) -> Result<ArrayIndices, ArrayError>
Return the origin of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError
if the chunk_indices
are incompatible with the chunk grid.
pub fn chunk_shape(&self, chunk_indices: &[u64]) -> Result<ChunkShape, ArrayError>
Return the shape of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
pub fn subset_all(&self) -> ArraySubset
Return an array subset that spans the entire array.
Examples found in repository
153fn main() {
154 let store = std::sync::Arc::new(MemoryStore::default());
155 let array_path = "/array";
156 let array = ArrayBuilder::new(
157 vec![4, 1], // array shape
158 DataType::Extension(Arc::new(CustomDataTypeVariableSize)),
159 vec![3, 1].try_into().unwrap(), // regular chunk shape
160 FillValue::from(vec![]),
161 )
162 .array_to_array_codecs(vec![
163 #[cfg(feature = "transpose")]
164 Arc::new(zarrs::array::codec::TransposeCodec::new(
165 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
166 )),
167 ])
168 .bytes_to_bytes_codecs(vec![
169 #[cfg(feature = "gzip")]
170 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
171 #[cfg(feature = "crc32c")]
172 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
173 ])
174 // .storage_transformers(vec![].into())
175 .build(store, array_path)
176 .unwrap();
177 println!("{}", array.metadata().to_string_pretty());
178
179 let data = [
180 CustomDataTypeVariableSizeElement::from(Some(1.0)),
181 CustomDataTypeVariableSizeElement::from(None),
182 CustomDataTypeVariableSizeElement::from(Some(3.0)),
183 ];
184 array.store_chunk_elements(&[0, 0], &data).unwrap();
185
186 let data = array
187 .retrieve_array_subset_elements::<CustomDataTypeVariableSizeElement>(&array.subset_all())
188 .unwrap();
189
190 assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
191 assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
192 assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
193 assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));
194
195 println!("{data:#?}");
196}
More examples
269fn main() {
270 let store = std::sync::Arc::new(MemoryStore::default());
271 let array_path = "/array";
272 let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
273 let array = ArrayBuilder::new(
274 vec![4, 1], // array shape
275 DataType::Extension(Arc::new(CustomDataTypeFixedSize)),
276 vec![2, 1].try_into().unwrap(), // regular chunk shape
277 FillValue::new(fill_value.to_ne_bytes().to_vec()),
278 )
279 .array_to_array_codecs(vec![
280 #[cfg(feature = "transpose")]
281 Arc::new(zarrs::array::codec::TransposeCodec::new(
282 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
283 )),
284 ])
285 .bytes_to_bytes_codecs(vec![
286 #[cfg(feature = "gzip")]
287 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
288 #[cfg(feature = "crc32c")]
289 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
290 ])
291 // .storage_transformers(vec![].into())
292 .build(store, array_path)
293 .unwrap();
294 println!("{}", array.metadata().to_string_pretty());
295
296 let data = [
297 CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
298 CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
299 ];
300 array.store_chunk_elements(&[0, 0], &data).unwrap();
301
302 let data = array
303 .retrieve_array_subset_elements::<CustomDataTypeFixedSizeElement>(&array.subset_all())
304 .unwrap();
305
306 assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
307 assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
308 assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
309 assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
310
311 println!("{data:#?}");
312}
205fn main() {
206 let store = std::sync::Arc::new(MemoryStore::default());
207 let array_path = "/array";
208 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
209 let array = ArrayBuilder::new(
210 vec![4096, 1], // array shape
211 DataType::Extension(Arc::new(CustomDataTypeUInt12)),
212 vec![5, 1].try_into().unwrap(), // regular chunk shape
213 FillValue::new(fill_value.to_le_bytes().to_vec()),
214 )
215 .array_to_array_codecs(vec![
216 #[cfg(feature = "transpose")]
217 Arc::new(zarrs::array::codec::TransposeCodec::new(
218 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
219 )),
220 ])
221 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
222 .bytes_to_bytes_codecs(vec![
223 #[cfg(feature = "gzip")]
224 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
225 #[cfg(feature = "crc32c")]
226 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
227 ])
228 // .storage_transformers(vec![].into())
229 .build(store, array_path)
230 .unwrap();
231 println!("{}", array.metadata().to_string_pretty());
232
233 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
234 .into_iter()
235 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
236 .collect();
237
238 array
239 .store_array_subset_elements(&array.subset_all(), &data)
240 .unwrap();
241
242 let data = array
243 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(&array.subset_all())
244 .unwrap();
245
246 for i in 0usize..4096 {
247 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
248 assert_eq!(data[i], element);
249 let element_pd = array
250 .retrieve_array_subset_elements::<CustomDataTypeUInt12Element>(
251 &ArraySubset::new_with_ranges(&[(i as u64)..i as u64 + 1, 0..1]),
252 )
253 .unwrap()[0];
254 assert_eq!(element_pd, element);
255 }
256}
217fn main() {
218 let store = std::sync::Arc::new(MemoryStore::default());
219 let array_path = "/array";
220 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
221 let array = ArrayBuilder::new(
222 vec![6, 1], // array shape
223 DataType::Extension(Arc::new(CustomDataTypeFloat8e3m4)),
224 vec![5, 1].try_into().unwrap(), // regular chunk shape
225 FillValue::new(fill_value.to_ne_bytes().to_vec()),
226 )
227 .array_to_array_codecs(vec![
228 #[cfg(feature = "transpose")]
229 Arc::new(zarrs::array::codec::TransposeCodec::new(
230 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
231 )),
232 ])
233 .bytes_to_bytes_codecs(vec![
234 #[cfg(feature = "gzip")]
235 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
236 #[cfg(feature = "crc32c")]
237 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
238 ])
239 // .storage_transformers(vec![].into())
240 .build(store, array_path)
241 .unwrap();
242 println!("{}", array.metadata().to_string_pretty());
243
244 let data = [
245 CustomDataTypeFloat8e3m4Element::from(2.34),
246 CustomDataTypeFloat8e3m4Element::from(3.45),
247 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
248 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
249 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
250 ];
251 array.store_chunk_elements(&[0, 0], &data).unwrap();
252
253 let data = array
254 .retrieve_array_subset_elements::<CustomDataTypeFloat8e3m4Element>(&array.subset_all())
255 .unwrap();
256
257 for f in &data {
258 println!(
259 "float8_e3m4: {:08b} f32: {}",
260 f.to_ne_bytes()[0],
261 f.as_f32()
262 );
263 }
264
265 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
266 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
267 assert_eq!(
268 data[2],
269 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
270 );
271 assert_eq!(
272 data[3],
273 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
274 );
275 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
276 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
277}
15async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
16 const HTTP_URL: &str =
17 "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
18 const ARRAY_PATH: &str = "/group/array";
19
20 // Create a HTTP store
21 let mut store: AsyncReadableStorage = match backend {
22 Backend::OpenDAL => {
23 let builder = opendal::services::Http::default().endpoint(HTTP_URL);
24 let operator = opendal::Operator::new(builder)?.finish();
25 Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
26 }
27 Backend::ObjectStore => {
28 let options = object_store::ClientOptions::new().with_allow_http(true);
29 let store = object_store::http::HttpBuilder::new()
30 .with_url(HTTP_URL)
31 .with_client_options(options)
32 .build()?;
33 Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
34 }
35 };
36 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
37 if arg1 == "--usage-log" {
38 let log_writer = Arc::new(std::sync::Mutex::new(
39 // std::io::BufWriter::new(
40 std::io::stdout(),
41 // )
42 ));
43 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
44 chrono::Utc::now().format("[%T%.3f] ").to_string()
45 }));
46 }
47 }
48
49 // Init the existing array, reading metadata
50 let array = Array::async_open(store, ARRAY_PATH).await?;
51
52 println!(
53 "The array metadata is:\n{}\n",
54 array.metadata().to_string_pretty()
55 );
56
57 // Read the whole array
58 let data_all = array
59 .async_retrieve_array_subset_ndarray::<f32>(&array.subset_all())
60 .await?;
61 println!("The whole array is:\n{data_all}\n");
62
63 // Read a chunk back from the store
64 let chunk_indices = vec![1, 0];
65 let data_chunk = array
66 .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
67 .await?;
68 println!("Chunk [1,0] is:\n{data_chunk}\n");
69
70 // Read the central 4x2 subset of the array
71 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
72 let data_4x2 = array
73 .async_retrieve_array_subset_ndarray::<f32>(&subset_4x2)
74 .await?;
75 println!("The middle 4x2 subset is:\n{data_4x2}\n");
76
77 Ok(())
78}
203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 DataType::Extension(Arc::new(CustomDataTypeUInt4)),
210 vec![5, 1].try_into().unwrap(), // regular chunk shape
211 FillValue::new(fill_value.to_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
220 .bytes_to_bytes_codecs(vec![
221 #[cfg(feature = "gzip")]
222 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
223 #[cfg(feature = "crc32c")]
224 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
225 ])
226 // .storage_transformers(vec![].into())
227 .build(store, array_path)
228 .unwrap();
229 println!("{}", array.metadata().to_string_pretty());
230
231 let data = [
232 CustomDataTypeUInt4Element::try_from(1).unwrap(),
233 CustomDataTypeUInt4Element::try_from(2).unwrap(),
234 CustomDataTypeUInt4Element::try_from(3).unwrap(),
235 CustomDataTypeUInt4Element::try_from(4).unwrap(),
236 CustomDataTypeUInt4Element::try_from(5).unwrap(),
237 ];
238 array.store_chunk_elements(&[0, 0], &data).unwrap();
239
240 let data = array
241 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(&array.subset_all())
242 .unwrap();
243
244 for f in &data {
245 println!("uint4: {:08b} u8: {}", f.as_u8(), f.as_u8());
246 }
247
248 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
249 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
250 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
251 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
252 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
253 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
254
255 let data = array
256 .retrieve_array_subset_elements::<CustomDataTypeUInt4Element>(
257 &ArraySubset::new_with_ranges(&[1..3, 0..1]),
258 )
259 .unwrap();
260 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
261 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
262}
pub fn chunk_shape_usize(&self, chunk_indices: &[u64]) -> Result<Vec<usize>, ArrayError>
Return the shape of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
§Panics
Panics if any component of the chunk shape exceeds usize::MAX.
pub fn chunk_subset(&self, chunk_indices: &[u64]) -> Result<ArraySubset, ArrayError>
Return the array subset of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
pub fn chunk_subset_bounded(&self, chunk_indices: &[u64]) -> Result<ArraySubset, ArrayError>
Return the array subset of the chunk at chunk_indices bounded by the array shape.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
pub fn chunks_subset(&self, chunks: &ArraySubset) -> Result<ArraySubset, ArrayError>
Return the array subset of chunks.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.
pub fn chunks_subset_bounded(&self, chunks: &ArraySubset) -> Result<ArraySubset, ArrayError>
Return the array subset of chunks bounded by the array shape.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.
pub fn chunk_array_representation(&self, chunk_indices: &[u64]) -> Result<ChunkRepresentation, ArrayError>
Get the chunk array representation at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
pub fn chunks_in_array_subset(&self, array_subset: &ArraySubset) -> Result<Option<ArraySubset>, IncompatibleDimensionalityError>
Return an array subset indicating the chunks intersecting array_subset.
Returns None if the intersecting chunks cannot be determined.
§Errors
Returns IncompatibleDimensionalityError if the array subset has an incorrect dimensionality.
pub fn to_v3(self) -> Result<Self, ArrayMetadataV2ToV3Error>
Convert the array to Zarr V3.
§Errors
Returns an ArrayMetadataV2ToV3Error if the metadata is not compatible with Zarr V3 metadata.
Trait Implementations§
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> ArrayChunkCacheExt<TStorage> for Array<TStorage>
fn retrieve_chunk_opt_cached<CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunk_indices: &[u64], options: &CodecOptions, ) -> Result<Arc<ArrayBytes<'static>>, ArrayError>
Cached variant of retrieve_chunk_opt.
fn retrieve_chunk_elements_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunk_indices: &[u64], options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Cached variant of retrieve_chunk_elements_opt.
fn retrieve_chunk_ndarray_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunk_indices: &[u64], options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only. Cached variant of retrieve_chunk_ndarray_opt.
fn retrieve_chunks_opt_cached<CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunks: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayBytes<'_>, ArrayError>
Cached variant of retrieve_chunks_opt.
fn retrieve_chunks_elements_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunks: &ArraySubset, options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Cached variant of retrieve_chunks_elements_opt.
fn retrieve_chunks_ndarray_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunks: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only. Cached variant of retrieve_chunks_ndarray_opt.
fn retrieve_chunk_subset_opt_cached<CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayBytes<'_>, ArrayError>
Cached variant of retrieve_chunk_subset_opt.
fn retrieve_chunk_subset_elements_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Cached variant of retrieve_chunk_subset_elements_opt.
fn retrieve_chunk_subset_ndarray_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only. Cached variant of retrieve_chunk_subset_ndarray_opt.
fn retrieve_array_subset_opt_cached<CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayBytes<'_>, ArrayError>
Cached variant of retrieve_array_subset_opt.
fn retrieve_array_subset_elements_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Cached variant of retrieve_array_subset_elements_opt.
fn retrieve_array_subset_ndarray_opt_cached<T: ElementOwned, CT: ChunkCacheType>( &self, cache: &impl ChunkCache<CT>, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only. Cached variant of retrieve_array_subset_ndarray_opt.
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> ArrayDlPackExt<TStorage> for Array<TStorage>
Available on crate feature dlpack only.
fn retrieve_array_subset_dlpack( &self, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ManagerCtx<RawBytesDlPack>, ArrayError>
fn retrieve_chunk_if_exists_dlpack( &self, chunk_indices: &[u64], options: &CodecOptions, ) -> Result<Option<ManagerCtx<RawBytesDlPack>>, ArrayError>
fn retrieve_chunk_dlpack( &self, chunk_indices: &[u64], options: &CodecOptions, ) -> Result<ManagerCtx<RawBytesDlPack>, ArrayError>
fn retrieve_chunks_dlpack( &self, chunks: &ArraySubset, options: &CodecOptions, ) -> Result<ManagerCtx<RawBytesDlPack>, ArrayError>
impl<TStorage: ?Sized> ArrayShardedExt for Array<TStorage>
Available on crate feature sharding only.
fn is_sharded(&self) -> bool
Returns true if the array-to-bytes codec of the array is sharding_indexed.
fn is_exclusively_sharded(&self) -> bool
Returns true if the array-to-bytes codec of the array is sharding_indexed and the array has no array-to-array or bytes-to-bytes codecs.
fn inner_chunk_shape(&self) -> Option<ChunkShape>
The inner chunk shape as defined in the sharding_indexed codec metadata. Read more
fn effective_inner_chunk_shape(&self) -> Option<ChunkShape>
fn inner_chunk_grid(&self) -> ChunkGrid
fn inner_chunk_grid_shape(&self) -> Option<ArrayShape>
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> ArrayShardedReadableExt<TStorage> for Array<TStorage>
Available on crate feature sharding only.
fn inner_chunk_byte_range( &self, cache: &ArrayShardedReadableExtCache, inner_chunk_indices: &[u64], ) -> Result<Option<ByteRange>, ArrayError>
fn retrieve_encoded_inner_chunk( &self, cache: &ArrayShardedReadableExtCache, inner_chunk_indices: &[u64], ) -> Result<Option<Vec<u8>>, ArrayError>
fn retrieve_inner_chunk_opt( &self, cache: &ArrayShardedReadableExtCache, inner_chunk_indices: &[u64], options: &CodecOptions, ) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the inner chunk at chunk_indices into its bytes. Read more
fn retrieve_inner_chunk_elements_opt<T: ElementOwned>( &self, cache: &ArrayShardedReadableExtCache, inner_chunk_indices: &[u64], options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Read and decode the inner chunk at chunk_indices into a vector of its elements. Read more
fn retrieve_inner_chunk_ndarray_opt<T: ElementOwned>( &self, cache: &ArrayShardedReadableExtCache, inner_chunk_indices: &[u64], options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
fn retrieve_inner_chunks_opt( &self, cache: &ArrayShardedReadableExtCache, inner_chunks: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the inner chunks at chunks into their bytes. Read more
fn retrieve_inner_chunks_elements_opt<T: ElementOwned>( &self, cache: &ArrayShardedReadableExtCache, inner_chunks: &ArraySubset, options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Read and decode the inner chunks at inner_chunks into a vector of their elements. Read more
fn retrieve_inner_chunks_ndarray_opt<T: ElementOwned>( &self, cache: &ArrayShardedReadableExtCache, inner_chunks: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
fn retrieve_array_subset_sharded_opt( &self, cache: &ArrayShardedReadableExtCache, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayBytes<'_>, ArrayError>
Read and decode the array_subset of array into its bytes. Read more
fn retrieve_array_subset_elements_sharded_opt<T: ElementOwned>( &self, cache: &ArrayShardedReadableExtCache, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<Vec<T>, ArrayError>
Read and decode the array_subset of array into a vector of its elements. Read more
fn retrieve_array_subset_ndarray_sharded_opt<T: ElementOwned>( &self, cache: &ArrayShardedReadableExtCache, array_subset: &ArraySubset, options: &CodecOptions, ) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> AsyncArrayDlPackExt<TStorage> for Array<TStorage>
Available on crate feature dlpack only.
fn retrieve_array_subset_dlpack<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
array_subset: &'life1 ArraySubset,
options: &'life2 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ManagerCtx<RawBytesDlPack>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
fn retrieve_chunk_if_exists_dlpack<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
chunk_indices: &'life1 [u64],
options: &'life2 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Option<ManagerCtx<RawBytesDlPack>>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
fn retrieve_chunk_dlpack<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
chunk_indices: &'life1 [u64],
options: &'life2 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ManagerCtx<RawBytesDlPack>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
fn retrieve_chunks_dlpack<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
chunks: &'life1 ArraySubset,
options: &'life2 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ManagerCtx<RawBytesDlPack>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> AsyncArrayShardedReadableExt<TStorage> for Array<TStorage>
Available on crate feature async only.
fn async_retrieve_encoded_inner_chunk<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunk_indices: &'life2 [u64],
) -> Pin<Box<dyn Future<Output = Result<Option<Vec<u8>>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
fn async_retrieve_inner_chunk_opt<'life0, 'life1, 'life2, 'life3, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayBytes<'_>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Read and decode the inner chunk at chunk_indices into its bytes.
fn async_retrieve_inner_chunk_elements_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + Send + Sync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Read and decode the inner chunk at chunk_indices into a vector of its elements.
fn async_retrieve_inner_chunk_ndarray_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + Send + Sync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Available on crate feature ndarray only.
fn async_retrieve_inner_chunks_opt<'life0, 'life1, 'life2, 'life3, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunks: &'life2 ArraySubset,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayBytes<'_>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Read and decode the inner chunks at chunks into their bytes.
fn async_retrieve_inner_chunks_elements_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunks: &'life2 ArraySubset,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + Send + Sync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Read and decode the inner chunks at inner_chunks into a vector of their elements.
fn async_retrieve_inner_chunks_ndarray_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
inner_chunks: &'life2 ArraySubset,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + Send + Sync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Available on crate feature ndarray only.
fn async_retrieve_array_subset_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 ArraySubset,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayBytes<'_>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Read and decode the array_subset of array into its bytes.
fn async_retrieve_array_subset_elements_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 ArraySubset,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + Send + Sync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Read and decode the array_subset of array into a vector of its elements.
fn async_retrieve_array_subset_ndarray_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 ArraySubset,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + Send + Sync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Available on crate feature ndarray only.
Auto Trait Implementations§
impl<TStorage> Freeze for Array<TStorage>where
TStorage: ?Sized,
impl<TStorage> !RefUnwindSafe for Array<TStorage>
impl<TStorage> Send for Array<TStorage>
impl<TStorage> Sync for Array<TStorage>
impl<TStorage> Unpin for Array<TStorage>where
TStorage: ?Sized,
impl<TStorage> !UnwindSafe for Array<TStorage>
Blanket Implementations§
impl<T> BorrowMut<T> for T where
T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
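The real IntoEither trait and Either type come from the either crate; the minimal re-implementation below is a sketch illustrating the documented semantics only (the names Either and IntoEither shadow the crate's, and the method bodies are assumptions based on the descriptions above):

```rust
// Minimal sketch of the `either` crate's Either / IntoEither semantics,
// self-contained for illustration. Not the crate's actual source.
#[derive(Debug, PartialEq)]
enum Either<L, R> {
    Left(L),
    Right(R),
}

trait IntoEither: Sized {
    // Wraps self in Left if `into_left` is true, otherwise in Right.
    fn into_either(self, into_left: bool) -> Either<Self, Self> {
        if into_left {
            Either::Left(self)
        } else {
            Either::Right(self)
        }
    }

    // Same, but the choice is made by a predicate on a borrow of self.
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
    where
        F: FnOnce(&Self) -> bool,
    {
        let left = into_left(&self);
        self.into_either(left)
    }
}

// Blanket impl: every sized type gains these methods, which is why they
// appear on Array<TStorage> here.
impl<T> IntoEither for T {}

fn main() {
    assert_eq!(42_i32.into_either(true), Either::Left(42));
    assert_eq!(42_i32.into_either_with(|x| *x < 0), Either::Right(42));
    println!("ok");
}
```

The blanket impl is why these methods show up in the documentation of nearly every type in a crate that depends on either; they are rarely relevant to Array itself.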