pub struct Array<TStorage: ?Sized> { /* private fields */ }
A Zarr array.
§Initialisation
The easiest way to create a new Zarr V3 array is with an ArrayBuilder.
Alternatively, a new Zarr V2 or Zarr V3 array can be created with Array::new_with_metadata.
An existing Zarr V2 or Zarr V3 array can be initialised with Array::open or Array::open_opt with metadata read from the store.
Array initialisation will error if ArrayMetadata contains:
- unsupported extension points, including extensions which are supported by zarrs but have not been enabled with the appropriate feature gates,
- incompatible codecs (e.g. codecs in the wrong order, codecs incompatible with the data type, etc.),
- a chunk grid incompatible with the array shape,
- a fill value incompatible with the data type, or
- metadata that is invalid in some other way.
§Array Metadata
Array metadata must be explicitly stored with store_metadata or store_metadata_opt if an array is newly created or its metadata has been mutated.
The underlying metadata of an Array can be accessed with metadata or metadata_opt.
The latter accepts ArrayMetadataOptions that can be used to convert array metadata from Zarr V2 to V3, for example.
metadata_opt is used internally by store_metadata / store_metadata_opt.
Use serde_json::to_string or serde_json::to_string_pretty on ArrayMetadata to convert it to a JSON string.
§Immutable Array Metadata / Properties
- metadata: the underlying ArrayMetadata structure containing all array metadata
- data_type
- fill_value
- chunk_grid
- chunk_key_encoding
- codecs
- storage_transformers
- path
§Mutable Array Metadata
Do not forget to store metadata after mutation.
- shape / set_shape / set_shape_and_chunk_grid
- attributes / attributes_mut
- dimension_names / set_dimension_names
§zarrs Metadata
By default, the zarrs version and a link to its source code are written to the _zarrs attribute in array metadata when calling store_metadata.
Override this behaviour globally with Config::set_include_zarrs_metadata or call store_metadata_opt with an explicit ArrayMetadataOptions.
§Array Data
Array operations are divided into several categories based on the traits implemented for the backing storage. The core array methods are:
- [Async]ReadableStorageTraits: read array data and metadata
- [Async]WritableStorageTraits: store/erase array data and metadata
- [Async]ReadableWritableStorageTraits: store operations requiring reading and writing
Many retrieve and store methods have a standard and an _opt variant.
The latter has an additional CodecOptions parameter for fine-grained concurrency control and more.
Array retrieve_* methods are generic over the return type.
For example, the following variants are available for retrieving chunks or array subsets:
- Raw bytes variants: ArrayBytes
- Typed element variants: e.g. Vec<T> where T: Element
- ndarray variants: ndarray::ArrayD<T> where T: Element (requires the ndarray feature)
- dlpack variants: RawBytesDlPack where T: Element (requires the dlpack feature)
Similarly, array store_* methods are generic over the input type.
async_ prefix variants can be used with async stores (requires async feature).
Additional Array methods are offered by extension traits:
ChunkCache implementations offer a similar API to the Array ReadableStorageTraits methods, except with chunk caching support.
§Chunks and Array Subsets
Several convenience methods are available for querying the underlying chunk grid:
- chunk_origin
- chunk_shape
- chunk_subset / chunk_subset_bounded
- chunks_subset / chunks_subset_bounded
- chunks_in_array_subset
An ArraySubset spanning the entire array can be retrieved with subset_all.
§Example: Update an Array Chunk-by-Chunk (in Parallel)
In the below example, an array is updated chunk-by-chunk in parallel.
This makes use of chunk_subset_bounded to retrieve and store only the subset of chunks that are within the array bounds.
This can occur when a regular chunk grid does not evenly divide the array shape, for example.
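For a regular chunk grid, the clipping that chunk_subset_bounded performs can be sketched with plain arithmetic. This is illustrative only (the function name and signature below are hypothetical); the real method works with any chunk grid and returns an ArraySubset.

```rust
/// Illustrative sketch: the bounded subset of the chunk at `chunk_indices`
/// for a regular chunk grid, clipped to `array_shape`.
fn chunk_subset_bounded(
    array_shape: &[u64],
    chunk_shape: &[u64],
    chunk_indices: &[u64],
) -> Vec<std::ops::Range<u64>> {
    array_shape
        .iter()
        .zip(chunk_shape)
        .zip(chunk_indices)
        .map(|((&dim, &chunk), &index)| {
            let start = index * chunk;
            // Clip the chunk end to the array bound in this dimension
            start..(start + chunk).min(dim)
        })
        .collect()
}
```

For a 10x10 array with 4x4 chunks, chunk [2, 2] nominally spans 8..12 in each dimension but is clipped to 8..10.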
// Get an iterator over the chunk indices
// The array shape must have been set (i.e. non-zero), otherwise the
// iterator will be empty
let chunk_grid_shape = array.chunk_grid_shape();
let chunks: Indices = ArraySubset::new_with_shape(chunk_grid_shape.to_vec()).indices();
// Iterate over chunk indices (in parallel)
chunks.into_par_iter().try_for_each(|chunk_indices: ArrayIndicesTinyVec| {
// Retrieve the array subset of the chunk within the array bounds
// This partially decodes chunks that extend beyond the array end
let subset: ArraySubset = array.chunk_subset_bounded(&chunk_indices)?;
let chunk_bytes: ArrayBytes = array.retrieve_array_subset(&subset)?;
// ... Update the chunk bytes
// Write the updated chunk
// Elements beyond the array bounds in straddling chunks are left
// unmodified or set to the fill value if the chunk did not exist.
array.store_array_subset(&subset, chunk_bytes)
})?;
§Optimising Writes
For optimum write performance, an array should be written using store_chunk or store_chunks where possible.
store_chunk_subset and store_array_subset may incur decoding overhead, and they require careful usage if executed in parallel (see Parallel Writing below).
However, these methods will use a fast path and avoid decoding if the subset covers entire chunks.
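The fast-path condition amounts to a chunk-alignment check. The sketch below (a hypothetical helper, not part of the zarrs API) tests whether a subset starts and ends on chunk boundaries, ignoring chunks clipped at the array end:

```rust
/// Illustrative sketch: a subset covers entire chunks (and so can avoid
/// read-modify-write decoding) when it is aligned to chunk boundaries in
/// every dimension. Chunks clipped at the array end are not considered here.
fn covers_entire_chunks(subset: &[std::ops::Range<u64>], chunk_shape: &[u64]) -> bool {
    subset
        .iter()
        .zip(chunk_shape)
        .all(|(range, &chunk)| range.start % chunk == 0 && range.end % chunk == 0)
}
```

For example, with 4x4 chunks, the subset [0..8, 4..8] is chunk-aligned, whereas [0..8, 3..8] is not and would require decoding the partially covered chunks.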
§Direct IO (Linux)
If using Linux, enabling direct IO with the FilesystemStore may improve write performance.
Currently, the most performant path for uncompressed writing is to reuse page aligned buffers via store_encoded_chunk.
See zarrs GitHub issue #58 for a discussion on this method.
§Parallel Writing
zarrs does not currently offer a “synchronisation” API for locking chunks or array subsets.
It is the responsibility of zarrs consumers to ensure that chunks are not written to concurrently.
If a chunk is written more than once, its element values depend on whichever operation wrote to the chunk last.
The store_chunk_subset and store_array_subset methods and their variants internally retrieve, update, and store chunks.
So do partial_encoders, which may be used internally by the above methods.
It is the responsibility of zarrs consumers to ensure that:
- store_array_subset is not called concurrently on array subsets sharing chunks,
- store_chunk_subset is not called concurrently on the same chunk,
- partial_encoders are not created or used concurrently for the same chunk, and
- no combination of the above is invoked concurrently on the same chunk.
Partial writes to a chunk may be lost if these rules are not respected.
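One way to uphold these rules is to check, before dispatching parallel writes, whether two array subsets touch a common chunk. The sketch below (a hypothetical helper, not part of the zarrs API) maps each subset to the inclusive range of chunk indices it intersects and tests for overlap per dimension, assuming a regular chunk grid:

```rust
/// Illustrative sketch: two subsets must not be written concurrently if they
/// intersect any common chunk. Assumes a regular chunk grid.
fn share_a_chunk(
    a: &[std::ops::Range<u64>],
    b: &[std::ops::Range<u64>],
    chunk_shape: &[u64],
) -> bool {
    a.iter().zip(b).zip(chunk_shape).all(|((ra, rb), &chunk)| {
        if ra.is_empty() || rb.is_empty() {
            return false; // an empty subset touches no chunks
        }
        // Inclusive chunk index ranges touched by each subset
        let (a0, a1) = (ra.start / chunk, (ra.end - 1) / chunk);
        let (b0, b1) = (rb.start / chunk, (rb.end - 1) / chunk);
        a0 <= b1 && b0 <= a1 // interval overlap test
    })
}
```

With a chunk size of 4, the subsets [0..3] and [3..6] both touch chunk 0 and so must not be written concurrently, whereas [0..4] and [4..8] fall in distinct chunks and are safe.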
§Optimising Reads
It is fastest to load arrays using retrieve_chunk or retrieve_chunks where possible.
In contrast, the retrieve_chunk_subset and retrieve_array_subset methods may use partial decoders, which can be less efficient with some codecs/stores.
Like their write counterparts, these methods will use a fast path if subsets cover entire chunks.
Standard Array retrieve methods do not perform any caching.
For this reason, retrieving multiple subsets in a chunk with retrieve_chunk_subset is very inefficient and strongly discouraged.
For example, consider that a compressed chunk may need to be retrieved and decoded in its entirety even if only a small part of the data is needed.
In such situations, prefer to initialise a partial decoder for a chunk with partial_decoder and then retrieve multiple chunk subsets with partial_decode.
The underlying codec chain will use a cache where efficient to optimise multiple partial decoding requests (see CodecChain).
Another alternative is to use Chunk Caching.
§Chunk Caching
Refer to the chunk_cache module.
§Reading Sharded Arrays
The sharding_indexed codec (ShardingCodec) enables multiple subchunks (“inner chunks”) to be stored in a single chunk (“shard”).
With a sharded array, the chunk_grid and chunk indices in store/retrieve methods reference the chunks (“shards”) of an array.
The ArrayShardedExt trait provides additional methods to Array to query if an array is sharded and retrieve the subchunk shape.
Additionally, the subchunk grid can be queried, which is a ChunkGrid where chunk indices refer to subchunks rather than shards.
The ArrayShardedReadableExt trait adds Array methods to conveniently and efficiently access the data in a sharded array (with _elements and _ndarray variants).
For unsharded arrays, these methods gracefully fallback to referencing standard chunks.
Each method has a cache parameter (ArrayShardedReadableExtCache) that stores shard indexes so that they do not have to be repeatedly retrieved and decoded.
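The mapping between subchunk indices and shards is simple integer arithmetic. The sketch below (a hypothetical helper, not part of the zarrs API) maps a subchunk index to the shard that holds it and the subchunk's position within that shard, assuming the shard shape is an exact multiple of the subchunk shape:

```rust
/// Illustrative sketch: map a subchunk ("inner chunk") index to its shard
/// index and its position within that shard. `subchunks_per_shard` is the
/// elementwise ratio of shard shape to subchunk shape.
fn subchunk_to_shard(
    subchunk_indices: &[u64],
    subchunks_per_shard: &[u64],
) -> (Vec<u64>, Vec<u64>) {
    let shard = subchunk_indices
        .iter()
        .zip(subchunks_per_shard)
        .map(|(&i, &n)| i / n) // which shard holds this subchunk
        .collect();
    let within = subchunk_indices
        .iter()
        .zip(subchunks_per_shard)
        .map(|(&i, &n)| i % n) // position within the shard
        .collect();
    (shard, within)
}
```

For example, with 2x2 subchunks per shard, subchunk [3, 1] lives in shard [1, 0] at within-shard position [1, 1].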
§Parallelism and Concurrency
§Sync API
Codecs run in parallel using a dedicated threadpool.
Array store and retrieve methods will also run in parallel when they involve multiple chunks.
zarrs will automatically choose where to prioritise parallelism between codecs/chunks based on the codecs and number of chunks.
By default, all available CPU cores will be used (where possible/efficient).
Concurrency can be limited globally with Config::set_codec_concurrent_target or as required using _opt methods with CodecOptions manipulated with CodecOptions::set_concurrent_target.
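One plausible way to reason about splitting a concurrency target between chunk-level and codec-level parallelism is sketched below. This is illustrative only and is not zarrs's actual heuristic; the product of the two levels is kept at or below the target:

```rust
/// Illustrative sketch (not zarrs's actual heuristic): split a concurrency
/// target between chunk-level and per-chunk codec-level parallelism such
/// that their product does not exceed the target.
fn split_concurrency(target: usize, num_chunks: usize) -> (usize, usize) {
    // Prefer parallelism over chunks, capped by how many chunks exist
    let chunk_concurrency = target.min(num_chunks).max(1);
    // Spend any remaining budget inside each chunk's codec pipeline
    let codec_concurrency = (target / chunk_concurrency).max(1);
    (chunk_concurrency, codec_concurrency)
}
```

For a target of 8 with only 2 chunks, this yields 2 chunk-level workers with 4-way codec concurrency each; with 100 chunks, it yields 8 chunk-level workers decoding serially.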
§Async API
This crate is async runtime-agnostic. Async methods do not spawn tasks internally, so asynchronous storage calls are concurrent but not parallel. Codec encoding and decoding operations still execute in parallel (where supported) in an asynchronous context.
Due to the lack of parallelism, methods like async_retrieve_array_subset or async_retrieve_chunks do not parallelise over chunks and can be slow compared to the sync API.
Parallelism over chunks can be achieved by spawning tasks outside of zarrs.
A crate like async-scoped can enable spawning non-'static futures.
If executing many tasks concurrently, consider reducing the codec concurrent_target.
§Custom Extensions
zarrs can be extended with custom data types, codecs, chunk grids, chunk key encodings, and storage transformers.
The best way to learn how to create extensions is to study the source code of existing extensions in the zarrs crate.
The zarrs book also has a chapter on creating custom extensions.
zarrs uses a plugin system to create extension point implementations (e.g. data types, codecs, chunk grids, chunk key encodings, and storage transformers) from metadata.
Plugins are registered at compile time using the inventory crate.
Runtime plugins are also supported, which take precedence over compile-time plugins.
Each plugin has a name matching function that identifies whether it should handle given metadata.
Extensions support name aliases, which can be tied to specific Zarr versions.
This allows experimental codecs (e.g. zarrs.zfp) to be later promoted to registered Zarr codecs (e.g. zfp) without breaking support for older arrays.
The aliasing system allows matching against string aliases or regex patterns.
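The name-matching idea can be sketched with a minimal matcher over string aliases (a hypothetical type, not the zarrs API; zarrs additionally supports regex patterns and per-Zarr-version aliases):

```rust
/// Illustrative sketch: a plugin-style name matcher that accepts a canonical
/// extension name or any registered alias.
struct NameMatcher {
    canonical: &'static str,
    aliases: &'static [&'static str],
}

impl NameMatcher {
    /// Returns true if `name` identifies this extension.
    fn matches(&self, name: &str) -> bool {
        name == self.canonical || self.aliases.contains(&name)
    }
}
```

With this scheme, a codec promoted from an experimental name (e.g. zarrs.zfp) to a registered name (e.g. zfp) keeps the old name as an alias, so older arrays still resolve to the same plugin.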
The key traits for each extension type are:
- Data types (zarrs_data_type): DataTypeTraits, DataTypeTraitsV2, DataTypeTraitsV3
- Codecs (zarrs_codec): CodecTraits, CodecTraitsV2, CodecTraitsV3
  - Array-to-array codecs: ArrayCodecTraits + ArrayToArrayCodecTraits
  - Array-to-bytes codecs: ArrayCodecTraits + ArrayToBytesCodecTraits
  - Bytes-to-bytes codecs: BytesToBytesCodecTraits
- Chunk grids (zarrs_chunk_grid): ChunkGridTraits
- Chunk key encodings (zarrs_chunk_key_encoding): ChunkKeyEncodingTraits
- Storage transformers (crate::array::storage_transformer): StorageTransformerTraits
Extensions are registered via the following:
- Data types: DataTypePluginV2, DataTypePluginV3, DataTypeRuntimePluginV2, DataTypeRuntimePluginV3
- Codecs: CodecPluginV2, CodecPluginV3, CodecRuntimePluginV2, CodecRuntimePluginV3
- Chunk grids: ChunkGridPlugin, ChunkGridRuntimePlugin
- Chunk key encodings: ChunkKeyEncodingPlugin, ChunkKeyEncodingRuntimePlugin
- Storage transformers: StorageTransformerPlugin, StorageTransformerRuntimePlugin
Implementations§
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> Array<TStorage>
pub fn open(
storage: Arc<TStorage>,
path: &str,
) -> Result<Self, ArrayCreateError>
Open an existing array in storage at path with default MetadataRetrieveVersion.
The metadata is read from the store.
§Errors
Returns ArrayCreateError if there is a storage error or any metadata is invalid.
Examples found in repository?
26fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
27 const HTTP_URL: &str =
28 "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
29 const ARRAY_PATH: &str = "/group/array";
30
31 // Create a HTTP store
32 // let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
33 let block_on = TokioBlockOn(tokio::runtime::Runtime::new()?);
34 let mut store: ReadableStorage = match backend {
35 // Backend::OpenDAL => {
36 // let builder = opendal::services::Http::default().endpoint(HTTP_URL);
37 // let operator = opendal::Operator::new(builder)?.finish();
38 // let store = Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator));
39 // Arc::new(AsyncToSyncStorageAdapter::new(store, block_on))
40 // }
41 Backend::ObjectStore => {
42 let options = object_store::ClientOptions::new().with_allow_http(true);
43 let store = object_store::http::HttpBuilder::new()
44 .with_url(HTTP_URL)
45 .with_client_options(options)
46 .build()?;
47 let store = Arc::new(zarrs_object_store::AsyncObjectStore::new(store));
48 Arc::new(AsyncToSyncStorageAdapter::new(store, block_on))
49 }
50 };
51 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
52 && arg1 == "--usage-log"
53 {
54 let log_writer = Arc::new(std::sync::Mutex::new(
55 // std::io::BufWriter::new(
56 std::io::stdout(),
57 // )
58 ));
59 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
60 chrono::Utc::now().format("[%T%.3f] ").to_string()
61 }));
62 }
63
64 // Init the existing array, reading metadata
65 let array = Array::open(store, ARRAY_PATH)?;
66
67 println!(
68 "The array metadata is:\n{}\n",
69 array.metadata().to_string_pretty()
70 );
71
72 // Read the whole array
73 let data_all: ArrayD<f32> = array.retrieve_array_subset(&array.subset_all())?;
74 println!("The whole array is:\n{data_all}\n");
75
76 // Read a chunk back from the store
77 let chunk_indices = vec![1, 0];
78 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
79 println!("Chunk [1,0] is:\n{data_chunk}\n");
80
81 // Read the central 4x2 subset of the array
82 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
83 let data_4x2: ArrayD<f32> = array.retrieve_array_subset(&subset_4x2)?;
84 println!("The middle 4x2 subset is:\n{data_4x2}\n");
85
86 Ok(())
87}
pub fn open_opt(
storage: Arc<TStorage>,
path: &str,
version: &MetadataRetrieveVersion,
) -> Result<Self, ArrayCreateError>
Open an existing array in storage at path with non-default MetadataRetrieveVersion.
The metadata is read from the store.
§Errors
Returns ArrayCreateError if there is a storage error or any metadata is invalid.
pub fn retrieve_chunk_if_exists<T: FromArrayBytes>(
&self,
chunk_indices: &[u64],
) -> Result<Option<T>, ArrayError>
Read and decode the chunk at chunk_indices into its bytes if it exists with default codec options.
§Errors
Returns an ArrayError if
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
§Panics
Panics if the number of elements in the chunk exceeds usize::MAX.
pub fn retrieve_chunk_elements_if_exists<T: ElementOwned>(
&self,
chunk_indices: &[u64],
) -> Result<Option<Vec<T>>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_if_exists::<Vec<T>>() instead
Read and decode the chunk at chunk_indices into a vector of its elements if it exists with default codec options.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
pub fn retrieve_chunk_ndarray_if_exists<T: ElementOwned>(
&self,
chunk_indices: &[u64],
) -> Result<Option<ArrayD<T>>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_if_exists::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Read and decode the chunk at chunk_indices into an ndarray::ArrayD if it exists.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- the chunk indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if a chunk dimension is larger than usize::MAX.
pub fn retrieve_encoded_chunk(
&self,
chunk_indices: &[u64],
) -> Result<Option<Vec<u8>>, StorageError>
Retrieve the encoded bytes of a chunk.
§Errors
Returns a StorageError if there is an underlying store error.
pub fn retrieve_chunk<T: FromArrayBytes>(
&self,
chunk_indices: &[u64],
) -> Result<T, ArrayError>
Read and decode the chunk at chunk_indices into its bytes or the fill value if it does not exist with default codec options.
§Errors
Returns an ArrayError if
- chunk_indices are invalid,
- there is a codec decoding error, or
- there is an underlying store error.
§Panics
Panics if the number of elements in the chunk exceeds usize::MAX.
Examples found in repository?
See the http_array_read example under open above.
More examples
13fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
14 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
15 use zarrs::array::{ArraySubset, ZARR_NAN_F32, codec, data_type};
16 use zarrs::node::Node;
17 use zarrs::storage::store;
18
19 // Create a store
20 // let path = tempfile::TempDir::new()?;
21 // let mut store: ReadableWritableListableStorage =
22 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
23 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
24 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
25 && arg1 == "--usage-log"
26 {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36
37 // Create the root group
38 zarrs::group::GroupBuilder::new()
39 .build(store.clone(), "/")?
40 .store_metadata()?;
41
42 // Create a group with attributes
43 let group_path = "/group";
44 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
45 group
46 .attributes_mut()
47 .insert("foo".into(), serde_json::Value::String("bar".into()));
48 group.store_metadata()?;
49
50 println!(
51 "The group metadata is:\n{}\n",
52 group.metadata().to_string_pretty()
53 );
54
55 // Create an array
56 let array_path = "/group/array";
57 let array = zarrs::array::ArrayBuilder::new(
58 vec![8, 8], // array shape
59 MetadataV3::new_with_configuration(
60 "rectangular",
61 RectangularChunkGridConfiguration {
62 chunk_shape: vec![
63 vec![
64 NonZeroU64::new(1).unwrap(),
65 NonZeroU64::new(2).unwrap(),
66 NonZeroU64::new(3).unwrap(),
67 NonZeroU64::new(2).unwrap(),
68 ]
69 .into(),
70 NonZeroU64::new(4).unwrap().into(),
71 ], // chunk sizes
72 },
73 ),
74 data_type::float32(),
75 ZARR_NAN_F32,
76 )
77 .bytes_to_bytes_codecs(vec![
78 #[cfg(feature = "gzip")]
79 Arc::new(codec::GzipCodec::new(5)?),
80 ])
81 .dimension_names(["y", "x"].into())
82 // .storage_transformers(vec![].into())
83 .build(store.clone(), array_path)?;
84
85 // Write array metadata to store
86 array.store_metadata()?;
87
88 // Write some chunks (in parallel)
89 (0..4).into_par_iter().try_for_each(|i| {
90 let chunk_grid = array.chunk_grid();
91 let chunk_indices = vec![i, 0];
92 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
93 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
94 chunk_shape
95 .iter()
96 .map(|u| u.get() as usize)
97 .collect::<Vec<_>>(),
98 i as f32,
99 );
100 array.store_chunk(&chunk_indices, chunk_array)
101 } else {
102 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
103 chunk_indices.to_vec(),
104 ))
105 }
106 })?;
107
108 println!(
109 "The array metadata is:\n{}\n",
110 array.metadata().to_string_pretty()
111 );
112
113 // Write a subset spanning multiple chunks, including updating chunks already written
114 array.store_array_subset(
115 &[3..6, 3..6], // start
116 ndarray::ArrayD::<f32>::from_shape_vec(
117 vec![3, 3],
118 vec![0.1f32, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
119 )?,
120 )?;
121
122 // Store elements directly, in this case set the 7th column to 123.0
123 array.store_array_subset(&[0..8, 6..7], &[123.0f32; 8])?;
124
125 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
126 array.store_chunk_subset(
127 // chunk indices
128 &[3, 1],
129 // subset within chunk
130 &[1..2, 0..4],
131 &[-4.0f32; 4],
132 )?;
133
134 // Read the whole array
135 let data_all: ArrayD<f32> = array.retrieve_array_subset(&array.subset_all())?;
136 println!("The whole array is:\n{data_all}\n");
137
138 // Read a chunk back from the store
139 let chunk_indices = vec![1, 0];
140 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
141 println!("Chunk [1,0] is:\n{data_chunk}\n");
142
143 // Read the central 4x2 subset of the array
144 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
145 let data_4x2: ArrayD<f32> = array.retrieve_array_subset(&subset_4x2)?;
146 println!("The middle 4x2 subset is:\n{data_4x2}\n");
147
148 // Show the hierarchy
149 let node = Node::open(&store, "/").unwrap();
150 let tree = node.hierarchy_tree();
151 println!("The Zarr hierarchy tree is:\n{tree}");
152
153 Ok(())
154}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}
10fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12
13 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
14 use zarrs::array::{ArraySubset, codec, data_type};
15 use zarrs::node::Node;
16 use zarrs::storage::store;
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
27 && arg1 == "--usage-log"
28 {
29 let log_writer = Arc::new(std::sync::Mutex::new(
30 // std::io::BufWriter::new(
31 std::io::stdout(),
32 // )
33 ));
34 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
35 chrono::Utc::now().format("[%T%.3f] ").to_string()
36 }));
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 // Create an array
53 let array_path = "/group/array";
54 let subchunk_shape = vec![4, 4];
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![8, 8], // array shape
57 vec![4, 8], // chunk (shard) shape
58 data_type::uint16(),
59 0u16,
60 )
61 .subchunk_shape(subchunk_shape.clone())
62 .bytes_to_bytes_codecs(vec![
63 #[cfg(feature = "gzip")]
64 Arc::new(codec::GzipCodec::new(5)?),
65 ])
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 // The array metadata is
74 println!(
75 "The array metadata is:\n{}\n",
76 array.metadata().to_string_pretty()
77 );
78
79 // Use default codec options (concurrency etc)
80 let options = CodecOptions::default();
81
82 // Write some shards (in parallel)
83 (0..2).into_par_iter().try_for_each(|s| {
84 let chunk_grid = array.chunk_grid();
85 let chunk_indices = vec![s, 0];
86 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
87 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
88 chunk_shape
89 .iter()
90 .map(|u| u.get() as usize)
91 .collect::<Vec<_>>(),
92 |ij| {
93 (s * chunk_shape[0].get() * chunk_shape[1].get()
94 + ij[0] as u64 * chunk_shape[1].get()
95 + ij[1] as u64) as u16
96 },
97 );
98 array.store_chunk(&chunk_indices, chunk_array)
99 } else {
100 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
101 chunk_indices.to_vec(),
102 ))
103 }
104 })?;
105
106 // Read the whole array
107 let data_all: ArrayD<u16> = array.retrieve_array_subset(&array.subset_all())?;
108 println!("The whole array is:\n{data_all}\n");
109
110 // Read a shard back from the store
111 let shard_indices = vec![1, 0];
112 let data_shard: ArrayD<u16> = array.retrieve_chunk(&shard_indices)?;
113 println!("Shard [1,0] is:\n{data_shard}\n");
114
115 // Read a subchunk from the store
116 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
117 let data_chunk: ArrayD<u16> = array.retrieve_array_subset(&subset_chunk_1_0)?;
118 println!("Chunk [1,0] is:\n{data_chunk}\n");
119
120 // Read the central 4x2 subset of the array
121 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
122 let data_4x2: ArrayD<u16> = array.retrieve_array_subset(&subset_4x2)?;
123 println!("The middle 4x2 subset is:\n{data_4x2}\n");
124
125 // Decode subchunks
126 // In some cases, it might be preferable to decode subchunks in a shard directly.
127 // If using the partial decoder, then the shard index will only be read once from the store.
128 let partial_decoder = array.partial_decoder(&[0, 0])?;
129 println!("Decoded subchunks:");
130 for subchunk_subset in [
131 ArraySubset::new_with_start_shape(vec![0, 0], subchunk_shape.clone())?,
132 ArraySubset::new_with_start_shape(vec![0, 4], subchunk_shape.clone())?,
133 ] {
134 println!("{subchunk_subset}");
135 let decoded_subchunk_bytes = partial_decoder.partial_decode(&subchunk_subset, &options)?;
136 let ndarray = bytes_to_ndarray::<u16>(
137 &subchunk_shape,
138 decoded_subchunk_bytes.into_fixed()?.into_owned(),
139 )?;
140 println!("{ndarray}\n");
141 }
142
143 // Show the hierarchy
144 let node = Node::open(&store, "/").unwrap();
145 let tree = node.hierarchy_tree();
146 println!("The Zarr hierarchy tree is:\n{}", tree);
147
148 println!(
149 "The keys in the store are:\n[{}]",
150 store.list().unwrap_or_default().iter().format(", ")
151 );
152
153 Ok(())
154}

8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 ArrayD::<f32>::from_shape_vec(
84 chunk_subset.shape_usize(),
85 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
86 )
87 .unwrap(),
88 )
89 })?;
90
91 let subset_all = array.subset_all();
92 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
93 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
94
95 // Store multiple chunks
96 let ndarray_chunks: Array2<f32> = array![
97 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
98 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
99 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
100 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
101 ];
102 array.store_chunks(&[1..2, 0..2], ndarray_chunks)?;
103 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
104 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
105
106 // Write a subset spanning multiple chunks, including updating chunks already written
107 let ndarray_subset: Array2<f32> =
108 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
109 array.store_array_subset(&[3..6, 3..6], ndarray_subset)?;
110 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
111 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
112
113 // Store array subset
114 let ndarray_subset: Array2<f32> = array![
115 [-0.6],
116 [-1.6],
117 [-2.6],
118 [-3.6],
119 [-4.6],
120 [-5.6],
121 [-6.6],
122 [-7.6],
123 ];
124 array.store_array_subset(&[0..8, 6..7], ndarray_subset)?;
125 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
126 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
127
128 // Store chunk subset
129 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
130 array.store_chunk_subset(
131 // chunk indices
132 &[1, 1],
133 // subset within chunk
134 &[3..4, 0..4],
135 ndarray_chunk_subset,
136 )?;
137 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
138 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
139
140 // Erase a chunk
141 array.erase_chunk(&[0, 0])?;
142 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
143 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
144
145 // Read a chunk
146 let chunk_indices = vec![0, 1];
147 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
148 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
149
150 // Read chunks
151 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
152 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
153 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
154
155 // Retrieve an array subset
156 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
157 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
158 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
159
160 // Show the hierarchy
161 let node = Node::open(&store, "/").unwrap();
162 let tree = node.hierarchy_tree();
163 println!("hierarchy_tree:\n{}", tree);
164
165 Ok(())
166}

Source

pub fn retrieve_chunk_elements<T: ElementOwned>(
&self,
chunk_indices: &[u64],
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk::<Vec<T>>() instead
pub fn retrieve_chunk_elements<T: ElementOwned>( &self, chunk_indices: &[u64], ) -> Result<Vec<T>, ArrayError>
Read and decode the chunk at chunk_indices into a vector of its elements or the fill value if it does not exist.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- chunk_indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
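The element-size requirement can be pictured with plain std code: decoded chunk bytes only reinterpret cleanly as a vector of elements when the byte length is a multiple of the element size. A std-only sketch (`bytes_to_u16_le` is a hypothetical helper, not zarrs API):

```rust
// Std-only sketch (hypothetical helper, not zarrs API): decoded chunk bytes
// reinterpret as u16 elements only when the byte length is a multiple of
// `size_of::<u16>()`; otherwise `T` does not match the data type size.
fn bytes_to_u16_le(bytes: &[u8]) -> Option<Vec<u16>> {
    if bytes.len() % std::mem::size_of::<u16>() != 0 {
        return None; // size mismatch: cannot reinterpret
    }
    Some(
        bytes
            .chunks_exact(2)
            .map(|b| u16::from_le_bytes([b[0], b[1]]))
            .collect(),
    )
}

fn main() {
    assert_eq!(bytes_to_u16_le(&[1, 0, 0, 1]), Some(vec![1, 256]));
    assert_eq!(bytes_to_u16_le(&[1, 0, 0]), None); // odd length: size mismatch
}
```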
Source

pub fn retrieve_chunk_ndarray<T: ElementOwned>(
&self,
chunk_indices: &[u64],
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk::<ndarray::ArrayD<T>>() instead. Available on crate feature ndarray only.
pub fn retrieve_chunk_ndarray<T: ElementOwned>( &self, chunk_indices: &[u64], ) -> Result<ArrayD<T>, ArrayError>
Read and decode the chunk at chunk_indices into an ndarray::ArrayD. It is filled with the fill value if it does not exist.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- the chunk indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if a chunk dimension is larger than usize::MAX.
Source

pub fn retrieve_encoded_chunks(
&self,
chunks: &dyn ArraySubsetTraits,
options: &CodecOptions,
) -> Result<Vec<Option<Vec<u8>>>, StorageError>
pub fn retrieve_encoded_chunks( &self, chunks: &dyn ArraySubsetTraits, options: &CodecOptions, ) -> Result<Vec<Option<Vec<u8>>>, StorageError>
Retrieve the encoded bytes of the chunks in chunks.
The chunks are in order of the chunk indices returned by chunks.indices().into_iter().
§Errors
Returns a StorageError if there is an underlying store error.
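The ordering can be emulated with std-only code; for a 2D chunk subset the indices come out in row-major (C) order, with the last dimension varying fastest. A sketch of that ordering (not the zarrs iterator itself):

```rust
use std::ops::Range;

// Std-only sketch: emulates the row-major (C) order in which
// `chunks.indices().into_iter()` yields chunk indices for a 2D subset;
// the last dimension varies fastest.
fn indices_row_major(rows: Range<u64>, cols: Range<u64>) -> Vec<[u64; 2]> {
    let mut out = Vec::new();
    for r in rows {
        for c in cols.clone() {
            out.push([r, c]);
        }
    }
    out
}

fn main() {
    // Chunk subset [0..2, 1..3] yields [0,1], [0,2], [1,1], [1,2].
    assert_eq!(
        indices_row_major(0..2, 1..3),
        vec![[0, 1], [0, 2], [1, 1], [1, 2]]
    );
}
```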
Source

pub fn retrieve_chunks<T: FromArrayBytes>(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<T, ArrayError>
pub fn retrieve_chunks<T: FromArrayBytes>( &self, chunks: &dyn ArraySubsetTraits, ) -> Result<T, ArrayError>
Read and decode the chunks at chunks into their bytes.
§Errors
Returns an ArrayError if:
- any chunk indices in chunks are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
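For a regular chunk grid, the element region covered by a chunk range follows from simple per-dimension arithmetic. A std-only sketch with a hypothetical helper (clamping at the array bounds is ignored here):

```rust
use std::ops::Range;

// Std-only sketch (hypothetical helper): under a regular chunk grid, a chunk
// range maps to the element range `start * chunk_len .. end * chunk_len` in
// each dimension, where `chunk_len` is one chunk-shape component.
fn chunks_to_elements(chunks: Range<u64>, chunk_len: u64) -> Range<u64> {
    chunks.start * chunk_len..chunks.end * chunk_len
}

fn main() {
    // With 4x4 chunks, retrieving chunks [0..2, 1..2] covers elements [0..8, 4..8].
    assert_eq!(chunks_to_elements(0..2, 4), 0..8);
    assert_eq!(chunks_to_elements(1..2, 4), 4..8);
}
```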
Examples found in repository
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}

Source

pub fn retrieve_chunks_elements<T: ElementOwned>(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunks::<Vec<T>>() instead
pub fn retrieve_chunks_elements<T: ElementOwned>( &self, chunks: &dyn ArraySubsetTraits, ) -> Result<Vec<T>, ArrayError>
Read and decode the chunks at chunks into a vector of their elements.
§Errors
Returns an ArrayError if any chunk indices in chunks are invalid, or on any error condition of Array::retrieve_chunks_opt.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
Source

pub fn retrieve_chunks_ndarray<T: ElementOwned>(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunks::<ndarray::ArrayD<T>>() instead. Available on crate feature ndarray only.
pub fn retrieve_chunks_ndarray<T: ElementOwned>( &self, chunks: &dyn ArraySubsetTraits, ) -> Result<ArrayD<T>, ArrayError>
Read and decode the chunks at chunks into an ndarray::ArrayD.
§Errors
Returns an ArrayError if any chunk indices in chunks are invalid, or on any error condition of Array::retrieve_chunks_elements_opt.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
Source

pub fn retrieve_chunk_subset<T: FromArrayBytes>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
) -> Result<T, ArrayError>
pub fn retrieve_chunk_subset<T: FromArrayBytes>( &self, chunk_indices: &[u64], chunk_subset: &dyn ArraySubsetTraits, ) -> Result<T, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into its bytes.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if the number of elements in chunk_subset is usize::MAX or larger.
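Translating a chunk subset into array coordinates under a regular chunk grid is per-dimension arithmetic. This std-only sketch (hypothetical helper, not zarrs API) mirrors the `store_chunk_subset(&[1, 1], &[3..4, 0..4], ...)` call from the examples above, which touches array rows 7..8 and columns 4..8:

```rust
use std::ops::Range;

// Std-only sketch (hypothetical helper): a subset within one chunk maps to
// array coordinates by offsetting with `chunk_index * chunk_len` per dimension.
fn chunk_subset_to_array(chunk_index: u64, chunk_len: u64, within: Range<u64>) -> Range<u64> {
    let offset = chunk_index * chunk_len;
    offset + within.start..offset + within.end
}

fn main() {
    // Chunk [1, 1] of a 4x4 grid, subset [3..4, 0..4] => array [7..8, 4..8].
    assert_eq!(chunk_subset_to_array(1, 4, 3..4), 7..8);
    assert_eq!(chunk_subset_to_array(1, 4, 0..4), 4..8);
}
```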
Source

pub fn retrieve_chunk_subset_elements<T: ElementOwned>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_subset::<Vec<T>>() instead
pub fn retrieve_chunk_subset_elements<T: ElementOwned>( &self, chunk_indices: &[u64], chunk_subset: &dyn ArraySubsetTraits, ) -> Result<Vec<T>, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into its elements.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
Source

pub fn retrieve_chunk_subset_ndarray<T: ElementOwned>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_subset::<ndarray::ArrayD<T>>() instead. Available on crate feature ndarray only.
pub fn retrieve_chunk_subset_ndarray<T: ElementOwned>( &self, chunk_indices: &[u64], chunk_subset: &dyn ArraySubsetTraits, ) -> Result<ArrayD<T>, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into an ndarray::ArrayD.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if the number of elements in chunk_subset is usize::MAX or larger.
Source

pub fn retrieve_array_subset<T: FromArrayBytes>(
&self,
array_subset: &dyn ArraySubsetTraits,
) -> Result<T, ArrayError>
pub fn retrieve_array_subset<T: FromArrayBytes>( &self, array_subset: &dyn ArraySubsetTraits, ) -> Result<T, ArrayError>
Read and decode the array_subset of array into its bytes.
Out-of-bounds elements will have the fill value.
§Errors
Returns an ArrayError if:
- the array_subset dimensionality does not match the chunk grid dimensionality,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if attempting to reference a byte beyond usize::MAX.
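Which chunks a subset touches under a regular chunk grid is again per-dimension arithmetic: `floor(start / chunk_len) .. ceil(end / chunk_len)`. A std-only sketch with a hypothetical helper; e.g. the subset `[3..6, 3..6]` used in the examples spans chunks `[0..2, 0..2]` of a 4x4-chunked array:

```rust
use std::ops::Range;

// Std-only sketch (hypothetical helper): the chunks intersecting an array
// subset along one dimension of a regular grid span
// floor(start / chunk_len) .. ceil(end / chunk_len).
fn intersecting_chunks(subset: Range<u64>, chunk_len: u64) -> Range<u64> {
    subset.start / chunk_len..subset.end.div_ceil(chunk_len)
}

fn main() {
    // Subset [3..6, 3..6] of a 4x4-chunked array touches chunks [0..2, 0..2].
    assert_eq!(intersecting_chunks(3..6, 4), 0..2);
    assert_eq!(intersecting_chunks(4..8, 4), 1..2);
}
```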
Examples found in repository
157fn main() {
158 let store = std::sync::Arc::new(MemoryStore::default());
159 let array_path = "/array";
160 let array = ArrayBuilder::new(
161 vec![4, 1], // array shape
162 vec![3, 1], // regular chunk shape
163 Arc::new(CustomDataTypeVariableSize),
164 [],
165 )
166 .array_to_array_codecs(vec![
167 #[cfg(feature = "transpose")]
168 Arc::new(zarrs::array::codec::TransposeCodec::new(
169 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
170 )),
171 ])
172 .bytes_to_bytes_codecs(vec![
173 #[cfg(feature = "gzip")]
174 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
175 #[cfg(feature = "crc32c")]
176 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
177 ])
178 // .storage_transformers(vec![].into())
179 .build(store, array_path)
180 .unwrap();
181 println!("{}", array.metadata().to_string_pretty());
182
183 let data = [
184 CustomDataTypeVariableSizeElement::from(Some(1.0)),
185 CustomDataTypeVariableSizeElement::from(None),
186 CustomDataTypeVariableSizeElement::from(Some(3.0)),
187 ];
188 array.store_chunk(&[0, 0], &data).unwrap();
189
190 let data: Vec<CustomDataTypeVariableSizeElement> =
191 array.retrieve_array_subset(&array.subset_all()).unwrap();
192
193 assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
194 assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
195 assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
196 assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));
197
198 println!("{data:#?}");
199}

More examples
280fn main() {
281 let store = std::sync::Arc::new(MemoryStore::default());
282 let array_path = "/array";
283 let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
284 let array = ArrayBuilder::new(
285 vec![4, 1], // array shape
286 vec![2, 1], // regular chunk shape
287 Arc::new(CustomDataTypeFixedSize),
288 FillValue::new(fill_value.to_ne_bytes().to_vec()),
289 )
290 .array_to_array_codecs(vec![
291 #[cfg(feature = "transpose")]
292 Arc::new(zarrs::array::codec::TransposeCodec::new(
293 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
294 )),
295 ])
296 .bytes_to_bytes_codecs(vec![
297 #[cfg(feature = "gzip")]
298 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
299 #[cfg(feature = "crc32c")]
300 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
301 ])
302 // .storage_transformers(vec![].into())
303 .build(store, array_path)
304 .unwrap();
305 println!("{}", array.metadata().to_string_pretty());
306
307 let data = [
308 CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
309 CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
310 ];
311 array.store_chunk(&[0, 0], &data).unwrap();
312
313 let data: Vec<CustomDataTypeFixedSizeElement> =
314 array.retrieve_array_subset(&array.subset_all()).unwrap();
315
316 assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
317 assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
318 assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
319 assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
320
321 println!("{data:#?}");
322}

192fn main() {
193 let store = std::sync::Arc::new(MemoryStore::default());
194 let array_path = "/array";
195 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
196 let array = ArrayBuilder::new(
197 vec![4096, 1], // array shape
198 vec![5, 1], // regular chunk shape
199 Arc::new(CustomDataTypeUInt12),
200 FillValue::new(fill_value.into_le_bytes().to_vec()),
201 )
202 .array_to_array_codecs(vec![
203 #[cfg(feature = "transpose")]
204 Arc::new(zarrs::array::codec::TransposeCodec::new(
205 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
206 )),
207 ])
208 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
209 .bytes_to_bytes_codecs(vec![
210 #[cfg(feature = "gzip")]
211 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
212 #[cfg(feature = "crc32c")]
213 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
214 ])
215 // .storage_transformers(vec![].into())
216 .build(store, array_path)
217 .unwrap();
218 println!("{}", array.metadata().to_string_pretty());
219
220 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
221 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
222 .collect();
223
224 array
225 .store_array_subset(&array.subset_all(), &data)
226 .unwrap();
227
228 let mut data: Vec<CustomDataTypeUInt12Element> =
229 array.retrieve_array_subset(&array.subset_all()).unwrap();
230
231 for (i, d) in data.drain(0..4096).enumerate() {
232 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
233 assert_eq!(d, element);
234 let element_pd: Vec<CustomDataTypeUInt12Element> = array
235 .retrieve_array_subset(&[(i as u64)..i as u64 + 1, 0..1])
236 .unwrap();
237 assert_eq!(element_pd[0], element);
238 }
239}

203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 vec![5, 1], // regular chunk shape
210 Arc::new(CustomDataTypeFloat8e3m4),
211 FillValue::new(fill_value.into_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .bytes_to_bytes_codecs(vec![
220 #[cfg(feature = "gzip")]
221 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
222 #[cfg(feature = "crc32c")]
223 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
224 ])
225 // .storage_transformers(vec![].into())
226 .build(store, array_path)
227 .unwrap();
228 println!("{}", array.metadata().to_string_pretty());
229
230 let data = [
231 CustomDataTypeFloat8e3m4Element::from(2.34),
232 CustomDataTypeFloat8e3m4Element::from(3.45),
233 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
234 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
235 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
236 ];
237 array.store_chunk(&[0, 0], &data).unwrap();
238
239 let data: Vec<CustomDataTypeFloat8e3m4Element> =
240 array.retrieve_array_subset(&array.subset_all()).unwrap();
241
242 for f in &data {
243 println!(
244 "float8_e3m4: {:08b} f32: {}",
245 f.into_ne_bytes()[0],
246 f.into_f32()
247 );
248 }
249
250 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
251 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
252 assert_eq!(
253 data[2],
254 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
255 );
256 assert_eq!(
257 data[3],
258 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
259 );
260 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
261 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
262}
194fn main() {
195 let store = std::sync::Arc::new(MemoryStore::default());
196 let array_path = "/array";
197 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
198 let array = ArrayBuilder::new(
199 vec![6, 1], // array shape
200 vec![5, 1], // regular chunk shape
201 Arc::new(CustomDataTypeUInt4),
202 FillValue::new(fill_value.into_ne_bytes().to_vec()),
203 )
204 .array_to_array_codecs(vec![
205 #[cfg(feature = "transpose")]
206 Arc::new(zarrs::array::codec::TransposeCodec::new(
207 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
208 )),
209 ])
210 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
211 .bytes_to_bytes_codecs(vec![
212 #[cfg(feature = "gzip")]
213 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
214 #[cfg(feature = "crc32c")]
215 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
216 ])
217 // .storage_transformers(vec![].into())
218 .build(store, array_path)
219 .unwrap();
220 println!("{}", array.metadata().to_string_pretty());
221
222 let data = [
223 CustomDataTypeUInt4Element::try_from(1).unwrap(),
224 CustomDataTypeUInt4Element::try_from(2).unwrap(),
225 CustomDataTypeUInt4Element::try_from(3).unwrap(),
226 CustomDataTypeUInt4Element::try_from(4).unwrap(),
227 CustomDataTypeUInt4Element::try_from(5).unwrap(),
228 ];
229 array.store_chunk(&[0, 0], &data).unwrap();
230
231 let data: Vec<CustomDataTypeUInt4Element> =
232 array.retrieve_array_subset(&array.subset_all()).unwrap();
233
234 for f in &data {
235 println!("uint4: {:08b} u8: {}", f.into_u8(), f.into_u8());
236 }
237
238 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
239 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
240 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
241 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
242 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
243 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
244
245 let data: Vec<CustomDataTypeUInt4Element> = array.retrieve_array_subset(&[1..3, 0..1]).unwrap();
246 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
247 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
248}
11fn main() -> Result<(), Box<dyn std::error::Error>> {
12 // Create an in-memory store
13 // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
14 // "zarrs/tests/data/v3/array_optional_nested.zarr",
15 // )?);
16 let store = Arc::new(zarrs::storage::store::MemoryStore::new());
17
18 // Build the codec chains for the optional codec
19 let array = ArrayBuilder::new(
20 vec![4, 4], // 4x4 array
21 vec![2, 2], // 2x2 chunks
22 data_type::uint8().to_optional().to_optional(), // Optional optional uint8 => Option<Option<u8>>
23 FillValue::new_optional_null().into_optional(), // Fill value => Some(None)
24 )
25 .dimension_names(["y", "x"].into())
26 .attributes(
27 serde_json::json!({
28 "description": r#"A 4x4 array of optional optional uint8 values with some missing data.
29The fill value is null on the inner optional layer, i.e. Some(None).
30N marks missing (`None`=`null`) values. SN marks `Some(None)`=`[null]` values:
31 N SN 2 3
32 N 5 N 7
33 SN SN N N
34 SN SN N N"#,
35 })
36 .as_object()
37 .unwrap()
38 .clone(),
39 )
40 .build(store.clone(), "/array")?;
41 array.store_metadata_opt(
42 &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
43 )?;
44
45 println!("Array metadata:\n{}", array.metadata().to_string_pretty());
46
47 // Create some data with missing values
48 let data = ndarray::array![
49 [None, Some(None), Some(Some(2u8)), Some(Some(3u8))],
50 [None, Some(Some(5u8)), None, Some(Some(7u8))],
51 [Some(None), Some(None), None, None],
52 [Some(None), Some(None), None, None],
53 ]
54 .into_dyn();
55
56 // Write the data
57 array.store_array_subset(&array.subset_all(), data.clone())?;
58 println!("Data written to array.");
59
60 // Read back the data
61 let data_read: ArrayD<Option<Option<u8>>> = array.retrieve_array_subset(&array.subset_all())?;
62
63 // Verify data integrity
64 assert_eq!(data, data_read);
65
66 // Display the data in a grid format
67 println!(
68 "Data grid. N marks missing (`None`=`null`) values. SN marks `Some(None)`=`[null]` values"
69 );
70 println!(" 0 1 2 3");
71 for y in 0..4 {
72 print!("{} ", y);
73 for x in 0..4 {
74 match data_read[[y, x]] {
75 Some(Some(value)) => print!("{:3} ", value),
76 Some(None) => print!(" SN "),
77 None => print!(" N "),
78 }
79 }
80 println!();
81 }
82 Ok(())
83}
Source
pub fn retrieve_array_subset_into(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    output_target: ArrayBytesDecodeIntoTarget<'_>,
) -> Result<(), ArrayError>
Read and decode the array_subset of array into a preallocated output_target.
Only supports fixed-length data types (including optional types with fixed inner types).
Out-of-bounds elements will have the fill value.
§Errors
Returns an ArrayError if:
- the array_subset dimensionality does not match the chunk grid dimensionality,
- the data type is variable-length,
- the number of elements in output_target does not match array_subset,
- there is a codec decoding error, or
- an underlying store error.
Source
pub fn retrieve_array_subset_elements<T: ElementOwned>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_array_subset::<Vec<T>>() instead
Read and decode the array_subset of array into a vector of its elements.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- an array subset is invalid or out of bounds of the array,
- there is a codec decoding error, or
- an underlying store error.
Source
pub fn retrieve_array_subset_ndarray<T: ElementOwned>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_array_subset::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Read and decode the array_subset of array into an ndarray::ArrayD.
§Errors
Returns an ArrayError if:
- an array subset is invalid or out of bounds of the array,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if any dimension in array_subset is usize::MAX or larger.
Source
pub fn partial_decoder(
    &self,
    chunk_indices: &[u64],
) -> Result<Arc<dyn ArrayPartialDecoderTraits>, ArrayError>
Initialises a partial decoder for the chunk at chunk_indices.
§Errors
Returns an ArrayError if initialisation of the partial decoder fails.
Examples found in repository
10fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12
13 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
14 use zarrs::array::{ArraySubset, codec, data_type};
15 use zarrs::node::Node;
16 use zarrs::storage::store;
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
27 && arg1 == "--usage-log"
28 {
29 let log_writer = Arc::new(std::sync::Mutex::new(
30 // std::io::BufWriter::new(
31 std::io::stdout(),
32 // )
33 ));
34 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
35 chrono::Utc::now().format("[%T%.3f] ").to_string()
36 }));
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 // Create an array
53 let array_path = "/group/array";
54 let subchunk_shape = vec![4, 4];
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![8, 8], // array shape
57 vec![4, 8], // chunk (shard) shape
58 data_type::uint16(),
59 0u16,
60 )
61 .subchunk_shape(subchunk_shape.clone())
62 .bytes_to_bytes_codecs(vec![
63 #[cfg(feature = "gzip")]
64 Arc::new(codec::GzipCodec::new(5)?),
65 ])
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 // The array metadata is
74 println!(
75 "The array metadata is:\n{}\n",
76 array.metadata().to_string_pretty()
77 );
78
79 // Use default codec options (concurrency etc)
80 let options = CodecOptions::default();
81
82 // Write some shards (in parallel)
83 (0..2).into_par_iter().try_for_each(|s| {
84 let chunk_grid = array.chunk_grid();
85 let chunk_indices = vec![s, 0];
86 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
87 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
88 chunk_shape
89 .iter()
90 .map(|u| u.get() as usize)
91 .collect::<Vec<_>>(),
92 |ij| {
93 (s * chunk_shape[0].get() * chunk_shape[1].get()
94 + ij[0] as u64 * chunk_shape[1].get()
95 + ij[1] as u64) as u16
96 },
97 );
98 array.store_chunk(&chunk_indices, chunk_array)
99 } else {
100 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
101 chunk_indices.to_vec(),
102 ))
103 }
104 })?;
105
106 // Read the whole array
107 let data_all: ArrayD<u16> = array.retrieve_array_subset(&array.subset_all())?;
108 println!("The whole array is:\n{data_all}\n");
109
110 // Read a shard back from the store
111 let shard_indices = vec![1, 0];
112 let data_shard: ArrayD<u16> = array.retrieve_chunk(&shard_indices)?;
113 println!("Shard [1,0] is:\n{data_shard}\n");
114
115 // Read a subchunk from the store
116 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
117 let data_chunk: ArrayD<u16> = array.retrieve_array_subset(&subset_chunk_1_0)?;
118 println!("Chunk [1,0] is:\n{data_chunk}\n");
119
120 // Read the central 4x2 subset of the array
121 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
122 let data_4x2: ArrayD<u16> = array.retrieve_array_subset(&subset_4x2)?;
123 println!("The middle 4x2 subset is:\n{data_4x2}\n");
124
125 // Decode subchunks
126 // In some cases, it might be preferable to decode subchunks in a shard directly.
127 // If using the partial decoder, then the shard index will only be read once from the store.
128 let partial_decoder = array.partial_decoder(&[0, 0])?;
129 println!("Decoded subchunks:");
130 for subchunk_subset in [
131 ArraySubset::new_with_start_shape(vec![0, 0], subchunk_shape.clone())?,
132 ArraySubset::new_with_start_shape(vec![0, 4], subchunk_shape.clone())?,
133 ] {
134 println!("{subchunk_subset}");
135 let decoded_subchunk_bytes = partial_decoder.partial_decode(&subchunk_subset, &options)?;
136 let ndarray = bytes_to_ndarray::<u16>(
137 &subchunk_shape,
138 decoded_subchunk_bytes.into_fixed()?.into_owned(),
139 )?;
140 println!("{ndarray}\n");
141 }
142
143 // Show the hierarchy
144 let node = Node::open(&store, "/").unwrap();
145 let tree = node.hierarchy_tree();
146 println!("The Zarr hierarchy tree is:\n{}", tree);
147
148 println!(
149 "The keys in the store are:\n[{}]",
150 store.list().unwrap_or_default().iter().format(", ")
151 );
152
153 Ok(())
154}
Source
pub fn retrieve_chunk_if_exists_opt<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<T>, ArrayError>
Explicit options version of retrieve_chunk_if_exists.
Source
pub fn retrieve_chunk_opt<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<T, ArrayError>
Explicit options version of retrieve_chunk.
Source
pub fn retrieve_chunk_elements_if_exists_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<Vec<T>>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_if_exists_opt::<Vec<T>>() instead
Explicit options version of retrieve_chunk_elements_if_exists.
Source
pub fn retrieve_chunk_elements_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_opt::<Vec<T>>() instead
Explicit options version of retrieve_chunk_elements.
Source
pub fn retrieve_chunk_ndarray_if_exists_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<ArrayD<T>>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_if_exists_opt::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_ndarray_if_exists.
Source
pub fn retrieve_chunk_ndarray_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_opt::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_ndarray.
Source
pub fn retrieve_chunks_opt<T: FromArrayBytes>(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>
Explicit options version of retrieve_chunks.
Source
pub fn retrieve_chunks_elements_opt<T: ElementOwned>(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunks_opt::<Vec<T>>() instead
Explicit options version of retrieve_chunks_elements.
Source
pub fn retrieve_chunks_ndarray_opt<T: ElementOwned>(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunks_opt::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Explicit options version of retrieve_chunks_ndarray.
Source
pub fn retrieve_array_subset_opt<T: FromArrayBytes>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>
Explicit options version of retrieve_array_subset.
Source
pub fn retrieve_array_subset_into_opt(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    output_target: ArrayBytesDecodeIntoTarget<'_>,
    options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of retrieve_array_subset_into.
Source
pub fn retrieve_array_subset_elements_opt<T: ElementOwned>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_array_subset_opt::<Vec<T>>() instead
Explicit options version of retrieve_array_subset_elements.
Source
pub fn retrieve_array_subset_ndarray_opt<T: ElementOwned>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_array_subset_opt::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Explicit options version of retrieve_array_subset_ndarray.
Source
pub fn retrieve_chunk_subset_opt<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>
Explicit options version of retrieve_chunk_subset.
Source
pub fn retrieve_chunk_subset_elements_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_subset_opt::<Vec<T>>() instead
Explicit options version of retrieve_chunk_subset_elements.
Source
pub fn retrieve_chunk_subset_ndarray_opt<T: ElementOwned>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use retrieve_chunk_subset_opt::<ndarray::ArrayD<T>>() instead
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_subset_ndarray.
Source
pub fn partial_decoder_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Arc<dyn ArrayPartialDecoderTraits>, ArrayError>
Explicit options version of partial_decoder.
Source
§impl<TStorage: ?Sized + WritableStorageTraits + 'static> Array<TStorage>
Source
pub fn store_metadata(&self) -> Result<(), StorageError>
Store metadata with default ArrayMetadataOptions.
The metadata is created with Array::metadata_opt.
§Errors
Returns StorageError if there is an underlying store error.
Examples found in repository
22fn main() -> Result<(), Box<dyn std::error::Error>> {
23 let store = Arc::new(zarrs_storage::store::MemoryStore::new());
24
25 let serde_json::Value::Object(attributes) = serde_json::json!({
26 "foo": "bar",
27 "baz": 42,
28 }) else {
29 unreachable!()
30 };
31
32 // Create a Zarr V2 group
33 let group_metadata: GroupMetadata = GroupMetadataV2::new()
34 .with_attributes(attributes.clone())
35 .into();
36 let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;
37
38 // Store the metadata as V2 and V3
39 let convert_group_metadata_to_v3 =
40 GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
41 group.store_metadata()?;
42 group.store_metadata_opt(&convert_group_metadata_to_v3)?;
43 println!(
44 "group/.zgroup (Zarr V2 group metadata):\n{}\n",
45 key_to_str(&store, "group/.zgroup")?
46 );
47 println!(
48 "group/.zattrs (Zarr V2 group attributes):\n{}\n",
49 key_to_str(&store, "group/.zattrs")?
50 );
51 println!(
52 "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
53 key_to_str(&store, "group/zarr.json")?
54 );
55 // println!(
56 // "The equivalent Zarr V3 group metadata is\n{}\n",
57 // group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
58 // );
59
60 // Create a Zarr V2 array
61 let array_metadata = ArrayMetadataV2::new(
62 vec![10, 10],
63 vec![NonZeroU64::new(5).unwrap(); 2],
64 ">f4".into(), // big endian float32
65 FillValueMetadata::from(f32::NAN),
66 None,
67 None,
68 )
69 .with_dimension_separator(ChunkKeySeparator::Slash)
70 .with_order(ArrayMetadataV2Order::F)
71 .with_attributes(attributes.clone());
72 let array = zarrs::array::Array::new_with_metadata(
73 store.clone(),
74 "/group/array",
75 array_metadata.into(),
76 )?;
77
78 // Store the metadata as V2 and V3
79 let convert_array_metadata_to_v3 =
80 ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
81 array.store_metadata()?;
82 array.store_metadata_opt(&convert_array_metadata_to_v3)?;
83 println!(
84 "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
85 key_to_str(&store, "group/array/.zarray")?
86 );
87 println!(
88 "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
89 key_to_str(&store, "group/array/.zattrs")?
90 );
91 println!(
92 "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
93 key_to_str(&store, "group/array/zarr.json")?
94 );
95 // println!(
96 // "The equivalent Zarr V3 array metadata is\n{}\n",
97 // array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
98 // );
99
100 array.store_chunk(&[0, 1], &[0.0f32; 5 * 5])?;
101
102 // Print the keys in the store
103 println!("The store contains keys:");
104 for key in store.list()? {
105 println!(" {}", key);
106 }
107
108 Ok(())
109}
More examples
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArrayBytes, data_type};
12 use zarrs::storage::store;
13
14 // Create a store
15 // let path = tempfile::TempDir::new()?;
16 // let mut store: ReadableWritableListableStorage =
17 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
18 // let mut store: ReadableWritableListableStorage = Arc::new(
19 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
20 // );
21 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
22 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
23 && arg1 == "--usage-log"
24 {
25 let log_writer = Arc::new(std::sync::Mutex::new(
26 // std::io::BufWriter::new(
27 std::io::stdout(),
28 // )
29 ));
30 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
31 chrono::Utc::now().format("[%T%.3f] ").to_string()
32 }));
33 }
34
35 // Create the root group
36 zarrs::group::GroupBuilder::new()
37 .build(store.clone(), "/")?
38 .store_metadata()?;
39
40 // Create a group with attributes
41 let group_path = "/group";
42 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
43 group
44 .attributes_mut()
45 .insert("foo".into(), serde_json::Value::String("bar".into()));
46 group.store_metadata()?;
47
48 println!(
49 "The group metadata is:\n{}\n",
50 group.metadata().to_string_pretty()
51 );
52
53 // Create an array
54 let array_path = "/group/array";
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![4, 4], // array shape
57 vec![2, 2], // regular chunk shape
58 data_type::string(),
59 "_",
60 )
61 // .bytes_to_bytes_codecs(vec![]) // uncompressed
62 .dimension_names(["y", "x"].into())
63 // .storage_transformers(vec![].into())
64 .build(store.clone(), array_path)?;
65
66 // Write array metadata to store
67 array.store_metadata()?;
68
69 println!(
70 "The array metadata is:\n{}\n",
71 array.metadata().to_string_pretty()
72 );
73
74 // Write some chunks
75 array.store_chunk(
76 &[0, 0],
77 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
78 )?;
79 array.store_chunk(
80 &[0, 1],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
82 )?;
83 let subset_all = array.subset_all();
84 let data_all: ArrayD<String> = array.retrieve_array_subset(&subset_all)?;
85 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
86
87 // Write a subset spanning multiple chunks, including updating chunks already written
88 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
89 array.store_array_subset(&[1..3, 1..3], ndarray_subset)?;
90 let data_all: ArrayD<String> = array.retrieve_array_subset(&subset_all)?;
91 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
92
93 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
94 // TODO: Add a convenience function for this?
95 let data_all: ArrayBytes = array.retrieve_array_subset(&subset_all)?;
96 let (bytes, offsets) = data_all.into_variable()?.into_parts();
97 let string = String::from_utf8(bytes.into_owned())?;
98 let elements = offsets
99 .iter()
100 .tuple_windows()
101 .map(|(&curr, &next)| &string[curr..next])
102 .collect::<Vec<&str>>();
103 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
104 println!("ndarray::ArrayD<&str>:\n{ndarray}");
105
106 Ok(())
107}
13fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
14 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
15 use zarrs::array::{ArraySubset, ZARR_NAN_F32, codec, data_type};
16 use zarrs::node::Node;
17 use zarrs::storage::store;
18
19 // Create a store
20 // let path = tempfile::TempDir::new()?;
21 // let mut store: ReadableWritableListableStorage =
22 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
23 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
24 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
25 && arg1 == "--usage-log"
26 {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36
37 // Create the root group
38 zarrs::group::GroupBuilder::new()
39 .build(store.clone(), "/")?
40 .store_metadata()?;
41
42 // Create a group with attributes
43 let group_path = "/group";
44 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
45 group
46 .attributes_mut()
47 .insert("foo".into(), serde_json::Value::String("bar".into()));
48 group.store_metadata()?;
49
50 println!(
51 "The group metadata is:\n{}\n",
52 group.metadata().to_string_pretty()
53 );
54
55 // Create an array
56 let array_path = "/group/array";
57 let array = zarrs::array::ArrayBuilder::new(
58 vec![8, 8], // array shape
59 MetadataV3::new_with_configuration(
60 "rectangular",
61 RectangularChunkGridConfiguration {
62 chunk_shape: vec![
63 vec![
64 NonZeroU64::new(1).unwrap(),
65 NonZeroU64::new(2).unwrap(),
66 NonZeroU64::new(3).unwrap(),
67 NonZeroU64::new(2).unwrap(),
68 ]
69 .into(),
70 NonZeroU64::new(4).unwrap().into(),
71 ], // chunk sizes
72 },
73 ),
74 data_type::float32(),
75 ZARR_NAN_F32,
76 )
77 .bytes_to_bytes_codecs(vec![
78 #[cfg(feature = "gzip")]
79 Arc::new(codec::GzipCodec::new(5)?),
80 ])
81 .dimension_names(["y", "x"].into())
82 // .storage_transformers(vec![].into())
83 .build(store.clone(), array_path)?;
84
85 // Write array metadata to store
86 array.store_metadata()?;
87
88 // Write some chunks (in parallel)
89 (0..4).into_par_iter().try_for_each(|i| {
90 let chunk_grid = array.chunk_grid();
91 let chunk_indices = vec![i, 0];
92 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
93 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
94 chunk_shape
95 .iter()
96 .map(|u| u.get() as usize)
97 .collect::<Vec<_>>(),
98 i as f32,
99 );
100 array.store_chunk(&chunk_indices, chunk_array)
101 } else {
102 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
103 chunk_indices.to_vec(),
104 ))
105 }
106 })?;
107
108 println!(
109 "The array metadata is:\n{}\n",
110 array.metadata().to_string_pretty()
111 );
112
113 // Write a subset spanning multiple chunks, including updating chunks already written
114 array.store_array_subset(
115 &[3..6, 3..6], // start
116 ndarray::ArrayD::<f32>::from_shape_vec(
117 vec![3, 3],
118 vec![0.1f32, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
119 )?,
120 )?;
121
122 // Store elements directly, in this case set the 7th column to 123.0
123 array.store_array_subset(&[0..8, 6..7], &[123.0f32; 8])?;
124
125 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
126 array.store_chunk_subset(
127 // chunk indices
128 &[3, 1],
129 // subset within chunk
130 &[1..2, 0..4],
131 &[-4.0f32; 4],
132 )?;
133
134 // Read the whole array
135 let data_all: ArrayD<f32> = array.retrieve_array_subset(&array.subset_all())?;
136 println!("The whole array is:\n{data_all}\n");
137
138 // Read a chunk back from the store
139 let chunk_indices = vec![1, 0];
140 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
141 println!("Chunk [1,0] is:\n{data_chunk}\n");
142
143 // Read the central 4x2 subset of the array
144 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
145 let data_4x2: ArrayD<f32> = array.retrieve_array_subset(&subset_4x2)?;
146 println!("The middle 4x2 subset is:\n{data_4x2}\n");
147
148 // Show the hierarchy
149 let node = Node::open(&store, "/").unwrap();
150 let tree = node.hierarchy_tree();
151 println!("The Zarr hierarchy tree is:\n{tree}");
152
153 Ok(())
154}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}
10fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12
13 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
14 use zarrs::array::{ArraySubset, codec, data_type};
15 use zarrs::node::Node;
16 use zarrs::storage::store;
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
27 && arg1 == "--usage-log"
28 {
29 let log_writer = Arc::new(std::sync::Mutex::new(
30 // std::io::BufWriter::new(
31 std::io::stdout(),
32 // )
33 ));
34 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
35 chrono::Utc::now().format("[%T%.3f] ").to_string()
36 }));
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 // Create an array
53 let array_path = "/group/array";
54 let subchunk_shape = vec![4, 4];
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![8, 8], // array shape
57 vec![4, 8], // chunk (shard) shape
58 data_type::uint16(),
59 0u16,
60 )
61 .subchunk_shape(subchunk_shape.clone())
62 .bytes_to_bytes_codecs(vec![
63 #[cfg(feature = "gzip")]
64 Arc::new(codec::GzipCodec::new(5)?),
65 ])
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 // The array metadata is
74 println!(
75 "The array metadata is:\n{}\n",
76 array.metadata().to_string_pretty()
77 );
78
79 // Use default codec options (concurrency etc)
80 let options = CodecOptions::default();
81
82 // Write some shards (in parallel)
83 (0..2).into_par_iter().try_for_each(|s| {
84 let chunk_grid = array.chunk_grid();
85 let chunk_indices = vec![s, 0];
86 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
87 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
88 chunk_shape
89 .iter()
90 .map(|u| u.get() as usize)
91 .collect::<Vec<_>>(),
92 |ij| {
93 (s * chunk_shape[0].get() * chunk_shape[1].get()
94 + ij[0] as u64 * chunk_shape[1].get()
95 + ij[1] as u64) as u16
96 },
97 );
98 array.store_chunk(&chunk_indices, chunk_array)
99 } else {
100 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
101 chunk_indices.to_vec(),
102 ))
103 }
104 })?;
105
106 // Read the whole array
107 let data_all: ArrayD<u16> = array.retrieve_array_subset(&array.subset_all())?;
108 println!("The whole array is:\n{data_all}\n");
109
110 // Read a shard back from the store
111 let shard_indices = vec![1, 0];
112 let data_shard: ArrayD<u16> = array.retrieve_chunk(&shard_indices)?;
113 println!("Shard [1,0] is:\n{data_shard}\n");
114
115 // Read a subchunk from the store
116 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
117 let data_chunk: ArrayD<u16> = array.retrieve_array_subset(&subset_chunk_1_0)?;
118 println!("Chunk [1,0] is:\n{data_chunk}\n");
119
120 // Read the central 4x2 subset of the array
121 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
122 let data_4x2: ArrayD<u16> = array.retrieve_array_subset(&subset_4x2)?;
123 println!("The middle 4x2 subset is:\n{data_4x2}\n");
124
125 // Decode subchunks
126 // In some cases, it might be preferable to decode subchunks in a shard directly.
127 // If using the partial decoder, then the shard index will only be read once from the store.
128 let partial_decoder = array.partial_decoder(&[0, 0])?;
129 println!("Decoded subchunks:");
130 for subchunk_subset in [
131 ArraySubset::new_with_start_shape(vec![0, 0], subchunk_shape.clone())?,
132 ArraySubset::new_with_start_shape(vec![0, 4], subchunk_shape.clone())?,
133 ] {
134 println!("{subchunk_subset}");
135 let decoded_subchunk_bytes = partial_decoder.partial_decode(&subchunk_subset, &options)?;
136 let ndarray = bytes_to_ndarray::<u16>(
137 &subchunk_shape,
138 decoded_subchunk_bytes.into_fixed()?.into_owned(),
139 )?;
140 println!("{ndarray}\n");
141 }
142
143 // Show the hierarchy
144 let node = Node::open(&store, "/").unwrap();
145 let tree = node.hierarchy_tree();
146 println!("The Zarr hierarchy tree is:\n{}", tree);
147
148 println!(
149 "The keys in the store are:\n[{}]",
150 store.list().unwrap_or_default().iter().format(", ")
151 );
152
153 Ok(())
154}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 ArrayD::<f32>::from_shape_vec(
84 chunk_subset.shape_usize(),
85 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
86 )
87 .unwrap(),
88 )
89 })?;
90
91 let subset_all = array.subset_all();
92 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
93 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
94
95 // Store multiple chunks
96 let ndarray_chunks: Array2<f32> = array![
97 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
98 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
99 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
100 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
101 ];
102 array.store_chunks(&[1..2, 0..2], ndarray_chunks)?;
103 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
104 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
105
106 // Write a subset spanning multiple chunks, including updating chunks already written
107 let ndarray_subset: Array2<f32> =
108 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
109 array.store_array_subset(&[3..6, 3..6], ndarray_subset)?;
110 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
111 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
112
113 // Store array subset
114 let ndarray_subset: Array2<f32> = array![
115 [-0.6],
116 [-1.6],
117 [-2.6],
118 [-3.6],
119 [-4.6],
120 [-5.6],
121 [-6.6],
122 [-7.6],
123 ];
124 array.store_array_subset(&[0..8, 6..7], ndarray_subset)?;
125 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
126 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
127
128 // Store chunk subset
129 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
130 array.store_chunk_subset(
131 // chunk indices
132 &[1, 1],
133 // subset within chunk
134 &[3..4, 0..4],
135 ndarray_chunk_subset,
136 )?;
137 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
138 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
139
140 // Erase a chunk
141 array.erase_chunk(&[0, 0])?;
142 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
143 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
144
145 // Read a chunk
146 let chunk_indices = vec![0, 1];
147 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
148 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
149
150 // Read chunks
151 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
152 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
153 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
154
155 // Retrieve an array subset
156 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
157 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
158 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
159
160 // Show the hierarchy
161 let node = Node::open(&store, "/").unwrap();
162 let tree = node.hierarchy_tree();
163 println!("hierarchy_tree:\n{}", tree);
164
165 Ok(())
166}
Source
pub fn store_metadata_opt(
&self,
options: &ArrayMetadataOptions,
) -> Result<(), StorageError>
Store metadata with non-default ArrayMetadataOptions.
The metadata is created with Array::metadata_opt.
§Errors
Returns StorageError if there is an underlying store error.
Examples found in repository
11fn main() -> Result<(), Box<dyn std::error::Error>> {
12 // Create an in-memory store
13 // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
14 // "zarrs/tests/data/v3/array_optional_nested.zarr",
15 // )?);
16 let store = Arc::new(zarrs::storage::store::MemoryStore::new());
17
18 // Build the codec chains for the optional codec
19 let array = ArrayBuilder::new(
20 vec![4, 4], // 4x4 array
21 vec![2, 2], // 2x2 chunks
22 data_type::uint8().to_optional().to_optional(), // Optional optional uint8 => Option<Option<u8>>
23 FillValue::new_optional_null().into_optional(), // Fill value => Some(None)
24 )
25 .dimension_names(["y", "x"].into())
26 .attributes(
27 serde_json::json!({
28 "description": r#"A 4x4 array of optional optional uint8 values with some missing data.
29The fill value is null on the inner optional layer, i.e. Some(None).
30N marks missing (`None`=`null`) values. SN marks `Some(None)`=`[null]` values:
31 N SN 2 3
32 N 5 N 7
33 SN SN N N
34 SN SN N N"#,
35 })
36 .as_object()
37 .unwrap()
38 .clone(),
39 )
40 .build(store.clone(), "/array")?;
41 array.store_metadata_opt(
42 &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
43 )?;
44
45 println!("Array metadata:\n{}", array.metadata().to_string_pretty());
46
47 // Create some data with missing values
48 let data = ndarray::array![
49 [None, Some(None), Some(Some(2u8)), Some(Some(3u8))],
50 [None, Some(Some(5u8)), None, Some(Some(7u8))],
51 [Some(None), Some(None), None, None],
52 [Some(None), Some(None), None, None],
53 ]
54 .into_dyn();
55
56 // Write the data
57 array.store_array_subset(&array.subset_all(), data.clone())?;
58 println!("Data written to array.");
59
60 // Read back the data
61 let data_read: ArrayD<Option<Option<u8>>> = array.retrieve_array_subset(&array.subset_all())?;
62
63 // Verify data integrity
64 assert_eq!(data, data_read);
65
66 // Display the data in a grid format
67 println!(
68 "Data grid. N marks missing (`None`=`null`) values. SN marks `Some(None)`=`[null]` values"
69 );
70 println!(" 0 1 2 3");
71 for y in 0..4 {
72 print!("{} ", y);
73 for x in 0..4 {
74 match data_read[[y, x]] {
75 Some(Some(value)) => print!("{:3} ", value),
76 Some(None) => print!(" SN "),
77 None => print!(" N "),
78 }
79 }
80 println!();
81 }
82 Ok(())
83}
More examples
22fn main() -> Result<(), Box<dyn std::error::Error>> {
23 let store = Arc::new(zarrs_storage::store::MemoryStore::new());
24
25 let serde_json::Value::Object(attributes) = serde_json::json!({
26 "foo": "bar",
27 "baz": 42,
28 }) else {
29 unreachable!()
30 };
31
32 // Create a Zarr V2 group
33 let group_metadata: GroupMetadata = GroupMetadataV2::new()
34 .with_attributes(attributes.clone())
35 .into();
36 let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;
37
38 // Store the metadata as V2 and V3
39 let convert_group_metadata_to_v3 =
40 GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
41 group.store_metadata()?;
42 group.store_metadata_opt(&convert_group_metadata_to_v3)?;
43 println!(
44 "group/.zgroup (Zarr V2 group metadata):\n{}\n",
45 key_to_str(&store, "group/.zgroup")?
46 );
47 println!(
48 "group/.zattrs (Zarr V2 group attributes):\n{}\n",
49 key_to_str(&store, "group/.zattrs")?
50 );
51 println!(
52 "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
53 key_to_str(&store, "group/zarr.json")?
54 );
55 // println!(
56 // "The equivalent Zarr V3 group metadata is\n{}\n",
57 // group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
58 // );
59
60 // Create a Zarr V2 array
61 let array_metadata = ArrayMetadataV2::new(
62 vec![10, 10],
63 vec![NonZeroU64::new(5).unwrap(); 2],
64 ">f4".into(), // big endian float32
65 FillValueMetadata::from(f32::NAN),
66 None,
67 None,
68 )
69 .with_dimension_separator(ChunkKeySeparator::Slash)
70 .with_order(ArrayMetadataV2Order::F)
71 .with_attributes(attributes.clone());
72 let array = zarrs::array::Array::new_with_metadata(
73 store.clone(),
74 "/group/array",
75 array_metadata.into(),
76 )?;
77
78 // Store the metadata as V2 and V3
79 let convert_array_metadata_to_v3 =
80 ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
81 array.store_metadata()?;
82 array.store_metadata_opt(&convert_array_metadata_to_v3)?;
83 println!(
84 "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
85 key_to_str(&store, "group/array/.zarray")?
86 );
87 println!(
88 "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
89 key_to_str(&store, "group/array/.zattrs")?
90 );
91 println!(
92 "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
93 key_to_str(&store, "group/array/zarr.json")?
94 );
95 // println!(
96 // "The equivalent Zarr V3 array metadata is\n{}\n",
97 // array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
98 // );
99
100 array.store_chunk(&[0, 1], &[0.0f32; 5 * 5])?;
101
102 // Print the keys in the store
103 println!("The store contains keys:");
104 for key in store.list()? {
105 println!(" {}", key);
106 }
107
108 Ok(())
109}
18fn main() -> Result<(), Box<dyn std::error::Error>> {
19 // Create an in-memory store
20 // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
21 // "zarrs/tests/data/v3/array_optional.zarr",
22 // )?);
23 let store = Arc::new(zarrs::storage::store::MemoryStore::new());
24
25 // Build the codec chains for the optional codec
26 let array = ArrayBuilder::new(
27 vec![4, 4], // 4x4 array
28 vec![2, 2], // 2x2 chunks
29 data_type::uint8().to_optional(), // Optional uint8
30 FillValue::new_optional_null(), // Null fill value: [0]
31 )
32 .dimension_names(["y", "x"].into())
33 .attributes(
34 serde_json::json!({
35 "description": r#"A 4x4 array of optional uint8 values with some missing data.
36N marks missing (`None`=`null`) values:
37 0 N 2 3
38 N 5 N 7
39 8 9 N N
4012 N N N"#,
41 })
42 .as_object()
43 .unwrap()
44 .clone(),
45 )
46 .build(store.clone(), "/array")?;
47 array.store_metadata_opt(
48 &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
49 )?;
50
51 println!("Array metadata:\n{}", array.metadata().to_string_pretty());
52
53 // Create some data with missing values
54 let data = ndarray::array![
55 [Some(0u8), None, Some(2u8), Some(3u8)],
56 [None, Some(5u8), None, Some(7u8)],
57 [Some(8u8), Some(9u8), None, None],
58 [Some(12u8), None, None, None],
59 ]
60 .into_dyn();
61
62 // Write the data
63 array.store_array_subset(&array.subset_all(), data.clone())?;
64
65 // Read back the data
66 let data_read: ArrayD<Option<u8>> = array.retrieve_array_subset(&array.subset_all())?;
67
68 // Verify data integrity
69 assert_eq!(data, data_read);
70
71 // Display the data in a grid format
72 println!("Data grid, N marks missing (`None`=`null`) values");
73 println!(" 0 1 2 3");
74 for y in 0..4 {
75 print!("{} ", y);
76 for x in 0..4 {
77 match data_read[[y, x]] {
78 Some(value) => print!("{:2} ", value),
79 None => print!(" N "),
80 }
81 }
82 println!();
83 }
84
85 // Print the raw bytes in all chunks
86 println!("Raw bytes in all chunks:");
87 let chunk_grid_shape = array.chunk_grid_shape();
88 for chunk_y in 0..chunk_grid_shape[0] {
89 for chunk_x in 0..chunk_grid_shape[1] {
90 let chunk_indices = vec![chunk_y, chunk_x];
91 let chunk_key = array.chunk_key(&chunk_indices);
92 println!(" Chunk [{}, {}] (key: {}):", chunk_y, chunk_x, chunk_key);
93
94 if let Some(chunk_bytes) = store.get(&chunk_key)? {
95 println!(" Size: {} bytes", chunk_bytes.len());
96
97 if chunk_bytes.len() >= 16 {
98 // Parse first 8 bytes as mask size (little-endian u64)
99 let mask_size = u64::from_le_bytes([
100 chunk_bytes[0],
101 chunk_bytes[1],
102 chunk_bytes[2],
103 chunk_bytes[3],
104 chunk_bytes[4],
105 chunk_bytes[5],
106 chunk_bytes[6],
107 chunk_bytes[7],
108 ]) as usize;
109
110 // Parse second 8 bytes as data size (little-endian u64)
111 let data_size = u64::from_le_bytes([
112 chunk_bytes[8],
113 chunk_bytes[9],
114 chunk_bytes[10],
115 chunk_bytes[11],
116 chunk_bytes[12],
117 chunk_bytes[13],
118 chunk_bytes[14],
119 chunk_bytes[15],
120 ]) as usize;
121
122 // Display mask size header with raw bytes
123 print!(" Mask size: 0b");
124 for byte in &chunk_bytes[0..8] {
125 print!("{:08b}", byte);
126 }
127 println!(" -> {} bytes", mask_size);
128
129 // Display data size header with raw bytes
130 print!(" Data size: 0b");
131 for byte in &chunk_bytes[8..16] {
132 print!("{:08b}", byte);
133 }
134 println!(" -> {} bytes", data_size);
135
136 // Show mask and data sections separately
137 if chunk_bytes.len() >= 16 + mask_size + data_size {
138 let mask_start = 16;
139 let data_start = 16 + mask_size;
140
141 // Show mask as binary
142 if mask_size > 0 {
143 println!(" Mask (binary):");
144 print!(" ");
145 for byte in &chunk_bytes[mask_start..mask_start + mask_size] {
146 print!("0b{:08b} ", byte);
147 }
148 println!();
149 }
150
151 // Show data as binary
152 if data_size > 0 {
153 println!(" Data (binary):");
154 print!(" ");
155 for byte in &chunk_bytes[data_start..data_start + data_size] {
156 print!("0b{:08b} ", byte);
157 }
158 println!();
159 }
160 }
161 } else {
162 panic!(" Chunk too small to parse headers");
163 }
164 } else {
165 println!(" Chunk missing (fill value chunk)");
166 }
167 }
168 }
169 Ok(())
170}
Source
pub fn store_chunk<'a>(
&self,
chunk_indices: &[u64],
chunk_data: impl IntoArrayBytes<'a>,
) -> Result<(), ArrayError>
Encode chunk_data and store at chunk_indices.
Use store_chunk_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if:
- chunk_indices are invalid,
- the length of chunk_data is not equal to the expected length (the product of the number of elements in the chunk and the data type size in bytes),
- there is a codec encoding error, or
- there is an underlying store error.
Examples found in repository
157fn main() {
158 let store = std::sync::Arc::new(MemoryStore::default());
159 let array_path = "/array";
160 let array = ArrayBuilder::new(
161 vec![4, 1], // array shape
162 vec![3, 1], // regular chunk shape
163 Arc::new(CustomDataTypeVariableSize),
164 [],
165 )
166 .array_to_array_codecs(vec![
167 #[cfg(feature = "transpose")]
168 Arc::new(zarrs::array::codec::TransposeCodec::new(
169 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
170 )),
171 ])
172 .bytes_to_bytes_codecs(vec![
173 #[cfg(feature = "gzip")]
174 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
175 #[cfg(feature = "crc32c")]
176 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
177 ])
178 // .storage_transformers(vec![].into())
179 .build(store, array_path)
180 .unwrap();
181 println!("{}", array.metadata().to_string_pretty());
182
183 let data = [
184 CustomDataTypeVariableSizeElement::from(Some(1.0)),
185 CustomDataTypeVariableSizeElement::from(None),
186 CustomDataTypeVariableSizeElement::from(Some(3.0)),
187 ];
188 array.store_chunk(&[0, 0], &data).unwrap();
189
190 let data: Vec<CustomDataTypeVariableSizeElement> =
191 array.retrieve_array_subset(&array.subset_all()).unwrap();
192
193 assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
194 assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
195 assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
196 assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));
197
198 println!("{data:#?}");
199}
More examples
280fn main() {
281 let store = std::sync::Arc::new(MemoryStore::default());
282 let array_path = "/array";
283 let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
284 let array = ArrayBuilder::new(
285 vec![4, 1], // array shape
286 vec![2, 1], // regular chunk shape
287 Arc::new(CustomDataTypeFixedSize),
288 FillValue::new(fill_value.to_ne_bytes().to_vec()),
289 )
290 .array_to_array_codecs(vec![
291 #[cfg(feature = "transpose")]
292 Arc::new(zarrs::array::codec::TransposeCodec::new(
293 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
294 )),
295 ])
296 .bytes_to_bytes_codecs(vec![
297 #[cfg(feature = "gzip")]
298 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
299 #[cfg(feature = "crc32c")]
300 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
301 ])
302 // .storage_transformers(vec![].into())
303 .build(store, array_path)
304 .unwrap();
305 println!("{}", array.metadata().to_string_pretty());
306
307 let data = [
308 CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
309 CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
310 ];
311 array.store_chunk(&[0, 0], &data).unwrap();
312
313 let data: Vec<CustomDataTypeFixedSizeElement> =
314 array.retrieve_array_subset(&array.subset_all()).unwrap();
315
316 assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
317 assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
318 assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
319 assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
320
321 println!("{data:#?}");
322}
203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 vec![5, 1], // regular chunk shape
210 Arc::new(CustomDataTypeFloat8e3m4),
211 FillValue::new(fill_value.into_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .bytes_to_bytes_codecs(vec![
220 #[cfg(feature = "gzip")]
221 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
222 #[cfg(feature = "crc32c")]
223 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
224 ])
225 // .storage_transformers(vec![].into())
226 .build(store, array_path)
227 .unwrap();
228 println!("{}", array.metadata().to_string_pretty());
229
230 let data = [
231 CustomDataTypeFloat8e3m4Element::from(2.34),
232 CustomDataTypeFloat8e3m4Element::from(3.45),
233 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
234 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
235 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
236 ];
237 array.store_chunk(&[0, 0], &data).unwrap();
238
239 let data: Vec<CustomDataTypeFloat8e3m4Element> =
240 array.retrieve_array_subset(&array.subset_all()).unwrap();
241
242 for f in &data {
243 println!(
244 "float8_e3m4: {:08b} f32: {}",
245 f.into_ne_bytes()[0],
246 f.into_f32()
247 );
248 }
249
250 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
251 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
252 assert_eq!(
253 data[2],
254 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
255 );
256 assert_eq!(
257 data[3],
258 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
259 );
260 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
261 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
262}
194fn main() {
195 let store = std::sync::Arc::new(MemoryStore::default());
196 let array_path = "/array";
197 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
198 let array = ArrayBuilder::new(
199 vec![6, 1], // array shape
200 vec![5, 1], // regular chunk shape
201 Arc::new(CustomDataTypeUInt4),
202 FillValue::new(fill_value.into_ne_bytes().to_vec()),
203 )
204 .array_to_array_codecs(vec![
205 #[cfg(feature = "transpose")]
206 Arc::new(zarrs::array::codec::TransposeCodec::new(
207 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
208 )),
209 ])
210 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
211 .bytes_to_bytes_codecs(vec![
212 #[cfg(feature = "gzip")]
213 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
214 #[cfg(feature = "crc32c")]
215 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
216 ])
217 // .storage_transformers(vec![].into())
218 .build(store, array_path)
219 .unwrap();
220 println!("{}", array.metadata().to_string_pretty());
221
222 let data = [
223 CustomDataTypeUInt4Element::try_from(1).unwrap(),
224 CustomDataTypeUInt4Element::try_from(2).unwrap(),
225 CustomDataTypeUInt4Element::try_from(3).unwrap(),
226 CustomDataTypeUInt4Element::try_from(4).unwrap(),
227 CustomDataTypeUInt4Element::try_from(5).unwrap(),
228 ];
229 array.store_chunk(&[0, 0], &data).unwrap();
230
231 let data: Vec<CustomDataTypeUInt4Element> =
232 array.retrieve_array_subset(&array.subset_all()).unwrap();
233
234 for f in &data {
235 println!("uint4: {:08b} u8: {}", f.into_u8(), f.into_u8());
236 }
237
238 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
239 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
240 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
241 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
242 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
243 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
244
245 let data: Vec<CustomDataTypeUInt4Element> = array.retrieve_array_subset(&[1..3, 0..1]).unwrap();
246 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
247 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
248}
22fn main() -> Result<(), Box<dyn std::error::Error>> {
23 let store = Arc::new(zarrs_storage::store::MemoryStore::new());
24
25 let serde_json::Value::Object(attributes) = serde_json::json!({
26 "foo": "bar",
27 "baz": 42,
28 }) else {
29 unreachable!()
30 };
31
32 // Create a Zarr V2 group
33 let group_metadata: GroupMetadata = GroupMetadataV2::new()
34 .with_attributes(attributes.clone())
35 .into();
36 let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;
37
38 // Store the metadata as V2 and V3
39 let convert_group_metadata_to_v3 =
40 GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
41 group.store_metadata()?;
42 group.store_metadata_opt(&convert_group_metadata_to_v3)?;
43 println!(
44 "group/.zgroup (Zarr V2 group metadata):\n{}\n",
45 key_to_str(&store, "group/.zgroup")?
46 );
47 println!(
48 "group/.zattrs (Zarr V2 group attributes):\n{}\n",
49 key_to_str(&store, "group/.zattrs")?
50 );
51 println!(
52 "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
53 key_to_str(&store, "group/zarr.json")?
54 );
55 // println!(
56 // "The equivalent Zarr V3 group metadata is\n{}\n",
57 // group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
58 // );
59
60 // Create a Zarr V2 array
61 let array_metadata = ArrayMetadataV2::new(
62 vec![10, 10],
63 vec![NonZeroU64::new(5).unwrap(); 2],
64 ">f4".into(), // big endian float32
65 FillValueMetadata::from(f32::NAN),
66 None,
67 None,
68 )
69 .with_dimension_separator(ChunkKeySeparator::Slash)
70 .with_order(ArrayMetadataV2Order::F)
71 .with_attributes(attributes.clone());
72 let array = zarrs::array::Array::new_with_metadata(
73 store.clone(),
74 "/group/array",
75 array_metadata.into(),
76 )?;
77
78 // Store the metadata as V2 and V3
79 let convert_array_metadata_to_v3 =
80 ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
81 array.store_metadata()?;
82 array.store_metadata_opt(&convert_array_metadata_to_v3)?;
83 println!(
84 "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
85 key_to_str(&store, "group/array/.zarray")?
86 );
87 println!(
88 "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
89 key_to_str(&store, "group/array/.zattrs")?
90 );
91 println!(
92 "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
93 key_to_str(&store, "group/array/zarr.json")?
94 );
95 // println!(
96 // "The equivalent Zarr V3 array metadata is\n{}\n",
97 // array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
98 // );
99
100 array.store_chunk(&[0, 1], &[0.0f32; 5 * 5])?;
101
102 // Print the keys in the store
103 println!("The store contains keys:");
104 for key in store.list()? {
105 println!(" {}", key);
106 }
107
108 Ok(())
109}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArrayBytes, data_type};
12 use zarrs::storage::store;
13
14 // Create a store
15 // let path = tempfile::TempDir::new()?;
16 // let mut store: ReadableWritableListableStorage =
17 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
18 // let mut store: ReadableWritableListableStorage = Arc::new(
19 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
20 // );
21 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
22 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
23 && arg1 == "--usage-log"
24 {
25 let log_writer = Arc::new(std::sync::Mutex::new(
26 // std::io::BufWriter::new(
27 std::io::stdout(),
28 // )
29 ));
30 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
31 chrono::Utc::now().format("[%T%.3f] ").to_string()
32 }));
33 }
34
35 // Create the root group
36 zarrs::group::GroupBuilder::new()
37 .build(store.clone(), "/")?
38 .store_metadata()?;
39
40 // Create a group with attributes
41 let group_path = "/group";
42 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
43 group
44 .attributes_mut()
45 .insert("foo".into(), serde_json::Value::String("bar".into()));
46 group.store_metadata()?;
47
48 println!(
49 "The group metadata is:\n{}\n",
50 group.metadata().to_string_pretty()
51 );
52
53 // Create an array
54 let array_path = "/group/array";
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![4, 4], // array shape
57 vec![2, 2], // regular chunk shape
58 data_type::string(),
59 "_",
60 )
61 // .bytes_to_bytes_codecs(vec![]) // uncompressed
62 .dimension_names(["y", "x"].into())
63 // .storage_transformers(vec![].into())
64 .build(store.clone(), array_path)?;
65
66 // Write array metadata to store
67 array.store_metadata()?;
68
69 println!(
70 "The array metadata is:\n{}\n",
71 array.metadata().to_string_pretty()
72 );
73
74 // Write some chunks
75 array.store_chunk(
76 &[0, 0],
77 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
78 )?;
79 array.store_chunk(
80 &[0, 1],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
82 )?;
83 let subset_all = array.subset_all();
84 let data_all: ArrayD<String> = array.retrieve_array_subset(&subset_all)?;
85 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
86
87 // Write a subset spanning multiple chunks, including updating chunks already written
88 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
89 array.store_array_subset(&[1..3, 1..3], ndarray_subset)?;
90 let data_all: ArrayD<String> = array.retrieve_array_subset(&subset_all)?;
91 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
92
93 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
94 // TODO: Add a convenience function for this?
95 let data_all: ArrayBytes = array.retrieve_array_subset(&subset_all)?;
96 let (bytes, offsets) = data_all.into_variable()?.into_parts();
97 let string = String::from_utf8(bytes.into_owned())?;
98 let elements = offsets
99 .iter()
100 .tuple_windows()
101 .map(|(&curr, &next)| &string[curr..next])
102 .collect::<Vec<&str>>();
103 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
104 println!("ndarray::ArrayD<&str>:\n{ndarray}");
105
106 Ok(())
107}
Sourcepub fn store_chunk_elements<T: Element>(
&self,
chunk_indices: &[u64],
chunk_elements: &[T],
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk() instead
pub fn store_chunk_elements<T: Element>( &self, chunk_indices: &[u64], chunk_elements: &[T], ) -> Result<(), ArrayError>
Encode chunk_elements and store at chunk_indices.
Use store_chunk_elements_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunk error condition is met.
Sourcepub fn store_chunk_ndarray<T: Element, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk() instead
Available on crate feature ndarray only.
pub fn store_chunk_ndarray<T: Element, D: Dimension>( &self, chunk_indices: &[u64], chunk_array: &ArrayRef<T, D>, ) -> Result<(), ArrayError>
Encode chunk_array and store at chunk_indices.
Use store_chunk_ndarray_opt to control codec options.
§Errors
Returns an ArrayError if
- the shape of the array does not match the shape of the chunk, or
- a store_chunk_elements error condition is met.
Sourcepub fn store_chunks<'a>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_data: impl IntoArrayBytes<'a>,
) -> Result<(), ArrayError>
pub fn store_chunks<'a>( &self, chunks: &dyn ArraySubsetTraits, chunks_data: impl IntoArrayBytes<'a>, ) -> Result<(), ArrayError>
Encode chunks_data and store at the chunks with indices represented by the chunks array subset.
Use store_chunks_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- chunks are invalid,
- the length of chunks_data is not equal to the expected length (the product of the number of elements in the chunks and the data type size in bytes),
- there is a codec encoding error, or
- an underlying store error.
Examples found in repository?
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}
More examples
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 ArrayD::<f32>::from_shape_vec(
84 chunk_subset.shape_usize(),
85 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
86 )
87 .unwrap(),
88 )
89 })?;
90
91 let subset_all = array.subset_all();
92 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
93 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
94
95 // Store multiple chunks
96 let ndarray_chunks: Array2<f32> = array![
97 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
98 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
99 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
100 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
101 ];
102 array.store_chunks(&[1..2, 0..2], ndarray_chunks)?;
103 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
104 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
105
106 // Write a subset spanning multiple chunks, including updating chunks already written
107 let ndarray_subset: Array2<f32> =
108 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
109 array.store_array_subset(&[3..6, 3..6], ndarray_subset)?;
110 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
111 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
112
113 // Store array subset
114 let ndarray_subset: Array2<f32> = array![
115 [-0.6],
116 [-1.6],
117 [-2.6],
118 [-3.6],
119 [-4.6],
120 [-5.6],
121 [-6.6],
122 [-7.6],
123 ];
124 array.store_array_subset(&[0..8, 6..7], ndarray_subset)?;
125 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
126 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
127
128 // Store chunk subset
129 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
130 array.store_chunk_subset(
131 // chunk indices
132 &[1, 1],
133 // subset within chunk
134 &[3..4, 0..4],
135 ndarray_chunk_subset,
136 )?;
137 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
138 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
139
140 // Erase a chunk
141 array.erase_chunk(&[0, 0])?;
142 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
143 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
144
145 // Read a chunk
146 let chunk_indices = vec![0, 1];
147 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
148 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
149
150 // Read chunks
151 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
152 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
153 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
154
155 // Retrieve an array subset
156 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
157 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
158 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
159
160 // Show the hierarchy
161 let node = Node::open(&store, "/").unwrap();
162 let tree = node.hierarchy_tree();
163 println!("hierarchy_tree:\n{}", tree);
164
165 Ok(())
166}
Sourcepub fn store_chunks_elements<T: Element>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_elements: &[T],
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunks() instead
pub fn store_chunks_elements<T: Element>( &self, chunks: &dyn ArraySubsetTraits, chunks_elements: &[T], ) -> Result<(), ArrayError>
Encode chunks_elements and store at the chunks with indices represented by the chunks array subset.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunks error condition is met.
Sourcepub fn store_chunks_ndarray<T: Element, D: Dimension>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunks() instead
Available on crate feature ndarray only.
pub fn store_chunks_ndarray<T: Element, D: Dimension>( &self, chunks: &dyn ArraySubsetTraits, chunks_array: &ArrayRef<T, D>, ) -> Result<(), ArrayError>
Encode chunks_array and store at the chunks with indices represented by the chunks array subset.
§Errors
Returns an ArrayError if
- the shape of the array does not match the shape of the chunks, or
- a store_chunks_elements error condition is met.
Sourcepub fn erase_metadata(&self) -> Result<(), StorageError>
pub fn erase_metadata(&self) -> Result<(), StorageError>
Erase the metadata with default MetadataEraseVersion options.
Succeeds if the metadata does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
Sourcepub fn erase_metadata_opt(
&self,
options: MetadataEraseVersion,
) -> Result<(), StorageError>
pub fn erase_metadata_opt( &self, options: MetadataEraseVersion, ) -> Result<(), StorageError>
Erase the metadata with non-default MetadataEraseVersion options.
Succeeds if the metadata does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
Sourcepub fn erase_chunk(&self, chunk_indices: &[u64]) -> Result<(), StorageError>
pub fn erase_chunk(&self, chunk_indices: &[u64]) -> Result<(), StorageError>
Erase the chunk at chunk_indices.
Succeeds if the chunk does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
Examples found in repository?
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}More examples
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 ArrayD::<f32>::from_shape_vec(
84 chunk_subset.shape_usize(),
85 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
86 )
87 .unwrap(),
88 )
89 })?;
90
91 let subset_all = array.subset_all();
92 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
93 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
94
95 // Store multiple chunks
96 let ndarray_chunks: Array2<f32> = array![
97 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
98 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
99 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
100 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
101 ];
102 array.store_chunks(&[1..2, 0..2], ndarray_chunks)?;
103 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
104 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
105
106 // Write a subset spanning multiple chunks, including updating chunks already written
107 let ndarray_subset: Array2<f32> =
108 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
109 array.store_array_subset(&[3..6, 3..6], ndarray_subset)?;
110 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
111 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
112
113 // Store array subset
114 let ndarray_subset: Array2<f32> = array![
115 [-0.6],
116 [-1.6],
117 [-2.6],
118 [-3.6],
119 [-4.6],
120 [-5.6],
121 [-6.6],
122 [-7.6],
123 ];
124 array.store_array_subset(&[0..8, 6..7], ndarray_subset)?;
125 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
126 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
127
128 // Store chunk subset
129 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
130 array.store_chunk_subset(
131 // chunk indices
132 &[1, 1],
133 // subset within chunk
134 &[3..4, 0..4],
135 ndarray_chunk_subset,
136 )?;
137 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
138 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
139
140 // Erase a chunk
141 array.erase_chunk(&[0, 0])?;
142 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
143 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
144
145 // Read a chunk
146 let chunk_indices = vec![0, 1];
147 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
148 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
149
150 // Read chunks
151 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
152 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
153 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
154
155 // Retrieve an array subset
156 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
157 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
158 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
159
160 // Show the hierarchy
161 let node = Node::open(&store, "/").unwrap();
162 let tree = node.hierarchy_tree();
163 println!("hierarchy_tree:\n{}", tree);
164
165 Ok(())
166}
pub fn erase_chunks(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<(), StorageError>
pub fn store_chunk_opt<'a>(
&self,
chunk_indices: &[u64],
chunk_data: impl IntoArrayBytes<'a>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunk.
pub unsafe fn store_encoded_chunk(
&self,
chunk_indices: &[u64],
encoded_chunk_bytes: Bytes,
) -> Result<(), ArrayError>
Store encoded_chunk_bytes at chunk_indices.
§Safety
The caller must ensure that the chunk bytes are correctly encoded for this array.
§Errors
Returns StorageError if there is an underlying store error.
pub fn store_chunk_elements_opt<T: Element>(
&self,
chunk_indices: &[u64],
chunk_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk_opt() instead
Explicit options version of store_chunk_elements.
pub fn store_chunk_ndarray_opt<T: Element, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk_opt() instead
Available on crate feature ndarray only.
Explicit options version of store_chunk_ndarray.
pub fn store_chunks_opt<'a>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_data: impl IntoArrayBytes<'a>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunks.
pub fn store_chunks_elements_opt<T: Element>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunks_opt() instead
Explicit options version of store_chunks_elements.
pub fn store_chunks_ndarray_opt<T: Element, D: Dimension>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunks_opt() instead
Available on crate feature ndarray only.
Explicit options version of store_chunks_ndarray.
§impl<TStorage: ?Sized + ReadableWritableStorageTraits + 'static> Array<TStorage>
pub fn readable(&self) -> Array<dyn ReadableStorageTraits>
Return a read-only instantiation of the array.
pub fn store_chunk_subset<'a>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_data: impl IntoArrayBytes<'a>,
) -> Result<(), ArrayError>
Encode chunk_subset_data and store in chunk_subset of the chunk at chunk_indices with default codec options.
Use store_chunk_subset_opt to control codec options.
Prefer to use store_chunk where possible, since this function may decode the chunk before updating it and reencoding it.
§Errors
Returns an ArrayError if
- chunk_subset is invalid or out of bounds of the chunk,
- there is a codec encoding error, or
- an underlying store error.
§Panics
Panics if attempting to reference a byte beyond usize::MAX.
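The read-modify-write behaviour described above can be illustrated with a minimal, self-contained sketch. This is a hypothetical illustration, not the zarrs implementation: writing a subset of a row-major chunk requires the full decoded chunk in memory so that untouched elements are preserved before the chunk is re-encoded, which is why store_chunk is preferable when the whole chunk is being replaced.

```rust
// Hypothetical sketch: overwrite a rectangular subset of a decoded,
// row-major 2D chunk in place. All names here are illustrative.
fn write_subset_into_chunk(
    chunk: &mut [f32],       // decoded chunk, row-major
    chunk_shape: [usize; 2], // e.g. [4, 4]
    start: [usize; 2],       // subset offset within the chunk
    shape: [usize; 2],       // subset shape
    data: &[f32],            // row-major subset data
) {
    assert_eq!(data.len(), shape[0] * shape[1]);
    for r in 0..shape[0] {
        // Destination offset of row `r` of the subset within the flat chunk.
        let dst = (start[0] + r) * chunk_shape[1] + start[1];
        chunk[dst..dst + shape[1]].copy_from_slice(&data[r * shape[1]..(r + 1) * shape[1]]);
    }
}

fn main() {
    // A 4x4 chunk initialised to the fill value; overwrite its last row,
    // mirroring store_chunk_subset(&[1, 1], &[3..4, 0..4], ...) above.
    let mut chunk = vec![0.0f32; 16];
    write_subset_into_chunk(&mut chunk, [4, 4], [3, 0], [1, 4], &[-7.4, -7.5, -7.6, -7.7]);
    assert_eq!(chunk[12..16], [-7.4, -7.5, -7.6, -7.7]);
}
```

Only the final copy into storage differs in the real crate: the full chunk buffer is re-encoded through the codec chain after the in-memory update.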
Examples found in repository?
13fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
14 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
15 use zarrs::array::{ArraySubset, ZARR_NAN_F32, codec, data_type};
16 use zarrs::node::Node;
17 use zarrs::storage::store;
18
19 // Create a store
20 // let path = tempfile::TempDir::new()?;
21 // let mut store: ReadableWritableListableStorage =
22 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
23 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
24 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
25 && arg1 == "--usage-log"
26 {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36
37 // Create the root group
38 zarrs::group::GroupBuilder::new()
39 .build(store.clone(), "/")?
40 .store_metadata()?;
41
42 // Create a group with attributes
43 let group_path = "/group";
44 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
45 group
46 .attributes_mut()
47 .insert("foo".into(), serde_json::Value::String("bar".into()));
48 group.store_metadata()?;
49
50 println!(
51 "The group metadata is:\n{}\n",
52 group.metadata().to_string_pretty()
53 );
54
55 // Create an array
56 let array_path = "/group/array";
57 let array = zarrs::array::ArrayBuilder::new(
58 vec![8, 8], // array shape
59 MetadataV3::new_with_configuration(
60 "rectangular",
61 RectangularChunkGridConfiguration {
62 chunk_shape: vec![
63 vec![
64 NonZeroU64::new(1).unwrap(),
65 NonZeroU64::new(2).unwrap(),
66 NonZeroU64::new(3).unwrap(),
67 NonZeroU64::new(2).unwrap(),
68 ]
69 .into(),
70 NonZeroU64::new(4).unwrap().into(),
71 ], // chunk sizes
72 },
73 ),
74 data_type::float32(),
75 ZARR_NAN_F32,
76 )
77 .bytes_to_bytes_codecs(vec![
78 #[cfg(feature = "gzip")]
79 Arc::new(codec::GzipCodec::new(5)?),
80 ])
81 .dimension_names(["y", "x"].into())
82 // .storage_transformers(vec![].into())
83 .build(store.clone(), array_path)?;
84
85 // Write array metadata to store
86 array.store_metadata()?;
87
88 // Write some chunks (in parallel)
89 (0..4).into_par_iter().try_for_each(|i| {
90 let chunk_grid = array.chunk_grid();
91 let chunk_indices = vec![i, 0];
92 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
93 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
94 chunk_shape
95 .iter()
96 .map(|u| u.get() as usize)
97 .collect::<Vec<_>>(),
98 i as f32,
99 );
100 array.store_chunk(&chunk_indices, chunk_array)
101 } else {
102 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
103 chunk_indices.to_vec(),
104 ))
105 }
106 })?;
107
108 println!(
109 "The array metadata is:\n{}\n",
110 array.metadata().to_string_pretty()
111 );
112
113 // Write a subset spanning multiple chunks, including updating chunks already written
114 array.store_array_subset(
115 &[3..6, 3..6], // start
116 ndarray::ArrayD::<f32>::from_shape_vec(
117 vec![3, 3],
118 vec![0.1f32, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
119 )?,
120 )?;
121
122 // Store elements directly, in this case set the 7th column to 123.0
123 array.store_array_subset(&[0..8, 6..7], &[123.0f32; 8])?;
124
125 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
126 array.store_chunk_subset(
127 // chunk indices
128 &[3, 1],
129 // subset within chunk
130 &[1..2, 0..4],
131 &[-4.0f32; 4],
132 )?;
133
134 // Read the whole array
135 let data_all: ArrayD<f32> = array.retrieve_array_subset(&array.subset_all())?;
136 println!("The whole array is:\n{data_all}\n");
137
138 // Read a chunk back from the store
139 let chunk_indices = vec![1, 0];
140 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
141 println!("Chunk [1,0] is:\n{data_chunk}\n");
142
143 // Read the central 4x2 subset of the array
144 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
145 let data_4x2: ArrayD<f32> = array.retrieve_array_subset(&subset_4x2)?;
146 println!("The middle 4x2 subset is:\n{data_4x2}\n");
147
148 // Show the hierarchy
149 let node = Node::open(&store, "/").unwrap();
150 let tree = node.hierarchy_tree();
151 println!("The Zarr hierarchy tree is:\n{tree}");
152
153 Ok(())
154}
More examples
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 ArrayD::<f32>::from_shape_vec(
84 chunk_subset.shape_usize(),
85 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
86 )
87 .unwrap(),
88 )
89 })?;
90
91 let subset_all = array.subset_all();
92 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
93 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
94
95 // Store multiple chunks
96 let ndarray_chunks: Array2<f32> = array![
97 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
98 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
99 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
100 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
101 ];
102 array.store_chunks(&[1..2, 0..2], ndarray_chunks)?;
103 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
104 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
105
106 // Write a subset spanning multiple chunks, including updating chunks already written
107 let ndarray_subset: Array2<f32> =
108 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
109 array.store_array_subset(&[3..6, 3..6], ndarray_subset)?;
110 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
111 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
112
113 // Store array subset
114 let ndarray_subset: Array2<f32> = array![
115 [-0.6],
116 [-1.6],
117 [-2.6],
118 [-3.6],
119 [-4.6],
120 [-5.6],
121 [-6.6],
122 [-7.6],
123 ];
124 array.store_array_subset(&[0..8, 6..7], ndarray_subset)?;
125 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
126 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
127
128 // Store chunk subset
129 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
130 array.store_chunk_subset(
131 // chunk indices
132 &[1, 1],
133 // subset within chunk
134 &[3..4, 0..4],
135 ndarray_chunk_subset,
136 )?;
137 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
138 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
139
140 // Erase a chunk
141 array.erase_chunk(&[0, 0])?;
142 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
143 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
144
145 // Read a chunk
146 let chunk_indices = vec![0, 1];
147 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
148 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
149
150 // Read chunks
151 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
152 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
153 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
154
155 // Retrieve an array subset
156 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
157 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
158 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
159
160 // Show the hierarchy
161 let node = Node::open(&store, "/").unwrap();
162 let tree = node.hierarchy_tree();
163 println!("hierarchy_tree:\n{}", tree);
164
165 Ok(())
166}
pub fn store_chunk_subset_elements<T: Element>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_elements: &[T],
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk_subset() instead
Encode chunk_subset_elements and store in chunk_subset of the chunk at chunk_indices with default codec options.
Use store_chunk_subset_elements_opt to control codec options.
Prefer to use store_chunk_elements where possible, since this function may decode the chunk before updating it and re-encoding it.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunk_subset error condition is met.
pub fn store_chunk_subset_ndarray<T: Element, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_subset_start: &[u64],
chunk_subset_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk_subset() instead
Available on crate feature ndarray only.
Encode chunk_subset_array and store it in the chunk at chunk_indices, in the subset starting at chunk_subset_start.
Use store_chunk_subset_ndarray_opt to control codec options.
Prefer to use store_chunk_ndarray where possible, since this function may decode the chunk before updating it and re-encoding it.
§Errors
Returns an ArrayError if a store_chunk_subset_elements error condition is met.
pub fn store_array_subset<'a>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_data: impl IntoArrayBytes<'a>,
) -> Result<(), ArrayError>
Encode subset_data and store in array_subset.
Use store_array_subset_opt to control codec options.
Prefer to use store_chunk or store_chunks where possible, since this will decode and encode each chunk intersecting array_subset.
§Errors
Returns an ArrayError if
- the dimensionality of array_subset does not match the chunk grid dimensionality,
- the length of subset_data does not match the expected length governed by the shape of the array subset and the data type size,
- there is a codec encoding error, or
- an underlying store error.
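Why store_array_subset decodes and re-encodes each intersecting chunk can be seen with a small, self-contained sketch. This is a hypothetical illustration (not zarrs internals, which support non-regular chunk grids): for a regular chunk grid, the chunks touched by an array subset follow from dividing the subset bounds by the chunk shape along each dimension.

```rust
use std::ops::Range;

// Hypothetical sketch: map an array subset to the range of chunk indices
// it intersects, per dimension, for a regular chunk grid.
fn intersecting_chunks(subset: &[Range<u64>], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    subset
        .iter()
        .zip(chunk_shape)
        // First chunk: floor(start / chunk); one-past-last chunk: ceil(end / chunk).
        .map(|(r, &c)| (r.start / c)..((r.end + c - 1) / c))
        .collect()
}

fn main() {
    // The [3..6, 3..6] subset of an 8x8 array with 4x4 chunks straddles
    // chunk ranges [0..2, 0..2], i.e. all four chunks must be rewritten.
    let chunks = intersecting_chunks(&[3..6, 3..6], &[4, 4]);
    assert_eq!(chunks, vec![0..2, 0..2]);
}
```

This is why the documentation above recommends store_chunk or store_chunks when writes can be aligned to chunk boundaries: a subset touching N chunks triggers up to N decode/encode round trips.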
Examples found in repository?
192fn main() {
193 let store = std::sync::Arc::new(MemoryStore::default());
194 let array_path = "/array";
195 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
196 let array = ArrayBuilder::new(
197 vec![4096, 1], // array shape
198 vec![5, 1], // regular chunk shape
199 Arc::new(CustomDataTypeUInt12),
200 FillValue::new(fill_value.into_le_bytes().to_vec()),
201 )
202 .array_to_array_codecs(vec![
203 #[cfg(feature = "transpose")]
204 Arc::new(zarrs::array::codec::TransposeCodec::new(
205 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
206 )),
207 ])
208 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
209 .bytes_to_bytes_codecs(vec![
210 #[cfg(feature = "gzip")]
211 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
212 #[cfg(feature = "crc32c")]
213 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
214 ])
215 // .storage_transformers(vec![].into())
216 .build(store, array_path)
217 .unwrap();
218 println!("{}", array.metadata().to_string_pretty());
219
220 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
221 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
222 .collect();
223
224 array
225 .store_array_subset(&array.subset_all(), &data)
226 .unwrap();
227
228 let mut data: Vec<CustomDataTypeUInt12Element> =
229 array.retrieve_array_subset(&array.subset_all()).unwrap();
230
231 for (i, d) in data.drain(0..4096).enumerate() {
232 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
233 assert_eq!(d, element);
234 let element_pd: Vec<CustomDataTypeUInt12Element> = array
235 .retrieve_array_subset(&[(i as u64)..i as u64 + 1, 0..1])
236 .unwrap();
237 assert_eq!(element_pd[0], element);
238 }
239}
More examples
11fn main() -> Result<(), Box<dyn std::error::Error>> {
12 // Create an in-memory store
13 // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
14 // "zarrs/tests/data/v3/array_optional_nested.zarr",
15 // )?);
16 let store = Arc::new(zarrs::storage::store::MemoryStore::new());
17
18 // Build the codec chains for the optional codec
19 let array = ArrayBuilder::new(
20 vec![4, 4], // 4x4 array
21 vec![2, 2], // 2x2 chunks
22 data_type::uint8().to_optional().to_optional(), // Optional optional uint8 => Option<Option<u8>>
23 FillValue::new_optional_null().into_optional(), // Fill value => Some(None)
24 )
25 .dimension_names(["y", "x"].into())
26 .attributes(
27 serde_json::json!({
28 "description": r#"A 4x4 array of optional optional uint8 values with some missing data.
29The fill value is null on the inner optional layer, i.e. Some(None).
30N marks missing (`None`=`null`) values. SN marks `Some(None)`=`[null]` values:
31 N SN 2 3
32 N 5 N 7
33 SN SN N N
34 SN SN N N"#,
35 })
36 .as_object()
37 .unwrap()
38 .clone(),
39 )
40 .build(store.clone(), "/array")?;
41 array.store_metadata_opt(
42 &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
43 )?;
44
45 println!("Array metadata:\n{}", array.metadata().to_string_pretty());
46
47 // Create some data with missing values
48 let data = ndarray::array![
49 [None, Some(None), Some(Some(2u8)), Some(Some(3u8))],
50 [None, Some(Some(5u8)), None, Some(Some(7u8))],
51 [Some(None), Some(None), None, None],
52 [Some(None), Some(None), None, None],
53 ]
54 .into_dyn();
55
56 // Write the data
57 array.store_array_subset(&array.subset_all(), data.clone())?;
58 println!("Data written to array.");
59
60 // Read back the data
61 let data_read: ArrayD<Option<Option<u8>>> = array.retrieve_array_subset(&array.subset_all())?;
62
63 // Verify data integrity
64 assert_eq!(data, data_read);
65
66 // Display the data in a grid format
67 println!(
68 "Data grid. N marks missing (`None`=`null`) values. SN marks `Some(None)`=`[null]` values"
69 );
70 println!(" 0 1 2 3");
71 for y in 0..4 {
72 print!("{} ", y);
73 for x in 0..4 {
74 match data_read[[y, x]] {
75 Some(Some(value)) => print!("{:3} ", value),
76 Some(None) => print!(" SN "),
77 None => print!(" N "),
78 }
79 }
80 println!();
81 }
82 Ok(())
83}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArrayBytes, data_type};
12 use zarrs::storage::store;
13
14 // Create a store
15 // let path = tempfile::TempDir::new()?;
16 // let mut store: ReadableWritableListableStorage =
17 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
18 // let mut store: ReadableWritableListableStorage = Arc::new(
19 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
20 // );
21 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
22 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
23 && arg1 == "--usage-log"
24 {
25 let log_writer = Arc::new(std::sync::Mutex::new(
26 // std::io::BufWriter::new(
27 std::io::stdout(),
28 // )
29 ));
30 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
31 chrono::Utc::now().format("[%T%.3f] ").to_string()
32 }));
33 }
34
35 // Create the root group
36 zarrs::group::GroupBuilder::new()
37 .build(store.clone(), "/")?
38 .store_metadata()?;
39
40 // Create a group with attributes
41 let group_path = "/group";
42 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
43 group
44 .attributes_mut()
45 .insert("foo".into(), serde_json::Value::String("bar".into()));
46 group.store_metadata()?;
47
48 println!(
49 "The group metadata is:\n{}\n",
50 group.metadata().to_string_pretty()
51 );
52
53 // Create an array
54 let array_path = "/group/array";
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![4, 4], // array shape
57 vec![2, 2], // regular chunk shape
58 data_type::string(),
59 "_",
60 )
61 // .bytes_to_bytes_codecs(vec![]) // uncompressed
62 .dimension_names(["y", "x"].into())
63 // .storage_transformers(vec![].into())
64 .build(store.clone(), array_path)?;
65
66 // Write array metadata to store
67 array.store_metadata()?;
68
69 println!(
70 "The array metadata is:\n{}\n",
71 array.metadata().to_string_pretty()
72 );
73
74 // Write some chunks
75 array.store_chunk(
76 &[0, 0],
77 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["a", "bb", "ccc", "dddd"]).unwrap(),
78 )?;
79 array.store_chunk(
80 &[0, 1],
81 ArrayD::<&str>::from_shape_vec(vec![2, 2], vec!["4444", "333", "22", "1"]).unwrap(),
82 )?;
83 let subset_all = array.subset_all();
84 let data_all: ArrayD<String> = array.retrieve_array_subset(&subset_all)?;
85 println!("store_chunk [0, 0] and [0, 1]:\n{data_all}\n");
86
87 // Write a subset spanning multiple chunks, including updating chunks already written
88 let ndarray_subset: Array2<&str> = array![["!", "@@"], ["###", "$$$$"]];
89 array.store_array_subset(&[1..3, 1..3], ndarray_subset)?;
90 let data_all: ArrayD<String> = array.retrieve_array_subset(&subset_all)?;
91 println!("store_array_subset [1..3, 1..3]:\nndarray::ArrayD<String>\n{data_all}");
92
93 // Retrieve bytes directly, convert into a single string allocation, create a &str ndarray
94 // TODO: Add a convenience function for this?
95 let data_all: ArrayBytes = array.retrieve_array_subset(&subset_all)?;
96 let (bytes, offsets) = data_all.into_variable()?.into_parts();
97 let string = String::from_utf8(bytes.into_owned())?;
98 let elements = offsets
99 .iter()
100 .tuple_windows()
101 .map(|(&curr, &next)| &string[curr..next])
102 .collect::<Vec<&str>>();
103 let ndarray = ArrayD::<&str>::from_shape_vec(subset_all.shape_usize(), elements)?;
104 println!("ndarray::ArrayD<&str>:\n{ndarray}");
105
106 Ok(())
107}
13fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::{ArraySubset, ZARR_NAN_F32, codec, data_type};
    use zarrs::node::Node;
    use zarrs::storage::store;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
        && arg1 == "--usage-log"
    {
        let log_writer = Arc::new(std::sync::Mutex::new(
            // std::io::BufWriter::new(
            std::io::stdout(),
            // )
        ));
        store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
            chrono::Utc::now().format("[%T%.3f] ").to_string()
        }));
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        MetadataV3::new_with_configuration(
            "rectangular",
            RectangularChunkGridConfiguration {
                chunk_shape: vec![
                    vec![
                        NonZeroU64::new(1).unwrap(),
                        NonZeroU64::new(2).unwrap(),
                        NonZeroU64::new(3).unwrap(),
                        NonZeroU64::new(2).unwrap(),
                    ]
                    .into(),
                    NonZeroU64::new(4).unwrap().into(),
                ], // chunk sizes
            },
        ),
        data_type::float32(),
        ZARR_NAN_F32,
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset(
        &[3..6, 3..6], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1f32, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset(&[0..8, 6..7], &[123.0f32; 8])?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &[1..2, 0..4],
        &[-4.0f32; 4],
    )?;

    // Read the whole array
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&array.subset_all())?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2: ArrayD<f32> = array.retrieve_array_subset(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The Zarr hierarchy tree is:\n{tree}");

    Ok(())
}

fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;

    use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
    use zarrs::node::Node;
    use zarrs::storage::store;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage =
    //     Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(
    //     zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
    // );
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
        && arg1 == "--usage-log"
    {
        let log_writer = Arc::new(std::sync::Mutex::new(
            // std::io::BufWriter::new(
            std::io::stdout(),
            // )
        ));
        store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
            chrono::Utc::now().format("[%T%.3f] ").to_string()
        }));
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .store_metadata()?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        vec![4, 4], // regular chunk shape
        data_type::float32(),
        ZARR_NAN_F32,
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
            zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
        })?;
        array.store_chunk(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = array.subset_all();
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks(
        &[1..2, 0..2],
        &[
            //
            1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset(
        &[3..6, 3..6],
        &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset(
        &[0..8, 6..7],
        &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &[3..4, 0..4],
        &[-7.4f32, -7.5, -7.6, -7.7],
    )?;
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::open(&store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create an in-memory store
    // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
    //     "zarrs/tests/data/v3/array_optional.zarr",
    // )?);
    let store = Arc::new(zarrs::storage::store::MemoryStore::new());

    // Build the codec chains for the optional codec
    let array = ArrayBuilder::new(
        vec![4, 4],                       // 4x4 array
        vec![2, 2],                       // 2x2 chunks
        data_type::uint8().to_optional(), // Optional uint8
        FillValue::new_optional_null(),   // Null fill value: [0]
    )
    .dimension_names(["y", "x"].into())
    .attributes(
        serde_json::json!({
            "description": r#"A 4x4 array of optional uint8 values with some missing data.
N marks missing (`None`=`null`) values:
 0  N  2  3
 N  5  N  7
 8  9  N  N
12  N  N  N"#,
        })
        .as_object()
        .unwrap()
        .clone(),
    )
    .build(store.clone(), "/array")?;
    array.store_metadata_opt(
        &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
    )?;

    println!("Array metadata:\n{}", array.metadata().to_string_pretty());

    // Create some data with missing values
    let data = ndarray::array![
        [Some(0u8), None, Some(2u8), Some(3u8)],
        [None, Some(5u8), None, Some(7u8)],
        [Some(8u8), Some(9u8), None, None],
        [Some(12u8), None, None, None],
    ]
    .into_dyn();

    // Write the data
    array.store_array_subset(&array.subset_all(), data.clone())?;

    // Read back the data
    let data_read: ArrayD<Option<u8>> = array.retrieve_array_subset(&array.subset_all())?;

    // Verify data integrity
    assert_eq!(data, data_read);

    // Display the data in a grid format
    println!("Data grid, N marks missing (`None`=`null`) values");
    println!("   0  1  2  3");
    for y in 0..4 {
        print!("{} ", y);
        for x in 0..4 {
            match data_read[[y, x]] {
                Some(value) => print!("{:2} ", value),
                None => print!(" N "),
            }
        }
        println!();
    }

    // Print the raw bytes in all chunks
    println!("Raw bytes in all chunks:");
    let chunk_grid_shape = array.chunk_grid_shape();
    for chunk_y in 0..chunk_grid_shape[0] {
        for chunk_x in 0..chunk_grid_shape[1] {
            let chunk_indices = vec![chunk_y, chunk_x];
            let chunk_key = array.chunk_key(&chunk_indices);
            println!("  Chunk [{}, {}] (key: {}):", chunk_y, chunk_x, chunk_key);

            if let Some(chunk_bytes) = store.get(&chunk_key)? {
                println!("    Size: {} bytes", chunk_bytes.len());

                if chunk_bytes.len() >= 16 {
                    // Parse first 8 bytes as mask size (little-endian u64)
                    let mask_size = u64::from_le_bytes([
                        chunk_bytes[0],
                        chunk_bytes[1],
                        chunk_bytes[2],
                        chunk_bytes[3],
                        chunk_bytes[4],
                        chunk_bytes[5],
                        chunk_bytes[6],
                        chunk_bytes[7],
                    ]) as usize;

                    // Parse second 8 bytes as data size (little-endian u64)
                    let data_size = u64::from_le_bytes([
                        chunk_bytes[8],
                        chunk_bytes[9],
                        chunk_bytes[10],
                        chunk_bytes[11],
                        chunk_bytes[12],
                        chunk_bytes[13],
                        chunk_bytes[14],
                        chunk_bytes[15],
                    ]) as usize;

                    // Display mask size header with raw bytes
                    print!("    Mask size: 0b");
                    for byte in &chunk_bytes[0..8] {
                        print!("{:08b}", byte);
                    }
                    println!(" -> {} bytes", mask_size);

                    // Display data size header with raw bytes
                    print!("    Data size: 0b");
                    for byte in &chunk_bytes[8..16] {
                        print!("{:08b}", byte);
                    }
                    println!(" -> {} bytes", data_size);

                    // Show mask and data sections separately
                    if chunk_bytes.len() >= 16 + mask_size + data_size {
                        let mask_start = 16;
                        let data_start = 16 + mask_size;

                        // Show mask as binary
                        if mask_size > 0 {
                            println!("    Mask (binary):");
                            print!("      ");
                            for byte in &chunk_bytes[mask_start..mask_start + mask_size] {
                                print!("0b{:08b} ", byte);
                            }
                            println!();
                        }

                        // Show data as binary
                        if data_size > 0 {
                            println!("    Data (binary):");
                            print!("      ");
                            for byte in &chunk_bytes[data_start..data_start + data_size] {
                                print!("0b{:08b} ", byte);
                            }
                            println!();
                        }
                    }
                } else {
                    panic!("Chunk too small to parse headers");
                }
            } else {
                println!("    Chunk missing (fill value chunk)");
            }
        }
    }
    Ok(())
}

pub fn store_array_subset_elements<T: Element>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_elements: &[T],
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_array_subset() instead
Encode subset_elements and store in array_subset.
Use store_array_subset_elements_opt to control codec options.
Prefer to use store_chunk_elements or store_chunks_elements where possible, since this will decode and encode each chunk intersecting array_subset.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_array_subset error condition is met.
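The first error condition above can be illustrated with a plain-std sketch. This is not the zarrs implementation, and the helper names (DataTypeSize, check_element_size) are hypothetical; it only shows the idea that an element slice is rejected when the byte size of T disagrees with the data type's fixed size.

```rust
use std::mem::size_of;

/// Hypothetical stand-in for a data type's fixed byte size (e.g. float32 -> 4).
#[derive(Clone, Copy)]
struct DataTypeSize(usize);

/// Reject element types whose size does not match the data type,
/// mirroring the `T` size check described above.
fn check_element_size<T>(data_type_size: DataTypeSize) -> Result<(), String> {
    if size_of::<T>() == data_type_size.0 {
        Ok(())
    } else {
        Err(format!(
            "element size {} does not match data type size {}",
            size_of::<T>(),
            data_type_size.0
        ))
    }
}

fn main() {
    let float32 = DataTypeSize(4);
    // f32 elements match a 4-byte data type...
    assert!(check_element_size::<f32>(float32).is_ok());
    // ...but f64 elements do not.
    assert!(check_element_size::<f64>(float32).is_err());
}
```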
pub fn store_array_subset_ndarray<T: Element, D: Dimension>(
&self,
subset_start: &[u64],
subset_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_array_subset() instead
Available on crate feature ndarray only.
Encode subset_array and store in the array subset starting at subset_start.
Use store_array_subset_ndarray_opt to control codec options.
Prefer to use store_chunk_ndarray or store_chunks_ndarray where possible, since this will decode and encode each chunk intersecting array_subset.
§Errors
Returns an ArrayError if a store_array_subset_elements error condition is met.
pub fn compact_chunk(
&self,
chunk_indices: &[u64],
options: &CodecOptions,
) -> Result<bool, ArrayError>
Retrieve the chunk at chunk_indices, compact it if possible, and store the compacted chunk back.
Compaction removes any extraneous data from the encoded chunk representation.
§Errors
Returns an ArrayError if
- there is a codec error, or
- an underlying store error.
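The retrieve-compact-store cycle described above can be sketched with a toy in-memory store. This is an assumption-laden model, not the zarrs implementation: the "compaction" here just drops trailing zero padding, standing in for removing extraneous data from an encoded chunk, and the HashMap stands in for a real store.

```rust
use std::collections::HashMap;

/// Toy "compaction": drop trailing zero padding from an encoded chunk.
/// (Hypothetical; real compaction is codec-specific.)
fn compact(encoded: &[u8]) -> Vec<u8> {
    let used = encoded.iter().rposition(|&b| b != 0).map_or(0, |i| i + 1);
    encoded[..used].to_vec()
}

/// Retrieve the chunk, compact it, and store it back.
/// Returns true if the stored bytes shrank, mirroring compact_chunk's bool.
fn compact_chunk(store: &mut HashMap<String, Vec<u8>>, key: &str) -> bool {
    let Some(encoded) = store.get(key).cloned() else {
        return false; // missing chunk: nothing to compact
    };
    let compacted = compact(&encoded);
    let shrank = compacted.len() < encoded.len();
    if shrank {
        store.insert(key.to_string(), compacted);
    }
    shrank
}

fn main() {
    let mut store = HashMap::new();
    store.insert("c/0/0".to_string(), vec![1u8, 2, 3, 0, 0, 0]);
    assert!(compact_chunk(&mut store, "c/0/0")); // padding removed
    assert_eq!(store["c/0/0"], vec![1u8, 2, 3]);
    assert!(!compact_chunk(&mut store, "c/0/0")); // already compact
}
```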
pub fn store_chunk_subset_opt<'a>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_data: impl IntoArrayBytes<'a>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_chunk_subset.
pub fn store_chunk_subset_elements_opt<T: Element>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk_subset_opt() instead
Explicit options version of store_chunk_subset_elements.
pub fn store_chunk_subset_ndarray_opt<T: Element, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_subset_start: &[u64],
chunk_subset_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_chunk_subset_opt() instead
Available on crate feature ndarray only.
Explicit options version of store_chunk_subset_ndarray.
pub fn store_array_subset_opt<'a>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_data: impl IntoArrayBytes<'a>,
options: &CodecOptions,
) -> Result<(), ArrayError>
Explicit options version of store_array_subset.
pub fn store_array_subset_elements_opt<T: Element>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_array_subset_opt() instead
Explicit options version of store_array_subset_elements.
pub fn store_array_subset_ndarray_opt<T: Element, D: Dimension>(
&self,
subset_start: &[u64],
subset_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎Deprecated since 0.23.0: Use store_array_subset_opt() instead
Available on crate feature ndarray only.
Explicit options version of store_array_subset_ndarray.
pub fn partial_encoder(
&self,
chunk_indices: &[u64],
options: &CodecOptions,
) -> Result<Arc<dyn ArrayPartialEncoderTraits>, ArrayError>
Initialises a partial encoder for the chunk at chunk_indices.
Only one partial encoder should be created for a chunk at a time because:
- partial encoders can hold internal state that may become out of sync, and
- parallel writing to the same chunk may result in data loss.
Partial encoding with ArrayPartialEncoderTraits::partial_encode will use parallelism internally where possible.
§Errors
Returns an ArrayError if initialisation of the partial encoder fails.
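The one-encoder-per-chunk discipline above can be enforced by callers with a small registry. This is a std-only sketch of the locking discipline, not zarrs API; the EncoderRegistry type and its methods are hypothetical names for illustration.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

/// Tracks which chunks currently have a live partial encoder, so a second
/// encoder for the same chunk is refused (avoiding out-of-sync internal
/// state and lost writes from parallel encoding of one chunk).
struct EncoderRegistry {
    active: Mutex<HashSet<Vec<u64>>>,
}

impl EncoderRegistry {
    fn new() -> Self {
        Self {
            active: Mutex::new(HashSet::new()),
        }
    }

    /// Claim the chunk; returns false if an encoder already exists for it.
    fn try_claim(&self, chunk_indices: &[u64]) -> bool {
        self.active.lock().unwrap().insert(chunk_indices.to_vec())
    }

    /// Release the chunk when its encoder is dropped.
    fn release(&self, chunk_indices: &[u64]) {
        self.active.lock().unwrap().remove(chunk_indices);
    }
}

fn main() {
    let registry = EncoderRegistry::new();
    assert!(registry.try_claim(&[0, 1])); // first encoder for chunk [0, 1]
    assert!(!registry.try_claim(&[0, 1])); // second encoder refused
    registry.release(&[0, 1]);
    assert!(registry.try_claim(&[0, 1])); // allowed again after release
}
```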
impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> Array<TStorage>
pub async fn async_open(
storage: Arc<TStorage>,
path: &str,
) -> Result<Array<TStorage>, ArrayCreateError>
Available on crate feature async only.
Async variant of open.
Examples found in repository:
async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    let mut store: AsyncReadableStorage = match backend {
        // Backend::OpenDAL => {
        //     let builder = opendal::services::Http::default().endpoint(HTTP_URL);
        //     let operator = opendal::Operator::new(builder)?.finish();
        //     Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
        // }
        Backend::ObjectStore => {
            let options = object_store::ClientOptions::new().with_allow_http(true);
            let store = object_store::http::HttpBuilder::new()
                .with_url(HTTP_URL)
                .with_client_options(options)
                .build()?;
            Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
        }
    };
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
        && arg1 == "--usage-log"
    {
        let log_writer = Arc::new(std::sync::Mutex::new(
            // std::io::BufWriter::new(
            std::io::stdout(),
            // )
        ));
        store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
            chrono::Utc::now().format("[%T%.3f] ").to_string()
        }));
    }

    // Init the existing array, reading metadata
    let array = Array::async_open(store, ARRAY_PATH).await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Read the whole array
    let data_all: ArrayD<f32> = array
        .async_retrieve_array_subset(&array.subset_all())
        .await?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2: ArrayD<f32> = array.async_retrieve_array_subset(&subset_4x2).await?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}

pub async fn async_open_opt(
storage: Arc<TStorage>,
path: &str,
version: &MetadataRetrieveVersion,
) -> Result<Array<TStorage>, ArrayCreateError>
Available on crate feature async only.
Async variant of open_opt.
pub async fn async_retrieve_chunk_if_exists<T: FromArrayBytes>(
&self,
chunk_indices: &[u64],
) -> Result<Option<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_if_exists.
pub async fn async_retrieve_chunk_elements_if_exists<T: ElementOwned + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
) -> Result<Option<Vec<T>>, ArrayError>
👎Deprecated since 0.23.0: Use async_retrieve_chunk_if_exists instead
Available on crate feature async only.
Async variant of retrieve_chunk_elements_if_exists.
pub async fn async_retrieve_chunk_ndarray_if_exists<T: ElementOwned + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
) -> Result<Option<ArrayD<T>>, ArrayError>
👎Deprecated since 0.23.0: Use async_retrieve_chunk_if_exists::<ndarray::ArrayD<T>>() instead
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray_if_exists.
pub async fn async_retrieve_encoded_chunk(
&self,
chunk_indices: &[u64],
) -> Result<Option<Bytes>, StorageError>
Available on crate feature async only.
Retrieve the encoded bytes of a chunk.
§Errors
Returns a StorageError if there is an underlying store error.
pub async fn async_retrieve_chunk<T: FromArrayBytes>(
&self,
chunk_indices: &[u64],
) -> Result<T, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk.
Examples found in repository:
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;

    use futures::StreamExt;
    use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
    use zarrs::node::Node;

    // Create a store
    let mut store: AsyncReadableWritableListableStorage = Arc::new(
        zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
    );
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
        && arg1 == "--usage-log"
    {
        let log_writer = Arc::new(std::sync::Mutex::new(
            // std::io::BufWriter::new(
            std::io::stdout(),
            // )
        ));
        store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
            chrono::Utc::now().format("[%T%.3f] ").to_string()
        }));
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .async_store_metadata()
        .await?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        vec![4, 4], // regular chunk shape
        data_type::float32(),
        ZARR_NAN_F32,
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build_arc(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    let store_chunk = |i: u64| {
        let array = array.clone();
        async move {
            let chunk_indices: Vec<u64> = vec![0, i];
            let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
            array
                .async_store_chunk(
                    &chunk_indices,
                    vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
                )
                .await
        }
    };
    futures::stream::iter(0..2)
        .map(Ok)
        .try_for_each_concurrent(None, store_chunk)
        .await?;

    let subset_all = array.subset_all();
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks(
            &[1..2, 0..2],
            &[
                //
                1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset(
            &[3..6, 3..6],
            &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset(
            &[0..8, 6..7],
            &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &[3..4, 0..4],
            &[-7.4f32, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks: ArrayD<f32> = array.async_retrieve_chunks(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset: ArrayD<f32> = array.async_retrieve_array_subset(&subset).await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_open(store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub async fn async_retrieve_chunk_elements<T: ElementOwned + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
) -> Result<Vec<T>, ArrayError>
👎Deprecated since 0.23.0: Use async_retrieve_chunk::<Vec<T>>() instead
Available on crate feature async only.
Async variant of retrieve_chunk_elements.
pub async fn async_retrieve_chunk_ndarray<T: ElementOwned + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
) -> Result<ArrayD<T>, ArrayError>
👎Deprecated since 0.23.0: Use async_retrieve_chunk::<ndarray::ArrayD<T>>() instead
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray.
pub async fn async_retrieve_chunks<T: FromArrayBytes>(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<T, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks.
Examples found in repository:
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;

    use futures::StreamExt;
    use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
    use zarrs::node::Node;

    // Create a store
    let mut store: AsyncReadableWritableListableStorage = Arc::new(
        zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
    );
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
        && arg1 == "--usage-log"
    {
        let log_writer = Arc::new(std::sync::Mutex::new(
            // std::io::BufWriter::new(
            std::io::stdout(),
            // )
        ));
        store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
            chrono::Utc::now().format("[%T%.3f] ").to_string()
        }));
    }

    // Create the root group
    zarrs::group::GroupBuilder::new()
        .build(store.clone(), "/")?
        .async_store_metadata()
        .await?;

    // Create a group with attributes
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        group.metadata().to_string_pretty()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        vec![4, 4], // regular chunk shape
        data_type::float32(),
        ZARR_NAN_F32,
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build_arc(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Write some chunks
    let store_chunk = |i: u64| {
        let array = array.clone();
        async move {
            let chunk_indices: Vec<u64> = vec![0, i];
            let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
            array
                .async_store_chunk(
                    &chunk_indices,
                    vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
                )
                .await
        }
    };
    futures::stream::iter(0..2)
        .map(Ok)
        .try_for_each_concurrent(None, store_chunk)
        .await?;

    let subset_all = array.subset_all();
    let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
95 println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
96
97 // Store multiple chunks
98 array
99 .async_store_chunks(
100 &[1..2, 0..2],
101 &[
102 //
103 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 //
105 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
106 ],
107 )
108 .await?;
109 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
110 println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 array
114 .async_store_array_subset(
115 &[3..6, 3..6],
116 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
117 )
118 .await?;
119 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
120 println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 array
124 .async_store_array_subset(
125 &[0..8, 6..7],
126 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
127 )
128 .await?;
129 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
130 println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
131
132 // Store chunk subset
133 array
134 .async_store_chunk_subset(
135 // chunk indices
136 &[1, 1],
137 // subset within chunk
138 &[3..4, 0..4],
139 &[-7.4f32, -7.5, -7.6, -7.7],
140 )
141 .await?;
142 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
143 println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
144
145 // Erase a chunk
146 array.async_erase_chunk(&[0, 0]).await?;
147 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
148 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
149
150 // Read a chunk
151 let chunk_indices = vec![0, 1];
152 let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
153 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
154
155 // Read chunks
156 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
157 let data_chunks: ArrayD<f32> = array.async_retrieve_chunks(&chunks).await?;
158 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
159
160 // Retrieve an array subset
161 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
162 let data_subset: ArrayD<f32> = array.async_retrieve_array_subset(&subset).await?;
163 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
164
165 // Show the hierarchy
166 let node = Node::async_open(store, "/").await.unwrap();
167 let tree = node.hierarchy_tree();
168 println!("hierarchy_tree:\n{}", tree);
169
170 Ok(())
171}

pub async fn async_retrieve_chunks_elements<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunks: &dyn ArraySubsetTraits,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunks::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_chunks_elements.
pub async fn async_retrieve_chunks_ndarray<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunks: &dyn ArraySubsetTraits,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunks::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_chunks_ndarray.
pub async fn async_retrieve_chunk_subset<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
) -> Result<T, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset.
pub async fn async_retrieve_chunk_subset_elements<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_subset::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_chunk_subset_elements.
pub async fn async_retrieve_chunk_subset_ndarray<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_subset::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_chunk_subset_ndarray.
pub async fn async_retrieve_array_subset<T: FromArrayBytes>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
) -> Result<T, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset.
Examples found in repository:
15async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
16 const HTTP_URL: &str =
17 "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
18 const ARRAY_PATH: &str = "/group/array";
19
20 // Create a HTTP store
21 let mut store: AsyncReadableStorage = match backend {
22 // Backend::OpenDAL => {
23 // let builder = opendal::services::Http::default().endpoint(HTTP_URL);
24 // let operator = opendal::Operator::new(builder)?.finish();
25 // Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
26 // }
27 Backend::ObjectStore => {
28 let options = object_store::ClientOptions::new().with_allow_http(true);
29 let store = object_store::http::HttpBuilder::new()
30 .with_url(HTTP_URL)
31 .with_client_options(options)
32 .build()?;
33 Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
34 }
35 };
36 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
37 && arg1 == "--usage-log"
38 {
39 let log_writer = Arc::new(std::sync::Mutex::new(
40 // std::io::BufWriter::new(
41 std::io::stdout(),
42 // )
43 ));
44 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
45 chrono::Utc::now().format("[%T%.3f] ").to_string()
46 }));
47 }
48
49 // Init the existing array, reading metadata
50 let array = Array::async_open(store, ARRAY_PATH).await?;
51
52 println!(
53 "The array metadata is:\n{}\n",
54 array.metadata().to_string_pretty()
55 );
56
57 // Read the whole array
58 let data_all: ArrayD<f32> = array
59 .async_retrieve_array_subset(&array.subset_all())
60 .await?;
61 println!("The whole array is:\n{data_all}\n");
62
63 // Read a chunk back from the store
64 let chunk_indices = vec![1, 0];
65 let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
66 println!("Chunk [1,0] is:\n{data_chunk}\n");
67
68 // Read the central 4x2 subset of the array
69 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
70 let data_4x2: ArrayD<f32> = array.async_retrieve_array_subset(&subset_4x2).await?;
71 println!("The middle 4x2 subset is:\n{data_4x2}\n");
72
73 Ok(())
74}

More examples:
(see the async_array_write_read example above)

pub async fn async_retrieve_array_subset_into(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    output_target: ArrayBytesDecodeIntoTarget<'_>,
) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_into.
pub async fn async_retrieve_array_subset_elements<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_array_subset::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_array_subset_elements.
pub async fn async_retrieve_array_subset_ndarray<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_array_subset::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_array_subset_ndarray.
pub async fn async_partial_decoder(
    &self,
    chunk_indices: &[u64],
) -> Result<Arc<dyn AsyncArrayPartialDecoderTraits>, ArrayError>

Available on crate feature async only.

Async variant of partial_decoder.
pub async fn async_retrieve_chunk_if_exists_opt<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_if_exists_opt.
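The Option in the return type distinguishes a chunk that is absent from the store from one that decodes to the fill value. A hedged usage sketch, assuming `array` is an Array of f32 over an async-capable store (not compilable standalone):

```rust
use zarrs::array::codec::CodecOptions;

// None means the chunk key does not exist in the store; readers of the
// full array would see the fill value for that region instead.
let maybe_chunk: Option<ndarray::ArrayD<f32>> = array
    .async_retrieve_chunk_if_exists_opt(&[0, 0], &CodecOptions::default())
    .await?;
match maybe_chunk {
    Some(data) => println!("chunk [0, 0]:\n{data}"),
    None => println!("chunk [0, 0] not stored; fill value applies"),
}
```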
pub async fn async_retrieve_chunk_opt<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<T, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_opt.
pub async fn async_retrieve_chunk_elements_if_exists_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<Vec<T>>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_if_exists_opt::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_chunk_elements_if_exists_opt.
pub async fn async_retrieve_chunk_elements_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_opt::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_chunk_elements_opt.
pub async fn async_retrieve_chunk_ndarray_if_exists_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Option<ArrayD<T>>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_if_exists_opt::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_if_exists_opt.
pub async fn async_retrieve_chunk_ndarray_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_opt::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_opt.
pub async fn async_retrieve_encoded_chunks(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<Option<Bytes>>, StorageError>

Available on crate feature async only.

Retrieve the encoded bytes of the chunks in chunks.
The chunks are returned in the order of the chunk indices yielded by chunks.indices().into_iter().
§Errors
Returns a StorageError if there is an underlying store error.
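Because the returned Vec<Option<Bytes>> must be paired with chunks by position, the iteration order matters. A self-contained sketch in plain Rust (independent of zarrs) of row-major enumeration over a chunk subset, which is the order assumed here for chunks.indices():

```rust
use std::ops::Range;

// Enumerate all index tuples of a multidimensional subset in row-major
// (last dimension fastest) order. `subset_indices` is a hypothetical
// helper for illustration, not a zarrs API.
fn subset_indices(ranges: &[Range<u64>]) -> Vec<Vec<u64>> {
    let mut out: Vec<Vec<u64>> = vec![vec![]];
    for r in ranges {
        let mut next = Vec::new();
        for prefix in &out {
            for i in r.clone() {
                let mut v = prefix.clone();
                v.push(i);
                next.push(v);
            }
        }
        out = next;
    }
    out
}

fn main() {
    // The 2x2 chunk subset [0..2, 1..3] yields four index tuples,
    // row by row: [0,1], [0,2], [1,1], [1,2].
    let idx = subset_indices(&[0..2, 1..3]);
    assert_eq!(idx, vec![vec![0, 1], vec![0, 2], vec![1, 1], vec![1, 2]]);
    println!("{idx:?}");
}
```

The element at position k of the returned byte vector corresponds to the k-th tuple in this order.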
pub async fn async_retrieve_chunks_opt<T: FromArrayBytes>(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_opt.
pub async fn async_retrieve_chunks_elements_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunks_opt::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_chunks_elements_opt.
pub async fn async_retrieve_chunks_ndarray_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunks_opt::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_chunks_ndarray_opt.
pub async fn async_retrieve_array_subset_opt<T: FromArrayBytes>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_opt.
pub async fn async_retrieve_array_subset_into_opt(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    output_target: ArrayBytesDecodeIntoTarget<'_>,
    options: &CodecOptions,
) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_into_opt.
pub async fn async_retrieve_array_subset_elements_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_array_subset_opt::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_array_subset_elements_opt.
pub async fn async_retrieve_array_subset_ndarray_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_array_subset_opt::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_array_subset_ndarray_opt.
pub async fn async_retrieve_chunk_subset_opt<T: FromArrayBytes>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_opt.
pub async fn async_retrieve_chunk_subset_elements_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_subset_opt::<Vec<T>>() instead. Available on crate feature async only.

Async variant of retrieve_chunk_subset_elements_opt.
pub async fn async_retrieve_chunk_subset_ndarray_opt<T: ElementOwned + MaybeSend + MaybeSync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>

Deprecated since 0.23.0: use async_retrieve_chunk_subset_opt::<ndarray::ArrayD<T>>() instead. Available on crate features async and ndarray only.

Async variant of retrieve_chunk_subset_ndarray_opt.
pub async fn async_partial_decoder_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Arc<dyn AsyncArrayPartialDecoderTraits>, ArrayError>

Available on crate feature async only.

Async variant of partial_decoder_opt.
impl<TStorage: ?Sized + AsyncWritableStorageTraits + 'static> Array<TStorage>
pub async fn async_store_metadata(&self) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of store_metadata.
Examples found in repository?
8async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use futures::StreamExt;
12 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
13 use zarrs::node::Node;
14
15 // Create a store
16 let mut store: AsyncReadableWritableListableStorage = Arc::new(
17 zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
18 );
19 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
20 && arg1 == "--usage-log"
21 {
22 let log_writer = Arc::new(std::sync::Mutex::new(
23 // std::io::BufWriter::new(
24 std::io::stdout(),
25 // )
26 ));
27 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
28 chrono::Utc::now().format("[%T%.3f] ").to_string()
29 }));
30 }
31
32 // Create the root group
33 zarrs::group::GroupBuilder::new()
34 .build(store.clone(), "/")?
35 .async_store_metadata()
36 .await?;
37
38 // Create a group with attributes
39 let group_path = "/group";
40 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
41 group
42 .attributes_mut()
43 .insert("foo".into(), serde_json::Value::String("bar".into()));
44 group.async_store_metadata().await?;
45
46 println!(
47 "The group metadata is:\n{}\n",
48 group.metadata().to_string_pretty()
49 );
50
51 // Create an array
52 let array_path = "/group/array";
53 let array = zarrs::array::ArrayBuilder::new(
54 vec![8, 8], // array shape
55 vec![4, 4], // regular chunk shape
56 data_type::float32(),
57 ZARR_NAN_F32,
58 )
59 // .bytes_to_bytes_codecs(vec![]) // uncompressed
60 .dimension_names(["y", "x"].into())
61 // .storage_transformers(vec![].into())
62 .build_arc(store.clone(), array_path)?;
63
64 // Write array metadata to store
65 array.async_store_metadata().await?;
66
67 println!(
68 "The array metadata is:\n{}\n",
69 array.metadata().to_string_pretty()
70 );
71
72 // Write some chunks
73 let store_chunk = |i: u64| {
74 let array = array.clone();
75 async move {
76 let chunk_indices: Vec<u64> = vec![0, i];
77 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
78 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
79 })?;
80 array
81 .async_store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 .await
86 }
87 };
88 futures::stream::iter(0..2)
89 .map(Ok)
90 .try_for_each_concurrent(None, store_chunk)
91 .await?;
92
93 let subset_all = array.subset_all();
94 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
95 println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
96
97 // Store multiple chunks
98 array
99 .async_store_chunks(
100 &[1..2, 0..2],
101 &[
102 //
103 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 //
105 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
106 ],
107 )
108 .await?;
109 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
110 println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 array
114 .async_store_array_subset(
115 &[3..6, 3..6],
116 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
117 )
118 .await?;
119 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
120 println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 array
124 .async_store_array_subset(
125 &[0..8, 6..7],
126 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
127 )
128 .await?;
129 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
130 println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
131
132 // Store chunk subset
133 array
134 .async_store_chunk_subset(
135 // chunk indices
136 &[1, 1],
137 // subset within chunk
138 &[3..4, 0..4],
139 &[-7.4f32, -7.5, -7.6, -7.7],
140 )
141 .await?;
142 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
143 println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
144
145 // Erase a chunk
146 array.async_erase_chunk(&[0, 0]).await?;
147 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
148 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
149
150 // Read a chunk
151 let chunk_indices = vec![0, 1];
152 let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
153 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
154
155 // Read chunks
156 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
157 let data_chunks: ArrayD<f32> = array.async_retrieve_chunks(&chunks).await?;
158 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
159
160 // Retrieve an array subset
161 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
162 let data_subset: ArrayD<f32> = array.async_retrieve_array_subset(&subset).await?;
163 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
164
165 // Show the hierarchy
166 let node = Node::async_open(store, "/").await.unwrap();
167 let tree = node.hierarchy_tree();
168 println!("hierarchy_tree:\n{}", tree);
169
170 Ok(())
171}Sourcepub async fn async_store_metadata_opt(
&self,
options: &ArrayMetadataOptions,
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of store_metadata_opt.
pub async fn async_store_chunk<'a>(
&self,
chunk_indices: &[u64],
chunk_data: impl IntoArrayBytes<'a> + MaybeSend,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk.
pub async fn async_store_chunk_elements<T: Element + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
chunk_elements: &[T],
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunk() instead.
Available on crate feature async only.
Async variant of store_chunk_elements.
pub async fn async_store_chunk_ndarray<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunk() instead.
Available on crate features async and ndarray only.
Async variant of store_chunk_ndarray.
pub async fn async_store_chunks<'a>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_data: impl IntoArrayBytes<'a> + MaybeSend,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks.
pub async fn async_store_chunks_elements<T: Element + MaybeSend + MaybeSync>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_elements: &[T],
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunks() instead.
Available on crate feature async only.
Async variant of store_chunks_elements.
pub async fn async_store_chunks_ndarray<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunks() instead.
Available on crate features async and ndarray only.
Async variant of store_chunks_ndarray.
pub async fn async_erase_metadata(&self) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_metadata.
pub async fn async_erase_metadata_opt(
&self,
options: MetadataEraseVersion,
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_metadata_opt.
pub async fn async_erase_chunk(
&self,
chunk_indices: &[u64],
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_chunk.
pub async fn async_erase_chunks(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_chunks.
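Erasing multiple chunks takes a subset over chunk indices, like async_retrieve_chunks. A minimal sketch (not one of the crate's own examples; assumes an initialised array inside an async fn returning Result, as in the example above):

```rust
use zarrs::array::ArraySubset;

// Erase the entire first row of chunks ([0, 0] and [0, 1]) in one call,
// instead of calling async_erase_chunk once per chunk.
let chunks = ArraySubset::new_with_ranges(&[0..1, 0..2]);
array.async_erase_chunks(&chunks).await?;
```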
pub async fn async_store_chunk_opt<'a>(
&self,
chunk_indices: &[u64],
chunk_data: impl IntoArrayBytes<'a> + MaybeSend,
options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_opt.
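Compared to async_store_chunk, this variant takes explicit CodecOptions rather than the global defaults. A hedged sketch (assumes CodecOptions implements Default and lives at zarrs::array::codec, and the 8×8 float32 array with 4×4 chunks from the example above):

```rust
use zarrs::array::codec::CodecOptions;

// Store one 4x4 chunk of f32 data with explicit codec options.
let options = CodecOptions::default();
array
    .async_store_chunk_opt(&[0, 0], vec![1.0f32; 4 * 4], &options)
    .await?;
```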
pub async unsafe fn async_store_encoded_chunk(
&self,
chunk_indices: &[u64],
encoded_chunk_bytes: Bytes,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_encoded_chunk.
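This method is unsafe because the bytes are written as-is: the caller must guarantee they are a valid encoding of the chunk under the array's codec chain. An illustrative fragment (the obtain_encoded_chunk_bytes helper is hypothetical and stands in for a matching encode step):

```rust
use bytes::Bytes;

// SAFETY: `encoded` must hold bytes produced by this array's exact codec
// pipeline for this chunk; otherwise later reads will fail or misdecode.
let encoded: Bytes = obtain_encoded_chunk_bytes(); // hypothetical helper
unsafe {
    array.async_store_encoded_chunk(&[0, 0], encoded).await?;
}
```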
pub async fn async_store_chunk_elements_opt<T: Element + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
chunk_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunk_opt() instead.
Available on crate feature async only.
Async variant of store_chunk_elements_opt.
pub async fn async_store_chunk_ndarray_opt<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunk_opt() instead.
Available on crate features async and ndarray only.
Async variant of store_chunk_ndarray_opt.
pub async fn async_store_chunks_opt<'a>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_data: impl IntoArrayBytes<'a> + MaybeSend,
options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks_opt.
pub async fn async_store_chunks_elements_opt<T: Element + MaybeSend + MaybeSync>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunks_opt() instead.
Available on crate feature async only.
Async variant of store_chunks_elements_opt.
pub async fn async_store_chunks_ndarray_opt<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
chunks: &dyn ArraySubsetTraits,
chunks_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunks_opt() instead.
Available on crate features async and ndarray only.
Async variant of store_chunks_ndarray_opt.
impl<TStorage: ?Sized + AsyncReadableWritableStorageTraits + 'static> Array<TStorage>
pub fn async_readable(&self) -> Array<dyn AsyncReadableStorageTraits>
Available on crate feature async only.
Return a read-only instantiation of the array.
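Downgrading to a read-only instantiation can be useful when handing the array to code that must not write. A brief sketch (assumes an array built on readable-writable storage, as in the example above):

```rust
// A read-only view backed by the same storage; only retrieval methods
// are available on Array<dyn AsyncReadableStorageTraits>.
let readable = array.async_readable();
let data: ArrayD<f32> = readable.async_retrieve_chunk(&[0, 1]).await?;
```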
pub async fn async_store_chunk_subset<'a>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_data: impl IntoArrayBytes<'a> + MaybeSend,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset.
pub async fn async_store_chunk_subset_elements<T: Element + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_elements: &[T],
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunk_subset() instead.
Available on crate feature async only.
Async variant of store_chunk_subset_elements.
pub async fn async_store_chunk_subset_ndarray<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_subset_start: &[u64],
chunk_subset_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: use async_store_chunk_subset() instead.
Available on crate features async and ndarray only.
Async variant of store_chunk_subset_ndarray.
pub async fn async_store_array_subset<'a>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_data: impl IntoArrayBytes<'a> + MaybeSend,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset.
Examples found in repository:
8async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use futures::StreamExt;
12 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
13 use zarrs::node::Node;
14
15 // Create a store
16 let mut store: AsyncReadableWritableListableStorage = Arc::new(
17 zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
18 );
19 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
20 && arg1 == "--usage-log"
21 {
22 let log_writer = Arc::new(std::sync::Mutex::new(
23 // std::io::BufWriter::new(
24 std::io::stdout(),
25 // )
26 ));
27 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
28 chrono::Utc::now().format("[%T%.3f] ").to_string()
29 }));
30 }
31
32 // Create the root group
33 zarrs::group::GroupBuilder::new()
34 .build(store.clone(), "/")?
35 .async_store_metadata()
36 .await?;
37
38 // Create a group with attributes
39 let group_path = "/group";
40 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
41 group
42 .attributes_mut()
43 .insert("foo".into(), serde_json::Value::String("bar".into()));
44 group.async_store_metadata().await?;
45
46 println!(
47 "The group metadata is:\n{}\n",
48 group.metadata().to_string_pretty()
49 );
50
51 // Create an array
52 let array_path = "/group/array";
53 let array = zarrs::array::ArrayBuilder::new(
54 vec![8, 8], // array shape
55 vec![4, 4], // regular chunk shape
56 data_type::float32(),
57 ZARR_NAN_F32,
58 )
59 // .bytes_to_bytes_codecs(vec![]) // uncompressed
60 .dimension_names(["y", "x"].into())
61 // .storage_transformers(vec![].into())
62 .build_arc(store.clone(), array_path)?;
63
64 // Write array metadata to store
65 array.async_store_metadata().await?;
66
67 println!(
68 "The array metadata is:\n{}\n",
69 array.metadata().to_string_pretty()
70 );
71
72 // Write some chunks
73 let store_chunk = |i: u64| {
74 let array = array.clone();
75 async move {
76 let chunk_indices: Vec<u64> = vec![0, i];
77 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
78 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
79 })?;
80 array
81 .async_store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 .await
86 }
87 };
88 futures::stream::iter(0..2)
89 .map(Ok)
90 .try_for_each_concurrent(None, store_chunk)
91 .await?;
92
93 let subset_all = array.subset_all();
94 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
95 println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
96
97 // Store multiple chunks
98 array
99 .async_store_chunks(
100 &[1..2, 0..2],
101 &[
102 //
103 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 //
105 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
106 ],
107 )
108 .await?;
109 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
110 println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 array
114 .async_store_array_subset(
115 &[3..6, 3..6],
116 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
117 )
118 .await?;
119 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
120 println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 array
124 .async_store_array_subset(
125 &[0..8, 6..7],
126 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
127 )
128 .await?;
129 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
130 println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
131
132 // Store chunk subset
133 array
134 .async_store_chunk_subset(
135 // chunk indices
136 &[1, 1],
137 // subset within chunk
138 &[3..4, 0..4],
139 &[-7.4f32, -7.5, -7.6, -7.7],
140 )
141 .await?;
142 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
143 println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
144
145 // Erase a chunk
146 array.async_erase_chunk(&[0, 0]).await?;
147 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
148 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
149
150 // Read a chunk
151 let chunk_indices = vec![0, 1];
152 let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
153 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
154
155 // Read chunks
156 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
157 let data_chunks: ArrayD<f32> = array.async_retrieve_chunks(&chunks).await?;
158 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
159
160 // Retrieve an array subset
161 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
162 let data_subset: ArrayD<f32> = array.async_retrieve_array_subset(&subset).await?;
163 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
164
165 // Show the hierarchy
166 let node = Node::async_open(store, "/").await.unwrap();
167 let tree = node.hierarchy_tree();
168 println!("hierarchy_tree:\n{}", tree);
169
170 Ok(())
171}
pub async fn async_store_array_subset_elements<T: Element + MaybeSend + MaybeSync>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_elements: &[T],
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: Use async_store_array_subset() instead. Available on crate feature async only.
Async variant of store_array_subset_elements.
pub async fn async_store_array_subset_ndarray<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
subset_start: &[u64],
subset_array: &ArrayRef<T, D>,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: Use async_store_array_subset() instead. Available on crate features async and ndarray only.
Async variant of store_array_subset_ndarray.
pub async fn async_compact_chunk(
&self,
chunk_indices: &[u64],
options: &CodecOptions,
) -> Result<bool, ArrayError>
Available on crate feature async only.
Async variant of compact_chunk.
pub async fn async_store_chunk_subset_opt<'a>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_data: impl IntoArrayBytes<'a> + MaybeSend,
options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_opt.
pub async fn async_store_chunk_subset_elements_opt<T: Element + MaybeSend + MaybeSync>(
&self,
chunk_indices: &[u64],
chunk_subset: &dyn ArraySubsetTraits,
chunk_subset_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: Use async_store_chunk_subset_opt() instead. Available on crate feature async only.
Async variant of store_chunk_subset_elements_opt.
pub async fn async_store_chunk_subset_ndarray_opt<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
chunk_indices: &[u64],
chunk_subset_start: &[u64],
chunk_subset_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: Use async_store_chunk_subset_opt() instead. Available on crate feature async only.
Async variant of store_chunk_subset_ndarray_opt.
pub async fn async_store_array_subset_opt<'a>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_data: impl IntoArrayBytes<'a> + MaybeSend,
options: &CodecOptions,
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_opt.
pub async fn async_store_array_subset_elements_opt<T: Element + MaybeSend + MaybeSync>(
&self,
array_subset: &dyn ArraySubsetTraits,
subset_elements: &[T],
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: Use async_store_array_subset_opt() instead. Available on crate feature async only.
Async variant of store_array_subset_elements_opt.
pub async fn async_store_array_subset_ndarray_opt<T: Element + MaybeSend + MaybeSync, D: Dimension>(
&self,
subset_start: &[u64],
subset_array: &ArrayRef<T, D>,
options: &CodecOptions,
) -> Result<(), ArrayError>
👎 Deprecated since 0.23.0: Use async_store_array_subset_opt() instead. Available on crate features async and ndarray only.
Async variant of store_array_subset_ndarray_opt.
pub async fn async_partial_encoder(
&self,
chunk_indices: &[u64],
options: &CodecOptions,
) -> Result<Arc<dyn AsyncArrayPartialEncoderTraits>, ArrayError>
Available on crate feature async only.
Initialises an asynchronous partial encoder for the chunk at chunk_indices.
Only one partial encoder should be created for a chunk at a time because:
- partial encoders can hold internal state that may become out of sync, and
- parallel writing to the same chunk may result in data loss.
Partial encoding with AsyncArrayPartialEncoderTraits::partial_encode will use parallelism internally where possible.
§Errors
Returns an ArrayError if initialisation of the partial encoder fails.
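The one-encoder-per-chunk constraint above can be enforced by serialising writers through a per-chunk lock map. This is a hedged, std-only sketch of that pattern — `ChunkLocks` is an illustrative helper, not part of the zarrs API; a real encoder handle would be held alongside the guard:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Hands out one shared lock per chunk index vector. Holding the guard
/// models exclusive ownership of that chunk's partial encoder.
struct ChunkLocks {
    locks: Mutex<HashMap<Vec<u64>, Arc<Mutex<()>>>>,
}

impl ChunkLocks {
    fn new() -> Self {
        Self {
            locks: Mutex::new(HashMap::new()),
        }
    }

    /// Get (or lazily create) the lock for the chunk at `chunk_indices`.
    fn chunk_lock(&self, chunk_indices: &[u64]) -> Arc<Mutex<()>> {
        let mut locks = self.locks.lock().unwrap();
        locks
            .entry(chunk_indices.to_vec())
            .or_insert_with(|| Arc::new(Mutex::new(())))
            .clone()
    }
}

fn main() {
    let locks = ChunkLocks::new();
    let lock_a = locks.chunk_lock(&[0, 0]);
    let lock_b = locks.chunk_lock(&[0, 0]);
    // Both handles refer to the same underlying lock for chunk [0, 0],
    // so only one writer can hold an encoder for that chunk at a time.
    assert!(Arc::ptr_eq(&lock_a, &lock_b));
    let _guard = lock_a.lock().unwrap();
    // While the guard is held, a second would-be encoder is excluded.
    assert!(lock_b.try_lock().is_err());
}
```

Different chunks get independent locks, so partial encoding of distinct chunks can still proceed in parallel.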
impl<TStorage: ?Sized> Array<TStorage>
pub fn with_storage<TStorage2: ?Sized>(
&self,
storage: Arc<TStorage2>,
) -> Array<TStorage2>
Replace the storage backing an array.
pub fn new_with_metadata(
storage: Arc<TStorage>,
path: &str,
metadata: ArrayMetadata,
) -> Result<Self, ArrayCreateError>
Create an array in storage at path with metadata.
This does not write to the store; use store_metadata to write the metadata to storage.
§Errors
Returns ArrayCreateError if:
- any metadata is invalid, or
- a plugin (e.g. data type/chunk grid/chunk key encoding/codec/storage transformer) is invalid.
Examples found in repository:
22fn main() -> Result<(), Box<dyn std::error::Error>> {
23 let store = Arc::new(zarrs_storage::store::MemoryStore::new());
24
25 let serde_json::Value::Object(attributes) = serde_json::json!({
26 "foo": "bar",
27 "baz": 42,
28 }) else {
29 unreachable!()
30 };
31
32 // Create a Zarr V2 group
33 let group_metadata: GroupMetadata = GroupMetadataV2::new()
34 .with_attributes(attributes.clone())
35 .into();
36 let group = Group::new_with_metadata(store.clone(), "/group", group_metadata)?;
37
38 // Store the metadata as V2 and V3
39 let convert_group_metadata_to_v3 =
40 GroupMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
41 group.store_metadata()?;
42 group.store_metadata_opt(&convert_group_metadata_to_v3)?;
43 println!(
44 "group/.zgroup (Zarr V2 group metadata):\n{}\n",
45 key_to_str(&store, "group/.zgroup")?
46 );
47 println!(
48 "group/.zattrs (Zarr V2 group attributes):\n{}\n",
49 key_to_str(&store, "group/.zattrs")?
50 );
51 println!(
52 "group/zarr.json (Zarr V3 equivalent group metadata/attributes):\n{}\n",
53 key_to_str(&store, "group/zarr.json")?
54 );
55 // println!(
56 // "The equivalent Zarr V3 group metadata is\n{}\n",
57 // group.metadata_opt(&convert_group_metadata_to_v3).to_string_pretty()
58 // );
59
60 // Create a Zarr V2 array
61 let array_metadata = ArrayMetadataV2::new(
62 vec![10, 10],
63 vec![NonZeroU64::new(5).unwrap(); 2],
64 ">f4".into(), // big endian float32
65 FillValueMetadata::from(f32::NAN),
66 None,
67 None,
68 )
69 .with_dimension_separator(ChunkKeySeparator::Slash)
70 .with_order(ArrayMetadataV2Order::F)
71 .with_attributes(attributes.clone());
72 let array = zarrs::array::Array::new_with_metadata(
73 store.clone(),
74 "/group/array",
75 array_metadata.into(),
76 )?;
77
78 // Store the metadata as V2 and V3
79 let convert_array_metadata_to_v3 =
80 ArrayMetadataOptions::default().with_metadata_convert_version(MetadataConvertVersion::V3);
81 array.store_metadata()?;
82 array.store_metadata_opt(&convert_array_metadata_to_v3)?;
83 println!(
84 "group/array/.zarray (Zarr V2 array metadata):\n{}\n",
85 key_to_str(&store, "group/array/.zarray")?
86 );
87 println!(
88 "group/array/.zattrs (Zarr V2 array attributes):\n{}\n",
89 key_to_str(&store, "group/array/.zattrs")?
90 );
91 println!(
92 "group/array/zarr.json (Zarr V3 equivalent array metadata/attributes):\n{}\n",
93 key_to_str(&store, "group/array/zarr.json")?
94 );
95 // println!(
96 // "The equivalent Zarr V3 array metadata is\n{}\n",
97 // array.metadata_opt(&convert_array_metadata_to_v3).to_string_pretty()
98 // );
99
100 array.store_chunk(&[0, 1], &[0.0f32; 5 * 5])?;
101
102 // Print the keys in the store
103 println!("The store contains keys:");
104 for key in store.list()? {
105 println!(" {}", key);
106 }
107
108 Ok(())
109}
pub fn with_codec_options(self, codec_options: CodecOptions) -> Self
Set the codec options.
pub fn set_codec_options(&mut self, codec_options: CodecOptions) -> &mut Self
Set the codec options.
pub fn with_metadata_options(
self,
metadata_options: ArrayMetadataOptions,
) -> Self
Set the metadata options.
pub fn set_metadata_options(
&mut self,
metadata_options: ArrayMetadataOptions,
) -> &mut Self
Set the metadata options.
pub const fn fill_value(&self) -> &FillValue
Get the fill value.
pub fn set_shape(
&mut self,
array_shape: ArrayShape,
) -> Result<&mut Self, ArrayCreateError>
Set the array shape.
§Errors
Returns an ArrayCreateError if the chunk grid is not compatible with array_shape.
pub unsafe fn set_shape_and_chunk_grid(
&mut self,
array_shape: ArrayShape,
chunk_grid_metadata: impl Into<ArrayBuilderChunkGridMetadata>,
) -> Result<&mut Self, ArrayCreateError>
Set the array shape and chunk grid from chunk grid metadata.
This method allows setting both the array shape and chunk grid simultaneously.
Some chunk grids depend on the array shape (e.g. rectilinear), so this method ensures that the chunk grid is correctly configured for the new array shape.
§Errors
Returns an ArrayCreateError if:
- the chunk grid is not compatible with array_shape, or
- the chunk grid metadata is invalid.
§Safety
This method does not validate that existing chunks in the store are compatible with the new chunk grid. If the chunk grid is changed such that existing chunks are no longer valid, subsequent read or write operations may fail or produce incorrect results.
It is the caller’s responsibility to ensure that the new chunk grid is compatible with any existing data in the store. This may involve deleting or rewriting existing chunks to match the new chunk grid. Use with caution!
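As a concrete illustration of the safety concern above: under a regular chunk grid, a stored chunk remains addressable only if its indices fall within the chunk-count bounds implied by the new shape. This std-only sketch assumes a regular grid; `chunk_grid_shape` and `chunk_still_valid` are illustrative helpers, not zarrs API:

```rust
/// Number of chunks along each dimension for a regular chunk grid
/// (ceiling division of the array shape by the chunk shape).
/// Assumes all chunk extents are nonzero, as Zarr requires.
fn chunk_grid_shape(array_shape: &[u64], chunk_shape: &[u64]) -> Vec<u64> {
    array_shape
        .iter()
        .zip(chunk_shape)
        .map(|(&a, &c)| (a + c - 1) / c)
        .collect()
}

/// Would the chunk at `chunk_indices` still be addressable after the
/// array shape / chunk shape change?
fn chunk_still_valid(chunk_indices: &[u64], array_shape: &[u64], chunk_shape: &[u64]) -> bool {
    chunk_indices
        .iter()
        .zip(chunk_grid_shape(array_shape, chunk_shape))
        .all(|(&i, n)| i < n)
}

fn main() {
    // An 8x8 array with 4x4 chunks has a 2x2 chunk grid; chunk [1, 1] exists.
    assert!(chunk_still_valid(&[1, 1], &[8, 8], &[4, 4]));
    // Shrinking to 4x8 leaves a 1x2 chunk grid; chunk [1, 1] is now stale
    // and should be erased or rewritten before further reads or writes.
    assert!(!chunk_still_valid(&[1, 1], &[4, 8], &[4, 4]));
}
```

A caller changing the grid could enumerate stored chunk keys and erase any chunk that fails this check before resuming I/O.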
pub fn dimensionality(&self) -> usize
Get the array dimensionality.
pub fn codecs(&self) -> Arc<CodecChain>
Get the codecs.
pub const fn chunk_grid(&self) -> &ChunkGrid
Get the chunk grid.
Examples found in repository:
13fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
14 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
15 use zarrs::array::{ArraySubset, ZARR_NAN_F32, codec, data_type};
16 use zarrs::node::Node;
17 use zarrs::storage::store;
18
19 // Create a store
20 // let path = tempfile::TempDir::new()?;
21 // let mut store: ReadableWritableListableStorage =
22 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
23 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
24 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
25 && arg1 == "--usage-log"
26 {
27 let log_writer = Arc::new(std::sync::Mutex::new(
28 // std::io::BufWriter::new(
29 std::io::stdout(),
30 // )
31 ));
32 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
33 chrono::Utc::now().format("[%T%.3f] ").to_string()
34 }));
35 }
36
37 // Create the root group
38 zarrs::group::GroupBuilder::new()
39 .build(store.clone(), "/")?
40 .store_metadata()?;
41
42 // Create a group with attributes
43 let group_path = "/group";
44 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
45 group
46 .attributes_mut()
47 .insert("foo".into(), serde_json::Value::String("bar".into()));
48 group.store_metadata()?;
49
50 println!(
51 "The group metadata is:\n{}\n",
52 group.metadata().to_string_pretty()
53 );
54
55 // Create an array
56 let array_path = "/group/array";
57 let array = zarrs::array::ArrayBuilder::new(
58 vec![8, 8], // array shape
59 MetadataV3::new_with_configuration(
60 "rectangular",
61 RectangularChunkGridConfiguration {
62 chunk_shape: vec![
63 vec![
64 NonZeroU64::new(1).unwrap(),
65 NonZeroU64::new(2).unwrap(),
66 NonZeroU64::new(3).unwrap(),
67 NonZeroU64::new(2).unwrap(),
68 ]
69 .into(),
70 NonZeroU64::new(4).unwrap().into(),
71 ], // chunk sizes
72 },
73 ),
74 data_type::float32(),
75 ZARR_NAN_F32,
76 )
77 .bytes_to_bytes_codecs(vec![
78 #[cfg(feature = "gzip")]
79 Arc::new(codec::GzipCodec::new(5)?),
80 ])
81 .dimension_names(["y", "x"].into())
82 // .storage_transformers(vec![].into())
83 .build(store.clone(), array_path)?;
84
85 // Write array metadata to store
86 array.store_metadata()?;
87
88 // Write some chunks (in parallel)
89 (0..4).into_par_iter().try_for_each(|i| {
90 let chunk_grid = array.chunk_grid();
91 let chunk_indices = vec![i, 0];
92 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
93 let chunk_array = ndarray::ArrayD::<f32>::from_elem(
94 chunk_shape
95 .iter()
96 .map(|u| u.get() as usize)
97 .collect::<Vec<_>>(),
98 i as f32,
99 );
100 array.store_chunk(&chunk_indices, chunk_array)
101 } else {
102 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
103 chunk_indices.to_vec(),
104 ))
105 }
106 })?;
107
108 println!(
109 "The array metadata is:\n{}\n",
110 array.metadata().to_string_pretty()
111 );
112
113 // Write a subset spanning multiple chunks, including updating chunks already written
114 array.store_array_subset(
115 &[3..6, 3..6], // start
116 ndarray::ArrayD::<f32>::from_shape_vec(
117 vec![3, 3],
118 vec![0.1f32, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
119 )?,
120 )?;
121
122 // Store elements directly, in this case set the 7th column to 123.0
123 array.store_array_subset(&[0..8, 6..7], &[123.0f32; 8])?;
124
125 // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
126 array.store_chunk_subset(
127 // chunk indices
128 &[3, 1],
129 // subset within chunk
130 &[1..2, 0..4],
131 &[-4.0f32; 4],
132 )?;
133
134 // Read the whole array
135 let data_all: ArrayD<f32> = array.retrieve_array_subset(&array.subset_all())?;
136 println!("The whole array is:\n{data_all}\n");
137
138 // Read a chunk back from the store
139 let chunk_indices = vec![1, 0];
140 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
141 println!("Chunk [1,0] is:\n{data_chunk}\n");
142
143 // Read the central 4x2 subset of the array
144 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
145 let data_4x2: ArrayD<f32> = array.retrieve_array_subset(&subset_4x2)?;
146 println!("The middle 4x2 subset is:\n{data_4x2}\n");
147
148 // Show the hierarchy
149 let node = Node::open(&store, "/").unwrap();
150 let tree = node.hierarchy_tree();
151 println!("The Zarr hierarchy tree is:\n{tree}");
152
153 Ok(())
154}
More examples
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 })?;
86
87 let subset_all = array.subset_all();
88 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
89 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
90
91 // Store multiple chunks
92 array.store_chunks(
93 &[1..2, 0..2],
94 &[
95 //
96 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
97 //
98 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
99 ],
100 )?;
101 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
102 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
103
104 // Write a subset spanning multiple chunks, including updating chunks already written
105 array.store_array_subset(
106 &[3..6, 3..6],
107 &[-3.3f32, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
108 )?;
109 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
110 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
111
112 // Store array subset
113 array.store_array_subset(
114 &[0..8, 6..7],
115 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
116 )?;
117 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
118 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
119
120 // Store chunk subset
121 array.store_chunk_subset(
122 // chunk indices
123 &[1, 1],
124 // subset within chunk
125 &[3..4, 0..4],
126 &[-7.4f32, -7.5, -7.6, -7.7],
127 )?;
128 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
129 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
130
131 // Erase a chunk
132 array.erase_chunk(&[0, 0])?;
133 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
134 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
135
136 // Read a chunk
137 let chunk_indices = vec![0, 1];
138 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
139 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
140
141 // Read chunks
142 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
143 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
144 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
145
146 // Retrieve an array subset
147 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
148 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
149 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
150
151 // Show the hierarchy
152 let node = Node::open(&store, "/").unwrap();
153 let tree = node.hierarchy_tree();
154 println!("hierarchy_tree:\n{}", tree);
155
156 Ok(())
157}
10fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
11 use std::sync::Arc;
12
13 use rayon::prelude::{IntoParallelIterator, ParallelIterator};
14 use zarrs::array::{ArraySubset, codec, data_type};
15 use zarrs::node::Node;
16 use zarrs::storage::store;
17
18 // Create a store
19 // let path = tempfile::TempDir::new()?;
20 // let mut store: ReadableWritableListableStorage =
21 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
22 // let mut store: ReadableWritableListableStorage = Arc::new(
23 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/sharded_array_write_read.zarr")?,
24 // );
25 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
26 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
27 && arg1 == "--usage-log"
28 {
29 let log_writer = Arc::new(std::sync::Mutex::new(
30 // std::io::BufWriter::new(
31 std::io::stdout(),
32 // )
33 ));
34 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
35 chrono::Utc::now().format("[%T%.3f] ").to_string()
36 }));
37 }
38
39 // Create the root group
40 zarrs::group::GroupBuilder::new()
41 .build(store.clone(), "/")?
42 .store_metadata()?;
43
44 // Create a group with attributes
45 let group_path = "/group";
46 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
47 group
48 .attributes_mut()
49 .insert("foo".into(), serde_json::Value::String("bar".into()));
50 group.store_metadata()?;
51
52 // Create an array
53 let array_path = "/group/array";
54 let subchunk_shape = vec![4, 4];
55 let array = zarrs::array::ArrayBuilder::new(
56 vec![8, 8], // array shape
57 vec![4, 8], // chunk (shard) shape
58 data_type::uint16(),
59 0u16,
60 )
61 .subchunk_shape(subchunk_shape.clone())
62 .bytes_to_bytes_codecs(vec![
63 #[cfg(feature = "gzip")]
64 Arc::new(codec::GzipCodec::new(5)?),
65 ])
66 .dimension_names(["y", "x"].into())
67 // .storage_transformers(vec![].into())
68 .build(store.clone(), array_path)?;
69
70 // Write array metadata to store
71 array.store_metadata()?;
72
73 // The array metadata is
74 println!(
75 "The array metadata is:\n{}\n",
76 array.metadata().to_string_pretty()
77 );
78
79 // Use default codec options (concurrency etc)
80 let options = CodecOptions::default();
81
82 // Write some shards (in parallel)
83 (0..2).into_par_iter().try_for_each(|s| {
84 let chunk_grid = array.chunk_grid();
85 let chunk_indices = vec![s, 0];
86 if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices)? {
87 let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
88 chunk_shape
89 .iter()
90 .map(|u| u.get() as usize)
91 .collect::<Vec<_>>(),
92 |ij| {
93 (s * chunk_shape[0].get() * chunk_shape[1].get()
94 + ij[0] as u64 * chunk_shape[1].get()
95 + ij[1] as u64) as u16
96 },
97 );
98 array.store_chunk(&chunk_indices, chunk_array)
99 } else {
100 Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
101 chunk_indices.to_vec(),
102 ))
103 }
104 })?;
105
106 // Read the whole array
107 let data_all: ArrayD<u16> = array.retrieve_array_subset(&array.subset_all())?;
108 println!("The whole array is:\n{data_all}\n");
109
110 // Read a shard back from the store
111 let shard_indices = vec![1, 0];
112 let data_shard: ArrayD<u16> = array.retrieve_chunk(&shard_indices)?;
113 println!("Shard [1,0] is:\n{data_shard}\n");
114
115 // Read a subchunk from the store
116 let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
117 let data_chunk: ArrayD<u16> = array.retrieve_array_subset(&subset_chunk_1_0)?;
118 println!("Chunk [1,0] is:\n{data_chunk}\n");
119
120 // Read the central 4x2 subset of the array
121 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
122 let data_4x2: ArrayD<u16> = array.retrieve_array_subset(&subset_4x2)?;
123 println!("The middle 4x2 subset is:\n{data_4x2}\n");
124
125 // Decode subchunks
126 // In some cases, it might be preferable to decode subchunks in a shard directly.
127 // If using the partial decoder, then the shard index will only be read once from the store.
128 let partial_decoder = array.partial_decoder(&[0, 0])?;
129 println!("Decoded subchunks:");
130 for subchunk_subset in [
131 ArraySubset::new_with_start_shape(vec![0, 0], subchunk_shape.clone())?,
132 ArraySubset::new_with_start_shape(vec![0, 4], subchunk_shape.clone())?,
133 ] {
134 println!("{subchunk_subset}");
135 let decoded_subchunk_bytes = partial_decoder.partial_decode(&subchunk_subset, &options)?;
136 let ndarray = bytes_to_ndarray::<u16>(
137 &subchunk_shape,
138 decoded_subchunk_bytes.into_fixed()?.into_owned(),
139 )?;
140 println!("{ndarray}\n");
141 }
142
143 // Show the hierarchy
144 let node = Node::open(&store, "/").unwrap();
145 let tree = node.hierarchy_tree();
146 println!("The Zarr hierarchy tree is:\n{}", tree);
147
148 println!(
149 "The keys in the store are:\n[{}]",
150 store.list().unwrap_or_default().iter().format(", ")
151 );
152
153 Ok(())
154}
8fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
12 use zarrs::node::Node;
13 use zarrs::storage::store;
14
15 // Create a store
16 // let path = tempfile::TempDir::new()?;
17 // let mut store: ReadableWritableListableStorage =
18 // Arc::new(zarrs::filesystem::FilesystemStore::new(path.path())?);
19 // let mut store: ReadableWritableListableStorage = Arc::new(
20 // zarrs::filesystem::FilesystemStore::new("zarrs/tests/data/array_write_read.zarr")?,
21 // );
22 let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
23 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
24 && arg1 == "--usage-log"
25 {
26 let log_writer = Arc::new(std::sync::Mutex::new(
27 // std::io::BufWriter::new(
28 std::io::stdout(),
29 // )
30 ));
31 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
32 chrono::Utc::now().format("[%T%.3f] ").to_string()
33 }));
34 }
35
36 // Create the root group
37 zarrs::group::GroupBuilder::new()
38 .build(store.clone(), "/")?
39 .store_metadata()?;
40
41 // Create a group with attributes
42 let group_path = "/group";
43 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
44 group
45 .attributes_mut()
46 .insert("foo".into(), serde_json::Value::String("bar".into()));
47 group.store_metadata()?;
48
49 println!(
50 "The group metadata is:\n{}\n",
51 group.metadata().to_string_pretty()
52 );
53
54 // Create an array
55 let array_path = "/group/array";
56 let array = zarrs::array::ArrayBuilder::new(
57 vec![8, 8], // array shape
58 vec![4, 4], // regular chunk shape
59 data_type::float32(),
60 ZARR_NAN_F32,
61 )
62 // .bytes_to_bytes_codecs(vec![]) // uncompressed
63 .dimension_names(["y", "x"].into())
64 // .storage_transformers(vec![].into())
65 .build(store.clone(), array_path)?;
66
67 // Write array metadata to store
68 array.store_metadata()?;
69
70 println!(
71 "The array metadata is:\n{}\n",
72 array.metadata().to_string_pretty()
73 );
74
75 // Write some chunks
76 (0..2).into_par_iter().try_for_each(|i| {
77 let chunk_indices: Vec<u64> = vec![0, i];
78 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
79 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
80 })?;
81 array.store_chunk(
82 &chunk_indices,
83 ArrayD::<f32>::from_shape_vec(
84 chunk_subset.shape_usize(),
85 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
86 )
87 .unwrap(),
88 )
89 })?;
90
91 let subset_all = array.subset_all();
92 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
93 println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
94
95 // Store multiple chunks
96 let ndarray_chunks: Array2<f32> = array![
97 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
98 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
99 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
100 [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
101 ];
102 array.store_chunks(&[1..2, 0..2], ndarray_chunks)?;
103 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
104 println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
105
106 // Write a subset spanning multiple chunks, including updating chunks already written
107 let ndarray_subset: Array2<f32> =
108 array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
109 array.store_array_subset(&[3..6, 3..6], ndarray_subset)?;
110 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
111 println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
112
113 // Store array subset
114 let ndarray_subset: Array2<f32> = array![
115 [-0.6],
116 [-1.6],
117 [-2.6],
118 [-3.6],
119 [-4.6],
120 [-5.6],
121 [-6.6],
122 [-7.6],
123 ];
124 array.store_array_subset(&[0..8, 6..7], ndarray_subset)?;
125 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
126 println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
127
128 // Store chunk subset
129 let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
130 array.store_chunk_subset(
131 // chunk indices
132 &[1, 1],
133 // subset within chunk
134 &[3..4, 0..4],
135 ndarray_chunk_subset,
136 )?;
137 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
138 println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
139
140 // Erase a chunk
141 array.erase_chunk(&[0, 0])?;
142 let data_all: ArrayD<f32> = array.retrieve_array_subset(&subset_all)?;
143 println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
144
145 // Read a chunk
146 let chunk_indices = vec![0, 1];
147 let data_chunk: ArrayD<f32> = array.retrieve_chunk(&chunk_indices)?;
148 println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
149
150 // Read chunks
151 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
152 let data_chunks: ArrayD<f32> = array.retrieve_chunks(&chunks)?;
153 println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
154
155 // Retrieve an array subset
156 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
157 let data_subset: ArrayD<f32> = array.retrieve_array_subset(&subset)?;
158 println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
159
160 // Show the hierarchy
161 let node = Node::open(&store, "/").unwrap();
162 let tree = node.hierarchy_tree();
163 println!("hierarchy_tree:\n{}", tree);
164
165 Ok(())
166}
8async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
9 use std::sync::Arc;
10
11 use futures::StreamExt;
12 use zarrs::array::{ArraySubset, ZARR_NAN_F32, data_type};
13 use zarrs::node::Node;
14
15 // Create a store
16 let mut store: AsyncReadableWritableListableStorage = Arc::new(
17 zarrs_object_store::AsyncObjectStore::new(object_store::memory::InMemory::new()),
18 );
19 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
20 && arg1 == "--usage-log"
21 {
22 let log_writer = Arc::new(std::sync::Mutex::new(
23 // std::io::BufWriter::new(
24 std::io::stdout(),
25 // )
26 ));
27 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
28 chrono::Utc::now().format("[%T%.3f] ").to_string()
29 }));
30 }
31
32 // Create the root group
33 zarrs::group::GroupBuilder::new()
34 .build(store.clone(), "/")?
35 .async_store_metadata()
36 .await?;
37
38 // Create a group with attributes
39 let group_path = "/group";
40 let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
41 group
42 .attributes_mut()
43 .insert("foo".into(), serde_json::Value::String("bar".into()));
44 group.async_store_metadata().await?;
45
46 println!(
47 "The group metadata is:\n{}\n",
48 group.metadata().to_string_pretty()
49 );
50
51 // Create an array
52 let array_path = "/group/array";
53 let array = zarrs::array::ArrayBuilder::new(
54 vec![8, 8], // array shape
55 vec![4, 4], // regular chunk shape
56 data_type::float32(),
57 ZARR_NAN_F32,
58 )
59 // .bytes_to_bytes_codecs(vec![]) // uncompressed
60 .dimension_names(["y", "x"].into())
61 // .storage_transformers(vec![].into())
62 .build_arc(store.clone(), array_path)?;
63
64 // Write array metadata to store
65 array.async_store_metadata().await?;
66
67 println!(
68 "The array metadata is:\n{}\n",
69 array.metadata().to_string_pretty()
70 );
71
72 // Write some chunks
73 let store_chunk = |i: u64| {
74 let array = array.clone();
75 async move {
76 let chunk_indices: Vec<u64> = vec![0, i];
77 let chunk_subset = array.chunk_grid().subset(&chunk_indices)?.ok_or_else(|| {
78 zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
79 })?;
80 array
81 .async_store_chunk(
82 &chunk_indices,
83 vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
84 )
85 .await
86 }
87 };
88 futures::stream::iter(0..2)
89 .map(Ok)
90 .try_for_each_concurrent(None, store_chunk)
91 .await?;
92
93 let subset_all = array.subset_all();
94 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
95 println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
96
97 // Store multiple chunks
98 array
99 .async_store_chunks(
100 &[1..2, 0..2],
101 &[
102 //
103 1.0f32, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
104 //
105 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
106 ],
107 )
108 .await?;
109 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
110 println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
111
112 // Write a subset spanning multiple chunks, including updating chunks already written
113 array
114 .async_store_array_subset(
115 &[3..6, 3..6],
116 &[-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
117 )
118 .await?;
119 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
120 println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
121
122 // Store array subset
123 array
124 .async_store_array_subset(
125 &[0..8, 6..7],
126 &[-0.6f32, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
127 )
128 .await?;
129 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
130 println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
131
132 // Store chunk subset
133 array
134 .async_store_chunk_subset(
135 // chunk indices
136 &[1, 1],
137 // subset within chunk
138 &[3..4, 0..4],
139 &[-7.4f32, -7.5, -7.6, -7.7],
140 )
141 .await?;
142 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
143 println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
144
145 // Erase a chunk
146 array.async_erase_chunk(&[0, 0]).await?;
147 let data_all: ArrayD<f32> = array.async_retrieve_array_subset(&subset_all).await?;
148 println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
149
150 // Read a chunk
151 let chunk_indices = vec![0, 1];
152 let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
153 println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
154
155 // Read chunks
156 let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
157 let data_chunks: ArrayD<f32> = array.async_retrieve_chunks(&chunks).await?;
158 println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
159
160 // Retrieve an array subset
161 let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
162 let data_subset: ArrayD<f32> = array.async_retrieve_array_subset(&subset).await?;
163 println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
164
165 // Show the hierarchy
166 let node = Node::async_open(store, "/").await.unwrap();
167 let tree = node.hierarchy_tree();
168 println!("hierarchy_tree:\n{}", tree);
169
170 Ok(())
171}
pub const fn chunk_key_encoding(&self) -> &ChunkKeyEncoding
Get the chunk key encoding.
pub const fn storage_transformers(&self) -> &StorageTransformerChain
Get the storage transformers.
pub const fn dimension_names(&self) -> &Option<Vec<DimensionName>>
Get the dimension names.
pub fn set_dimension_names(
    &mut self,
    dimension_names: Option<Vec<DimensionName>>,
) -> &mut Self
Set the dimension names.
pub const fn attributes(&self) -> &Map<String, Value>
Get the attributes.
pub fn attributes_mut(&mut self) -> &mut Map<String, Value>
Mutably borrow the array attributes.
pub fn metadata(&self) -> &ArrayMetadata
Return the underlying array metadata.
Examples found in repository
157fn main() {
158 let store = std::sync::Arc::new(MemoryStore::default());
159 let array_path = "/array";
160 let array = ArrayBuilder::new(
161 vec![4, 1], // array shape
162 vec![3, 1], // regular chunk shape
163 Arc::new(CustomDataTypeVariableSize),
164 [],
165 )
166 .array_to_array_codecs(vec![
167 #[cfg(feature = "transpose")]
168 Arc::new(zarrs::array::codec::TransposeCodec::new(
169 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
170 )),
171 ])
172 .bytes_to_bytes_codecs(vec![
173 #[cfg(feature = "gzip")]
174 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
175 #[cfg(feature = "crc32c")]
176 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
177 ])
178 // .storage_transformers(vec![].into())
179 .build(store, array_path)
180 .unwrap();
181 println!("{}", array.metadata().to_string_pretty());
182
183 let data = [
184 CustomDataTypeVariableSizeElement::from(Some(1.0)),
185 CustomDataTypeVariableSizeElement::from(None),
186 CustomDataTypeVariableSizeElement::from(Some(3.0)),
187 ];
188 array.store_chunk(&[0, 0], &data).unwrap();
189
190 let data: Vec<CustomDataTypeVariableSizeElement> =
191 array.retrieve_array_subset(&array.subset_all()).unwrap();
192
193 assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
194 assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
195 assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
196 assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));
197
198 println!("{data:#?}");
199}
More examples
280fn main() {
281 let store = std::sync::Arc::new(MemoryStore::default());
282 let array_path = "/array";
283 let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
284 let array = ArrayBuilder::new(
285 vec![4, 1], // array shape
286 vec![2, 1], // regular chunk shape
287 Arc::new(CustomDataTypeFixedSize),
288 FillValue::new(fill_value.to_ne_bytes().to_vec()),
289 )
290 .array_to_array_codecs(vec![
291 #[cfg(feature = "transpose")]
292 Arc::new(zarrs::array::codec::TransposeCodec::new(
293 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
294 )),
295 ])
296 .bytes_to_bytes_codecs(vec![
297 #[cfg(feature = "gzip")]
298 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
299 #[cfg(feature = "crc32c")]
300 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
301 ])
302 // .storage_transformers(vec![].into())
303 .build(store, array_path)
304 .unwrap();
305 println!("{}", array.metadata().to_string_pretty());
306
307 let data = [
308 CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
309 CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
310 ];
311 array.store_chunk(&[0, 0], &data).unwrap();
312
313 let data: Vec<CustomDataTypeFixedSizeElement> =
314 array.retrieve_array_subset(&array.subset_all()).unwrap();
315
316 assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
317 assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
318 assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
319 assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
320
321 println!("{data:#?}");
322}
192fn main() {
193 let store = std::sync::Arc::new(MemoryStore::default());
194 let array_path = "/array";
195 let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
196 let array = ArrayBuilder::new(
197 vec![4096, 1], // array shape
198 vec![5, 1], // regular chunk shape
199 Arc::new(CustomDataTypeUInt12),
200 FillValue::new(fill_value.into_le_bytes().to_vec()),
201 )
202 .array_to_array_codecs(vec![
203 #[cfg(feature = "transpose")]
204 Arc::new(zarrs::array::codec::TransposeCodec::new(
205 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
206 )),
207 ])
208 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
209 .bytes_to_bytes_codecs(vec![
210 #[cfg(feature = "gzip")]
211 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
212 #[cfg(feature = "crc32c")]
213 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
214 ])
215 // .storage_transformers(vec![].into())
216 .build(store, array_path)
217 .unwrap();
218 println!("{}", array.metadata().to_string_pretty());
219
220 let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
221 .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
222 .collect();
223
224 array
225 .store_array_subset(&array.subset_all(), &data)
226 .unwrap();
227
228 let mut data: Vec<CustomDataTypeUInt12Element> =
229 array.retrieve_array_subset(&array.subset_all()).unwrap();
230
231 for (i, d) in data.drain(0..4096).enumerate() {
232 let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
233 assert_eq!(d, element);
234 let element_pd: Vec<CustomDataTypeUInt12Element> = array
235 .retrieve_array_subset(&[(i as u64)..i as u64 + 1, 0..1])
236 .unwrap();
237 assert_eq!(element_pd[0], element);
238 }
239}
203fn main() {
204 let store = std::sync::Arc::new(MemoryStore::default());
205 let array_path = "/array";
206 let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
207 let array = ArrayBuilder::new(
208 vec![6, 1], // array shape
209 vec![5, 1], // regular chunk shape
210 Arc::new(CustomDataTypeFloat8e3m4),
211 FillValue::new(fill_value.into_ne_bytes().to_vec()),
212 )
213 .array_to_array_codecs(vec![
214 #[cfg(feature = "transpose")]
215 Arc::new(zarrs::array::codec::TransposeCodec::new(
216 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
217 )),
218 ])
219 .bytes_to_bytes_codecs(vec![
220 #[cfg(feature = "gzip")]
221 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
222 #[cfg(feature = "crc32c")]
223 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
224 ])
225 // .storage_transformers(vec![].into())
226 .build(store, array_path)
227 .unwrap();
228 println!("{}", array.metadata().to_string_pretty());
229
230 let data = [
231 CustomDataTypeFloat8e3m4Element::from(2.34),
232 CustomDataTypeFloat8e3m4Element::from(3.45),
233 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
234 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
235 CustomDataTypeFloat8e3m4Element::from(f32::NAN),
236 ];
237 array.store_chunk(&[0, 0], &data).unwrap();
238
239 let data: Vec<CustomDataTypeFloat8e3m4Element> =
240 array.retrieve_array_subset(&array.subset_all()).unwrap();
241
242 for f in &data {
243 println!(
244 "float8_e3m4: {:08b} f32: {}",
245 f.into_ne_bytes()[0],
246 f.into_f32()
247 );
248 }
249
250 assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
251 assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
252 assert_eq!(
253 data[2],
254 CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
255 );
256 assert_eq!(
257 data[3],
258 CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
259 );
260 assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
261 assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
262}
15async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
16 const HTTP_URL: &str =
17 "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
18 const ARRAY_PATH: &str = "/group/array";
19
20 // Create a HTTP store
21 let mut store: AsyncReadableStorage = match backend {
22 // Backend::OpenDAL => {
23 // let builder = opendal::services::Http::default().endpoint(HTTP_URL);
24 // let operator = opendal::Operator::new(builder)?.finish();
25 // Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
26 // }
27 Backend::ObjectStore => {
28 let options = object_store::ClientOptions::new().with_allow_http(true);
29 let store = object_store::http::HttpBuilder::new()
30 .with_url(HTTP_URL)
31 .with_client_options(options)
32 .build()?;
33 Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
34 }
35 };
36 if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
37 && arg1 == "--usage-log"
38 {
39 let log_writer = Arc::new(std::sync::Mutex::new(
40 // std::io::BufWriter::new(
41 std::io::stdout(),
42 // )
43 ));
44 store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
45 chrono::Utc::now().format("[%T%.3f] ").to_string()
46 }));
47 }
48
49 // Init the existing array, reading metadata
50 let array = Array::async_open(store, ARRAY_PATH).await?;
51
52 println!(
53 "The array metadata is:\n{}\n",
54 array.metadata().to_string_pretty()
55 );
56
57 // Read the whole array
58 let data_all: ArrayD<f32> = array
59 .async_retrieve_array_subset(&array.subset_all())
60 .await?;
61 println!("The whole array is:\n{data_all}\n");
62
63 // Read a chunk back from the store
64 let chunk_indices = vec![1, 0];
65 let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
66 println!("Chunk [1,0] is:\n{data_chunk}\n");
67
68 // Read the central 4x2 subset of the array
69 let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
70 let data_4x2: ArrayD<f32> = array.async_retrieve_array_subset(&subset_4x2).await?;
71 println!("The middle 4x2 subset is:\n{data_4x2}\n");
72
73 Ok(())
74}
194fn main() {
195 let store = std::sync::Arc::new(MemoryStore::default());
196 let array_path = "/array";
197 let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
198 let array = ArrayBuilder::new(
199 vec![6, 1], // array shape
200 vec![5, 1], // regular chunk shape
201 Arc::new(CustomDataTypeUInt4),
202 FillValue::new(fill_value.into_ne_bytes().to_vec()),
203 )
204 .array_to_array_codecs(vec![
205 #[cfg(feature = "transpose")]
206 Arc::new(zarrs::array::codec::TransposeCodec::new(
207 zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
208 )),
209 ])
210 .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
211 .bytes_to_bytes_codecs(vec![
212 #[cfg(feature = "gzip")]
213 Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
214 #[cfg(feature = "crc32c")]
215 Arc::new(zarrs::array::codec::Crc32cCodec::new()),
216 ])
217 // .storage_transformers(vec![].into())
218 .build(store, array_path)
219 .unwrap();
220 println!("{}", array.metadata().to_string_pretty());
221
222 let data = [
223 CustomDataTypeUInt4Element::try_from(1).unwrap(),
224 CustomDataTypeUInt4Element::try_from(2).unwrap(),
225 CustomDataTypeUInt4Element::try_from(3).unwrap(),
226 CustomDataTypeUInt4Element::try_from(4).unwrap(),
227 CustomDataTypeUInt4Element::try_from(5).unwrap(),
228 ];
229 array.store_chunk(&[0, 0], &data).unwrap();
230
231 let data: Vec<CustomDataTypeUInt4Element> =
232 array.retrieve_array_subset(&array.subset_all()).unwrap();
233
234 for f in &data {
235 println!("uint4: {:08b} u8: {}", f.into_u8(), f.into_u8());
236 }
237
238 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
239 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
240 assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
241 assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
242 assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
243 assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());
244
245 let data: Vec<CustomDataTypeUInt4Element> = array.retrieve_array_subset(&[1..3, 0..1]).unwrap();
246 assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
247 assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
248}
- examples/data_type_optional_nested.rs
- examples/sync_http_array_read.rs
- examples/array_write_read_string.rs
- examples/rectangular_array_write_read.rs
- examples/array_write_read.rs
- examples/sharded_array_write_read.rs
- examples/data_type_optional.rs
- examples/array_write_read_ndarray.rs
- examples/async_array_write_read.rs
pub fn metadata_opt(&self, options: &ArrayMetadataOptions) -> ArrayMetadata
Return a new ArrayMetadata with ArrayMetadataOptions applied.
This method is used internally by Array::store_metadata and Array::store_metadata_opt.
pub fn builder(&self) -> ArrayBuilder
Create an array builder matching the parameters of this array.
pub fn chunk_grid_shape(&self) -> &[u64]
Return the shape of the chunk grid (i.e., the number of chunks along each dimension).
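For a regular chunk grid, the grid shape follows from ceiling division of the array shape by the chunk shape. The sketch below is only conceptual (the helper name is hypothetical, not part of the zarrs API); zarrs computes this internally via the chunk grid.

```rust
// Hypothetical helper illustrating how a regular chunk grid's shape
// relates to the array shape: ceiling division per dimension, so
// partial edge chunks still count as chunks.
fn regular_chunk_grid_shape(array_shape: &[u64], chunk_shape: &[u64]) -> Vec<u64> {
    array_shape
        .iter()
        .zip(chunk_shape)
        .map(|(a, c)| a.div_ceil(*c))
        .collect()
}

fn main() {
    // An 8x8 array with 4x4 chunks has a 2x2 chunk grid (4 chunks total).
    assert_eq!(regular_chunk_grid_shape(&[8, 8], &[4, 4]), vec![2, 2]);
    // A 10x8 array with 4x4 chunks has a 3x2 grid; the last row of chunks is partial.
    assert_eq!(regular_chunk_grid_shape(&[10, 8], &[4, 4]), vec![3, 2]);
    println!("ok");
}
```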
Examples found in repository
18fn main() -> Result<(), Box<dyn std::error::Error>> {
19 // Create an in-memory store
20 // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
21 // "zarrs/tests/data/v3/array_optional.zarr",
22 // )?);
23 let store = Arc::new(zarrs::storage::store::MemoryStore::new());
24
25 // Build the codec chains for the optional codec
26 let array = ArrayBuilder::new(
27 vec![4, 4], // 4x4 array
28 vec![2, 2], // 2x2 chunks
29 data_type::uint8().to_optional(), // Optional uint8
30 FillValue::new_optional_null(), // Null fill value: [0]
31 )
32 .dimension_names(["y", "x"].into())
33 .attributes(
34 serde_json::json!({
35 "description": r#"A 4x4 array of optional uint8 values with some missing data.
36N marks missing (`None`=`null`) values:
37 0 N 2 3
38 N 5 N 7
39 8 9 N N
4012 N N N"#,
41 })
42 .as_object()
43 .unwrap()
44 .clone(),
45 )
46 .build(store.clone(), "/array")?;
47 array.store_metadata_opt(
48 &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
49 )?;
50
51 println!("Array metadata:\n{}", array.metadata().to_string_pretty());
52
53 // Create some data with missing values
54 let data = ndarray::array![
55 [Some(0u8), None, Some(2u8), Some(3u8)],
56 [None, Some(5u8), None, Some(7u8)],
57 [Some(8u8), Some(9u8), None, None],
58 [Some(12u8), None, None, None],
59 ]
60 .into_dyn();
61
62 // Write the data
63 array.store_array_subset(&array.subset_all(), data.clone())?;
64
65 // Read back the data
66 let data_read: ArrayD<Option<u8>> = array.retrieve_array_subset(&array.subset_all())?;
67
68 // Verify data integrity
69 assert_eq!(data, data_read);
70
71 // Display the data in a grid format
72 println!("Data grid, N marks missing (`None`=`null`) values");
73 println!(" 0 1 2 3");
74 for y in 0..4 {
75 print!("{} ", y);
76 for x in 0..4 {
77 match data_read[[y, x]] {
78 Some(value) => print!("{:2} ", value),
79 None => print!(" N "),
80 }
81 }
82 println!();
83 }
84
85 // Print the raw bytes in all chunks
86 println!("Raw bytes in all chunks:");
87 let chunk_grid_shape = array.chunk_grid_shape();
88 for chunk_y in 0..chunk_grid_shape[0] {
89 for chunk_x in 0..chunk_grid_shape[1] {
90 let chunk_indices = vec![chunk_y, chunk_x];
91 let chunk_key = array.chunk_key(&chunk_indices);
92 println!(" Chunk [{}, {}] (key: {}):", chunk_y, chunk_x, chunk_key);
93
94 if let Some(chunk_bytes) = store.get(&chunk_key)? {
95 println!(" Size: {} bytes", chunk_bytes.len());
96
97 if chunk_bytes.len() >= 16 {
98 // Parse first 8 bytes as mask size (little-endian u64)
99 let mask_size = u64::from_le_bytes([
100 chunk_bytes[0],
101 chunk_bytes[1],
102 chunk_bytes[2],
103 chunk_bytes[3],
104 chunk_bytes[4],
105 chunk_bytes[5],
106 chunk_bytes[6],
107 chunk_bytes[7],
108 ]) as usize;
109
110 // Parse second 8 bytes as data size (little-endian u64)
111 let data_size = u64::from_le_bytes([
112 chunk_bytes[8],
113 chunk_bytes[9],
114 chunk_bytes[10],
115 chunk_bytes[11],
116 chunk_bytes[12],
117 chunk_bytes[13],
118 chunk_bytes[14],
119 chunk_bytes[15],
120 ]) as usize;
121
122 // Display mask size header with raw bytes
123 print!(" Mask size: 0b");
124 for byte in &chunk_bytes[0..8] {
125 print!("{:08b}", byte);
126 }
127 println!(" -> {} bytes", mask_size);
128
129 // Display data size header with raw bytes
130 print!(" Data size: 0b");
131 for byte in &chunk_bytes[8..16] {
132 print!("{:08b}", byte);
133 }
134 println!(" -> {} bytes", data_size);
135
136 // Show mask and data sections separately
137 if chunk_bytes.len() >= 16 + mask_size + data_size {
138 let mask_start = 16;
139 let data_start = 16 + mask_size;
140
141 // Show mask as binary
142 if mask_size > 0 {
143 println!(" Mask (binary):");
144 print!(" ");
145 for byte in &chunk_bytes[mask_start..mask_start + mask_size] {
146 print!("0b{:08b} ", byte);
147 }
148 println!();
149 }
150
151 // Show data as binary
152 if data_size > 0 {
153 println!(" Data (binary):");
154 print!(" ");
155 for byte in &chunk_bytes[data_start..data_start + data_size] {
156 print!("0b{:08b} ", byte);
157 }
158 println!();
159 }
160 }
161 } else {
162 panic!(" Chunk too small to parse headers");
163 }
164 } else {
165 println!(" Chunk missing (fill value chunk)");
166 }
167 }
168 }
169 Ok(())
170}
pub fn chunk_key(&self, chunk_indices: &[u64]) -> StoreKey
Return the StoreKey of the chunk at chunk_indices.
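With the Zarr V3 "default" chunk key encoding (the `/` separator), chunk indices map to a key suffix of the form `c/<i0>/<i1>/…`; the real `StoreKey` also incorporates the array's node path. A conceptual sketch, assuming the default encoding (the helper name is hypothetical):

```rust
// Conceptual sketch of the Zarr V3 "default" chunk key encoding with
// the "/" separator. The actual StoreKey produced by zarrs additionally
// includes the array's path within the hierarchy.
fn default_v3_chunk_key_suffix(chunk_indices: &[u64]) -> String {
    let parts: Vec<String> = chunk_indices.iter().map(|i| i.to_string()).collect();
    format!("c/{}", parts.join("/"))
}

fn main() {
    assert_eq!(default_v3_chunk_key_suffix(&[1, 0]), "c/1/0");
    assert_eq!(default_v3_chunk_key_suffix(&[0, 0]), "c/0/0");
    println!("ok");
}
```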
Examples found in repository
18fn main() -> Result<(), Box<dyn std::error::Error>> {
19 // Create an in-memory store
20 // let store = Arc::new(zarrs::filesystem::FilesystemStore::new(
21 // "zarrs/tests/data/v3/array_optional.zarr",
22 // )?);
23 let store = Arc::new(zarrs::storage::store::MemoryStore::new());
24
25 // Build the codec chains for the optional codec
26 let array = ArrayBuilder::new(
27 vec![4, 4], // 4x4 array
28 vec![2, 2], // 2x2 chunks
29 data_type::uint8().to_optional(), // Optional uint8
30 FillValue::new_optional_null(), // Null fill value: [0]
31 )
32 .dimension_names(["y", "x"].into())
33 .attributes(
34 serde_json::json!({
35 "description": r#"A 4x4 array of optional uint8 values with some missing data.
36N marks missing (`None`=`null`) values:
37 0 N 2 3
38 N 5 N 7
39 8 9 N N
4012 N N N"#,
41 })
42 .as_object()
43 .unwrap()
44 .clone(),
45 )
46 .build(store.clone(), "/array")?;
    array.store_metadata_opt(
        &zarrs::array::ArrayMetadataOptions::default().with_include_zarrs_metadata(false),
    )?;

    println!("Array metadata:\n{}", array.metadata().to_string_pretty());

    // Create some data with missing values
    let data = ndarray::array![
        [Some(0u8), None, Some(2u8), Some(3u8)],
        [None, Some(5u8), None, Some(7u8)],
        [Some(8u8), Some(9u8), None, None],
        [Some(12u8), None, None, None],
    ]
    .into_dyn();

    // Write the data
    array.store_array_subset(&array.subset_all(), data.clone())?;

    // Read back the data
    let data_read: ArrayD<Option<u8>> = array.retrieve_array_subset(&array.subset_all())?;

    // Verify data integrity
    assert_eq!(data, data_read);

    // Display the data in a grid format
    println!("Data grid, N marks missing (`None`=`null`) values");
    println!(" 0 1 2 3");
    for y in 0..4 {
        print!("{} ", y);
        for x in 0..4 {
            match data_read[[y, x]] {
                Some(value) => print!("{:2} ", value),
                None => print!(" N "),
            }
        }
        println!();
    }

    // Print the raw bytes in all chunks
    println!("Raw bytes in all chunks:");
    let chunk_grid_shape = array.chunk_grid_shape();
    for chunk_y in 0..chunk_grid_shape[0] {
        for chunk_x in 0..chunk_grid_shape[1] {
            let chunk_indices = vec![chunk_y, chunk_x];
            let chunk_key = array.chunk_key(&chunk_indices);
            println!(" Chunk [{}, {}] (key: {}):", chunk_y, chunk_x, chunk_key);

            if let Some(chunk_bytes) = store.get(&chunk_key)? {
                println!(" Size: {} bytes", chunk_bytes.len());

                if chunk_bytes.len() >= 16 {
                    // Parse the first 8 bytes as the mask size (little-endian u64)
                    let mask_size =
                        u64::from_le_bytes(chunk_bytes[0..8].try_into().unwrap()) as usize;

                    // Parse the second 8 bytes as the data size (little-endian u64)
                    let data_size =
                        u64::from_le_bytes(chunk_bytes[8..16].try_into().unwrap()) as usize;

                    // Display mask size header with raw bytes
                    print!(" Mask size: 0b");
                    for byte in &chunk_bytes[0..8] {
                        print!("{:08b}", byte);
                    }
                    println!(" -> {} bytes", mask_size);

                    // Display data size header with raw bytes
                    print!(" Data size: 0b");
                    for byte in &chunk_bytes[8..16] {
                        print!("{:08b}", byte);
                    }
                    println!(" -> {} bytes", data_size);

                    // Show mask and data sections separately
                    if chunk_bytes.len() >= 16 + mask_size + data_size {
                        let mask_start = 16;
                        let data_start = 16 + mask_size;

                        // Show mask as binary
                        if mask_size > 0 {
                            println!(" Mask (binary):");
                            print!(" ");
                            for byte in &chunk_bytes[mask_start..mask_start + mask_size] {
                                print!("0b{:08b} ", byte);
                            }
                            println!();
                        }

                        // Show data as binary
                        if data_size > 0 {
                            println!(" Data (binary):");
                            print!(" ");
                            for byte in &chunk_bytes[data_start..data_start + data_size] {
                                print!("0b{:08b} ", byte);
                            }
                            println!();
                        }
                    }
                } else {
                    panic!(" Chunk too small to parse headers");
                }
            } else {
                println!(" Chunk missing (fill value chunk)");
            }
        }
    }
    Ok(())
}
Sourcepub fn chunk_origin(
&self,
chunk_indices: &[u64],
) -> Result<ArrayIndices, ArrayError>
Return the origin of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
Sourcepub fn chunk_shape(
&self,
chunk_indices: &[u64],
) -> Result<ChunkShape, ArrayError>
Return the shape of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
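For the common regular chunk grid, the values these methods return follow simple arithmetic: a chunk's origin is its indices scaled elementwise by the chunk shape, and every interior chunk shares the configured chunk shape. A minimal illustration of that relationship in plain Rust (the `chunk_origin` helper here is a hypothetical stand-in for exposition, not the zarrs API, which additionally validates the indices against the chunk grid):

```rust
/// Hypothetical helper illustrating regular-chunk-grid semantics:
/// the origin of a chunk is its indices scaled by the chunk shape.
fn chunk_origin(chunk_indices: &[u64], chunk_shape: &[u64]) -> Vec<u64> {
    chunk_indices
        .iter()
        .zip(chunk_shape)
        .map(|(index, extent)| index * extent)
        .collect()
}

fn main() {
    // A 2D array with 4x4 chunks: chunk [1, 2] starts at element [4, 8].
    assert_eq!(chunk_origin(&[1, 2], &[4, 4]), vec![4, 8]);
    // Chunk [0, 0] always starts at the array origin.
    assert_eq!(chunk_origin(&[0, 0], &[3, 1]), vec![0, 0]);
}
```

Non-regular chunk grids (e.g. rectangular grids) do not follow this formula, which is why the real methods consult the chunk grid and can fail with `InvalidChunkGridIndicesError`.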
Sourcepub fn subset_all(&self) -> ArraySubset
Return an array subset that spans the entire array.
Examples found in repository?
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let array = ArrayBuilder::new(
        vec![4, 1], // array shape
        vec![3, 1], // regular chunk shape
        Arc::new(CustomDataTypeVariableSize),
        [],
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data = [
        CustomDataTypeVariableSizeElement::from(Some(1.0)),
        CustomDataTypeVariableSizeElement::from(None),
        CustomDataTypeVariableSizeElement::from(Some(3.0)),
    ];
    array.store_chunk(&[0, 0], &data).unwrap();

    let data: Vec<CustomDataTypeVariableSizeElement> =
        array.retrieve_array_subset(&array.subset_all()).unwrap();

    assert_eq!(data[0], CustomDataTypeVariableSizeElement::from(Some(1.0)));
    assert_eq!(data[1], CustomDataTypeVariableSizeElement::from(None));
    assert_eq!(data[2], CustomDataTypeVariableSizeElement::from(Some(3.0)));
    assert_eq!(data[3], CustomDataTypeVariableSizeElement::from(None));

    println!("{data:#?}");
}
More examples
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let fill_value = CustomDataTypeFixedSizeElement { x: 1, y: 2.3 };
    let array = ArrayBuilder::new(
        vec![4, 1], // array shape
        vec![2, 1], // regular chunk shape
        Arc::new(CustomDataTypeFixedSize),
        FillValue::new(fill_value.to_ne_bytes().to_vec()),
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data = [
        CustomDataTypeFixedSizeElement { x: 3, y: 4.5 },
        CustomDataTypeFixedSizeElement { x: 6, y: 7.8 },
    ];
    array.store_chunk(&[0, 0], &data).unwrap();

    let data: Vec<CustomDataTypeFixedSizeElement> =
        array.retrieve_array_subset(&array.subset_all()).unwrap();

    assert_eq!(data[0], CustomDataTypeFixedSizeElement { x: 3, y: 4.5 });
    assert_eq!(data[1], CustomDataTypeFixedSizeElement { x: 6, y: 7.8 });
    assert_eq!(data[2], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });
    assert_eq!(data[3], CustomDataTypeFixedSizeElement { x: 1, y: 2.3 });

    println!("{data:#?}");
}
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let fill_value = CustomDataTypeUInt12Element::try_from(15).unwrap();
    let array = ArrayBuilder::new(
        vec![4096, 1], // array shape
        vec![5, 1], // regular chunk shape
        Arc::new(CustomDataTypeUInt12),
        FillValue::new(fill_value.into_le_bytes().to_vec()),
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data: Vec<CustomDataTypeUInt12Element> = (0..4096)
        .map(|i| CustomDataTypeUInt12Element::try_from(i).unwrap())
        .collect();

    array
        .store_array_subset(&array.subset_all(), &data)
        .unwrap();

    let mut data: Vec<CustomDataTypeUInt12Element> =
        array.retrieve_array_subset(&array.subset_all()).unwrap();

    for (i, d) in data.drain(0..4096).enumerate() {
        let element = CustomDataTypeUInt12Element::try_from(i as u64).unwrap();
        assert_eq!(d, element);
        let element_pd: Vec<CustomDataTypeUInt12Element> = array
            .retrieve_array_subset(&[(i as u64)..i as u64 + 1, 0..1])
            .unwrap();
        assert_eq!(element_pd[0], element);
    }
}
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let fill_value = CustomDataTypeFloat8e3m4Element::from(1.23);
    let array = ArrayBuilder::new(
        vec![6, 1], // array shape
        vec![5, 1], // regular chunk shape
        Arc::new(CustomDataTypeFloat8e3m4),
        FillValue::new(fill_value.into_ne_bytes().to_vec()),
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data = [
        CustomDataTypeFloat8e3m4Element::from(2.34),
        CustomDataTypeFloat8e3m4Element::from(3.45),
        CustomDataTypeFloat8e3m4Element::from(f32::INFINITY),
        CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY),
        CustomDataTypeFloat8e3m4Element::from(f32::NAN),
    ];
    array.store_chunk(&[0, 0], &data).unwrap();

    let data: Vec<CustomDataTypeFloat8e3m4Element> =
        array.retrieve_array_subset(&array.subset_all()).unwrap();

    for f in &data {
        println!(
            "float8_e3m4: {:08b} f32: {}",
            f.into_ne_bytes()[0],
            f.into_f32()
        );
    }

    assert_eq!(data[0], CustomDataTypeFloat8e3m4Element::from(2.34));
    assert_eq!(data[1], CustomDataTypeFloat8e3m4Element::from(3.45));
    assert_eq!(
        data[2],
        CustomDataTypeFloat8e3m4Element::from(f32::INFINITY)
    );
    assert_eq!(
        data[3],
        CustomDataTypeFloat8e3m4Element::from(f32::NEG_INFINITY)
    );
    assert_eq!(data[4], CustomDataTypeFloat8e3m4Element::from(f32::NAN));
    assert_eq!(data[5], CustomDataTypeFloat8e3m4Element::from(1.23));
}
async fn http_array_read(backend: Backend) -> Result<(), Box<dyn std::error::Error>> {
    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/zarrs/zarrs/main/zarrs/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create an HTTP store
    let mut store: AsyncReadableStorage = match backend {
        // Backend::OpenDAL => {
        //     let builder = opendal::services::Http::default().endpoint(HTTP_URL);
        //     let operator = opendal::Operator::new(builder)?.finish();
        //     Arc::new(zarrs_opendal::AsyncOpendalStore::new(operator))
        // }
        Backend::ObjectStore => {
            let options = object_store::ClientOptions::new().with_allow_http(true);
            let store = object_store::http::HttpBuilder::new()
                .with_url(HTTP_URL)
                .with_client_options(options)
                .build()?;
            Arc::new(zarrs_object_store::AsyncObjectStore::new(store))
        }
    };
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1)
        && arg1 == "--usage-log"
    {
        let log_writer = Arc::new(std::sync::Mutex::new(
            // std::io::BufWriter::new(
            std::io::stdout(),
            // )
        ));
        store = Arc::new(UsageLogStorageAdapter::new(store, log_writer, || {
            chrono::Utc::now().format("[%T%.3f] ").to_string()
        }));
    }

    // Init the existing array, reading metadata
    let array = Array::async_open(store, ARRAY_PATH).await?;

    println!(
        "The array metadata is:\n{}\n",
        array.metadata().to_string_pretty()
    );

    // Read the whole array
    let data_all: ArrayD<f32> = array
        .async_retrieve_array_subset(&array.subset_all())
        .await?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk: ArrayD<f32> = array.async_retrieve_chunk(&chunk_indices).await?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2: ArrayD<f32> = array.async_retrieve_array_subset(&subset_4x2).await?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
fn main() {
    let store = std::sync::Arc::new(MemoryStore::default());
    let array_path = "/array";
    let fill_value = CustomDataTypeUInt4Element::try_from(15).unwrap();
    let array = ArrayBuilder::new(
        vec![6, 1], // array shape
        vec![5, 1], // regular chunk shape
        Arc::new(CustomDataTypeUInt4),
        FillValue::new(fill_value.into_ne_bytes().to_vec()),
    )
    .array_to_array_codecs(vec![
        #[cfg(feature = "transpose")]
        Arc::new(zarrs::array::codec::TransposeCodec::new(
            zarrs::array::codec::array_to_array::transpose::TransposeOrder::new(&[1, 0]).unwrap(),
        )),
    ])
    .array_to_bytes_codec(Arc::new(zarrs::array::codec::PackBitsCodec::default()))
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Arc::new(zarrs::array::codec::GzipCodec::new(5).unwrap()),
        #[cfg(feature = "crc32c")]
        Arc::new(zarrs::array::codec::Crc32cCodec::new()),
    ])
    // .storage_transformers(vec![].into())
    .build(store, array_path)
    .unwrap();
    println!("{}", array.metadata().to_string_pretty());

    let data = [
        CustomDataTypeUInt4Element::try_from(1).unwrap(),
        CustomDataTypeUInt4Element::try_from(2).unwrap(),
        CustomDataTypeUInt4Element::try_from(3).unwrap(),
        CustomDataTypeUInt4Element::try_from(4).unwrap(),
        CustomDataTypeUInt4Element::try_from(5).unwrap(),
    ];
    array.store_chunk(&[0, 0], &data).unwrap();

    let data: Vec<CustomDataTypeUInt4Element> =
        array.retrieve_array_subset(&array.subset_all()).unwrap();

    for f in &data {
        println!("uint4: {:08b} u8: {}", f.into_u8(), f.into_u8());
    }

    assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(1).unwrap());
    assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(2).unwrap());
    assert_eq!(data[2], CustomDataTypeUInt4Element::try_from(3).unwrap());
    assert_eq!(data[3], CustomDataTypeUInt4Element::try_from(4).unwrap());
    assert_eq!(data[4], CustomDataTypeUInt4Element::try_from(5).unwrap());
    assert_eq!(data[5], CustomDataTypeUInt4Element::try_from(15).unwrap());

    let data: Vec<CustomDataTypeUInt4Element> = array.retrieve_array_subset(&[1..3, 0..1]).unwrap();
    assert_eq!(data[0], CustomDataTypeUInt4Element::try_from(2).unwrap());
    assert_eq!(data[1], CustomDataTypeUInt4Element::try_from(3).unwrap());
}
- examples/data_type_optional_nested.rs
- examples/sync_http_array_read.rs
- examples/array_write_read_string.rs
- examples/rectangular_array_write_read.rs
- examples/array_write_read.rs
- examples/sharded_array_write_read.rs
- examples/data_type_optional.rs
- examples/array_write_read_ndarray.rs
- examples/async_array_write_read.rs
Sourcepub fn chunk_shape_usize(
&self,
chunk_indices: &[u64],
) -> Result<Vec<usize>, ArrayError>
Return the shape of the chunk at chunk_indices as a Vec<usize>.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
§Panics
Panics if any component of the chunk shape exceeds usize::MAX.
Sourcepub fn chunk_subset(
&self,
chunk_indices: &[u64],
) -> Result<ArraySubset, ArrayError>
Return the array subset of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
Sourcepub fn chunk_subset_bounded(
&self,
chunk_indices: &[u64],
) -> Result<ArraySubset, ArrayError>
Return the array subset of the chunk at chunk_indices bounded by the array shape.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
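The difference from chunk_subset matters for edge chunks: when the array shape is not an exact multiple of the chunk shape, the bounded variant clips the trailing chunk's extent to the array shape instead of extending past it. A sketch of that per-dimension clipping in plain Rust (the `bounded_end` helper is hypothetical, for exposition only):

```rust
// Hypothetical helper: clip a chunk's end-exclusive bound in one dimension
// to the array extent, as the *_bounded methods do for edge chunks.
fn bounded_end(chunk_origin: u64, chunk_extent: u64, array_extent: u64) -> u64 {
    (chunk_origin + chunk_extent).min(array_extent)
}

fn main() {
    // Array of length 10 with chunks of length 4: the last chunk starts at 8
    // and would span 8..12 unbounded, but only 8..10 bounded.
    assert_eq!(bounded_end(8, 4, 10), 10);
    // Interior chunks are unaffected: 4..8 stays 4..8.
    assert_eq!(bounded_end(4, 4, 10), 8);
}
```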
Sourcepub fn chunks_subset(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<ArraySubset, ArrayError>
Return the array subset of chunks.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.
Sourcepub fn chunks_subset_bounded(
&self,
chunks: &dyn ArraySubsetTraits,
) -> Result<ArraySubset, ArrayError>
Return the array subset of chunks bounded by the array shape.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.
Sourcepub fn chunks_in_array_subset(
&self,
array_subset: &dyn ArraySubsetTraits,
) -> Result<Option<ArraySubset>, IncompatibleDimensionalityError>
Return an array subset indicating the chunks intersecting array_subset.
Returns None if the intersecting chunks cannot be determined.
§Errors
Returns IncompatibleDimensionalityError if the array subset has an incorrect dimensionality.
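For a regular chunk grid the intersecting chunks can be computed per dimension by integer division of the subset bounds by the chunk extent. A minimal illustration in plain Rust (the `chunks_intersecting` helper is hypothetical, for exposition; the zarrs method works on whole subsets and returns None when the grid cannot determine the intersection):

```rust
// Hypothetical sketch: for a regular chunk grid, the chunks intersecting an
// end-exclusive element range [start, end) in one dimension span the
// end-exclusive chunk-index range [start / chunk, (end - 1) / chunk + 1).
fn chunks_intersecting(start: u64, end: u64, chunk: u64) -> (u64, u64) {
    assert!(end > start && chunk > 0);
    (start / chunk, (end - 1) / chunk + 1)
}

fn main() {
    // Elements 3..9 with chunk length 4 touch chunks 0, 1, and 2.
    assert_eq!(chunks_intersecting(3, 9, 4), (0, 3));
    // A subset aligned to a single chunk touches only that chunk.
    assert_eq!(chunks_intersecting(0, 4, 4), (0, 1));
}
```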
Sourcepub fn to_v3(self) -> Result<Self, ArrayMetadataV2ToV3Error>
Convert the array to Zarr V3.
§Errors
Returns a ArrayMetadataV2ToV3Error if the metadata is not compatible with Zarr V3 metadata.
Trait Implementations§
Source§impl<TStorage: ?Sized> ArrayShardedExt for Array<TStorage>
Available on crate feature sharding only.
Source§fn is_sharded(&self) -> bool
…sharding_indexed.
Source§fn is_exclusively_sharded(&self) -> bool
…sharding_indexed and the array has no array-to-array or bytes-to-bytes codecs.
Source§fn subchunk_shape(&self) -> Option<ChunkShape>
…sharding_indexed codec metadata. Read more
Source§fn effective_subchunk_shape(&self) -> Option<ChunkShape>
Source§fn subchunk_grid(&self) -> ChunkGrid
Source§fn subchunk_grid_shape(&self) -> ArrayShape
Source§impl<TStorage: ?Sized + ReadableStorageTraits + 'static> ArrayShardedReadableExt<TStorage> for Array<TStorage>
Available on crate feature sharding only.
Source§fn subchunk_byte_range(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunk_indices: &[u64],
) -> Result<Option<ByteRange>, ArrayError>
Source§fn retrieve_encoded_subchunk(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunk_indices: &[u64],
) -> Result<Option<Vec<u8>>, ArrayError>
Source§fn retrieve_subchunk_opt<T: FromArrayBytes>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<T, ArrayError>
…subchunk_indices into its bytes. Read more
Source§fn retrieve_subchunk_elements_opt<T: ElementOwned>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
…subchunk_indices into a vector of its elements. Read more
Source§fn retrieve_subchunk_ndarray_opt<T: ElementOwned>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunk_indices: &[u64],
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Source§fn retrieve_subchunks_opt<T: FromArrayBytes>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>
…subchunks. Read more
Source§fn retrieve_subchunks_elements_opt<T: ElementOwned>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
…subchunks into a vector of their elements. Read more
Source§fn retrieve_subchunks_ndarray_opt<T: ElementOwned>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    subchunks: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Source§fn retrieve_array_subset_sharded_opt<T: FromArrayBytes>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<T, ArrayError>
…array_subset of array. Read more
Source§fn retrieve_array_subset_elements_sharded_opt<T: ElementOwned>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<Vec<T>, ArrayError>
…array_subset of array into a vector of its elements. Read more
Source§fn retrieve_array_subset_ndarray_sharded_opt<T: ElementOwned>(
    &self,
    cache: &ArrayShardedReadableExtCache,
    array_subset: &dyn ArraySubsetTraits,
    options: &CodecOptions,
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Source§impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> AsyncArrayShardedReadableExt<TStorage> for Array<TStorage>
Available on crate feature async only.
impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> AsyncArrayShardedReadableExt<TStorage> for Array<TStorage>
async only.Source§fn async_subchunk_byte_range<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
) -> Pin<Box<dyn Future<Output = Result<Option<ByteRange>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
fn async_subchunk_byte_range<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
) -> Pin<Box<dyn Future<Output = Result<Option<ByteRange>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
Source§fn async_retrieve_encoded_subchunk<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
) -> Pin<Box<dyn Future<Output = Result<Option<Vec<u8>>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
fn async_retrieve_encoded_subchunk<'life0, 'life1, 'life2, 'async_trait>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
) -> Pin<Box<dyn Future<Output = Result<Option<Vec<u8>>, ArrayError>> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
Source§fn async_retrieve_subchunk_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<T, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + FromArrayBytes + MaybeSend,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_subchunk_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<T, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + FromArrayBytes + MaybeSend,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
subchunk_indices into its bytes. Read moreSource§fn async_retrieve_subchunk_elements_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_subchunk_elements_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
subchunk_indices into a vector of its elements. Read moreSource§fn async_retrieve_subchunk_ndarray_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_subchunk_ndarray_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunk_indices: &'life2 [u64],
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
ndarray only.Source§fn async_retrieve_subchunks_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunks: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<T, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + FromArrayBytes + MaybeSend,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_subchunks_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunks: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<T, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + FromArrayBytes + MaybeSend,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
chunks. Read moreSource§fn async_retrieve_subchunks_elements_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunks: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_subchunks_elements_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunks: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
subchunks into a vector of their elements. Read moreSource§fn async_retrieve_subchunks_ndarray_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunks: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_subchunks_ndarray_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
subchunks: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
ndarray only.Source§fn async_retrieve_array_subset_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<T, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + FromArrayBytes + MaybeSend,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_array_subset_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<T, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + FromArrayBytes + MaybeSend,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
array_subset of array. Read moreSource§fn async_retrieve_array_subset_elements_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
fn async_retrieve_array_subset_elements_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<Vec<T>, ArrayError>> + Send + 'async_trait>>where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
array_subset of array into a vector of its elements. Read moreSource§fn async_retrieve_array_subset_ndarray_sharded_opt<'life0, 'life1, 'life2, 'life3, 'async_trait, T>(
&'life0 self,
cache: &'life1 AsyncArrayShardedReadableExtCache,
array_subset: &'life2 dyn ArraySubsetTraits,
options: &'life3 CodecOptions,
) -> Pin<Box<dyn Future<Output = Result<ArrayD<T>, ArrayError>> + Send + 'async_trait>> where
T: 'async_trait + ElementOwned + MaybeSend + MaybeSync,
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
'life2: 'async_trait,
'life3: 'async_trait,
Available on crate feature ndarray only.

§Auto Trait Implementations
impl<TStorage> Freeze for Array<TStorage> where TStorage: ?Sized
impl<TStorage> Send for Array<TStorage>
impl<TStorage> Sync for Array<TStorage>
impl<TStorage> !RefUnwindSafe for Array<TStorage>
impl<TStorage> Unpin for Array<TStorage> where TStorage: ?Sized
impl<TStorage> UnsafeUnpin for Array<TStorage> where TStorage: ?Sized
impl<TStorage> !UnwindSafe for Array<TStorage>
§Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
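As context for this blanket implementation: any `T` can be mutably borrowed as itself, which is what lets a plain `Array<_>` satisfy `BorrowMut`-bounded generics without wrapping. A minimal std-only sketch (the `increment` function and its `i32` bound are illustrative, not part of `zarrs`):

```rust
use std::borrow::BorrowMut;

// Generic function bounded on BorrowMut<i32>: accepts a bare `i32` (via the
// blanket `impl<T> BorrowMut<T> for T`) as well as smart pointers such as
// `Box<i32>` (via `impl<T: ?Sized> BorrowMut<T> for Box<T>`).
fn increment<B: BorrowMut<i32>>(mut value: B) -> B {
    *value.borrow_mut() += 1;
    value
}

fn main() {
    assert_eq!(increment(41), 42);
    assert_eq!(*increment(Box::new(9)), 10);
}
```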
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more
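The async sharded retrieval extension methods listed above can be combined roughly as follows. This is a hedged sketch, not a complete program: the import paths, the `AsyncReadableStorageTraits` bound, and the `AsyncArrayShardedReadableExtCache::new` constructor are assumptions to be checked against your `zarrs` version; the method and type names themselves are taken from the signatures above.

```rust
// Sketch only: assumed import paths, verify against your zarrs version.
use zarrs::array::codec::CodecOptions;
use zarrs::array::{Array, ArrayError};
use zarrs::array::{AsyncArrayShardedReadableExt, AsyncArrayShardedReadableExtCache};
use zarrs::array_subset::ArraySubset;
use zarrs::storage::AsyncReadableStorageTraits;

async fn read_sharded_subset<TStorage: AsyncReadableStorageTraits + ?Sized + 'static>(
    array: &Array<TStorage>,
) -> Result<Vec<f32>, ArrayError> {
    // The cache retains decoded shard indexes, so repeated subset reads that
    // touch the same shards avoid re-reading and re-decoding each index.
    let cache = AsyncArrayShardedReadableExtCache::new(array);

    // An arbitrary 64x64 region of a 2D array (illustrative values).
    let subset = ArraySubset::new_with_ranges(&[0..64, 0..64]);

    // Retrieve the subset as a Vec of elements (T: ElementOwned), routing
    // reads through the shard index rather than whole outer chunks.
    array
        .async_retrieve_array_subset_elements_sharded_opt(&cache, &subset, &CodecOptions::default())
        .await
}
```

The `_sharded_opt` variants are worthwhile when the array uses the sharding codec: retrieving a small subset through the plain `async_retrieve_array_subset_*` methods may decode entire outer chunks, whereas the sharded variants read only the inner chunks the subset intersects.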