pub struct Array<TStorage: ?Sized> { /* private fields */ }
A Zarr array.
§Metadata
An array is defined by the following parameters (which are encoded in its JSON metadata):
- shape: defines the length of the array dimensions,
- data type: defines the numerical representation of array elements,
- chunk grid: defines how the array is subdivided into chunks,
- chunk key encoding: defines how chunk grid cell coordinates are mapped to keys in a store,
- fill value: an element value to use for uninitialised portions of the array,
- codecs: used to encode and decode chunks,
and optional parameters:
- attributes: user-defined attributes,
- storage transformers: used to intercept and alter the storage keys and bytes of an array before they reach the underlying physical storage, and
- dimension names: defines the names of the array dimensions.
See https://zarr-specs.readthedocs.io/en/latest/v3/core/v3.0.html#array-metadata for more information on array metadata.
§Initialisation
A new array can be initialised with an ArrayBuilder or Array::new_with_metadata.
An existing array can be initialised with Array::new; its metadata is read from the store.
The shape and attributes of an array are mutable and can be updated after construction.
However, array metadata must be written explicitly to the store with store_metadata if an array is newly created or its metadata has been mutated.
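A minimal sketch of both paths (assuming an in-memory store and the 8x8 f32 array used in the examples below):

use std::sync::Arc;
use zarrs::{
    array::{Array, ArrayBuilder, DataType, FillValue, ZARR_NAN_F32},
    storage::store::MemoryStore,
};

fn initialise_arrays() -> Result<(), Box<dyn std::error::Error>> {
    let store = Arc::new(MemoryStore::new());
    // Create a new array and explicitly persist its metadata.
    let array = ArrayBuilder::new(
        vec![8, 8],             // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    .build(store.clone(), "/array")?;
    array.store_metadata()?;
    // Open the existing array; its metadata is read back from the store.
    let _array = Array::new(store, "/array")?;
    Ok(())
}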
§Methods
§Sync API
Array operations are divided into several categories based on the traits implemented for the backing storage. The core array methods are:
- ReadableStorageTraits: read array data and metadata,
- WritableStorageTraits: store/erase array data and store metadata, and
- ReadableWritableStorageTraits: store operations requiring reading.
All retrieve and store methods have multiple variants:
- Standard variants store or retrieve data represented as bytes.
- _elements suffix variants can store or retrieve chunks with a known type.
- _ndarray suffix variants can store or retrieve ndarray::Arrays (requires the ndarray feature).
- Retrieve and store methods have an _opt variant with an additional CodecOptions argument for fine-grained concurrency control.
- Variants without the _opt suffix use default CodecOptions, which maximise concurrent operations. This is preferred unless using external parallelisation.
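For illustration, a minimal sketch of these variants on a single chunk (assuming the 8x8 f32 array with 4x4 chunks from the examples below, and assuming retrieve_chunk_opt takes a &CodecOptions, per the list above):

use zarrs::array::{codec::CodecOptions, Array, ArrayError};
use zarrs::storage::ReadableStorageTraits;

fn retrieve_variants<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<(), ArrayError> {
    // Standard variant: the raw bytes of the decoded chunk.
    let _bytes: Vec<u8> = array.retrieve_chunk(&[0, 0])?;
    // _elements variant: the same chunk as typed elements.
    let _elements: Vec<f32> = array.retrieve_chunk_elements::<f32>(&[0, 0])?;
    // _ndarray variant (requires the ndarray feature).
    #[cfg(feature = "ndarray")]
    let _nd: ndarray::ArrayD<f32> = array.retrieve_chunk_ndarray::<f32>(&[0, 0])?;
    // _opt variant: pass explicit CodecOptions for concurrency control.
    let _bytes = array.retrieve_chunk_opt(&[0, 0], &CodecOptions::default())?;
    Ok(())
}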
§Async API
With the async feature and an async store, the sync API methods have equivalents with an async_ prefix.
This crate is async runtime-agnostic and does not spawn tasks internally.
The implication is that methods like async_retrieve_array_subset or async_retrieve_chunks do not parallelise over chunks and can be slow compared to the sync API (especially when a large number of chunks is involved).
This limitation can be circumvented by spawning tasks outside of zarrs.
For example, instead of using async_retrieve_chunks, multiple tasks executing async_retrieve_chunk_into_array_view could be spawned, each outputting to a preallocated buffer.
An example of such an approach can be found in the zarrs_benchmark_read_async application in the zarrs_tools crate.
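For instance, a minimal sketch of spawning per-chunk tasks (assumptions: the async feature, a tokio runtime, storage that is Send + Sync and implements an async readable trait, named AsyncReadableStorageTraits here by analogy with the sync traits):

use std::sync::Arc;
use zarrs::array::Array;

async fn read_chunks_concurrently<TStorage>(
    array: Arc<Array<TStorage>>,
    chunk_indices: Vec<Vec<u64>>,
) -> Result<Vec<Vec<u8>>, Box<dyn std::error::Error>>
where
    TStorage: ?Sized + zarrs::storage::AsyncReadableStorageTraits + Send + Sync + 'static,
{
    // Spawn one tokio task per chunk; zarrs will not parallelise over
    // chunks internally in the async API.
    let handles: Vec<_> = chunk_indices
        .into_iter()
        .map(|indices| {
            let array = array.clone();
            tokio::spawn(async move { array.async_retrieve_chunk(&indices).await })
        })
        .collect();
    let mut chunks = Vec::with_capacity(handles.len());
    for handle in handles {
        chunks.push(handle.await??); // join the task, then propagate any ArrayError
    }
    Ok(chunks)
}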
§Parallel Writing
If a chunk is written more than once, its element values depend on whichever operation wrote to the chunk last.
The ReadableWritableStorageTraits methods store_chunk_subset and store_array_subset (and their variants) internally retrieve a chunk, update it, then store it.
It is the responsibility of zarrs consumers to ensure that:
- Array::store_chunk_subset is not called concurrently on the same chunk, and
- Array::store_array_subset is not called concurrently on regions sharing chunks.
Partial writes to a chunk may be lost if these rules are not respected.
zarrs does not currently offer an API for locking chunks or regions.
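A minimal rule-respecting sketch (assuming 4x4 chunks as in the examples below, so each subset falls entirely within a distinct chunk):

use rayon::prelude::*;
use zarrs::{array::Array, array_subset::ArraySubset, storage::ReadableWritableStorageTraits};

fn parallel_disjoint_writes<TStorage: ?Sized + ReadableWritableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<(), zarrs::array::ArrayError> {
    let subsets = vec![
        ArraySubset::new_with_ranges(&[0..4, 0..4]), // entirely within chunk [0, 0]
        ArraySubset::new_with_ranges(&[4..8, 4..8]), // entirely within chunk [1, 1]
    ];
    // Safe: no two writes share a chunk, so the internal
    // retrieve-update-store sequences cannot race.
    subsets.par_iter().try_for_each(|subset| {
        array.store_array_subset_elements::<f32>(subset, vec![1.0; 16])
    })
}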
§Best Practices
§Writing
For optimum write performance, an array should be written using store_chunk or store_chunks where possible.
The store_chunk_subset and store_array_subset methods are less preferred because they may incur decoding overhead and require careful usage if executed concurrently (see the previous section).
§Reading
It is fastest to load arrays using retrieve_chunk or retrieve_chunks where possible.
In contrast, the retrieve_chunk_subset and retrieve_array_subset methods may use partial decoders, which can be less efficient with some codecs/stores.
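For instance (a sketch assuming 4x4 chunks, as in the examples below; both calls return the same elements, but the chunk-aligned read avoids partial decoding):

use zarrs::{array::Array, array_subset::ArraySubset, storage::ReadableStorageTraits};

fn chunk_aligned_read<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<(), zarrs::array::ArrayError> {
    // Preferred: the region of interest is exactly chunk [0, 0].
    let fast = array.retrieve_chunk_elements::<f32>(&[0, 0])?;
    // Equivalent result via the subset API, which may use a partial decoder.
    let general =
        array.retrieve_array_subset_elements::<f32>(&ArraySubset::new_with_ranges(&[0..4, 0..4]))?;
    assert_eq!(fast, general); // assumes the chunk holds no NaNs (NaN != NaN)
    Ok(())
}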
§zarrs Metadata
By default, the zarrs version and a link to its source code are written to the _zarrs attribute in array metadata.
This can be disabled with set_include_zarrs_metadata(false).
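A short sketch (assuming mutable access to an array over a writable store):

use zarrs::{array::Array, storage::WritableStorageTraits};

fn store_metadata_without_zarrs_attribute<TStorage: ?Sized + WritableStorageTraits + 'static>(
    array: &mut Array<TStorage>,
) -> Result<(), Box<dyn std::error::Error>> {
    // Omit the `_zarrs` attribute, then persist the metadata.
    array.set_include_zarrs_metadata(false);
    array.store_metadata()?;
    Ok(())
}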
§Implementations
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> Array<TStorage>
pub fn new(storage: Arc<TStorage>, path: &str) -> Result<Self, ArrayCreateError>
Create an array in storage at path. The metadata is read from the store.
§Errors
Returns ArrayCreateError if there is a storage error or any metadata is invalid.
Examples found in repository
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::Array,
array_subset::ArraySubset,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableStorage,
},
};
const HTTP_URL: &str =
"https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
const ARRAY_PATH: &str = "/group/array";
// Create a HTTP store
let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log.clone().create_readable_transformer(store);
}
}
// Init the existing array, reading metadata
let array = Array::new(store, ARRAY_PATH)?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
Ok(())
}
pub fn retrieve_chunk_if_exists(
    &self,
    chunk_indices: &[u64]
) -> Result<Option<Vec<u8>>, ArrayError>
Read and decode the chunk at chunk_indices into its bytes if it exists, with default codec options.
§Errors
Returns an ArrayError if:
- chunk_indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if the number of elements in the chunk exceeds usize::MAX.
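A usage sketch distinguishing a missing chunk from a stored one (contrast Array::retrieve_chunk below, which substitutes the fill value):

use zarrs::{array::Array, storage::ReadableStorageTraits};

fn inspect_chunk<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<(), zarrs::array::ArrayError> {
    match array.retrieve_chunk_if_exists(&[0, 0])? {
        Some(bytes) => println!("chunk [0, 0] is stored ({} decoded bytes)", bytes.len()),
        None => println!("chunk [0, 0] has not been stored"),
    }
    Ok(())
}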
pub fn retrieve_chunk_elements_if_exists<T: Pod>(
    &self,
    chunk_indices: &[u64]
) -> Result<Option<Vec<T>>, ArrayError>
Read and decode the chunk at chunk_indices into a vector of its elements if it exists, with default codec options.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- chunk_indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
pub fn retrieve_chunk_ndarray_if_exists<T: Pod>(
    &self,
    chunk_indices: &[u64]
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunk at chunk_indices into an ndarray::ArrayD if it exists.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- the chunk indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if a chunk dimension is larger than usize::MAX.
pub fn retrieve_chunk(
    &self,
    chunk_indices: &[u64]
) -> Result<Vec<u8>, ArrayError>
Read and decode the chunk at chunk_indices into its bytes, or the fill value if it does not exist, with default codec options.
§Errors
Returns an ArrayError if:
- chunk_indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if the number of elements in the chunk exceeds usize::MAX.
pub fn retrieve_chunk_elements<T: Pod>(
    &self,
    chunk_indices: &[u64]
) -> Result<Vec<T>, ArrayError>
Read and decode the chunk at chunk_indices into a vector of its elements, or the fill value if it does not exist.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- chunk_indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
pub fn retrieve_chunk_ndarray<T: Pod>(
    &self,
    chunk_indices: &[u64]
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunk at chunk_indices into an ndarray::ArrayD. It is filled with the fill value if it does not exist.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- the chunk indices are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if a chunk dimension is larger than usize::MAX.
Examples found in repository: see the http_array_read example under Array::new above.
More examples
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::array::ChunkGrid;
use zarrs::{
array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
node::Node,
};
use zarrs::{
array::{DataType, ZARR_NAN_F32},
array_subset::ArraySubset,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_elements(
&chunk_indices,
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array.store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use itertools::Itertools;
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::{
array::{
bytes_to_ndarray,
codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
DataType, FillValue,
},
array_subset::ArraySubset,
node::Node,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
// Create an array
let array_path = "/group/array";
let shard_shape = vec![4, 8];
let inner_chunk_shape = vec![4, 4];
let mut sharding_codec_builder =
ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
sharding_codec_builder.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
]);
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::UInt16,
shard_shape.try_into()?,
FillValue::from(0u16),
)
.array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// The array metadata is
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some shards (in parallel)
(0..2).into_par_iter().try_for_each(|s| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![s, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
|ij| {
(s * chunk_shape[0].get() * chunk_shape[1].get()
+ ij[0] as u64 * chunk_shape[1].get()
+ ij[1] as u64) as u16
},
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec()); // the center 4x2 region
let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a shard back from the store
let shard_indices = vec![1, 0];
let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
println!("Shard [1,0] is:\n{data_shard}\n");
// Read an inner chunk from the store
let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Decode inner chunks
// In some cases, it might be preferable to decode inner chunks in a shard directly.
// If using the partial decoder, then the shard index will only be read once from the store.
let partial_decoder = array.partial_decoder(&[0, 0])?;
let inner_chunks_to_decode = vec![
ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
];
let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
.into_iter()
.map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
.collect::<Result<Vec<_>, _>>()?;
println!("Decoded inner chunks:");
for (inner_chunk_subset, decoded_inner_chunk) in
std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
{
println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
}
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{}", tree);
println!(
"The keys in the store are:\n[{}]",
store.list().unwrap_or_default().iter().format(", ")
);
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use ndarray::{array, Array2, ArrayD};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_ndarray(
&chunk_indices,
ArrayD::<f32>::from_shape_vec(
chunk_subset.shape_usize(),
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
.unwrap(),
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
let ndarray_chunks: Array2<f32> = array![
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
];
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
let ndarray_subset: Array2<f32> =
array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
let ndarray_subset: Array2<f32> = array![
[-0.6],
[-1.6],
[-2.6],
[-3.6],
[-4.6],
[-5.6],
[-6.6],
[-7.6],
];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
array.store_chunk_subset_ndarray(
// chunk indices
&[1, 1],
// subset within chunk
ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
ndarray_chunk_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub fn retrieve_chunk_into_array_view(
    &self,
    chunk_indices: &[u64],
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Retrieve a chunk and output into an existing array.
§Errors
See Array::retrieve_chunk.
Can also error if the ArraySubset in array_view does not have the same shape as the chunk at chunk_indices.
§Panics
Panics if an offset is larger than usize::MAX.
pub fn retrieve_chunks(
    &self,
    chunks: &ArraySubset
) -> Result<Vec<u8>, ArrayError>
Read and decode the chunks at chunks into their bytes.
§Errors
Returns an ArrayError if:
- any chunk indices in chunks are invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
pub fn retrieve_chunks_elements<T: Pod>(
    &self,
    chunks: &ArraySubset
) -> Result<Vec<T>, ArrayError>
Read and decode the chunks at chunks into a vector of their elements.
§Errors
Returns an ArrayError if any chunk indices in chunks are invalid or an error condition of Array::retrieve_chunks_opt is encountered.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
pub fn retrieve_chunks_ndarray<T: Pod>(
    &self,
    chunks: &ArraySubset
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunks at chunks into an ndarray::ArrayD.
§Errors
Returns an ArrayError if any chunk indices in chunks are invalid or an error condition of Array::retrieve_chunks_elements_opt is encountered.
§Panics
Panics if the number of array elements in the chunks exceeds usize::MAX.
Examples found in repository: see the array_write_read examples under Array::retrieve_chunk_ndarray above.
pub fn retrieve_chunks_into_array_view(
    &self,
    chunks: &ArraySubset,
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Retrieve chunks into an array view.
§Errors
See Array::retrieve_chunks_opt.
Can also error if the ArraySubset in array_view does not have the same shape as array_subset.
§Panics
Panics if an offset is larger than usize::MAX.
pub fn retrieve_chunk_subset(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset
) -> Result<Vec<u8>, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into its bytes.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if the number of elements in chunk_subset is usize::MAX or larger.
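For example, a sketch reading the first row of a chunk (the 4x4 chunk shape is an assumption; chunk subset indices are relative to the chunk origin):

use zarrs::{array::Array, array_subset::ArraySubset, storage::ReadableStorageTraits};

fn read_first_row_of_chunk<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<Vec<u8>, zarrs::array::ArrayError> {
    // Decode only elements [0, 0..4] of the chunk at chunk indices [0, 0].
    array.retrieve_chunk_subset(&[0, 0], &ArraySubset::new_with_ranges(&[0..1, 0..4]))
}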
pub fn retrieve_chunk_subset_elements<T: Pod>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset
) -> Result<Vec<T>, ArrayError>
Read and decode the chunk_subset of the chunk at chunk_indices into its elements.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
pub fn retrieve_chunk_subset_ndarray<T: Pod>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the chunk_subset of the chunk at chunk_indices into an ndarray::ArrayD.
§Errors
Returns an ArrayError if:
- the chunk indices are invalid,
- the chunk subset is invalid,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if the number of elements in chunk_subset is usize::MAX or larger.
pub fn retrieve_chunk_subset_into_array_view(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Retrieve a subset of a chunk and output into an existing array.
§Errors
See Array::retrieve_chunk_subset.
Can also error if the ArraySubset in array_view does not have the same shape as chunk_subset.
§Panics
Panics if an offset is larger than usize::MAX.
pub fn retrieve_array_subset(
    &self,
    array_subset: &ArraySubset
) -> Result<Vec<u8>, ArrayError>
Read and decode the array_subset of array into its bytes.
Out-of-bounds elements will have the fill value.
Chunks intersecting the array subset are retrieved in parallel with the default codec options.
§Errors
Returns an ArrayError if:
- the array_subset dimensionality does not match the chunk grid dimensionality,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Panics if attempting to reference a byte beyond usize::MAX.
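A byte-level sketch (assuming an f32 array, so each element contributes four bytes):

use zarrs::{array::Array, array_subset::ArraySubset, storage::ReadableStorageTraits};

fn read_region_bytes<TStorage: ?Sized + ReadableStorageTraits + 'static>(
    array: &Array<TStorage>,
) -> Result<(), zarrs::array::ArrayError> {
    // A 2x2 region straddling chunk boundaries is decoded and stitched together.
    let bytes = array.retrieve_array_subset(&ArraySubset::new_with_ranges(&[3..5, 3..5]))?;
    assert_eq!(bytes.len(), 2 * 2 * std::mem::size_of::<f32>());
    Ok(())
}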
pub fn retrieve_array_subset_elements<T: Pod>(
    &self,
    array_subset: &ArraySubset
) -> Result<Vec<T>, ArrayError>
Read and decode the array_subset of array into a vector of its elements.
§Errors
Returns an ArrayError if:
- the size of T does not match the data type size,
- the decoded bytes cannot be transmuted,
- an array subset is invalid or out of bounds of the array,
- there is a codec decoding error, or
- an underlying store error.
pub fn retrieve_array_subset_ndarray<T: Pod>(
    &self,
    array_subset: &ArraySubset
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Read and decode the array_subset of array into an ndarray::ArrayD.
§Errors
Returns an ArrayError if:
- an array subset is invalid or out of bounds of the array,
- there is a codec decoding error, or
- an underlying store error.
§Panics
Will panic if any dimension of array_subset is usize::MAX or larger.
Examples found in repository?
3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::Array,
array_subset::ArraySubset,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store,
},
};
const HTTP_URL: &str =
"https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
const ARRAY_PATH: &str = "/group/array";
// Create a HTTP store
let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log.clone().create_readable_transformer(store);
}
}
// Init the existing array, reading metadata
let array = Array::new(store, ARRAY_PATH)?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
Ok(())
}
More examples
8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use zarrs::array::ChunkGrid;
use zarrs::{
array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
node::Node,
};
use zarrs::{
array::{DataType, ZARR_NAN_F32},
array_subset::ArraySubset,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_elements(
&chunk_indices,
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array.store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use zarrs::{
array::{
codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
bytes_to_ndarray, DataType, FillValue,
},
array_subset::ArraySubset,
node::Node,
storage::store,
};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use itertools::Itertools; // for `.format(...)` on the key list printed below
use std::sync::Arc;
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
// Create an array
let array_path = "/group/array";
let shard_shape = vec![4, 8];
let inner_chunk_shape = vec![4, 4];
let mut sharding_codec_builder =
ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
sharding_codec_builder.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
]);
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::UInt16,
shard_shape.try_into()?,
FillValue::from(0u16),
)
.array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// The array metadata is
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some shards (in parallel)
(0..2).into_par_iter().try_for_each(|s| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![s, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
|ij| {
(s * chunk_shape[0].get() * chunk_shape[1].get()
+ ij[0] as u64 * chunk_shape[1].get()
+ ij[1] as u64) as u16
},
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a shard back from the store
let shard_indices = vec![1, 0];
let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
println!("Shard [1,0] is:\n{data_shard}\n");
// Read an inner chunk from the store
let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Decode inner chunks
// In some cases, it might be preferable to decode inner chunks in a shard directly.
// If using the partial decoder, then the shard index will only be read once from the store.
let partial_decoder = array.partial_decoder(&[0, 0])?;
let inner_chunks_to_decode = vec![
ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
];
let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
.into_iter()
.map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
.collect::<Result<Vec<_>, _>>()?;
println!("Decoded inner chunks:");
for (inner_chunk_subset, decoded_inner_chunk) in
std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
{
println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
}
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{}", tree);
println!(
"The keys in the store are:\n[{}]",
store.list().unwrap_or_default().iter().format(", ")
);
Ok(())
}
A variant of array_write_read that uses ndarray types throughout:
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use ndarray::{array, Array2, ArrayD};
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_ndarray(
&chunk_indices,
ArrayD::<f32>::from_shape_vec(
chunk_subset.shape_usize(),
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
.unwrap(),
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
let ndarray_chunks: Array2<f32> = array![
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
];
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
let ndarray_subset: Array2<f32> =
array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
let ndarray_subset: Array2<f32> = array![
[-0.6],
[-1.6],
[-2.6],
[-3.6],
[-4.6],
[-5.6],
[-6.6],
[-7.6],
];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
array.store_chunk_subset_ndarray(
// chunk indices
&[1, 1],
// subset within chunk
ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
ndarray_chunk_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub fn retrieve_array_subset_into_array_view(
&self,
array_subset: &ArraySubset,
array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Retrieve an array subset into an array view.
§Errors
See Array::retrieve_array_subset.
Can also error if the ArraySubset in array_view does not have the same shape as array_subset.
§Panics
Panics if an offset is larger than usize::MAX.
pub fn partial_decoder<'a>(
&'a self,
chunk_indices: &[u64]
) -> Result<Box<dyn ArrayPartialDecoderTraits + 'a>, ArrayError>
Initialises a partial decoder for the chunk at chunk_indices.
§Errors
Returns an ArrayError if initialisation of the partial decoder fails.
Examples found in repository
See the sharded_array_write_read example above, which initialises a partial decoder to decode inner chunks within a shard.
pub fn retrieve_chunk_if_exists_opt(
&self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Option<Vec<u8>>, ArrayError>
Explicit options version of retrieve_chunk_if_exists.
pub fn retrieve_chunk_opt(
&self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Explicit options version of retrieve_chunk.
pub fn retrieve_chunk_elements_if_exists_opt<T: Pod>(
&self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Option<Vec<T>>, ArrayError>
Explicit options version of retrieve_chunk_elements_if_exists.
pub fn retrieve_chunk_elements_opt<T: Pod>(
&self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_chunk_elements.
pub fn retrieve_chunk_ndarray_if_exists_opt<T: Pod>(
&self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_ndarray_if_exists.
pub fn retrieve_chunk_ndarray_opt<T: Pod>(
&self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_ndarray.
pub fn retrieve_chunk_into_array_view_opt(
&self,
chunk_indices: &[u64],
array_view: &ArrayView<'_>,
options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of retrieve_chunk_into_array_view.
pub fn retrieve_chunk_subset_into_array_view_opt(
&self,
chunk_indices: &[u64],
chunk_subset: &ArraySubset,
array_view: &ArrayView<'_>,
options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of retrieve_chunk_subset_into_array_view.
pub fn retrieve_chunks_opt(
&self,
chunks: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Explicit options version of retrieve_chunks.
pub fn retrieve_chunks_elements_opt<T: Pod>(
&self,
chunks: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_chunks_elements.
pub fn retrieve_chunks_ndarray_opt<T: Pod>(
&self,
chunks: &ArraySubset,
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunks_ndarray.
pub fn retrieve_array_subset_opt(
&self,
array_subset: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Explicit options version of retrieve_array_subset.
pub fn retrieve_chunks_into_array_view_opt(
&self,
chunks: &ArraySubset,
array_view: &ArrayView<'_>,
options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of retrieve_chunks_into_array_view.
pub fn retrieve_array_subset_into_array_view_opt(
&self,
array_subset: &ArraySubset,
array_view: &ArrayView<'_>,
options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of retrieve_array_subset_into_array_view.
pub fn retrieve_array_subset_elements_opt<T: Pod>(
&self,
array_subset: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_array_subset_elements.
pub fn retrieve_array_subset_ndarray_opt<T: Pod>(
&self,
array_subset: &ArraySubset,
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_array_subset_ndarray.
pub fn retrieve_chunk_subset_opt(
&self,
chunk_indices: &[u64],
chunk_subset: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Explicit options version of retrieve_chunk_subset.
pub fn retrieve_chunk_subset_elements_opt<T: Pod>(
&self,
chunk_indices: &[u64],
chunk_subset: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Explicit options version of retrieve_chunk_subset_elements.
pub fn retrieve_chunk_subset_ndarray_opt<T: Pod>(
&self,
chunk_indices: &[u64],
chunk_subset: &ArraySubset,
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
Explicit options version of retrieve_chunk_subset_ndarray.
pub fn partial_decoder_opt<'a>(
&'a self,
chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Box<dyn ArrayPartialDecoderTraits + 'a>, ArrayError>
Explicit options version of partial_decoder.
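All of the _opt variants take the same CodecOptions argument. A minimal sketch of passing options explicitly, assuming the f32 array with 4x4 chunks from the examples above, and assuming CodecOptions is exposed at zarrs::array::codec::CodecOptions with a Default implementation:
use zarrs::array::codec::CodecOptions;
// Default options maximise concurrent operations, matching the non-`_opt` methods.
let options = CodecOptions::default();
let chunk_bytes: Vec<u8> = array.retrieve_chunk_opt(&[0, 0], &options)?;
assert_eq!(chunk_bytes.len(), 4 * 4 * std::mem::size_of::<f32>());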
impl<TStorage: ?Sized + WritableStorageTraits + 'static> Array<TStorage>
pub fn store_metadata(&self) -> Result<(), StorageError>
Store the array metadata in the store.
Examples found in repository
See the rectangular_array_write_read example above.
More examples
See the array_write_read, sharded_array_write_read, and ndarray array_write_read examples above.
pub fn store_chunk(
&self,
chunk_indices: &[u64],
chunk_bytes: Vec<u8>
) -> Result<(), ArrayError>
Encode chunk_bytes and store at chunk_indices.
Use store_chunk_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- chunk_indices are invalid,
- the length of chunk_bytes does not equal the expected length (the product of the number of elements in the chunk and the data type size in bytes),
- there is a codec encoding error, or
- there is an underlying store error.
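As a sketch, storing one 4x4 f32 chunk from raw bytes (assuming the f32 array with 4x4 chunks from the examples above, and assuming decoded chunk bytes are native-endian, with the codec chain handling the stored layout):
// The byte length must be 16 elements * size_of::<f32>() = 64 for this chunk shape and data type.
let elements = vec![1.0_f32; 4 * 4];
let chunk_bytes: Vec<u8> = elements.iter().flat_map(|v| v.to_ne_bytes()).collect();
array.store_chunk(&[0, 0], chunk_bytes)?;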
pub fn store_chunk_elements<T: Pod>(
&self,
chunk_indices: &[u64],
chunk_elements: Vec<T>
) -> Result<(), ArrayError>
Encode chunk_elements and store at chunk_indices.
Use store_chunk_elements_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunk error condition is met.
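For example, with the f32 array from the examples above (so T = f32 and each 4x4 chunk holds 16 elements):
// The element count must equal the chunk's element count, and size_of::<T>() must match the data type size.
array.store_chunk_elements::<f32>(&[0, 1], vec![42.0_f32; 16])?;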
Examples found in repository
See the array_write_read example above.
pub fn store_chunk_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: TArray
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode chunk_array and store at chunk_indices.
Use store_chunk_ndarray_opt to control codec options.
§Errors
Returns an ArrayError if
- the shape of the array does not match the shape of the chunk, or
- a store_chunk_elements error condition is met.
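For example (a minimal sketch with the same assumed 8x8 float32 array and the ndarray feature enabled; the 4x4 ndarray shape must match the chunk shape):
let chunk = ndarray::Array2::<f32>::from_elem((4, 4), 2.5);
array.store_chunk_ndarray(&[1, 0], chunk)?;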
Examples found in repository
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use zarrs::array::ChunkGrid;
use zarrs::{
array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
node::Node,
};
use zarrs::{
array::{DataType, ZARR_NAN_F32},
array_subset::ArraySubset,
storage::store,
};
use std::sync::Arc;
// Truncated by the docs renderer (import paths assumed):
use zarrs::storage::{storage_transformer::UsageLogStorageTransformer, ReadableWritableListableStorage};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
More examples
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use zarrs::{
array::{
codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
DataType, FillValue,
},
array_subset::ArraySubset,
node::Node,
storage::store,
};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
// Truncated by the docs renderer (import paths assumed):
use itertools::Itertools;
use zarrs::array::bytes_to_ndarray;
use zarrs::storage::{storage_transformer::UsageLogStorageTransformer, ReadableWritableListableStorage};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
// Create an array
let array_path = "/group/array";
let shard_shape = vec![4, 8];
let inner_chunk_shape = vec![4, 4];
let mut sharding_codec_builder =
ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
sharding_codec_builder.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
]);
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::UInt16,
shard_shape.try_into()?,
FillValue::from(0u16),
)
.array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// The array metadata is
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some shards (in parallel)
(0..2).into_par_iter().try_for_each(|s| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![s, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
|ij| {
(s * chunk_shape[0].get() * chunk_shape[1].get()
+ ij[0] as u64 * chunk_shape[1].get()
+ ij[1] as u64) as u16
},
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec()); // the center 4x2 region
let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a shard back from the store
let shard_indices = vec![1, 0];
let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
println!("Shard [1,0] is:\n{data_shard}\n");
// Read an inner chunk from the store
let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Decode inner chunks
// In some cases, it might be preferable to decode inner chunks in a shard directly.
// If using the partial decoder, then the shard index will only be read once from the store.
let partial_decoder = array.partial_decoder(&[0, 0])?;
let inner_chunks_to_decode = vec![
ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
];
let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
.into_iter()
.map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
.collect::<Result<Vec<_>, _>>()?;
println!("Decoded inner chunks:");
for (inner_chunk_subset, decoded_inner_chunk) in
std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
{
println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
}
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{}", tree);
println!(
"The keys in the store are:\n[{}]",
store.list().unwrap_or_default().iter().format(", ")
);
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Truncated by the docs renderer (import paths assumed):
use ndarray::{array, Array2, ArrayD};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use zarrs::storage::{storage_transformer::UsageLogStorageTransformer, ReadableWritableListableStorage};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_ndarray(
&chunk_indices,
ArrayD::<f32>::from_shape_vec(
chunk_subset.shape_usize(),
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
.unwrap(),
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
let ndarray_chunks: Array2<f32> = array![
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
];
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
let ndarray_subset: Array2<f32> =
array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
let ndarray_subset: Array2<f32> = array![
[-0.6],
[-1.6],
[-2.6],
[-3.6],
[-4.6],
[-5.6],
[-6.6],
[-7.6],
];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
array.store_chunk_subset_ndarray(
// chunk indices
&[1, 1],
// subset within chunk
ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
ndarray_chunk_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub fn store_chunks(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: Vec<u8>
) -> Result<(), ArrayError>
Encode chunks_bytes and store at the chunks with indices represented by the chunks array subset.
Use store_chunks_opt to control codec options.
A chunk composed entirely of the fill value will not be written to the store.
§Errors
Returns an ArrayError if
- the chunks subset is invalid,
- the length of chunks_bytes is not equal to the expected length (the product of the number of elements in the chunks and the data type size in bytes),
- there is a codec encoding error, or
- there is an underlying store error.
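For example (a minimal sketch with the same assumed 8x8 float32 array; the chunks subset [1..2, 0..2] spans a 4x8 element region, so 4 * 8 * 4 = 128 bytes are expected):
let chunks_bytes: Vec<u8> = vec![0u8; 4 * 8 * 4];
array.store_chunks(&ArraySubset::new_with_ranges(&[1..2, 0..2]), chunks_bytes)?;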
pub fn store_chunks_elements<T: Pod>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: Vec<T>
) -> Result<(), ArrayError>
Encode chunks_elements and store at the chunks with indices represented by the chunks array subset.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunks error condition is met.
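As the example below illustrates, chunks_elements is laid out in row-major order over the combined region spanned by the chunks. A minimal sketch with the same assumed 8x8 float32 array:
// The 4x8 region covered by chunks [1..2, 0..2] is supplied row by row.
array.store_chunks_elements::<f32>(
    &ArraySubset::new_with_ranges(&[1..2, 0..2]),
    vec![0.0_f32; 32], // 4 rows * 8 columns
)?;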
Examples found in repository
(This method is demonstrated in the array_write_read example shown under store_chunk_elements above.)
pub fn store_chunks_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: TArray
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode chunks_array and store at the chunks with indices represented by the chunks array subset.
§Errors
Returns an ArrayError if
- the shape of the array does not match the shape of the chunks, or
- a store_chunks_elements error condition is met.
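For example (a minimal sketch with the same assumed 8x8 float32 array and the ndarray feature enabled; one 4x8 ndarray fills both chunks of [1..2, 0..2]):
let chunks_array = ndarray::Array2::<f32>::zeros((4, 8));
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), chunks_array)?;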
Examples found in repository
(This method is demonstrated in the ndarray-based array_write_read example shown under store_chunk_ndarray above.)
pub fn erase_metadata(&self) -> Result<(), StorageError>
Erase the metadata.
Succeeds if the metadata does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
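For example (a minimal sketch, assuming an existing array handle):
// Remove the array's metadata document from the store; this is a no-op if
// the metadata key is already absent.
array.erase_metadata()?;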
pub fn erase_chunk(&self, chunk_indices: &[u64]) -> Result<(), StorageError>
Erase the chunk at chunk_indices.
Succeeds if the chunk does not exist.
§Errors
Returns a StorageError if there is an underlying store error.
Examples found in repository
(erase_chunk is demonstrated in the array_write_read examples shown under store_chunk_elements and store_chunk_ndarray above.)
pub fn erase_chunks(&self, chunks: &ArraySubset) -> Result<(), StorageError>
Erase the chunks with indices represented by the chunks array subset.
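For example (a minimal sketch with the same assumed 8x8 array with 4x4 chunks):
// Erase the entire bottom row of chunks in one call.
array.erase_chunks(&ArraySubset::new_with_ranges(&[1..2, 0..2]))?;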
pub fn store_chunk_opt(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_chunk.
pub fn store_chunk_elements_opt<T: Pod>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_chunk_elements.
pub fn store_chunk_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_chunk_ndarray.
pub fn store_chunks_opt(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_chunks.
pub fn store_chunks_elements_opt<T: Pod>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_chunks_elements.
pub fn store_chunks_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_chunks_ndarray.
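A minimal sketch of the _opt pattern (the CodecOptions import path is an assumption here; the default options match the behaviour of the corresponding non-_opt methods):
use zarrs::array::codec::CodecOptions; // assumed import path
// Start from the defaults, which maximise concurrency; construct a custom
// CodecOptions value instead when parallelising over chunks externally.
let options = CodecOptions::default();
array.store_chunk_opt(&[0, 0], vec![0u8; 4 * 4 * 4], &options)?;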
impl<TStorage: ?Sized + ReadableWritableStorageTraits + 'static> Array<TStorage>
pub fn store_chunk_subset(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: Vec<u8>
) -> Result<(), ArrayError>
Encode chunk_subset_bytes and store in chunk_subset of the chunk at chunk_indices with default codec options.
Use store_chunk_subset_opt to control codec options.
Prefer to use store_chunk where possible, since this function may decode the chunk before updating it and reencoding it.
§Errors
Returns an ArrayError if
- chunk_subset is invalid or out of bounds of the chunk,
- there is a codec encoding error, or
- there is an underlying store error.
§Panics
Panics if attempting to reference a byte beyond usize::MAX.
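For example (a minimal sketch with the same assumed 8x8 float32 array; the 1x4 subset holds 1 * 4 elements * 4 bytes = 16 bytes):
// Overwrite the last row of the chunk at [1, 1] with raw bytes.
array.store_chunk_subset(
    &[1, 1],
    &ArraySubset::new_with_ranges(&[3..4, 0..4]),
    vec![0u8; 16],
)?;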
pub fn store_chunk_subset_elements<T: Pod>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: Vec<T>
) -> Result<(), ArrayError>
Encode chunk_subset_elements and store in chunk_subset of the chunk at chunk_indices with default codec options.
Use store_chunk_subset_elements_opt to control codec options.
Prefer to use store_chunk_elements where possible, since this function will decode the chunk before updating it and reencoding it.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_chunk_subset error condition is met.
Examples found in repository
(This method is demonstrated in the rectangular_array_write_read example under store_chunk_ndarray and in the array_write_read example under store_chunk_elements above.)
pub fn store_chunk_subset_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: TArray
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode chunk_subset_array and store it in the subset of the chunk at chunk_indices starting at chunk_subset_start.
Use store_chunk_subset_ndarray_opt to control codec options.
Prefer store_chunk_ndarray where possible, since this method must decode the chunk before updating it and re-encoding it.
§Errors
Returns an ArrayError if a store_chunk_subset_elements error condition is met.
Examples found in repository
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use ndarray::{array, Array2, ArrayD};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::{
    array::{DataType, FillValue, ZARR_NAN_F32},
    array_subset::ArraySubset,
    node::Node,
    storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_ndarray(
&chunk_indices,
ArrayD::<f32>::from_shape_vec(
chunk_subset.shape_usize(),
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
.unwrap(),
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
let ndarray_chunks: Array2<f32> = array![
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
];
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
let ndarray_subset: Array2<f32> =
array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
let ndarray_subset: Array2<f32> = array![
[-0.6],
[-1.6],
[-2.6],
[-3.6],
[-4.6],
[-5.6],
[-6.6],
[-7.6],
];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
array.store_chunk_subset_ndarray(
// chunk indices
&[1, 1],
// subset within chunk
ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
ndarray_chunk_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub fn store_array_subset(
    &self,
    array_subset: &ArraySubset,
    subset_bytes: Vec<u8>
) -> Result<(), ArrayError>
Encode subset_bytes and store in array_subset.
Use store_array_subset_opt to control codec options.
Prefer store_chunk or store_chunks where possible, since this method decodes and re-encodes each chunk intersecting array_subset.
§Errors
Returns an ArrayError if
- the dimensionality of array_subset does not match the chunk grid dimensionality,
- the length of subset_bytes does not match the expected length governed by the shape of the array subset and the data type size,
- there is a codec encoding error, or
- an underlying store error occurs.
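For instance, a 2×2 region of an f32 array needs 16 bytes (4 elements × 4 bytes). A minimal sketch, assuming an f32 array as in the examples above (the byte values are purely illustrative):
let subset = ArraySubset::new_with_ranges(&[0..2, 0..2]);
// 4 elements as raw bytes; length must equal num_elements × data type size
let bytes: Vec<u8> = vec![0u8; 4 * std::mem::size_of::<f32>()];
array.store_array_subset(&subset, bytes)?;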
pub fn store_array_subset_elements<T: Pod>(
    &self,
    array_subset: &ArraySubset,
    subset_elements: Vec<T>
) -> Result<(), ArrayError>
Encode subset_elements and store in array_subset.
Use store_array_subset_elements_opt to control codec options.
Prefer store_chunk_elements or store_chunks_elements where possible, since this method decodes and re-encodes each chunk intersecting array_subset.
§Errors
Returns an ArrayError if
- the size of T does not match the data type size, or
- a store_array_subset error condition is met.
Examples found in repository
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::array::ChunkGrid;
use zarrs::{
    array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
    node::Node,
};
use zarrs::{
    array::{DataType, ZARR_NAN_F32},
    array_subset::ArraySubset,
    storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
More examples
array_write_read (shown in full above) also exercises this method.
pub fn store_array_subset_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: TArray
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Encode subset_array and store it in the array subset starting at subset_start.
Use store_array_subset_ndarray_opt to control codec options.
Prefer store_chunk_ndarray or store_chunks_ndarray where possible, since this method decodes and re-encodes each chunk intersecting the subset.
§Errors
Returns an ArrayError if a store_array_subset_elements error condition is met.
Examples found in repository
rectangular_array_write_read and the ndarray variant of array_write_read (both shown in full above) also exercise this method.
pub fn store_chunk_subset_opt(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_chunk_subset.
pub fn store_chunk_subset_elements_opt<T: Pod>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_chunk_subset_elements.
pub fn store_chunk_subset_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_chunk_subset_ndarray.
pub fn store_array_subset_opt(
    &self,
    array_subset: &ArraySubset,
    subset_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_array_subset.
pub fn store_array_subset_elements_opt<T: Pod>(
    &self,
    array_subset: &ArraySubset,
    subset_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Explicit options version of store_array_subset_elements.
pub fn store_array_subset_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature ndarray only.
Explicit options version of store_array_subset_ndarray.
impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> Array<TStorage>
pub async fn async_new(
    storage: Arc<TStorage>,
    path: &str
) -> Result<Self, ArrayCreateError>
Available on crate feature async only.
Async variant of new.
pub async fn async_retrieve_chunk_if_exists(
    &self,
    chunk_indices: &[u64]
) -> Result<Option<Vec<u8>>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_if_exists.
pub async fn async_retrieve_chunk_elements_if_exists<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64]
) -> Result<Option<Vec<T>>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_elements_if_exists.
pub async fn async_retrieve_chunk_ndarray_if_exists<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64]
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray_if_exists.
pub async fn async_retrieve_chunk(
    &self,
    chunk_indices: &[u64]
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk.
pub async fn async_retrieve_chunk_elements<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64]
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_elements.
pub async fn async_retrieve_chunk_ndarray<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64]
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray.
Examples found in repository
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use futures::{stream::FuturesUnordered, StreamExt};
use std::sync::Arc;
use zarrs::{
    array::{DataType, FillValue, ZARR_NAN_F32},
    array_subset::ArraySubset,
    node::Node,
    storage::store,
};
// Note: this excerpt elides the imports of AsyncReadableWritableListableStorage,
// UsageLogStorageTransformer, and object_store from the surrounding example file.
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
object_store::memory::InMemory::new(),
));
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_async_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.async_store_metadata().await?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata())?
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.async_store_metadata().await?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata())?
);
// Write some chunks
let subsets = (0..2)
.map(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})
.map(|chunk_subset| (i, chunk_indices, chunk_subset))
})
.collect::<Result<Vec<_>, _>>()?;
let mut futures = subsets
.iter()
.map(|(i, chunk_indices, chunk_subset)| {
array.async_store_chunk_elements(
&chunk_indices,
vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})
.collect::<FuturesUnordered<_>>();
while let Some(item) = futures.next().await {
item?;
}
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array
.async_store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array
.async_store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array
.async_store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array
.async_store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.async_erase_chunk(&[0, 0]).await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array
.async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
.await?;
println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array
.async_retrieve_array_subset_ndarray::<f32>(&subset)
.await?;
println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::async_new(&*store, "/").await.unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub async fn async_retrieve_chunk_into_array_view(
    &self,
    chunk_indices: &[u64],
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_into_array_view.
pub async fn async_retrieve_chunks(
    &self,
    chunks: &ArraySubset
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks.
pub async fn async_retrieve_chunks_elements<T: Pod + Send + Sync>(
    &self,
    chunks: &ArraySubset
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks_elements.
pub async fn async_retrieve_chunks_ndarray<T: Pod + Send + Sync>(
    &self,
    chunks: &ArraySubset
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunks_ndarray.
Examples found in repository
async_array_write_read (shown in full above) also exercises this method.
pub async fn async_retrieve_chunks_into_array_view(
    &self,
    chunks: &ArraySubset,
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks_into_array_view.
pub async fn async_retrieve_chunk_subset(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset.
pub async fn async_retrieve_chunk_subset_elements<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset_elements.
pub async fn async_retrieve_chunk_subset_ndarray<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_subset_ndarray.
pub async fn async_retrieve_chunk_subset_into_array_view(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset_into_array_view.
pub async fn async_retrieve_array_subset(
    &self,
    array_subset: &ArraySubset
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset.
pub async fn async_retrieve_array_subset_elements<T: Pod + Send + Sync>(
    &self,
    array_subset: &ArraySubset
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset_elements.
pub async fn async_retrieve_array_subset_ndarray<T: Pod + Send + Sync>(
    &self,
    array_subset: &ArraySubset
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_array_subset_ndarray.
Examples found in repository?
6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{stream::FuturesUnordered, StreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Note: the opening lines of the source example are elided in this listing;
    // they import AsyncReadableWritableListableStorage and
    // UsageLogStorageTransformer (from zarrs::storage) used below.

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
    // let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
        object_store::memory::InMemory::new(),
    ));
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                // )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_async_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.async_store_metadata().await?;
    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata())?
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;
    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata())?
    );

    // Write some chunks
    let subsets = (0..2)
        .map(|i| {
            let chunk_indices: Vec<u64> = vec![0, i];
            array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })
                .map(|chunk_subset| (i, chunk_indices, chunk_subset))
        })
        .collect::<Result<Vec<_>, _>>()?;
    let mut futures = subsets
        .iter()
        .map(|(i, chunk_indices, chunk_subset)| {
            array.async_store_chunk_elements(
                &chunk_indices,
                vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
        })
        .collect::<FuturesUnordered<_>>();
    while let Some(item) = futures.next().await {
        item?;
    }
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            vec![
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the central 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
pub async fn async_retrieve_array_subset_into_array_view(
    &self,
    array_subset: &ArraySubset,
    array_view: &ArrayView<'_>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset_into_array_view.
pub async fn async_partial_decoder<'a>(
    &'a self,
    chunk_indices: &[u64]
) -> Result<Box<dyn AsyncArrayPartialDecoderTraits + 'a>, ArrayError>
Available on crate feature async only.
Async variant of partial_decoder.
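A partial decoder decodes regions of a single chunk without retrieving the whole chunk, which pays off for codecs that support partial decoding (e.g. sharding). A minimal sketch, assuming an array over an async readable store is in scope and that the returned decoder exposes a partial_decode method taking chunk-relative subsets (consult AsyncArrayPartialDecoderTraits for the exact method set):

// Sketch only: decode just the first row of chunk [0, 0].
// `partial_decode` taking `&[ArraySubset]` is an assumption made for illustration.
let decoder = array.async_partial_decoder(&[0, 0]).await?;
let first_row = ArraySubset::new_with_ranges(&[0..1, 0..4]); // relative to the chunk, not the array
let decoded = decoder.partial_decode(&[first_row]).await?;
println!("decoded {} bytes", decoded[0].len());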
pub async fn async_retrieve_chunk_if_exists_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<Option<Vec<u8>>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_if_exists_opt.
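The _if_exists variants return None for chunks absent from the store rather than synthesising the fill value, which distinguishes "never written" from "written as fill". A short sketch, assuming array is in scope as in the example above and that CodecOptions is importable from zarrs::array::codec:

use zarrs::array::codec::CodecOptions;

// None means the chunk key is missing from the store entirely.
match array
    .async_retrieve_chunk_if_exists_opt(&[0, 0], &CodecOptions::default())
    .await?
{
    Some(bytes) => println!("chunk [0, 0]: {} bytes stored", bytes.len()),
    None => println!("chunk [0, 0] has not been stored"),
}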
pub async fn async_retrieve_chunk_opt(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_opt.
pub async fn async_retrieve_chunk_elements_if_exists_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<Option<Vec<T>>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_elements_if_exists_opt.
pub async fn async_retrieve_chunk_elements_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_elements_opt.
pub async fn async_retrieve_chunk_ndarray_if_exists_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<Option<ArrayD<T>>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray_if_exists_opt.
pub async fn async_retrieve_chunk_ndarray_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_ndarray_opt.
pub async fn async_retrieve_chunk_into_array_view_opt(
    &self,
    chunk_indices: &[u64],
    array_view: &ArrayView<'_>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_into_array_view_opt.
pub async fn async_retrieve_chunks_opt(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks_opt.
pub async fn async_retrieve_chunks_elements_opt<T: Pod + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks_elements_opt.
pub async fn async_retrieve_chunks_ndarray_opt<T: Pod + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunks_ndarray_opt.
pub async fn async_retrieve_array_subset_opt(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset_opt.
pub async fn async_retrieve_array_subset_elements_opt<T: Pod + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset_elements_opt.
pub async fn async_retrieve_array_subset_ndarray_opt<T: Pod + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_array_subset_ndarray_opt.
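The _opt variants take explicit CodecOptions instead of the defaults; pass your own when you want to bound the concurrency used internally, e.g. because chunks are already being processed in parallel outside zarrs. A sketch under the same assumptions as above (array in scope, CodecOptions importable from zarrs::array::codec):

use zarrs::array::codec::CodecOptions;

// Default options maximise concurrent operations; construct your own
// (and tune its concurrency settings) to coordinate with outer parallelism.
let options = CodecOptions::default();
let subset = ArraySubset::new_with_ranges(&[0..4, 0..4]);
let data = array
    .async_retrieve_array_subset_ndarray_opt::<f32>(&subset, &options)
    .await?;
println!("subset:\n{data:+4.1}");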
pub async fn async_retrieve_chunks_into_array_view_opt(
    &self,
    chunks: &ArraySubset,
    array_view: &ArrayView<'_>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunks_into_array_view_opt.
pub async fn async_retrieve_array_subset_into_array_view_opt(
    &self,
    array_subset: &ArraySubset,
    array_view: &ArrayView<'_>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_array_subset_into_array_view_opt.
pub async fn async_retrieve_chunk_subset_opt(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset_opt.
pub async fn async_retrieve_chunk_subset_elements_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset_elements_opt.
pub async fn async_retrieve_chunk_subset_ndarray_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate features async and ndarray only.
Async variant of retrieve_chunk_subset_ndarray_opt.
pub async fn async_retrieve_chunk_subset_into_array_view_opt(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    array_view: &ArrayView<'_>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of retrieve_chunk_subset_into_array_view_opt.
pub async fn async_partial_decoder_opt<'a>(
    &'a self,
    chunk_indices: &[u64],
    options: &CodecOptions
) -> Result<Box<dyn AsyncArrayPartialDecoderTraits + 'a>, ArrayError>
Available on crate feature async only.
Async variant of partial_decoder_opt.
impl<TStorage: ?Sized + AsyncWritableStorageTraits + 'static> Array<TStorage>
pub async fn async_store_metadata(&self) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of store_metadata.
Examples found in repository: see the async_array_write_read example under async_retrieve_array_subset_ndarray above.
pub async fn async_store_chunk(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: Vec<u8>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk.
pub async fn async_store_chunk_elements<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: Vec<T>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_elements.
Examples found in repository: see the async_array_write_read example under async_retrieve_array_subset_ndarray above.
pub async fn async_store_chunk_ndarray<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: TArray
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunk_ndarray.
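With the ndarray feature, a whole chunk can be stored directly from an ndarray::Array whose shape matches the chunk shape. A short sketch for the 4x4 f32 chunks of the array built in the example above (array assumed in scope):

// A 4x4 array matching the example array's chunk shape.
let chunk = ndarray::array![
    [1.0_f32, 2.0, 3.0, 4.0],
    [5.0, 6.0, 7.0, 8.0],
    [9.0, 10.0, 11.0, 12.0],
    [13.0, 14.0, 15.0, 16.0],
];
array.async_store_chunk_ndarray(&[0, 0], chunk).await?;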
pub async fn async_store_chunks(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: Vec<u8>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks.
pub async fn async_store_chunks_elements<T: Pod + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: Vec<T>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks_elements.
Examples found in repository: see the async_array_write_read example under async_retrieve_array_subset_ndarray above.
pub async fn async_store_chunks_ndarray<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: TArray
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunks_ndarray.
pub async fn async_erase_metadata(&self) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_metadata.
pub async fn async_erase_chunk(
    &self,
    chunk_indices: &[u64]
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_chunk.
Examples found in repository: see the async_array_write_read example under async_retrieve_array_subset_ndarray above.
pub async fn async_erase_chunks(
    &self,
    chunks: &ArraySubset
) -> Result<(), StorageError>
Available on crate feature async only.
Async variant of erase_chunks.
pub async fn async_store_chunk_opt(
    &self,
    chunk_indices: &[u64],
    chunk_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_opt.
pub async fn async_store_chunk_elements_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_elements_opt.
pub async fn async_store_chunk_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunk_ndarray_opt.
pub async fn async_store_chunks_opt(
    &self,
    chunks: &ArraySubset,
    chunks_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks_opt.
pub async fn async_store_chunks_elements_opt<T: Pod + Send + Sync>(
    &self,
    chunks: &ArraySubset,
    chunks_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunks_elements_opt.
pub async fn async_store_chunks_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    chunks: &ArraySubset,
    chunks_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunks_ndarray_opt.
impl<TStorage: ?Sized + AsyncReadableWritableStorageTraits + 'static> Array<TStorage>
pub async fn async_store_chunk_subset(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: Vec<u8>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset.
pub async fn async_store_chunk_subset_elements<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: Vec<T>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_elements.
Examples found in repository: see the async_array_write_read example under async_retrieve_array_subset_ndarray above.
pub async fn async_store_chunk_subset_ndarray<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: TArray
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunk_subset_ndarray.
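Unlike the bytes/elements variants, the ndarray variant takes only the start of the subset within the chunk; the subset's extent is implied by the shape of the passed array. A sketch mirroring the async_store_chunk_subset_elements call in the repository example (array assumed in scope):

// Overwrite row 3, columns 0..4, of chunk [1, 1]; the 1x4 shape of `row`
// determines the extent of the chunk subset.
let row = ndarray::array![[-7.4_f32, -7.5, -7.6, -7.7]];
array
    .async_store_chunk_subset_ndarray(&[1, 1], &[3, 0], row)
    .await?;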
pub async fn async_store_array_subset(
    &self,
    array_subset: &ArraySubset,
    subset_bytes: Vec<u8>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset.
pub async fn async_store_array_subset_elements<T: Pod + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    subset_elements: Vec<T>
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_elements.
Examples found in repository: see the async_array_write_read example under async_retrieve_array_subset_ndarray above.
pub async fn async_store_array_subset_ndarray<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: TArray
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_array_subset_ndarray.
pub async fn async_store_chunk_subset_opt(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_opt.
pub async fn async_store_chunk_subset_elements_opt<T: Pod + Send + Sync>(
    &self,
    chunk_indices: &[u64],
    chunk_subset: &ArraySubset,
    chunk_subset_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_chunk_subset_elements_opt.
pub async fn async_store_chunk_subset_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    chunk_indices: &[u64],
    chunk_subset_start: &[u64],
    chunk_subset_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_chunk_subset_ndarray_opt.
pub async fn async_store_array_subset_opt(
    &self,
    array_subset: &ArraySubset,
    subset_bytes: Vec<u8>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_opt.
pub async fn async_store_array_subset_elements_opt<T: Pod + Send + Sync>(
    &self,
    array_subset: &ArraySubset,
    subset_elements: Vec<T>,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate feature async only.
Async variant of store_array_subset_elements_opt.
pub async fn async_store_array_subset_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>(
    &self,
    subset_start: &[u64],
    subset_array: TArray,
    options: &CodecOptions
) -> Result<(), ArrayError>
Available on crate features async and ndarray only.
Async variant of store_array_subset_ndarray_opt.
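A sketch of the _opt calling convention shared by the methods above, shown with the sync variant for brevity (the async calls differ only in the async_ prefix and .await). The zarrs::array::codec::CodecOptions import path is an assumption.

use std::sync::Arc;
use zarrs::{
    array::{codec::CodecOptions, ArrayBuilder, DataType, FillValue, ZARR_NAN_F32},
    array_subset::ArraySubset,
    storage::store,
};

fn store_subset_with_options() -> Result<(), Box<dyn std::error::Error>> {
    let store = Arc::new(store::MemoryStore::new());
    let array = ArrayBuilder::new(
        vec![8, 8],
        DataType::Float32,
        vec![4, 4].try_into()?,
        FillValue::from(ZARR_NAN_F32),
    )
    .build(store, "/array")?;
    array.store_metadata()?;
    // The default options maximise concurrent operations, so this call is
    // equivalent to store_array_subset_elements without the _opt suffix.
    let options = CodecOptions::default();
    array.store_array_subset_elements_opt::<f32>(
        &ArraySubset::new_with_ranges(&[0..4, 0..4]),
        vec![0.0; 16],
        &options,
    )?;
    Ok(())
}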
impl<TStorage: ?Sized> Array<TStorage>
pub fn new_with_metadata(
    storage: Arc<TStorage>,
    path: &str,
    metadata: ArrayMetadata
) -> Result<Self, ArrayCreateError>
Create an array in storage at path with metadata.
This does not write to the store; use store_metadata to write the metadata to storage.
§Errors
Returns ArrayCreateError if:
- any metadata is invalid, or
- a plugin (e.g. data type/chunk grid/chunk key encoding/codec/storage transformer) is invalid.
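A minimal sketch, assuming metadata() returns a reference to a cloneable ArrayMetadata; the array parameters and paths are illustrative.

use std::sync::Arc;
use zarrs::{
    array::{Array, ArrayBuilder, DataType, FillValue, ZARR_NAN_F32},
    storage::store,
};

fn array_from_metadata() -> Result<(), Box<dyn std::error::Error>> {
    let store = Arc::new(store::MemoryStore::new());
    let source = ArrayBuilder::new(
        vec![8, 8],
        DataType::Float32,
        vec![4, 4].try_into()?,
        FillValue::from(ZARR_NAN_F32),
    )
    .build(store.clone(), "/a")?;
    // Create a second array at another path from the same metadata.
    let copy = Array::new_with_metadata(store, "/b", source.metadata().clone())?;
    // new_with_metadata does not touch the store; persist explicitly.
    copy.store_metadata()?;
    Ok(())
}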
pub fn set_shape(&mut self, shape: ArrayShape)
Set the shape of the array.
pub fn attributes_mut(&mut self) -> &mut Map<String, Value>
Mutably borrow the array attributes.
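Both set_shape and attributes_mut update only the in-memory metadata. A minimal sketch (the shape is passed as a Vec<u64>, mirroring the builder examples; this is an assumption about ArrayShape):

use std::sync::Arc;
use zarrs::{
    array::{ArrayBuilder, DataType, FillValue, ZARR_NAN_F32},
    storage::store,
};

fn resize_and_annotate() -> Result<(), Box<dyn std::error::Error>> {
    let store = Arc::new(store::MemoryStore::new());
    let mut array = ArrayBuilder::new(
        vec![8, 8],
        DataType::Float32,
        vec![4, 4].try_into()?,
        FillValue::from(ZARR_NAN_F32),
    )
    .build(store, "/array")?;
    array.store_metadata()?;
    // Grow the array and note the change in its attributes.
    array.set_shape(vec![16, 8]);
    array
        .attributes_mut()
        .insert("resized".into(), serde_json::Value::Bool(true));
    // The mutations above exist only in memory until the metadata is re-written.
    array.store_metadata()?;
    Ok(())
}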
pub const fn fill_value(&self) -> &FillValue
Get the fill value.
pub fn shape(&self) -> &[u64]
Get the array shape.
Examples found in repository
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::Array,
array_subset::ArraySubset,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store,
},
};
const HTTP_URL: &str =
"https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
const ARRAY_PATH: &str = "/group/array";
// Create a HTTP store
let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log.clone().create_readable_transformer(store);
}
}
// Init the existing array, reading metadata
let array = Array::new(store, ARRAY_PATH)?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
Ok(())
}
More examples
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use zarrs::array::ChunkGrid;
use zarrs::{
array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
node::Node,
};
use zarrs::{
array::{DataType, ZARR_NAN_F32},
array_subset::ArraySubset,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_elements(
&chunk_indices,
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array.store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use zarrs::{
array::{
codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
DataType, FillValue,
},
array_subset::ArraySubset,
node::Node,
storage::store,
};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
// Create an array
let array_path = "/group/array";
let shard_shape = vec![4, 8];
let inner_chunk_shape = vec![4, 4];
let mut sharding_codec_builder =
ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
sharding_codec_builder.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
]);
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::UInt16,
shard_shape.try_into()?,
FillValue::from(0u16),
)
.array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// The array metadata is
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some shards (in parallel)
(0..2).into_par_iter().try_for_each(|s| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![s, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
|ij| {
(s * chunk_shape[0].get() * chunk_shape[1].get()
+ ij[0] as u64 * chunk_shape[1].get()
+ ij[1] as u64) as u16
},
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a shard back from the store
let shard_indices = vec![1, 0];
let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
println!("Shard [1,0] is:\n{data_shard}\n");
// Read an inner chunk from the store
let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Decode inner chunks
// In some cases, it might be preferable to decode inner chunks in a shard directly.
// If using the partial decoder, then the shard index will only be read once from the store.
let partial_decoder = array.partial_decoder(&[0, 0])?;
let inner_chunks_to_decode = vec![
ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
];
let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
.into_iter()
.map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
.collect::<Result<Vec<_>, _>>()?;
println!("Decoded inner chunks:");
for (inner_chunk_subset, decoded_inner_chunk) in
std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
{
println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
}
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{}", tree);
println!(
"The keys in the store are:\n[{}]",
store.list().unwrap_or_default().iter().format(", ")
);
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_ndarray(
&chunk_indices,
ArrayD::<f32>::from_shape_vec(
chunk_subset.shape_usize(),
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
.unwrap(),
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
let ndarray_chunks: Array2<f32> = array![
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
];
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
let ndarray_subset: Array2<f32> =
array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
let ndarray_subset: Array2<f32> = array![
[-0.6],
[-1.6],
[-2.6],
[-3.6],
[-4.6],
[-5.6],
[-6.6],
[-7.6],
];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
array.store_chunk_subset_ndarray(
// chunk indices
&[1, 1],
// subset within chunk
ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
ndarray_chunk_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub fn dimensionality(&self) -> usize
Get the array dimensionality.
pub const fn codecs(&self) -> &CodecChain
Get the codecs.
pub const fn chunk_grid(&self) -> &ChunkGrid
Get the chunk grid.
Examples found in repository?
8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use zarrs::array::ChunkGrid;
use zarrs::{
array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
node::Node,
};
use zarrs::{
array::{DataType, ZARR_NAN_F32},
array_subset::ArraySubset,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
More examples
7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_elements(
&chunk_indices,
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array.store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use zarrs::{
array::{
codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
DataType, FillValue,
},
array_subset::ArraySubset,
node::Node,
storage::store,
};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
// Create an array
let array_path = "/group/array";
let shard_shape = vec![4, 8];
let inner_chunk_shape = vec![4, 4];
let mut sharding_codec_builder =
ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
sharding_codec_builder.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
]);
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::UInt16,
shard_shape.try_into()?,
FillValue::from(0u16),
)
.array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// The array metadata is
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some shards (in parallel)
(0..2).into_par_iter().try_for_each(|s| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![s, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
|ij| {
(s * chunk_shape[0].get() * chunk_shape[1].get()
+ ij[0] as u64 * chunk_shape[1].get()
+ ij[1] as u64) as u16
},
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec()); // the center 4x2 region
let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a shard back from the store
let shard_indices = vec![1, 0];
let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
println!("Shard [1,0] is:\n{data_shard}\n");
// Read an inner chunk from the store
let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Decode inner chunks
// In some cases, it might be preferable to decode inner chunks in a shard directly.
// If using the partial decoder, then the shard index will only be read once from the store.
let partial_decoder = array.partial_decoder(&[0, 0])?;
let inner_chunks_to_decode = vec![
ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
];
let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
.into_iter()
.map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
.collect::<Result<Vec<_>, _>>()?;
println!("Decoded inner chunks:");
for (inner_chunk_subset, decoded_inner_chunk) in
std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
{
println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
}
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{}", tree);
println!(
"The keys in the store are:\n[{}]",
store.list().unwrap_or_default().iter().format(", ")
);
Ok(())
}
8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::store,
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_ndarray(
&chunk_indices,
ArrayD::<f32>::from_shape_vec(
chunk_subset.shape_usize(),
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
.unwrap(),
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
let ndarray_chunks: Array2<f32> = array![
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
[1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
];
array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
let ndarray_subset: Array2<f32> =
array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
let ndarray_subset: Array2<f32> = array![
[-0.6],
[-1.6],
[-2.6],
[-3.6],
[-4.6],
[-5.6],
[-6.6],
[-7.6],
];
array.store_array_subset_ndarray(
ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
ndarray_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
array.store_chunk_subset_ndarray(
// chunk indices
&[1, 1],
// subset within chunk
ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
ndarray_chunk_subset,
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use futures::{stream::FuturesUnordered, StreamExt};
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, AsyncReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
object_store::memory::InMemory::new(),
));
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_async_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.async_store_metadata().await?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata())?
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.async_store_metadata().await?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata())?
);
// Write some chunks
let subsets = (0..2)
.map(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})
.map(|chunk_subset| (i, chunk_indices, chunk_subset))
})
.collect::<Result<Vec<_>, _>>()?;
let mut futures = subsets
.iter()
.map(|(i, chunk_indices, chunk_subset)| {
array.async_store_chunk_elements(
&chunk_indices,
vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})
.collect::<FuturesUnordered<_>>();
while let Some(item) = futures.next().await {
item?;
}
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array
.async_store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array
.async_store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array
.async_store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array
.async_store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)
.await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.async_erase_chunk(&[0, 0]).await?;
let data_all = array
.async_retrieve_array_subset_ndarray::<f32>(&subset_all)
.await?;
println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array
.async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
.await?;
println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array
.async_retrieve_array_subset_ndarray::<f32>(&subset)
.await?;
println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::async_new(&*store, "/").await.unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
pub const fn chunk_key_encoding(&self) -> &ChunkKeyEncoding
Get the chunk key encoding.
pub const fn storage_transformers(&self) -> &StorageTransformerChain
Get the storage transformers.
pub const fn dimension_names(&self) -> &Option<Vec<DimensionName>>
Get the dimension names.
pub const fn attributes(&self) -> &Map<String, Value>
Get the attributes.
pub const fn additional_fields(&self) -> &AdditionalFields
Get the additional fields.
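As an informal sketch (not one of the crate's examples): assuming array is the 8x8 float32 array constructed in the examples below, and that the returned types implement Debug, these accessors support quick introspection:
// Sketch: inspect an array's parameters via the accessors above.
// Assumes `array` is the 8x8 float32 array from the examples below
// and that the returned types implement Debug.
println!("chunk key encoding: {:?}", array.chunk_key_encoding());
println!("storage transformers: {:?}", array.storage_transformers());
println!("dimension names: {:?}", array.dimension_names());
println!("attributes: {}", serde_json::to_string(array.attributes())?);
println!("additional fields: {:?}", array.additional_fields());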
pub fn set_include_zarrs_metadata(&mut self, include_zarrs_metadata: bool)
Enable or disable the inclusion of zarrs metadata in the array attributes. Enabled by default.
Zarrs metadata includes the zarrs version and some parameters.
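For example, a minimal sketch of opting out before persisting metadata (array and store as in the examples below; note the method needs a mutable binding):
// Sketch: persist metadata without the zarrs-specific attribute.
let mut array = array; // rebind mutably
array.set_include_zarrs_metadata(false);
array.store_metadata()?; // the stored JSON now omits the zarrs version info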
pub fn metadata_opt(&self, options: &ArrayMetadataOptions) -> ArrayMetadata
Create ArrayMetadata.
pub fn metadata(&self) -> ArrayMetadata
Create ArrayMetadata with default options.
Examples found in repository
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
use std::sync::Arc;
use zarrs::{
array::Array,
array_subset::ArraySubset,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableStorage,
},
};
const HTTP_URL: &str =
"https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
const ARRAY_PATH: &str = "/group/array";
// Create a HTTP store
let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log.clone().create_readable_transformer(store);
}
}
// Init the existing array, reading metadata
let array = Array::new(store, ARRAY_PATH)?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
Ok(())
}
More examples
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::array::ChunkGrid;
use zarrs::{
array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
node::Node,
};
use zarrs::{
array::{DataType, ZARR_NAN_F32},
array_subset::ArraySubset,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
ChunkGrid::new(RectangularChunkGrid::new(&[
[1, 2, 3, 2].try_into()?,
4.try_into()?,
])),
FillValue::from(ZARR_NAN_F32),
)
.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
])
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// Write some chunks (in parallel)
(0..4).into_par_iter().try_for_each(|i| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![i, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<f32>::from_elem(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
i as f32,
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_ndarray(
&[3, 3], // start
ndarray::ArrayD::<f32>::from_shape_vec(
vec![3, 3],
vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
)?,
)?;
// Store elements directly, in this case set the 7th column to 123.0
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![123.0; 8],
)?;
// Store elements directly in a chunk, in this case set the last row of the bottom right chunk
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[3, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[1..2, 0..4]),
vec![-4.0; 4],
)?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a chunk back from the store
let chunk_indices = vec![1, 0];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{tree}");
Ok(())
}
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::{
array::{DataType, FillValue, ZARR_NAN_F32},
array_subset::ArraySubset,
node::Node,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
// "tests/data/array_write_read.zarr",
// )?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
println!(
"The group metadata is:\n{}\n",
serde_json::to_string_pretty(&group.metadata()).unwrap()
);
// Create an array
let array_path = "/group/array";
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::Float32,
vec![4, 4].try_into()?, // regular chunk shape
FillValue::from(ZARR_NAN_F32),
)
// .bytes_to_bytes_codecs(vec![]) // uncompressed
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some chunks
(0..2).into_par_iter().try_for_each(|i| {
let chunk_indices: Vec<u64> = vec![0, i];
let chunk_subset = array
.chunk_grid()
.subset(&chunk_indices, array.shape())?
.ok_or_else(|| {
zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
})?;
array.store_chunk_elements(
&chunk_indices,
vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
)
})?;
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");
// Store multiple chunks
array.store_chunks_elements::<f32>(
&ArraySubset::new_with_ranges(&[1..2, 0..2]),
vec![
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
//
1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");
// Write a subset spanning multiple chunks, including updating chunks already written
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[3..6, 3..6]),
vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");
// Store array subset
array.store_array_subset_elements::<f32>(
&ArraySubset::new_with_ranges(&[0..8, 6..7]),
vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");
// Store chunk subset
array.store_chunk_subset_elements::<f32>(
// chunk indices
&[1, 1],
// subset within chunk
&ArraySubset::new_with_ranges(&[3..4, 0..4]),
vec![-7.4, -7.5, -7.6, -7.7],
)?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");
// Erase a chunk
array.erase_chunk(&[0, 0])?;
let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");
// Read a chunk
let chunk_indices = vec![0, 1];
let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");
// Read chunks
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");
// Retrieve an array subset
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("hierarchy_tree:\n{}", tree);
Ok(())
}
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
use itertools::Itertools;
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use std::sync::Arc;
use zarrs::{
array::{
bytes_to_ndarray, // helper for converting raw chunk bytes to an ndarray (path assumed)
codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
DataType, FillValue,
},
array_subset::ArraySubset,
node::Node,
storage::{
storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
store, ReadableWritableListableStorage,
},
};
// Create a store
// let path = tempfile::TempDir::new()?;
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
// let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
if arg1 == "--usage-log" {
let log_writer = Arc::new(std::sync::Mutex::new(
// std::io::BufWriter::new(
std::io::stdout(),
// )
));
let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
chrono::Utc::now().format("[%T%.3f] ").to_string()
}));
store = usage_log
.clone()
.create_readable_writable_listable_transformer(store);
}
}
// Create a group
let group_path = "/group";
let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;
// Update group metadata
group
.attributes_mut()
.insert("foo".into(), serde_json::Value::String("bar".into()));
// Write group metadata to store
group.store_metadata()?;
// Create an array
let array_path = "/group/array";
let shard_shape = vec![4, 8];
let inner_chunk_shape = vec![4, 4];
let mut sharding_codec_builder =
ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
sharding_codec_builder.bytes_to_bytes_codecs(vec![
#[cfg(feature = "gzip")]
Box::new(codec::GzipCodec::new(5)?),
]);
let array = zarrs::array::ArrayBuilder::new(
vec![8, 8], // array shape
DataType::UInt16,
shard_shape.try_into()?,
FillValue::from(0u16),
)
.array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
.dimension_names(["y", "x"].into())
// .storage_transformers(vec![].into())
.build(store.clone(), array_path)?;
// Write array metadata to store
array.store_metadata()?;
// The array metadata is
println!(
"The array metadata is:\n{}\n",
serde_json::to_string_pretty(&array.metadata()).unwrap()
);
// Write some shards (in parallel)
(0..2).into_par_iter().try_for_each(|s| {
let chunk_grid = array.chunk_grid();
let chunk_indices = vec![s, 0];
if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
chunk_shape
.iter()
.map(|u| u.get() as usize)
.collect::<Vec<_>>(),
|ij| {
(s * chunk_shape[0].get() * chunk_shape[1].get()
+ ij[0] as u64 * chunk_shape[1].get()
+ ij[1] as u64) as u16
},
);
array.store_chunk_ndarray(&chunk_indices, chunk_array)
} else {
Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
chunk_indices.to_vec(),
))
}
})?;
// Read the whole array
let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
println!("The whole array is:\n{data_all}\n");
// Read a shard back from the store
let shard_indices = vec![1, 0];
let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
println!("Shard [1,0] is:\n{data_shard}\n");
// Read an inner chunk from the store
let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
println!("Chunk [1,0] is:\n{data_chunk}\n");
// Read the central 4x2 subset of the array
let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
println!("The middle 4x2 subset is:\n{data_4x2}\n");
// Decode inner chunks
// In some cases, it might be preferable to decode inner chunks in a shard directly.
// If using the partial decoder, then the shard index will only be read once from the store.
let partial_decoder = array.partial_decoder(&[0, 0])?;
let inner_chunks_to_decode = vec![
ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
];
let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
.into_iter()
.map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
.collect::<Result<Vec<_>, _>>()?;
println!("Decoded inner chunks:");
for (inner_chunk_subset, decoded_inner_chunk) in
std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
{
println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
}
// Show the hierarchy
let node = Node::new(&*store, "/").unwrap();
let tree = node.hierarchy_tree();
println!("The zarr hierarchy tree is:\n{}", tree);
println!(
"The keys in the store are:\n[{}]",
store.list().unwrap_or_default().iter().format(", ")
);
Ok(())
}
pub fn builder(&self) -> ArrayBuilder
Create an array builder matching the parameters of this array.
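A minimal sketch, with a hypothetical path "/group/array_copy", of reusing an array's parameters for a second array:
// Sketch: create a second array with the same parameters at a new,
// hypothetical path.
let array_copy = array.builder().build(store.clone(), "/group/array_copy")?;
array_copy.store_metadata()?;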
pub fn chunk_grid_shape(&self) -> Option<ArrayShape>
Return the shape of the chunk grid (i.e., the number of chunks).
pub fn chunk_origin(&self, chunk_indices: &[u64]) -> Result<ArrayIndices, ArrayError>
Return the origin of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
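Taken together with chunk_grid_shape, a sketch assuming the 8x8 array with regular 4x4 chunks used throughout these examples (inside a function returning Result):
// With an 8x8 array and regular 4x4 chunks, the chunk grid is 2x2...
assert_eq!(array.chunk_grid_shape(), Some(vec![2, 2]));
// ...and chunk [1, 1] originates at element [4, 4].
assert_eq!(array.chunk_origin(&[1, 1])?, vec![4, 4]);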
pub fn chunk_shape(&self, chunk_indices: &[u64]) -> Result<ChunkShape, ArrayError>
Return the shape of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
pub fn chunk_shape_usize(&self, chunk_indices: &[u64]) -> Result<Vec<usize>, ArrayError>
Return the shape of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
§Panics
Panics if any component of the chunk shape exceeds usize::MAX.
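A sketch of the two variants on the same chunk (same 4x4 chunk assumptions as above):
// Sketch: the two variants agree; the usize form suits ndarray shapes.
let chunk_shape = array.chunk_shape(&[0, 0])?; // ChunkShape (NonZeroU64 extents)
let chunk_shape_usize = array.chunk_shape_usize(&[0, 0])?; // Vec<usize>
assert_eq!(chunk_shape_usize, vec![4, 4]);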
pub fn chunk_subset(&self, chunk_indices: &[u64]) -> Result<ArraySubset, ArrayError>
Return the array subset of the chunk at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
pub fn chunk_subset_bounded(&self, chunk_indices: &[u64]) -> Result<ArraySubset, ArrayError>
Return the array subset of the chunk at chunk_indices bounded by the array shape.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
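A sketch contrasting chunk_subset and chunk_subset_bounded on the 8x8 array with 4x4 chunks (assuming ArraySubset's derived equality):
// Chunk [1, 1] covers elements 4..8 in both dimensions.
assert_eq!(
array.chunk_subset(&[1, 1])?,
ArraySubset::new_with_ranges(&[4..8, 4..8])
);
// chunk_subset_bounded clamps the ranges to the array shape; the result
// only differs for edge chunks of arrays whose shape is not a multiple
// of the chunk shape.
assert_eq!(
array.chunk_subset_bounded(&[1, 1])?,
ArraySubset::new_with_ranges(&[4..8, 4..8])
);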
pub fn chunks_subset(&self, chunks: &ArraySubset) -> Result<ArraySubset, ArrayError>
Return the array subset of chunks.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.
pub fn chunks_subset_bounded(&self, chunks: &ArraySubset) -> Result<ArraySubset, ArrayError>
Return the array subset of chunks bounded by the array shape.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.
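A sketch for the 2x2 chunk grid of the running example:
// The full 2x2 grid of chunks spans the whole 8x8 array.
let chunks = ArraySubset::new_with_ranges(&[0..2, 0..2]);
assert_eq!(
array.chunks_subset(&chunks)?,
ArraySubset::new_with_ranges(&[0..8, 0..8])
);
// The bounded variant clamps the result to the array shape.
assert_eq!(
array.chunks_subset_bounded(&chunks)?,
ArraySubset::new_with_ranges(&[0..8, 0..8])
);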
pub fn chunk_array_representation(&self, chunk_indices: &[u64]) -> Result<ChunkRepresentation, ArrayError>
Get the chunk array representation at chunk_indices.
§Errors
Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
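For instance, a sketch assuming ChunkRepresentation implements Debug:
// The representation bundles the chunk's shape with the array's data
// type and fill value, which is what the codecs operate on.
let chunk_repr = array.chunk_array_representation(&[0, 0])?;
println!("{chunk_repr:?}");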
pub fn chunks_in_array_subset(&self, array_subset: &ArraySubset) -> Result<Option<ArraySubset>, IncompatibleDimensionalityError>
Return an array subset indicating the chunks intersecting array_subset.
Returns None if the intersecting chunks cannot be determined.
§Errors
Returns IncompatibleDimensionalityError if the array subset has an incorrect dimensionality.
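A sketch using the central 4x2 region from the examples above:
// The central 4x2 region (rows 2..6, columns 3..5) touches all four
// chunks of the 2x2 chunk grid.
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]);
let chunks = array.chunks_in_array_subset(&subset)?;
assert_eq!(chunks, Some(ArraySubset::new_with_ranges(&[0..2, 0..2])));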
Trait Implementations§
impl<TStorage: ?Sized> ArrayShardedExt for Array<TStorage>
Available on crate feature sharding only.
fn is_sharded(&self) -> bool
Returns true if the array-to-bytes codec of the array is sharding_indexed.
fn inner_chunk_shape(&self) -> Option<ChunkShape>
fn inner_chunk_grid(&self) -> ChunkGrid
impl<TStorage: ?Sized + ReadableStorageTraits + 'static> ArrayShardedReadableExt<TStorage> for Array<TStorage>
Available on crate feature sharding only.
fn retrieve_inner_chunk_opt<'a>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
inner_chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Retrieve the inner chunk at chunk_indices into its bytes. Read more
fn retrieve_inner_chunk_elements_opt<'a, T: Pod>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
inner_chunk_indices: &[u64],
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Retrieve the inner chunk at chunk_indices into a vector of its elements. Read more
fn retrieve_inner_chunk_ndarray_opt<'a, T: Pod>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
inner_chunk_indices: &[u64],
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
fn retrieve_inner_chunks_opt<'a>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
inner_chunks: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Retrieve the inner chunks in chunks into their bytes. Read more
fn retrieve_inner_chunks_elements_opt<'a, T: Pod>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
inner_chunks: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Retrieve the inner chunks in inner_chunks into a vector of their elements. Read more
fn retrieve_inner_chunks_ndarray_opt<'a, T: Pod>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
inner_chunks: &ArraySubset,
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
fn retrieve_array_subset_sharded_opt<'a>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
array_subset: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<u8>, ArrayError>
Retrieve the array_subset of array into its bytes. Read more
fn retrieve_array_subset_elements_sharded_opt<'a, T: Pod>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
array_subset: &ArraySubset,
options: &CodecOptions
) -> Result<Vec<T>, ArrayError>
Retrieve the array_subset of array into a vector of its elements. Read more
fn retrieve_array_subset_ndarray_sharded_opt<'a, T: Pod>(
&'a self,
cache: &ArrayShardedReadableExtCache<'a>,
array_subset: &ArraySubset,
options: &CodecOptions
) -> Result<ArrayD<T>, ArrayError>
Available on crate feature ndarray only.
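Below is a hedged sketch of these extension traits in use. It assumes the sharded u16 array from the sharded_array_write_read example above, that the traits and cache type are importable from zarrs::array, and that ArrayShardedReadableExtCache::new takes a reference to the array (unverified here):
use zarrs::array::codec::CodecOptions;
use zarrs::array::{ArrayShardedExt, ArrayShardedReadableExt, ArrayShardedReadableExtCache};
// Sketch: read inner chunks through the sharded-array extension traits.
// `array` is assumed to be the sharded u16 array from the example above.
assert!(array.is_sharded());
println!("inner chunk shape: {:?}", array.inner_chunk_shape());
// The cache retains shard indexes, so repeated inner-chunk reads avoid
// re-reading the index from the store (constructor assumed).
let cache = ArrayShardedReadableExtCache::new(&array);
let inner_chunk = array.retrieve_inner_chunk_elements_opt::<u16>(
&cache,
&[0, 1], // inner chunk indices
&CodecOptions::default(),
)?;
println!("inner chunk [0, 1]: {inner_chunk:?}");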