Struct zarrs::array::Array

pub struct Array<TStorage: ?Sized> { /* private fields */ }

A Zarr array.

See https://zarr-specs.readthedocs.io/en/latest/v3/core/v3.0.html#array-metadata.

§Metadata

An array is defined by the following parameters (which are encoded in its JSON metadata):

  • shape: defines the length of the array dimensions,
  • data type: defines the numerical representation of array elements,
  • chunk grid: defines how the array is subdivided into chunks,
  • chunk key encoding: defines how chunk grid cell coordinates are mapped to keys in a store,
  • fill value: an element value to use for uninitialised portions of the array,
  • codecs: used to encode and decode chunks,

and optional parameters:

  • attributes: user-defined attributes,
  • storage transformers: used to intercept and alter the storage keys and bytes of an array before they reach the underlying physical storage, and
  • dimension names: defines the names of the array dimensions.

See https://zarr-specs.readthedocs.io/en/latest/v3/core/v3.0.html#array-metadata for more information on array metadata.
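As a concrete illustration of these parameters, a minimal Zarr V3 `zarr.json` for a small array might look like the following (hand-written for illustration, not produced by zarrs; the shape, chunking, and attribute values are hypothetical):

```json
{
    "zarr_format": 3,
    "node_type": "array",
    "shape": [8, 8],
    "data_type": "float32",
    "chunk_grid": {
        "name": "regular",
        "configuration": { "chunk_shape": [4, 4] }
    },
    "chunk_key_encoding": {
        "name": "default",
        "configuration": { "separator": "/" }
    },
    "fill_value": "NaN",
    "codecs": [
        { "name": "bytes", "configuration": { "endian": "little" } }
    ],
    "attributes": { "foo": "bar" },
    "dimension_names": ["y", "x"]
}
```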

§Initialisation

A new array can be initialised with an ArrayBuilder or Array::new_with_metadata.

An existing array can be initialised with Array::new; its metadata is read from the store.

The shape and attributes of an array are mutable and can be updated after construction. However, array metadata must be written explicitly to the store with store_metadata if an array is newly created or its metadata has been mutated.

§Methods

§Sync API

Array operations are divided into several categories based on the traits implemented for the backing storage.

All retrieve and store methods have multiple variants:

  • Standard variants store or retrieve data represented as bytes.
  • _elements suffix variants can store or retrieve chunks with a known type.
  • _ndarray suffix variants can store or retrieve ndarray::Arrays (requires ndarray feature).
  • Retrieve and store methods have an _opt variant with an additional CodecOptions argument for fine-grained concurrency control.
  • Variants without the _opt suffix use default CodecOptions which just maximises concurrent operations. This is preferred unless using external parallelisation.
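Conceptually, the `_elements` variants are a typed view over the byte variants: decoded chunk bytes are reinterpreted as a vector of elements after a size check. A simplified stdlib-only sketch of that conversion for a `float32` array (illustrative only; zarrs itself uses checked transmutation against the array's data type, and `bytes_to_f32_elements` is a hypothetical helper, not a zarrs API):

```rust
// Interpret little-endian chunk bytes as f32 elements, as the
// `_elements` retrieve variants conceptually do for a float32 array.
fn bytes_to_f32_elements(bytes: &[u8]) -> Option<Vec<f32>> {
    // The byte length must be a whole multiple of the element size,
    // otherwise the bytes cannot represent a vector of this type.
    if bytes.len() % std::mem::size_of::<f32>() != 0 {
        return None;
    }
    Some(
        bytes
            .chunks_exact(4)
            .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
            .collect(),
    )
}

fn main() {
    // Round-trip three elements through their byte representation.
    let bytes: Vec<u8> = [1.0f32, 2.5, -3.0]
        .iter()
        .flat_map(|v| v.to_le_bytes())
        .collect();
    let elements = bytes_to_f32_elements(&bytes).unwrap();
    assert_eq!(elements, vec![1.0, 2.5, -3.0]);
    // A length that is not a multiple of 4 bytes is rejected, analogous
    // to the "size of T does not match the data type size" errors below.
    assert!(bytes_to_f32_elements(&bytes[..5]).is_none());
    println!("ok");
}
```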
§Async API

With the async feature and an async store, there are equivalent methods to the sync API with an async_ prefix.

The async API is not as performant as the sync API.

This crate is async runtime-agnostic and does not spawn tasks internally. The implication is that methods like async_retrieve_array_subset or async_retrieve_chunks do not parallelise over chunks and can be slow compared to the sync API (especially when they involve a large number of chunks).

This limitation can be circumvented by spawning tasks outside of zarrs. For example, instead of using async_retrieve_chunks, multiple tasks executing async_retrieve_chunk_into_array_view could be spawned that output to a preallocated buffer. An example of such an approach can be found in the zarrs_benchmark_read_async application in the zarrs_tools crate.

§Parallel Writing

If a chunk is written more than once, its element values depend on whichever operation wrote to the chunk last. The ReadableWritableStorageTraits store_chunk_subset and store_array_subset methods and their variants internally retrieve a chunk, update it, then store it. It is the responsibility of zarrs consumers to ensure that these read-modify-write operations are not executed concurrently on the same chunk; partial writes to a chunk may be lost if this is not respected.

zarrs does not currently offer an API for locking chunks or regions.
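The lost-update hazard can be seen in a deterministic stdlib-only sketch of two interleaved read-modify-write operations on the same simulated chunk (illustrative only; the real methods operate on encoded chunks in a store):

```rust
// Two writers update disjoint halves of the same four-element chunk
// with unsynchronised read-modify-write; writer A's update is lost.
fn interleaved_read_modify_write() -> Vec<u8> {
    let chunk = vec![0u8; 4];

    // Both writers retrieve the chunk before either has stored its update.
    let mut copy_a = chunk.clone();
    let mut copy_b = chunk.clone();
    copy_a[0..2].copy_from_slice(&[1, 1]); // writer A updates elements 0..2
    copy_b[2..4].copy_from_slice(&[2, 2]); // writer B updates elements 2..4

    // Writer A stores, then writer B stores its stale copy over the top.
    let mut stored = chunk;
    stored.copy_from_slice(&copy_a);
    stored.copy_from_slice(&copy_b);
    stored
}

fn main() {
    // Writer A's partial write to elements 0..2 has been lost.
    assert_eq!(interleaved_read_modify_write(), vec![0, 0, 2, 2]);
    println!("ok");
}
```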

§Best Practices

§Writing

For optimum write performance, an array should be written using store_chunk or store_chunks where possible. The store_chunk_subset and store_array_subset methods are less preferred because they may incur decoding overhead and require careful usage if executed concurrently (see previous section).

§Reading

It is fastest to load arrays using retrieve_chunk or retrieve_chunks where possible. In contrast, the retrieve_chunk_subset and retrieve_array_subset may use partial decoders which can be less efficient with some codecs/stores.
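Which chunks a subset retrieval touches can be reasoned about with simple index arithmetic. A stdlib-only sketch for one dimension of a regular chunk grid (illustrative; zarrs computes this via its chunk grid types, and `overlapped_chunks` is a hypothetical helper):

```rust
// For a regular chunk grid, the chunks overlapped by a subset
// [start, start + shape) along one dimension are
// floor(start / chunk) ..= floor((start + shape - 1) / chunk).
fn overlapped_chunks(start: u64, shape: u64, chunk: u64) -> std::ops::RangeInclusive<u64> {
    (start / chunk)..=((start + shape - 1) / chunk)
}

fn main() {
    // The 4x2 subset [2..6, 3..5] of an 8x8 array with 4x4 chunks
    // overlaps chunk rows 0..=1 and chunk columns 0..=1 (four chunks),
    // so retrieve_array_subset must partially decode each of them.
    assert_eq!(overlapped_chunks(2, 4, 4), 0..=1);
    assert_eq!(overlapped_chunks(3, 2, 4), 0..=1);
    // A subset aligned to chunk boundaries maps to exactly one chunk
    // per dimension; retrieve_chunk(s) is preferable in that case.
    assert_eq!(overlapped_chunks(4, 4, 4), 1..=1);
    println!("ok");
}
```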

§zarrs Metadata

By default, the zarrs version and a link to its source code are written to the _zarrs attribute in array metadata. This can be disabled with set_include_zarrs_metadata(false).

Implementations§

impl<TStorage: ?Sized + ReadableStorageTraits + 'static> Array<TStorage>

pub fn new(storage: Arc<TStorage>, path: &str) -> Result<Self, ArrayCreateError>

Create an array in storage at path. The metadata is read from the store.

§Errors

Returns ArrayCreateError if there is a storage error or any metadata is invalid.

Examples found in repository
examples/http_array_read.rs (line 35)
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::Array,
        array_subset::ArraySubset,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableStorage,
        },
    };

    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log.clone().create_readable_transformer(store);
        }
    }

    // Init the existing array, reading metadata
    let array = Array::new(store, ARRAY_PATH)?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
pub fn retrieve_chunk_if_exists( &self, chunk_indices: &[u64] ) -> Result<Option<Vec<u8>>, ArrayError>

Read and decode the chunk at chunk_indices into its bytes if it exists, using default codec options.

§Errors

Returns an ArrayError if

  • chunk_indices are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Panics if the number of elements in the chunk exceeds usize::MAX.

pub fn retrieve_chunk_elements_if_exists<T: Pod>( &self, chunk_indices: &[u64] ) -> Result<Option<Vec<T>>, ArrayError>

Read and decode the chunk at chunk_indices into a vector of its elements if it exists, using default codec options.

§Errors

Returns an ArrayError if

  • the size of T does not match the data type size,
  • the decoded bytes cannot be transmuted,
  • chunk_indices are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
pub fn retrieve_chunk_ndarray_if_exists<T: Pod>( &self, chunk_indices: &[u64] ) -> Result<Option<ArrayD<T>>, ArrayError>

Available on crate feature ndarray only.

Read and decode the chunk at chunk_indices into an ndarray::ArrayD if it exists.

§Errors

Returns an ArrayError if:

  • the size of T does not match the data type size,
  • the decoded bytes cannot be transmuted,
  • the chunk indices are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Will panic if a chunk dimension is larger than usize::MAX.

pub fn retrieve_chunk( &self, chunk_indices: &[u64] ) -> Result<Vec<u8>, ArrayError>

Read and decode the chunk at chunk_indices into its bytes, or the fill value if it does not exist, using default codec options.

§Errors

Returns an ArrayError if

  • chunk_indices are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Panics if the number of elements in the chunk exceeds usize::MAX.

pub fn retrieve_chunk_elements<T: Pod>( &self, chunk_indices: &[u64] ) -> Result<Vec<T>, ArrayError>

Read and decode the chunk at chunk_indices into a vector of its elements, or the fill value if it does not exist.

§Errors

Returns an ArrayError if

  • the size of T does not match the data type size,
  • the decoded bytes cannot be transmuted,
  • chunk_indices are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
pub fn retrieve_chunk_ndarray<T: Pod>( &self, chunk_indices: &[u64] ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Read and decode the chunk at chunk_indices into an ndarray::ArrayD. It is filled with the fill value if it does not exist.

§Errors

Returns an ArrayError if:

  • the size of T does not match the data type size,
  • the decoded bytes cannot be transmuted,
  • the chunk indices are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Will panic if a chunk dimension is larger than usize::MAX.

Examples found in repository
examples/http_array_read.rs (line 49)
More examples
examples/rectangular_array_write_read.rs (line 136)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
examples/array_write_read.rs (line 143)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/sharded_array_write_read.rs (line 118)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use zarrs::{
        array::{
            bytes_to_ndarray,
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    use itertools::Itertools;
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // The array metadata is
    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
examples/array_write_read_ndarray.rs (line 159)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn retrieve_chunk_into_array_view( &self, chunk_indices: &[u64], array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Retrieve a chunk and write it into an existing array view.

§Errors

See Array::retrieve_chunk. Can also error if the ArraySubset in array_view does not have the same shape as the chunk at chunk_indices.

§Panics

Panics if an offset is larger than usize::MAX.


pub fn retrieve_chunks( &self, chunks: &ArraySubset ) -> Result<Vec<u8>, ArrayError>

Read and decode the chunks at chunks into their bytes.

§Errors

Returns an ArrayError if

  • any chunk indices in chunks are invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Panics if the number of array elements in the chunks exceeds usize::MAX.
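As a minimal sketch (assuming `array` is an initialised `Array` over readable storage, as in the repository examples below; the subset chosen here is illustrative):

```rust
// Decode the chunks in rows 0..2, column 1 of the chunk grid into one
// contiguous byte vector, laid out in array order.
let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
let bytes: Vec<u8> = array.retrieve_chunks(&chunks)?;
```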


pub fn retrieve_chunks_elements<T: Pod>( &self, chunks: &ArraySubset ) -> Result<Vec<T>, ArrayError>

Read and decode the chunks at chunks into a vector of their elements.

§Errors

Returns an ArrayError if any chunk indices in chunks are invalid, or on any error condition of Array::retrieve_chunks_opt.

§Panics

Panics if the number of array elements in the chunks exceeds usize::MAX.


pub fn retrieve_chunks_ndarray<T: Pod>( &self, chunks: &ArraySubset ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Read and decode the chunks at chunks into an ndarray::ArrayD.

§Errors

Returns an ArrayError if any chunk indices in chunks are invalid, or on any error condition of Array::retrieve_chunks_elements_opt.

§Panics

Panics if the number of array elements in the chunks exceeds usize::MAX.

Examples found in repository
examples/array_write_read.rs (line 148)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
More examples
examples/array_write_read_ndarray.rs (line 164)

pub fn retrieve_chunks_into_array_view( &self, chunks: &ArraySubset, array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Retrieve chunks into an array view.

§Errors

See Array::retrieve_chunks_opt. Can also error if the ArraySubset in array_view does not have the same shape as the array subset spanned by chunks.

§Panics

Panics if an offset is larger than usize::MAX.


pub fn retrieve_chunk_subset( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset ) -> Result<Vec<u8>, ArrayError>

Read and decode the chunk_subset of the chunk at chunk_indices into its bytes.

§Errors

Returns an ArrayError if:

  • the chunk indices are invalid,
  • the chunk subset is invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Will panic if the number of elements in chunk_subset is usize::MAX or larger.
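A minimal sketch, assuming `array` is an initialised `Array` over readable storage (the indices and subset are illustrative):

```rust
// Decode only the first row of the chunk at chunk grid indices [0, 0],
// without materialising the rest of the chunk.
let chunk_indices = [0, 0];
let chunk_subset = ArraySubset::new_with_ranges(&[0..1, 0..4]);
let bytes: Vec<u8> = array.retrieve_chunk_subset(&chunk_indices, &chunk_subset)?;
```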


pub fn retrieve_chunk_subset_elements<T: Pod>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset ) -> Result<Vec<T>, ArrayError>

Read and decode the chunk_subset of the chunk at chunk_indices into its elements.

§Errors

Returns an ArrayError if:

  • the chunk indices are invalid,
  • the chunk subset is invalid,
  • there is a codec decoding error, or
  • an underlying store error.
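A minimal sketch of the typed variant, assuming `array` is an initialised `Array` with a float32 data type (indices and subset are illustrative):

```rust
// Decode the last row of the chunk at chunk grid indices [1, 1] directly
// as f32 elements rather than raw bytes.
let row: Vec<f32> = array.retrieve_chunk_subset_elements::<f32>(
    &[1, 1],
    &ArraySubset::new_with_ranges(&[3..4, 0..4]),
)?;
```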

pub fn retrieve_chunk_subset_ndarray<T: Pod>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Read and decode the chunk_subset of the chunk at chunk_indices into an ndarray::ArrayD.

§Errors

Returns an ArrayError if:

  • the chunk indices are invalid,
  • the chunk subset is invalid,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Will panic if the number of elements in chunk_subset is usize::MAX or larger.


pub fn retrieve_chunk_subset_into_array_view( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Retrieve a subset of a chunk and write it into an existing array view.

§Errors

See Array::retrieve_chunk_subset. Can also error if the ArraySubset in array_view does not have the same shape as chunk_subset.

§Panics

Panics if an offset is larger than usize::MAX.


pub fn retrieve_array_subset( &self, array_subset: &ArraySubset ) -> Result<Vec<u8>, ArrayError>

Read and decode the array_subset of array into its bytes.

Out-of-bounds elements will have the fill value. Chunks intersecting the array subset may be retrieved in parallel.

§Errors

Returns an ArrayError if:

  • the array_subset dimensionality does not match the chunk grid dimensionality,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Panics if attempting to reference a byte beyond usize::MAX.


pub fn retrieve_array_subset_elements<T: Pod>( &self, array_subset: &ArraySubset ) -> Result<Vec<T>, ArrayError>

Read and decode the array_subset of array into a vector of its elements.

§Errors

Returns an ArrayError if:

  • the size of T does not match the data type size,
  • the decoded bytes cannot be transmuted,
  • an array subset is invalid or out of bounds of the array,
  • there is a codec decoding error, or
  • an underlying store error.
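A minimal sketch, assuming `array` is an initialised `Array` with a float32 data type over readable storage (the subset is illustrative):

```rust
// Retrieve the centre 4x2 region of an 8x8 array as typed elements; the
// chunks intersecting the subset are decoded and the requested elements
// are returned in array order.
let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]);
let elements: Vec<f32> = array.retrieve_array_subset_elements::<f32>(&subset)?;
```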

pub fn retrieve_array_subset_ndarray<T: Pod>( &self, array_subset: &ArraySubset ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Read and decode the array_subset of array into an ndarray::ArrayD.

§Errors

Returns an ArrayError if:

  • an array subset is invalid or out of bounds of the array,
  • there is a codec decoding error, or
  • an underlying store error.
§Panics

Will panic if any dimension in array_subset is usize::MAX or larger.

Examples found in repository
examples/http_array_read.rs (line 44)
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::Array,
        array_subset::ArraySubset,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store,
        },
    };

    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log.clone().create_readable_transformer(store);
        }
    }

    // Init the existing array, reading metadata
    let array = Array::new(store, ARRAY_PATH)?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
More examples
examples/rectangular_array_write_read.rs (line 131)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
examples/array_write_read.rs (line 93)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/sharded_array_write_read.rs (line 113)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use zarrs::{
        array::{
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Print the array metadata
    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
examples/array_write_read_ndarray.rs (line 98)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
source

pub fn retrieve_array_subset_into_array_view( &self, array_subset: &ArraySubset, array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Retrieve an array subset into an array view.

§Errors

See Array::retrieve_array_subset. Can also error if the ArraySubset in array_view does not have the same shape as array_subset.

§Panics

Panics if an offset is larger than usize::MAX.

source

pub fn partial_decoder<'a>( &'a self, chunk_indices: &[u64] ) -> Result<Box<dyn ArrayPartialDecoderTraits + 'a>, ArrayError>

Initialises a partial decoder for the chunk at chunk_indices.

§Errors

Returns an ArrayError if initialisation of the partial decoder fails.

Examples found in repository
examples/sharded_array_write_read.rs (line 134)
source

pub fn retrieve_chunk_if_exists_opt( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Option<Vec<u8>>, ArrayError>

Explicit options version of retrieve_chunk_if_exists.

source

pub fn retrieve_chunk_opt( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Explicit options version of retrieve_chunk.

source

pub fn retrieve_chunk_elements_if_exists_opt<T: Pod>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Option<Vec<T>>, ArrayError>

Explicit options version of retrieve_chunk_elements_if_exists.

source

pub fn retrieve_chunk_elements_opt<T: Pod>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Explicit options version of retrieve_chunk_elements.

source

pub fn retrieve_chunk_ndarray_if_exists_opt<T: Pod>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Option<ArrayD<T>>, ArrayError>

Available on crate feature ndarray only.

Explicit options version of retrieve_chunk_ndarray_if_exists.

source

pub fn retrieve_chunk_ndarray_opt<T: Pod>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Explicit options version of retrieve_chunk_ndarray.

source

pub fn retrieve_chunk_into_array_view_opt( &self, chunk_indices: &[u64], array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of retrieve_chunk_into_array_view.

source

pub fn retrieve_chunk_subset_into_array_view_opt( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of retrieve_chunk_subset_into_array_view.

source

pub fn retrieve_chunks_opt( &self, chunks: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Explicit options version of retrieve_chunks.

source

pub fn retrieve_chunks_elements_opt<T: Pod>( &self, chunks: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Explicit options version of retrieve_chunks_elements.

source

pub fn retrieve_chunks_ndarray_opt<T: Pod>( &self, chunks: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Explicit options version of retrieve_chunks_ndarray.

source

pub fn retrieve_array_subset_opt( &self, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Explicit options version of retrieve_array_subset.

source

pub fn retrieve_chunks_into_array_view_opt( &self, chunks: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of retrieve_chunks_into_array_view.

source

pub fn retrieve_array_subset_into_array_view_opt( &self, array_subset: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of retrieve_array_subset_into_array_view.

source

pub fn retrieve_array_subset_elements_opt<T: Pod>( &self, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Explicit options version of retrieve_array_subset_elements.

source

pub fn retrieve_array_subset_ndarray_opt<T: Pod>( &self, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Explicit options version of retrieve_array_subset_ndarray.

source

pub fn retrieve_chunk_subset_opt( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Explicit options version of retrieve_chunk_subset.

source

pub fn retrieve_chunk_subset_elements_opt<T: Pod>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Explicit options version of retrieve_chunk_subset_elements.

source

pub fn retrieve_chunk_subset_ndarray_opt<T: Pod>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.

Explicit options version of retrieve_chunk_subset_ndarray.

source

pub fn partial_decoder_opt<'a>( &'a self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Box<dyn ArrayPartialDecoderTraits + 'a>, ArrayError>

Explicit options version of partial_decoder.
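
The `_opt` methods above mirror their plain counterparts but take an explicit CodecOptions argument. A minimal sketch of the pattern, assuming `array` is an already-initialised Array over readable storage and that CodecOptions is reachable at zarrs::array::codec::CodecOptions with a Default implementation (the exact path and available builder methods may vary between zarrs versions):

    use zarrs::array::codec::CodecOptions;
    use zarrs::array_subset::ArraySubset;

    fn read_with_options(
        array: &zarrs::array::Array<dyn zarrs::storage::ReadableStorageTraits>,
    ) -> Result<(), zarrs::array::ArrayError> {
        // Default codec options; tune here (e.g. concurrency) if the
        // zarrs version in use exposes such settings.
        let options = CodecOptions::default();
        let subset = ArraySubset::new_with_ranges(&[0..4, 0..4]);
        // Equivalent to retrieve_array_subset_elements::<f32>(&subset),
        // but with the codec options passed explicitly.
        let elements: Vec<f32> = array.retrieve_array_subset_elements_opt(&subset, &options)?;
        println!("{elements:?}");
        Ok(())
    }

The plain variants are convenience wrappers; reach for the `_opt` forms when the same non-default options must be applied consistently across many retrieve/store calls.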

source§

impl<TStorage: ?Sized + WritableStorageTraits + 'static> Array<TStorage>

source

pub fn store_metadata(&self) -> Result<(), StorageError>

Store metadata.

§Errors

Returns StorageError if there is an underlying store error.

Examples found in repository
examples/rectangular_array_write_read.rs (line 78)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
More examples
examples/array_write_read.rs (line 70)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/sharded_array_write_read.rs (line 79)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use zarrs::{
        array::{
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // The array metadata is
    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
examples/array_write_read_ndarray.rs (line 71)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
source

pub fn store_chunk( &self, chunk_indices: &[u64], chunk_bytes: Vec<u8> ) -> Result<(), ArrayError>

Encode chunk_bytes and store at chunk_indices.

Use store_chunk_opt to control codec options. A chunk composed entirely of the fill value will not be written to the store.

§Errors

Returns an ArrayError if

  • chunk_indices are invalid,
  • the length of chunk_bytes is not equal to the expected length (the product of the number of elements in the chunk and the data type size in bytes),
  • there is a codec encoding error, or
  • there is an underlying store error.
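The expected length of `chunk_bytes` can be computed up front. A minimal sketch of the rule stated above (illustrative helper only, not a zarrs API; the 4x4 float32 chunk is a hypothetical example):

```rust
// Sketch: expected chunk_bytes length = number of elements in the chunk
// times the data type size in bytes.
fn expected_chunk_bytes_len(chunk_shape: &[u64], data_type_size: usize) -> usize {
    chunk_shape.iter().product::<u64>() as usize * data_type_size
}

fn main() {
    // Hypothetical 4x4 chunk of float32: 16 elements x 4 bytes = 64 bytes.
    let len = expected_chunk_bytes_len(&[4, 4], std::mem::size_of::<f32>());
    assert_eq!(len, 64);
    println!("{len}");
}
```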
source

pub fn store_chunk_elements<T: Pod>( &self, chunk_indices: &[u64], chunk_elements: Vec<T> ) -> Result<(), ArrayError>

Encode chunk_elements and store at chunk_indices.

Use store_chunk_elements_opt to control codec options. A chunk composed entirely of the fill value will not be written to the store.

§Errors

Returns an ArrayError if

  • the size of T does not match the data type size, or
  • a store_chunk error condition is met.
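The first error condition can be sketched as a size check (illustrative helper only, not a zarrs API; the float32 data type size of 4 bytes is a hypothetical example):

```rust
// Sketch: store_chunk_elements::<T> requires size_of::<T>() to equal the
// array's data type size in bytes.
fn element_size_matches<T>(data_type_size: usize) -> bool {
    std::mem::size_of::<T>() == data_type_size
}

fn main() {
    assert!(element_size_matches::<f32>(4));  // f32 elements for a float32 array
    assert!(!element_size_matches::<f64>(4)); // f64 elements would be rejected
    println!("ok");
}
```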
Examples found in repository
examples/array_write_read.rs (lines 86-89)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn store_chunk_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, chunk_indices: &[u64], chunk_array: TArray ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Encode chunk_array and store at chunk_indices.

Use store_chunk_ndarray_opt to control codec options.

§Errors

Returns an ArrayError if

  • the shape of the array does not match the shape of the chunk,
  • a store_chunk_elements error condition is met.
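The first error condition is a shape comparison between the supplied ndarray and the chunk shape reported by the chunk grid. The check can be illustrated in plain Rust (a hedged sketch of the condition, not the zarrs implementation; `chunk_shape_matches` is a hypothetical helper):

```rust
// store_chunk_ndarray rejects a chunk whose ndarray shape differs from the
// chunk shape reported by the chunk grid. Sketch of that comparison:
fn chunk_shape_matches(chunk_shape: &[u64], ndarray_shape: &[usize]) -> bool {
    chunk_shape.len() == ndarray_shape.len()
        && chunk_shape
            .iter()
            .zip(ndarray_shape)
            .all(|(c, n)| *c as usize == *n)
}

fn main() {
    // A [4, 4] chunk accepts a 4 x 4 ndarray ...
    assert!(chunk_shape_matches(&[4, 4], &[4, 4]));
    // ... but a 4 x 8 ndarray (or a differing dimensionality) is an ArrayError.
    assert!(!chunk_shape_matches(&[4, 4], &[4, 8]));
    assert!(!chunk_shape_matches(&[4], &[4, 4]));
}
```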
Examples found in repository
examples/rectangular_array_write_read.rs (line 92)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
More examples
examples/sharded_array_write_read.rs (line 103)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use zarrs::{
        array::{
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // The array metadata is
    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
examples/array_write_read_ndarray.rs (lines 87-94)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn store_chunks( &self, chunks: &ArraySubset, chunks_bytes: Vec<u8> ) -> Result<(), ArrayError>

Encode chunks_bytes and store at the chunks with indices represented by the chunks array subset.

Use store_chunks_opt to control codec options. A chunk composed entirely of the fill value will not be written to the store.

§Errors

Returns an ArrayError if

  • chunks are invalid,
  • the length of chunks_bytes is not equal to the expected length (the product of the number of elements in the chunks and the data type size in bytes),
  • there is a codec encoding error, or
  • an underlying store error.
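The expected-length condition can be sketched in plain Rust (a hedged illustration of the check, not zarrs code; `expected_chunks_bytes_len` is a hypothetical helper, and the 2 × 4 × 4 geometry matches the examples in this document):

```rust
// Expected length of `chunks_bytes` for store_chunks: the product of the
// number of elements across the selected chunks and the data type size.
fn expected_chunks_bytes_len(
    chunk_shape: &[u64],
    num_chunks: u64,
    data_type_size: usize,
) -> usize {
    let elements_per_chunk: u64 = chunk_shape.iter().product();
    (elements_per_chunk * num_chunks) as usize * data_type_size
}

fn main() {
    // Two 4 x 4 chunks of float32 (as in the examples): 2 * 16 elements * 4 bytes.
    let len = expected_chunks_bytes_len(&[4, 4], 2, std::mem::size_of::<f32>());
    assert_eq!(len, 128);
    // A chunks_bytes Vec<u8> of any other length would be an ArrayError.
}
```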

pub fn store_chunks_elements<T: Pod>( &self, chunks: &ArraySubset, chunks_elements: Vec<T> ) -> Result<(), ArrayError>

Encode chunks_elements and store at the chunks with indices represented by the chunks array subset.

§Errors

Returns an ArrayError if

  • the size of T does not match the data type size, or
  • a store_chunks error condition is met.
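The first error condition is a size check between the Rust element type `T` and the array's data type. A hedged sketch in plain Rust (illustrative only; `element_size_matches` is a hypothetical helper, not zarrs API):

```rust
// store_chunks_elements requires size_of::<T>() to equal the data type size
// in bytes, so the element buffer reinterprets cleanly as chunk bytes.
fn element_size_matches<T>(data_type_size: usize) -> bool {
    std::mem::size_of::<T>() == data_type_size
}

fn main() {
    const FLOAT32_SIZE: usize = 4; // byte size of a Zarr float32 element
    assert!(element_size_matches::<f32>(FLOAT32_SIZE)); // ok
    assert!(!element_size_matches::<f64>(FLOAT32_SIZE)); // ArrayError
}
```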
Examples found in repository
examples/array_write_read.rs (lines 97-105)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn store_chunks_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, chunks: &ArraySubset, chunks_array: TArray ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Encode chunks_array and store at the chunks with indices represented by the chunks array subset.

§Errors

Returns an ArrayError if

  • the shape of the array does not match the shape of the chunks,
  • a store_chunks_elements error condition is met.
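For a chunks subset, the expected ndarray shape is the element-wise product of the number of chunks selected per dimension and the chunk shape. A hedged sketch in plain Rust, assuming a regular chunk grid (`expected_chunks_shape` is a hypothetical helper):

```rust
// Expected ndarray shape for store_chunks_ndarray over a chunks subset:
// chunks-per-dimension times the regular chunk shape, per dimension.
fn expected_chunks_shape(chunks_per_dim: &[u64], chunk_shape: &[u64]) -> Vec<u64> {
    chunks_per_dim
        .iter()
        .zip(chunk_shape)
        .map(|(n, c)| n * c)
        .collect()
}

fn main() {
    // The chunks subset [1..2, 0..2] selects 1 x 2 chunks of shape [4, 4],
    // so the ndarray passed in must be 4 x 8 (as in the example below).
    assert_eq!(expected_chunks_shape(&[1, 2], &[4, 4]), vec![4, 8]);
}
```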
Examples found in repository
examples/array_write_read_ndarray.rs (line 108)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn erase_metadata(&self) -> Result<(), StorageError>

Erase the metadata.

Succeeds if the metadata does not exist.

§Errors

Returns a StorageError if there is an underlying store error.


pub fn erase_chunk(&self, chunk_indices: &[u64]) -> Result<(), StorageError>

Erase the chunk at chunk_indices.

Succeeds if the chunk does not exist.

§Errors

Returns a StorageError if there is an underlying store error.

Examples found in repository
examples/array_write_read.rs (line 137)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
More examples
examples/array_write_read_ndarray.rs (line 153)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn erase_chunks(&self, chunks: &ArraySubset) -> Result<(), StorageError>

Erase the chunks in chunks.

§Errors

Returns a StorageError if there is an underlying store error.


pub fn store_chunk_opt( &self, chunk_indices: &[u64], chunk_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_chunk.


pub fn store_chunk_elements_opt<T: Pod>( &self, chunk_indices: &[u64], chunk_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_chunk_elements.


pub fn store_chunk_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, chunk_indices: &[u64], chunk_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Explicit options version of store_chunk_ndarray.


pub fn store_chunks_opt( &self, chunks: &ArraySubset, chunks_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_chunks.


pub fn store_chunks_elements_opt<T: Pod>( &self, chunks: &ArraySubset, chunks_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_chunks_elements.


pub fn store_chunks_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, chunks: &ArraySubset, chunks_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Explicit options version of store_chunks_ndarray.


impl<TStorage: ?Sized + ReadableWritableStorageTraits + 'static> Array<TStorage>


pub fn store_chunk_subset( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_bytes: Vec<u8> ) -> Result<(), ArrayError>

Encode chunk_subset_bytes and store it in chunk_subset of the chunk at chunk_indices with default codec options.

Use store_chunk_subset_opt to control codec options. Prefer store_chunk where possible, since this function may need to decode the existing chunk before updating and re-encoding it.

§Errors

Returns an ArrayError if

  • chunk_subset is invalid or out of bounds of the chunk,
  • there is a codec encoding error, or
  • an underlying store error.
§Panics

Panics if attempting to reference a byte beyond usize::MAX.


pub fn store_chunk_subset_elements<T: Pod>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_elements: Vec<T> ) -> Result<(), ArrayError>

Encode chunk_subset_elements and store it in chunk_subset of the chunk at chunk_indices with default codec options.

Use store_chunk_subset_elements_opt to control codec options. Prefer store_chunk_elements where possible, since this function may need to decode the existing chunk before updating and re-encoding it.

§Errors

Returns an ArrayError if

  • the size of T does not match the data type size, or
  • a store_chunk_subset error condition is met.
Examples found in repository
examples/rectangular_array_write_read.rs (lines 121-127)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
More examples
examples/array_write_read.rs (lines 126-132)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn store_chunk_subset_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, chunk_indices: &[u64], chunk_subset_start: &[u64], chunk_subset_array: TArray ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Encode chunk_subset_array and store it in the chunk at chunk_indices, within the subset starting at chunk_subset_start.

Use store_chunk_subset_ndarray_opt to control codec options. Prefer store_chunk_ndarray where possible, because this method must decode the chunk before updating and re-encoding it.

§Errors

Returns an ArrayError if a store_chunk_subset_elements error condition is met.
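A chunk subset must lie entirely within the bounds of the chunk. The check can be sketched with a std-only helper (a hypothetical illustration, not part of the zarrs API):

```rust
/// Returns true if a subset starting at `start` with shape `shape`
/// fits entirely within a chunk of shape `chunk_shape`.
fn subset_within_chunk(start: &[u64], shape: &[u64], chunk_shape: &[u64]) -> bool {
    start.len() == chunk_shape.len()
        && shape.len() == chunk_shape.len()
        && start
            .iter()
            .zip(shape)
            .zip(chunk_shape)
            .all(|((&s, &n), &c)| s + n <= c)
}

fn main() {
    // Subset [3..4, 0..4] of a 4x4 chunk: start [3, 0], shape [1, 4] — fits.
    assert!(subset_within_chunk(&[3, 0], &[1, 4], &[4, 4]));
    // Start [3, 1] with shape [1, 4] would overflow the second dimension.
    assert!(!subset_within_chunk(&[3, 1], &[1, 4], &[4, 4]));
}
```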

Examples found in repository
examples/array_write_read_ndarray.rs (lines 142-148)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn store_array_subset( &self, array_subset: &ArraySubset, subset_bytes: Vec<u8> ) -> Result<(), ArrayError>

Encode subset_bytes and store in array_subset.

Use store_array_subset_opt to control codec options. Prefer store_chunk or store_chunks where possible, because this method must decode and re-encode each chunk that intersects array_subset.

§Errors

Returns an ArrayError if

  • the dimensionality of array_subset does not match the chunk grid dimensionality,
  • the length of subset_bytes does not match the expected length determined by the shape of the array subset and the data type size,
  • there is a codec encoding error, or
  • there is an underlying store error.
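The expected length of subset_bytes is the product of the subset's dimension lengths multiplied by the data type's size in bytes. A std-only sketch of that calculation (illustrative only, not the zarrs implementation):

```rust
use std::ops::Range;

/// Expected byte length for a subset: product of dimension lengths × element size.
fn expected_subset_len(ranges: &[Range<u64>], data_type_size: usize) -> usize {
    let num_elements: u64 = ranges.iter().map(|r| r.end - r.start).product();
    num_elements as usize * data_type_size
}

fn main() {
    // An 8x1 subset of f32 elements (4 bytes each) needs 8 * 1 * 4 = 32 bytes.
    let len = expected_subset_len(&[0..8, 6..7], std::mem::size_of::<f32>());
    assert_eq!(len, 32);
}
```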

pub fn store_array_subset_elements<T: Pod>( &self, array_subset: &ArraySubset, subset_elements: Vec<T> ) -> Result<(), ArrayError>

Encode subset_elements and store in array_subset.

Use store_array_subset_elements_opt to control codec options. Prefer store_chunk_elements or store_chunks_elements where possible, because this method must decode and re-encode each chunk that intersects array_subset.

§Errors

Returns an ArrayError if

  • the size of T does not match the data type size, or
  • a store_array_subset error condition is met.
Examples found in repository
examples/rectangular_array_write_read.rs (lines 115-118)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
More examples
examples/array_write_read.rs (lines 110-113)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn store_array_subset_ndarray<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, subset_start: &[u64], subset_array: TArray ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Encode subset_array and store in the array subset starting at subset_start.

Use store_array_subset_ndarray_opt to control codec options. Prefer store_chunk_ndarray or store_chunks_ndarray where possible, because this method must decode and re-encode each chunk that intersects the subset.

§Errors

Returns an ArrayError if a store_array_subset_elements error condition is met.

Examples found in repository
examples/rectangular_array_write_read.rs (lines 106-112)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
More examples
examples/array_write_read_ndarray.rs (lines 115-118)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
source

pub fn store_chunk_subset_opt( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_chunk_subset.
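
The `chunk_subset` here is expressed in coordinates relative to the chunk origin, not the array origin. As a stdlib-only sketch (the helper below is hypothetical, not a zarrs API), the elements a 2D within-chunk subset touches in a row-major chunk buffer are:

```rust
// Hypothetical helper: linear (row-major) offsets of a 2D subset within a chunk.
// `chunk_shape` is the chunk's shape; `start`/`shape` describe the subset, with
// coordinates relative to the chunk origin (as the store_chunk_subset family expects).
fn subset_offsets(chunk_shape: [usize; 2], start: [usize; 2], shape: [usize; 2]) -> Vec<usize> {
    let mut offsets = Vec::with_capacity(shape[0] * shape[1]);
    for r in start[0]..start[0] + shape[0] {
        for c in start[1]..start[1] + shape[1] {
            offsets.push(r * chunk_shape[1] + c);
        }
    }
    offsets
}

fn main() {
    // The last row (subset [3..4, 0..4]) of a 4x4 chunk occupies offsets 12..16.
    assert_eq!(subset_offsets([4, 4], [3, 0], [1, 4]), vec![12, 13, 14, 15]);
}
```

This is the same last-row subset (`[3..4, 0..4]` of a `4x4` chunk) used by the `store_chunk_subset` calls in the examples above.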

source

pub fn store_chunk_subset_elements_opt<T: Pod>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_chunk_subset_elements.

source

pub fn store_chunk_subset_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, chunk_indices: &[u64], chunk_subset_start: &[u64], chunk_subset_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Explicit options version of store_chunk_subset_ndarray.

source

pub fn store_array_subset_opt( &self, array_subset: &ArraySubset, subset_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_array_subset.

source

pub fn store_array_subset_elements_opt<T: Pod>( &self, array_subset: &ArraySubset, subset_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Explicit options version of store_array_subset_elements.

source

pub fn store_array_subset_ndarray_opt<T: Pod, TArray: Into<Array<T, D>>, D: Dimension>( &self, subset_start: &[u64], subset_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature ndarray only.

Explicit options version of store_array_subset_ndarray.
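
A subset store that spans multiple chunks is decomposed into writes to each overlapped chunk. For a regular chunk grid the overlap is simple interval arithmetic; a stdlib-only sketch (hypothetical helper, not a zarrs API):

```rust
use std::ops::Range;

// For a regular chunk grid, the chunks overlapped by an array subset
// [start, start + shape) are [start / chunk, ceil((start + shape) / chunk))
// along each dimension.
fn overlapped_chunks(start: &[u64], shape: &[u64], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    start
        .iter()
        .zip(shape)
        .zip(chunk_shape)
        .map(|((&s, &n), &c)| (s / c)..((s + n + c - 1) / c)) // manual ceiling division
        .collect()
}

fn main() {
    // The subset [3..6, 3..6] of an 8x8 array with 4x4 chunks touches chunks [0..2, 0..2],
    // i.e. all four chunks, as in the store_array_subset example above.
    assert_eq!(overlapped_chunks(&[3, 3], &[3, 3], &[4, 4]), vec![0..2, 0..2]);
}
```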

source§

impl<TStorage: ?Sized + AsyncReadableStorageTraits + 'static> Array<TStorage>

source

pub async fn async_new( storage: Arc<TStorage>, path: &str ) -> Result<Self, ArrayCreateError>

Available on crate feature async only.

Async variant of new.

source

pub async fn async_retrieve_chunk_if_exists( &self, chunk_indices: &[u64] ) -> Result<Option<Vec<u8>>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_if_exists.

source

pub async fn async_retrieve_chunk_elements_if_exists<T: Pod + Send + Sync>( &self, chunk_indices: &[u64] ) -> Result<Option<Vec<T>>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_elements_if_exists.
source

pub async fn async_retrieve_chunk_ndarray_if_exists<T: Pod + Send + Sync>( &self, chunk_indices: &[u64] ) -> Result<Option<ArrayD<T>>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_if_exists.
source

pub async fn async_retrieve_chunk( &self, chunk_indices: &[u64] ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk.

source

pub async fn async_retrieve_chunk_elements<T: Pod + Send + Sync>( &self, chunk_indices: &[u64] ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_elements.

source

pub async fn async_retrieve_chunk_ndarray<T: Pod + Send + Sync>( &self, chunk_indices: &[u64] ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray.

Examples found in repository?
examples/async_array_write_read.rs (line 177)
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{stream::FuturesUnordered, StreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
        object_store::memory::InMemory::new(),
    ));
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_async_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata())?
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata())?
    );

    // Write some chunks
    let subsets = (0..2)
        .map(|i| {
            let chunk_indices: Vec<u64> = vec![0, i];
            array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })
                .map(|chunk_subset| (i, chunk_indices, chunk_subset))
        })
        .collect::<Result<Vec<_>, _>>()?;
    let mut futures = subsets
        .iter()
        .map(|(i, chunk_indices, chunk_subset)| {
            array.async_store_chunk_elements(
                &chunk_indices,
                vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
        })
        .collect::<FuturesUnordered<_>>();
    while let Some(item) = futures.next().await {
        item?;
    }

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            vec![
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
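
The chunk writes in the example above are issued concurrently and joined with `FuturesUnordered`. The same fan-out/join shape, sketched with stdlib threads purely for illustration (no zarrs involved):

```rust
use std::thread;

// Sketch of the fan-out/join pattern: launch one task per chunk, wait for all
// of them, and propagate the first error (here each "store" just maps i to i * 10).
fn store_all(chunks: Vec<u64>) -> Result<Vec<u64>, String> {
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|i| thread::spawn(move || -> Result<u64, String> { Ok(i * 10) }))
        .collect();
    let mut results = Vec::new();
    for h in handles {
        // First `?` propagates a panicked task, second propagates a store error.
        results.push(h.join().map_err(|_| "task panicked".to_string())??);
    }
    Ok(results)
}

fn main() {
    assert_eq!(store_all(vec![0, 1]).unwrap(), vec![0, 10]);
}
```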
source

pub async fn async_retrieve_chunk_into_array_view( &self, chunk_indices: &[u64], array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_into_array_view.
source

pub async fn async_retrieve_chunks( &self, chunks: &ArraySubset ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks.

source

pub async fn async_retrieve_chunks_elements<T: Pod + Send + Sync>( &self, chunks: &ArraySubset ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_elements.

source

pub async fn async_retrieve_chunks_ndarray<T: Pod + Send + Sync>( &self, chunks: &ArraySubset ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunks_ndarray.

Examples found in repository?
examples/async_array_write_read.rs (line 183)
source

pub async fn async_retrieve_chunks_into_array_view( &self, chunks: &ArraySubset, array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_into_array_view.
source

pub async fn async_retrieve_chunk_subset( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset.

source

pub async fn async_retrieve_chunk_subset_elements<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_elements.
source

pub async fn async_retrieve_chunk_subset_ndarray<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_subset_ndarray.
source

pub async fn async_retrieve_chunk_subset_into_array_view( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_into_array_view.
source

pub async fn async_retrieve_array_subset( &self, array_subset: &ArraySubset ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset.

source

pub async fn async_retrieve_array_subset_elements<T: Pod + Send + Sync>( &self, array_subset: &ArraySubset ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_elements.
source

pub async fn async_retrieve_array_subset_ndarray<T: Pod + Send + Sync>( &self, array_subset: &ArraySubset ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_array_subset_ndarray.
Examples found in repository?
examples/async_array_write_read.rs (line 107)
source

pub async fn async_retrieve_array_subset_into_array_view( &self, array_subset: &ArraySubset, array_view: &ArrayView<'_> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_into_array_view.
source

pub async fn async_partial_decoder<'a>( &'a self, chunk_indices: &[u64] ) -> Result<Box<dyn AsyncArrayPartialDecoderTraits + 'a>, ArrayError>

Available on crate feature async only.

Async variant of partial_decoder.

source

pub async fn async_retrieve_chunk_if_exists_opt( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Option<Vec<u8>>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_if_exists_opt.

source

pub async fn async_retrieve_chunk_opt( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_opt.

source

pub async fn async_retrieve_chunk_elements_if_exists_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Option<Vec<T>>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_elements_if_exists_opt.
source

pub async fn async_retrieve_chunk_elements_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_elements_opt.

source

pub async fn async_retrieve_chunk_ndarray_if_exists_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Option<ArrayD<T>>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_if_exists_opt.
source

pub async fn async_retrieve_chunk_ndarray_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_ndarray_opt.


pub async fn async_retrieve_chunk_into_array_view_opt( &self, chunk_indices: &[u64], array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_into_array_view_opt.

pub async fn async_retrieve_chunks_opt( &self, chunks: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_opt.
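The `chunks` argument selects a subset of the chunk grid, not of the array. Under a regular chunk grid, each chunk range scales by the chunk shape to give the array region covered. A minimal sketch of that mapping (the helper name is hypothetical, not a zarrs API):

```rust
use std::ops::Range;

// Illustrative sketch: map a chunk-grid subset to the array region it covers
// under a regular chunk grid.
fn chunks_to_array_ranges(chunk_ranges: &[Range<u64>], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    chunk_ranges
        .iter()
        .zip(chunk_shape)
        .map(|(r, &c)| r.start * c..r.end * c)
        .collect()
}

fn main() {
    // Chunks [0..2, 1..2] of a grid with 4x4 chunks cover array region [0..8, 4..8].
    println!("{:?}", chunks_to_array_ranges(&[0..2, 1..2], &[4, 4])); // [0..8, 4..8]
}
```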


pub async fn async_retrieve_chunks_elements_opt<T: Pod + Send + Sync>( &self, chunks: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_elements_opt.


pub async fn async_retrieve_chunks_ndarray_opt<T: Pod + Send + Sync>( &self, chunks: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunks_ndarray_opt.


pub async fn async_retrieve_array_subset_opt( &self, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_opt.
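An array subset may span multiple chunks, in which case each intersecting chunk is retrieved and its relevant portion extracted. Under a regular chunk grid, the intersecting chunks are found by dividing the subset bounds by the chunk shape (floor for the start, ceiling for the end). A sketch of that arithmetic (hypothetical helper, not a zarrs API):

```rust
use std::ops::Range;

// Illustrative sketch: which chunks of a regular grid does an array subset touch?
fn intersecting_chunks(subset: &[Range<u64>], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    subset
        .iter()
        .zip(chunk_shape)
        .map(|(r, &c)| r.start / c..(r.end + c - 1) / c) // floor start, ceil end
        .collect()
}

fn main() {
    // The subset [3..6, 3..6] of an array with 4x4 chunks spans chunks [0..2, 0..2].
    println!("{:?}", intersecting_chunks(&[3..6, 3..6], &[4, 4])); // [0..2, 0..2]
}
```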


pub async fn async_retrieve_array_subset_elements_opt<T: Pod + Send + Sync>( &self, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_elements_opt.

pub async fn async_retrieve_array_subset_ndarray_opt<T: Pod + Send + Sync>( &self, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_array_subset_ndarray_opt.

pub async fn async_retrieve_chunks_into_array_view_opt( &self, chunks: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunks_into_array_view_opt.

pub async fn async_retrieve_array_subset_into_array_view_opt( &self, array_subset: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_array_subset_into_array_view_opt.

pub async fn async_retrieve_chunk_subset_opt( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_opt.
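A chunk subset is expressed relative to the chunk's origin, not in absolute array coordinates. Converting an absolute array subset that lies within one chunk into chunk-relative coordinates subtracts the chunk origin (chunk indices times chunk shape, for a regular grid). A sketch with a hypothetical helper, not a zarrs API:

```rust
use std::ops::Range;

// Illustrative sketch: convert an absolute array subset into coordinates
// relative to the origin of the chunk that contains it.
fn to_chunk_relative(
    subset: &[Range<u64>],
    chunk_indices: &[u64],
    chunk_shape: &[u64],
) -> Vec<Range<u64>> {
    subset
        .iter()
        .zip(chunk_indices)
        .zip(chunk_shape)
        .map(|((r, &i), &c)| {
            let origin = i * c; // first array index covered by this chunk
            r.start - origin..r.end - origin
        })
        .collect()
}

fn main() {
    // Array subset [7..8, 4..8] lies within chunk [1, 1] of a 4x4 chunk grid;
    // relative to that chunk it is [3..4, 0..4].
    println!("{:?}", to_chunk_relative(&[7..8, 4..8], &[1, 1], &[4, 4])); // [3..4, 0..4]
}
```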


pub async fn async_retrieve_chunk_subset_elements_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_elements_opt.

pub async fn async_retrieve_chunk_subset_ndarray_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate features async and ndarray only.

Async variant of retrieve_chunk_subset_ndarray_opt.

pub async fn async_retrieve_chunk_subset_into_array_view_opt( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of retrieve_chunk_subset_into_array_view_opt.

pub async fn async_partial_decoder_opt<'a>( &'a self, chunk_indices: &[u64], options: &CodecOptions ) -> Result<Box<dyn AsyncArrayPartialDecoderTraits + 'a>, ArrayError>

Available on crate feature async only.

Async variant of partial_decoder_opt.


impl<TStorage: ?Sized + AsyncWritableStorageTraits + 'static> Array<TStorage>


pub async fn async_store_metadata(&self) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of store_metadata.
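The document written by this method is the array's `zarr.json` metadata. For an array like the one in the repository example (shape `[8, 8]`, `float32` data type, regular `4x4` chunks, NaN fill value, dimension names `["y", "x"]`), it takes roughly this form; this is an abridged illustration, and the exact `codecs` list and `chunk_key_encoding` depend on the builder defaults:

```json
{
  "zarr_format": 3,
  "node_type": "array",
  "shape": [8, 8],
  "data_type": "float32",
  "chunk_grid": { "name": "regular", "configuration": { "chunk_shape": [4, 4] } },
  "chunk_key_encoding": { "name": "default", "configuration": { "separator": "/" } },
  "fill_value": "NaN",
  "codecs": [ { "name": "bytes", "configuration": { "endian": "little" } } ],
  "dimension_names": ["y", "x"]
}
```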

Examples found in repository?
examples/async_array_write_read.rs (line 72)
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{stream::FuturesUnordered, StreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
        object_store::memory::InMemory::new(),
    ));
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_async_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata())?
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata())?
    );

    // Write some chunks
    let subsets = (0..2)
        .map(|i| {
            let chunk_indices: Vec<u64> = vec![0, i];
            array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })
                .map(|chunk_subset| (i, chunk_indices, chunk_subset))
        })
        .collect::<Result<Vec<_>, _>>()?;
    let mut futures = subsets
        .iter()
        .map(|(i, chunk_indices, chunk_subset)| {
            array.async_store_chunk_elements(
                &chunk_indices,
                vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
        })
        .collect::<FuturesUnordered<_>>();
    while let Some(item) = futures.next().await {
        item?;
    }

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            vec![
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub async fn async_store_chunk( &self, chunk_indices: &[u64], chunk_bytes: Vec<u8> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk.


pub async fn async_store_chunk_elements<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_elements: Vec<T> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_elements.

Examples found in repository?
examples/async_array_write_read.rs (lines 95-98)

pub async fn async_store_chunk_ndarray<T: Pod + Send + Sync, TArray: Into<ndarray::Array<T, D>> + Send, D: Dimension>( &self, chunk_indices: &[u64], chunk_array: TArray ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunk_ndarray.


pub async fn async_store_chunks( &self, chunks: &ArraySubset, chunks_bytes: Vec<u8> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunks.


pub async fn async_store_chunks_elements<T: Pod + Send + Sync>( &self, chunks: &ArraySubset, chunks_elements: Vec<T> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunks_elements.
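The elements passed to the `chunks` store variants cover the entire rectangular region spanned by the selected chunks, in row-major order over that region; internally the buffer is split into one buffer per chunk, which explains the interleaved `1.0`/`1.1` pattern in the repository example. A minimal sketch of that split, using a hypothetical helper rather than a zarrs API:

```rust
// Illustrative sketch: split a row-major 2D region buffer into per-chunk
// row-major buffers.
fn split_region_into_chunks(
    region: &[f32],
    region_cols: usize,
    chunk_rows: usize,
    chunk_cols: usize,
) -> Vec<Vec<f32>> {
    let region_rows = region.len() / region_cols;
    let mut chunks = Vec::new();
    for cr in (0..region_rows).step_by(chunk_rows) {
        for cc in (0..region_cols).step_by(chunk_cols) {
            let mut chunk = Vec::with_capacity(chunk_rows * chunk_cols);
            for row in cr..cr + chunk_rows {
                let off = row * region_cols + cc;
                chunk.extend_from_slice(&region[off..off + chunk_cols]);
            }
            chunks.push(chunk);
        }
    }
    chunks
}

fn main() {
    // A 2x4 region split into 2x2 chunks.
    let region = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0];
    let chunks = split_region_into_chunks(&region, 4, 2, 2);
    println!("{:?}", chunks); // [[0.0, 1.0, 4.0, 5.0], [2.0, 3.0, 6.0, 7.0]]
}
```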

Examples found in repository?
examples/async_array_write_read.rs (lines 113-121)

pub async fn async_store_chunks_ndarray<T: Pod + Send + Sync, TArray: Into<ndarray::Array<T, D>> + Send, D: Dimension>( &self, chunks: &ArraySubset, chunks_array: TArray ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunks_ndarray.


pub async fn async_erase_metadata(&self) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of erase_metadata.


pub async fn async_erase_chunk( &self, chunk_indices: &[u64] ) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of erase_chunk.

Examples found in repository?
examples/async_array_write_read.rs (line 168)

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
pub async fn async_erase_chunks( &self, chunks: &ArraySubset ) -> Result<(), StorageError>

Available on crate feature async only.

Async variant of erase_chunks.
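Erasing chunks amounts to deleting their keys from the store. As a rough illustration, Zarr v3's default chunk key encoding maps chunk indices to keys with a `c` prefix and `/` separator; a standalone sketch of that encoding (the helper is hypothetical, not part of the zarrs API):

```rust
// Keys that erasing a chunk would remove under Zarr v3's default chunk key
// encoding ("c" prefix, "/" separator). Standalone sketch of the encoding;
// the function is hypothetical and not part of the zarrs API.
fn default_chunk_key(chunk_indices: &[u64]) -> String {
    let mut key = String::from("c");
    for i in chunk_indices {
        key.push('/');
        key.push_str(&i.to_string());
    }
    key
}

fn main() {
    // Chunk [0, 1] of a 2D array maps to the store key "c/0/1".
    println!("{}", default_chunk_key(&[0, 1]));
}
```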

pub async fn async_store_chunk_opt( &self, chunk_indices: &[u64], chunk_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_opt.

pub async fn async_store_chunk_elements_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_elements_opt.

pub async fn async_store_chunk_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>( &self, chunk_indices: &[u64], chunk_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunk_ndarray_opt.

pub async fn async_store_chunks_opt( &self, chunks: &ArraySubset, chunks_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunks_opt.

pub async fn async_store_chunks_elements_opt<T: Pod + Send + Sync>( &self, chunks: &ArraySubset, chunks_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunks_elements_opt.
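The `_elements` variants take a flat `Vec<T>` whose length must match the number of elements spanned by the chunk block. For a regular chunk grid that count is a simple product; a minimal standalone sketch of the arithmetic (the helper name is hypothetical, not part of the zarrs API):

```rust
// Element count for a block of chunks on a regular chunk grid.
// The elements vector passed to (async_)store_chunks_elements must have
// exactly this many entries (assuming all chunks lie within the array).
fn chunks_element_count(chunk_ranges: &[std::ops::Range<u64>], chunk_shape: &[u64]) -> u64 {
    chunk_ranges
        .iter()
        .zip(chunk_shape)
        .map(|(r, &c)| (r.end - r.start) * c)
        .product()
}

fn main() {
    // Mirrors the async_store_chunks_elements call in the example above:
    // chunks [1..2, 0..2] of 4x4 chunks span 4 * 8 = 32 f32 elements.
    let n = chunks_element_count(&[1..2, 0..2], &[4, 4]);
    println!("{n}");
}
```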

pub async fn async_store_chunks_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>( &self, chunks: &ArraySubset, chunks_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunks_ndarray_opt.

impl<TStorage: ?Sized + AsyncReadableWritableStorageTraits + 'static> Array<TStorage>

pub async fn async_store_chunk_subset( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_bytes: Vec<u8> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_subset.

pub async fn async_store_chunk_subset_elements<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_elements: Vec<T> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_subset_elements.
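A chunk subset is expressed in coordinates relative to the chunk. For a regular chunk grid, where that subset lands in the overall array follows directly from the chunk indices and chunk shape; a standalone sketch of the arithmetic (the function is hypothetical, not part of the zarrs API):

```rust
// Map a within-chunk subset to array coordinates, for a regular chunk grid.
// Standalone arithmetic sketch; not part of the zarrs API.
fn chunk_subset_to_array_ranges(
    chunk_indices: &[u64],
    chunk_subset: &[std::ops::Range<u64>],
    chunk_shape: &[u64],
) -> Vec<std::ops::Range<u64>> {
    chunk_indices
        .iter()
        .zip(chunk_subset)
        .zip(chunk_shape)
        .map(|((&i, r), &c)| (i * c + r.start)..(i * c + r.end))
        .collect()
}

fn main() {
    // Mirrors the example above: subset [3..4, 0..4] of chunk [1, 1] on a
    // 4x4 grid updates rows 7..8, columns 4..8 of the 8x8 array.
    let ranges = chunk_subset_to_array_ranges(&[1, 1], &[3..4, 0..4], &[4, 4]);
    println!("{ranges:?}");
}
```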

Examples found in repository
examples/async_array_write_read.rs (lines 154-160)
pub async fn async_store_chunk_subset_ndarray<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>( &self, chunk_indices: &[u64], chunk_subset_start: &[u64], chunk_subset_array: TArray ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunk_subset_ndarray.

pub async fn async_store_array_subset( &self, array_subset: &ArraySubset, subset_bytes: Vec<u8> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_array_subset.

pub async fn async_store_array_subset_elements<T: Pod + Send + Sync>( &self, array_subset: &ArraySubset, subset_elements: Vec<T> ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_array_subset_elements.
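The element vector is interpreted over the subset, assuming the usual row-major (C order) layout. A standalone sketch of the flat-index arithmetic (the helper is hypothetical, not part of the zarrs API):

```rust
// Flat index of an element within an array subset, assuming row-major
// (C order) layout of the elements vector. Standalone sketch; not part
// of the zarrs API.
fn subset_flat_index(coords: &[u64], subset_shape: &[u64]) -> u64 {
    coords
        .iter()
        .zip(subset_shape)
        .fold(0, |acc, (&i, &n)| acc * n + i)
}

fn main() {
    // In the example's [3..6, 3..6] write, the value -4.5 sits at relative
    // coordinates (1, 2) of the 3x3 subset, i.e. flat index 5.
    println!("{}", subset_flat_index(&[1, 2], &[3, 3]));
}
```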

Examples found in repository
examples/async_array_write_read.rs (lines 130-133)
pub async fn async_store_array_subset_ndarray<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>( &self, subset_start: &[u64], subset_array: TArray ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_array_subset_ndarray.

pub async fn async_store_chunk_subset_opt( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_subset_opt.

pub async fn async_store_chunk_subset_elements_opt<T: Pod + Send + Sync>( &self, chunk_indices: &[u64], chunk_subset: &ArraySubset, chunk_subset_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_chunk_subset_elements_opt.

pub async fn async_store_chunk_subset_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>( &self, chunk_indices: &[u64], chunk_subset_start: &[u64], chunk_subset_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_chunk_subset_ndarray_opt.

pub async fn async_store_array_subset_opt( &self, array_subset: &ArraySubset, subset_bytes: Vec<u8>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_array_subset_opt.

pub async fn async_store_array_subset_elements_opt<T: Pod + Send + Sync>( &self, array_subset: &ArraySubset, subset_elements: Vec<T>, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate feature async only.

Async variant of store_array_subset_elements_opt.

pub async fn async_store_array_subset_ndarray_opt<T: Pod + Send + Sync, TArray: Into<Array<T, D>> + Send, D: Dimension>( &self, subset_start: &[u64], subset_array: TArray, options: &CodecOptions ) -> Result<(), ArrayError>

Available on crate features async and ndarray only.

Async variant of store_array_subset_ndarray_opt.

impl<TStorage: ?Sized> Array<TStorage>

pub fn new_with_metadata( storage: Arc<TStorage>, path: &str, metadata: ArrayMetadata ) -> Result<Self, ArrayCreateError>

Create an array in storage at path with metadata. This does not write to the store; use store_metadata to write the metadata to storage.

§Errors

Returns ArrayCreateError if:

  • the metadata is invalid, or
  • a plugin (e.g. a data type, chunk grid, chunk key encoding, codec, or storage transformer) is invalid.
pub fn set_shape(&mut self, shape: ArrayShape)

Set the shape of the array. This only updates the in-memory metadata; write the metadata to the store with store_metadata (or async_store_metadata) to persist the change.

pub fn attributes_mut(&mut self) -> &mut Map<String, Value>

Mutably borrow the array attributes.

pub const fn path(&self) -> &NodePath

Get the node path.

pub const fn data_type(&self) -> &DataType

Get the data type.

pub const fn fill_value(&self) -> &FillValue

Get the fill value.

pub fn shape(&self) -> &[u64]

Get the array shape.

Examples found in repository
examples/http_array_read.rs (line 43)
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::Array,
        array_subset::ArraySubset,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store,
        },
    };

    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log.clone().create_readable_transformer(store);
        }
    }

    // Init the existing array, reading metadata
    let array = Array::new(store, ARRAY_PATH)?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
More examples
examples/rectangular_array_write_read.rs (line 84)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
examples/array_write_read.rs (line 82)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{store, ReadableWritableListableStorage},
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/sharded_array_write_read.rs (line 91)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use itertools::Itertools;
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{
            bytes_to_ndarray,
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::{store, ReadableWritableListableStorage},
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
examples/array_write_read_ndarray.rs (line 83)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use ndarray::{array, Array2, ArrayD};
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{store, ReadableWritableListableStorage},
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/async_array_write_read.rs (line 85)
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{stream::FuturesUnordered, StreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{store, AsyncReadableWritableListableStorage},
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
        object_store::memory::InMemory::new(),
    ));
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_async_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata())?
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata())?
    );

    // Write some chunks
    let subsets = (0..2)
        .map(|i| {
            let chunk_indices: Vec<u64> = vec![0, i];
            array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })
                .map(|chunk_subset| (i, chunk_indices, chunk_subset))
        })
        .collect::<Result<Vec<_>, _>>()?;
    let mut futures = subsets
        .iter()
        .map(|(i, chunk_indices, chunk_subset)| {
            array.async_store_chunk_elements(
                &chunk_indices,
                vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
        })
        .collect::<FuturesUnordered<_>>();
    while let Some(item) = futures.next().await {
        item?;
    }

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            vec![
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub fn dimensionality(&self) -> usize

Get the array dimensionality.


pub const fn codecs(&self) -> &CodecChain

Get the codecs.


pub const fn chunk_grid(&self) -> &ChunkGrid

Get the chunk grid.

Examples found in repository
examples/rectangular_array_write_read.rs (line 82)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{
            chunk_grid::RectangularChunkGrid, codec, ChunkGrid, DataType, FillValue, ZARR_NAN_F32,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::{store, ReadableWritableListableStorage},
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(
                // std::io::BufWriter::new(
                std::io::stdout(),
                //    )
            ));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
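The `RectangularChunkGrid` in the example above uses per-chunk sizes `[1, 2, 3, 2]` along dimension 0 and a regular size of 4 along dimension 1. As a quick stdlib sketch of the arithmetic (independent of the zarrs API; `chunk_of` is a hypothetical helper, not a zarrs function), the mapping from an array coordinate to its chunk index along a rectangularly chunked dimension walks the cumulative chunk offsets:

```rust
// Hypothetical helper (not part of zarrs): map a coordinate along a
// rectangularly chunked dimension to (chunk index, offset within chunk).
fn chunk_of(coord: u64, chunk_sizes: &[u64]) -> Option<(usize, u64)> {
    let mut start = 0;
    for (i, &size) in chunk_sizes.iter().enumerate() {
        if coord < start + size {
            return Some((i, coord - start));
        }
        start += size;
    }
    None // coordinate is past the end of the dimension
}

fn main() {
    // Chunk sizes along dimension 0 from the example: [1, 2, 3, 2] (sum = 8).
    let sizes = [1, 2, 3, 2];
    assert_eq!(chunk_of(0, &sizes), Some((0, 0))); // first row, first chunk
    assert_eq!(chunk_of(3, &sizes), Some((2, 0))); // start of the size-3 chunk
    assert_eq!(chunk_of(7, &sizes), Some((3, 1))); // last row, last chunk
    assert_eq!(chunk_of(8, &sizes), None); // out of bounds
    println!("ok");
}
```

This is the lookup that `chunk_grid.chunk_shape(&chunk_indices, ...)` performs internally along each dimension; the example's chunk indices `[i, 0]` select the i-th variable-sized chunk along dimension 0.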
More examples
examples/array_write_read.rs (line 81)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
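With the regular `[4, 4]` chunk grid used here, the subset writes above each touch a predictable range of chunks. A minimal stdlib sketch of that arithmetic (`chunks_intersecting` is a hypothetical helper, not a zarrs API): for a half-open element range and a chunk length, the intersected chunk indices are `start / chunk_len` up to `(end - 1) / chunk_len`:

```rust
// Hypothetical helper (not zarrs API): for a regular chunk grid, the range of
// chunk indices that a half-open element range [start, end) intersects.
fn chunks_intersecting(start: u64, end: u64, chunk_len: u64) -> std::ops::Range<u64> {
    start / chunk_len..(end - 1) / chunk_len + 1
}

fn main() {
    // store_array_subset [3..6, 3..6] touches chunks 0 and 1 along each
    // dimension (chunk shape [4, 4]), i.e. all four chunks.
    assert_eq!(chunks_intersecting(3, 6, 4), 0..2);
    // The column write [0..8, 6..7] touches chunks 0..2 in dim 0 and only
    // chunk 1 in dim 1.
    assert_eq!(chunks_intersecting(0, 8, 4), 0..2);
    assert_eq!(chunks_intersecting(6, 7, 4), 1..2);
    println!("ok");
}
```

This is why `store_array_subset_elements` may read, update, and rewrite several stored chunks for a single call, while `store_chunk_subset_elements` only ever touches the one chunk it names.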
examples/sharded_array_write_read.rs (line 89)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use zarrs::{
        array::{
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
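In the sharded example, each stored chunk is a shard of shape `[4, 8]` that internally holds inner chunks of shape `[4, 4]`. A small stdlib sketch of the shard geometry (hypothetical helpers, not zarrs API; zarrs performs this bookkeeping via the sharding codec's shard index):

```rust
// Hypothetical arithmetic (not zarrs API): how many inner chunks a shard
// holds, given shard and inner chunk shapes that divide evenly.
fn inner_chunks_per_shard(shard: &[u64], inner: &[u64]) -> u64 {
    shard.iter().zip(inner).map(|(s, i)| s / i).product()
}

fn main() {
    // Shard shape [4, 8] with inner chunk shape [4, 4]: 1 x 2 inner chunks.
    assert_eq!(inner_chunks_per_shard(&[4, 8], &[4, 4]), 2);

    // An element at (2, 5) within a shard lies in inner chunk (0, 1),
    // which is why the example decodes inner chunks at starts [0, 0] and [0, 4].
    let (y, x) = (2u64, 5u64);
    assert_eq!((y / 4, x / 4), (0, 1));
    println!("ok");
}
```

This is what makes `partial_decoder` attractive for sharded arrays: the shard index is fetched once, after which individual inner chunks can be decoded without reading the whole shard.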
examples/array_write_read_ndarray.rs (line 82)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
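The `_elements` and `_ndarray` variants in these examples both assume row-major (C order) element layout, which is also what `ArrayD::from_shape_vec` expects. As a quick sketch of that indexing (`row_major_offset` is an illustrative helper, not a zarrs function):

```rust
// Row-major (C order) linear offset: element (i, j) of an r x c block sits
// at offset i * cols + j in the flat element vec.
fn row_major_offset(i: usize, j: usize, cols: usize) -> usize {
    i * cols + j
}

fn main() {
    // In the 4x4 chunk vecs above, the last element of the last row is the
    // 16th element (offset 15), and row 0, column 2 is at offset 2.
    assert_eq!(row_major_offset(3, 3, 4), 15);
    assert_eq!(row_major_offset(0, 2, 4), 2);
    println!("ok");
}
```

Passing a vec whose length or implied layout disagrees with the chunk or subset shape is exactly the mismatch the `from_shape_vec(...)?` calls in the example would surface as an error.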
examples/async_array_write_read.rs (line 84)
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{stream::FuturesUnordered, StreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::store,
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
        object_store::memory::InMemory::new(),
    ));
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_async_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata())?
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata())?
    );

    // Write some chunks
    let subsets = (0..2)
        .map(|i| {
            let chunk_indices: Vec<u64> = vec![0, i];
            array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })
                .map(|chunk_subset| (i, chunk_indices, chunk_subset))
        })
        .collect::<Result<Vec<_>, _>>()?;
    let mut futures = subsets
        .iter()
        .map(|(i, chunk_indices, chunk_subset)| {
            array.async_store_chunk_elements(
                &chunk_indices,
                vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
        })
        .collect::<FuturesUnordered<_>>();
    while let Some(item) = futures.next().await {
        item?;
    }

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            vec![
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}

pub const fn chunk_key_encoding(&self) -> &ChunkKeyEncoding

Get the chunk key encoding.


pub const fn storage_transformers(&self) -> &StorageTransformerChain

Get the storage transformers.


pub const fn dimension_names(&self) -> &Option<Vec<DimensionName>>

Get the dimension names.


pub const fn attributes(&self) -> &Map<String, Value>

Get the attributes.


pub const fn additional_fields(&self) -> &AdditionalFields

Get the additional fields.


pub fn set_include_zarrs_metadata(&mut self, include_zarrs_metadata: bool)

Enable or disable the inclusion of zarrs metadata in the array attributes. Enabled by default.

Zarrs metadata records the zarrs version and some encoding parameters.


pub fn metadata_opt(&self, options: &ArrayMetadataOptions) -> ArrayMetadata

Create ArrayMetadata.


pub fn metadata(&self) -> ArrayMetadata

Create ArrayMetadata with default options.

Examples found in repository
examples/http_array_read.rs (line 39)
fn http_array_read() -> Result<(), Box<dyn std::error::Error>> {
    use std::sync::Arc;
    use zarrs::{
        array::Array,
        array_subset::ArraySubset,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store,
        },
    };

    const HTTP_URL: &str =
        "https://raw.githubusercontent.com/LDeakin/zarrs/main/tests/data/array_write_read.zarr";
    const ARRAY_PATH: &str = "/group/array";

    // Create a HTTP store
    let mut store: ReadableStorage = Arc::new(store::HTTPStore::new(HTTP_URL)?);
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log.clone().create_readable_transformer(store);
        }
    }

    // Init the existing array, reading metadata
    let array = Array::new(store, ARRAY_PATH)?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    Ok(())
}
More examples
examples/rectangular_array_write_read.rs (line 102)
fn rectangular_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use zarrs::array::ChunkGrid;
    use std::sync::Arc;
    use zarrs::{
        array::{chunk_grid::RectangularChunkGrid, codec, FillValue},
        node::Node,
    };
    use zarrs::{
        array::{DataType, ZARR_NAN_F32},
        array_subset::ArraySubset,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    let mut store: ReadableWritableListableStorage = std::sync::Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        ChunkGrid::new(RectangularChunkGrid::new(&[
            [1, 2, 3, 2].try_into()?,
            4.try_into()?,
        ])),
        FillValue::from(ZARR_NAN_F32),
    )
    .bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ])
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    // Write some chunks (in parallel)
    (0..4).into_par_iter().try_for_each(|i| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![i, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<f32>::from_elem(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                i as f32,
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_ndarray(
        &[3, 3], // start
        ndarray::ArrayD::<f32>::from_shape_vec(
            vec![3, 3],
            vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
        )?,
    )?;

    // Store elements directly, in this case set the 7th column to 123.0
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![123.0; 8],
    )?;

    // Store elements directly in a chunk, in this case set the last row of the bottom right chunk
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[3, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[1..2, 0..4]),
        vec![-4.0; 4],
    )?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a chunk back from the store
    let chunk_indices = vec![1, 0];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<f32>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{tree}");

    Ok(())
}
examples/array_write_read.rs (line 74)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_elements(
            &chunk_indices,
            vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array.store_chunks_elements::<f32>(
        &ArraySubset::new_with_ranges(&[1..2, 0..2]),
        vec![
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            //
            1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
        ],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[3..6, 3..6]),
        vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array.store_array_subset_elements::<f32>(
        &ArraySubset::new_with_ranges(&[0..8, 6..7]),
        vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array.store_chunk_subset_elements::<f32>(
        // chunk indices
        &[1, 1],
        // subset within chunk
        &ArraySubset::new_with_ranges(&[3..4, 0..4]),
        vec![-7.4, -7.5, -7.6, -7.7],
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/sharded_array_write_read.rs (line 84)
fn sharded_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use itertools::Itertools;
    use zarrs::{
        array::{
            bytes_to_ndarray,
            codec::{self, array_to_bytes::sharding::ShardingCodecBuilder},
            DataType, FillValue,
        },
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new("tests/data/sharded_array_write_read.zarr")?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    // Create an array
    let array_path = "/group/array";
    let shard_shape = vec![4, 8];
    let inner_chunk_shape = vec![4, 4];
    let mut sharding_codec_builder =
        ShardingCodecBuilder::new(inner_chunk_shape.as_slice().try_into()?);
    sharding_codec_builder.bytes_to_bytes_codecs(vec![
        #[cfg(feature = "gzip")]
        Box::new(codec::GzipCodec::new(5)?),
    ]);
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::UInt16,
        shard_shape.try_into()?,
        FillValue::from(0u16),
    )
    .array_to_bytes_codec(Box::new(sharding_codec_builder.build()))
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some shards (in parallel)
    (0..2).into_par_iter().try_for_each(|s| {
        let chunk_grid = array.chunk_grid();
        let chunk_indices = vec![s, 0];
        if let Some(chunk_shape) = chunk_grid.chunk_shape(&chunk_indices, array.shape())? {
            let chunk_array = ndarray::ArrayD::<u16>::from_shape_fn(
                chunk_shape
                    .iter()
                    .map(|u| u.get() as usize)
                    .collect::<Vec<_>>(),
                |ij| {
                    (s * chunk_shape[0].get() * chunk_shape[1].get()
                        + ij[0] as u64 * chunk_shape[1].get()
                        + ij[1] as u64) as u16
                },
            );
            array.store_chunk_ndarray(&chunk_indices, chunk_array)
        } else {
            Err(zarrs::array::ArrayError::InvalidChunkGridIndicesError(
                chunk_indices.to_vec(),
            ))
        }
    })?;

    // Read the whole array
    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<u16>(&subset_all)?;
    println!("The whole array is:\n{data_all}\n");

    // Read a shard back from the store
    let shard_indices = vec![1, 0];
    let data_shard = array.retrieve_chunk_ndarray::<u16>(&shard_indices)?;
    println!("Shard [1,0] is:\n{data_shard}\n");

    // Read an inner chunk from the store
    let subset_chunk_1_0 = ArraySubset::new_with_ranges(&[4..8, 0..4]);
    let data_chunk = array.retrieve_array_subset_ndarray::<u16>(&subset_chunk_1_0)?;
    println!("Chunk [1,0] is:\n{data_chunk}\n");

    // Read the central 4x2 subset of the array
    let subset_4x2 = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_4x2 = array.retrieve_array_subset_ndarray::<u16>(&subset_4x2)?;
    println!("The middle 4x2 subset is:\n{data_4x2}\n");

    // Decode inner chunks
    // In some cases, it might be preferable to decode inner chunks in a shard directly.
    // If using the partial decoder, then the shard index will only be read once from the store.
    let partial_decoder = array.partial_decoder(&[0, 0])?;
    let inner_chunks_to_decode = vec![
        ArraySubset::new_with_start_shape(vec![0, 0], inner_chunk_shape.clone())?,
        ArraySubset::new_with_start_shape(vec![0, 4], inner_chunk_shape.clone())?,
    ];
    let decoded_inner_chunks_bytes = partial_decoder.partial_decode(&inner_chunks_to_decode)?;
    let decoded_inner_chunks_ndarray = decoded_inner_chunks_bytes
        .into_iter()
        .map(|bytes| bytes_to_ndarray::<u16>(&inner_chunk_shape, bytes))
        .collect::<Result<Vec<_>, _>>()?;
    println!("Decoded inner chunks:");
    for (inner_chunk_subset, decoded_inner_chunk) in
        std::iter::zip(inner_chunks_to_decode, decoded_inner_chunks_ndarray)
    {
        println!("{inner_chunk_subset}\n{decoded_inner_chunk}\n");
    }

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("The zarr hierarchy tree is:\n{}", tree);

    println!(
        "The keys in the store are:\n[{}]",
        store.list().unwrap_or_default().iter().format(", ")
    );

    Ok(())
}
examples/array_write_read_ndarray.rs (line 75)
fn array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use ndarray::{array, Array2, ArrayD};
    use rayon::prelude::{IntoParallelIterator, ParallelIterator};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, ReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::FilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: ReadableWritableListableStorage = Arc::new(store::MemoryStore::new());
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.store_metadata()?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata()).unwrap()
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.store_metadata()?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata()).unwrap()
    );

    // Write some chunks
    (0..2).into_par_iter().try_for_each(|i| {
        let chunk_indices: Vec<u64> = vec![0, i];
        let chunk_subset = array
            .chunk_grid()
            .subset(&chunk_indices, array.shape())?
            .ok_or_else(|| {
                zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
            })?;
        array.store_chunk_ndarray(
            &chunk_indices,
            ArrayD::<f32>::from_shape_vec(
                chunk_subset.shape_usize(),
                vec![i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
            .unwrap(),
        )
    })?;

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    let ndarray_chunks: Array2<f32> = array![
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
        [1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,],
    ];
    array.store_chunks_ndarray(&ArraySubset::new_with_ranges(&[1..2, 0..2]), ndarray_chunks)?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    let ndarray_subset: Array2<f32> =
        array![[-3.3, -3.4, -3.5,], [-4.3, -4.4, -4.5,], [-5.3, -5.4, -5.5],];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[3..6, 3..6]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    let ndarray_subset: Array2<f32> = array![
        [-0.6],
        [-1.6],
        [-2.6],
        [-3.6],
        [-4.6],
        [-5.6],
        [-6.6],
        [-7.6],
    ];
    array.store_array_subset_ndarray(
        ArraySubset::new_with_ranges(&[0..8, 6..7]).start(),
        ndarray_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    let ndarray_chunk_subset: Array2<f32> = array![[-7.4, -7.5, -7.6, -7.7],];
    array.store_chunk_subset_ndarray(
        // chunk indices
        &[1, 1],
        // subset within chunk
        ArraySubset::new_with_ranges(&[3..4, 0..4]).start(),
        ndarray_chunk_subset,
    )?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.erase_chunk(&[0, 0])?;
    let data_all = array.retrieve_array_subset_ndarray::<f32>(&subset_all)?;
    println!("erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array.retrieve_chunk_ndarray::<f32>(&chunk_indices)?;
    println!("retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.retrieve_chunks_ndarray::<f32>(&chunks)?;
    println!("retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array.retrieve_array_subset_ndarray::<f32>(&subset)?;
    println!("retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::new(&*store, "/").unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
examples/async_array_write_read.rs (line 76)
async fn async_array_write_read() -> Result<(), Box<dyn std::error::Error>> {
    use futures::{stream::FuturesUnordered, StreamExt};
    use std::sync::Arc;
    use zarrs::{
        array::{DataType, FillValue, ZARR_NAN_F32},
        array_subset::ArraySubset,
        node::Node,
        storage::{
            storage_transformer::{StorageTransformerExtension, UsageLogStorageTransformer},
            store, AsyncReadableWritableListableStorage,
        },
    };

    // Create a store
    // let path = tempfile::TempDir::new()?;
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(path.path())?);
    // let mut store: ReadableWritableListableStorage = Arc::new(store::AsyncFilesystemStore::new(
    //     "tests/data/array_write_read.zarr",
    // )?);
    let mut store: AsyncReadableWritableListableStorage = Arc::new(store::AsyncObjectStore::new(
        object_store::memory::InMemory::new(),
    ));
    if let Some(arg1) = std::env::args().collect::<Vec<_>>().get(1) {
        if arg1 == "--usage-log" {
            let log_writer = Arc::new(std::sync::Mutex::new(std::io::stdout()));
            let usage_log = Arc::new(UsageLogStorageTransformer::new(log_writer, || {
                chrono::Utc::now().format("[%T%.3f] ").to_string()
            }));
            store = usage_log
                .clone()
                .create_async_readable_writable_listable_transformer(store);
        }
    }

    // Create a group
    let group_path = "/group";
    let mut group = zarrs::group::GroupBuilder::new().build(store.clone(), group_path)?;

    // Update group metadata
    group
        .attributes_mut()
        .insert("foo".into(), serde_json::Value::String("bar".into()));

    // Write group metadata to store
    group.async_store_metadata().await?;

    println!(
        "The group metadata is:\n{}\n",
        serde_json::to_string_pretty(&group.metadata())?
    );

    // Create an array
    let array_path = "/group/array";
    let array = zarrs::array::ArrayBuilder::new(
        vec![8, 8], // array shape
        DataType::Float32,
        vec![4, 4].try_into()?, // regular chunk shape
        FillValue::from(ZARR_NAN_F32),
    )
    // .bytes_to_bytes_codecs(vec![]) // uncompressed
    .dimension_names(["y", "x"].into())
    // .storage_transformers(vec![].into())
    .build(store.clone(), array_path)?;

    // Write array metadata to store
    array.async_store_metadata().await?;

    println!(
        "The array metadata is:\n{}\n",
        serde_json::to_string_pretty(&array.metadata())?
    );

    // Write some chunks
    let subsets = (0..2)
        .map(|i| {
            let chunk_indices: Vec<u64> = vec![0, i];
            array
                .chunk_grid()
                .subset(&chunk_indices, array.shape())?
                .ok_or_else(|| {
                    zarrs::array::ArrayError::InvalidChunkGridIndicesError(chunk_indices.to_vec())
                })
                .map(|chunk_subset| (i, chunk_indices, chunk_subset))
        })
        .collect::<Result<Vec<_>, _>>()?;
    let mut futures = subsets
        .iter()
        .map(|(i, chunk_indices, chunk_subset)| {
            array.async_store_chunk_elements(
                &chunk_indices,
                vec![*i as f32 * 0.1; chunk_subset.num_elements() as usize],
            )
        })
        .collect::<FuturesUnordered<_>>();
    while let Some(item) = futures.next().await {
        item?;
    }

    let subset_all = ArraySubset::new_with_shape(array.shape().to_vec());
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk [0, 0] and [0, 1]:\n{data_all:+4.1}\n");

    // Store multiple chunks
    array
        .async_store_chunks_elements::<f32>(
            &ArraySubset::new_with_ranges(&[1..2, 0..2]),
            vec![
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
                //
                1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1, 1.0, 1.0, 1.0, 1.0, 1.1, 1.1, 1.1, 1.1,
            ],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunks [1..2, 0..2]:\n{data_all:+4.1}\n");

    // Write a subset spanning multiple chunks, including updating chunks already written
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[3..6, 3..6]),
            vec![-3.3, -3.4, -3.5, -4.3, -4.4, -4.5, -5.3, -5.4, -5.5],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [3..6, 3..6]:\n{data_all:+4.1}\n");

    // Store array subset
    array
        .async_store_array_subset_elements::<f32>(
            &ArraySubset::new_with_ranges(&[0..8, 6..7]),
            vec![-0.6, -1.6, -2.6, -3.6, -4.6, -5.6, -6.6, -7.6],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_array_subset [0..8, 6..7]:\n{data_all:+4.1}\n");

    // Store chunk subset
    array
        .async_store_chunk_subset_elements::<f32>(
            // chunk indices
            &[1, 1],
            // subset within chunk
            &ArraySubset::new_with_ranges(&[3..4, 0..4]),
            vec![-7.4, -7.5, -7.6, -7.7],
        )
        .await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_store_chunk_subset [3..4, 0..4] of chunk [1, 1]:\n{data_all:+4.1}\n");

    // Erase a chunk
    array.async_erase_chunk(&[0, 0]).await?;
    let data_all = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset_all)
        .await?;
    println!("async_erase_chunk [0, 0]:\n{data_all:+4.1}\n");

    // Read a chunk
    let chunk_indices = vec![0, 1];
    let data_chunk = array
        .async_retrieve_chunk_ndarray::<f32>(&chunk_indices)
        .await?;
    println!("async_retrieve_chunk [0, 1]:\n{data_chunk:+4.1}\n");

    // Read chunks
    let chunks = ArraySubset::new_with_ranges(&[0..2, 1..2]);
    let data_chunks = array.async_retrieve_chunks_ndarray::<f32>(&chunks).await?;
    println!("async_retrieve_chunks [0..2, 1..2]:\n{data_chunks:+4.1}\n");

    // Retrieve an array subset
    let subset = ArraySubset::new_with_ranges(&[2..6, 3..5]); // the center 4x2 region
    let data_subset = array
        .async_retrieve_array_subset_ndarray::<f32>(&subset)
        .await?;
    println!("async_retrieve_array_subset [2..6, 3..5]:\n{data_subset:+4.1}\n");

    // Show the hierarchy
    let node = Node::async_new(&*store, "/").await.unwrap();
    let tree = node.hierarchy_tree();
    println!("hierarchy_tree:\n{}", tree);

    Ok(())
}
source

pub fn builder(&self) -> ArrayBuilder

Create an array builder matching the parameters of this array.

source

pub fn chunk_grid_shape(&self) -> Option<ArrayShape>

Return the shape of the chunk grid (i.e., the number of chunks).
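
For the common regular chunk grid, this is simply a per-dimension ceiling division of the array shape by the chunk shape. A minimal sketch of that arithmetic (the `regular_grid_shape` helper is illustrative, not part of the zarrs API):

```rust
/// Number of chunks along each dimension of a regular chunk grid:
/// per-dimension ceiling division of the array shape by the chunk shape.
fn regular_grid_shape(array_shape: &[u64], chunk_shape: &[u64]) -> Vec<u64> {
    array_shape
        .iter()
        .zip(chunk_shape)
        .map(|(a, c)| (a + c - 1) / c) // ceiling division
        .collect()
}

fn main() {
    // An 8x8 array with 4x4 chunks has a 2x2 chunk grid.
    assert_eq!(regular_grid_shape(&[8, 8], &[4, 4]), vec![2, 2]);
    // A 10x8 array with 4x4 chunks has a 3x2 grid (the last row of chunks is partial).
    assert_eq!(regular_grid_shape(&[10, 8], &[4, 4]), vec![3, 2]);
}
```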

source

pub fn chunk_origin( &self, chunk_indices: &[u64] ) -> Result<ArrayIndices, ArrayError>

Return the origin of the chunk at chunk_indices.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
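
For a regular chunk grid, the origin is the element-wise product of the chunk indices and the chunk shape. A sketch of that arithmetic (the `chunk_origin` helper is illustrative, not the zarrs API):

```rust
/// Origin of a chunk in a regular chunk grid: element-wise product of
/// the chunk indices and the chunk shape.
fn chunk_origin(chunk_indices: &[u64], chunk_shape: &[u64]) -> Vec<u64> {
    chunk_indices.iter().zip(chunk_shape).map(|(i, c)| i * c).collect()
}

fn main() {
    // Chunk [1, 2] of a grid of 4x4 chunks starts at element [4, 8].
    assert_eq!(chunk_origin(&[1, 2], &[4, 4]), vec![4, 8]);
}
```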

source

pub fn chunk_shape( &self, chunk_indices: &[u64] ) -> Result<ChunkShape, ArrayError>

Return the shape of the chunk at chunk_indices.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.

source

pub fn chunk_shape_usize( &self, chunk_indices: &[u64] ) -> Result<Vec<usize>, ArrayError>

Return the shape of the chunk at chunk_indices as a Vec<usize>.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.

§Panics

Panics if any component of the chunk shape exceeds usize::MAX.

source

pub fn chunk_subset( &self, chunk_indices: &[u64] ) -> Result<ArraySubset, ArrayError>

Return the array subset of the chunk at chunk_indices.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
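
For a regular grid, the chunk's subset spans origin..origin + chunk extent in each dimension, which may extend past the array shape for edge chunks (see chunk_subset_bounded below). A sketch, representing the subset as per-dimension ranges (the helper is illustrative, not the zarrs API):

```rust
use std::ops::Range;

/// The array subset covered by a chunk of a regular grid:
/// origin..origin + chunk extent in each dimension.
fn chunk_subset(chunk_indices: &[u64], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    chunk_indices
        .iter()
        .zip(chunk_shape)
        .map(|(i, c)| (i * c)..((i + 1) * c))
        .collect()
}

fn main() {
    // Chunk [1, 1] of a grid of 4x4 chunks covers rows 4..8 and columns 4..8.
    assert_eq!(chunk_subset(&[1, 1], &[4, 4]), vec![4..8, 4..8]);
}
```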

source

pub fn chunk_subset_bounded( &self, chunk_indices: &[u64] ) -> Result<ArraySubset, ArrayError>

Return the array subset of the chunk at chunk_indices bounded by the array shape.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.
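
This differs from chunk_subset only in that each range end is clamped to the array shape, which matters for partial edge chunks. A sketch of the clamping (the helper is illustrative, not the zarrs API):

```rust
use std::ops::Range;

/// Like the unbounded chunk subset, but each range is clamped to the
/// array shape so partial edge chunks do not extend past the array.
fn chunk_subset_bounded(
    chunk_indices: &[u64],
    chunk_shape: &[u64],
    array_shape: &[u64],
) -> Vec<Range<u64>> {
    chunk_indices
        .iter()
        .zip(chunk_shape)
        .zip(array_shape)
        .map(|((i, c), a)| (i * c).min(*a)..((i + 1) * c).min(*a))
        .collect()
}

fn main() {
    // In a 10x10 array with 4x4 chunks, edge chunk [2, 2] is truncated to [8..10, 8..10].
    assert_eq!(
        chunk_subset_bounded(&[2, 2], &[4, 4], &[10, 10]),
        vec![8..10, 8..10]
    );
}
```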

source

pub fn chunks_subset( &self, chunks: &ArraySubset ) -> Result<ArraySubset, ArrayError>

Return the array subset of chunks.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.

source

pub fn chunks_subset_bounded( &self, chunks: &ArraySubset ) -> Result<ArraySubset, ArrayError>

Return the array subset of chunks bounded by the array shape.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if a chunk in chunks is incompatible with the chunk grid.

source

pub fn chunk_array_representation( &self, chunk_indices: &[u64] ) -> Result<ChunkRepresentation, ArrayError>

Get the chunk array representation at chunk_indices.

§Errors

Returns ArrayError::InvalidChunkGridIndicesError if the chunk_indices are incompatible with the chunk grid.

source

pub fn chunks_in_array_subset( &self, array_subset: &ArraySubset ) -> Result<Option<ArraySubset>, IncompatibleDimensionalityError>

Return an array subset indicating the chunks intersecting array_subset.

Returns None if the intersecting chunks cannot be determined.

§Errors

Returns IncompatibleDimensionalityError if the array subset has an incorrect dimensionality.
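
For a regular grid, the intersecting chunks are found by flooring the subset start and taking the ceiling of the subset end against the chunk shape, per dimension. A sketch of that arithmetic (the helper is illustrative, not the zarrs API):

```rust
use std::ops::Range;

/// The chunks of a regular grid intersecting an array subset: floor the
/// subset start and take the ceiling of the subset end, per dimension.
fn chunks_in_subset(subset: &[Range<u64>], chunk_shape: &[u64]) -> Vec<Range<u64>> {
    subset
        .iter()
        .zip(chunk_shape)
        .map(|(r, c)| (r.start / c)..((r.end + c - 1) / c))
        .collect()
}

fn main() {
    // Subset [3..6, 3..5] of a grid of 4x4 chunks touches chunks [0..2, 0..2].
    assert_eq!(chunks_in_subset(&[3..6, 3..5], &[4, 4]), vec![0..2, 0..2]);
}
```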

Trait Implementations§

source§

impl<TStorage: ?Sized> ArrayShardedExt for Array<TStorage>

Available on crate feature sharding only.
source§

fn is_sharded(&self) -> bool

Returns true if the array to bytes codec of the array is sharding_indexed.
source§

fn inner_chunk_shape(&self) -> Option<ChunkShape>

Return the inner chunk shape. Read more
source§

fn inner_chunk_grid(&self) -> ChunkGrid

Retrieve the inner chunk grid. Read more
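
The inner chunk grid addresses the inner chunks of a sharded array directly; an inner chunk index splits into a shard index and an offset within that shard. A sketch of that index arithmetic, assuming a fixed number of inner chunks per shard along each dimension (the helper is illustrative, not the zarrs API):

```rust
/// Split inner chunk indices into (shard indices, offset within the shard),
/// given a fixed number of inner chunks per shard along each dimension.
fn split_inner_indices(inner: &[u64], inner_per_shard: &[u64]) -> (Vec<u64>, Vec<u64>) {
    let shard = inner.iter().zip(inner_per_shard).map(|(i, n)| i / n).collect();
    let offset = inner.iter().zip(inner_per_shard).map(|(i, n)| i % n).collect();
    (shard, offset)
}

fn main() {
    // With 2x2 inner chunks per shard, inner chunk [3, 5] lives in shard [1, 2]
    // at offset [1, 1] within that shard.
    assert_eq!(split_inner_indices(&[3, 5], &[2, 2]), (vec![1, 2], vec![1, 1]));
}
```
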
source§

impl<TStorage: ?Sized + ReadableStorageTraits + 'static> ArrayShardedReadableExt<TStorage> for Array<TStorage>

Available on crate feature sharding only.
source§

fn retrieve_inner_chunk_opt<'a>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, inner_chunk_indices: &[u64], options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Read and decode the inner chunk at inner_chunk_indices into its bytes. Read more
source§

fn retrieve_inner_chunk_elements_opt<'a, T: Pod>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, inner_chunk_indices: &[u64], options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Read and decode the inner chunk at inner_chunk_indices into a vector of its elements. Read more
source§

fn retrieve_inner_chunk_ndarray_opt<'a, T: Pod>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, inner_chunk_indices: &[u64], options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.
Read and decode the inner chunk at inner_chunk_indices into an ndarray::ArrayD. Read more
source§

fn retrieve_inner_chunks_opt<'a>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, inner_chunks: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Read and decode the inner chunks at inner_chunks into their bytes. Read more
source§

fn retrieve_inner_chunks_elements_opt<'a, T: Pod>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, inner_chunks: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Read and decode the inner chunks at inner_chunks into a vector of their elements. Read more
source§

fn retrieve_inner_chunks_ndarray_opt<'a, T: Pod>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, inner_chunks: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.
Read and decode the inner chunks at inner_chunks into an ndarray::ArrayD. Read more
source§

fn retrieve_array_subset_sharded_opt<'a>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<u8>, ArrayError>

Read and decode the array_subset of the array into its bytes. Read more
source§

fn retrieve_array_subset_elements_sharded_opt<'a, T: Pod>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<Vec<T>, ArrayError>

Read and decode the array_subset of the array into a vector of its elements. Read more
source§

fn retrieve_array_subset_ndarray_sharded_opt<'a, T: Pod>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, array_subset: &ArraySubset, options: &CodecOptions ) -> Result<ArrayD<T>, ArrayError>

Available on crate feature ndarray only.
Read and decode the array_subset of the array into an ndarray::ArrayD. Read more
source§

fn retrieve_shard_subset_into_array_view_opt<'a>( &'a self, cache: &ArrayShardedReadableExtCache<'a>, shard_indices: &[u64], shard_subset: &ArraySubset, array_view: &ArrayView<'_>, options: &CodecOptions ) -> Result<(), ArrayError>

Retrieve a shard subset into an array view. Read more
source§

impl<TStorage: Debug + ?Sized> Debug for Array<TStorage>

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

Auto Trait Implementations§

§

impl<TStorage> Freeze for Array<TStorage>
where TStorage: ?Sized,

§

impl<TStorage> !RefUnwindSafe for Array<TStorage>

§

impl<TStorage> Send for Array<TStorage>
where TStorage: Sync + Send + ?Sized,

§

impl<TStorage> Sync for Array<TStorage>
where TStorage: Sync + Send + ?Sized,

§

impl<TStorage> Unpin for Array<TStorage>
where TStorage: ?Sized,

§

impl<TStorage> !UnwindSafe for Array<TStorage>

Blanket Implementations§

source§

impl<T> Any for T
where T: 'static + ?Sized,

source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
source§

impl<T> Borrow<T> for T
where T: ?Sized,

source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
source§

impl<T> From<T> for T

source§

fn from(t: T) -> T

Returns the argument unchanged.

source§

impl<T> Instrument for T

source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
source§

impl<T, U> Into<U> for T
where U: From<T>,

source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

source§

impl<T> Pointable for T

source§

const ALIGN: usize = _

The alignment of the pointer.
§

type Init = T

The type for initializers.
source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more
source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
source§

impl<T> Same for T

§

type Output = T

Should always be Self
source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

§

type Error = Infallible

The type returned in the event of a conversion error.
source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

source§

fn vzip(self) -> V

source§

impl<T> WithSubscriber for T

source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more