//! # aws-multipart-upload
//!
//! A high-level API for building and working with AWS S3 multipart uploads
//! using the official [SDK] for Rust.
//!
//! ## Overview
//!
//! As explained in the [README][readme], the goal of this crate is to provide
//! an API that simplifies performing S3 multipart uploads: abstractions hide
//! the tedious, exacting details of the protocol while remaining easily
//! compatible with the most common dependencies in the ecosystem.
//!
//! In general, the crate provides:
//!
//! * An abstract [`UploadApi`] trait representing the atomic operations of a
//!   multipart upload, plus a stock implementation for the AWS SDK S3 client,
//!   [`SdkClient`], which can be extended.
//! * Convenience methods for statically or dynamically constructing one or
//!   more [`ObjectUri`]s, the destination address of an upload and the only
//!   value required to initialize one.
//! * The module [`encoder`], which defines how values are written to the body
//!   of a part upload via the trait [`PartEncoder`], and provides stock
//!   encoders for JSON Lines and CSV.
//! * The main types, [`Upload`] and [`EncodeUpload`], which implement the
//!   trait [`MultipartWrite`] for asynchronously writing the parts of a whole
//!   as an AWS S3 multipart upload.
//! * Combinators to compose and combine uploads with other multipart writers,
//!   streams, and futures.
//!
//! ## Examples
//!
//! ```rust,no_run
//! use aws_multipart_upload::{ByteSize, SdkClient, UploadBuilder};
//! use aws_multipart_upload::encoder::JsonLinesEncoder;
//! use aws_multipart_upload::prelude::*;
//! use futures::stream::{self, StreamExt as _};
//!
//! async fn upload_stream() {
//!     // Build a default multipart upload client.
//!     //
//!     // For convenience `aws_config` is re-exported, as is `aws_sdk_s3`
//!     // under the symbol `aws_sdk`, for customization.
//!     let client = SdkClient::defaults().await;
//!
//!     // Use `UploadBuilder` to make a multipart upload with a target part
//!     // size of 5 MiB, which writes incoming `serde_json::Value`s as lines
//!     // of JSON.
//!     let upl = UploadBuilder::new(client)
//!         .with_part_size(ByteSize::mib(5))
//!         .with_uri(("a-bucket-us-east-1", "an/object/key.jsonl"))
//!         .build_upload_from(JsonLinesEncoder::new());
//!
//!     // Now the uploader can have `serde_json::Value`s written to it to
//!     // build a part of the upload. As parts reach the target size of
//!     // 5 MiB, they'll be turned into a part upload request and the
//!     // request will be sent.
//!     //
//!     // The combinator `collect_upload` combines this uploader with a
//!     // streaming source. The result is a future that, when awaited, runs
//!     // the stream to exhaustion, uploading the parts and sending a request
//!     // to complete the upload when the stream has stopped producing.
//!     let out = stream::iter(0..100_000)
//!         .map(|n| serde_json::json!({"k1": n, "k2": n.to_string()}))
//!         .collect_upload(upl)
//!         .await
//!         .unwrap();
//!
//!     println!("uploaded {} bytes to {}", out.bytes, out.uri);
//! }
//! ```
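//!
//! Only the destination and stream differ from one upload to the next: the
//! tuple passed to `with_uri` becomes the [`ObjectUri`] of the object being
//! written, and `collect_upload` resolves to a `Result`, so failures can be
//! handled instead of unwrapped. A minimal sketch, reusing only the calls
//! shown above, with a caller-supplied bucket and key:
//!
//! ```rust,no_run
//! use aws_multipart_upload::{ByteSize, SdkClient, UploadBuilder};
//! use aws_multipart_upload::encoder::JsonLinesEncoder;
//! use aws_multipart_upload::prelude::*;
//! use futures::stream::{self, StreamExt as _};
//!
//! async fn upload_events(bucket: &str, key: &str) {
//!     let client = SdkClient::defaults().await;
//!
//!     // The `(bucket, key)` tuple is converted into the upload's
//!     // destination `ObjectUri`.
//!     let upl = UploadBuilder::new(client)
//!         .with_part_size(ByteSize::mib(8))
//!         .with_uri((bucket, key))
//!         .build_upload_from(JsonLinesEncoder::new());
//!
//!     let res = stream::iter(0..1000)
//!         .map(|n| serde_json::json!({"event": n}))
//!         .collect_upload(upl)
//!         .await;
//!
//!     // A failed part upload or completion request surfaces here rather
//!     // than panicking mid-stream.
//!     match res {
//!         Ok(out) => println!("wrote {} bytes to {}", out.bytes, out.uri),
//!         Err(e) => eprintln!("upload failed: {e}"),
//!     }
//! }
//! ```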
//!
//! [SDK]: https://awslabs.github.io/aws-sdk-rust/
//! [readme]: https://github.com/quasi-coherent/aws-multipart-upload/blob/master/README.md
//! [`PartEncoder`]: self::encoder::PartEncoder
//! [`MultipartWrite`]: https://docs.rs/multipart-write/latest/multipart_write/
//! [`Upload`]: self::upload::Upload
//! [`EncodeUpload`]: self::upload::EncodeUpload
use ;
pub extern crate aws_config;
pub extern crate aws_sdk_s3 as aws_sdk;
pub extern crate multipart_write;
pub use bytesize::ByteSize;
pub use ;
pub use ;
pub use ;