Struct aws_sdk_sagemaker::model::input_config::Builder
#[non_exhaustive]
pub struct Builder { /* private fields */ }
A builder for InputConfig
Implementations
impl Builder
pub fn s3_uri(self, input: impl Into<String>) -> Self
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
pub fn set_s3_uri(self, input: Option<String>) -> Self
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
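A minimal sketch of setting the artifact location on the builder; the bucket and key below are hypothetical, and the object must be a single gzip compressed tar archive as described above:

use aws_sdk_sagemaker::model::InputConfig;

// Hypothetical S3 location; the path must end in .tar.gz per the constraint above.
let builder = InputConfig::builder()
    .s3_uri("s3://example-bucket/output/model.tar.gz");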
pub fn data_input_config(self, input: impl Into<String>) -> Self
Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific.

- TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"input":[1,1024,1024,3]}
    - If using the CLI, {\"input\":[1,1024,1024,3]}
  - Examples for two inputs:
    - If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}
    - If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
- KERAS: You must specify the name and shape (NCHW format) of the expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"input_1":[1,3,224,224]}
    - If using the CLI, {\"input_1\":[1,3,224,224]}
  - Examples for two inputs:
    - If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
    - If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
- MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"data":[1,3,1024,1024]}
    - If using the CLI, {\"data\":[1,3,1024,1024]}
  - Examples for two inputs:
    - If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}
    - If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
- PyTorch: You can either specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format, or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different; the list formats for the console and CLI are the same.
  - Examples for one input in dictionary format:
    - If using the console, {"input0":[1,3,224,224]}
    - If using the CLI, {\"input0\":[1,3,224,224]}
  - Example for one input in list format: [[1,3,224,224]]
  - Examples for two inputs in dictionary format:
    - If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}
    - If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
  - Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
- XGBOOST: input data name and shape are not needed.

DataInputConfig supports the following parameters for CoreML OutputConfig$TargetDevice (ML Model format):

- shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, the CoreML converter supports flexible input shapes:
  - Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}
  - Enumerated shapes. Sometimes the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}
- default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example, {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}
- type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters such as bias and scale.
- bias: If the input type is an Image, you need to provide the bias vector.
- scale: If the input type is an Image, you need to provide a scale factor.

CoreML ClassifierConfig parameters can be specified using OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:

- Tensor type input:
  - "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}
- Tensor type input without input name (PyTorch):
  - "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]
- Image type input:
  - "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}
  - "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
- Image type input without input name (PyTorch):
  - "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]
  - "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

Depending on the model format, DataInputConfig requires the following parameters for ml_eia2 OutputConfig:TargetDevice.

- For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig:CompilerOptions if the model does not use TensorFlow's default signature def key. For example:
  - "DataInputConfig": {"inputs": [1, 224, 224, 3]}
  - "CompilerOptions": {"signature_def_key": "serving_custom"}
- For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig:CompilerOptions. For example:
  - "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}
  - "CompilerOptions": {"output_names": ["output_tensor:0"]}
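As an illustration (not part of the generated docs), the value passed to data_input_config is the JSON string itself; when written in Rust source, a raw string literal avoids the CLI-style backslash escaping shown above:

use aws_sdk_sagemaker::model::InputConfig;

// TensorFlow example from above: one NHWC input named "input".
let builder = InputConfig::builder()
    .data_input_config(r#"{"input":[1,1024,1024,3]}"#);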
pub fn set_data_input_config(self, input: Option<String>) -> Self
Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific.

- TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"input":[1,1024,1024,3]}
    - If using the CLI, {\"input\":[1,1024,1024,3]}
  - Examples for two inputs:
    - If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}
    - If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
- KERAS: You must specify the name and shape (NCHW format) of the expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"input_1":[1,3,224,224]}
    - If using the CLI, {\"input_1\":[1,3,224,224]}
  - Examples for two inputs:
    - If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
    - If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
- MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"data":[1,3,1024,1024]}
    - If using the CLI, {\"data\":[1,3,1024,1024]}
  - Examples for two inputs:
    - If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}
    - If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
- PyTorch: You can either specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format, or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different; the list formats for the console and CLI are the same.
  - Examples for one input in dictionary format:
    - If using the console, {"input0":[1,3,224,224]}
    - If using the CLI, {\"input0\":[1,3,224,224]}
  - Example for one input in list format: [[1,3,224,224]]
  - Examples for two inputs in dictionary format:
    - If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}
    - If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
  - Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
- XGBOOST: input data name and shape are not needed.

DataInputConfig supports the following parameters for CoreML OutputConfig$TargetDevice (ML Model format):

- shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, the CoreML converter supports flexible input shapes:
  - Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}
  - Enumerated shapes. Sometimes the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}
- default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example, {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}
- type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters such as bias and scale.
- bias: If the input type is an Image, you need to provide the bias vector.
- scale: If the input type is an Image, you need to provide a scale factor.

CoreML ClassifierConfig parameters can be specified using OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:

- Tensor type input:
  - "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}
- Tensor type input without input name (PyTorch):
  - "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]
- Image type input:
  - "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}
  - "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
- Image type input without input name (PyTorch):
  - "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]
  - "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

Depending on the model format, DataInputConfig requires the following parameters for ml_eia2 OutputConfig:TargetDevice.

- For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig:CompilerOptions if the model does not use TensorFlow's default signature def key. For example:
  - "DataInputConfig": {"inputs": [1, 224, 224, 3]}
  - "CompilerOptions": {"signature_def_key": "serving_custom"}
- For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig:CompilerOptions. For example:
  - "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}
  - "CompilerOptions": {"output_names": ["output_tensor:0"]}
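A hedged sketch of the Option-taking setter, here reusing one of the CoreML flexible-shape examples above (the shape values are illustrative):

use aws_sdk_sagemaker::model::InputConfig;

// Enumerated input shapes with a default, as in the CoreML section above.
let shapes = r#"{"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}"#;
let builder = InputConfig::builder()
    .set_data_input_config(Some(shapes.to_string()));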
pub fn framework(self, input: Framework) -> Self
Identifies the framework in which the model was trained. For example: TENSORFLOW.
pub fn set_framework(self, input: Option<Framework>) -> Self
Identifies the framework in which the model was trained. For example: TENSORFLOW.
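For illustration, a sketch of both setters; the variant name Framework::Tensorflow is assumed from the crate's usual mapping of TENSORFLOW to CamelCase, so check the Framework enum for the exact variants:

use aws_sdk_sagemaker::model::{Framework, InputConfig};

// Fluent setter and Option-taking setter are equivalent here.
let with_fluent = InputConfig::builder().framework(Framework::Tensorflow);
let with_option = InputConfig::builder().set_framework(Some(Framework::Tensorflow));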
pub fn framework_version(self, input: impl Into<String>) -> Self
Specifies the framework version to use. This API field is only supported for the PyTorch and TensorFlow frameworks.
For information about framework versions supported for cloud targets and edge devices, see Cloud Supported Instance Types and Frameworks and Edge Supported Frameworks.
pub fn set_framework_version(self, input: Option<String>) -> Self
Specifies the framework version to use. This API field is only supported for the PyTorch and TensorFlow frameworks.
For information about framework versions supported for cloud targets and edge devices, see Cloud Supported Instance Types and Frameworks and Edge Supported Frameworks.
pub fn build(self) -> InputConfig
Consumes the builder and constructs an InputConfig.
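Putting the pieces together, a sketch of building a complete InputConfig; the S3 path, shape, and version string are placeholders, and Framework::Pytorch is an assumed variant name:

use aws_sdk_sagemaker::model::{Framework, InputConfig};

let input_config = InputConfig::builder()
    .s3_uri("s3://example-bucket/output/model.tar.gz") // gzip compressed tar archive
    .data_input_config(r#"{"input0":[1,3,224,224]}"#)  // PyTorch NCHW dictionary format
    .framework(Framework::Pytorch)                     // assumed variant name
    .framework_version("1.12")                         // illustrative; see supported versions
    .build();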
Trait Implementations
impl StructuralPartialEq for Builder
Auto Trait Implementations
impl RefUnwindSafe for Builder
impl Send for Builder
impl Sync for Builder
impl Unpin for Builder
impl UnwindSafe for Builder
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
pub fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
pub fn clone_into(&self, target: &mut T)
This is a nightly-only experimental API (toowned_clone_into). Uses borrowed data to replace owned data, usually by cloning.
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.