#[non_exhaustive]
pub struct InputConfig {
    pub s3_uri: Option<String>,
    pub data_input_config: Option<String>,
    pub framework: Option<Framework>,
    pub framework_version: Option<String>,
}
Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.
Fields (Non-exhaustive)

This struct is marked as non-exhaustive: struct { .. } syntax cannot be used to match against it without a wildcard .., and struct update syntax will not work.

s3_uri: Option<String>
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
data_input_config: Option<String>
Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are framework specific.
- TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"input":[1,1024,1024,3]}
    - If using the CLI, {\"input\":[1,1024,1024,3]}
  - Examples for two inputs:
    - If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}
    - If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
- KERAS: You must specify the name and shape (NCHW format) of the expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"input_1":[1,3,224,224]}
    - If using the CLI, {\"input_1\":[1,3,224,224]}
  - Examples for two inputs:
    - If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
    - If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
- MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
  - Examples for one input:
    - If using the console, {"data":[1,3,1024,1024]}
    - If using the CLI, {\"data\":[1,3,1024,1024]}
  - Examples for two inputs:
    - If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}
    - If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
- PyTorch: You can either specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model, or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.
  - Examples for one input in dictionary format:
    - If using the console, {"input0":[1,3,224,224]}
    - If using the CLI, {\"input0\":[1,3,224,224]}
  - Example for one input in list format: [[1,3,224,224]]
  - Examples for two inputs in dictionary format:
    - If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}
    - If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
  - Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
- XGBOOST: input data name and shape are not needed.
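As the paired examples above suggest, the CLI form is the console form with each double quote escaped. A minimal sketch of that relationship (the helper name is hypothetical, not part of the SDK):

```rust
// Hypothetical helper: derive the CLI form of a DataInputConfig dictionary
// from the console form by escaping each double quote, matching the paired
// console/CLI examples above.
fn console_to_cli(console_form: &str) -> String {
    console_form.replace('"', "\\\"")
}

fn main() {
    let console = r#"{"input":[1,1024,1024,3]}"#;
    let cli = console_to_cli(console);
    // CLI form with escaped quotes
    assert_eq!(cli, r#"{\"input\":[1,1024,1024,3]}"#);
    println!("{cli}");
}
```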
DataInputConfig supports the following parameters for CoreML TargetDevice (ML Model format):

- shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, the CoreML converter supports flexible input shapes:
  - Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}
  - Enumerated shapes. Sometimes models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}
- default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example, {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}
- type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters, such as bias and scale.
- bias: If the input type is Image, you need to provide the bias vector.
- scale: If the input type is Image, you need to provide a scale factor.
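To make the Range Dimension spec concrete, here is an illustrative sketch (not an SDK function; bounds are assumed inclusive at both ends) that checks whether a dimension fits a range such as "1..10":

```rust
// Illustrative only: parse a CoreML Range Dimension spec such as "1..10"
// (assumed inclusive at both ends) and test whether a dimension fits it.
fn in_range(spec: &str, value: u32) -> Option<bool> {
    let (lo, hi) = spec.split_once("..")?;
    let (lo, hi): (u32, u32) = (lo.parse().ok()?, hi.parse().ok()?);
    Some(lo <= value && value <= hi)
}

fn main() {
    // Matches the example {"shape": ["1..10", 224, 224, 3],
    //                      "default_shape": [1, 224, 224, 3]}:
    assert_eq!(in_range("1..10", 1), Some(true));   // the default batch of 1 fits
    assert_eq!(in_range("1..10", 12), Some(false)); // 12 falls outside the range
    println!("range check ok");
}
```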
CoreML ClassifierConfig parameters can be specified using OutputConfig CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:

- Tensor type input:
  - "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}
- Tensor type input without input name (PyTorch):
  - "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]
- Image type input:
  - "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}
  - "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
- Image type input without input name (PyTorch):
  - "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]
  - "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
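The bias and scale values in the Image-type examples above are a common normalization: with scale 1/127.5 ≈ 0.007843137255 and a per-channel bias of -1, pixel values in [0, 255] map to roughly [-1, 1]. A worked check, assuming the converter applies scale * pixel + bias per channel (the usual image-preprocessing convention):

```rust
// Worked check of the Image-type parameters above, assuming the converter
// applies out = scale * pixel + bias per channel.
fn normalize(pixel: f64, scale: f64, bias: f64) -> f64 {
    pixel * scale + bias
}

fn main() {
    let (scale, bias) = (0.007843137255, -1.0); // values from the example above
    // 0 maps to -1, 255 maps to (approximately) +1:
    assert!((normalize(0.0, scale, bias) - (-1.0)).abs() < 1e-9);
    assert!((normalize(255.0, scale, bias) - 1.0).abs() < 1e-6);
    println!("normalization check ok");
}
```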
Depending on the model format, DataInputConfig requires the following parameters for ml_eia2 OutputConfig:TargetDevice.

- For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig:CompilerOptions if the model does not use TensorFlow's default signature def key. For example:
  - "DataInputConfig": {"inputs": [1, 224, 224, 3]}
  - "CompilerOptions": {"signature_def_key": "serving_custom"}
- For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig:CompilerOptions. For example:
  - "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}
  - "CompilerOptions": {"output_names": ["output_tensor:0"]}
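The frozen-graph keys above ("input_tensor:0", "output_tensor:0") are TensorFlow tensor names: an operation name plus an output index after the colon. A small illustrative sketch (the helper is not part of the SDK) splitting such a name apart:

```rust
// Illustrative: split a TensorFlow tensor name like "input_tensor:0" into
// its operation name and output index.
fn split_tensor_name(name: &str) -> Option<(&str, u32)> {
    let (op, idx) = name.rsplit_once(':')?;
    Some((op, idx.parse().ok()?))
}

fn main() {
    assert_eq!(split_tensor_name("input_tensor:0"), Some(("input_tensor", 0)));
    assert_eq!(split_tensor_name("output_tensor:0"), Some(("output_tensor", 0)));
    // A bare op name with no output index does not parse:
    assert_eq!(split_tensor_name("no_index"), None);
}
```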
framework: Option<Framework>
Identifies the framework in which the model was trained. For example: TENSORFLOW.
framework_version: Option<String>
Specifies the framework version to use. This API field is only supported for the MXNet, PyTorch, TensorFlow and TensorFlow Lite frameworks.
For information about framework versions supported for cloud targets and edge devices, see Cloud Supported Instance Types and Frameworks and Edge Supported Frameworks.
Implementations

impl InputConfig

pub fn s3_uri(&self) -> Option<&str>
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
pub fn data_input_config(&self) -> Option<&str>

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are framework specific; see the data_input_config field documentation above for the full per-framework details and examples.
pub fn framework(&self) -> Option<&Framework>
Identifies the framework in which the model was trained. For example: TENSORFLOW.
pub fn framework_version(&self) -> Option<&str>
Specifies the framework version to use. This API field is only supported for the MXNet, PyTorch, TensorFlow and TensorFlow Lite frameworks.
For information about framework versions supported for cloud targets and edge devices, see Cloud Supported Instance Types and Frameworks and Edge Supported Frameworks.
impl InputConfig

pub fn builder() -> InputConfigBuilder

Creates a new builder-style object to manufacture InputConfig.
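The real InputConfigBuilder is generated by the SDK (and lives in aws-sdk-sagemaker), so the following is only a stripped-down, self-contained stand-in that illustrates the same builder-style construction: one setter per field, then build():

```rust
// Illustration only: a minimal stand-in for the generated builder pattern.
// The real InputConfigBuilder comes from aws-sdk-sagemaker; its setters and
// build() follow the same general shape sketched here.
#[derive(Debug, Default, PartialEq)]
struct InputConfig {
    s3_uri: Option<String>,
    data_input_config: Option<String>,
}

#[derive(Default)]
struct InputConfigBuilder {
    s3_uri: Option<String>,
    data_input_config: Option<String>,
}

impl InputConfigBuilder {
    fn s3_uri(mut self, v: impl Into<String>) -> Self {
        self.s3_uri = Some(v.into());
        self
    }
    fn data_input_config(mut self, v: impl Into<String>) -> Self {
        self.data_input_config = Some(v.into());
        self
    }
    fn build(self) -> InputConfig {
        InputConfig { s3_uri: self.s3_uri, data_input_config: self.data_input_config }
    }
}

fn main() {
    let cfg = InputConfigBuilder::default()
        .s3_uri("s3://my-bucket/model.tar.gz") // hypothetical artifact path
        .data_input_config(r#"{"input":[1,1024,1024,3]}"#)
        .build();
    assert_eq!(cfg.s3_uri.as_deref(), Some("s3://my-bucket/model.tar.gz"));
    println!("{cfg:?}");
}
```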
Trait Implementations

impl Clone for InputConfig

fn clone(&self) -> InputConfig

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl Debug for InputConfig

impl PartialEq for InputConfig

impl StructuralPartialEq for InputConfig
Auto Trait Implementations
impl Freeze for InputConfig
impl RefUnwindSafe for InputConfig
impl Send for InputConfig
impl Sync for InputConfig
impl Unpin for InputConfig
impl UnwindSafe for InputConfig
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.

impl<T> Paint for T where T: ?Sized

fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value. This method should be used rarely; prefer the color-specific builder methods like red() and green(), which have the same functionality but are pithier.

Example: set the foreground color to white using fg():

use yansi::{Paint, Color};
painted.fg(Color::White);

Or, equivalently, using white():

use yansi::Paint;
painted.white();

fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>

fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value. This method should be used rarely; prefer the color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.

Example: set the background color to red using bg():

use yansi::{Paint, Color};
painted.bg(Color::Red);

Or, equivalently, using on_red():

use yansi::Paint;
painted.on_red();

fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>

fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value. This method should be used rarely; prefer the attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.

Example: make text bold using attr():

use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);

Or, equivalently, using bold():

use yansi::Paint;
painted.bold();

fn rapid_blink(&self) -> Painted<&T>

fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value. This method should be used rarely; prefer the quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.

Example: enable wrapping using quirk():

use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);

Or, equivalently, using wrap():

use yansi::Paint;
painted.wrap();

fn clear(&self) -> Painted<&T>
Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.

fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enables styling based on whether the Condition value applies. Replaces any previous condition. See the crate-level docs for more details.

Example: enable styling painted only when both stdout and stderr are TTYs:

use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);