Struct aws_sdk_sagemaker::model::OutputConfig
#[non_exhaustive]
pub struct OutputConfig {
pub s3_output_location: Option<String>,
pub target_device: Option<TargetDevice>,
pub target_platform: Option<TargetPlatform>,
pub compiler_options: Option<String>,
pub kms_key_id: Option<String>,
}
Contains information about the output location for the compiled model and the target device that the model runs on. TargetDevice and TargetPlatform are mutually exclusive, so you must choose one of the two to specify your target device or platform. If the device you want to use is not in the TargetDevice list, use TargetPlatform to describe the platform of your edge device, and CompilerOptions if there are specific settings that are required or recommended for that TargetPlatform.
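A minimal sketch of the two mutually exclusive paths through the builder. The exact enum variant names (e.g. TargetDevice::JetsonTx2, TargetPlatformArch::ArmEabihf) are assumptions about this SDK version's generated names, so check them against your crate:

use aws_sdk_sagemaker::model::{
    OutputConfig, TargetDevice, TargetPlatform, TargetPlatformArch, TargetPlatformOs,
};

// Option A: name a concrete device from the TargetDevice list.
let by_device = OutputConfig::builder()
    .s3_output_location("s3://bucket-name/key-name-prefix")
    .target_device(TargetDevice::JetsonTx2)
    .build();

// Option B: describe the platform instead. Set one of the two, never both.
let by_platform = OutputConfig::builder()
    .s3_output_location("s3://bucket-name/key-name-prefix")
    .target_platform(
        TargetPlatform::builder()
            .os(TargetPlatformOs::Linux)
            .arch(TargetPlatformArch::ArmEabihf)
            .build(),
    )
    .build();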
Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be constructed with the Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.

s3_output_location: Option<String>
Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.
target_device: Option<TargetDevice>
Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. It can be used instead of TargetPlatform, which alternatively specifies the OS, architecture, and accelerator.
target_platform: Option<TargetPlatform>
Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.
The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms (a Rust sketch of the Jetson TX2 case follows the list):
- Raspberry Pi 3 Model B+
  "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},
  "CompilerOptions": {'mattr': ['+neon']}
- Jetson TX2
  "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},
  "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}
- EC2 m5.2xlarge instance OS
  "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},
  "CompilerOptions": {'mcpu': 'skylake-avx512'}
- RK3399
  "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}
- ARMv7 phone (CPU)
  "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},
  "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}
- ARMv8 phone (CPU)
  "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},
  "CompilerOptions": {'ANDROID_PLATFORM': 29}
compiler_options: Option<String>
Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional. (A sketch that assembles one of these option documents follows the list.)
- DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for the data type are:
  - float32: Use either "float" or "float32".
  - int64: Use either "int64" or "long".
  For example, {"dtype" : "float32"}.
- CPU: Compilation for CPU supports the following compiler options:
  - mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}
  - mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}
- ARM: Details of ARM CPU compilations:
  - NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors. For example, add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.
- NVIDIA: Compilation for NVIDIA GPU supports the following compiler options:
  - gpu-code: Specifies the targeted architecture.
  - trt-ver: Specifies the TensorRT version in x.y.z format.
  - cuda-ver: Specifies the CUDA version in x.y format.
  For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}
- ANDROID: Compilation for the Android OS supports the following compiler options:
  - ANDROID_PLATFORM: Specifies the Android API level. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.
  - mattr: Add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.
- INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"". For information about supported compiler options, see Neuron Compiler CLI.
- CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:
  - class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.
- EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:
  - precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".
  - signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.
  - output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.
  For example: {"precision_mode": "FP32", "output_names": ["output:0"]}
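Because compiler_options carries a JSON document inside a plain String, one way to avoid hand-escaping is to assemble it with serde_json and serialize. A sketch using the ANDROID example above; serde_json is an assumption of this sketch, not a dependency the SDK requires:

use aws_sdk_sagemaker::model::OutputConfig;
use serde_json::json;

// Assemble the ANDROID options as JSON, then pass the serialized string.
let options = json!({ "ANDROID_PLATFORM": 28, "mattr": ["+neon"] });
let config = OutputConfig::builder()
    .compiler_options(options.to_string())
    .build();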
kms_key_id: Option<String>
The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.
The KmsKeyId can be any of the following formats:
- Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
- Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
- Alias name: alias/ExampleAlias
- Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
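Any of the four formats is passed through the builder as-is; a brief sketch using the alias form:

use aws_sdk_sagemaker::model::OutputConfig;

let config = OutputConfig::builder()
    .s3_output_location("s3://bucket-name/key-name-prefix")
    .kms_key_id("alias/ExampleAlias")
    .build();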
Implementations
impl OutputConfig
pub fn s3_output_location(&self) -> Option<&str>
Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.
pub fn target_device(&self) -> Option<&TargetDevice>
Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. See the target_device field above.
pub fn target_platform(&self) -> Option<&TargetPlatform>
Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice. See the target_platform field above for example TargetPlatform and CompilerOptions configurations.
pub fn compiler_options(&self) -> Option<&str>
Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform specific. See the compiler_options field above for the per-target option reference.
pub fn kms_key_id(&self) -> Option<&str>
The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job. See the kms_key_id field above for the accepted key formats.
impl OutputConfig
pub fn builder() -> Builder
Creates a new builder-style object to manufacture OutputConfig.
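A short round trip through the builder and the getters above; since every field is optional on the wire, each getter returns an Option:

use aws_sdk_sagemaker::model::OutputConfig;

let config = OutputConfig::builder()
    .s3_output_location("s3://bucket-name/key-name-prefix")
    .build();

if let Some(location) = config.s3_output_location() {
    println!("artifacts will be written to {}", location);
}
// Fields that were never set come back as None.
assert!(config.target_device().is_none());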
Trait Implementations
impl Clone for OutputConfig
fn clone(&self) -> OutputConfig
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for OutputConfig
impl PartialEq<OutputConfig> for OutputConfig
fn eq(&self, other: &OutputConfig) -> bool
This method tests for self and other values to be equal, and is used by ==.
fn ne(&self, other: &OutputConfig) -> bool
This method tests for !=.
impl StructuralPartialEq for OutputConfig
Auto Trait Implementations
impl RefUnwindSafe for OutputConfig
impl Send for OutputConfig
impl Sync for OutputConfig
impl Unpin for OutputConfig
impl UnwindSafe for OutputConfig
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
pub fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> ToOwned for T where T: Clone
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
pub fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.