Struct aws_sdk_neptunedata::operation::create_ml_endpoint::builders::CreateMLEndpointFluentBuilder
pub struct CreateMLEndpointFluentBuilder { /* private fields */ }
Fluent builder constructing a request to CreateMLEndpoint.
Creates a new Neptune ML inference endpoint that lets you query one specific model that the model-training process constructed. See Managing inference endpoints using the endpoints command.
When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:CreateMLEndpoint IAM action in that cluster.
Implementations§
impl CreateMLEndpointFluentBuilder
pub fn as_input(&self) -> &CreateMlEndpointInputBuilder
Access the CreateMLEndpoint as a reference.
pub async fn send(self) -> Result<CreateMlEndpointOutput, SdkError<CreateMLEndpointError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
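As a minimal sketch (not taken from this page), creating an endpoint through the fluent client might look like the following; the endpoint id, training-job id, and client setup are placeholder assumptions:

use aws_sdk_neptunedata::Client;

// Minimal sketch: create an inference endpoint from a completed model-training job.
// The endpoint id and job id are placeholders; error handling is simplified.
async fn create_endpoint(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let output = client
        .create_ml_endpoint()
        .id("my-inference-endpoint")                  // optional; autogenerated if omitted
        .ml_model_training_job_id("training-job-123") // or .ml_model_transform_job_id(...)
        .send()
        .await?;
    // Assumes the response exposes the new endpoint's id as an accessor.
    println!("created endpoint: {:?}", output.id());
    Ok(())
}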
pub fn customize(self) -> CustomizableOperation<CreateMlEndpointOutput, CreateMLEndpointError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
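As a hedged sketch, a customized call can adjust the operation before it is dispatched; mutate_request and the header shown below are assumptions about the aws-smithy-runtime customization API, not something documented on this page:

use aws_sdk_neptunedata::Client;

// Hedged sketch: tweak the outgoing request via customize(). The job id and
// the trace header are hypothetical placeholders.
async fn create_with_trace_header(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    client
        .create_ml_endpoint()
        .ml_model_training_job_id("training-job-123") // placeholder
        .customize()
        .mutate_request(|req| {
            // Hypothetical header added to the outgoing HTTP request.
            req.headers_mut().insert("x-example-trace-id", "abc-123");
        })
        .send()
        .await?;
    Ok(())
}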
pub fn id(self, input: impl Into<String>) -> Self
A unique identifier for the new inference endpoint. The default is an autogenerated timestamped name.
pub fn set_id(self, input: Option<String>) -> Self
A unique identifier for the new inference endpoint. The default is an autogenerated timestamped name.
pub fn get_id(&self) -> &Option<String>
A unique identifier for the new inference endpoint. The default is an autogenerated timestamped name.
pub fn ml_model_training_job_id(self, input: impl Into<String>) -> Self
The job Id of the completed model-training job that has created the model that the inference endpoint will point to. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn set_ml_model_training_job_id(self, input: Option<String>) -> Self
The job Id of the completed model-training job that has created the model that the inference endpoint will point to. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn get_ml_model_training_job_id(&self) -> &Option<String>
The job Id of the completed model-training job that has created the model that the inference endpoint will point to. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn ml_model_transform_job_id(self, input: impl Into<String>) -> Self
The job Id of the completed model-transform job. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn set_ml_model_transform_job_id(self, input: Option<String>) -> Self
The job Id of the completed model-transform job. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn get_ml_model_transform_job_id(&self) -> &Option<String>
The job Id of the completed model-transform job. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn update(self, input: bool) -> Self
If set to true, update indicates that this is an update request. The default is false. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn set_update(self, input: Option<bool>) -> Self
If set to true, update indicates that this is an update request. The default is false. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
pub fn get_update(&self) -> &Option<bool>
If set to true, update indicates that this is an update request. The default is false. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
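Taken together, an update request is the same builder call with update(true) and the id of the existing endpoint. A hedged sketch with placeholder ids:

use aws_sdk_neptunedata::Client;

// Hedged sketch: re-point an existing inference endpoint at a newer model by
// setting update(true). The endpoint id and transform-job id are placeholders.
async fn update_endpoint(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    client
        .create_ml_endpoint()
        .id("my-inference-endpoint")                    // id of the existing endpoint
        .ml_model_transform_job_id("transform-job-456") // or ml_model_training_job_id
        .update(true)
        .send()
        .await?;
    Ok(())
}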
pub fn neptune_iam_role_arn(self, input: impl Into<String>) -> Self
The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will be thrown.
pub fn set_neptune_iam_role_arn(self, input: Option<String>) -> Self
The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will be thrown.
pub fn get_neptune_iam_role_arn(&self) -> &Option<String>
The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will be thrown.
pub fn model_name(self, input: impl Into<String>) -> Self
Model type for training. By default the Neptune ML model is automatically based on the modelType used in data processing, but you can specify a different model type here. The default is rgcn for heterogeneous graphs and kge for knowledge graphs. The only valid value for heterogeneous graphs is rgcn. Valid values for knowledge graphs are: kge, transe, distmult, and rotate.
pub fn set_model_name(self, input: Option<String>) -> Self
Model type for training. By default the Neptune ML model is automatically based on the modelType used in data processing, but you can specify a different model type here. The default is rgcn for heterogeneous graphs and kge for knowledge graphs. The only valid value for heterogeneous graphs is rgcn. Valid values for knowledge graphs are: kge, transe, distmult, and rotate.
pub fn get_model_name(&self) -> &Option<String>
Model type for training. By default the Neptune ML model is automatically based on the modelType used in data processing, but you can specify a different model type here. The default is rgcn for heterogeneous graphs and kge for knowledge graphs. The only valid value for heterogeneous graphs is rgcn. Valid values for knowledge graphs are: kge, transe, distmult, and rotate.
pub fn instance_type(self, input: impl Into<String>) -> Self
The type of Neptune ML instance to use for online servicing. The default is ml.m5.xlarge. Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget.
pub fn set_instance_type(self, input: Option<String>) -> Self
The type of Neptune ML instance to use for online servicing. The default is ml.m5.xlarge. Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget.
pub fn get_instance_type(&self) -> &Option<String>
The type of Neptune ML instance to use for online servicing. The default is ml.m5.xlarge. Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget.
pub fn instance_count(self, input: i32) -> Self
The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. The default is 1.
pub fn set_instance_count(self, input: Option<i32>) -> Self
The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. The default is 1.
pub fn get_instance_count(&self) -> &Option<i32>
The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. The default is 1.
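As a hedged sketch, serving capacity can be overridden when building the request; the instance type and count below are illustrative placeholders, not recommendations:

use aws_sdk_neptunedata::Client;

// Hedged sketch: override the default instance type and instance count.
// The job id and sizing values are placeholders.
async fn create_sized_endpoint(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    client
        .create_ml_endpoint()
        .ml_model_training_job_id("training-job-123") // placeholder
        .instance_type("ml.m5.2xlarge")
        .instance_count(2)
        .send()
        .await?;
    Ok(())
}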
pub fn volume_encryption_kms_key(self, input: impl Into<String>) -> Self
The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
pub fn set_volume_encryption_kms_key(self, input: Option<String>) -> Self
The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
pub fn get_volume_encryption_kms_key(&self) -> &Option<String>
The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
Trait Implementations§
impl Clone for CreateMLEndpointFluentBuilder
fn clone(&self) -> CreateMLEndpointFluentBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
Auto Trait Implementations§
impl Freeze for CreateMLEndpointFluentBuilder
impl !RefUnwindSafe for CreateMLEndpointFluentBuilder
impl Send for CreateMLEndpointFluentBuilder
impl Sync for CreateMLEndpointFluentBuilder
impl Unpin for CreateMLEndpointFluentBuilder
impl !UnwindSafe for CreateMLEndpointFluentBuilder
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.