Struct aws_sdk_neptunedata::client::Client
pub struct Client { /* private fields */ }
Client for Amazon NeptuneData
Client for invoking operations on Amazon NeptuneData. Each operation on Amazon NeptuneData is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_neptunedata::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Config struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_neptunedata::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service.
For example, the CancelGremlinQuery operation has a Client::cancel_gremlin_query function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.cancel_gremlin_query()
.query_id("example")
.send()
.await;
The underlying HTTP requests made by the client can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
Implementations§
impl Client

pub fn cancel_gremlin_query(&self) -> CancelGremlinQueryFluentBuilder

Constructs a fluent builder for the CancelGremlinQuery operation.

- The fluent builder is configurable:
  - query_id(impl Into<String>) / set_query_id(Option<String>) (required: true): The unique identifier that identifies the query to be canceled.
- On success, responds with CancelGremlinQueryOutput with field(s):
  - status(Option<String>): The status of the cancellation.
- On failure, responds with SdkError<CancelGremlinQueryError>.
impl Client

pub fn cancel_loader_job(&self) -> CancelLoaderJobFluentBuilder

Constructs a fluent builder for the CancelLoaderJob operation.

- The fluent builder is configurable:
  - load_id(impl Into<String>) / set_load_id(Option<String>) (required: true): The ID of the load job to be deleted.
- On success, responds with CancelLoaderJobOutput with field(s):
  - status(Option<String>): The cancellation status.
- On failure, responds with SdkError<CancelLoaderJobError>.
impl Client

pub fn cancel_ml_data_processing_job(&self) -> CancelMLDataProcessingJobFluentBuilder

Constructs a fluent builder for the CancelMLDataProcessingJob operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the data-processing job.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
  - clean(bool) / set_clean(Option<bool>) (required: false): If set to TRUE, this flag specifies that all Neptune ML S3 artifacts should be deleted when the job is stopped. The default is FALSE.
- On success, responds with CancelMlDataProcessingJobOutput with field(s):
  - status(Option<String>): The status of the cancellation request.
- On failure, responds with SdkError<CancelMLDataProcessingJobError>.
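For example, a minimal sketch of canceling a data-processing job and cleaning up its S3 artifacts (a configured client is assumed, and the job ID is a placeholder):

```rust
// Hypothetical job ID; substitute the ID returned when the job was started.
let result = client.cancel_ml_data_processing_job()
    .id("example-dp-job-id")
    .clean(true)
    .send()
    .await;
```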
impl Client

pub fn cancel_ml_model_training_job(&self) -> CancelMLModelTrainingJobFluentBuilder

Constructs a fluent builder for the CancelMLModelTrainingJob operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the model-training job to be canceled.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
  - clean(bool) / set_clean(Option<bool>) (required: false): If set to TRUE, this flag specifies that all Amazon S3 artifacts should be deleted when the job is stopped. The default is FALSE.
- On success, responds with CancelMlModelTrainingJobOutput with field(s):
  - status(Option<String>): The status of the cancellation.
- On failure, responds with SdkError<CancelMLModelTrainingJobError>.
impl Client

pub fn cancel_ml_model_transform_job(&self) -> CancelMLModelTransformJobFluentBuilder

Constructs a fluent builder for the CancelMLModelTransformJob operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique ID of the model-transform job to be canceled.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
  - clean(bool) / set_clean(Option<bool>) (required: false): If this flag is set to TRUE, all Neptune ML S3 artifacts should be deleted when the job is stopped. The default is FALSE.
- On success, responds with CancelMlModelTransformJobOutput with field(s):
  - status(Option<String>): The status of the cancellation.
- On failure, responds with SdkError<CancelMLModelTransformJobError>.
impl Client

pub fn cancel_open_cypher_query(&self) -> CancelOpenCypherQueryFluentBuilder

Constructs a fluent builder for the CancelOpenCypherQuery operation.

- The fluent builder is configurable:
  - query_id(impl Into<String>) / set_query_id(Option<String>) (required: true): The unique ID of the openCypher query to cancel.
  - silent(bool) / set_silent(Option<bool>) (required: false): If set to TRUE, causes the cancellation of the openCypher query to happen silently.
- On success, responds with CancelOpenCypherQueryOutput with field(s):
  - status(Option<String>): The cancellation status of the openCypher query.
  - payload(Option<bool>): The cancellation payload for the openCypher query.
- On failure, responds with SdkError<CancelOpenCypherQueryError>.
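For example, a sketch of silently canceling a running openCypher query (a configured client is assumed, and the query ID is a placeholder):

```rust
// Hypothetical query ID; obtain real IDs from GetOpenCypherQueryStatus.
let result = client.cancel_open_cypher_query()
    .query_id("example-query-id")
    .silent(true)
    .send()
    .await;
```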
impl Client

pub fn create_ml_endpoint(&self) -> CreateMLEndpointFluentBuilder

Constructs a fluent builder for the CreateMLEndpoint operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: false): A unique identifier for the new inference endpoint. The default is an autogenerated timestamped name.
  - ml_model_training_job_id(impl Into<String>) / set_ml_model_training_job_id(Option<String>) (required: false): The job Id of the completed model-training job that has created the model that the inference endpoint will point to. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
  - ml_model_transform_job_id(impl Into<String>) / set_ml_model_transform_job_id(Option<String>) (required: false): The job Id of the completed model-transform job. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
  - update(bool) / set_update(Option<bool>) (required: false): If set to true, update indicates that this is an update request. The default is false. You must supply either the mlModelTrainingJobId or the mlModelTransformJobId.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will be thrown.
  - model_name(impl Into<String>) / set_model_name(Option<String>) (required: false): Model type for training. By default the Neptune ML model is automatically based on the modelType used in data processing, but you can specify a different model type here. The default is rgcn for heterogeneous graphs and kge for knowledge graphs. The only valid value for heterogeneous graphs is rgcn. Valid values for knowledge graphs are: kge, transe, distmult, and rotate.
  - instance_type(impl Into<String>) / set_instance_type(Option<String>) (required: false): The type of Neptune ML instance to use for online servicing. The default is ml.m5.xlarge. Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget.
  - instance_count(i32) / set_instance_count(Option<i32>) (required: false): The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. The default is 1.
  - volume_encryption_kms_key(impl Into<String>) / set_volume_encryption_kms_key(Option<String>) (required: false): The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
- On success, responds with CreateMlEndpointOutput with field(s):
  - id(Option<String>): The unique ID of the new inference endpoint.
  - arn(Option<String>): The ARN for the new inference endpoint.
  - creation_time_in_millis(Option<i64>): The endpoint creation time, in milliseconds.
- On failure, responds with SdkError<CreateMLEndpointError>.
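For example, a sketch of creating an inference endpoint from a completed model-training job (a configured client is assumed, and the training job ID is a placeholder):

```rust
// Hypothetical training job ID; exactly one of the training or transform
// job IDs must be supplied.
let result = client.create_ml_endpoint()
    .ml_model_training_job_id("example-training-job-id")
    .instance_type("ml.m5.xlarge")
    .instance_count(1)
    .send()
    .await;
```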
impl Client

pub fn delete_ml_endpoint(&self) -> DeleteMLEndpointFluentBuilder

Constructs a fluent builder for the DeleteMLEndpoint operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the inference endpoint.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will be thrown.
  - clean(bool) / set_clean(Option<bool>) (required: false): If this flag is set to TRUE, all Neptune ML S3 artifacts should be deleted when the job is stopped. The default is FALSE.
- On success, responds with DeleteMlEndpointOutput with field(s):
  - status(Option<String>): The status of the cancellation.
- On failure, responds with SdkError<DeleteMLEndpointError>.
impl Client

pub fn delete_propertygraph_statistics(&self) -> DeletePropertygraphStatisticsFluentBuilder

Constructs a fluent builder for the DeletePropertygraphStatistics operation.

- The fluent builder takes no input; just send it.
- On success, responds with DeletePropertygraphStatisticsOutput with field(s):
  - status_code(Option<i32>): The HTTP response code: 200 if the delete was successful, or 204 if there were no statistics to delete.
  - status(Option<String>): The cancel status.
  - payload(Option<DeleteStatisticsValueMap>): The deletion payload.
- On failure, responds with SdkError<DeletePropertygraphStatisticsError>.
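As a sketch, an operation with no input reduces to a bare send() (a configured client is assumed; the accessor name follows the SDK's usual field-accessor convention):

```rust
let result = client.delete_propertygraph_statistics().send().await;
if let Ok(output) = result {
    // 200 if statistics were deleted, 204 if there were none to delete.
    println!("status code: {:?}", output.status_code());
}
```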
impl Client

pub fn delete_sparql_statistics(&self) -> DeleteSparqlStatisticsFluentBuilder

Constructs a fluent builder for the DeleteSparqlStatistics operation.

- The fluent builder takes no input; just send it.
- On success, responds with DeleteSparqlStatisticsOutput with field(s):
  - status_code(Option<i32>): The HTTP response code: 200 if the delete was successful, or 204 if there were no statistics to delete.
  - status(Option<String>): The cancel status.
  - payload(Option<DeleteStatisticsValueMap>): The deletion payload.
- On failure, responds with SdkError<DeleteSparqlStatisticsError>.
impl Client

pub fn execute_fast_reset(&self) -> ExecuteFastResetFluentBuilder

Constructs a fluent builder for the ExecuteFastReset operation.

- The fluent builder is configurable:
  - action(Action) / set_action(Option<Action>) (required: true): The fast reset action. One of the following values:
    - initiateDatabaseReset – This action generates a unique token needed to actually perform the fast reset.
    - performDatabaseReset – This action uses the token generated by the initiateDatabaseReset action to actually perform the fast reset.
  - token(impl Into<String>) / set_token(Option<String>) (required: false): The fast-reset token to initiate the reset.
- On success, responds with ExecuteFastResetOutput with field(s):
  - status(String): The status is only returned for the performDatabaseReset action, and indicates whether or not the fast reset request is accepted.
  - payload(Option<FastResetToken>): The payload is only returned by the initiateDatabaseReset action, and contains the unique token to use with the performDatabaseReset action to make the reset occur.
- On failure, responds with SdkError<ExecuteFastResetError>.
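The two-step token flow can be sketched as follows, inside an async fn returning a Result (a configured client is assumed; the Action variant and payload accessor names are assumed to follow the SDK's usual codegen conventions):

```rust
use aws_sdk_neptunedata::types::Action;

// Step 1: obtain a one-time reset token.
let init = client.execute_fast_reset()
    .action(Action::InitiateDatabaseReset)
    .send()
    .await?;
let token = init.payload().and_then(|p| p.token()).unwrap_or_default();

// Step 2: perform the reset using that token.
let reset = client.execute_fast_reset()
    .action(Action::PerformDatabaseReset)
    .token(token)
    .send()
    .await?;
```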
impl Client

pub fn execute_gremlin_explain_query(&self) -> ExecuteGremlinExplainQueryFluentBuilder

Constructs a fluent builder for the ExecuteGremlinExplainQuery operation.

- The fluent builder is configurable:
  - gremlin_query(impl Into<String>) / set_gremlin_query(Option<String>) (required: true): The Gremlin explain query string.
- On success, responds with ExecuteGremlinExplainQueryOutput with field(s):
  - output(Option<Blob>): A text blob containing the Gremlin explain result, as described in Tuning Gremlin queries.
- On failure, responds with SdkError<ExecuteGremlinExplainQueryError>.
impl Client

pub fn execute_gremlin_profile_query(&self) -> ExecuteGremlinProfileQueryFluentBuilder

Constructs a fluent builder for the ExecuteGremlinProfileQuery operation.

- The fluent builder is configurable:
  - gremlin_query(impl Into<String>) / set_gremlin_query(Option<String>) (required: true): The Gremlin query string to profile.
  - results(bool) / set_results(Option<bool>) (required: false): If this flag is set to TRUE, the query results are gathered and displayed as part of the profile report. If FALSE, only the result count is displayed.
  - chop(i32) / set_chop(Option<i32>) (required: false): If non-zero, causes the results string to be truncated at that number of characters. If set to zero, the string contains all the results.
  - serializer(impl Into<String>) / set_serializer(Option<String>) (required: false): If non-null, the gathered results are returned in a serialized response message in the format specified by this parameter. See Gremlin profile API in Neptune for more information.
  - index_ops(bool) / set_index_ops(Option<bool>) (required: false): If this flag is set to TRUE, the results include a detailed report of all index operations that took place during query execution and serialization.
- On success, responds with ExecuteGremlinProfileQueryOutput with field(s):
  - output(Option<Blob>): A text blob containing the Gremlin Profile result. See Gremlin profile API in Neptune for details.
- On failure, responds with SdkError<ExecuteGremlinProfileQueryError>.
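For example, a sketch of profiling a traversal with results gathered and truncated at 500 characters (a configured client is assumed; the query itself is illustrative):

```rust
let result = client.execute_gremlin_profile_query()
    .gremlin_query("g.V().hasLabel('person').out('knows').count()")
    .results(true)
    .chop(500)
    .send()
    .await;
```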
impl Client

pub fn execute_gremlin_query(&self) -> ExecuteGremlinQueryFluentBuilder

Constructs a fluent builder for the ExecuteGremlinQuery operation.

- The fluent builder is configurable:
  - gremlin_query(impl Into<String>) / set_gremlin_query(Option<String>) (required: true): Using this API, you can run Gremlin queries in string format much as you can using the HTTP endpoint. The interface is compatible with whatever Gremlin version your DB cluster is using (see the Tinkerpop client section to determine which Gremlin releases your engine version supports).
  - serializer(impl Into<String>) / set_serializer(Option<String>) (required: false): If non-null, the query results are returned in a serialized response message in the format specified by this parameter. See the GraphSON section in the TinkerPop documentation for a list of the formats that are currently supported.
- On success, responds with ExecuteGremlinQueryOutput with field(s):
  - request_id(Option<String>): The unique identifier of the Gremlin query.
  - status(Option<GremlinQueryStatusAttributes>): The status of the Gremlin query.
  - result(Option<Document>): The Gremlin query output from the server.
  - meta_value(Option<Document>): Metadata about the Gremlin query.
- On failure, responds with SdkError<ExecuteGremlinQueryError>.
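For example, a minimal sketch of running a Gremlin query string (a configured client is assumed; the query is illustrative):

```rust
let result = client.execute_gremlin_query()
    .gremlin_query("g.V().limit(10).valueMap()")
    .send()
    .await;
```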
impl Client

pub fn execute_open_cypher_explain_query(&self) -> ExecuteOpenCypherExplainQueryFluentBuilder

Constructs a fluent builder for the ExecuteOpenCypherExplainQuery operation.

- The fluent builder is configurable:
  - open_cypher_query(impl Into<String>) / set_open_cypher_query(Option<String>) (required: true): The openCypher query string.
  - parameters(impl Into<String>) / set_parameters(Option<String>) (required: false): The openCypher query parameters.
  - explain_mode(OpenCypherExplainMode) / set_explain_mode(Option<OpenCypherExplainMode>) (required: true): The openCypher explain mode. Can be one of: static, dynamic, or details.
- On success, responds with ExecuteOpenCypherExplainQueryOutput with field(s):
  - results(Blob): A text blob containing the openCypher explain results.
- On failure, responds with SdkError<ExecuteOpenCypherExplainQueryError>.
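For example, a sketch requesting a details-mode explain plan (a configured client is assumed; the enum variant name is assumed to follow the SDK's usual codegen conventions):

```rust
use aws_sdk_neptunedata::types::OpenCypherExplainMode;

let result = client.execute_open_cypher_explain_query()
    .open_cypher_query("MATCH (n) RETURN n LIMIT 10")
    .explain_mode(OpenCypherExplainMode::Details)
    .send()
    .await;
```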
impl Client

pub fn execute_open_cypher_query(&self) -> ExecuteOpenCypherQueryFluentBuilder

Constructs a fluent builder for the ExecuteOpenCypherQuery operation.

- The fluent builder is configurable:
  - open_cypher_query(impl Into<String>) / set_open_cypher_query(Option<String>) (required: true): The openCypher query string to be executed.
  - parameters(impl Into<String>) / set_parameters(Option<String>) (required: false): The openCypher query parameters for query execution. See Examples of openCypher parameterized queries for more information.
- On success, responds with ExecuteOpenCypherQueryOutput with field(s):
  - results(Document): The openCypher query results.
- On failure, responds with SdkError<ExecuteOpenCypherQueryError>.
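For example, a sketch of a parameterized openCypher query, with the parameters passed as a JSON string (a configured client is assumed; the query and parameter values are illustrative):

```rust
let result = client.execute_open_cypher_query()
    .open_cypher_query("MATCH (n {name: $name}) RETURN n")
    .parameters(r#"{"name": "john"}"#)
    .send()
    .await;
```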
impl Client

pub fn get_engine_status(&self) -> GetEngineStatusFluentBuilder

Constructs a fluent builder for the GetEngineStatus operation.

- The fluent builder takes no input; just send it.
- On success, responds with GetEngineStatusOutput with field(s):
  - status(Option<String>): Set to healthy if the instance is not experiencing problems. If the instance is recovering from a crash or from being rebooted and there are active transactions running from the latest server shutdown, status is set to recovery.
  - start_time(Option<String>): Set to the UTC time at which the current server process started.
  - db_engine_version(Option<String>): Set to the Neptune engine version running on your DB cluster. If this engine version has been manually patched since it was released, the version number is prefixed by Patch-.
  - role(Option<String>): Set to reader if the instance is a read-replica, or to writer if the instance is the primary instance.
  - dfe_query_engine(Option<String>): Set to enabled if the DFE engine is fully enabled, or to viaQueryHint (the default) if the DFE engine is only used with queries that have the useDFE query hint set to true.
  - gremlin(Option<QueryLanguageVersion>): Contains information about the Gremlin query language available on your cluster. Specifically, it contains a version field that specifies the current TinkerPop version being used by the engine.
  - sparql(Option<QueryLanguageVersion>): Contains information about the SPARQL query language available on your cluster. Specifically, it contains a version field that specifies the current SPARQL version being used by the engine.
  - opencypher(Option<QueryLanguageVersion>): Contains information about the openCypher query language available on your cluster. Specifically, it contains a version field that specifies the current openCypher version being used by the engine.
  - lab_mode(Option<HashMap::<String, String>>): Contains Lab Mode settings being used by the engine.
  - rolling_back_trx_count(Option<i32>): If there are transactions being rolled back, this field is set to the number of such transactions. If there are none, the field doesn’t appear at all.
  - rolling_back_trx_earliest_start_time(Option<String>): Set to the start time of the earliest transaction being rolled back. If no transactions are being rolled back, the field doesn’t appear at all.
  - features(Option<HashMap::<String, Document>>): Contains status information about the features enabled on your DB cluster.
  - settings(Option<HashMap::<String, String>>): Contains information about the current settings on your DB cluster. For example, contains the current cluster query timeout setting (clusterQueryTimeoutInMs).
- On failure, responds with SdkError<GetEngineStatusError>.
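For example, a sketch that fetches engine status and prints a few fields, inside an async fn returning a Result (a configured client is assumed; accessor names follow the SDK's usual field-accessor convention):

```rust
let status = client.get_engine_status().send().await?;
println!("status: {:?}", status.status());
println!("engine version: {:?}", status.db_engine_version());
println!("role: {:?}", status.role());
```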
impl Client

pub fn get_gremlin_query_status(&self) -> GetGremlinQueryStatusFluentBuilder

Constructs a fluent builder for the GetGremlinQueryStatus operation.

- The fluent builder is configurable:
  - query_id(impl Into<String>) / set_query_id(Option<String>) (required: true): The unique identifier that identifies the Gremlin query.
- On success, responds with GetGremlinQueryStatusOutput with field(s):
  - query_id(Option<String>): The ID of the query for which status is being returned.
  - query_string(Option<String>): The Gremlin query string.
  - query_eval_stats(Option<QueryEvalStats>): The evaluation status of the Gremlin query.
- On failure, responds with SdkError<GetGremlinQueryStatusError>.
impl Client

pub fn get_loader_job_status(&self) -> GetLoaderJobStatusFluentBuilder

Constructs a fluent builder for the GetLoaderJobStatus operation.

- The fluent builder is configurable:
  - load_id(impl Into<String>) / set_load_id(Option<String>) (required: true): The load ID of the load job to get the status of.
  - details(bool) / set_details(Option<bool>) (required: false): Flag indicating whether or not to include details beyond the overall status (TRUE or FALSE; the default is FALSE).
  - errors(bool) / set_errors(Option<bool>) (required: false): Flag indicating whether or not to include a list of errors encountered (TRUE or FALSE; the default is FALSE). The list of errors is paged. The page and errorsPerPage parameters allow you to page through all the errors.
  - page(i32) / set_page(Option<i32>) (required: false): The error page number (a positive integer; the default is 1). Only valid when the errors parameter is set to TRUE.
  - errors_per_page(i32) / set_errors_per_page(Option<i32>) (required: false): The number of errors returned in each page (a positive integer; the default is 10). Only valid when the errors parameter is set to TRUE.
- On success, responds with GetLoaderJobStatusOutput with field(s):
  - status(String): The HTTP response code for the request.
  - payload(Document): Status information about the load job, in a layout that could look like this:
- On failure, responds with SdkError<GetLoaderJobStatusError>.
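For example, a sketch that checks a load job and requests the first page of its errors (a configured client is assumed, and the load ID is a placeholder):

```rust
// Hypothetical load ID; use the ID returned when the load job was started.
let result = client.get_loader_job_status()
    .load_id("example-load-id")
    .details(true)
    .errors(true)
    .page(1)
    .errors_per_page(10)
    .send()
    .await;
```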
impl Client

pub fn get_ml_data_processing_job(&self) -> GetMLDataProcessingJobFluentBuilder

Constructs a fluent builder for the GetMLDataProcessingJob operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the data-processing job to be retrieved.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with GetMlDataProcessingJobOutput with field(s):
  - status(Option<String>): Status of the data processing job.
  - id(Option<String>): The unique identifier of this data-processing job.
  - processing_job(Option<MlResourceDefinition>): Definition of the data processing job.
- On failure, responds with SdkError<GetMLDataProcessingJobError>.
impl Client

pub fn get_ml_endpoint(&self) -> GetMLEndpointFluentBuilder

Constructs a fluent builder for the GetMLEndpoint operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the inference endpoint.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with GetMlEndpointOutput with field(s):
  - status(Option<String>): The status of the inference endpoint.
  - id(Option<String>): The unique identifier of the inference endpoint.
  - endpoint(Option<MlResourceDefinition>): The endpoint definition.
  - endpoint_config(Option<MlConfigDefinition>): The endpoint configuration.
- On failure, responds with SdkError<GetMLEndpointError>.
impl Client

pub fn get_ml_model_training_job(&self) -> GetMLModelTrainingJobFluentBuilder

Constructs a fluent builder for the GetMLModelTrainingJob operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the model-training job to retrieve.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with GetMlModelTrainingJobOutput with field(s):
  - status(Option<String>): The status of the model training job.
  - id(Option<String>): The unique identifier of this model-training job.
  - processing_job(Option<MlResourceDefinition>): The data processing job.
  - hpo_job(Option<MlResourceDefinition>): The HPO job.
  - model_transform_job(Option<MlResourceDefinition>): The model transform job.
  - ml_models(Option<Vec::<MlConfigDefinition>>): A list of the configurations of the ML models being used.
- On failure, responds with SdkError<GetMLModelTrainingJobError>.
impl Client

pub fn get_ml_model_transform_job(&self) -> GetMLModelTransformJobFluentBuilder

Constructs a fluent builder for the GetMLModelTransformJob operation.

- The fluent builder is configurable:
  - id(impl Into<String>) / set_id(Option<String>) (required: true): The unique identifier of the model-transform job to be retrieved.
  - neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>) (required: false): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with GetMlModelTransformJobOutput with field(s):
  - status(Option<String>): The status of the model-transform job.
  - id(Option<String>): The unique identifier of the model-transform job to be retrieved.
  - base_processing_job(Option<MlResourceDefinition>): The base data processing job.
  - remote_model_transform_job(Option<MlResourceDefinition>): The remote model transform job.
  - models(Option<Vec::<MlConfigDefinition>>): A list of the configuration information for the models being used.
- On failure, responds with SdkError<GetMLModelTransformJobError>.
impl Client

pub fn get_open_cypher_query_status(&self) -> GetOpenCypherQueryStatusFluentBuilder

Constructs a fluent builder for the GetOpenCypherQueryStatus operation.

- The fluent builder is configurable:
  - query_id(impl Into<String>) / set_query_id(Option<String>) (required: true): The unique ID of the openCypher query for which to retrieve the query status.
- On success, responds with GetOpenCypherQueryStatusOutput with field(s):
  - query_id(Option<String>): The unique ID of the query for which status is being returned.
  - query_string(Option<String>): The openCypher query string.
  - query_eval_stats(Option<QueryEvalStats>): The openCypher query evaluation status.
- On failure, responds with SdkError<GetOpenCypherQueryStatusError>.
impl Client

pub fn get_propertygraph_statistics(&self) -> GetPropertygraphStatisticsFluentBuilder

Constructs a fluent builder for the GetPropertygraphStatistics operation.

- The fluent builder takes no input; just send it.
- On success, responds with GetPropertygraphStatisticsOutput with field(s):
  - status(String): The HTTP return code of the request. If the request succeeded, the code is 200. See Common error codes for DFE statistics request for a list of common errors.
  - payload(Option<Statistics>): Statistics for property-graph data.
- On failure, responds with SdkError<GetPropertygraphStatisticsError>.
source§impl Client
impl Client
sourcepub fn get_propertygraph_stream(&self) -> GetPropertygraphStreamFluentBuilder
pub fn get_propertygraph_stream(&self) -> GetPropertygraphStreamFluentBuilder
Constructs a fluent builder for the GetPropertygraphStream
operation.
- The fluent builder is configurable:
limit(i64)
/set_limit(Option<i64>)
:
required: falseSpecifies the maximum number of records to return. There is also a size limit of 10 MB on the response that can’t be modified and that takes precedence over the number of records specified in the
limit
parameter. The response does include a threshold-breaching record if the 10 MB limit was reached.The range for
limit
is 1 to 100,000, with a default of 10.iterator_type(IteratorType)
/set_iterator_type(Option<IteratorType>)
:
required: falseCan be one of:
-
AT_SEQUENCE_NUMBER
– Indicates that reading should start from the event sequence number specified jointly by thecommitNum
andopNum
parameters. -
AFTER_SEQUENCE_NUMBER
– Indicates that reading should start right after the event sequence number specified jointly by thecommitNum
andopNum
parameters. -
TRIM_HORIZON
– Indicates that reading should start at the last untrimmed record in the system, which is the oldest unexpired (not yet deleted) record in the change-log stream. -
LATEST
– Indicates that reading should start at the most recent record in the system, which is the latest unexpired (not yet deleted) record in the change-log stream.
-
commit_num(i64)
/set_commit_num(Option<i64>)
:
required: falseThe commit number of the starting record to read from the change-log stream. This parameter is required when
iteratorType
isAT_SEQUENCE_NUMBER
orAFTER_SEQUENCE_NUMBER
, and ignored wheniteratorType
isTRIM_HORIZON
orLATEST
.
op_num(i64)
/set_op_num(Option<i64>)
:
required: falseThe operation sequence number within the specified commit to start reading from in the change-log stream data. The default is
1
.
encoding(Encoding)
/set_encoding(Option<Encoding>)
:
required: falseIf set to TRUE, Neptune compresses the response using gzip encoding.
- On success, responds with
GetPropertygraphStreamOutput
with field(s):last_event_id(HashMap::<String, String>)
:Sequence identifier of the last change in the stream response.
An event ID is composed of two fields: a
commitNum
, which identifies a transaction that changed the graph, and anopNum
, which identifies a specific operation within that transaction.
last_trx_timestamp_in_millis(i64)
:The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.
format(String)
:Serialization format for the change records being returned. Currently, the only supported value is
PG_JSON
.
records(Vec::<PropertygraphRecord>)
:An array of serialized change-log stream records included in the response.
total_records(i32)
:The total number of records in the response.
- On failure, responds with
SdkError<GetPropertygraphStreamError>
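A hedged sketch of reading the change-log stream from the oldest available record. The `IteratorType` variant names below are assumed from the SDK's usual codegen conventions, and the `records()` accessor is assumed to return a slice; check the crate's generated types before relying on them:

```rust
use aws_sdk_neptunedata::{types::IteratorType, Client};

// Sketch: read up to 100 property-graph change-log records from the trim horizon.
async fn read_pg_stream(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .get_propertygraph_stream()
        .iterator_type(IteratorType::TrimHorizon)
        .limit(100)
        .send()
        .await?;
    println!("format: {}, total records: {}", out.format(), out.total_records());
    for record in out.records() {
        // Each record is one serialized change-log entry (PG_JSON format).
        println!("{:?}", record);
    }
    Ok(())
}
```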
pub fn get_propertygraph_summary(&self) -> GetPropertygraphSummaryFluentBuilder
Constructs a fluent builder for the GetPropertygraphSummary
operation.
- The fluent builder is configurable:
mode(GraphSummaryType)
/set_mode(Option<GraphSummaryType>)
:
required: falseMode can take one of two values:
BASIC
(the default), andDETAILED
.
- On success, responds with
GetPropertygraphSummaryOutput
with field(s):status_code(Option<i32>)
:The HTTP return code of the request. If the request succeeded, the code is 200.
payload(Option<PropertygraphSummaryValueMap>)
:Payload containing the property graph summary response.
- On failure, responds with
SdkError<GetPropertygraphSummaryError>
pub fn get_rdf_graph_summary(&self) -> GetRDFGraphSummaryFluentBuilder
Constructs a fluent builder for the GetRDFGraphSummary
operation.
- The fluent builder is configurable:
mode(GraphSummaryType)
/set_mode(Option<GraphSummaryType>)
:
required: falseMode can take one of two values:
BASIC
(the default), andDETAILED
.
- On success, responds with
GetRdfGraphSummaryOutput
with field(s):status_code(Option<i32>)
:The HTTP return code of the request. If the request succeeded, the code is 200.
payload(Option<RdfGraphSummaryValueMap>)
:Payload for an RDF graph summary response
- On failure, responds with
SdkError<GetRDFGraphSummaryError>
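For example, requesting a detailed RDF summary could be sketched as follows (`GraphSummaryType::Detailed` is the assumed variant name for the DETAILED mode):

```rust
use aws_sdk_neptunedata::{types::GraphSummaryType, Client};

// Sketch: fetch a DETAILED RDF graph summary.
async fn rdf_summary(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .get_rdf_graph_summary()
        .mode(GraphSummaryType::Detailed)
        .send()
        .await?;
    println!("status code: {:?}", out.status_code());
    if let Some(summary) = out.payload() {
        println!("summary: {:?}", summary);
    }
    Ok(())
}
```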
pub fn get_sparql_statistics(&self) -> GetSparqlStatisticsFluentBuilder
Constructs a fluent builder for the GetSparqlStatistics
operation.
- The fluent builder takes no input; just send it.
- On success, responds with
GetSparqlStatisticsOutput
with field(s):status(String)
:The HTTP return code of the request. If the request succeeded, the code is 200. See Common error codes for DFE statistics request for a list of common errors.
When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:GetStatisticsStatus IAM action in that cluster.
payload(Option<Statistics>)
:Statistics for RDF data.
- On failure, responds with
SdkError<GetSparqlStatisticsError>
pub fn get_sparql_stream(&self) -> GetSparqlStreamFluentBuilder
Constructs a fluent builder for the GetSparqlStream
operation.
- The fluent builder is configurable:
limit(i64)
/set_limit(Option<i64>)
:
required: false
Specifies the maximum number of records to return. There is also a size limit of 10 MB on the response that can’t be modified and that takes precedence over the number of records specified in the limit parameter. The response does include a threshold-breaching record if the 10 MB limit was reached.
The range for limit is 1 to 100,000, with a default of 10.
iterator_type(IteratorType)
/set_iterator_type(Option<IteratorType>)
:
required: falseCan be one of:
-
AT_SEQUENCE_NUMBER
– Indicates that reading should start from the event sequence number specified jointly by thecommitNum
andopNum
parameters. -
AFTER_SEQUENCE_NUMBER
– Indicates that reading should start right after the event sequence number specified jointly by thecommitNum
andopNum
parameters. -
TRIM_HORIZON
– Indicates that reading should start at the last untrimmed record in the system, which is the oldest unexpired (not yet deleted) record in the change-log stream. -
LATEST
– Indicates that reading should start at the most recent record in the system, which is the latest unexpired (not yet deleted) record in the change-log stream.
-
commit_num(i64)
/set_commit_num(Option<i64>)
:
required: falseThe commit number of the starting record to read from the change-log stream. This parameter is required when
iteratorType
isAT_SEQUENCE_NUMBER
orAFTER_SEQUENCE_NUMBER
, and ignored wheniteratorType
isTRIM_HORIZON
orLATEST
.
op_num(i64)
/set_op_num(Option<i64>)
:
required: falseThe operation sequence number within the specified commit to start reading from in the change-log stream data. The default is
1
.
encoding(Encoding)
/set_encoding(Option<Encoding>)
:
required: falseIf set to TRUE, Neptune compresses the response using gzip encoding.
- On success, responds with
GetSparqlStreamOutput
with field(s):last_event_id(HashMap::<String, String>)
:Sequence identifier of the last change in the stream response.
An event ID is composed of two fields: a
commitNum
, which identifies a transaction that changed the graph, and anopNum
, which identifies a specific operation within that transaction.
last_trx_timestamp_in_millis(i64)
:The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.
format(String)
:Serialization format for the change records being returned. Currently, the only supported value is
NQUADS
.
records(Vec::<SparqlRecord>)
:An array of serialized change-log stream records included in the response.
total_records(i32)
:The total number of records in the response.
- On failure, responds with
SdkError<GetSparqlStreamError>
pub fn list_gremlin_queries(&self) -> ListGremlinQueriesFluentBuilder
Constructs a fluent builder for the ListGremlinQueries
operation.
- The fluent builder is configurable:
include_waiting(bool)
/set_include_waiting(Option<bool>)
:
required: false
If set to TRUE, the list returned includes waiting queries. The default is FALSE.
- On success, responds with
ListGremlinQueriesOutput
with field(s):accepted_query_count(Option<i32>)
:The number of queries that have been accepted but not yet completed, including queries in the queue.
running_query_count(Option<i32>)
:The number of Gremlin queries currently running.
queries(Option<Vec::<GremlinQueryStatus>>)
:A list of the current queries.
- On failure, responds with
SdkError<ListGremlinQueriesError>
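A minimal sketch that includes waiting queries in the listing (the `queries()` accessor is assumed to return a slice, per the SDK's convention for optional lists):

```rust
use aws_sdk_neptunedata::Client;

// Sketch: list running and waiting Gremlin queries.
async fn list_gremlin(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .list_gremlin_queries()
        .include_waiting(true)
        .send()
        .await?;
    println!(
        "accepted: {:?}, running: {:?}",
        out.accepted_query_count(),
        out.running_query_count()
    );
    for query in out.queries() {
        // GremlinQueryStatus carries the query ID, query string, and eval stats.
        println!("{:?}", query);
    }
    Ok(())
}
```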
pub fn list_loader_jobs(&self) -> ListLoaderJobsFluentBuilder
Constructs a fluent builder for the ListLoaderJobs
operation.
- The fluent builder is configurable:
limit(i32)
/set_limit(Option<i32>)
:
required: false
The number of load IDs to list. Must be a positive integer greater than zero and not more than 100 (which is the default).
include_queued_loads(bool)
/set_include_queued_loads(Option<bool>)
:
required: falseAn optional parameter that can be used to exclude the load IDs of queued load requests when requesting a list of load IDs by setting the parameter to
FALSE
. The default value isTRUE
.
- On success, responds with
ListLoaderJobsOutput
with field(s):status(String)
:Returns the status of the job list request.
payload(Option<LoaderIdResult>)
:The requested list of job IDs.
- On failure, responds with
SdkError<ListLoaderJobsError>
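A minimal sketch that lists recent load jobs while excluding queued loads:

```rust
use aws_sdk_neptunedata::Client;

// Sketch: list up to 10 load job IDs, excluding queued loads.
async fn list_loads(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .list_loader_jobs()
        .limit(10)
        .include_queued_loads(false)
        .send()
        .await?;
    println!("status: {}", out.status());
    if let Some(ids) = out.payload() {
        // LoaderIdResult wraps the requested list of load job IDs.
        println!("load ids: {:?}", ids);
    }
    Ok(())
}
```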
pub fn list_ml_data_processing_jobs(&self) -> ListMLDataProcessingJobsFluentBuilder
Constructs a fluent builder for the ListMLDataProcessingJobs
operation.
- The fluent builder is configurable:
max_items(i32)
/set_max_items(Option<i32>)
:
required: falseThe maximum number of items to return (from 1 to 1024; the default is 10).
neptune_iam_role_arn(impl Into<String>)
/set_neptune_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with
ListMlDataProcessingJobsOutput
with field(s):ids(Option<Vec::<String>>)
:A page listing data processing job IDs.
- On failure, responds with
SdkError<ListMLDataProcessingJobsError>
pub fn list_ml_endpoints(&self) -> ListMLEndpointsFluentBuilder
Constructs a fluent builder for the ListMLEndpoints
operation.
- The fluent builder is configurable:
max_items(i32)
/set_max_items(Option<i32>)
:
required: falseThe maximum number of items to return (from 1 to 1024; the default is 10).
neptune_iam_role_arn(impl Into<String>)
/set_neptune_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with
ListMlEndpointsOutput
with field(s):ids(Option<Vec::<String>>)
:A page from the list of inference endpoint IDs.
- On failure, responds with
SdkError<ListMLEndpointsError>
pub fn list_ml_model_training_jobs(&self) -> ListMLModelTrainingJobsFluentBuilder
Constructs a fluent builder for the ListMLModelTrainingJobs
operation.
- The fluent builder is configurable:
max_items(i32)
/set_max_items(Option<i32>)
:
required: falseThe maximum number of items to return (from 1 to 1024; the default is 10).
neptune_iam_role_arn(impl Into<String>)
/set_neptune_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with
ListMlModelTrainingJobsOutput
with field(s):ids(Option<Vec::<String>>)
:A page of the list of model training job IDs.
- On failure, responds with
SdkError<ListMLModelTrainingJobsError>
pub fn list_ml_model_transform_jobs(&self) -> ListMLModelTransformJobsFluentBuilder
Constructs a fluent builder for the ListMLModelTransformJobs
operation.
- The fluent builder is configurable:
max_items(i32)
/set_max_items(Option<i32>)
:
required: falseThe maximum number of items to return (from 1 to 1024; the default is 10).
neptune_iam_role_arn(impl Into<String>)
/set_neptune_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
- On success, responds with
ListMlModelTransformJobsOutput
with field(s):ids(Option<Vec::<String>>)
:A page from the list of model transform IDs.
- On failure, responds with
SdkError<ListMLModelTransformJobsError>
pub fn list_open_cypher_queries(&self) -> ListOpenCypherQueriesFluentBuilder
Constructs a fluent builder for the ListOpenCypherQueries
operation.
- The fluent builder is configurable:
include_waiting(bool)
/set_include_waiting(Option<bool>)
:
required: falseWhen set to
TRUE
and other parameters are not present, causes status information to be returned for waiting queries as well as for running queries.
- On success, responds with
ListOpenCypherQueriesOutput
with field(s):accepted_query_count(Option<i32>)
:The number of queries that have been accepted but not yet completed, including queries in the queue.
running_query_count(Option<i32>)
:The number of currently running openCypher queries.
queries(Option<Vec::<GremlinQueryStatus>>)
:A list of current openCypher queries.
- On failure, responds with
SdkError<ListOpenCypherQueriesError>
pub fn manage_propertygraph_statistics(&self) -> ManagePropertygraphStatisticsFluentBuilder
Constructs a fluent builder for the ManagePropertygraphStatistics
operation.
- The fluent builder is configurable:
mode(StatisticsAutoGenerationMode)
/set_mode(Option<StatisticsAutoGenerationMode>)
:
required: falseThe statistics generation mode. One of:
DISABLE_AUTOCOMPUTE
,ENABLE_AUTOCOMPUTE
, orREFRESH
, the last of which manually triggers DFE statistics generation.
- On success, responds with
ManagePropertygraphStatisticsOutput
with field(s):status(String)
:The HTTP return code of the request. If the request succeeded, the code is 200.
payload(Option<RefreshStatisticsIdMap>)
:This is only returned for refresh mode.
- On failure, responds with
SdkError<ManagePropertygraphStatisticsError>
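For instance, manually triggering a statistics refresh might look like this (`StatisticsAutoGenerationMode::Refresh` is the assumed variant name for REFRESH):

```rust
use aws_sdk_neptunedata::{types::StatisticsAutoGenerationMode, Client};

// Sketch: manually trigger DFE statistics generation for property-graph data.
async fn refresh_pg_statistics(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .manage_propertygraph_statistics()
        .mode(StatisticsAutoGenerationMode::Refresh)
        .send()
        .await?;
    // `payload` (a RefreshStatisticsIdMap) is only returned in refresh mode.
    println!("status: {}, payload: {:?}", out.status(), out.payload());
    Ok(())
}
```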
pub fn manage_sparql_statistics(&self) -> ManageSparqlStatisticsFluentBuilder
Constructs a fluent builder for the ManageSparqlStatistics
operation.
- The fluent builder is configurable:
mode(StatisticsAutoGenerationMode)
/set_mode(Option<StatisticsAutoGenerationMode>)
:
required: falseThe statistics generation mode. One of:
DISABLE_AUTOCOMPUTE
,ENABLE_AUTOCOMPUTE
, orREFRESH
, the last of which manually triggers DFE statistics generation.
- On success, responds with
ManageSparqlStatisticsOutput
with field(s):status(String)
:The HTTP return code of the request. If the request succeeded, the code is 200.
payload(Option<RefreshStatisticsIdMap>)
:This is only returned for refresh mode.
- On failure, responds with
SdkError<ManageSparqlStatisticsError>
pub fn start_loader_job(&self) -> StartLoaderJobFluentBuilder
Constructs a fluent builder for the StartLoaderJob
operation.
- The fluent builder is configurable:
source(impl Into<String>)
/set_source(Option<String>)
:
required: trueThe
source
parameter accepts an S3 URI that identifies a single file, multiple files, a folder, or multiple folders. Neptune loads every data file in any folder that is specified.The URI can be in any of the following formats.
-
s3://(bucket_name)/(object-key-name)
-
https://s3.amazonaws.com/(bucket_name)/(object-key-name)
-
https://s3.us-east-1.amazonaws.com/(bucket_name)/(object-key-name)
The
object-key-name
element of the URI is equivalent to the prefix parameter in an S3 ListObjects API call. It identifies all the objects in the specified S3 bucket whose names begin with that prefix. That can be a single file or folder, or multiple files and/or folders.The specified folder or folders can contain multiple vertex files and multiple edge files.
-
format(Format)
/set_format(Option<Format>)
:
required: trueThe format of the data. For more information about data formats for the Neptune
Loader
command, see Load Data Formats.Allowed values
-
csv
for the Gremlin CSV data format. -
opencypher
for the openCypher CSV data format. -
ntriples
for the N-Triples RDF data format. -
nquads
for the N-Quads RDF data format. -
rdfxml
for the RDF/XML RDF data format. -
turtle
for the Turtle RDF data format.
-
s3_bucket_region(S3BucketRegion)
/set_s3_bucket_region(Option<S3BucketRegion>)
:
required: trueThe Amazon region of the S3 bucket. This must match the Amazon Region of the DB cluster.
iam_role_arn(impl Into<String>)
/set_iam_role_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) for an IAM role to be assumed by the Neptune DB instance for access to the S3 bucket. The IAM role ARN provided here should be attached to the DB cluster (see Adding the IAM Role to an Amazon Neptune Cluster).
mode(Mode)
/set_mode(Option<Mode>)
:
required: falseThe load job mode.
Allowed values:
RESUME
,NEW
,AUTO
.Default value:
AUTO
.-
RESUME
– In RESUME mode, the loader looks for a previous load from this source, and if it finds one, resumes that load job. If no previous load job is found, the loader stops.The loader avoids reloading files that were successfully loaded in a previous job. It only tries to process failed files. If you dropped previously loaded data from your Neptune cluster, that data is not reloaded in this mode. If a previous load job loaded all files from the same source successfully, nothing is reloaded, and the loader returns success.
-
NEW
– In NEW mode, the loader creates a new load request regardless of any previous loads. You can use this mode to reload all the data from a source after dropping previously loaded data from your Neptune cluster, or to load new data available at the same source. -
AUTO
– In AUTO mode, the loader looks for a previous load job from the same source, and if it finds one, resumes that job, just as inRESUME
mode.If the loader doesn’t find a previous load job from the same source, it loads all data from the source, just as in
NEW
mode.
-
fail_on_error(bool)
/set_fail_on_error(Option<bool>)
:
required: falsefailOnError
– A flag to toggle a complete stop on an error.Allowed values:
“TRUE”
,“FALSE”
.Default value:
“TRUE”
.When this parameter is set to
“FALSE”
, the loader tries to load all the data in the location specified, skipping any entries with errors.When this parameter is set to
“TRUE”
, the loader stops as soon as it encounters an error. Data loaded up to that point persists.
parallelism(Parallelism)
/set_parallelism(Option<Parallelism>)
:
required: falseThe optional
parallelism
parameter can be set to reduce the number of threads used by the bulk load process.Allowed values:
-
LOW
– The number of threads used is the number of available vCPUs divided by 8. -
MEDIUM
– The number of threads used is the number of available vCPUs divided by 2. -
HIGH
– The number of threads used is the same as the number of available vCPUs. -
OVERSUBSCRIBE
– The number of threads used is the number of available vCPUs multiplied by 2. If this value is used, the bulk loader takes up all available resources.This does not mean, however, that the
OVERSUBSCRIBE
setting results in 100% CPU utilization. Because the load operation is I/O bound, the highest CPU utilization to expect is in the 60% to 70% range.
Default value:
HIGH
The
parallelism
setting can sometimes result in a deadlock between threads when loading openCypher data. When this happens, Neptune returns theLOAD_DATA_DEADLOCK
error. You can generally fix the issue by settingparallelism
to a lower setting and retrying the load command.-
parser_configuration(impl Into<String>, impl Into<String>)
/set_parser_configuration(Option<HashMap::<String, String>>)
:
required: falseparserConfiguration
– An optional object with additional parser configuration values. Each of the child parameters is also optional:-
namedGraphUri
– The default graph for all RDF formats when no graph is specified (for non-quads formats and NQUAD entries with no graph).The default is
https://aws.amazon.com/neptune/vocab/v01/DefaultNamedGraph
. -
baseUri
– The base URI for RDF/XML and Turtle formats.The default is
https://aws.amazon.com/neptune/default
. -
allowEmptyStrings
– Gremlin users need to be able to pass empty string values (“”) as node and edge properties when loading CSV data. If
allowEmptyStrings
is set tofalse
(the default), such empty strings are treated as nulls and are not loaded.If
allowEmptyStrings
is set totrue
, the loader treats empty strings as valid property values and loads them accordingly.
-
update_single_cardinality_properties(bool)
/set_update_single_cardinality_properties(Option<bool>)
:
required: falseupdateSingleCardinalityProperties
is an optional parameter that controls how the bulk loader treats a new value for single-cardinality vertex or edge properties. This is not supported for loading openCypher data.Allowed values:
“TRUE”
,“FALSE”
.Default value:
“FALSE”
.By default, or when
updateSingleCardinalityProperties
is explicitly set to“FALSE”
, the loader treats a new value as an error, because it violates single cardinality.When
updateSingleCardinalityProperties
is set to“TRUE”
, on the other hand, the bulk loader replaces the existing value with the new one. If multiple edge or single-cardinality vertex property values are provided in the source file(s) being loaded, the final value at the end of the bulk load could be any one of those new values. The loader only guarantees that the existing value has been replaced by one of the new ones.
queue_request(bool)
/set_queue_request(Option<bool>)
:
required: falseThis is an optional flag parameter that indicates whether the load request can be queued up or not.
You don’t have to wait for one load job to complete before issuing the next one, because Neptune can queue up as many as 64 jobs at a time, provided that their
queueRequest
parameters are all set to“TRUE”
. The queue order of the jobs will be first-in-first-out (FIFO).If the
queueRequest
parameter is omitted or set to“FALSE”
, the load request will fail if another load job is already running.Allowed values:
“TRUE”
,“FALSE”
.Default value:
“FALSE”
.
dependencies(impl Into<String>)
/set_dependencies(Option<Vec::<String>>)
:
required: falseThis is an optional parameter that can make a queued load request contingent on the successful completion of one or more previous jobs in the queue.
Neptune can queue up as many as 64 load requests at a time, if their
queueRequest
parameters are set to“TRUE”
. Thedependencies
parameter lets you make execution of such a queued request dependent on the successful completion of one or more specified previous requests in the queue.For example, if load
Job-A
andJob-B
are independent of each other, but loadJob-C
needsJob-A
andJob-B
to be finished before it begins, proceed as follows:-
Submit
load-job-A
andload-job-B
one after another in any order, and save their load-ids. -
Submit
load-job-C
with the load-ids of the two jobs in itsdependencies
field:
Because of the
dependencies
parameter, the bulk loader will not startJob-C
untilJob-A
andJob-B
have completed successfully. If either one of them fails, Job-C will not be executed, and its status will be set toLOAD_FAILED_BECAUSE_DEPENDENCY_NOT_SATISFIED
.You can set up multiple levels of dependency in this way, so that the failure of one job will cause all requests that are directly or indirectly dependent on it to be cancelled.
-
user_provided_edge_ids(bool)
/set_user_provided_edge_ids(Option<bool>)
:
required: falseThis parameter is required only when loading openCypher data that contains relationship IDs. It must be included and set to
True
when openCypher relationship IDs are explicitly provided in the load data (recommended).When
userProvidedEdgeIds
is absent or set toTrue
, an:ID
column must be present in every relationship file in the load.When
userProvidedEdgeIds
is present and set toFalse
, relationship files in the load must not contain an:ID
column. Instead, the Neptune loader automatically generates an ID for each relationship.It’s useful to provide relationship IDs explicitly so that the loader can resume loading after error in the CSV data have been fixed, without having to reload any relationships that have already been loaded. If relationship IDs have not been explicitly assigned, the loader cannot resume a failed load if any relationship file has had to be corrected, and must instead reload all the relationships.
- On success, responds with
StartLoaderJobOutput
with field(s):status(String)
:The HTTP return code indicating the status of the load job.
payload(HashMap::<String, String>)
:Contains a
loadId
name-value pair that provides an identifier for the load operation.
- On failure, responds with
SdkError<StartLoaderJobError>
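Tying the required parameters together, a hedged sketch of starting a Gremlin CSV bulk load. The bucket, role ARN, and enum variant names (`Format::Csv`, `S3BucketRegion::UsEast1`, `Mode::Auto`, `Parallelism::High`) are illustrative assumptions based on the SDK's codegen conventions:

```rust
use aws_sdk_neptunedata::{
    types::{Format, Mode, Parallelism, S3BucketRegion},
    Client,
};

// Sketch: start a bulk load of Gremlin CSV data from S3.
async fn start_csv_load(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .start_loader_job()
        .source("s3://example-bucket/gremlin/") // hypothetical bucket/prefix
        .format(Format::Csv)
        .s3_bucket_region(S3BucketRegion::UsEast1)
        // Hypothetical role; must be attached to the DB cluster.
        .iam_role_arn("arn:aws:iam::123456789012:role/NeptuneLoadFromS3")
        .mode(Mode::Auto)
        .parallelism(Parallelism::High)
        .fail_on_error(true)
        .send()
        .await?;
    // The payload carries a loadId name-value pair identifying the load.
    println!("status: {}, payload: {:?}", out.status(), out.payload());
    Ok(())
}
```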
pub fn start_ml_data_processing_job(&self) -> StartMLDataProcessingJobFluentBuilder
Constructs a fluent builder for the StartMLDataProcessingJob
operation.
- The fluent builder is configurable:
id(impl Into<String>)
/set_id(Option<String>)
:
required: falseA unique identifier for the new job. The default is an autogenerated UUID.
previous_data_processing_job_id(impl Into<String>)
/set_previous_data_processing_job_id(Option<String>)
:
required: falseThe job ID of a completed data processing job run on an earlier version of the data.
input_data_s3_location(impl Into<String>)
/set_input_data_s3_location(Option<String>)
:
required: trueThe URI of the Amazon S3 location where you want SageMaker to download the data needed to run the data processing job.
processed_data_s3_location(impl Into<String>)
/set_processed_data_s3_location(Option<String>)
:
required: trueThe URI of the Amazon S3 location where you want SageMaker to save the results of a data processing job.
sagemaker_iam_role_arn(impl Into<String>)
/set_sagemaker_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
neptune_iam_role_arn(impl Into<String>)
/set_neptune_iam_role_arn(Option<String>)
:
required: falseThe Amazon Resource Name (ARN) of an IAM role that SageMaker can assume to perform tasks on your behalf. This must be listed in your DB cluster parameter group or an error will occur.
processing_instance_type(impl Into<String>)
/set_processing_instance_type(Option<String>)
:
required: falseThe type of ML instance used during data processing. Its memory should be large enough to hold the processed dataset. The default is the smallest ml.r5 type whose memory is ten times larger than the size of the exported graph data on disk.
processing_instance_volume_size_in_gb(i32)
/set_processing_instance_volume_size_in_gb(Option<i32>)
:
required: falseThe disk volume size of the processing instance. Both input data and processed data are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML chooses the volume size automatically based on the data size.
processing_time_out_in_seconds(i32)
/set_processing_time_out_in_seconds(Option<i32>)
:
required: falseTimeout in seconds for the data processing job. The default is 86,400 (1 day).
model_type(impl Into<String>)
/set_model_type(Option<String>)
:
required: falseOne of the two model types that Neptune ML currently supports: heterogeneous graph models (
heterogeneous
), and knowledge graph models (kge
). The default is none. If not specified, Neptune ML chooses the model type automatically based on the data.
config_file_name(impl Into<String>)
/set_config_file_name(Option<String>)
:
required: falseA data specification file that describes how to load the exported graph data for training. The file is automatically generated by the Neptune export toolkit. The default is
training-data-configuration.json
.
subnets(impl Into<String>)
/set_subnets(Option<Vec::<String>>)
:
required: falseThe IDs of the subnets in the Neptune VPC. The default is None.
security_group_ids(impl Into<String>)
/set_security_group_ids(Option<Vec::<String>>)
:
required: falseThe VPC security group IDs. The default is None.
volume_encryption_kms_key(impl Into<String>)
/set_volume_encryption_kms_key(Option<String>)
:
required: falseThe Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
s3_output_encryption_kms_key(impl Into<String>)
/set_s3_output_encryption_kms_key(Option<String>)
:
required: falseThe Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt the output of the processing job. The default is none.
- On success, responds with
StartMlDataProcessingJobOutput
with field(s):id(Option<String>)
:The unique ID of the new data processing job.
arn(Option<String>)
:The ARN of the data processing job.
creation_time_in_millis(Option<i64>)
:The time it took to create the new processing job, in milliseconds.
- On failure, responds with
SdkError<StartMLDataProcessingJobError>
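A minimal sketch with only the required S3 locations set (the bucket paths are illustrative):

```rust
use aws_sdk_neptunedata::Client;

// Sketch: start a Neptune ML data processing job with only required inputs.
async fn start_data_processing(client: &Client) -> Result<(), aws_sdk_neptunedata::Error> {
    let out = client
        .start_ml_data_processing_job()
        .input_data_s3_location("s3://example-bucket/neptune-export/") // hypothetical
        .processed_data_s3_location("s3://example-bucket/processed/") // hypothetical
        .send()
        .await?;
    println!("job id: {:?}, arn: {:?}", out.id(), out.arn());
    Ok(())
}
```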
pub fn start_ml_model_training_job(&self) -> StartMLModelTrainingJobFluentBuilder
Constructs a fluent builder for the StartMLModelTrainingJob
operation.
- The fluent builder is configurable:
id(impl Into<String>)
/set_id(Option<String>)
:
required: falseA unique identifier for the new job. The default is an autogenerated UUID.
previous_model_training_job_id(impl Into<String>)
/set_previous_model_training_job_id(Option<String>)
:
required: falseThe job ID of a completed model-training job that you want to update incrementally based on updated data.
data_processing_job_id(impl Into<String>)
/set_data_processing_job_id(Option<String>)
:
required: trueThe job ID of the completed data-processing job that has created the data that the training will work with.
train_model_s3_location(impl Into<String>)
/set_train_model_s3_location(Option<String>)
:
required: trueThe location in Amazon S3 where the model artifacts are to be stored.
sagemaker_iam_role_arn(impl Into<String>)
/set_sagemaker_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
neptune_iam_role_arn(impl Into<String>)
/set_neptune_iam_role_arn(Option<String>)
:
required: falseThe ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
base_processing_instance_type(impl Into<String>)
/set_base_processing_instance_type(Option<String>)
:
required: falseThe type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.
training_instance_type(impl Into<String>)
/set_training_instance_type(Option<String>)
:
required: falseThe type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is
ml.p3.2xlarge
. Choosing the right instance type for training depends on the task type, graph size, and your budget.
training_instance_volume_size_in_gb(i32)
/set_training_instance_volume_size_in_gb(Option<i32>)
:
required: falseThe disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
training_time_out_in_seconds(i32)
/set_training_time_out_in_seconds(Option<i32>)
:
required: falseTimeout in seconds for the training job. The default is 86,400 (1 day).
max_hpo_number_of_training_jobs(i32)
/set_max_hpo_number_of_training_jobs(Option<i32>)
:
required: falseMaximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set
maxHPONumberOfTrainingJobs
to 10). In general, the more tuning runs, the better the results.max_hpo_parallel_training_jobs(i32)
/set_max_hpo_parallel_training_jobs(Option<i32>)
:
required: falseMaximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
subnets(impl Into<String>)
/set_subnets(Option<Vec::<String>>)
:
required: falseThe IDs of the subnets in the Neptune VPC. The default is None.
security_group_ids(impl Into<String>)
/set_security_group_ids(Option<Vec::<String>>)
:
required: falseThe VPC security group IDs. The default is None.
volume_encryption_kms_key(impl Into<String>)
/set_volume_encryption_kms_key(Option<String>)
:
required: falseThe Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
s3_output_encryption_kms_key(impl Into<String>)
/set_s3_output_encryption_kms_key(Option<String>)
:
required: falseThe Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is none.
enable_managed_spot_training(bool)
/set_enable_managed_spot_training(Option<bool>)
:
required: falseOptimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is
False
.custom_model_training_parameters(CustomModelTrainingParameters)
/set_custom_model_training_parameters(Option<CustomModelTrainingParameters>)
:
required: falseThe configuration for custom model training. This is a JSON object.
- On success, responds with StartMlModelTrainingJobOutput with field(s):
id(Option<String>): The unique ID of the new model training job.
arn(Option<String>): The ARN of the new model training job.
creation_time_in_millis(Option<i64>): The model training job creation time, in milliseconds.
- On failure, responds with
SdkError<StartMLModelTrainingJobError>
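Putting the builder together, a minimal sketch of starting a training job might look as follows. This assumes a client already constructed as shown above; the data-processing job ID and S3 location are hypothetical placeholders, and only the two required setters plus one optional tuning setter are shown.

```rust
// Sketch: invoking StartMLModelTrainingJob via the fluent builder.
// "my-data-processing-job" and the S3 path are placeholder values.
use aws_sdk_neptunedata as neptunedata;

async fn start_training(client: &neptunedata::Client) -> Result<(), neptunedata::Error> {
    let output = client
        .start_ml_model_training_job()
        .data_processing_job_id("my-data-processing-job")           // required
        .train_model_s3_location("s3://my-bucket/model-artifacts/") // required
        .max_hpo_number_of_training_jobs(10) // at least 10 recommended for a good model
        .send()
        .await?;
    println!("started job {:?} ({:?})", output.id(), output.arn());
    Ok(())
}
```

Note that `send()` returns a future; the output accessors (`id()`, `arn()`) return `Option`s because the response fields are optional.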
impl Client
pub fn start_ml_model_transform_job(&self) -> StartMLModelTransformJobFluentBuilder
Constructs a fluent builder for the StartMLModelTransformJob
operation.
- The fluent builder is configurable:
id(impl Into<String>) / set_id(Option<String>):
required: false. A unique identifier for the new job. The default is an autogenerated UUID.
data_processing_job_id(impl Into<String>) / set_data_processing_job_id(Option<String>):
required: false. The job ID of a completed data-processing job. You must include either dataProcessingJobId and a mlModelTrainingJobId, or a trainingJobName.
ml_model_training_job_id(impl Into<String>) / set_ml_model_training_job_id(Option<String>):
required: false. The job ID of a completed model-training job. You must include either dataProcessingJobId and a mlModelTrainingJobId, or a trainingJobName.
training_job_name(impl Into<String>) / set_training_job_name(Option<String>):
required: false. The name of a completed SageMaker training job. You must include either dataProcessingJobId and a mlModelTrainingJobId, or a trainingJobName.
model_transform_output_s3_location(impl Into<String>) / set_model_transform_output_s3_location(Option<String>):
required: true. The location in Amazon S3 where the model artifacts are to be stored.
sagemaker_iam_role_arn(impl Into<String>) / set_sagemaker_iam_role_arn(Option<String>):
required: false. The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
neptune_iam_role_arn(impl Into<String>) / set_neptune_iam_role_arn(Option<String>):
required: false. The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
custom_model_transform_parameters(CustomModelTransformParameters) / set_custom_model_transform_parameters(Option<CustomModelTransformParameters>):
required: false. Configuration information for a model transform using a custom model. The customModelTransformParameters object contains fields that must have values compatible with the saved model parameters from the training job.
base_processing_instance_type(impl Into<String>) / set_base_processing_instance_type(Option<String>):
required: false. The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.
base_processing_instance_volume_size_in_gb(i32) / set_base_processing_instance_volume_size_in_gb(Option<i32>):
required: false. The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
subnets(impl Into<String>) / set_subnets(Option<Vec::<String>>):
required: false. The IDs of the subnets in the Neptune VPC. The default is None.
security_group_ids(impl Into<String>) / set_security_group_ids(Option<Vec::<String>>):
required: false. The VPC security group IDs. The default is None.
volume_encryption_kms_key(impl Into<String>) / set_volume_encryption_kms_key(Option<String>):
required: false. The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
s3_output_encryption_kms_key(impl Into<String>) / set_s3_output_encryption_kms_key(Option<String>):
required: false. The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
- On success, responds with StartMlModelTransformJobOutput with field(s):
id(Option<String>): The unique ID of the new model transform job.
arn(Option<String>): The ARN of the model transform job.
creation_time_in_millis(Option<i64>): The creation time of the model transform job, in milliseconds.
- On failure, responds with
SdkError<StartMLModelTransformJobError>
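A minimal sketch of starting a transform job from a completed training job follows. The job IDs and S3 location are hypothetical placeholders; this shows the `dataProcessingJobId` plus `mlModelTrainingJobId` combination rather than the `trainingJobName` alternative.

```rust
// Sketch: invoking StartMLModelTransformJob via the fluent builder.
// All identifier strings are placeholder values.
use aws_sdk_neptunedata as neptunedata;

async fn start_transform(client: &neptunedata::Client) -> Result<(), neptunedata::Error> {
    let output = client
        .start_ml_model_transform_job()
        .data_processing_job_id("my-data-processing-job")
        .ml_model_training_job_id("my-training-job")
        .model_transform_output_s3_location("s3://my-bucket/transform-output/") // required
        .send()
        .await?;
    println!("transform job {:?} ({:?})", output.id(), output.arn());
    Ok(())
}
```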
impl Client
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config
.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a sleep_impl configured.
- Identity caching is enabled without a sleep_impl and time_source configured.
- No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
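As a sketch, a service Config that avoids the behavior-version panic can be built explicitly before calling from_conf; the region chosen here is an arbitrary example value.

```rust
// Sketch: constructing a client from an explicit service Config.
// Setting a behavior version up front avoids the "no behavior_version" panic.
use aws_sdk_neptunedata::config::{BehaviorVersion, Config, Region};

fn make_client() -> aws_sdk_neptunedata::Client {
    let conf = Config::builder()
        .behavior_version(BehaviorVersion::latest())
        .region(Region::new("us-east-1")) // example region
        .build();
    aws_sdk_neptunedata::Client::from_conf(conf)
}
```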
impl Client
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
§Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
- This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
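The common construction path resolves a shared SdkConfig with the aws-config crate, as in the crate-level example. This sketch assumes aws-config is a dependency and that credentials and a region are resolvable from the environment.

```rust
// Sketch: constructing the client from a shared SdkConfig.
// Requires the aws-config crate; enabling the behavior-version-latest
// Cargo feature avoids the BehaviorVersion panic described above.
async fn make_client() -> aws_sdk_neptunedata::Client {
    let sdk_config = aws_config::load_from_env().await;
    aws_sdk_neptunedata::Client::new(&sdk_config)
}
```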
§Trait Implementations
§Auto Trait Implementations
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
§Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.