Struct aws_sdk_sagemaker::model::DataProcessing
#[non_exhaustive]
pub struct DataProcessing { /* private fields */ }
The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
Implementations

impl DataProcessing
pub fn input_filter(&self) -> Option<&str>
A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want SageMaker to pass the entire input dataset to the algorithm, accept the default value $.

Examples: "$", "$[1:]", "$.features"
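To make the slice semantics concrete, here is a minimal illustration (not SDK code; SageMaker performs the real filtering) of what the "$[1:]" example above selects from a JSON-array record:

```rust
// Illustration only: "$[1:]" keeps every element of a JSON-array record
// from index 1 onward, dropping element 0 (for example, an ID column).
// This helper just mimics that slice on a row of string fields.
fn slice_from_1(record: &[&str]) -> Vec<String> {
    record.iter().skip(1).map(|s| s.to_string()).collect()
}

fn main() {
    let row = ["record-42", "0.1", "0.7", "0.2"]; // first field is an ID
    assert_eq!(slice_from_1(&row), vec!["0.1", "0.7", "0.2"]);
}
```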
pub fn output_filter(&self) -> Option<&str>
A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want SageMaker to store the entire input dataset in the output file, leave the default value, $. If you specify indexes that aren't within the dimension size of the joined dataset, you get an error.

Examples: "$", "$[0,5:]", "$['id','SageMakerOutput']"
pub fn join_source(&self) -> Option<&JoinSource>
Specifies the source of the data to join with the transformed data. The valid values are None and Input. The default value is None, which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input. You can specify OutputFilter as an additional filter to select a portion of the joined dataset and store it in the output file.

For JSON or JSONLines objects, such as a JSON array, SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput. The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, SageMaker creates a new JSON file. In the new JSON file, the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput.
For CSV data, SageMaker takes each row as a JSON array and joins the transformed data with the input by appending each transformed row to the end of the input. The joined data has the original input data followed by the transformed data and the output is a CSV file.
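The two join rules above can be sketched as follows. This is an illustration only, not SDK code: SageMaker performs the real join, and these helper names are ours, chosen to mirror the documented shapes.

```rust
// JSON rule: a key-value input object gains a "SageMakerOutput" attribute;
// any other JSON value is wrapped under "SageMakerInput"/"SageMakerOutput".
fn join_json(input: &str, output: &str) -> String {
    let t = input.trim();
    if t.starts_with('{') && t.ends_with('}') {
        // Splice the prediction in as one more attribute of the object.
        format!("{},\"SageMakerOutput\":{}}}", &t[..t.len() - 1], output)
    } else {
        // Not a key-value object (e.g. a bare JSON array): wrap both parts.
        format!("{{\"SageMakerInput\":{input},\"SageMakerOutput\":{output}}}")
    }
}

// CSV rule: the transformed row is appended to the end of the input row.
fn join_csv(input_row: &str, transformed_row: &str) -> String {
    format!("{input_row},{transformed_row}")
}

fn main() {
    assert_eq!(
        join_json(r#"{"id":7}"#, "[0.93]"),
        r#"{"id":7,"SageMakerOutput":[0.93]}"#
    );
    assert_eq!(
        join_json("[1,2,3]", "[0.93]"),
        r#"{"SageMakerInput":[1,2,3],"SageMakerOutput":[0.93]}"#
    );
    assert_eq!(join_csv("7,1.0,2.0", "0.93"), "7,1.0,2.0,0.93");
}
```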
For information on how joining is applied, see Workflow for Associating Inferences with Input Records.
impl DataProcessing

pub fn builder() -> Builder
Creates a new builder-style object to manufacture DataProcessing.
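As a hedged sketch of the builder-style call shape: the real Builder lives in aws_sdk_sagemaker and exposes setters for the fields documented above; the minimal stand-in below reproduces only the pattern, so it can run on its own.

```rust
// Minimal stand-in for the SDK's builder, for illustration only.
#[derive(Debug, Default, Clone, PartialEq)]
struct DataProcessing {
    input_filter: Option<String>,
    output_filter: Option<String>,
    join_source: Option<String>, // the SDK uses a JoinSource enum here
}

#[derive(Default)]
struct Builder {
    inner: DataProcessing,
}

impl Builder {
    fn input_filter(mut self, v: impl Into<String>) -> Self {
        self.inner.input_filter = Some(v.into());
        self
    }
    fn output_filter(mut self, v: impl Into<String>) -> Self {
        self.inner.output_filter = Some(v.into());
        self
    }
    fn join_source(mut self, v: impl Into<String>) -> Self {
        self.inner.join_source = Some(v.into());
        self
    }
    fn build(self) -> DataProcessing {
        self.inner
    }
}

fn main() {
    // Join the original input with the transformed data, then keep only
    // the ID column and the prediction in the output file.
    let dp = Builder::default()
        .input_filter("$[1:]")
        .join_source("Input")
        .output_filter("$['id','SageMakerOutput']")
        .build();
    assert_eq!(dp.input_filter.as_deref(), Some("$[1:]"));
    assert_eq!(dp.join_source.as_deref(), Some("Input"));
}
```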
Trait Implementations

impl Clone for DataProcessing

fn clone(&self) -> DataProcessing
fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for DataProcessing
impl PartialEq<DataProcessing> for DataProcessing

fn eq(&self, other: &DataProcessing) -> bool
This method tests for self and other values to be equal, and is used by ==.