Please Note: The SDK is currently in Developer Preview and is intended strictly for feedback purposes only. Do not use this SDK for production workloads.

AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data.

AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management.

AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
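The snippet below is a minimal sketch of the task-runner side of this exchange. It assumes a configured Client (see Getting Started below) and a placeholder worker group name; the fluent methods mirror the PollForTask and SetTaskStatus operations, and the actual task logic is omitted.

use aws_sdk_datapipeline as datapipeline;
use datapipeline::types::TaskStatus;

async fn run_one_task(client: &datapipeline::Client) -> Result<(), datapipeline::Error> {
    // Ask the web service for the next task assigned to this worker group
    // ("my-worker-group" is a placeholder).
    let polled = client
        .poll_for_task()
        .worker_group("my-worker-group")
        .send()
        .await?;

    if let Some(task) = polled.task_object() {
        if let Some(task_id) = task.task_id() {
            // ... perform the task here, reporting progress with
            // report_task_progress() for long-running work ...

            // Report the final result back to the web service.
            client
                .set_task_status()
                .task_id(task_id)
                .task_status(TaskStatus::Finished)
                .send()
                .await?;
        }
    }

    Ok(())
}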

Getting Started

Examples are available for many services and operations; check out the examples folder on GitHub.

The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add aws-sdk-datapipeline to your project, add the following to your Cargo.toml file:

[dependencies]
aws-config = "0.55.1"
aws-sdk-datapipeline = "0.26.0"
tokio = { version = "1", features = ["full"] }

Then in code, a client can be created with the following:

use aws_sdk_datapipeline as datapipeline;

#[tokio::main]
async fn main() -> Result<(), datapipeline::Error> {
    let config = aws_config::load_from_env().await;
    let client = datapipeline::Client::new(&config);

    // ... make some calls with the client

    Ok(())
}

See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.

Using the SDK

Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.

Getting Help

For questions, ideas, and general feedback, use the GitHub discussions on the awslabs/aws-sdk-rust repository; for bug reports and feature requests, open a GitHub issue there.

Crate Organization

The entry point for most customers will be Client, which exposes one method for each API offered by AWS Data Pipeline. The return value of each of these methods is a “fluent builder”, where the different inputs for that API are added by builder-style function call chaining, followed by calling send() to get a Future that will result in either a successful output or a SdkError.
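As a small sketch of this pattern (assuming the 0.26 release shown above, where list accessors return an Option), the following calls the ListPipelines API through its fluent builder:

use aws_sdk_datapipeline as datapipeline;

async fn print_pipelines(client: &datapipeline::Client) -> Result<(), datapipeline::Error> {
    // Fluent builder for the ListPipelines API; send() returns a Future that
    // resolves to either the output or an SdkError.
    let output = client.list_pipelines().send().await?;

    for pipeline in output.pipeline_id_list().unwrap_or_default() {
        println!("{:?} {:?}", pipeline.id(), pipeline.name());
    }

    Ok(())
}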

Some of these API inputs may be structs or enums to provide more complex structured information. These structs and enums live in types. There are some simpler types for representing data such as date times or binary blobs that live in primitives.
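For example (a small sketch of the import paths only), an enum such as TaskStatus comes from types, while a DateTime timestamp comes from primitives:

use aws_sdk_datapipeline::primitives::DateTime;
use aws_sdk_datapipeline::types::TaskStatus;

fn type_examples() {
    // A structured enum value from `types`, as used by SetTaskStatus.
    let _status = TaskStatus::Finished;
    // A timestamp from `primitives` (2020-01-01T00:00:00Z).
    let _start = DateTime::from_secs(1_577_836_800);
}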

All types required to configure a client via the Config struct live in config.
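A brief sketch of building a Config explicitly, for example to override the region; the region value is a placeholder and the rest of the configuration is taken from the environment:

use aws_sdk_datapipeline as datapipeline;
use datapipeline::config::Region;

async fn client_with_explicit_region() -> datapipeline::Client {
    let shared = aws_config::load_from_env().await;

    // Start from the shared configuration and override individual settings.
    let conf = datapipeline::config::Builder::from(&shared)
        .region(Region::new("us-west-2"))
        .build();

    datapipeline::Client::from_conf(conf)
}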

The operation module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
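For instance, the generated types for the ListPipelines API live under operation::list_pipelines and can be referenced directly in your own code (a small sketch):

use aws_sdk_datapipeline::operation::list_pipelines::{ListPipelinesError, ListPipelinesOutput};

// Use the per-operation output and error types in your own signatures.
fn summarize(result: &Result<ListPipelinesOutput, ListPipelinesError>) -> &'static str {
    match result {
        Ok(_) => "listed pipelines",
        Err(_) => "ListPipelines failed",
    }
}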

There is a top-level Error type that encompasses all the errors that the client can return. Any other error type can be converted to this Error type via the From trait.
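A short sketch of why this is convenient: each operation fails with its own SdkError type, but all of them convert into the top-level Error, so a single ? works across different calls (the pipeline id below is a placeholder):

use aws_sdk_datapipeline as datapipeline;

async fn poke_service(client: &datapipeline::Client) -> Result<(), datapipeline::Error> {
    // Two different operations, two different SdkError types; both convert
    // into datapipeline::Error via `From` when `?` is used.
    let _pipelines = client.list_pipelines().send().await?;
    let _details = client
        .describe_pipelines()
        .pipeline_ids("df-EXAMPLE123")
        .send()
        .await?;

    Ok(())
}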

The other modules within this crate are not required for normal usage.

Modules

  • client: Client for calling AWS Data Pipeline.
  • config: Configuration for AWS Data Pipeline.
  • endpoint: Endpoint resolution functionality.
  • error: Common errors and error handling utilities.
  • meta: Information about this crate.
  • middleware: Base Middleware Stack.
  • operation: All operations that this crate can perform.
  • primitives: Primitives such as Blob or DateTime used by other types.
  • types: Data structures used by operation inputs/outputs.

Structs

Enums

  • Error: All possible error types for this service.