Struct aws_sdk_glue::types::SparkSql
#[non_exhaustive]
pub struct SparkSql {
    pub name: String,
    pub inputs: Vec<String>,
    pub sql_query: String,
    pub sql_aliases: Vec<SqlAlias>,
    pub output_schemas: Option<Vec<GlueSchema>>,
}
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. Therefore, they cannot be constructed in external crates using the Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
name: String
The name of the transform node.
inputs: Vec<String>
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
sql_query: String
A SQL query that must use Spark SQL syntax and return a single data set.
sql_aliases: Vec<SqlAlias>
A list of aliases. An alias lets you specify the name to use in the SQL for a given input. For example, suppose you have a data source named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can write:
select * from SqlName
and that retrieves data from MyDataSource. A construction sketch using this alias appears after the field list below.
output_schemas: Option<Vec<GlueSchema>>
Specifies the data schema for the SparkSQL transform.
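Because the struct is non-exhaustive, it is normally constructed through the generated fluent builder rather than struct literal syntax. The sketch below is a minimal, illustrative construction: it assumes the SDK's usual builder conventions (SparkSql::builder(), SqlAlias::builder(), appending setters for list fields, and a fallible build() because the required fields have no defaults); the node name and query are placeholders, and exact signatures may vary by aws-sdk-glue version.

use aws_sdk_glue::error::BuildError;
use aws_sdk_glue::types::{SparkSql, SqlAlias};

// Hypothetical helper: wires up a SparkSQL transform node that reads from
// an upstream node named "MyDataSource" under the table alias SqlName.
fn make_spark_sql_node() -> Result<SparkSql, BuildError> {
    // Map the input node "MyDataSource" to the name used in the SQL text.
    let alias = SqlAlias::builder()
        .from("MyDataSource")
        .alias("SqlName")
        .build()?;

    SparkSql::builder()
        .name("MySqlTransform")                // name of the transform node
        .inputs("MyDataSource")                // input identified by its node name
        .sql_query("select * from SqlName")    // Spark SQL; must return a single data set
        .sql_aliases(alias)
        .build()
}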
Implementations
impl SparkSql
pub fn inputs(&self) -> &[String]
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
pub fn sql_query(&self) -> &str
A SQL query that must use Spark SQL syntax and return a single data set.
pub fn sql_aliases(&self) -> &[SqlAlias]
A list of aliases. An alias lets you specify the name to use in the SQL for a given input. For example, suppose you have a data source named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can write:
select * from SqlName
and that retrieves data from MyDataSource.
pub fn output_schemas(&self) -> &[GlueSchema]
Specifies the data schema for the SparkSQL transform.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .output_schemas.is_none().
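A short sketch of reading these values back from an existing node (for example, one retrieved as part of a job definition); describe_node is a hypothetical helper, and the accessor behavior follows the notes above:

use aws_sdk_glue::types::SparkSql;

// Hypothetical helper: prints the parts of a SparkSql node.
fn describe_node(node: &SparkSql) {
    println!("node {} runs: {}", node.name, node.sql_query());

    for input in node.inputs() {
        println!("reads from input node: {input}");
    }
    for alias in node.sql_aliases() {
        println!("alias mapping: {alias:?}");
    }

    // The accessor yields an empty slice when nothing was sent; check the
    // public field to distinguish "not sent" from "sent but empty".
    if node.output_schemas.is_none() {
        println!("no output schema was provided");
    } else {
        println!("{} output schema(s)", node.output_schemas().len());
    }
}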