//! This module provides the building blocks for creating IMPORT and EXPORT jobs.
//! An ETL job consists of a query that is executed concurrently with a set of ETL
//! workers, both of which are obtained by building the job. The data format is
//! always CSV, but the builders allow some customization, such as the row or
//! column separator.
//!
//! The query execution is driven by a future obtained from building the job and
//! relies on the workers (also obtained from building the job) to make progress
//! and complete. The future resolves to an [`ExaQueryResult`], which provides the
//! number of affected rows.
//!
//! IMPORT jobs are constructed through the [`ImportBuilder`] type and will generate workers of type
//! [`ExaImport`]. The workers can be used to write data to the database and the query execution
//! ends when all the workers have been closed (by explicitly calling `close().await`).
//!
//! EXPORT jobs are constructed through the [`ExportBuilder`] type and will generate workers of type
//! [`ExaExport`]. The workers can be used to read data from the database and the query execution
//! ends when all the workers receive EOF. They can be dropped afterwards.
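//!
//! For instance, building an EXPORT job might look like this (the table name is a
//! placeholder):
//!
//! ```rust,no_run
//! use std::env;
//!
//! use sqlx_exasol::{etl::*, *};
//!
//! # async {
//! #
//! let pool = ExaPool::connect(&env::var("DATABASE_URL").unwrap()).await?;
//! let mut con = pool.acquire().await?;
//!
//! let (query_fut, readers) = ExportBuilder::new(ExportSource::Table("SOME_TABLE"))
//!     .build(&mut *con)
//!     .await?;
//!
//! // concurrently read from the readers and await the query future
//! #
//! # let res: anyhow::Result<()> = Ok(());
//! # res
//! # };
//! ```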
//!
//! ETL jobs can use TLS, compression, or both, consistent with the
//! [`ExaConnection`] they are executed on: if the connection uses TLS or
//! compression, so will the ETL job.
//!
//! **NOTE:** Trying to run ETL jobs with TLS without having enabled an ETL TLS
//! feature flag results in a runtime error, while enabling more than one ETL TLS
//! feature results in a compile-time error.
//!
//! # Atomicity
//!
//! `IMPORT` jobs are not atomic by themselves. If an error occurs during the data
//! ingestion, some of the data might already have been sent and written to the
//! database. However, since `IMPORT` is fundamentally just a query, it *can* be
//! transactional. Therefore, beginning a transaction and passing it to the
//! [`ImportBuilder::build`] method will result in the import job needing to be
//! explicitly committed:
//!
//! ```rust,no_run
//! use std::env;
//!
//! use sqlx_exasol::{etl::*, *};
//!
//! # async {
//! #
//! let pool = ExaPool::connect(&env::var("DATABASE_URL").unwrap()).await?;
//! let mut con = pool.acquire().await?;
//! let mut tx = con.begin().await?;
//!
//! let (query_fut, writers) = ImportBuilder::new("SOME_TABLE").build(&mut *tx).await?;
//!
//! // concurrently use the writers and await the query future
//!
//! tx.commit().await?;
//! #
//! # let res: anyhow::Result<()> = Ok(());
//! # res
//! # };
//! ```
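//!
//! The "concurrently use the writers" step above is deliberately left open; a
//! sketch of one way to drive it, assuming `futures_util` for the `AsyncWrite`
//! extension methods and a placeholder CSV row, could be:
//!
//! ```rust,ignore
//! use futures_util::{try_join, AsyncWriteExt};
//!
//! let write_fut = async {
//!     for mut writer in writers {
//!         writer.write_all(b"1,some_value\n").await?;
//!         // Closing the writer signals that no more data will be sent.
//!         writer.close().await?;
//!     }
//!     Ok(())
//! };
//!
//! // The query only completes once all writers have been closed.
//! let (query_res, ()) = try_join!(query_fut, write_fut)?;
//! ```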
//!
//! # IMPORTANT
//!
//! Exasol doesn't really like it when [`ExaImport`] workers are closed without ever sending any
//! data. The underlying socket connection to Exasol will be closed, and Exasol will just try to
//! open a new one. However, workers only listen on the designated sockets once, so the connection
//! will be refused (even if it weren't, the cycle might just repeat since we'd still be sending no
//! data).
//!
//! Therefore, it is wise not to build IMPORT jobs with more workers than the
//! amount of data to be imported requires, especially if some workers would not
//! be written to at all.
//!
//! Additionally, Exasol expects all [`ExaExport`] workers to be read in their
//! entirety (until EOF is reached). Failing to do so will result in the query
//! execution returning an error. If, for some reason, you do not want to exhaust
//! the readers, be prepared to handle the error returned by the `EXPORT` query.
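//!
//! A sketch of exhausting a single reader, assuming `futures_util` for the
//! `AsyncRead` extension methods:
//!
//! ```rust,ignore
//! use futures_util::AsyncReadExt;
//!
//! let mut csv = String::new();
//! // Read until EOF so the EXPORT query can complete successfully.
//! reader.read_to_string(&mut csv).await?;
//! ```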
mod error;
mod export;
mod import;
mod non_tls;
mod row_separator;
#[cfg(any(feature = "etl_native_tls", feature = "etl_rustls"))]
mod tls;
mod traits;
use std::{
fmt::Write as _,
io::{Error as IoError, Result as IoResult},
net::{IpAddr, Ipv4Addr, SocketAddr, SocketAddrV4},
pin::Pin,
task::{ready, Context, Poll},
};
use arrayvec::ArrayString;
pub use export::{ExaExport, ExportBuilder, ExportSource};
use futures_core::future::BoxFuture;
use futures_io::{AsyncRead, AsyncWrite};
use hyper::rt;
pub use import::{ExaImport, ImportBuilder, Trim};
pub use row_separator::RowSeparator;
use sqlx_core::{error::Error as SqlxError, net::Socket};
use self::{
error::ExaEtlError,
non_tls::NonTlsSocketSpawner,
traits::{EtlJob, WithSocketMaker},
};
use super::websocket::socket::{ExaSocket, WithExaSocket};
use crate::{
command::ExaCommand,
responses::{QueryResult, Results},
ExaConnection, ExaQueryResult,
};
/// Special Exasol packet that enables tunneling.
/// Exasol responds with an internal address that can be used in the query.
const SPECIAL_PACKET: [u8; 12] = [2, 33, 33, 2, 1, 0, 0, 0, 1, 0, 0, 0];
/// Type of the future that executes the ETL job.
type JobFuture<'a> = BoxFuture<'a, Result<ExaQueryResult, SqlxError>>;
type SocketFuture = BoxFuture<'static, IoResult<ExaSocket>>;
type WithSocketFuture = BoxFuture<'static, Result<(SocketAddrV4, SocketFuture), SqlxError>>;
/// Builds an ETL job comprising a [`JobFuture`], which drives the execution
/// of the ETL query, and a vector of workers that perform the IO.
async fn build_etl<'a, 'c, T>(
job: &'a T,
con: &'c mut ExaConnection,
) -> Result<(JobFuture<'c>, Vec<T::Worker>), SqlxError>
where
T: EtlJob,
'c: 'a,
{
let ips = con.ws.get_hosts().await?;
let port = con.ws.socket_addr().port();
let with_tls = con.attributes().encryption_enabled;
let with_compression = job
.use_compression()
.unwrap_or(con.attributes().compression_enabled);
// Get the internal Exasol node addresses and the socket spawning futures
let (addrs, futures): (Vec<_>, Vec<_>) =
socket_spawners(job.num_workers(), ips, port, with_tls)
.await?
.into_iter()
.unzip();
// Construct and send query
let query = job.query(addrs, with_tls, with_compression);
let cmd = ExaCommand::new_execute(&query, &con.ws.attributes).try_into()?;
con.ws.send(cmd).await?;
// Create the ETL workers
let sockets = job.create_workers(futures, with_compression);
// Query execution driving future to be returned and awaited
// alongside the worker IO operations
let future = Box::pin(async move {
let query_res: QueryResult = con.ws.recv::<Results>().await?.into();
match query_res {
QueryResult::ResultSet { .. } => Err(IoError::from(ExaEtlError::ResultSetFromEtl))?,
QueryResult::RowCount { row_count } => Ok(ExaQueryResult::new(row_count)),
}
});
Ok((future, sockets))
}
/// Wrapper over [`_socket_spawners`] that handles the TLS / non-TLS feature gating.
async fn socket_spawners(
num: usize,
ips: Vec<IpAddr>,
port: u16,
with_tls: bool,
) -> Result<Vec<(SocketAddrV4, SocketFuture)>, SqlxError> {
let num_sockets = if num > 0 { num } else { ips.len() };
#[cfg(any(feature = "etl_native_tls", feature = "etl_rustls"))]
match with_tls {
true => _socket_spawners(tls::tls_with_socket_maker()?, num_sockets, ips, port).await,
false => _socket_spawners(NonTlsSocketSpawner, num_sockets, ips, port).await,
}
#[cfg(not(any(feature = "etl_native_tls", feature = "etl_rustls")))]
match with_tls {
true => Err(SqlxError::Tls("No ETL TLS feature set".into())),
false => _socket_spawners(NonTlsSocketSpawner, num_sockets, ips, port).await,
}
}
/// Creates a socket making future for each IP address provided.
/// The internal socket address of the corresponding Exasol node
/// is provided alongside the future, to be used in query generation.
async fn _socket_spawners<T>(
socket_spawner: T,
num_sockets: usize,
ips: Vec<IpAddr>,
port: u16,
) -> Result<Vec<(SocketAddrV4, SocketFuture)>, SqlxError>
where
T: WithSocketMaker,
{
let mut output = Vec::with_capacity(num_sockets);
for ip in ips.into_iter().take(num_sockets) {
let mut ip_buf = ArrayString::<50>::new_const();
write!(&mut ip_buf, "{ip}").expect("IP address should fit in 50 characters");
let wrapper = WithExaSocket(SocketAddr::new(ip, port));
let with_socket = socket_spawner.make_with_socket(wrapper);
let (addr, future) = sqlx_core::net::connect_tcp(&ip_buf, port, with_socket)
.await?
.await?;
output.push((addr, future));
}
Ok(output)
}
/// Behind the scenes, Exasol will import from / export to a file located on the
/// one-shot HTTP server we host on this socket.
///
/// The "file" will be referenced through a URL like <`http://10.25.0.2/0001.csv`>.
///
/// While I don't know the exact implementation details, I assume Exasol
/// does port forwarding to/from the socket we connect (the one in this function)
/// and a local socket it opens (which has the address used in the file).
///
/// This function is used to retrieve the internal IP of that local socket,
/// so we can construct the file name.
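///
/// A sketch of the parsing below, on a fabricated response buffer (the layout is
/// inferred from this implementation, not documented by Exasol):
///
/// ```
/// let mut buf = [0u8; 24];
/// // Port as little-endian at bytes 4..6, NUL-terminated IP string from byte 8.
/// buf[4..6].copy_from_slice(&8563u16.to_le_bytes());
/// buf[8..17].copy_from_slice(b"10.25.0.2");
///
/// let port = u16::from_le_bytes([buf[4], buf[5]]);
/// let ip: String = buf[8..]
///     .iter()
///     .take_while(|b| **b != b'\0')
///     .map(|b| char::from(*b))
///     .collect();
///
/// assert_eq!((ip.as_str(), port), ("10.25.0.2", 8563));
/// ```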
async fn get_etl_addr<S>(mut socket: S) -> Result<(S, SocketAddrV4), SqlxError>
where
S: Socket,
{
// Write special packet
let mut write_start = 0;
while write_start < SPECIAL_PACKET.len() {
let written = socket.write(&SPECIAL_PACKET[write_start..]).await?;
write_start += written;
}
// Read response buffer.
let mut buf = [0; 24];
let mut read_start = 0;
while read_start < buf.len() {
        let mut window = &mut buf[read_start..];
        let read = socket.read(&mut window).await?;
read_start += read;
}
// Parse address
let mut ip_buf = ArrayString::<16>::new_const();
buf[8..]
.iter()
.take_while(|b| **b != b'\0')
.for_each(|b| ip_buf.push(char::from(*b)));
let port = u16::from_le_bytes([buf[4], buf[5]]);
let ip = ip_buf
.parse::<Ipv4Addr>()
.map_err(ExaEtlError::from)
.map_err(IoError::from)?;
let address = SocketAddrV4::new(ip, port);
Ok((socket, address))
}
impl rt::Read for ExaSocket {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
mut buf: rt::ReadBufCursor<'_>,
) -> Poll<IoResult<()>> {
// SAFETY: The AsyncRead::poll_read call initializes and
// fills the provided buffer. We do however need to cast it
// to a mutable byte array first so the argument datatype matches.
unsafe {
let buffer: *mut [std::mem::MaybeUninit<u8>] = buf.as_mut();
let buffer = &mut *(buffer as *mut [u8]);
let n = ready!(AsyncRead::poll_read(self, cx, buffer))?;
buf.advance(n);
}
Poll::Ready(Ok(()))
}
}
impl rt::Write for ExaSocket {
fn poll_write(self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &[u8]) -> Poll<IoResult<usize>> {
AsyncWrite::poll_write(self, cx, buf)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<IoResult<()>> {
AsyncWrite::poll_flush(self, cx)
}
fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<IoResult<()>> {
AsyncWrite::poll_close(self, cx)
}
}