Struct arrow_odbc::OdbcWriter
pub struct OdbcWriter<S> { /* private fields */ }
Inserts batches from an [arrow::record_batch::RecordBatchReader] into a database.
Implementations
impl<S> OdbcWriter<S> where S: AsStatementRef
pub fn new(
    row_capacity: usize,
    schema: &Schema,
    statement: Prepared<S>
) -> Result<Self, WriterError>
Construct a new ODBC writer using an already existing prepared statement. Usually you want to call a higher level constructor like Self::with_connection. Yet, this constructor is useful in two scenarios.
- The prepared statement is already constructed and you do not want to spend the time to prepare it again.
- You want to use the arrow arrays as array parameters for a statement, but that statement is not necessarily an INSERT statement with a simple one-to-one mapping of columns between table and arrow schema.
Parameters
- row_capacity: The number of rows sent to the database in each chunk, with the exception of the last chunk, which may be smaller.
- schema: Needs to have one column for each positional parameter of the statement and match the data which will be supplied to the instance later. Otherwise your code will panic.
- statement: A prepared statement whose SQL text representation contains one placeholder for each column. The order of the placeholders must correspond to the order of the columns in the schema.
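To illustrate the placeholder/column correspondence described above, the SQL text for such a prepared statement is typically derived from the column names in their schema order, with one positional `?` per column. A minimal sketch (the helper `insert_statement` and the table and column names are hypothetical, not part of this crate):

```rust
// Sketch: build the INSERT text expected for a statement passed to
// `OdbcWriter::new`. One `?` placeholder per column, in schema order.
fn insert_statement(table: &str, columns: &[&str]) -> String {
    let placeholders = vec!["?"; columns.len()].join(", ");
    format!(
        "INSERT INTO {} ({}) VALUES ({})",
        table,
        columns.join(", "),
        placeholders
    )
}

fn main() {
    let sql = insert_statement("Measurements", &["sensor_id", "value"]);
    println!("{}", sql); // INSERT INTO Measurements (sensor_id, value) VALUES (?, ?)
}
```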
pub fn write_all(
    &mut self,
    reader: impl Iterator<Item = Result<RecordBatch, ArrowError>>
) -> Result<(), WriterError>
Consumes all the batches in the record batch reader and sends them chunk by chunk to the database.
pub fn write_batch(
    &mut self,
    record_batch: &RecordBatch
) -> Result<(), WriterError>
Consumes a single batch and sends it chunk by chunk to the database. The last batch may not be consumed until Self::flush is called.
pub fn flush(&mut self) -> Result<(), WriterError>
The number of rows in an individual record batch does not necessarily match the capacity of the buffers owned by this writer. Therefore records are sometimes not sent to the database immediately; instead we wait for the buffers to be filled while reading the next batch. Once we reach the last batch, however, there is no “next batch” anymore. In that case we call this method in order to send the remainder of the records to the database as well.
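The buffering behaviour described above can be illustrated with a small standalone simulation. This is not the crate's implementation; `ChunkedSender` is a hypothetical stand-in that only tracks row counts:

```rust
// Illustration of why `flush` is needed: rows are staged in a
// fixed-capacity buffer and only sent once it is full, so a trailing
// partial chunk must be flushed explicitly.
struct ChunkedSender {
    capacity: usize,
    staged: usize,
    sent_chunks: Vec<usize>,
}

impl ChunkedSender {
    fn new(capacity: usize) -> Self {
        Self { capacity, staged: 0, sent_chunks: Vec::new() }
    }

    // Analogous to `write_batch`: stage rows, sending each chunk as it fills.
    fn write_rows(&mut self, mut rows: usize) {
        while rows > 0 {
            let take = rows.min(self.capacity - self.staged);
            self.staged += take;
            rows -= take;
            if self.staged == self.capacity {
                self.sent_chunks.push(self.staged);
                self.staged = 0;
            }
        }
    }

    // Analogous to `flush`: send whatever remains staged.
    fn flush(&mut self) {
        if self.staged > 0 {
            self.sent_chunks.push(self.staged);
            self.staged = 0;
        }
    }
}

fn main() {
    let mut sender = ChunkedSender::new(100);
    sender.write_rows(250); // two full chunks sent, 50 rows left staged
    sender.flush();         // the trailing 50 rows are only sent now
    println!("{:?}", sender.sent_chunks); // [100, 100, 50]
}
```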
impl<'env> OdbcWriter<StatementConnection<'env>>
pub fn from_connection(
    connection: Connection<'env>,
    schema: &Schema,
    table_name: &str,
    row_capacity: usize
) -> Result<Self, WriterError>
A writer which takes ownership of the connection and inserts the given schema into a table with matching column names.
Note: If table or column names are derived from user input, be sure to sanitize the input in order to prevent SQL injection attacks.
impl<'o> OdbcWriter<StatementImpl<'o>>
pub fn with_connection(
    connection: &'o Connection<'o>,
    schema: &Schema,
    table_name: &str,
    row_capacity: usize
) -> Result<Self, WriterError>
A writer which borrows the connection and inserts the given schema into a table with matching column names.
Note: If table or column names are derived from user input, be sure to sanitize the input in order to prevent SQL injection attacks.
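End-to-end usage might look like the following sketch. It is not runnable as-is: it assumes an installed ODBC driver manager and a reachable data source, and the connection string, table name, and row capacity are placeholder values, not defaults of this crate.

```rust
use arrow::record_batch::RecordBatchReader;
use arrow_odbc::{
    odbc_api::{ConnectionOptions, Environment},
    OdbcWriter,
};

// Sketch: insert all batches from a reader into a table whose column
// names match the reader's schema. Connection string and table name
// are hypothetical.
fn insert_batches(
    reader: impl RecordBatchReader,
) -> Result<(), Box<dyn std::error::Error>> {
    let env = Environment::new()?;
    let conn = env.connect_with_connection_string(
        "Driver={Example Driver};Server=example;",
        ConnectionOptions::default(),
    )?;
    let schema = reader.schema();
    // Borrow the connection; rows are sent in chunks of up to 1000.
    let mut writer = OdbcWriter::with_connection(&conn, &schema, "MyTable", 1000)?;
    writer.write_all(reader)?;
    Ok(())
}
```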