Struct odbc_api::buffers::TextRowSet
pub struct TextRowSet { /* fields omitted */ }
This row set binds a string buffer to each column. Each buffer is large enough to hold the maximum-length string representation of every element in the row set at once.
Example
//! A program executing a query and printing the result as csv to standard out. Requires
//! the `anyhow` and `csv` crates.
use anyhow::Error;
use odbc_api::{buffers::TextRowSet, Cursor, Environment};
use std::{
    ffi::CStr,
    io::{stdout, Write},
    path::PathBuf,
};

/// Maximum number of rows fetched with one row set. Fetching batches of rows is usually much
/// faster than fetching individual rows.
const BATCH_SIZE: usize = 5000;

fn main() -> Result<(), Error> {
    // Write csv to standard out
    let out = stdout();
    let mut writer = csv::Writer::from_writer(out);

    // We know this is going to be the only ODBC environment in the entire process, so this is
    // safe.
    let environment = unsafe { Environment::new() }?;

    // Connect using a DSN. Alternatively we could have used a connection string.
    let mut connection = environment.connect(
        "DataSourceName",
        "Username",
        "Password",
    )?;

    // Execute a one-off query without any parameters.
    match connection.execute("SELECT * FROM TableName", ())? {
        Some(cursor) => {
            // Write the column names to stdout
            let headline: Vec<String> = cursor.column_names()?.collect::<Result<_, _>>()?;
            writer.write_record(headline)?;

            // Use the schema in the cursor to initialize a text buffer large enough to hold the
            // largest possible strings for each column, up to an upper limit of 4 KiB.
            let mut buffers = TextRowSet::for_cursor(BATCH_SIZE, &cursor, Some(4096))?;
            // Bind the buffer to the cursor. It is now being filled with every call to fetch.
            let mut row_set_cursor = cursor.bind_buffer(&mut buffers)?;

            // Iterate over batches
            while let Some(batch) = row_set_cursor.fetch()? {
                // Within a batch, iterate over every row
                for row_index in 0..batch.num_rows() {
                    // Within a row, iterate over every column
                    let record = (0..batch.num_cols()).map(|col_index| {
                        batch
                            .at(col_index, row_index)
                            .unwrap_or(&[])
                    });
                    // Writes the row as csv
                    writer.write_record(record)?;
                }
            }
        }
        None => {
            eprintln!("Query came back empty. No output has been created.");
        }
    }

    Ok(())
}
Implementations
pub fn for_cursor(
    batch_size: usize,
    cursor: &impl Cursor,
    max_str_len: Option<usize>
) -> Result<TextRowSet, Error>
The resulting text buffer is not in any way tied to the cursor, other than that its buffer sizes are tailor-fitted to the result set the cursor is iterating over.
Parameters
batch_size: The maximum number of rows the buffer is able to hold.
cursor: Used to query the display size for each column of the row set. For character data the length in characters is multiplied by 4 in order to have enough space for 4-byte UTF-8 characters. This is a pessimization for some data sources (e.g. SQLite 3) which do interpret the size of a VARCHAR(5) column as 5 bytes rather than 5 characters.
max_str_len: Some queries make it hard to estimate a sensible upper bound and sometimes drivers are just not that good at it. This argument allows you to specify an upper bound for the length of character data; see the sketch below.
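For contrast with the example above, which caps character data at 4 KiB, here is a minimal sketch of the uncapped call; `cursor` and `BATCH_SIZE` refer to the names used in that example.

// Without an upper bound, the buffer sizes are derived entirely from the display
// sizes reported by the driver, which may over-allocate for very wide columns.
let buffers = TextRowSet::for_cursor(BATCH_SIZE, &cursor, None)?;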
Creates a text buffer large enough to hold batch_size rows, with one column for each item of max_str_lengths, of the respective size.
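A minimal sketch of this constructor, assuming it is exposed as `TextRowSet::new` and accepts the batch size together with one maximum string length per column; the name and exact argument type are assumptions and should be checked against the actual signature.

// Assumption: the constructor is `TextRowSet::new(batch_size, max_str_lengths)`.
// Two columns: one holding up to 255 bytes per value, one holding up to 32 bytes.
let mut buffer = TextRowSet::new(100, [255, 32].iter().copied());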
Access the element at the specified position in the row set.
Access the element at the specified position in the row set.
Indicator value at the specified position. Useful to detect truncation of data.
Example
use odbc_api::buffers::{Indicator, TextRowSet};

fn is_truncated(buffer: &TextRowSet, col_index: usize, row_index: usize) -> bool {
    match buffer.indicator_at(col_index, row_index) {
        // There is no value, therefore there is no value that could fail to fit into the
        // column buffer.
        Indicator::Null => false,
        // The value did not fit into the column buffer; we do not even know by how much.
        Indicator::NoTotal => true,
        Indicator::Length(total_length) => {
            // If the maximum string length is shorter than the value's total length, the value
            // has been truncated to fit into the buffer.
            buffer.max_len(col_index) < total_length
        }
    }
}
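One possible way to use the helper above, reusing the `batch` reference from the csv example at the top of this page; the loop relies only on `num_rows`, `num_cols` and the function defined here.

// Warn about any cell in the current batch whose text was cut off.
for row_index in 0..batch.num_rows() {
    for col_index in 0..batch.num_cols() {
        if is_truncated(batch, col_index, row_index) {
            eprintln!("Value at column {}, row {} was truncated.", col_index, row_index);
        }
    }
}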
Maximum length in bytes of elements in a column.
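A small sketch of this accessor, again assuming `batch` is a fetched &TextRowSet as in the example above.

// Print the byte capacity reserved for each column of the row set.
for col_index in 0..batch.num_cols() {
    println!("Column {} holds at most {} bytes per value.", col_index, batch.max_len(col_index));
}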
Takes one element from the iterator for each internal column buffer and appends it to the end of the buffer. Should the buffer not be large enough to hold the element, it will be reallocated with 1.2 times its size.
This method panics if one tries to insert elements beyond the batch size. It will also panic if row does not contain at least one item for each internal column buffer.
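A hedged sketch of filling a row set by hand, reusing the `buffer` from the constructor sketch further up; the assumption here is that append consumes one `Option<&[u8]>` per column, with `None` standing in for NULL.

// Each call to `append` adds one row; every column receives exactly one item.
buffer.append([Some(&b"Graphics card"[..]), Some(&b"850.00"[..])].iter().copied());
buffer.append([Some(&b"Sandwich"[..]), None].iter().copied());
// Appending more rows than the batch size, or passing fewer items than there are
// columns, panics as documented above.

Once filled this way, such a buffer is typically used to feed a bulk operation; the trait implementations listed below are what let it report its parameter set size and bind type to a statement.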
Trait Implementations
Number of values per parameter in the collection. This can be different from the maximum batch size a buffer may be able to hold. Returning 0 will cause the query not to be executed. Read more
Declares the bind type of the row set buffer. 0 means a columnar binding is used. Any non-zero number is interpreted as the size of a single row in a row-wise binding style. Read more
The batch size for bulk cursors, if retrieving many rows at once.
Mutable reference to the number of fetched rows. Read more