pub trait Statement: AsHandle {
fn as_sys(&self) -> HStmt;
unsafe fn bind_col(
&mut self,
column_number: u16,
target: &mut impl CDataMut
) -> SqlResult<()> { ... }
unsafe fn fetch(&mut self) -> SqlResult<()> { ... }
fn get_data(
&mut self,
col_or_param_num: u16,
target: &mut impl CDataMut
) -> SqlResult<()> { ... }
fn unbind_cols(&mut self) -> SqlResult<()> { ... }
unsafe fn set_num_rows_fetched(
&mut self,
num_rows: Option<&mut usize>
) -> SqlResult<()> { ... }
fn describe_col(
&self,
column_number: u16,
column_description: &mut ColumnDescription
) -> SqlResult<()> { ... }
unsafe fn exec_direct(&mut self, statement: &SqlText<'_>) -> SqlResult<()> { ... }
fn close_cursor(&mut self) -> SqlResult<()> { ... }
fn prepare(&mut self, statement: &SqlText<'_>) -> SqlResult<()> { ... }
unsafe fn execute(&mut self) -> SqlResult<()> { ... }
fn num_result_cols(&self) -> SqlResult<i16> { ... }
fn num_params(&self) -> SqlResult<u16> { ... }
unsafe fn set_row_array_size(&mut self, size: usize) -> SqlResult<()> { ... }
unsafe fn set_paramset_size(&mut self, size: usize) -> SqlResult<()> { ... }
unsafe fn set_row_bind_type(&mut self, row_size: usize) -> SqlResult<()> { ... }
fn set_metadata_id(&mut self, metadata_id: bool) -> SqlResult<()> { ... }
fn set_async_enable(&mut self, on: bool) -> SqlResult<()> { ... }
unsafe fn bind_input_parameter(
&mut self,
parameter_number: u16,
parameter: &impl HasDataType + CData + ?Sized
) -> SqlResult<()> { ... }
unsafe fn bind_parameter(
&mut self,
parameter_number: u16,
input_output_type: ParamType,
parameter: &mut impl CDataMut + HasDataType
) -> SqlResult<()> { ... }
unsafe fn bind_delayed_input_parameter(
&mut self,
parameter_number: u16,
parameter: &mut impl DelayedInput + HasDataType
) -> SqlResult<()> { ... }
fn is_unsigned_column(&self, column_number: u16) -> SqlResult<bool> { ... }
fn col_type(&self, column_number: u16) -> SqlResult<SqlDataType> { ... }
fn col_concise_type(&self, column_number: u16) -> SqlResult<SqlDataType> { ... }
fn col_octet_length(&self, column_number: u16) -> SqlResult<isize> { ... }
fn col_display_size(&self, column_number: u16) -> SqlResult<isize> { ... }
fn col_precision(&self, column_number: u16) -> SqlResult<isize> { ... }
fn col_scale(&self, column_number: u16) -> SqlResult<Len> { ... }
fn col_name(
&self,
column_number: u16,
buffer: &mut Vec<SqlChar>
) -> SqlResult<()> { ... }
unsafe fn numeric_col_attribute(
&self,
attribute: Desc,
column_number: u16
) -> SqlResult<Len> { ... }
fn reset_parameters(&mut self) -> SqlResult<()> { ... }
fn describe_param(
&self,
parameter_number: u16
) -> SqlResult<ParameterDescription> { ... }
fn param_data(&mut self) -> SqlResult<Option<Pointer>> { ... }
fn columns(
&mut self,
catalog_name: &SqlText<'_>,
schema_name: &SqlText<'_>,
table_name: &SqlText<'_>,
column_name: &SqlText<'_>
) -> SqlResult<()> { ... }
fn tables(
&mut self,
catalog_name: &SqlText<'_>,
schema_name: &SqlText<'_>,
table_name: &SqlText<'_>,
table_type: &SqlText<'_>
) -> SqlResult<()> { ... }
fn put_binary_batch(&mut self, batch: &[u8]) -> SqlResult<()> { ... }
fn row_count(&self) -> SqlResult<isize> { ... }
fn complete_async(
&mut self,
function_name: &'static str
) -> SqlResult<SqlResult<()>> { ... }
}
An ODBC statement handle. In this crate it is implemented by self::StatementImpl. In ODBC,
statements are used to execute queries and retrieve their results. Both parameter and result
buffers are bound to the statement and dereferenced during statement execution and while
fetching results.
The trait allows us to reason about statements without taking the lifetime of their connection into account. It also allows the trait to be implemented by a handle taking ownership of both the statement and the connection.
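The sketch below is not part of the crate; it only illustrates how code generic over this trait might look. It assumes a freshly allocated statement handle, and it assumes the into_result conversion used throughout the repository examples further down applies to any Statement implementation.

// Hypothetical helper, generic over any `Statement` implementation: prepare a
// query and report how many parameter markers (`?`) it contains.
// Assumes `Statement`, `SqlText`, `SqlResult` from this module and `Error` from
// the crate root are in scope.
fn count_placeholders(mut stmt: impl Statement, sql: &SqlText<'_>) -> Result<u16, Error> {
    stmt.prepare(sql).into_result(&stmt)?;
    stmt.num_params().into_result(&stmt)
}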
Required Methods
fn as_sys(&self) -> HStmt
Provided Methods
unsafe fn bind_col(
&mut self,
column_number: u16,
target: &mut impl CDataMut
) -> SqlResult<()>
Binds application data buffers to columns in the result set.
column_number: 0 is the bookmark column. It is not included in some result sets. All other columns are numbered starting with 1. It is an error to bind a higher-numbered column than there are columns in the result set. This error cannot be detected until the result set has been created, so it is returned by fetch, not bind_col.
target_type: The identifier of the C data type of the value buffer. When it is retrieving data from the data source with fetch, the driver converts the data to this type. When it sends data to the source, the driver converts the data from this type.
target_value: Pointer to the data buffer to bind to the column.
target_length: Length of target value in bytes. (Or for a single element in case of bulk aka. block fetching data.)
indicator: Buffer is going to hold length or indicator values.
Safety
It is the caller's responsibility to make sure the bound columns live until they are no longer bound.
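A minimal sketch, not part of the crate: bind a caller supplied buffer to the first column and fetch a single row into it. It assumes the statement already has a result set (cursor state) and reuses the into_result and into_result_bool conversions seen in the repository examples below.

// Hypothetical helper: `target` must stay alive and unmoved while bound, which
// is guaranteed here because it is unbound again before the function returns.
unsafe fn fetch_first_into(
    mut stmt: impl Statement,
    target: &mut impl CDataMut,
) -> Result<bool, Error> {
    stmt.bind_col(1, target).into_result(&stmt)?;
    // `fetch` dereferences the column pointer bound above.
    let row_available = stmt.fetch().into_result_bool(&stmt)?;
    stmt.unbind_cols().into_result(&stmt)?;
    Ok(row_available)
}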
unsafe fn fetch(&mut self) -> SqlResult<()>
Returns the next row set in the result set.
It can be called only while a result set exists: I.e., after a call that creates a result
set and before the cursor over that result set is closed. If any columns are bound, it
returns the data in those columns. If the application has specified a pointer to a row
status array or a buffer in which to return the number of rows fetched, fetch also returns
this information. Calls to fetch can be mixed with calls to fetch_scroll.
Safety
Fetch dereferences bound column pointers.
Examples found in repository
fn next_row(&mut self) -> Result<Option<CursorRow<'_>>, Error> {
let row_available = unsafe {
self.as_stmt_ref()
.fetch()
.into_result_bool(&self.as_stmt_ref())?
};
let ret = if row_available {
Some(unsafe { CursorRow::new(self.as_stmt_ref()) })
} else {
None
};
Ok(ret)
}
/// Binds this cursor to a buffer holding a row set.
fn bind_buffer<B>(self, row_set_buffer: B) -> Result<BlockCursor<Self, B>, Error>
where
Self: Sized,
B: RowSetBuffer;
}
/// An individual row of a result set. See [`crate::Cursor::next_row`].
pub struct CursorRow<'s> {
statement: StatementRef<'s>,
}
impl<'s> CursorRow<'s> {
/// # Safety
///
/// `statement` must be in a cursor state.
unsafe fn new(statement: StatementRef<'s>) -> Self {
CursorRow { statement }
}
}
impl<'s> CursorRow<'s> {
/// Fills a suitable target buffer with a field from the current row of the result set. This
/// method drains the data from the field. It can be called repeatedly if not all the data
/// fits into the output buffer at once. It should not be called repeatedly to fetch the same
/// value twice. Column index starts at `1`.
pub fn get_data(
&mut self,
col_or_param_num: u16,
target: &mut (impl CElement + CDataMut),
) -> Result<(), Error> {
self.statement
.get_data(col_or_param_num, target)
.into_result(&self.statement)
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INDICATOR_VARIABLE_REQUIRED_BUT_NOT_SUPPLIED {
Error::UnableToRepresentNull(record)
} else {
Error::Diagnostics { record, function }
}
})
}
/// Retrieves arbitrary large character data from the row and stores it in the buffer. Column
/// index starts at `1`.
///
/// # Return
///
/// `true` indicates that the value has not been `NULL` and the value has been placed in `buf`.
/// `false` indicates that the value is `NULL`. The buffer is cleared in that case.
pub fn get_text(&mut self, col_or_param_num: u16, buf: &mut Vec<u8>) -> Result<bool, Error> {
// Utilize all of the allocated buffer. We must make sure buffer can at least hold the
// terminating zero. We do a bit more than that though, to avoid too many repeated calls to
// get_data.
buf.resize(max(256, buf.capacity()), 0);
// We repeatedly fetch data and add it to the buffer. The buffer length is therefore the
// accumulated value size. This variable keeps track of the number of bytes we added with
// the next call to get_data.
let mut fetch_size = buf.len();
let mut target = VarCharSliceMut::from_buffer(buf.as_mut_slice(), Indicator::Null);
// Fetch binary data into buffer.
self.get_data(col_or_param_num, &mut target)?;
let not_null = loop {
match target.indicator() {
// Value is `NULL`. We are done here.
Indicator::Null => {
buf.clear();
break false;
}
// We do not know how large the value is. Let's fetch the data with repeated calls
// to get_data.
Indicator::NoTotal => {
let old_len = buf.len();
// Use an exponential strategy for increasing buffer size.
buf.resize(old_len * 2, 0);
let buf_extend = &mut buf[(old_len - 1)..];
fetch_size = buf_extend.len();
target = VarCharSliceMut::from_buffer(buf_extend, Indicator::Null);
self.get_data(col_or_param_num, &mut target)?;
}
// We did get the complete value, including the terminating zero. Let's resize the
// buffer to match the retrieved value exactly (excluding terminating zero).
Indicator::Length(len) if len < fetch_size => {
// Since the indicator refers to value length without terminating zero, this
// also implicitly drops the terminating zero at the end of the buffer.
let shrink_by = fetch_size - len;
buf.resize(buf.len() - shrink_by, 0);
break true;
}
// We did not get all of the value in one go, but the data source has been friendly
// enough to tell us how much is missing.
Indicator::Length(len) => {
let still_missing = len - fetch_size + 1;
let old_len = buf.len();
buf.resize(old_len + still_missing, 0);
let buf_extend = &mut buf[(old_len - 1)..];
fetch_size = buf_extend.len();
target = VarCharSliceMut::from_buffer(buf_extend, Indicator::Null);
self.get_data(col_or_param_num, &mut target)?;
}
}
};
Ok(not_null)
}
/// Retrieves arbitrary large binary data from the row and stores it in the buffer. Column index
/// starts at `1`.
///
/// # Return
///
/// `true` indicates that the value has not been `NULL` and the value has been placed in `buf`.
/// `false` indicates that the value is `NULL`. The buffer is cleared in that case.
pub fn get_binary(&mut self, col_or_param_num: u16, buf: &mut Vec<u8>) -> Result<bool, Error> {
// Utilize all of the allocated buffer. Make sure buffer can at least hold one element.
buf.resize(max(1, buf.capacity()), 0);
// We repeatedly fetch data and add it to the buffer. The buffer length is therefore the
// accumulated value size. This variable keeps track of the number of bytes we added with
// the current call to get_data.
let mut fetch_size = buf.len();
let mut target = VarBinarySliceMut::from_buffer(buf.as_mut_slice(), Indicator::Null);
// Fetch binary data into buffer.
self.get_data(col_or_param_num, &mut target)?;
let not_null = loop {
match target.indicator() {
// Value is `NULL`. We are done here.
Indicator::Null => {
buf.clear();
break false;
}
// We do not know how large the value is. Let's fetch the data with repeated calls
// to get_data.
Indicator::NoTotal => {
let old_len = buf.len();
// Use an exponential strategy for increasing buffer size.
buf.resize(old_len * 2, 0);
let buf_extend = &mut buf[old_len..];
fetch_size = buf_extend.len();
target = VarBinarySliceMut::from_buffer(buf_extend, Indicator::Null);
self.get_data(col_or_param_num, &mut target)?;
}
// We did get the complete value, including the terminating zero. Let's resize the
// buffer to match the retrieved value exactly (excluding terminating zero).
Indicator::Length(len) if len <= fetch_size => {
let shrink_by = fetch_size - len;
buf.resize(buf.len() - shrink_by, 0);
break true;
}
// We did not get all of the value in one go, but the data source has been friendly
// enough to tell us how much is missing.
Indicator::Length(len) => {
let still_missing = len - fetch_size;
let old_len = buf.len();
buf.resize(old_len + still_missing, 0);
let buf_extend = &mut buf[old_len..];
fetch_size = buf_extend.len();
target = VarBinarySliceMut::from_buffer(buf_extend, Indicator::Null);
self.get_data(col_or_param_num, &mut target)?;
}
}
};
Ok(not_null)
}
}
/// Cursors are used to process and iterate the result sets returned by executing queries. Created
/// by either a prepared query or direct execution. Usually utilized through the [`crate::Cursor`]
/// trait.
pub struct CursorImpl<Stmt: AsStatementRef> {
/// A statement handle in cursor mode.
statement: Stmt,
}
impl<S> Drop for CursorImpl<S>
where
S: AsStatementRef,
{
fn drop(&mut self) {
let mut stmt = self.statement.as_stmt_ref();
if let Err(e) = stmt.close_cursor().into_result(&stmt) {
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error closing cursor: {:?}", e)
}
}
}
}
impl<S> AsStatementRef for CursorImpl<S>
where
S: AsStatementRef,
{
fn as_stmt_ref(&mut self) -> StatementRef<'_> {
self.statement.as_stmt_ref()
}
}
impl<S> ResultSetMetadata for CursorImpl<S> where S: AsStatementRef {}
impl<S> Cursor for CursorImpl<S>
where
S: AsStatementRef,
{
fn bind_buffer<B>(mut self, mut row_set_buffer: B) -> Result<BlockCursor<Self, B>, Error>
where
B: RowSetBuffer,
{
let stmt = self.statement.as_stmt_ref();
unsafe {
bind_row_set_buffer_to_statement(stmt, &mut row_set_buffer)?;
}
Ok(BlockCursor::new(row_set_buffer, self))
}
}
impl<S> CursorImpl<S>
where
S: AsStatementRef,
{
/// Users of this library are encouraged not to call this constructor directly but rather invoke
/// [`crate::Connection::execute`] or [`crate::Prepared::execute`] to get a cursor and utilize
/// it using the [`crate::Cursor`] trait. This method is public so users with an understanding
/// of the raw ODBC C-API have a way to create a cursor after they have left the safety rails of
/// the Rust type system, in order to implement a use case not yet covered by the safe
/// abstractions within this crate.
///
/// # Safety
///
/// `statement` must be in Cursor state, for the invariants of this type to hold.
pub unsafe fn new(statement: S) -> Self {
Self { statement }
}
pub(crate) fn as_sys(&mut self) -> HStmt {
self.as_stmt_ref().as_sys()
}
}
/// A row set buffer binds row- or column-wise buffers to a cursor in order to fill them with row
/// sets with each call to fetch.
///
/// # Safety
///
/// Implementers of this trait must ensure that every pointer bound in `bind_to_cursor` stays valid
/// even if an instance is moved in memory. Bound members should therefore likely be references
/// themselves. To bind stack allocated buffers it is recommended to implement this trait on the
/// reference type instead.
pub unsafe trait RowSetBuffer {
/// Declares the bind type of the row set buffer. `0` means a columnar binding is used. Any
/// non-zero number is interpreted as the size of a single row in a row-wise binding style.
fn bind_type(&self) -> usize;
/// The batch size for bulk cursors, if retrieving many rows at once.
fn row_array_size(&self) -> usize;
/// Mutable reference to the number of fetched rows.
///
/// # Safety
///
/// Implementations of this method must take care that the returned reference stays valid, even
/// if `self` should be moved.
fn mut_num_fetch_rows(&mut self) -> &mut usize;
/// Binds the buffer either column or row wise to the cursor.
///
/// # Safety
///
/// It is the implementation's responsibility to ensure that all bound buffers are valid until
/// unbound or the statement handle is deleted.
unsafe fn bind_colmuns_to_cursor(&mut self, cursor: StatementRef<'_>) -> Result<(), Error>;
}
unsafe impl<T: RowSetBuffer> RowSetBuffer for &mut T {
fn bind_type(&self) -> usize {
(**self).bind_type()
}
fn row_array_size(&self) -> usize {
(**self).row_array_size()
}
fn mut_num_fetch_rows(&mut self) -> &mut usize {
(*self).mut_num_fetch_rows()
}
unsafe fn bind_colmuns_to_cursor(&mut self, cursor: StatementRef<'_>) -> Result<(), Error> {
(*self).bind_colmuns_to_cursor(cursor)
}
}
/// In order to save on network overhead, it is recommended to use block cursors instead of fetching
/// values individually. This can greatly reduce the time applications need to fetch data. You can
/// create a block cursor by binding preallocated memory to a cursor using [`Cursor::bind_buffer`].
/// A block cursor saves a lot of IO overhead by fetching an entire set of rows (called *rowset*)
/// at once into the buffer bound to it. Reusing the same buffer for each rowset also saves on
/// allocations. A challenge with using block cursors might be database schemas with columns whose
/// individual fields can be very large. In these cases developers can choose to:
///
/// 1. Reserve less memory for each individual field than the schema indicates and decide on a
/// sensible upper bound themselves. This risks truncation of values though, if they are larger
/// than the upper bound. Using [`BlockCursor::fetch_with_truncation_check`] instead of
/// [`Cursor::next_row`] your application can detect these truncations. This is usually the best
/// choice, since individual fields in a table rarely actually take up several GiB of memory.
/// 2. Calculate the number of rows dynamically based on the maximum expected row size.
/// [`crate::buffers::BufferDesc::bytes_per_row`] can be helpful with this task.
/// 3. Not use block cursors and fetch rows slowly with high IO overhead, calling
/// [`CursorRow::get_data`] and [`CursorRow::get_text`] to fetch large individual values.
///
/// See: <https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/block-cursors>
pub struct BlockCursor<C: AsStatementRef, B> {
buffer: B,
cursor: C,
}
impl<C, B> BlockCursor<C, B>
where
C: Cursor,
{
fn new(buffer: B, cursor: C) -> Self {
Self { buffer, cursor }
}
/// Fills the bound buffer with the next row set.
///
/// # Return
///
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
///
/// ```
/// use odbc_api::{buffers::TextRowSet, Cursor};
///
/// fn print_all_values(mut cursor: impl Cursor) {
/// let batch_size = 100;
/// let max_string_len = 4000;
/// let buffer = TextRowSet::for_cursor(batch_size, &mut cursor, Some(4000)).unwrap();
/// let mut cursor = cursor.bind_buffer(buffer).unwrap();
/// // Iterate over batches
/// while let Some(batch) = cursor.fetch().unwrap() {
/// // ... print values in batch ...
/// }
/// }
/// ```
pub fn fetch(&mut self) -> Result<Option<&B>, Error> {
self.fetch_with_truncation_check(false)
}
/// Fills the bound buffer with the next row set. Should `error_for_truncation` be `true` and any
/// diagnostic indicate truncation of a value an error is returned.
///
/// # Return
///
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
///
/// Call this method to find out whether there are any truncated values in the batch, without
/// inspecting all its rows and columns.
///
/// ```
/// use odbc_api::{buffers::TextRowSet, Cursor};
///
/// fn print_all_values(mut cursor: impl Cursor) {
/// let batch_size = 100;
/// let max_string_len = 4000;
/// let buffer = TextRowSet::for_cursor(batch_size, &mut cursor, Some(4000)).unwrap();
/// let mut cursor = cursor.bind_buffer(buffer).unwrap();
/// // Iterate over batches
/// while let Some(batch) = cursor.fetch_with_truncation_check(true).unwrap() {
/// // ... print values in batch ...
/// }
/// }
/// ```
pub fn fetch_with_truncation_check(
&mut self,
error_for_truncation: bool,
) -> Result<Option<&B>, Error> {
let mut stmt = self.cursor.as_stmt_ref();
unsafe {
let result = stmt.fetch();
let has_row = error_handling_for_fetch(result, stmt, error_for_truncation)?;
Ok(has_row.then_some(&self.buffer))
}
}
}
impl<C, B> Drop for BlockCursor<C, B>
where
C: AsStatementRef,
{
fn drop(&mut self) {
unsafe {
let mut stmt = self.cursor.as_stmt_ref();
if let Err(e) = stmt
.unbind_cols()
.into_result(&stmt)
.and_then(|()| stmt.set_num_rows_fetched(None).into_result(&stmt))
{
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error unbinding columns: {:?}", e)
}
}
}
}
}
/// The asynchronous sibling of [`CursorImpl`]. Use this to fetch results in asynchronous code.
///
/// Like [`CursorImpl`] this is an ODBC statement handle in cursor state. However unlike its
/// synchronous sibling this statement handle is in asynchronous polling mode.
pub struct CursorPolling<Stmt: AsStatementRef> {
/// A statement handle in cursor state with asynchronous mode enabled.
statement: Stmt,
}
impl<S> CursorPolling<S>
where
S: AsStatementRef,
{
/// Users of this library are encouraged not to call this constructor directly. This method is
/// public so users with an understanding of the raw ODBC C-API have a way to create an
/// asynchronous cursor after they have left the safety rails of the Rust type system, in order to
/// implement a use case not yet covered by the safe abstractions within this crate.
///
/// # Safety
///
/// `statement` must be in Cursor state, for the invariants of this type to hold. Preferably,
/// `statement` should also have asynchronous mode enabled, otherwise constructing a synchronous
/// [`CursorImpl`] is more suitable.
pub unsafe fn new(statement: S) -> Self {
Self { statement }
}
/// Binds this cursor to a buffer holding a row set.
pub fn bind_buffer<B>(
mut self,
mut row_set_buffer: B,
) -> Result<BlockCursorPolling<Self, B>, Error>
where
B: RowSetBuffer,
{
let stmt = self.statement.as_stmt_ref();
unsafe {
bind_row_set_buffer_to_statement(stmt, &mut row_set_buffer)?;
}
Ok(BlockCursorPolling::new(row_set_buffer, self))
}
}
impl<S> AsStatementRef for CursorPolling<S>
where
S: AsStatementRef,
{
fn as_stmt_ref(&mut self) -> StatementRef<'_> {
self.statement.as_stmt_ref()
}
}
impl<S> Drop for CursorPolling<S>
where
S: AsStatementRef,
{
fn drop(&mut self) {
let mut stmt = self.statement.as_stmt_ref();
if let Err(e) = stmt.close_cursor().into_result(&stmt) {
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error closing cursor: {:?}", e)
}
}
}
}
/// Asynchronously iterates in blocks (called row sets) over a result set, filling a buffer with
/// many rows at once, instead of iterating the result set row by row. This is usually much
/// faster. Asynchronous sibling of [`self::BlockCursor`].
pub struct BlockCursorPolling<C, B>
where
C: AsStatementRef,
{
buffer: B,
cursor: C,
}
impl<C, B> BlockCursorPolling<C, B>
where
C: AsStatementRef,
{
fn new(buffer: B, cursor: C) -> Self {
Self { buffer, cursor }
}
/// Fills the bound buffer with the next row set.
///
/// # Return
///
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
pub async fn fetch(&mut self, sleep: impl Sleep) -> Result<Option<&B>, Error> {
self.fetch_with_truncation_check(false, sleep).await
}
/// Fills the bound buffer with the next row set. Should `error_for_truncation` be `true` and any
/// diagnostic indicate truncation of a value an error is returned.
///
/// # Return
///
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
///
/// Call this method to find out whether there are any truncated values in the batch, without
/// inspecting all its rows and columns.
pub async fn fetch_with_truncation_check(
&mut self,
error_for_truncation: bool,
mut sleep: impl Sleep,
) -> Result<Option<&B>, Error> {
let mut stmt = self.cursor.as_stmt_ref();
unsafe {
let result = wait_for(|| stmt.fetch(), &mut sleep).await;
let has_row = error_handling_for_fetch(result, stmt, error_for_truncation)?;
Ok(has_row.then_some(&self.buffer))
}
}
fn get_data(
&mut self,
col_or_param_num: u16,
target: &mut impl CDataMut
) -> SqlResult<()>
Retrieves data for a single column in the result set or for a single parameter.
Examples found in repository
pub fn get_data(
&mut self,
col_or_param_num: u16,
target: &mut (impl CElement + CDataMut),
) -> Result<(), Error> {
self.statement
.get_data(col_or_param_num, target)
.into_result(&self.statement)
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INDICATOR_VARIABLE_REQUIRED_BUT_NOT_SUPPLIED {
Error::UnableToRepresentNull(record)
} else {
Error::Diagnostics { record, function }
}
})
}
fn unbind_cols(&mut self) -> SqlResult<()>
Releases all column buffers bound by bind_col, except the bookmark column.
Examples found in repository
fn drop(&mut self) {
unsafe {
let mut stmt = self.cursor.as_stmt_ref();
if let Err(e) = stmt
.unbind_cols()
.into_result(&stmt)
.and_then(|()| stmt.set_num_rows_fetched(None).into_result(&stmt))
{
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error unbinding columns: {:?}", e)
}
}
}
}
}
unsafe fn set_num_rows_fetched(
&mut self,
num_rows: Option<&mut usize>
) -> SqlResult<()>
Binds an integer to hold the number of rows retrieved with fetch in the current row set.
Passing None for num_rows unbinds the value from the statement.
Safety
num_rows must not be moved and must remain valid for as long as it remains bound to the cursor.
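A minimal sketch, not part of the crate: bind a local counter for a single block fetch and unbind it again before it goes out of scope, which satisfies the safety contract above. It assumes the statement is already in cursor state with a row set buffer bound, and reuses the into_result and into_result_bool conversions from the repository examples.

// Hypothetical helper: `rows_fetched` lives on this stack frame and is never
// moved while it is bound to the statement.
unsafe fn fetch_one_row_set(mut stmt: impl Statement) -> Result<usize, Error> {
    let mut rows_fetched = 0usize;
    stmt.set_num_rows_fetched(Some(&mut rows_fetched)).into_result(&stmt)?;
    // Do not early-return while the pointer to `rows_fetched` is still bound.
    let fetch_result = stmt.fetch().into_result_bool(&stmt);
    // Unbind before `rows_fetched` is dropped at the end of this scope.
    stmt.set_num_rows_fetched(None).into_result(&stmt)?;
    fetch_result?;
    Ok(rows_fetched)
}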
Examples found in repository
fn drop(&mut self) {
unsafe {
let mut stmt = self.cursor.as_stmt_ref();
if let Err(e) = stmt
.unbind_cols()
.into_result(&stmt)
.and_then(|()| stmt.set_num_rows_fetched(None).into_result(&stmt))
{
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error unbinding columns: {:?}", e)
}
}
}
}
}
/// Binds a row set buffer to a statement. Implementation is shared between synchronous and
/// asynchronous cursors.
unsafe fn bind_row_set_buffer_to_statement(
mut stmt: StatementRef<'_>,
row_set_buffer: &mut impl RowSetBuffer,
) -> Result<(), Error> {
stmt.set_row_bind_type(row_set_buffer.bind_type())
.into_result(&stmt)?;
let size = row_set_buffer.row_array_size();
stmt.set_row_array_size(size)
.into_result(&stmt)
// SAP anywhere has been seen to return with an "invalid attribute" error instead of
// a success with "option value changed" info. Let us map invalid attributes during
// setting row set array size to something more precise.
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INVALID_ATTRIBUTE_VALUE {
Error::InvalidRowArraySize { record, size }
} else {
Error::Diagnostics { record, function }
}
})?;
stmt.set_num_rows_fetched(Some(row_set_buffer.mut_num_fetch_rows()))
.into_result(&stmt)?;
row_set_buffer.bind_colmuns_to_cursor(stmt)?;
Ok(())
}
/// Error handling for bulk fetching is shared between the synchronous and asynchronous use cases.
fn error_handling_for_fetch(
result: SqlResult<()>,
mut stmt: StatementRef,
error_for_truncation: bool,
) -> Result<bool, Error> {
let has_row = result
.on_success(|| true)
.into_result_with(&stmt.as_stmt_ref(), error_for_truncation, Some(false), None)
// Oracle's ODBC driver does not support 64-bit integers. Furthermore, it does not
// tell this to the user when binding parameters, but rather only when we fetch
// results. The error code returned is `HY004` rather than `HY003`, which should
// be used to indicate invalid buffer types.
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INVALID_SQL_DATA_TYPE {
Error::OracleOdbcDriverDoesNotSupport64Bit(record)
} else {
Error::Diagnostics { record, function }
}
})?;
Ok(has_row)
}
impl<C, B> Drop for BlockCursorPolling<C, B>
where
C: AsStatementRef,
{
fn drop(&mut self) {
unsafe {
let mut stmt = self.cursor.as_stmt_ref();
if let Err(e) = stmt
.unbind_cols()
.into_result(&stmt)
.and_then(|()| stmt.set_num_rows_fetched(None).into_result(&stmt))
{
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error unbinding columns: {:?}", e)
}
}
}
}
fn describe_col(
&self,
column_number: u16,
column_description: &mut ColumnDescription
) -> SqlResult<()>
Fetch a column description using the column index.
Parameters
column_number: Column index. 0 is the bookmark column. The other column indices start with 1.
column_description: Holds the description of the column after the call. This method does not provide strong exception safety as the value of this argument is undefined in case of an error.
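A minimal sketch, not part of the crate: describe every column of an existing result set by combining num_result_cols and describe_col. It assumes the statement is in cursor state, that ColumnDescription::default() is available as in the crate's public API, and that the into_result conversion from the repository examples applies here.

// Hypothetical helper enumerating all column descriptions of a result set.
fn describe_all_columns(stmt: &impl Statement) -> Result<Vec<ColumnDescription>, Error> {
    let num_cols = stmt.num_result_cols().into_result(stmt)?.max(0);
    let mut descriptions = Vec::with_capacity(num_cols as usize);
    // Result set columns are numbered starting at 1; 0 would be the bookmark column.
    for index in 1..=num_cols as u16 {
        let mut description = ColumnDescription::default();
        stmt.describe_col(index, &mut description).into_result(stmt)?;
        descriptions.push(description);
    }
    Ok(descriptions)
}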
Examples found in repository
fn describe_col(
&self,
column_number: u16,
column_description: &mut ColumnDescription,
) -> SqlResult<()> {
let name = &mut column_description.name;
// Use maximum available capacity.
name.resize(name.capacity(), 0);
let mut name_length: i16 = 0;
let mut data_type = SqlDataType::UNKNOWN_TYPE;
let mut column_size = 0;
let mut decimal_digits = 0;
let mut nullable = odbc_sys::Nullability::UNKNOWN;
let res = unsafe {
sql_describe_col(
self.as_sys(),
column_number,
mut_buf_ptr(name),
clamp_small_int(name.len()),
&mut name_length,
&mut data_type,
&mut column_size,
&mut decimal_digits,
&mut nullable,
)
.into_sql_result("SQLDescribeCol")
};
if res.is_err() {
return res;
}
column_description.nullability = Nullability::new(nullable);
if name_length + 1 > clamp_small_int(name.len()) {
// Buffer is too small to hold name, retry with larger buffer
name.resize(name_length as usize + 1, 0);
self.describe_col(column_number, column_description)
} else {
name.resize(name_length as usize, 0);
column_description.data_type = DataType::new(data_type, column_size, decimal_digits);
res
}
}
unsafe fn exec_direct(&mut self, statement: &SqlText<'_>) -> SqlResult<()>
Executes a statement, using the current values of the parameter marker variables if any parameters exist in the statement. SQLExecDirect is the fastest way to submit an SQL statement for one-time execution.
Safety
While self is always guaranteed to be a valid allocated handle, this function may
dereference bound parameters. It is the caller's responsibility to ensure these are still
valid. One strategy is to reset potentially invalid parameters right before the call using
reset_parameters.
Return
SqlResult::NeedData if execution requires additional data from delayed parameters.
SqlResult::NoData if a searched update or delete statement did not affect any rows at the data source.
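A minimal sketch, not part of the crate: handle the NoData outcome of a searched update or delete explicitly, treating it as zero affected rows. It assumes no parameters are bound to the statement, that SqlResult::NoData can be matched as shown, and reuses the into_result conversion from the repository examples.

// Hypothetical helper for a searched UPDATE or DELETE without bound parameters.
unsafe fn affected_rows_or_zero(mut stmt: impl Statement, sql: &SqlText<'_>) -> Result<isize, Error> {
    match stmt.exec_direct(sql) {
        // The statement matched no rows at the data source.
        SqlResult::NoData => Ok(0),
        other => {
            // Propagate diagnostics for genuine errors, then ask for the row count.
            other.into_result(&stmt)?;
            stmt.row_count().into_result(&stmt)
        }
    }
}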
Examples found in repository
pub unsafe fn execute<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
) -> Result<Option<CursorImpl<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
stmt.exec_direct(sql)
} else {
// We execute a prepared query
stmt.execute()
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
stmt.put_binary_batch(batch).into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
if stmt.num_result_cols().into_result(&stmt)? == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorImpl::new(statement);
Ok(Some(cursor))
}
}
/// # Safety
///
/// * Execute may dereference pointers to bound parameters, so these must be guaranteed to be valid
/// when calling this function.
/// * Furthermore all bound delayed parameters must be of type `*mut &mut dyn Blob`.
pub async unsafe fn execute_polling<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
mut sleep: impl Sleep,
) -> Result<Option<CursorPolling<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
wait_for(|| stmt.exec_direct(sql), &mut sleep).await
} else {
// We execute a prepared query
wait_for(|| stmt.execute(), &mut sleep).await
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
let result = wait_for(|| stmt.put_binary_batch(batch), &mut sleep).await;
result.into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
let num_result_cols = wait_for(|| stmt.num_result_cols(), &mut sleep)
.await
.into_result(&stmt)?;
if num_result_cols == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorPolling::new(statement);
Ok(Some(cursor))
}
}
fn close_cursor(&mut self) -> SqlResult<()>
Close an open cursor.
Examples found in repository
fn drop(&mut self) {
let mut stmt = self.statement.as_stmt_ref();
if let Err(e) = stmt.close_cursor().into_result(&stmt) {
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error closing cursor: {:?}", e)
}
}
}
}
impl<S> AsStatementRef for CursorImpl<S>
where
S: AsStatementRef,
{
fn as_stmt_ref(&mut self) -> StatementRef<'_> {
self.statement.as_stmt_ref()
}
}
impl<S> ResultSetMetadata for CursorImpl<S> where S: AsStatementRef {}
impl<S> Cursor for CursorImpl<S>
where
S: AsStatementRef,
{
fn bind_buffer<B>(mut self, mut row_set_buffer: B) -> Result<BlockCursor<Self, B>, Error>
where
B: RowSetBuffer,
{
let stmt = self.statement.as_stmt_ref();
unsafe {
bind_row_set_buffer_to_statement(stmt, &mut row_set_buffer)?;
}
Ok(BlockCursor::new(row_set_buffer, self))
}
}
impl<S> CursorImpl<S>
where
S: AsStatementRef,
{
/// Users of this library are encouraged not to call this constructor directly but rather invoke
/// [`crate::Connection::execute`] or [`crate::Prepared::execute`] to get a cursor and utilize
/// it using the [`crate::Cursor`] trait. This method is pubilc so users with an understanding
/// of the raw ODBC C-API have a way to create a cursor, after they left the safety rails of the
/// Rust type System, in order to implement a use case not covered yet, by the safe abstractions
/// within this crate.
///
/// # Safety
///
/// `statement` must be in Cursor state, for the invariants of this type to hold.
pub unsafe fn new(statement: S) -> Self {
Self { statement }
}
pub(crate) fn as_sys(&mut self) -> HStmt {
self.as_stmt_ref().as_sys()
}
}
/// A Row set buffer binds row, or column wise buffers to a cursor in order to fill them with row
/// sets with each call to fetch.
///
/// # Safety
///
/// Implementers of this trait must ensure that every pointer bound in `bind_to_cursor` stays valid
/// even if an instance is moved in memory. Bound members should therefore be likely references
/// themselves. To bind stack allocated buffers it is recommended to implement this trait on the
/// reference type instead.
pub unsafe trait RowSetBuffer {
/// Declares the bind type of the Row set buffer. `0` Means a columnar binding is used. Any non
/// zero number is interpreted as the size of a single row in a row wise binding style.
fn bind_type(&self) -> usize;
/// The batch size for bulk cursors, if retrieving many rows at once.
fn row_array_size(&self) -> usize;
/// Mutable reference to the number of fetched rows.
///
/// # Safety
///
/// Implementations of this method must take care that the returned referenced stays valid, even
/// if `self` should be moved.
fn mut_num_fetch_rows(&mut self) -> &mut usize;
/// Binds the buffer either column or row wise to the cursor.
///
/// # Safety
///
/// It's the implementations responsibility to ensure that all bound buffers are valid until
/// unbound or the statement handle is deleted.
unsafe fn bind_colmuns_to_cursor(&mut self, cursor: StatementRef<'_>) -> Result<(), Error>;
}
unsafe impl<T: RowSetBuffer> RowSetBuffer for &mut T {
fn bind_type(&self) -> usize {
(**self).bind_type()
}
fn row_array_size(&self) -> usize {
(**self).row_array_size()
}
fn mut_num_fetch_rows(&mut self) -> &mut usize {
(*self).mut_num_fetch_rows()
}
unsafe fn bind_colmuns_to_cursor(&mut self, cursor: StatementRef<'_>) -> Result<(), Error> {
(*self).bind_colmuns_to_cursor(cursor)
}
}
/// In order to safe on network overhead, it is recommended to use block cursors instead of fetching
/// values individually. This can greatly reduce the time applications need to fetch data. You can
/// create a block cursor by binding preallocated memory to a cursor using [`Cursor::bind_buffer`].
/// A block cursor safes on a lot of IO overhead by fetching an entire set of rows (called *rowset*)
/// at once into the buffer bound to it. Reusing the same buffer for each rowset also safes on
/// allocations. A challange with using block cursors might be database schemas with columns there
/// individual fields can be very large. In these cases developers can choose to:
///
/// 1. Reserve less memory for each individual field than the schema indicates and deciding on a
/// sensible upper bound themselfes. This risks truncation of values though, if they are larger
/// than the upper bound. Using [`BlockCursor::fetch_with_truncation_check`] instead of
/// [`Cursor::next_row`] your appliacation can detect these truncations. This is usually the best
/// choice, since individual fields in a table rarerly actuallly take up several GiB of memory.
/// 2. Calculate the number of rows dynamically based on the maximum expected row size.
/// [`crate::buffers::BufferDesc::bytes_per_row`], can be helpful with this task.
/// 3. Not use block cursors and fetch rows slowly with high IO overhead. Calling
/// [`CursorRow::get_data`] and [`CursorRow::get_text`] to fetch large individual values.
///
/// See: <https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/block-cursors>
pub struct BlockCursor<C: AsStatementRef, B> {
buffer: B,
cursor: C,
}
impl<C, B> BlockCursor<C, B>
where
C: Cursor,
{
fn new(buffer: B, cursor: C) -> Self {
Self { buffer, cursor }
}
/// Fills the bound buffer with the next row set.
///
/// # Return
///
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
///
/// ```
/// use odbc_api::{buffers::TextRowSet, Cursor};
///
/// fn print_all_values(mut cursor: impl Cursor) {
/// let batch_size = 100;
/// let max_string_len = 4000;
/// let buffer = TextRowSet::for_cursor(batch_size, &mut cursor, Some(4000)).unwrap();
/// let mut cursor = cursor.bind_buffer(buffer).unwrap();
/// // Iterate over batches
/// while let Some(batch) = cursor.fetch().unwrap() {
/// // ... print values in batch ...
/// }
/// }
/// ```
pub fn fetch(&mut self) -> Result<Option<&B>, Error> {
self.fetch_with_truncation_check(false)
}
/// Fills the bound buffer with the next row set. Should `error_for_truncation` be `true`and any
/// diagnostic indicate truncation of a value an error is returned.
///
/// # Return
///
/// `None` if the result set is empty and all row sets have been extracted. `Some` with a
/// reference to the internal buffer otherwise.
///
/// Call this method to find out wether there are any truncated values in the batch, without
/// inspecting all its rows and columns.
///
/// ```
/// use odbc_api::{buffers::TextRowSet, Cursor};
///
/// fn print_all_values(mut cursor: impl Cursor) {
/// let batch_size = 100;
/// let max_string_len = 4000;
/// let buffer = TextRowSet::for_cursor(batch_size, &mut cursor, Some(4000)).unwrap();
/// let mut cursor = cursor.bind_buffer(buffer).unwrap();
/// // Iterate over batches
/// while let Some(batch) = cursor.fetch_with_truncation_check(true).unwrap() {
/// // ... print values in batch ...
/// }
/// }
/// ```
pub fn fetch_with_truncation_check(
&mut self,
error_for_truncation: bool,
) -> Result<Option<&B>, Error> {
let mut stmt = self.cursor.as_stmt_ref();
unsafe {
let result = stmt.fetch();
let has_row = error_handling_for_fetch(result, stmt, error_for_truncation)?;
Ok(has_row.then_some(&self.buffer))
}
}
}
impl<C, B> Drop for BlockCursor<C, B>
where
C: AsStatementRef,
{
fn drop(&mut self) {
unsafe {
let mut stmt = self.cursor.as_stmt_ref();
if let Err(e) = stmt
.unbind_cols()
.into_result(&stmt)
.and_then(|()| stmt.set_num_rows_fetched(None).into_result(&stmt))
{
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error unbinding columns: {:?}", e)
}
}
}
}
}
/// The asynchronous sibiling of [`CursorImpl`]. Use this to fetch results in asynchronous code.
///
/// Like [`CursorImpl`] this is an ODBC statement handle in cursor state. However unlike its
/// synchronous sibling this statement handle is in asynchronous polling mode.
pub struct CursorPolling<Stmt: AsStatementRef> {
/// A statement handle in cursor state with asynchronous mode enabled.
statement: Stmt,
}
impl<S> CursorPolling<S>
where
S: AsStatementRef,
{
/// Users of this library are encouraged not to call this constructor directly. This method is
/// public so users with an understanding of the raw ODBC C-API have a way to create an
/// asynchronous cursor after they have left the safety rails of the Rust type system, in order
/// to implement a use case not yet covered by the safe abstractions within this crate.
///
/// # Safety
///
/// `statement` must be in Cursor state, for the invariants of this type to hold. Preferably
/// `statement` should also have asynchronous mode enabled, otherwise constructing a synchronous
/// [`CursorImpl`] is more suitable.
pub unsafe fn new(statement: S) -> Self {
Self { statement }
}
/// Binds this cursor to a buffer holding a row set.
pub fn bind_buffer<B>(
mut self,
mut row_set_buffer: B,
) -> Result<BlockCursorPolling<Self, B>, Error>
where
B: RowSetBuffer,
{
let stmt = self.statement.as_stmt_ref();
unsafe {
bind_row_set_buffer_to_statement(stmt, &mut row_set_buffer)?;
}
Ok(BlockCursorPolling::new(row_set_buffer, self))
}
}
impl<S> AsStatementRef for CursorPolling<S>
where
S: AsStatementRef,
{
fn as_stmt_ref(&mut self) -> StatementRef<'_> {
self.statement.as_stmt_ref()
}
}
impl<S> Drop for CursorPolling<S>
where
S: AsStatementRef,
{
fn drop(&mut self) {
let mut stmt = self.statement.as_stmt_ref();
if let Err(e) = stmt.close_cursor().into_result(&stmt) {
// Avoid panicking, if we already have a panic. We don't want to mask the original
// error.
if !panicking() {
panic!("Unexpected error closing cursor: {:?}", e)
}
}
}
fn prepare(&mut self, statement: &SqlText<'_>) -> SqlResult<()>
Send an SQL statement to the data source for preparation. The application can include one or more parameter markers in the SQL statement. To include a parameter marker, the application embeds a question mark (?) into the SQL string at the appropriate position.
Examples found in repository
pub fn prepare(&self, query: &str) -> Result<Prepared<StatementImpl<'_>>, Error> {
let query = SqlText::new(query);
let mut stmt = self.allocate_statement()?;
stmt.prepare(&query).into_result(&stmt)?;
Ok(Prepared::new(stmt))
}
/// Prepares an SQL statement which takes ownership of the connection. The advantage over
/// [`Self::prepare`] is that you do not need to keep track of the lifetime of the connection
/// separately and can create types which do own the prepared query and only depend on the
/// lifetime of the environment. The downside is that you cannot use the connection for
/// anything else anymore.
///
/// # Parameters
///
/// * `query`: The text representation of the SQL statement. E.g. "SELECT * FROM my_table;". `?`
/// may be used as a placeholder in the statement text, to be replaced with parameters during
/// execution.
///
/// ```no_run
/// use lazy_static::lazy_static;
/// use odbc_api::{
/// Environment, Error, ColumnarBulkInserter, StatementConnection,
/// buffers::{BufferDesc, AnyBuffer},
/// };
///
/// lazy_static! {
/// static ref ENV: Environment = unsafe { Environment::new().unwrap() };
/// }
///
/// const CONNECTION_STRING: &str =
/// "Driver={ODBC Driver 17 for SQL Server};\
/// Server=localhost;UID=SA;\
/// PWD=My@Test@Password1;";
///
/// /// Supports columnar bulk inserts on a heterogeneous schema (columns have different types),
/// /// takes ownership of a connection created using an environment with static lifetime.
/// type Inserter = ColumnarBulkInserter<StatementConnection<'static>, AnyBuffer>;
///
/// /// Creates an inserter which can be reused to bulk insert birthyears with static lifetime.
/// fn make_inserter(query: &str) -> Result<Inserter, Error> {
/// let conn = ENV.connect_with_connection_string(CONNECTION_STRING)?;
/// let prepared = conn.into_prepared("INSERT INTO Birthyear (name, year) VALUES (?, ?)")?;
/// let buffers = [
/// BufferDesc::Text { max_str_len: 255},
/// BufferDesc::I16 { nullable: false },
/// ];
/// let capacity = 400;
/// prepared.into_column_inserter(capacity, buffers)
/// }
/// ```
pub fn into_prepared(self, query: &str) -> Result<Prepared<StatementConnection<'c>>, Error> {
let query = SqlText::new(query);
let mut stmt = self.allocate_statement()?;
stmt.prepare(&query).into_result(&stmt)?;
// Safe: `handle` is a valid statement, and we are giving up ownership of `self`.
let stmt = unsafe { StatementConnection::new(stmt.into_sys(), self) };
Ok(Prepared::new(stmt))
}
unsafe fn execute(&mut self) -> SqlResult<()>
Executes a statement prepared by prepare. After the application processes or discards the
results from a call to execute, the application can call SQLExecute again with new
parameter values.
Safety
While self is always guaranteed to be a valid allocated handle, this function may
dereference bound parameters. It is the caller's responsibility to ensure these are still
valid. One strategy is to reset potentially invalid parameters right before the call using
reset_parameters.
Return
SqlResult::NeedData if execution requires additional data from delayed parameters.
SqlResult::NoData if a searched update or delete statement did not affect any rows at the data source.
Examples found in repository
pub unsafe fn execute<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
) -> Result<Option<CursorImpl<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
stmt.exec_direct(sql)
} else {
// We execute a prepared query
stmt.execute()
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
stmt.put_binary_batch(batch).into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
if stmt.num_result_cols().into_result(&stmt)? == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorImpl::new(statement);
Ok(Some(cursor))
}
}
/// # Safety
///
/// * Execute may dereference pointers to bound parameters, so these must be guaranteed to be valid
/// when calling this function.
/// * Furthermore all bound delayed parameters must be of type `*mut &mut dyn Blob`.
pub async unsafe fn execute_polling<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
mut sleep: impl Sleep,
) -> Result<Option<CursorPolling<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
wait_for(|| stmt.exec_direct(sql), &mut sleep).await
} else {
// We execute a prepared query
wait_for(|| stmt.execute(), &mut sleep).await
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
let result = wait_for(|| stmt.put_binary_batch(batch), &mut sleep).await;
result.into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
let num_result_cols = wait_for(|| stmt.num_result_cols(), &mut sleep)
.await
.into_result(&stmt)?;
if num_result_cols == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorPolling::new(statement);
Ok(Some(cursor))
}
}
fn num_result_cols(&self) -> SqlResult<i16>
Number of columns in result set.
Can also be used to check whether a result set has been created at all.
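A minimal sketch (the helper name is hypothetical) following the into_result pattern from the repository examples below; it assumes a statement has just been executed on the handle and the usual imports from this crate are in scope:
fn has_result_set(mut statement: impl AsStatementRef) -> Result<bool, Error> {
    let stmt = statement.as_stmt_ref();
    // A column count of zero means the statement did not create a result set.
    let num_cols = stmt.num_result_cols().into_result(&stmt)?;
    Ok(num_cols != 0)
}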
Examples found in repository
pub unsafe fn execute<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
) -> Result<Option<CursorImpl<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
stmt.exec_direct(sql)
} else {
// We execute a prepared query
stmt.execute()
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
stmt.put_binary_batch(batch).into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
if stmt.num_result_cols().into_result(&stmt)? == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorImpl::new(statement);
Ok(Some(cursor))
}
}
/// # Safety
///
/// * Execute may dereference pointers to bound parameters, so these must be guaranteed to be valid
/// when calling this function.
/// * Furthermore all bound delayed parameters must be of type `*mut &mut dyn Blob`.
pub async unsafe fn execute_polling<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
mut sleep: impl Sleep,
) -> Result<Option<CursorPolling<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
wait_for(|| stmt.exec_direct(sql), &mut sleep).await
} else {
// We execute a prepared query
wait_for(|| stmt.execute(), &mut sleep).await
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
let result = wait_for(|| stmt.put_binary_batch(batch), &mut sleep).await;
result.into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
let num_result_cols = wait_for(|| stmt.num_result_cols(), &mut sleep)
.await
.into_result(&stmt)?;
if num_result_cols == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorPolling::new(statement);
Ok(Some(cursor))
}
}
/// Shared implementation for executing a columns query between [`crate::Connection`] and
/// [`crate::Preallocated`].
pub fn execute_columns<S>(
mut statement: S,
catalog_name: &SqlText,
schema_name: &SqlText,
table_name: &SqlText,
column_name: &SqlText,
) -> Result<CursorImpl<S>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
stmt.columns(catalog_name, schema_name, table_name, column_name)
.into_result(&stmt)?;
// We assume columns always creates a result set, since it works like a SELECT statement.
debug_assert_ne!(stmt.num_result_cols().unwrap(), 0);
// Safe: `statement` is in cursor state
let cursor = unsafe { CursorImpl::new(statement) };
Ok(cursor)
}
/// Shared implementation for executing a tables query between [`crate::Connection`] and
/// [`crate::Preallocated`].
pub fn execute_tables<S>(
mut statement: S,
catalog_name: &SqlText,
schema_name: &SqlText,
table_name: &SqlText,
column_name: &SqlText,
) -> Result<CursorImpl<S>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
stmt.tables(catalog_name, schema_name, table_name, column_name)
.into_result(&stmt)?;
// We assume tables always creates a result set, since it works like a SELECT statement.
debug_assert_ne!(stmt.num_result_cols().unwrap(), 0);
// Safe: `statement` is in Cursor state.
let cursor = unsafe { CursorImpl::new(statement) };
Ok(cursor)
}
fn num_params(&self) -> SqlResult<u16>
Number of placeholders of a prepared query.
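A minimal sketch (the helper name is hypothetical), assuming a query has already been prepared on the handle:
fn expected_placeholders(mut prepared: impl AsStatementRef) -> Result<u16, Error> {
    let stmt = prepared.as_stmt_ref();
    // Ask the driver how many `?` markers the prepared statement contains.
    stmt.num_params().into_result(&stmt)
}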
unsafe fn set_row_array_size(&mut self, size: usize) -> SqlResult<()>
Sets the batch size for bulk cursors, if retrieving many rows at once.
Safety
It is the caller's responsibility to ensure that buffers bound using bind_col can hold the
specified number of rows.
Examples found in repository
unsafe fn bind_row_set_buffer_to_statement(
mut stmt: StatementRef<'_>,
row_set_buffer: &mut impl RowSetBuffer,
) -> Result<(), Error> {
stmt.set_row_bind_type(row_set_buffer.bind_type())
.into_result(&stmt)?;
let size = row_set_buffer.row_array_size();
stmt.set_row_array_size(size)
.into_result(&stmt)
// SAP anywhere has been seen to return with an "invalid attribute" error instead of
// a success with "option value changed" info. Let us map invalid attributes during
// setting row set array size to something more precise.
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INVALID_ATTRIBUTE_VALUE {
Error::InvalidRowArraySize { record, size }
} else {
Error::Diagnostics { record, function }
}
})?;
stmt.set_num_rows_fetched(Some(row_set_buffer.mut_num_fetch_rows()))
.into_result(&stmt)?;
row_set_buffer.bind_colmuns_to_cursor(stmt)?;
Ok(())
}
unsafe fn set_paramset_size(&mut self, size: usize) -> SqlResult<()>
Specifies the number of values for each parameter. If it is greater than 1, the data and indicator buffers of the statement point to arrays. The cardinality of each array is equal to the value of this field.
Safety
The bound buffers must at least hold the number of elements specified in this call when the statement is executed.
Examples found in repository
pub fn execute(&mut self) -> Result<Option<CursorImpl<StatementRef<'_>>>, Error> {
let mut stmt = self.statement.as_stmt_ref();
unsafe {
if self.parameter_set_size == 0 {
// A batch size of 0 will not execute anything, same as for execute on connection or
// prepared.
Ok(None)
} else {
// We reset the parameter set size, in order to adequately handle batches of
// different sizes when inserting into the database.
stmt.set_paramset_size(self.parameter_set_size);
execute(stmt, None)
}
}
}
More examples
unsafe fn bind_parameters<S>(
lazy_statement: impl FnOnce() -> Result<S, Error>,
mut params: impl ParameterCollectionRef,
) -> Result<Option<S>, Error>
where
S: AsStatementRef,
{
let parameter_set_size = params.parameter_set_size();
if parameter_set_size == 0 {
return Ok(None);
}
// Only allocate the statement, if we know we are going to execute something.
let mut statement = lazy_statement()?;
let mut stmt = statement.as_stmt_ref();
// Reset parameters so we do not dereference stale ones by mistake if we call
// `exec_direct`.
stmt.reset_parameters().into_result(&stmt)?;
stmt.set_paramset_size(parameter_set_size)
.into_result(&stmt)?;
// Bind new parameters passed by caller.
params.bind_parameters_to(&mut stmt)?;
Ok(Some(statement))
}
unsafe fn set_row_bind_type(&mut self, row_size: usize) -> SqlResult<()>
Sets the binding type to columnar binding for batch cursors.
Any positive number indicates row-wise binding with that row length. 0 indicates
columnar binding.
Safety
It is the caller's responsibility to ensure that the bound buffers match the memory layout specified by this function.
Examples found in repository
unsafe fn bind_row_set_buffer_to_statement(
mut stmt: StatementRef<'_>,
row_set_buffer: &mut impl RowSetBuffer,
) -> Result<(), Error> {
stmt.set_row_bind_type(row_set_buffer.bind_type())
.into_result(&stmt)?;
let size = row_set_buffer.row_array_size();
stmt.set_row_array_size(size)
.into_result(&stmt)
// SAP anywhere has been seen to return with an "invalid attribute" error instead of
// a success with "option value changed" info. Let us map invalid attributes during
// setting row set array size to something more precise.
.provide_context_for_diagnostic(|record, function| {
if record.state == State::INVALID_ATTRIBUTE_VALUE {
Error::InvalidRowArraySize { record, size }
} else {
Error::Diagnostics { record, function }
}
})?;
stmt.set_num_rows_fetched(Some(row_set_buffer.mut_num_fetch_rows()))
.into_result(&stmt)?;
row_set_buffer.bind_colmuns_to_cursor(stmt)?;
Ok(())
}
fn set_metadata_id(&mut self, metadata_id: bool) -> SqlResult<()>
fn set_async_enable(&mut self, on: bool) -> SqlResult<()>
Enables or disables asynchronous execution for this statement handle. If asynchronous execution is not enabled on connection level it is disabled by default and everything is executed synchronously.
This is equivalent to setting SQL_ATTR_ASYNC_ENABLE in the bare C API.
See https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/executing-statements-odbc
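A minimal sketch (the helper name is hypothetical) of switching a statement handle into polling mode; it assumes the driver actually supports statement-level asynchronous execution:
fn enable_statement_polling(mut statement: impl AsStatementRef) -> Result<(), Error> {
    let mut stmt = statement.as_stmt_ref();
    stmt.set_async_enable(true).into_result(&stmt)
}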
unsafe fn bind_input_parameter(
&mut self,
parameter_number: u16,
parameter: &impl HasDataType + CData + ?Sized
) -> SqlResult<()>
Binds a buffer holding an input parameter to a parameter marker in an SQL statement. This
specialized version takes a constant reference to parameter, but is therefore limited to
binding input parameters. See Statement::bind_parameter for the version which can bind
input and output parameters.
See https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlbindparameter-function.
Safety
- It is up to the caller to ensure the lifetimes of the bound parameters.
- Calling this function may influence other statements that share the APD.
Examples found in repository
unsafe fn bind_input_parameters_to(&self, stmt: &mut impl Statement) -> Result<(), Error> {
stmt.bind_input_parameter(1, self).into_result(stmt)
}
}
unsafe impl<T> InputParameterCollection for [T]
where
T: InputParameter,
{
fn parameter_set_size(&self) -> usize {
1
}
unsafe fn bind_input_parameters_to(&self, stmt: &mut impl Statement) -> Result<(), Error> {
for (index, parameter) in self.iter().enumerate() {
stmt.bind_input_parameter(index as u16 + 1, parameter)
.into_result(stmt)?;
}
Ok(())
}
More examples
pub fn ensure_max_element_length(
&mut self,
element_length: usize,
num_rows_to_copy: usize,
) -> Result<(), Error> {
// Column buffer is not large enough to hold the element. We must allocate a larger buffer
// in order to hold it. This invalidates the pointers previously bound to the statement. So
// we rebind them.
if element_length > self.column.max_len() {
self.column
.resize_max_element_length(element_length, num_rows_to_copy);
unsafe {
self.stmt
.bind_input_parameter(self.parameter_index, self.column)
.into_result(&self.stmt)?
}
}
Ok(())
}
pub fn ensure_max_element_length(
&mut self,
element_length: usize,
num_rows_to_copy: usize,
) -> Result<(), Error>
where
TextColumn<C>: HasDataType + CData,
{
// Column buffer is not large enough to hold the element. We must allocate a larger buffer
// in order to hold it. This invalidates the pointers previously bound to the statement. So
// we rebind them.
if element_length > self.column.max_len() {
let new_max_str_len = element_length;
self.column
.resize_max_str(new_max_str_len, num_rows_to_copy);
unsafe {
self.stmt
.bind_input_parameter(self.parameter_index, self.column)
.into_result(&self.stmt)?
}
}
Ok(())
}
pub unsafe fn new(mut statement: S, parameters: Vec<C>) -> Result<Self, Error>
where
C: ColumnBuffer + HasDataType,
{
let mut stmt = statement.as_stmt_ref();
stmt.reset_parameters();
let mut parameter_number = 1;
// Bind buffers to statement.
for column in &parameters {
if let Err(error) = stmt
.bind_input_parameter(parameter_number, column)
.into_result(&stmt)
{
// This early return using `?` is risky. We actually did bind some parameters
// already. We cannot guarantee that the bound pointers stay valid in case of an
// error since `Self` is never constructed. We would get away with this if we took
// ownership of the statement and it were destroyed should the constructor not
// succeed. However, the columnar bulk inserter can also be instantiated with borrowed
// statements. This is why we reset the parameters on error.
stmt.reset_parameters();
return Err(error);
}
parameter_number += 1;
}
let capacity = parameters
.iter()
.map(|col| col.capacity())
.min()
.unwrap_or(0);
Ok(Self {
statement,
parameter_set_size: 0,
capacity,
parameters,
})
}
unsafe fn bind_parameter(
&mut self,
parameter_number: u16,
input_output_type: ParamType,
parameter: &mut impl CDataMut + HasDataType
) -> SqlResult<()>
Binds a buffer holding a single parameter to a parameter marker in an SQL statement. To bind
input parameters using constant references see Statement::bind_input_parameter.
See https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlbindparameter-function.
Safety
- It is up to the caller to ensure the lifetimes of the bound parameters.
- Calling this function may influence other statements that share the APD.
Examples found in repository
unsafe fn bind_to(
&mut self,
parameter_number: u16,
stmt: &mut impl Statement,
) -> Result<(), Error> {
stmt.bind_parameter(parameter_number, odbc_sys::ParamType::InputOutput, self.0)
.into_result(stmt)
}
}
/// Mutable references wrapped in `Out` are bound as output parameters.
unsafe impl<'a, T> ParameterTupleElement for Out<'a, T>
where
T: OutputParameter,
{
unsafe fn bind_to(
&mut self,
parameter_number: u16,
stmt: &mut impl Statement,
) -> Result<(), Error> {
stmt.bind_parameter(parameter_number, odbc_sys::ParamType::Output, self.0)
.into_result(stmt)
}
unsafe fn bind_delayed_input_parameter(
&mut self,
parameter_number: u16,
parameter: &mut impl DelayedInput + HasDataType
) -> SqlResult<()>
Binds an input stream to a parameter marker in an SQL statement. Use this to stream large
values at statement execution time. To bind preallocated constant buffers see
Statement::bind_input_parameter.
See https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlbindparameter-function.
Safety
- It is up to the caller to ensure the lifetimes of the bound parameters.
- Calling this function may influence other statements that share the APD.
Examples found in repository
unsafe fn bind_parameters_to(&mut self, stmt: &mut impl Statement) -> Result<(), Error> {
stmt.bind_delayed_input_parameter(1, self).into_result(stmt)
}
}
unsafe impl ParameterTupleElement for &mut BlobParam<'_> {
unsafe fn bind_to(
&mut self,
parameter_number: u16,
stmt: &mut impl Statement,
) -> Result<(), Error> {
stmt.bind_delayed_input_parameter(parameter_number, *self)
.into_result(stmt)
}
fn is_unsigned_column(&self, column_number: u16) -> SqlResult<bool>
true if a given column in a result set is unsigned or not a numeric type, false
otherwise.
column_number: Index of the column, starting at 1.
fn col_type(&self, column_number: u16) -> SqlResult<SqlDataType>
Returns a number identifying the SQL type of the column in the result set.
column_number: Index of the column, starting at 1.
fn col_concise_type(&self, column_number: u16) -> SqlResult<SqlDataType>
The concise data type. For the datetime and interval data types, this field returns the
concise data type; for example, TIME or INTERVAL_YEAR.
column_number: Index of the column, starting at 1.
Examples found in repository
fn col_data_type(&mut self, column_number: u16) -> Result<DataType, Error> {
let stmt = self.as_stmt_ref();
let kind = stmt.col_concise_type(column_number).into_result(&stmt)?;
let dt = match kind {
SqlDataType::UNKNOWN_TYPE => DataType::Unknown,
SqlDataType::EXT_VAR_BINARY => DataType::Varbinary {
length: self.col_octet_length(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_LONG_VAR_BINARY => DataType::LongVarbinary {
length: self.col_octet_length(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_BINARY => DataType::Binary {
length: self.col_octet_length(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_W_VARCHAR => DataType::WVarchar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_W_CHAR => DataType::WChar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_LONG_VARCHAR => DataType::LongVarchar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::CHAR => DataType::Char {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::VARCHAR => DataType::Varchar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::NUMERIC => DataType::Numeric {
precision: self.col_precision(column_number)?.try_into().unwrap(),
scale: self.col_scale(column_number)?.try_into().unwrap(),
},
SqlDataType::DECIMAL => DataType::Decimal {
precision: self.col_precision(column_number)?.try_into().unwrap(),
scale: self.col_scale(column_number)?.try_into().unwrap(),
},
SqlDataType::INTEGER => DataType::Integer,
SqlDataType::SMALLINT => DataType::SmallInt,
SqlDataType::FLOAT => DataType::Float {
precision: self.col_precision(column_number)?.try_into().unwrap(),
},
SqlDataType::REAL => DataType::Real,
SqlDataType::DOUBLE => DataType::Double,
SqlDataType::DATE => DataType::Date,
SqlDataType::TIME => DataType::Time {
precision: self.col_precision(column_number)?.try_into().unwrap(),
},
SqlDataType::TIMESTAMP => DataType::Timestamp {
precision: self.col_precision(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_BIG_INT => DataType::BigInt,
SqlDataType::EXT_TINY_INT => DataType::TinyInt,
SqlDataType::EXT_BIT => DataType::Bit,
other => {
let mut column_description = ColumnDescription::default();
self.describe_col(column_number, &mut column_description)?;
DataType::Other {
data_type: other,
column_size: column_description.data_type.column_size(),
decimal_digits: column_description.data_type.decimal_digits(),
}
}
};
Ok(dt)
}
fn col_octet_length(&self, column_number: u16) -> SqlResult<isize>
Returns the size in bytes of the column. For variable sized types the maximum size is returned, excluding a terminating zero.
column_number: Index of the column, starting at 1.
fn col_display_size(&self, column_number: u16) -> SqlResult<isize>
Maximum number of characters required to display data from the column.
column_number: Index of the column, starting at 1.
fn col_precision(&self, column_number: u16) -> SqlResult<isize>
Precision of the column.
Denotes the applicable precision. For data types SQL_TYPE_TIME, SQL_TYPE_TIMESTAMP, and all the interval data types that represent a time interval, its value is the applicable precision of the fractional seconds component.
fn col_scale(&self, column_number: u16) -> SqlResult<Len>
The applicable scale for a numeric data type. For DECIMAL and NUMERIC data types, this is the defined scale. It is undefined for all other data types.
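A minimal sketch (the helper name is hypothetical) combining col_precision and col_scale to recover the metadata of a DECIMAL or NUMERIC result column; Len is the length type used in the signature above:
fn decimal_metadata(mut cursor: impl AsStatementRef, column_number: u16) -> Result<(isize, Len), Error> {
    let stmt = cursor.as_stmt_ref();
    // Both attributes are only meaningful for numeric columns.
    let precision = stmt.col_precision(column_number).into_result(&stmt)?;
    let scale = stmt.col_scale(column_number).into_result(&stmt)?;
    Ok((precision, scale))
}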
fn col_name(&self, column_number: u16, buffer: &mut Vec<SqlChar>) -> SqlResult<()>
The column alias, if it applies. If the column alias does not apply, the column name is returned. If there is no column name or a column alias, an empty string is returned.
Examples found in repository
fn col_name(&mut self, column_number: u16) -> Result<String, Error> {
let stmt = self.as_stmt_ref();
let mut buf = vec![0; 1024];
stmt.col_name(column_number, &mut buf).into_result(&stmt)?;
Ok(slice_to_utf8(&buf).unwrap())
}
/// Use this if you want to iterate over all column names and allocate a `String` for each one.
///
/// This is a wrapper around `col_name` introduced for convenience.
fn column_names(&mut self) -> Result<ColumnNamesIt<'_, Self>, Error> {
ColumnNamesIt::new(self)
}
/// Data type of the specified column.
///
/// `column_number`: Index of the column, starting at 1.
fn col_data_type(&mut self, column_number: u16) -> Result<DataType, Error> {
let stmt = self.as_stmt_ref();
let kind = stmt.col_concise_type(column_number).into_result(&stmt)?;
let dt = match kind {
SqlDataType::UNKNOWN_TYPE => DataType::Unknown,
SqlDataType::EXT_VAR_BINARY => DataType::Varbinary {
length: self.col_octet_length(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_LONG_VAR_BINARY => DataType::LongVarbinary {
length: self.col_octet_length(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_BINARY => DataType::Binary {
length: self.col_octet_length(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_W_VARCHAR => DataType::WVarchar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_W_CHAR => DataType::WChar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_LONG_VARCHAR => DataType::LongVarchar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::CHAR => DataType::Char {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::VARCHAR => DataType::Varchar {
length: self.col_display_size(column_number)?.try_into().unwrap(),
},
SqlDataType::NUMERIC => DataType::Numeric {
precision: self.col_precision(column_number)?.try_into().unwrap(),
scale: self.col_scale(column_number)?.try_into().unwrap(),
},
SqlDataType::DECIMAL => DataType::Decimal {
precision: self.col_precision(column_number)?.try_into().unwrap(),
scale: self.col_scale(column_number)?.try_into().unwrap(),
},
SqlDataType::INTEGER => DataType::Integer,
SqlDataType::SMALLINT => DataType::SmallInt,
SqlDataType::FLOAT => DataType::Float {
precision: self.col_precision(column_number)?.try_into().unwrap(),
},
SqlDataType::REAL => DataType::Real,
SqlDataType::DOUBLE => DataType::Double,
SqlDataType::DATE => DataType::Date,
SqlDataType::TIME => DataType::Time {
precision: self.col_precision(column_number)?.try_into().unwrap(),
},
SqlDataType::TIMESTAMP => DataType::Timestamp {
precision: self.col_precision(column_number)?.try_into().unwrap(),
},
SqlDataType::EXT_BIG_INT => DataType::BigInt,
SqlDataType::EXT_TINY_INT => DataType::TinyInt,
SqlDataType::EXT_BIT => DataType::Bit,
other => {
let mut column_description = ColumnDescription::default();
self.describe_col(column_number, &mut column_description)?;
DataType::Other {
data_type: other,
column_size: column_description.data_type.column_size(),
decimal_digits: column_description.data_type.decimal_digits(),
}
}
};
Ok(dt)
}
}
/// Buffer sizes able to hold the display size of each column in utf-8 encoding. You may call this
/// method to figure out suitable buffer sizes for text columns. [`buffers::TextRowSet::for_cursor`]
/// will invoke this function for you.
///
/// # Parameters
///
/// * `metadata`: Used to query the display size for each column of the row set. For character
/// data the length in characters is multiplied by 4 in order to have enough space for 4 byte
/// utf-8 characters. This is a pessimization for some data sources (e.g. SQLite 3) which do
/// interpret the size of a `VARCHAR(5)` column as 5 bytes rather than 5 characters.
pub fn utf8_display_sizes(
metadata: &mut impl ResultSetMetadata,
) -> Result<impl Iterator<Item = Result<usize, Error>> + '_, Error> {
let num_cols: u16 = metadata.num_result_cols()?.try_into().unwrap();
let it = (1..(num_cols + 1)).map(move |col_index| {
// Ask driver for buffer length
let max_str_len = if let Some(encoded_len) = metadata.col_data_type(col_index)?.utf8_len() {
encoded_len
} else {
metadata.col_display_size(col_index)? as usize
};
Ok(max_str_len)
});
Ok(it)
}
/// An iterator calling `col_name` for each column name and converting the result into UTF-8. See
/// [`ResultSetMetadata::column_names`].
pub struct ColumnNamesIt<'c, C: ?Sized> {
cursor: &'c mut C,
buffer: Vec<SqlChar>,
column: u16,
num_cols: u16,
}
impl<'c, C: ResultSetMetadata + ?Sized> ColumnNamesIt<'c, C> {
fn new(cursor: &'c mut C) -> Result<Self, Error> {
let num_cols = cursor.num_result_cols()?.try_into().unwrap();
Ok(Self {
cursor,
// Some ODBC drivers do not report the required size to hold the column name. Starting
// with a reasonably sized buffer allows us to fetch reasonably sized column aliases
// even from those.
buffer: Vec::with_capacity(128),
num_cols,
column: 1,
})
}
}
impl<C> Iterator for ColumnNamesIt<'_, C>
where
C: ResultSetMetadata,
{
type Item = Result<String, Error>;
fn next(&mut self) -> Option<Self::Item> {
if self.column <= self.num_cols {
// Use stmt.col_name instead of cursor.col_name, so we can efficiently reuse the buffer
// and avoid extra allocations.
let stmt = self.cursor.as_stmt_ref();
let result = stmt
.col_name(self.column, &mut self.buffer)
.into_result(&stmt)
.map(|()| slice_to_utf8(&self.buffer).unwrap());
self.column += 1;
Some(result)
} else {
None
}
}
unsafe fn numeric_col_attribute(
&self,
attribute: Desc,
column_number: u16
) -> SqlResult<Len>
Safety
It is the caller's responsibility to ensure that attribute refers to a numeric attribute.
Examples found in repository
fn is_unsigned_column(&self, column_number: u16) -> SqlResult<bool> {
unsafe { self.numeric_col_attribute(Desc::Unsigned, column_number) }.map(|out| match out {
0 => false,
1 => true,
_ => panic!("Unsigned column attribute must be either 0 or 1."),
})
}
/// Returns a number identifying the SQL type of the column in the result set.
///
/// `column_number`: Index of the column, starting at 1.
fn col_type(&self, column_number: u16) -> SqlResult<SqlDataType> {
unsafe { self.numeric_col_attribute(Desc::Type, column_number) }
.map(|ret| SqlDataType(ret.try_into().unwrap()))
}
/// The concise data type. For the datetime and interval data types, this field returns the
/// concise data type; for example, `TIME` or `INTERVAL_YEAR`.
///
/// `column_number`: Index of the column, starting at 1.
fn col_concise_type(&self, column_number: u16) -> SqlResult<SqlDataType> {
unsafe { self.numeric_col_attribute(Desc::ConciseType, column_number) }
.map(|ret| SqlDataType(ret.try_into().unwrap()))
}
/// Returns the size in bytes of the columns. For variable sized types the maximum size is
/// returned, excluding a terminating zero.
///
/// `column_number`: Index of the column, starting at 1.
fn col_octet_length(&self, column_number: u16) -> SqlResult<isize> {
unsafe { self.numeric_col_attribute(Desc::OctetLength, column_number) }
}
/// Maximum number of characters required to display data from the column.
///
/// `column_number`: Index of the column, starting at 1.
fn col_display_size(&self, column_number: u16) -> SqlResult<isize> {
unsafe { self.numeric_col_attribute(Desc::DisplaySize, column_number) }
}
/// Precision of the column.
///
/// Denotes the applicable precision. For data types SQL_TYPE_TIME, SQL_TYPE_TIMESTAMP, and all
/// the interval data types that represent a time interval, its value is the applicable
/// precision of the fractional seconds component.
fn col_precision(&self, column_number: u16) -> SqlResult<isize> {
unsafe { self.numeric_col_attribute(Desc::Precision, column_number) }
}
/// The applicable scale for a numeric data type. For DECIMAL and NUMERIC data types, this is
/// the defined scale. It is undefined for all other data types.
fn col_scale(&self, column_number: u16) -> SqlResult<Len> {
unsafe { self.numeric_col_attribute(Desc::Scale, column_number) }
}
fn reset_parameters(&mut self) -> SqlResult<()>
Sets the SQL_DESC_COUNT field of the APD to 0, releasing all parameter buffers set for the given StatementHandle.
Examples found in repository
unsafe fn bind_parameters<S>(
lazy_statement: impl FnOnce() -> Result<S, Error>,
mut params: impl ParameterCollectionRef,
) -> Result<Option<S>, Error>
where
S: AsStatementRef,
{
let parameter_set_size = params.parameter_set_size();
if parameter_set_size == 0 {
return Ok(None);
}
// Only allocate the statement, if we know we are going to execute something.
let mut statement = lazy_statement()?;
let mut stmt = statement.as_stmt_ref();
// Reset parameters so we do not dereference stale ones by mistake if we call
// `exec_direct`.
stmt.reset_parameters().into_result(&stmt)?;
stmt.set_paramset_size(parameter_set_size)
.into_result(&stmt)?;
// Bind new parameters passed by caller.
params.bind_parameters_to(&mut stmt)?;
Ok(Some(statement))
}
More examples
pub unsafe fn new(mut statement: S, parameters: Vec<C>) -> Result<Self, Error>
where
C: ColumnBuffer + HasDataType,
{
let mut stmt = statement.as_stmt_ref();
stmt.reset_parameters();
let mut parameter_number = 1;
// Bind buffers to statement.
for column in &parameters {
if let Err(error) = stmt
.bind_input_parameter(parameter_number, column)
.into_result(&stmt)
{
// This early return using `?` is risky. We actually did bind some parameters
// already. We cannot guarantee that the bound pointers stay valid in case of an
// error since `Self` is never constructed. We would get away with this if we took
// ownership of the statement and it were destroyed should the constructor not
// succeed. However, the columnar bulk inserter can also be instantiated with borrowed
// statements. This is why we reset the parameters on error.
stmt.reset_parameters();
return Err(error);
}
parameter_number += 1;
}
let capacity = parameters
.iter()
.map(|col| col.capacity())
.min()
.unwrap_or(0);
Ok(Self {
statement,
parameter_set_size: 0,
capacity,
parameters,
})
}
fn describe_param(
&self,
parameter_number: u16
) -> SqlResult<ParameterDescription>
Describes parameter marker associated with a prepared SQL statement.
Parameters
parameter_number: Parameter marker number ordered sequentially in increasing parameter order, starting at 1.
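A minimal sketch (the helper name is hypothetical), assuming a statement has been prepared on the handle and the driver supports SQLDescribeParam:
fn first_parameter_description(mut prepared: impl AsStatementRef) -> Result<ParameterDescription, Error> {
    let stmt = prepared.as_stmt_ref();
    // Parameter markers are numbered sequentially, starting at 1.
    stmt.describe_param(1).into_result(&stmt)
}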
fn param_data(&mut self) -> SqlResult<Option<Pointer>>
Use this to check which additional parameters need data. Should be called after binding
parameters with an indicator set to crate::sys::DATA_AT_EXEC or a value created with
crate::sys::len_data_at_exec.
The return value contains the parameter identifier passed to bind parameter as a value pointer.
Examples found in repository
pub unsafe fn execute<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
) -> Result<Option<CursorImpl<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
stmt.exec_direct(sql)
} else {
// We execute a prepared query
stmt.execute()
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
stmt.put_binary_batch(batch).into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
if stmt.num_result_cols().into_result(&stmt)? == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorImpl::new(statement);
Ok(Some(cursor))
}
}
/// # Safety
///
/// * Execute may dereference pointers to bound parameters, so these must be guaranteed to be valid
/// when calling this function.
/// * Furthermore all bound delayed parameters must be of type `*mut &mut dyn Blob`.
pub async unsafe fn execute_polling<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
mut sleep: impl Sleep,
) -> Result<Option<CursorPolling<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
wait_for(|| stmt.exec_direct(sql), &mut sleep).await
} else {
// We execute a prepared query
wait_for(|| stmt.execute(), &mut sleep).await
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
let result = wait_for(|| stmt.put_binary_batch(batch), &mut sleep).await;
result.into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
let num_result_cols = wait_for(|| stmt.num_result_cols(), &mut sleep)
.await
.into_result(&stmt)?;
if num_result_cols == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorPolling::new(statement);
Ok(Some(cursor))
}
}
fn columns(
&mut self,
catalog_name: &SqlText<'_>,
schema_name: &SqlText<'_>,
table_name: &SqlText<'_>,
column_name: &SqlText<'_>
) -> SqlResult<()>
Executes a columns query using this statement handle.
Examples found in repository
pub fn execute_columns<S>(
mut statement: S,
catalog_name: &SqlText,
schema_name: &SqlText,
table_name: &SqlText,
column_name: &SqlText,
) -> Result<CursorImpl<S>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
stmt.columns(catalog_name, schema_name, table_name, column_name)
.into_result(&stmt)?;
// We assume columns always creates a result set, since it works like a SELECT statement.
debug_assert_ne!(stmt.num_result_cols().unwrap(), 0);
// Safe: `statement` is in cursor state
let cursor = unsafe { CursorImpl::new(statement) };
Ok(cursor)
}
fn tables(
&mut self,
catalog_name: &SqlText<'_>,
schema_name: &SqlText<'_>,
table_name: &SqlText<'_>,
table_type: &SqlText<'_>
) -> SqlResult<()>
Returns the list of table, catalog, or schema names, and table types, stored in a specific data source. The driver returns the information as a result set.
The catalog, schema and table parameters are search patterns by default unless
Self::set_metadata_id is called with true. In that case they must also not be None since
otherwise a NulPointer error is emitted.
Examples found in repository
pub fn execute_tables<S>(
mut statement: S,
catalog_name: &SqlText,
schema_name: &SqlText,
table_name: &SqlText,
column_name: &SqlText,
) -> Result<CursorImpl<S>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
stmt.tables(catalog_name, schema_name, table_name, column_name)
.into_result(&stmt)?;
// We assume tables always creates a result set, since it works like a SELECT statement.
debug_assert_ne!(stmt.num_result_cols().unwrap(), 0);
// Safe: `statement` is in Cursor state.
let cursor = unsafe { CursorImpl::new(statement) };
Ok(cursor)
}
fn put_binary_batch(&mut self, batch: &[u8]) -> SqlResult<()>
To put a batch of binary data into the data source at statement execution time. May return
SqlResult::NeedData
Panics if batch is empty.
Examples found in repository
pub unsafe fn execute<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
) -> Result<Option<CursorImpl<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
stmt.exec_direct(sql)
} else {
// We execute a prepared query
stmt.execute()
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
stmt.put_binary_batch(batch).into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
if stmt.num_result_cols().into_result(&stmt)? == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorImpl::new(statement);
Ok(Some(cursor))
}
}
/// # Safety
///
/// * Execute may dereference pointers to bound parameters, so these must be guaranteed to be valid
/// when calling this function.
/// * Furthermore all bound delayed parameters must be of type `*mut &mut dyn Blob`.
pub async unsafe fn execute_polling<S>(
mut statement: S,
query: Option<&SqlText<'_>>,
mut sleep: impl Sleep,
) -> Result<Option<CursorPolling<S>>, Error>
where
S: AsStatementRef,
{
let mut stmt = statement.as_stmt_ref();
let result = if let Some(sql) = query {
// We execute an unprepared "one shot query"
wait_for(|| stmt.exec_direct(sql), &mut sleep).await
} else {
// We execute a prepared query
wait_for(|| stmt.execute(), &mut sleep).await
};
// If delayed parameters (e.g. input streams) are bound we might need to put data in order to
// execute.
let need_data =
result
.on_success(|| false)
.into_result_with(&stmt, false, Some(false), Some(true))?;
if need_data {
// Check if any delayed parameters have been bound which stream data to the database at
// statement execution time. Loops over each bound stream.
while let Some(blob_ptr) = stmt.param_data().into_result(&stmt)? {
// The safe interfaces currently exclusively bind pointers to `Blob` trait objects
let blob_ptr: *mut &mut dyn Blob = transmute(blob_ptr);
let blob_ref = &mut *blob_ptr;
// Loop over all batches within each blob
while let Some(batch) = blob_ref.next_batch().map_err(Error::FailedReadingInput)? {
let result = wait_for(|| stmt.put_binary_batch(batch), &mut sleep).await;
result.into_result(&stmt)?;
}
}
}
// Check if a result set has been created.
let num_result_cols = wait_for(|| stmt.num_result_cols(), &mut sleep)
.await
.into_result(&stmt)?;
if num_result_cols == 0 {
Ok(None)
} else {
// Safe: `statement` is in cursor state.
let cursor = CursorPolling::new(statement);
Ok(Some(cursor))
}
}
fn row_count(&self) -> SqlResult<isize>
Number of rows affected by an UPDATE, INSERT, or DELETE statement.
See:
https://docs.microsoft.com/en-us/sql/relational-databases/native-client-odbc-api/sqlrowcount
https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlrowcount-function
Examples found in repository
pub fn row_count(&mut self) -> Result<Option<usize>, Error> {
self.statement
.row_count()
.into_result(&self.statement)
.map(|count| {
// ODBC returns -1 in case a row count is not available
if count == -1 {
None
} else {
Some(count.try_into().unwrap())
}
})
}
fn complete_async(
&mut self,
function_name: &'static str
) -> SqlResult<SqlResult<()>>
In polling mode this can be used instead of repeating the function call. In notification mode
this completes the asynchronous operation. This method panics in case asynchronous mode is
not enabled. SqlResult::NoData is returned if no asynchronous operation is in progress, or
(specific to notification mode) if the driver manager has not notified the application.
See: https://learn.microsoft.com/en-us/sql/odbc/reference/syntax/sqlcompleteasync-function