# NanoSQL: a tiny, strongly-typed data mapper for SQLite
NanoSQL is a small data mapper library that helps you execute SQL statements with typed parameters and a typed result set. It does not attempt to typecheck your SQL code. Rather, it only ensures that the parameters and results serialize/deserialize to/from the correct shape.
## Overview
The library is structured around prepared statements. First, you create a description
of the interface and implementation of your query. This is realized by the [Query] trait
that contains the input (parameter) and output (result) types as associated types, and
a function for building the SQL text. You can implement this trait by hand, or use the
[define_query] macro as a convenient shortcut.
Next, you use [Connection::compile()] to compile the Query into a [CompiledStatement].
This wraps an SQLite prepared statement, but restricts its parameter and return types.
Finally, you call the [CompiledStatement::invoke()] function on your compiled statement
to actually query the database:
- The input of a query is any type that implements the [Param] trait. This includes primitives, optionals of primitives, tuples of primitives or optionals, and structs with fields of primitive or optional types. NanoSQL can work with both positional and named arguments, and supports all parameter prefixes accepted by SQLite (`?`, `:`, `@`, and `$`). This trait can be derived if the `derive` feature of the crate is activated.
- The output of a query is a type that implements the [ResultSet] trait. This is most commonly a collection of some sort (e.g., the standard `Vec` type), or any other type that aggregates the rows returned from an SQL query into a meaningful data structure. Notably, `Option<T>` and [Single]`<T>` can be used for expecting at most one or exactly one record, respectively. These types implement `ResultSet` when the type of their wrapped value implements [ResultRecord].
- [ResultRecord] is a trait that can be implemented by tuple-like and struct-like types for deserializing individual rows. This trait can also be `#[derive]`d.
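To make the three steps concrete, here is a sketch of a hand-written [Query] implementation and its compile/invoke cycle. The associated-type and method signatures shown are simplified assumptions for illustration; consult the trait's documentation for the exact definitions.

```rust
use nanosql::{Connection, ConnectionExt, Query, Result};

/// A query returning the nickname of the pet with a given ID.
struct NicknameById;

impl Query for NicknameById {
    /// the parameter bound to the statement: a single pet ID
    type Input<'p> = i64;
    /// the result set: at most one nickname
    type Output = Option<String>;

    /// builds the SQL text of the query
    fn sql(&self) -> Result<impl AsRef<str> + '_> {
        Ok("SELECT nickname FROM pet WHERE id = ?")
    }
}

fn nickname_of(conn: &Connection, id: i64) -> Result<Option<String>> {
    // compile the query into a `CompiledStatement`...
    let mut stmt = conn.compile(NicknameById)?;
    // ...then invoke it with a typed parameter
    stmt.invoke(id)
}
```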
Extremely common (basically inevitable) tasks, such as creating the schema of a table and
inserting records, are available as special helper/extension methods on Connection objects
via the [ConnectionExt] trait. These in turn use the [Table] trait for preparing and
invoking the corresponding SQL statements, so you can call them for convenience.
## Examples
The absolute basics - create a table, insert a bunch of records into it, then retrieve them (sketched below in abbreviated form):

```rust
use nanosql::{Connection, ConnectionExt, Param, Result, ResultRecord, Table};

/// This type is going to represent our table.
///
/// We derive the `Param` trait for it so that it can be used for
/// binding parameters to the statement when inserting new records,
/// and the `ResultRecord` trait so that we can use it to retrieve
/// results, too.
///
/// We also derive the `Table` trait so that basic operations such as
/// creating the table in the schema and bulk insertion can be performed
/// using the appropriate convenience methods in [`ConnectionExt`].
///
/// The parameter prefix is '$' by default (if not specified via the
/// `param_prefix` attribute); it may also be one of ':', '@', or '?',
/// the last one being allowed only for tuples and scalar parameters.
///
/// `#[nanosql(rename)]` on a struct renames the table itself, while
/// `#[nanosql(rename_all)]` applies a casing convention to all columns.
/// Collective and field-level casing/renaming also works with `enum`s.
#[derive(Clone, Debug, Param, ResultRecord, Table)]
#[nanosql(rename = "pets")] // optional
struct Pet {
    id: i64,
    nickname: String,
    kind: String,
}

/// Our first custom query retrieves a pet by its unique ID.
///
/// If you don't want to spell out the impl by hand, you can
/// use the `define_query!{}` macro for a shorter incantation.
nanosql::define_query! {
    PetById<'p>: i64 => Option<Pet> {
        "SELECT id, nickname, kind FROM pets WHERE id = ?"
    }
}

fn main() -> Result<()> {
    let mut conn = Connection::open_in_memory()?;

    // create the table, then bulk-insert some records (in one transaction)
    conn.create_table::<Pet>()?;
    conn.insert_batch([
        Pet { id: 1, nickname: "Fluffy".into(), kind: "cat".into() },
        Pet { id: 2, nickname: "Hector".into(), kind: "dog".into() },
    ])?;

    // compile our custom query, then invoke it with a typed parameter
    let mut stmt = conn.compile(PetById)?;
    let pet = stmt.invoke(2)?;
    assert_eq!(pet.map(|p| p.nickname), Some("Hector".to_owned()));

    Ok(())
}
```
See the `nanosql/examples` directory (and especially `realistic.rs`) for more advanced and interesting examples.
## A note about batch insertion and transactions
The [ConnectionExt::insert_batch()] method wraps the insertion statements in a transaction
to improve performance. The exclusivity of transactions is modeled in rusqlite at the
type level: the [Connection] object is mutably (uniquely) borrowed for the duration of
the transaction. This in turn means that `insert_batch()` also needs a mutable borrow. However,
preparing and invoking queries needs an immutable borrow, and prepared statements borrow the
Connection for as long as they live. As such, you may get errors like "cannot borrow connection
mutably because it is also borrowed as immutable" when mixing batch insertion with other queries.
There are two basic solutions to this problem:
- Drop the outstanding prepared statements before calling [ConnectionExt::insert_batch()];
- Or, if you can't do that, obtain a transaction object that is not checked at compile time for exclusivity, using [Connection::unchecked_transaction()], then call the immutably-borrowing [TransactionExt::insert_batch()] method on the transaction object instead.
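The second workaround can be sketched as follows, assuming a `Pet` table type like the one in the example above (whether the extension method commits for you is an assumption to verify against the docs):

```rust
use nanosql::{Connection, ConnectionExt, Result, TransactionExt};

/// Bulk-inserts without requiring `&mut Connection`, so other
/// prepared statements may stay alive across the call.
fn insert_pets(conn: &Connection, pets: Vec<Pet>) -> Result<()> {
    // the transaction's exclusivity is NOT checked at compile time here,
    // so only a shared borrow of the connection is needed
    let txn = conn.unchecked_transaction()?;
    txn.insert_batch(pets)?;
    // commit explicitly (assumption: `insert_batch()` does not commit for you)
    txn.commit()?;
    Ok(())
}
```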
## Cargo Features
- `derive`: activates procedural macros, mostly custom `#[derive]`s for commonly-used traits. Enabled by default.
- `expr-check`: uses the `sqlparser` crate to check for syntax errors in raw SQL expressions in derive macro attributes at compile time. This ensures that any generated SQL will be valid, and syntax errors in user-supplied SQL code will be clearly pinpointed instead of causing mysterious statement preparation errors at runtime. Enabled by default.
- `not-nan`: implements `Param` and `ResultRecord` for `ordered_float::NotNan`. This allows for a more type-safe interface in queries: since SQLite treats `NaN` as the SQL `NULL` value, you may run into surprising errors when binding or retrieving an `f32::NAN` or `f64::NAN` when the corresponding parameter needs to be `NOT NULL`, or the source column can be `NULL`.
- `pretty-eqp`: uses the `ptree` crate to pretty-print the results of `EXPLAIN QUERY PLAN`. This impls `Display` for `QueryPlan`, the return type of [ConnectionExt::explain_query_plan()], which renders the tree in a nice, human-readable format using ASCII art.
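For instance, to turn off the default features and keep only the derive macros plus the pretty `EXPLAIN QUERY PLAN` output, the manifest entry might look like this (the version is a placeholder):

```toml
[dependencies]
nanosql = { version = "*", default-features = false, features = ["derive", "pretty-eqp"] }
```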
## Notes on the test suite
- The tests try to exercise all features of the library extensively. For this reason, they rely on most or all Cargo features defined in `Cargo.toml`. Consequently, to successfully compile and run all tests, you must pass `cargo test --all-features` to Cargo.
- The `compiletest_rs` crate is used for ensuring that the derive macros detect certain kinds of errors, such as multiple primary keys or table-level constraints that reference non-existent columns. Due to the way the `compiletest_rs` crate is structured, the tests can be somewhat flaky. If you encounter `E0464` errors (e.g., "multiple candidates for `rlib` dependency `nanosql` found"), then run `cargo clean` before running `cargo test compile_fail --all-features`.
TL;DR: the best "lazy" way to run the tests is the `./runtests.sh` script, which just does:
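Judging by the commands referenced in the notes above, that amounts to roughly:

```sh
cargo clean                  # avoid flaky E0464 "multiple candidates" errors
cargo test --all-features    # the tests rely on all Cargo features
```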