# LazySql

- LazySql is a SQLite library for Rust
- Has compile-time guarantees
- Ergonomic
- Fast: automatically caches and reuses prepared statements for you
- Some downsides that may or may not be fixed in the future:
  - It follows an opinionated API design
  - Doesn't support BLOBs
  - Doesn't support batch execution ergonomically; you would need to resort to the `sql!()` or `sql_runtime!()` macro
## Overview

### Installation
Run the following Cargo command in your project directory:

```shell
cargo add lazysql
```

OR

Go to LazySql's crates.io page to get the latest version, then add the following line to your Cargo.toml:

```toml
lazysql = "*" # Replace the "*" with the latest version
```
## Quick Start
```rust
use lazysql::lazy_sql;
```
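As a hedged sketch of how this fits together, assuming `lazy_sql!` takes an inline schema plus `sql!` query fields (the struct, table, and query names, and the exact macro shape, are all assumptions, not the library's confirmed syntax):

```rust
// Hedged sketch: the exact lazy_sql! syntax is an assumption pieced
// together from the features described in this README.
lazy_sql! {
    struct Db {
        // Inline schema (see "Connection methods"). NOT NULL yields
        // concrete Rust types instead of Option<T>; the CHECK constraint
        // makes `active` map to bool.
        schema: "
            CREATE TABLE users (
                id     INTEGER NOT NULL PRIMARY KEY,
                name   TEXT NOT NULL,
                active BOOLEAN NOT NULL CHECK (active IN (0, 1))
            );
        ",
        // sql! infers the `?` as bool and generates a row struct named
        // after the field (roughly GetActiveUsers { id: i64, name: String }).
        get_active_users: sql!("SELECT id, name FROM users WHERE active = ?"),
    }
}
```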
- `LazySql` has some nice QOL features, like hover-to-see-SQL and good IDE support.
- The type inference system and compile-time checks also work well for `JOIN`, `CASE`, CTEs, recursive CTEs, window functions, datetime functions, `RETURNING`, and more complex scenarios. You can even run `PRAGMA` statements with it.
- Since SQLite defaults to nullable columns, the type inference system defaults to `Option<T>`. To use concrete types (e.g., `String` instead of `Option<String>`), explicitly add `NOT NULL` to your table columns.
- You cannot name a field `transaction` in the struct, since it is a reserved method name for transactions. Doing so will result in a compile-time error.
- There will be rare scenarios where a type is impossible to infer. `LazySql` will tell you specifically which binding parameter or expression cannot be inferred, and will suggest using type casting via PostgreSQL's `::` operator or standard SQL's `CAST ... AS`. Note that you can't type cast as `boolean` for now.
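For illustration, a bind parameter with no inferable type can be disambiguated with either cast syntax (the alias name here is hypothetical):

```sql
-- Ambiguous: nothing constrains the type of `?` here.
SELECT ? AS value;

-- Disambiguated with standard SQL CAST:
SELECT CAST(? AS INTEGER) AS value;

-- Or with PostgreSQL-style `::` casting:
SELECT ?::INTEGER AS value;
```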


## Connection methods

lazysql supports three ways to define your schema, depending on your workflow.
### 1. Inline Schema

As seen in the Quick Start: define tables inside the struct.
### 2. SQL File

Point to a `.sql` file. The compile-time checks will be done against this SQL file (ensure that it contains `CREATE TABLE` statements). lazysql watches this file; if you edit it, Rust recompiles automatically to ensure type safety. You don't have to create the tables yourself at runtime; any read/write SQL query gets compile-time guarantees.
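Such a schema file is just ordinary DDL; a minimal example might look like this (the table and columns are illustrative):

```sql
-- schema.sql: the compile-time checks validate queries against these
-- CREATE TABLE definitions.
CREATE TABLE users (
    id     INTEGER NOT NULL PRIMARY KEY,
    name   TEXT NOT NULL,
    active BOOLEAN NOT NULL CHECK (active IN (0, 1)) DEFAULT 1
);
```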
### 3. Live Database

Point to an existing `.db` binary file. lazysql inspects the live database's metadata to validate your queries.
Note: for methods 2 and 3, you can technically `CREATE TABLE` as well, but to ensure those tables are taken into consideration for the compile-time checks, add them at the top of your struct.
## Features
The `lazy_sql!` macro brings the `sql!` and `sql_runtime!` macros into scope, so there is no need to import them. They can only be used within structs defined with `lazy_sql!`.

Note: both `sql!` and `sql_runtime!` accept only a single SQL statement at a time. Chaining multiple queries with semicolons (`;`) is not supported and will result in a compile-time error.
- `sql!` macro. Always prefer to use this. It automatically:
  - Infers inputs: maps `?` to Rust types (`i64`, `f64`, `String`, `bool`).
  - Generates outputs: for `SELECT` queries, creates a struct named after the field.
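Roughly, assuming a hypothetical `users` table with `id INTEGER NOT NULL` and `name TEXT NOT NULL`, the inferred inputs and generated outputs would look like this (the field and generated struct names are assumptions):

```rust
// Hypothetical field inside a lazy_sql! struct:
//     get_user: sql!("SELECT id, name FROM users WHERE id = ?"),
//
// Inputs:  the `?` is compared against users.id, so it is inferred as i64
//          and the generated method takes an i64 argument.
// Outputs: a row struct named after the field, roughly:
//
//     struct GetUser {
//         id: i64,      // INTEGER NOT NULL -> i64
//         name: String, // TEXT NOT NULL    -> String
//     }
```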
- `sql_runtime!` macro
  - Use this only when you need the SQL to be executed at runtime with some compile-time guarantees. It is rarely needed in practice; you will know when you need it.
  - Originally, `sql_runtime!` was intended more as an escape hatch for when you can't use the `sql!` macro due to false positives. False positives are extremely rare; see below for more info. This is why you still have to define structs for `SELECT` statements and specify types for binding parameters for non-`SELECT` statements.
  a. `SELECT`

  You can map a query result to any struct by deriving `SqlMapping`. `SqlMapping` maps columns by index, not by name: the order of fields in your struct must match the order of columns in your `SELECT` statement exactly.

  b. No return type

  For `INSERT`, `UPDATE`, or `DELETE` statements, specify the types of the binding parameters; otherwise you can continue to use it normally.
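A hedged sketch of the `SELECT` case (the struct and derive usage are assumptions based on the description above):

```rust
// Hypothetical: mapping a SELECT run through sql_runtime! onto a struct.
// Field order must match the column order of the SELECT exactly, since
// SqlMapping maps by index, not by name.
use lazysql::SqlMapping;

#[derive(SqlMapping)]
struct UserRow {
    id: i64,      // column 1 of the SELECT
    name: String, // column 2 of the SELECT
}
```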
- PostgreSQL `::` type casting syntax. Note: `bool` type casting is not supported for now.

  ```rust
  sql!("SELECT price::TEXT FROM items")
  // Compiles to:
  // "SELECT CAST(price AS TEXT) FROM items"
  ```
- `all()` and `first()` methods for iterators
  - `all()` collects the iterator into a vector. It is just a lightweight wrapper around `.collect()` that saves you from writing type hints (`Vec<_>`) in code:

    ```rust
    let results = db.get_active_users()?;
    let collected_results = results.all()?; // returns a Vec of owned results from the returned rows
    ```

  - `first()` returns the first row if available, or `None` if the query returned no results:

    ```rust
    let results = db.get_active_users()?;
    let first_result = results.first()?.unwrap(); // returns the first row from the returned rows
    ```
## Transactions

- Note: you cannot name a field `transaction` in the struct, since it is a reserved method name. Doing so will result in a compile-time error.
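A hedged sketch of how the reserved `transaction` method might be used; the `transaction` name is stated above, but the commit step and the query method are assumptions:

```rust
// Hypothetical transaction usage: method names other than `transaction`
// are assumptions, not the library's confirmed API.
let tx = db.transaction()?;
tx.insert_user("alice")?; // hypothetical query method defined via sql!
tx.commit()?;
```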
## Type Mapping

| SQLite Context | Rust Type | Notes |
|---|---|---|
| `TEXT` | `String` / `&str` | - |
| `INTEGER` | `i64` | - |
| `REAL` | `f64` | Includes `FLOAT`, `DOUBLE` |
| `BOOLEAN` | `bool` | Requires `CHECK (col IN (0,1))` or `CHECK (col = 0 OR col = 1)`. You could technically use `BOOL` or `BOOLEAN` as the data type when creating the table (due to SQLite's flexible typing) and it would work as well, but this is discouraged |
| Nullable | `Option<T>` | Returned whenever a column or expression can be `NULL`. It's recommended to use `NOT NULL` when creating tables so that, ergonomically, you don't always have to wrap parameters in `Some(T)` |
## Dynamic runtime features

- It is strongly recommended to use the `sql!` macro for most use cases. Dynamic runtime features are only needed in rare scenarios.

### How is this different from sql_runtime!?

`sql_runtime!` is intended more as an escape hatch for when you can't use the `sql!` macro due to false positives. False positives are extremely rare; see below for more info. This is why you still have to define structs for `SELECT` statements and specify types for binding parameters for non-`SELECT` statements.
### Runtime Features

- Dynamic runtime features happen fully at runtime.

```rust
use lazysql::LazyConnection;
```
### Transactions at Runtime

```rust
use lazysql::LazyConnection;
```
## Notes

### Strict INSERT Validation
- Although standard SQL allows inserting any subset of columns into a table, lazysql checks `INSERT` statements at compile time. If you omit any column (except `AUTOINCREMENT` and `DEFAULT` columns), the code will fail to compile. This means you must either specify all columns explicitly, or use implicit insertion for all columns. This is done to prevent certain runtime errors such as `NOT NULL constraint failed`.
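As a hedged illustration of this rule with a hypothetical table:

```sql
-- Hypothetical table:
--   CREATE TABLE users (
--       id   INTEGER PRIMARY KEY AUTOINCREMENT,
--       name TEXT NOT NULL,
--       bio  TEXT DEFAULT ''
--   );

INSERT INTO users (name) VALUES (?);         -- OK: id is AUTOINCREMENT, bio has a DEFAULT
INSERT INTO users (name, bio) VALUES (?, ?); -- OK: all remaining columns specified
INSERT INTO users (bio) VALUES (?);          -- Fails to compile: `name` omitted with no DEFAULT
```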
### False positives during compile-time checks

- I tried my best to support as many SQL and SQLite-specific queries as possible.
- This isn't easy with SQLite, since it doesn't provide any API for type inference or schema-aware validation.
- In the extremely rare case of a false positive (valid SQL syntax fails, or type inference incorrectly fails), you can fall back to the `sql_runtime!` macro. I would appreciate it if you could open an issue as well.
### Cannot type cast as Boolean

- This is a limitation of SQLite, since it doesn't natively have a `boolean` type. I may find a workaround in the future, but it's not guaranteed. For now, if you want to type cast as `bool`, you have to cast to an `integer` and use either 1 (true) or 0 (false).
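For instance, the workaround looks like this (the alias is illustrative):

```sql
-- Not supported for now:
SELECT ?::BOOLEAN AS flag;

-- Workaround: cast to INTEGER and use 1 (true) or 0 (false) as the value:
SELECT ?::INTEGER AS flag; -- bind 1 for true, 0 for false
```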
## TODOs

- upsert
- the check_constraint field in SELECT is ignored for now; it may be used in the future
- can't cast as bool
- BLOBs
- bulk insert
- BEGIN IMMEDIATE