# Aerospike Rust Client
Welcome to the preview of Aerospike's official Rust client. This is your opportunity to help shape the direction of the Rust client's ongoing development.
This early-release library brings async-native database operations to Rust developers, with support for batch updates and partition queries. We welcome your feedback as we work toward production readiness.
## Feature highlights
Execution models:
- Async-First: Built for non-blocking IO, powered by Tokio by default, with optional support for async-std.
- Sync Support: Blocking APIs are available using a sync sub-crate for flexibility in legacy or mixed environments.
Advanced data operations:
- Batch protocol: full support for read, write, delete, and UDF operations through the new `BatchOperation` API.
- New query wire protocols: implements updated query protocols for improved consistency and performance.
Policy and expression enhancements:
- Replica policies: includes support for `Replica` policies, including `PreferRack` placement.
- Policy additions: new fields such as `allow_inline_ssd` and `respond_all_keys` in `BatchPolicy`, `read_touch_ttl`, and `QueryDuration` in `QueryPolicy`.
- Rate limiting: supports `records_per_second` for query throttling.
Data model improvements:
- Type support: adds support for boolean particle type.
- New data constructs: return types such as `Exists`, `OrderedMap`, and `UnorderedMap` are now supported for CDT reads.
- Value conversions: implements `TryFrom<aerospike::Value>` for seamless type interoperability.
- Infinity and wildcard: supports `Infinity`, `Wildcard`, and the corresponding expression builders `expressions::infinity()` and `expressions::wildcard()`.
- Size expressions: adds `expressions::record_size()` and `expressions::memory_size()` for granular control.
Take a look at the changelog for more details.
## What’s coming next?
We are working toward full functional parity with our other officially supported clients. Features on the roadmap include:
- Partition queries
- Distributed ACID transactions
- Strong consistency
- Full TLS support for secure, production-ready deployments
## Getting started
Prerequisites:
- Aerospike Database 6.4 or later
- Rust 1.75 or later
- Tokio or async-std runtime
### Installation
- Build from source:

  ```sh
  git clone --single-branch --branch v2 https://github.com/aerospike/aerospike-client-rust.git
  cd aerospike-client-rust
  ```

- Add the following to your `Cargo.toml` file:

  ```toml
  [dependencies]
  # Async API with the tokio runtime
  aerospike = { version = "<version>", features = ["rt-tokio"] }
  # OR
  # Async API with the async-std runtime
  aerospike = { version = "<version>", features = ["rt-async-std"] }

  # The library still supports the old sync interface, but only for
  # compatibility; it will be deprecated and removed in a later release.
  # Sync API with tokio
  aerospike = { version = "<version>", default-features = false, features = ["rt-tokio", "sync"] }
  # OR
  # Sync API with async-std
  aerospike = { version = "<version>", default-features = false, features = ["rt-async-std", "sync"] }
  ```

- Run the following command:

  ```sh
  cargo build
  ```
## Core feature examples
The following code examples demonstrate some of the Rust client's new features.
### Client connection

#### Standard connection
Connect to an Aerospike cluster without TLS:
```rust
use aerospike::{Client, ClientPolicy};

// Reconstructed sketch: the environment-variable name and fallback address
// are assumptions.
let policy = ClientPolicy::default();
let hosts = std::env::var("AEROSPIKE_HOSTS")
    .unwrap_or(String::from("127.0.0.1:3000"));
let client = Client::new(&policy, &hosts).await
    .expect("Failed to connect to cluster");
```
#### TLS connection without client authentication
Connect to an Aerospike cluster with TLS but without client certificate authentication:
```rust
use aerospike::{Client, ClientPolicy};
use rustls::RootCertStore;
use rustls::pki_types::CertificateDer;

// Sketch: the construction of the TLS configuration was elided in the
// original snippet. Conceptually: load the CA certificates (CertificateDer)
// that signed the server certificate into a RootCertStore, then build the
// client TLS configuration from that store.
let mut policy = ClientPolicy::default();
policy.tls_config = Some(tls_config); // tls_config built from the root store

let hosts = "tls-cluster.example.com:4333";
let client = Client::new(&policy, &hosts).await
    .expect("Failed to connect to cluster");
```
#### TLS connection with client authentication
Connect to an Aerospike cluster with TLS and mutual authentication using client certificates:
```rust
use aerospike::{Client, ClientPolicy};
use rustls::RootCertStore;
use rustls::pki_types::{CertificateDer, PrivateKeyDer};

// Sketch: the construction of the TLS configuration was elided in the
// original snippet. Conceptually: in addition to the CA root store, load the
// client certificate (CertificateDer) and private key (PrivateKeyDer) so the
// client can authenticate itself to the server.
let mut policy = ClientPolicy::default();
policy.tls_config = Some(tls_config); // tls_config with client cert and key

let hosts = "tls-cluster.example.com:4333";
let client = Client::new(&policy, &hosts).await
    .expect("Failed to connect to cluster");
```
Note: To use TLS features, enable the `tls` feature in your `Cargo.toml`:

```toml
[dependencies]
aerospike = { version = "...", features = ["tls"] }
```
### CRUD operations

```rust
// Reconstructed sketch of the basic CRUD flow; some call arguments were lost
// from the original snippet and are shown with illustrative values.
#[macro_use]
extern crate aerospike;
extern crate tokio;

use std::env;
use std::time::Instant;

use aerospike::operations;
use aerospike::{Bins, Client, ClientPolicy, ReadPolicy, WritePolicy};

#[tokio::main]
async fn main() {
    let cpolicy = ClientPolicy::default();
    let hosts = env::var("AEROSPIKE_HOSTS").unwrap_or(String::from("127.0.0.1:3000"));
    let client = Client::new(&cpolicy, &hosts)
        .await
        .expect("Failed to connect to cluster");

    let now = Instant::now();
    let rpolicy = ReadPolicy::default();
    let wpolicy = WritePolicy::default();
    let key = as_key!("test", "test", "test");
    let bins = [as_bin!("int", 999), as_bin!("str", "Hello, World!")];

    // Create, then read back the record.
    client.put(&wpolicy, &key, &bins).await.unwrap();
    let rec = client.get(&rpolicy, &key, Bins::All).await;
    println!("Record: {}", rec.unwrap());

    // Apply multiple operations atomically on the same record.
    let bin = as_bin!("int", 123);
    let ops = &vec![operations::put(&bin), operations::get()];
    let op_rec = client.operate(&wpolicy, &key, ops).await;
    println!("operate: {}", op_rec.unwrap());

    // Delete the record.
    let existed = client.delete(&wpolicy, &key).await.unwrap();
    println!("deleted: {}", existed);
    println!("total time: {:?}", now.elapsed());
}
```
### Batch operations
```rust
// Reconstructed sketch: several constructor calls were lost from the original
// snippet, so the batch-record shapes below are shown schematically. See
// examples/batch_operations.rs for the complete working code.
let mut bpolicy = BatchPolicy::default();
let apolicy = WritePolicy::default();

// Register a small echo UDF for the batch UDF operations below.
let udf_body = r#"
function echo(rec, val)
    return val
end
"#;
let task = client
    .register_udf(&apolicy, udf_body.as_bytes(), "test_udf.lua", UDFLang::Lua)
    .await
    .unwrap();
task.wait_till_complete(None).await.unwrap();

let bin1 = as_bin!("a", 1);
let bin2 = as_bin!("b", 2);
let bin3 = as_bin!("c", 3);

let key1 = as_key!("test", "test", 1);
let key2 = as_key!("test", "test", 2);
let key3 = as_key!("test", "test", 3);
let key4 = as_key!("test", "test", 4); // key does not exist

let selected = Bins::from(["a"]);
let all = Bins::All;
let none = Bins::None;

let wops = vec![operations::put(&bin2), operations::get()];
let rops = vec![operations::get_bin("a")];

let bpr = BatchReadPolicy::default();
let bpw = BatchWritePolicy::default();
let bpd = BatchDeletePolicy::default();
let bpu = BatchUdfPolicy::default();

// WRITE operations
let batch = vec![/* one batch-write record per key, built from `bpw` and `wops` */];
let mut results = client.batch(&bpolicy, batch).await.unwrap();
dbg!(&results);

// READ operations
let batch = vec![/* batch-read records built from `bpr`, the keys, and `selected`/`all`/`none` */];
let mut results = client.batch(&bpolicy, batch).await.unwrap();
dbg!(&results);

// DELETE operations
let batch = vec![/* batch-delete records built from `bpd` and the keys */];
let mut results = client.batch(&bpolicy, batch).await.unwrap();
dbg!(&results);

// UDF operations
let args1 = &[as_val!(1)];
let args2 = &[as_val!(2)];
let args3 = &[as_val!(3)];
let args4 = &[as_val!(4)];
let batch = vec![/* batch-UDF records calling `echo` with `args1`..`args4`, built from `bpu` */];
let mut results = client.batch(&bpolicy, batch).await.unwrap();
dbg!(&results);
```
A complete working example can be found in `examples/batch_operations.rs`.
### Query operations
The Rust client supports various query patterns for retrieving data from Aerospike. Below are examples demonstrating different query capabilities.
#### Simple equality query
Query records where a bin equals a specific value:
```rust
// Sketch: filter values and the exact query signature are illustrative;
// see examples/query.rs for the working version.
use aerospike::{as_eq, Bins, QueryPolicy, Statement};
use aerospike::query::PartitionFilter;
use futures::StreamExt;

let policy = QueryPolicy::default();
let mut stmt = Statement::new("test", "test", Bins::All);
stmt.add_filter(as_eq!("bin1", 1));

let rs = client.query(&policy, PartitionFilter::all(), stmt).await.unwrap();
let mut rs = rs.into_stream();
while let Some(rec) = rs.next().await {
    println!("{:?}", rec);
}
```
#### Range query
Query records where a bin value falls within a range:
```rust
// Sketch: range bounds are illustrative; see examples/query.rs.
let policy = QueryPolicy::default();
let mut stmt = Statement::new("test", "test", Bins::All);
stmt.add_filter(as_range!("bin1", 10, 20));

let rs = client.query(&policy, PartitionFilter::all(), stmt).await.unwrap();
let mut rs = rs.into_stream();
while let Some(rec) = rs.next().await {
    println!("{:?}", rec);
}
```
#### Metadata-only query
Query records but only retrieve metadata (no bin data):
```rust
// Sketch: requesting Bins::None returns record metadata without bin data.
let policy = QueryPolicy::default();
let mut stmt = Statement::new("test", "test", Bins::None);
stmt.add_filter(as_eq!("bin1", 1));

let rs = client.query(&policy, PartitionFilter::all(), stmt).await.unwrap();
let mut rs = rs.into_stream();
while let Some(rec) = rs.next().await {
    println!("{:?}", rec); // metadata only: generation, TTL, etc.
}
```
#### Cursor-based pagination
Query records in batches using partition cursors for pagination:
```rust
// Sketch: re-issue the query with the same partition filter until every
// partition has been consumed; exact method names may differ.
let policy = QueryPolicy::default();
let mut pf = PartitionFilter::all();
while !pf.done() {
    let stmt = Statement::new("test", "test", Bins::All);
    let rs = client.query(&policy, pf.clone(), stmt).await.unwrap();
    let mut rs = rs.into_stream();
    while let Some(rec) = rs.next().await {
        println!("{:?}", rec);
    }
    // `pf` tracks the cursor position, advancing as pages complete.
}
```
#### Parallel query with multiple consumers
Process query results in parallel using multiple async tasks:
```rust
// Sketch: assumes the record-set handle can be shared across tasks (e.g. via
// clone), with each consumer pulling from the shared result stream.
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use futures::future::join_all;
use futures::StreamExt;

let policy = QueryPolicy::default();
let mut stmt = Statement::new("test", "test", Bins::All);
stmt.add_filter(as_eq!("bin1", 1));

let rs = client.query(&policy, PartitionFilter::all(), stmt).await.unwrap();
let count = Arc::new(AtomicUsize::new(0));
let mut handles = vec![];

// Spawn 4 worker tasks to process results in parallel
for _ in 0..4 {
    let mut stream = rs.clone().into_stream();
    let count = count.clone();
    handles.push(tokio::spawn(async move {
        while let Some(_rec) = stream.next().await {
            count.fetch_add(1, Ordering::Relaxed);
        }
    }));
}
join_all(handles).await;
println!("processed {} records", count.load(Ordering::Relaxed));
```
#### Query with expression filter
Use filter expressions for more complex filtering logic:
```rust
// Sketch: the expression below (bin1 >= 100) is illustrative; builder names
// follow the expressions module described in the feature list.
use aerospike::expressions::{ge, int_bin, int_val};

let mut policy = QueryPolicy::default();
policy
    .base_policy
    .filter_expression
    .replace(ge(int_bin("bin1".to_string()), int_val(100)));

let stmt = Statement::new("test", "test", Bins::All);
let rs = client.query(&policy, PartitionFilter::all(), stmt).await.unwrap();
let mut rs = rs.into_stream();
while let Some(rec) = rs.next().await {
    println!("{:?}", rec);
}
```
#### Rate-limited query
Control query throughput by limiting records per second:
```rust
let mut policy = QueryPolicy::default();
policy.records_per_second = 100; // Limit to 100 records/second

let mut stmt = Statement::new("test", "test", Bins::All);
stmt.add_filter(as_eq!("bin1", 1));

let rs = client.query(&policy, PartitionFilter::all(), stmt).await.unwrap();
let mut rs = rs.into_stream();
while let Some(rec) = rs.next().await {
    println!("{:?}", rec);
}
```
#### Prerequisites for queries
Before running queries, you need to create a secondary index on the bin you want to query:
```rust
// Sketch: namespace, set, bin, and index names are illustrative.
use aerospike::{IndexType, WritePolicy};

let policy = WritePolicy::default();
let task = client
    .create_index_on_bin(&policy, "test", "test", "bin1", "bin1_index", IndexType::Numeric)
    .await
    .expect("Failed to create index");

// Wait for index creation to complete
task.wait_till_complete(None).await.unwrap();
```
For a complete working example with all query patterns, see `examples/query.rs`.
### Timeout configuration
The Rust client provides flexible timeout configuration through socket_timeout and total_timeout parameters in policies. Understanding how these interact is crucial for handling network issues and controlling command execution time.
#### Timeout parameters
- `socket_timeout`: socket idle timeout when processing a database command, in milliseconds. Default: 5000 (5 seconds).
- `total_timeout`: total command timeout, including retries, in milliseconds. Default: 0 (no limit).
#### Timeout behavior rules
- Both zero (0, 0): no timeout limits; commands wait indefinitely
- Socket zero, total non-zero (0, N): `socket_timeout` inherits the `total_timeout` value
- Socket non-zero, total zero (N, 0): socket idle timeout of N ms, no total limit
- Both non-zero, socket > total (N, M where N > M): `socket_timeout` is capped at `total_timeout`
- Both non-zero, socket ≤ total (N, M where N ≤ M): both timeouts enforced independently
When a socket timeout occurs, the client checks max_retries and total_timeout. If neither is exceeded, the command is automatically retried.
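The rules above can be condensed into a small pure function. This sketch is illustrative only: `effective_timeouts` is not part of the client API; it just models how the two settings combine, with `None` meaning "no limit":

```rust
// Illustrative helper (not part of the client API): computes the effective
// (socket, total) timeouts that the rules above produce.
fn effective_timeouts(socket_ms: u64, total_ms: u64) -> (Option<u64>, Option<u64>) {
    match (socket_ms, total_ms) {
        (0, 0) => (None, None),                // both zero: wait indefinitely
        (0, t) => (Some(t), Some(t)),          // socket inherits total
        (s, 0) => (Some(s), None),             // socket idle timeout only
        (s, t) if s > t => (Some(t), Some(t)), // socket capped at total
        (s, t) => (Some(s), Some(t)),          // both enforced independently
    }
}

fn main() {
    println!("{:?}", effective_timeouts(5000, 0));    // socket-only limit
    println!("{:?}", effective_timeouts(8000, 3000)); // socket capped at 3000
}
```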
The Rust client exposes these parameters through the read/write policies, and they can be tuned as shown below:
```rust
// Sketch: `key` and the bin selection are as in the CRUD example above.
use aerospike::{Bins, ReadPolicy};

let mut policy = ReadPolicy::default();
policy.base_policy.socket_timeout = 0; // no socket idle timeout
policy.base_policy.total_timeout = 0;  // no total command timeout

let rec = client.get(&policy, &key, Bins::All).await;
```
#### Socket recovery with `timeout_delay`
The timeout_delay parameter controls how the client handles sockets after a read timeout. This is particularly important for cloud deployments.
```rust
let mut policy = ReadPolicy::default();
policy.base_policy.socket_timeout = 2000; // 2 second socket timeout
policy.base_policy.total_timeout = 10000; // 10 second total timeout
policy.base_policy.timeout_delay = 3000;  // 3 second delay for socket recovery

let rec = client.get(&policy, &key, Bins::All).await;
```
How `timeout_delay` works:

- When `timeout_delay = 0` (default): the socket is immediately closed on timeout.
- When `timeout_delay > 0`: after a socket read timeout, the client attempts to drain remaining data from the socket in the background for up to `timeout_delay` milliseconds.
  - If all data is drained within the delay: the socket is returned to the connection pool (reusable).
  - If the delay expires before draining completes: the socket is closed.
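As a rough mental model of the drain decision, the sketch below reduces it to numbers. Everything here is hypothetical: `socket_reusable`, the byte count, and the drain rate are not client APIs; the real client drains the live socket in the background.

```rust
// Illustrative model (not client code): after a read timeout, a socket is
// reusable only if its pending server data can be drained within
// timeout_delay; with timeout_delay == 0 the socket is closed immediately.
fn socket_reusable(bytes_remaining: u64, drain_rate_per_ms: u64, timeout_delay_ms: u64) -> bool {
    if timeout_delay_ms == 0 {
        return false; // default behavior: close immediately on timeout
    }
    // Reusable only if all pending data drains before the delay expires.
    bytes_remaining <= drain_rate_per_ms.saturating_mul(timeout_delay_ms)
}

fn main() {
    println!("{}", socket_reusable(1024, 10, 0));    // closed immediately
    println!("{}", socket_reusable(1024, 10, 3000)); // drained in time: reusable
}
```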
Why use `timeout_delay`?
Many cloud providers experience performance issues when clients close sockets while the server still has data to write (results in TCP RST packets). Draining the socket before closing avoids this penalty.
Trade-offs:
- ✓ Avoids TCP RST performance penalties on cloud platforms
- ✓ Allows socket reuse when recovery is successful
- ✗ Requires extra processing to drain sockets
- ✗ May need additional connections for command retries during recovery
Recommended value: if enabling `timeout_delay`, 3000 ms (3 seconds) is a reasonable starting point.
For a complete working example demonstrating timeout scenarios, see `examples/timeout_configuration.rs`.
## Feedback wanted
We need your help with:
- Real-world async patterns in your codebase
- Ergonomic pain points in API design
You’re not just testing this new client - you’re shaping the future of Rust in databases!
You can reach us through GitHub Issues or schedule a meeting to speak directly with our product team using this scheduling link.