pub struct Collection { /* private fields */ }
A collection represents a namespace for documents in the Sentinel database.
Collections are backed by filesystem directories, where each document is stored as a JSON file with metadata including version, timestamps, hash, and optional signature. The collection provides CRUD operations (Create, Read, Update, Delete) and advanced querying capabilities with streaming support for memory-efficient handling of large datasets.
§Structure
Each collection is stored in a directory with the following structure:
- {collection_name}/ - Root directory for the collection
- {collection_name}/{id}.json - Individual document files with embedded metadata
- {collection_name}/.deleted/ - Soft-deleted documents (for recovery)
- {collection_name}/.metadata.json - Collection metadata and indices (future)
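The layout above can be sketched as plain path construction. This is an illustrative, std-only stand-in (the function names `document_path` and `deleted_path` are hypothetical, not the crate's internals):

```rust
use std::path::PathBuf;

// Sketch of the on-disk layout described above; names are illustrative,
// not the crate's actual implementation.
fn document_path(root: &str, collection: &str, id: &str) -> PathBuf {
    // {collection_name}/{id}.json
    PathBuf::from(root).join(collection).join(format!("{}.json", id))
}

fn deleted_path(root: &str, collection: &str, id: &str) -> PathBuf {
    // {collection_name}/.deleted/{id}.json holds soft-deleted documents
    PathBuf::from(root)
        .join(collection)
        .join(".deleted")
        .join(format!("{}.json", id))
}

fn main() {
    let p = document_path("/tmp/sentinel", "users", "user-123");
    println!("{}", p.display());
}
```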
§Streaming Operations
For memory efficiency with large datasets, operations like filter() and query()
return async streams that process documents one-by-one rather than loading
all documents into memory simultaneously.
§Example
use sentinel_dbms::{Store, Collection};
use futures::TryStreamExt;
use serde_json::json;
// Create a store and get a collection
let store = Store::new("/tmp/sentinel", None).await?;
let collection = store.collection("users").await?;
// Insert a document
let user_data = json!({
"name": "Alice",
"email": "alice@example.com",
"age": 30
});
collection.insert("user-123", user_data).await?;
// Retrieve the document
let doc = collection.get("user-123").await?;
assert!(doc.is_some());
assert_eq!(doc.unwrap().id(), "user-123");
// Stream all documents matching a predicate
let adults = collection.filter(|doc| {
doc.data().get("age")
.and_then(|v| v.as_i64())
.map_or(false, |age| age >= 18)
});
let adult_docs: Vec<_> = adults.try_collect().await?;
assert_eq!(adult_docs.len(), 1);
§Implementations
impl Collection
pub async fn aggregate(&self, filters: Vec<Filter>, aggregation: Aggregation) -> Result<Value>
Performs aggregation operations on documents matching the given filters.
Supported aggregations:
- Count: Count of matching documents
- Sum(field): Sum of numeric values in the specified field
- Avg(field): Average of numeric values in the specified field
- Min(field): Minimum value in the specified field
- Max(field): Maximum value in the specified field
§Arguments
filters - Filters to apply before aggregation
aggregation - The aggregation operation to perform
§Returns
Returns the aggregated result as a JSON Value.
§Examples
use sentinel_dbms::{Store, Collection, Filter, Aggregation};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("products").await?;
// Insert some test data
collection.insert("prod-1", json!({"name": "Widget", "price": 10.0})).await?;
collection.insert("prod-2", json!({"name": "Gadget", "price": 20.0})).await?;
// Count all products
let count = collection.aggregate(vec![], Aggregation::Count).await?;
assert_eq!(count, json!(2));
// Sum of all prices
let total = collection.aggregate(vec![], Aggregation::Sum("price".to_string())).await?;
assert_eq!(total, json!(30.0));
impl Collection
pub const fn created_at(&self) -> DateTime<Utc>
Returns the creation timestamp of the collection.
pub fn updated_at(&self) -> DateTime<Utc>
Returns the last update timestamp of the collection.
pub fn last_checkpoint_at(&self) -> Option<DateTime<Utc>>
Returns the last checkpoint timestamp of the collection, if any.
pub fn total_documents(&self) -> u64
Returns the total number of documents in the collection.
pub fn total_size_bytes(&self) -> u64
Returns the total size of all documents in the collection in bytes.
pub const fn stored_wal_config(&self) -> &CollectionWalConfig
Returns a reference to the stored WAL configuration for this collection.
This is the WAL configuration as persisted in the collection metadata, without any temporary overrides that may be applied at runtime.
pub const fn wal_config(&self) -> &CollectionWalConfig
Returns the effective WAL configuration for this collection.
This includes the stored configuration plus any runtime overrides that may have been applied when the collection was accessed.
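The stored-versus-effective distinction can be sketched as an override lookup. This is a minimal, hypothetical stand-in (the fields shown on `CollectionWalConfig` and the `runtime_override` mechanism are assumptions, not the crate's real types):

```rust
// Illustrative sketch of "stored vs. effective" configuration resolution.
// `CollectionWalConfig` here is a stand-in, not the crate's real type.
#[derive(Clone, Debug, PartialEq)]
struct CollectionWalConfig {
    enabled: bool,
    sync_on_write: bool,
}

struct Collection {
    stored_wal_config: CollectionWalConfig,
    runtime_override: Option<CollectionWalConfig>,
}

impl Collection {
    // Mirrors `stored_wal_config()`: the persisted configuration only.
    fn stored_wal_config(&self) -> &CollectionWalConfig {
        &self.stored_wal_config
    }

    // Mirrors `wal_config()`: a runtime override wins when present.
    fn wal_config(&self) -> &CollectionWalConfig {
        self.runtime_override.as_ref().unwrap_or(&self.stored_wal_config)
    }
}

fn main() {
    let c = Collection {
        stored_wal_config: CollectionWalConfig { enabled: true, sync_on_write: false },
        runtime_override: Some(CollectionWalConfig { enabled: true, sync_on_write: true }),
    };
    // The override changes the effective config but not the stored one.
    assert!(!c.stored_wal_config().sync_on_write);
    assert!(c.wal_config().sync_on_write);
}
```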
pub async fn save_metadata(&self) -> Result<()>
Saves the current collection metadata to disk.
This method persists the collection’s current state (document count, size, timestamps,
and WAL configuration) to the .metadata.json file in the collection directory. This
ensures that metadata remains consistent across restarts and can be used for monitoring
and optimization.
§Returns
Returns Ok(()) on success, or a SentinelError if the metadata cannot be saved.
pub async fn flush_metadata(&self) -> Result<()>
Flushes any pending metadata changes to disk immediately.
This method forces a synchronous save of the collection metadata to disk, bypassing the normal debounced save mechanism. This is useful for tests and for ensuring data durability when needed.
§Returns
Returns Ok(()) on success, or a SentinelError if the metadata cannot be saved.
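The debounce-versus-flush behavior described above can be sketched with a timestamp check. This is a simplified, synchronous stand-in (the `MetadataSaver` type and its fields are hypothetical; the real crate persists asynchronously via its event processor):

```rust
use std::time::{Duration, Instant};

// Minimal sketch of a debounced-save policy: changes are coalesced and only
// persisted once the debounce window elapses, while `flush()` forces an
// immediate save. Names are illustrative, not the crate's internals.
struct MetadataSaver {
    debounce: Duration,
    last_save: Option<Instant>,
    dirty: bool,
    saves: u32, // counts actual persists, for demonstration
}

impl MetadataSaver {
    fn new(debounce: Duration) -> Self {
        Self { debounce, last_save: None, dirty: false, saves: 0 }
    }

    // Called on every metadata change; persists only if the window elapsed.
    fn mark_dirty(&mut self, now: Instant) {
        self.dirty = true;
        let due = match self.last_save {
            None => true,
            Some(t) => now.duration_since(t) >= self.debounce,
        };
        if due {
            self.persist(now);
        }
    }

    // Like `flush_metadata`: bypasses the debounce window.
    fn flush(&mut self, now: Instant) {
        if self.dirty {
            self.persist(now);
        }
    }

    fn persist(&mut self, now: Instant) {
        self.saves += 1;
        self.dirty = false;
        self.last_save = Some(now);
    }
}

fn main() {
    let start = Instant::now();
    let mut saver = MetadataSaver::new(Duration::from_millis(500));
    saver.mark_dirty(start);                              // first change saves
    saver.mark_dirty(start + Duration::from_millis(100)); // within window: deferred
    saver.flush(start + Duration::from_millis(150));      // forced save
    assert_eq!(saver.saves, 2);
}
```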
pub fn validate_document_id(id: &str) -> Result<()>
Validates a document ID according to filesystem-safe naming rules.
Document IDs must be filesystem-safe and cannot contain reserved characters or Windows reserved names. This prevents issues with file operations and ensures cross-platform compatibility.
§Arguments
id - The document ID to validate.
§Returns
Returns Ok(()) if the ID is valid, or a SentinelError::InvalidDocumentId
if the ID contains invalid characters or is a reserved name.
§Validation Rules
- Must not be empty
- Must not contain path separators (/ or \)
- Must not contain control characters (0x00-0x1F)
- Must not contain Windows reserved characters (< > : " | ? *)
- Must not be a Windows reserved name (CON, PRN, AUX, NUL, COM1-9, LPT1-9)
- Must not contain spaces or other filesystem-unsafe characters
§Examples
use sentinel_dbms::Collection;
// Valid IDs
assert!(Collection::validate_document_id("user-123").is_ok());
assert!(Collection::validate_document_id("my_document").is_ok());
// Invalid IDs
assert!(Collection::validate_document_id("").is_err()); // empty
assert!(Collection::validate_document_id("path/file").is_err()); // path separator
assert!(Collection::validate_document_id("CON").is_err()); // reserved name
pub fn start_event_processor(&mut self)
Starts the background event processing task for the collection.
This method spawns an async task that processes internal collection events such as metadata updates and WAL operations. The task runs in the background and handles events sent via the event channel.
The event processor is responsible for:
- Processing document events (insert, update, delete)
- Debounced metadata persistence (every 500ms)
- Coordinating with the store’s event system
§Note
This method should only be called once during collection initialization. Multiple calls will replace the previous event task.
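The processing loop described above can be sketched with a channel and a background worker. This stand-in uses a std thread and `mpsc` channel in place of the crate's async task, and the `StoreEvent` variants shown are hypothetical:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative sketch of a background event processor: events arrive on a
// channel and are handled off the calling thread. Variants are hypothetical.
#[derive(Debug)]
enum StoreEvent {
    Inserted(String),
    Deleted(String),
    Shutdown,
}

fn start_event_processor() -> (mpsc::Sender<StoreEvent>, thread::JoinHandle<u32>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let mut processed = 0u32;
        // Drain events until shutdown; a real processor would also perform
        // debounced metadata persistence here.
        for event in rx {
            match event {
                StoreEvent::Inserted(_) | StoreEvent::Deleted(_) => processed += 1,
                StoreEvent::Shutdown => break,
            }
        }
        processed
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = start_event_processor();
    tx.send(StoreEvent::Inserted("user-123".into())).unwrap();
    tx.send(StoreEvent::Deleted("user-123".into())).unwrap();
    tx.send(StoreEvent::Shutdown).unwrap();
    assert_eq!(handle.join().unwrap(), 2);
}
```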
pub fn emit_event(&self, event: StoreEvent)
Emits an event to the store’s event system.
This is an internal method used to notify the store of collection-level events such as document insertions, updates, and deletions. The events are sent asynchronously and do not block the calling operation.
§Arguments
event - The event to emit to the store.
impl Collection
pub async fn insert(&self, id: &str, data: Value) -> Result<()>
Inserts a new document into the collection or overwrites an existing one.
The document is serialized to pretty-printed JSON and written to a file named
{id}.json within the collection’s directory. If a document with the same ID
already exists, it will be overwritten.
§Arguments
id - A unique identifier for the document. This will be used as the filename (with .json extension). Must be filesystem-safe.
data - The JSON data to store. Can be any valid serde_json::Value.
§Returns
Returns Ok(()) on success, or a SentinelError if the operation fails
(e.g., filesystem errors, serialization errors).
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
let user = json!({
"name": "Alice",
"email": "alice@example.com",
"age": 30
});
collection.insert("user-123", user).await?;
pub async fn get(&self, id: &str) -> Result<Option<Document>>
Retrieves a document from the collection by its ID.
Reads the JSON file corresponding to the given ID and deserializes it into
a Document struct. If the document doesn’t exist, returns None.
By default, this method verifies both hash and signature with strict mode.
Use get_with_verification() to customize verification behavior.
§Arguments
id - The unique identifier of the document to retrieve.
§Returns
Returns:
- Ok(Some(Document)) if the document exists and was successfully read
- Ok(None) if the document doesn't exist (file not found)
- Err(SentinelError) if there was an error reading or parsing the document
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert a document first
collection.insert("user-123", json!({"name": "Alice"})).await?;
// Retrieve the document (with verification enabled by default)
let doc = collection.get("user-123").await?;
assert!(doc.is_some());
assert_eq!(doc.unwrap().id(), "user-123");
// Try to get a non-existent document
let missing = collection.get("user-999").await?;
assert!(missing.is_none());
pub async fn get_with_verification(&self, id: &str, options: &VerificationOptions) -> Result<Option<Document>>
Retrieves a document from the collection by its ID with custom verification options.
Reads the JSON file corresponding to the given ID and deserializes it into
a Document struct. If the document doesn’t exist, returns None.
§Arguments
id - The unique identifier of the document to retrieve.
options - Verification options controlling hash and signature verification.
§Returns
Returns:
- Ok(Some(Document)) if the document exists and was successfully read
- Ok(None) if the document doesn't exist (file not found)
- Err(SentinelError) if there was an error reading, parsing, or verifying the document
§Example
use sentinel_dbms::{Store, Collection, VerificationMode, VerificationOptions};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert a document first
collection.insert("user-123", json!({"name": "Alice"})).await?;
// Retrieve with warning mode instead of strict
let options = VerificationOptions {
verify_signature: true,
verify_hash: true,
signature_verification_mode: VerificationMode::Warn,
empty_signature_mode: VerificationMode::Warn,
hash_verification_mode: VerificationMode::Warn,
};
let doc = collection.get_with_verification("user-123", &options).await?;
assert!(doc.is_some());
pub async fn delete(&self, id: &str) -> Result<()>
Deletes a document from the collection (soft delete).
Moves the JSON file corresponding to the given ID to a .deleted/ subdirectory
within the collection. This implements soft deletes, allowing for recovery
of accidentally deleted documents. The .deleted/ directory is created
automatically if it doesn’t exist.
If the document doesn’t exist, the operation succeeds silently (idempotent).
§Arguments
id - The unique identifier of the document to delete.
§Returns
Returns Ok(()) on success (including when the document doesn’t exist),
or a SentinelError if the operation fails due to filesystem errors.
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert a document
collection.insert("user-123", json!({"name": "Alice"})).await?;
// Soft delete the document
collection.delete("user-123").await?;
// Document is no longer accessible via get()
let doc = collection.get("user-123").await?;
assert!(doc.is_none());
// But the file still exists in .deleted/
// (can be recovered manually if needed)
pub async fn count(&self) -> Result<usize>
Counts the total number of documents in the collection.
This method streams through all document IDs and counts them efficiently without loading the full documents into memory.
§Returns
Returns the total count of documents as a usize, or a SentinelError if
there was an error accessing the collection.
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert some documents
collection.insert("user-123", json!({"name": "Alice"})).await?;
collection.insert("user-456", json!({"name": "Bob"})).await?;
// Count the documents
let count = collection.count().await?;
assert_eq!(count, 2);
pub async fn bulk_insert(&self, documents: Vec<(&str, Value)>) -> Result<()>
Performs bulk insert operations on multiple documents.
Inserts multiple documents into the collection in a single operation. If any document fails to insert, the operation stops and returns an error. Documents are inserted in the order provided.
§Arguments
documents - A vector of (id, data) tuples to insert.
§Returns
Returns Ok(()) on success, or a SentinelError if any operation fails.
In case of failure, some documents may have been inserted before the error.
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Prepare bulk documents
let documents = vec![
("user-123", json!({"name": "Alice", "role": "admin"})),
("user-456", json!({"name": "Bob", "role": "user"})),
("user-789", json!({"name": "Charlie", "role": "user"})),
];
// Bulk insert
collection.bulk_insert(documents).await?;
// Verify all documents were inserted
assert!(collection.get("user-123").await?.is_some());
assert!(collection.get("user-456").await?.is_some());
assert!(collection.get("user-789").await?.is_some());
pub async fn update(&self, id: &str, data: Value) -> Result<()>
pub async fn get_many(&self, ids: &[&str]) -> Result<Vec<Option<Document>>>
Retrieves multiple documents by their IDs in a single operation.
This method efficiently loads multiple documents concurrently. For IDs that don’t exist,
None is returned in the corresponding position.
§Arguments
ids - A slice of document IDs to retrieve
§Returns
Returns a Vec<Option<Document>> where each element corresponds to the document
at the same index in the input ids slice. Some(document) if the document exists,
None if it doesn’t exist.
§Examples
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert some documents
collection.insert("user-1", json!({"name": "Alice"})).await?;
collection.insert("user-2", json!({"name": "Bob"})).await?;
// Batch get multiple documents
let docs = collection.get_many(&["user-1", "user-2", "user-3"]).await?;
assert_eq!(docs.len(), 3);
assert!(docs[0].is_some()); // user-1 exists
assert!(docs[1].is_some()); // user-2 exists
assert!(docs[2].is_none()); // user-3 doesn't exist
pub async fn upsert(&self, id: &str, data: Value) -> Result<bool>
Inserts a document if it doesn’t exist, or updates it if it does.
This is a convenience method that combines insert and update operations.
If the document doesn’t exist, it will be inserted. If it exists, the new data
will be merged with the existing data (see update for merge behavior).
§Arguments
id - The unique identifier of the document
data - The data to insert or merge
§Returns
Returns Ok(true) if a new document was inserted, Ok(false) if an existing
document was updated.
§Examples
use sentinel_dbms::{Store, Collection};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// First call inserts the document
let inserted = collection.upsert("user-123", json!({"name": "Alice"})).await?;
assert!(inserted);
// Second call updates the existing document
let updated = collection.upsert("user-123", json!({"age": 30})).await?;
assert!(!updated);
// Document now contains both name and age
let doc = collection.get("user-123").await?.unwrap();
assert_eq!(doc.data()["name"], "Alice");
assert_eq!(doc.data()["age"], 30);
impl Collection
pub async fn query(&self, query: Query) -> Result<QueryResult>
Executes a structured query against the collection.
This method supports complex filtering, sorting, pagination, and field projection. For optimal performance and memory usage:
- Queries without sorting use streaming processing with early limit application
- Queries with sorting collect filtered documents in memory for sorting
- Projection is applied only to final results to minimize memory usage
By default, this method verifies both hash and signature with strict mode.
Use query_with_verification() to customize verification behavior.
§Arguments
query - The query to execute
§Returns
Returns a QueryResult containing the matching documents and metadata.
§Example
use sentinel_dbms::{Store, Collection, QueryBuilder, Operator, SortOrder};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert test data
collection.insert("user-1", json!({"name": "Alice", "age": 25, "city": "NYC"})).await?;
collection.insert("user-2", json!({"name": "Bob", "age": 30, "city": "LA"})).await?;
collection.insert("user-3", json!({"name": "Charlie", "age": 35, "city": "NYC"})).await?;
// Query for users in NYC, sorted by age, limit 2
let query = QueryBuilder::new()
.filter("city", Operator::Equals, json!("NYC"))
.sort("age", SortOrder::Ascending)
.limit(2)
.projection(vec!["name", "age"])
.build();
let result = collection.query(query).await?;
let documents: Vec<_> = futures::TryStreamExt::try_collect(result.documents).await?;
assert_eq!(documents.len(), 2);
pub async fn query_with_verification(&self, query: Query, options: &VerificationOptions) -> Result<QueryResult>
Executes a structured query against the collection with custom verification options.
This method supports complex filtering, sorting, pagination, and field projection. For optimal performance and memory usage:
- Queries without sorting use streaming processing with early limit application
- Queries with sorting collect filtered documents in memory for sorting
- Projection is applied only to final results to minimize memory usage
§Arguments
query - The query to execute
options - Verification options controlling hash and signature verification.
§Returns
Returns a QueryResult containing the matching documents and metadata.
§Example
use sentinel_dbms::{Store, Collection, QueryBuilder, Operator, SortOrder, VerificationOptions, VerificationMode};
use serde_json::json;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert test data
collection.insert("user-1", json!({"name": "Alice", "age": 25, "city": "NYC"})).await?;
collection.insert("user-2", json!({"name": "Bob", "age": 30, "city": "LA"})).await?;
collection.insert("user-3", json!({"name": "Charlie", "age": 35, "city": "NYC"})).await?;
// Query with warning mode
let options = VerificationOptions::warn();
let query = QueryBuilder::new()
.filter("city", Operator::Equals, json!("NYC"))
.sort("age", SortOrder::Ascending)
.limit(2)
.projection(vec!["name", "age"])
.build();
let result = collection.query_with_verification(query, &options).await?;
let documents: Vec<_> = futures::TryStreamExt::try_collect(result.documents).await?;
assert_eq!(documents.len(), 2);
impl Collection
pub fn list(&self) -> Pin<Box<dyn Stream<Item = Result<String>> + Send>>
Lists all document IDs in the collection.
Returns a stream of document IDs from the collection directory. IDs are streamed as they are discovered, without guaranteed ordering. For sorted results, collect the stream and sort manually.
§Returns
Returns a stream of document IDs (filenames without the .json extension),
or a SentinelError if the operation fails due to filesystem errors.
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
use futures::TryStreamExt;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert some documents
collection.insert("user-123", json!({"name": "Alice"})).await?;
collection.insert("user-456", json!({"name": "Bob"})).await?;
// Stream all document IDs
let ids: Vec<_> = collection.list().try_collect().await?;
assert_eq!(ids.len(), 2);
assert!(ids.contains(&"user-123".to_string()));
assert!(ids.contains(&"user-456".to_string()));
pub fn filter<F>(&self, predicate: F) -> Pin<Box<dyn Stream<Item = Result<Document>> + Send>>
Filters documents in the collection using a predicate function.
This method performs streaming filtering by loading and checking documents one by one, keeping only matching documents in memory. This approach minimizes memory usage while maintaining good performance for most use cases.
By default, this method verifies both hash and signature with strict mode.
Use filter_with_verification() to customize verification behavior.
§Arguments
predicate - A function that takes a &Document and returns true if the document should be included in the results.
§Returns
Returns a stream of documents that match the predicate.
§Example
use sentinel_dbms::{Store, Collection};
use serde_json::json;
use futures::stream::StreamExt;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert some test data
collection.insert("user-1", json!({"name": "Alice", "age": 25})).await?;
collection.insert("user-2", json!({"name": "Bob", "age": 30})).await?;
// Filter for users older than 26
let mut adults = collection.filter(|doc| {
doc.data().get("age")
.and_then(|v| v.as_i64())
.map_or(false, |age| age > 26)
});
let mut count = 0;
while let Some(doc) = adults.next().await {
let doc = doc?;
assert_eq!(doc.id(), "user-2");
count += 1;
}
assert_eq!(count, 1);
pub fn filter_with_verification<F>(&self, predicate: F, options: &VerificationOptions) -> Pin<Box<dyn Stream<Item = Result<Document>> + Send>>
Filters documents in the collection using a predicate function with custom verification options.
This method performs streaming filtering by loading and checking documents one by one, keeping only matching documents in memory. This approach minimizes memory usage while maintaining good performance for most use cases.
§Arguments
predicate - A function that takes a &Document and returns true if the document should be included in the results.
options - Verification options controlling hash and signature verification.
§Returns
Returns a stream of documents that match the predicate.
§Example
use sentinel_dbms::{Store, Collection, VerificationOptions};
use serde_json::json;
use futures::stream::StreamExt;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Insert some test data
collection.insert("user-1", json!({"name": "Alice", "age": 25})).await?;
collection.insert("user-2", json!({"name": "Bob", "age": 30})).await?;
// Filter with warnings enabled
let options = VerificationOptions::warn();
let mut adults = collection.filter_with_verification(
|doc| {
doc.data().get("age")
.and_then(|v| v.as_i64())
.map_or(false, |age| age > 26)
},
&options
);
let mut count = 0;
while let Some(doc) = adults.next().await {
let doc = doc?;
assert_eq!(doc.id(), "user-2");
count += 1;
}
assert_eq!(count, 1);
pub fn all(&self) -> Pin<Box<dyn Stream<Item = Result<Document>> + Send>>
Streams all documents in the collection.
This method performs streaming by loading documents one by one, minimizing memory usage.
By default, this method verifies both hash and signature with strict mode.
Use all_with_verification() to customize verification behavior.
§Returns
Returns a stream of all documents in the collection.
§Example
use sentinel_dbms::{Collection, Store};
use futures::stream::StreamExt;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Stream all documents
let mut all_docs = collection.all();
while let Some(doc) = all_docs.next().await {
let doc = doc?;
println!("Document: {}", doc.id());
}
pub fn all_with_verification(&self, options: &VerificationOptions) -> Pin<Box<dyn Stream<Item = Result<Document>> + Send>>
Streams all documents in the collection with custom verification options.
This method performs streaming by loading documents one by one, minimizing memory usage.
§Arguments
options - Verification options controlling hash and signature verification.
§Returns
Returns a stream of all documents in the collection.
§Example
use sentinel_dbms::{Collection, Store, VerificationOptions};
use futures::stream::StreamExt;
let store = Store::new("/path/to/data", None).await?;
let collection = store.collection("users").await?;
// Stream all documents with warnings instead of errors
let options = VerificationOptions::warn();
let mut all_docs = collection.all_with_verification(&options);
while let Some(doc) = all_docs.next().await {
let doc = doc?;
println!("Document: {}", doc.id());
}
impl Collection
pub async fn verify_hash(&self, doc: &Document, options: VerificationOptions) -> Result<()>
Verifies document hash according to the specified verification options.
§Arguments
doc - The document to verify
options - The verification options
§Returns
Returns Ok(()) if verification passes or is handled according to the mode,
or Err(SentinelError::HashVerificationFailed) if verification fails in Strict mode.
pub async fn verify_signature(&self, doc: &Document, options: VerificationOptions) -> Result<()>
Verifies document signature according to the specified verification options.
§Arguments
doc - The document to verify
options - The verification options containing modes for different scenarios
§Returns
Returns Ok(()) if verification passes or is handled according to the mode,
or Err(SentinelError::SignatureVerificationFailed) if verification fails in Strict mode.
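The Strict-versus-Warn behavior described for both verification methods can be sketched as a small decision function. The enum variants and error type below are illustrative stand-ins, not the crate's real `VerificationMode` or error types:

```rust
// Sketch of how a verification mode might gate a failed check, matching the
// Strict-vs-Warn behavior described above. Types are hypothetical stand-ins.
#[derive(Clone, Copy, Debug, PartialEq)]
enum VerificationMode {
    Strict, // failure is an error
    Warn,   // failure is logged, document still returned
    Off,    // check skipped entirely
}

#[derive(Debug, PartialEq)]
enum VerifyError {
    HashMismatch,
}

fn apply_mode(check_passed: bool, mode: VerificationMode) -> Result<(), VerifyError> {
    match (check_passed, mode) {
        // A passing check, or a disabled one, never errors.
        (true, _) | (false, VerificationMode::Off) => Ok(()),
        // Warn mode reports the failure but lets the read proceed.
        (false, VerificationMode::Warn) => {
            eprintln!("warning: hash verification failed");
            Ok(())
        }
        // Strict mode turns the failure into an error.
        (false, VerificationMode::Strict) => Err(VerifyError::HashMismatch),
    }
}

fn main() {
    assert!(apply_mode(false, VerificationMode::Warn).is_ok());
    assert_eq!(
        apply_mode(false, VerificationMode::Strict),
        Err(VerifyError::HashMismatch)
    );
}
```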