pub struct ServiceProvider { /* private fields */ }
Service provider for resolving dependencies from the DI container.
The ServiceProvider is the heart of the dependency injection system. It resolves
services according to their registered lifetimes (Singleton, Scoped, Transient) and
manages the lifecycle of singleton services including disposal.
§Performance Optimizations
ServiceProvider includes several performance optimizations (a short sketch follows the list):
- Singleton caching: Embedded OnceCell provides 31ns resolution (~31.5M ops/sec)
- Scoped caching: Slot-based resolution with O(1) access times
- Hybrid registry: Vec for small collections, HashMap for large ones
- Lock-free reads: After initialization, singleton access requires no locks
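For illustration, a minimal sketch of the singleton cache in action (the Config type is only an example); resolving the same singleton twice returns the same cached Arc:
use ferrous_di::{ServiceCollection, Resolver};
use std::sync::Arc;
struct Config { name: String }
let mut collection = ServiceCollection::new();
collection.add_singleton(Config { name: "app".to_string() });
let provider = collection.build();
// Both resolutions hit the singleton cache and return the same Arc.
let a = provider.get_required::<Config>();
let b = provider.get_required::<Config>();
assert!(Arc::ptr_eq(&a, &b));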
§Thread Safety
ServiceProvider is fully thread-safe and can be shared across multiple threads.
Singleton services are cached with proper synchronization, and the provider
can be cloned cheaply (it uses Arc internally).
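For example, a minimal sketch (assuming the Clone implementation described above; the Settings type is illustrative) of sharing the provider with another thread:
use ferrous_di::{ServiceCollection, Resolver};
use std::thread;
struct Settings { port: u16 }
let mut collection = ServiceCollection::new();
collection.add_singleton(Settings { port: 8080 });
let provider = collection.build();
// Cloning is cheap (Arc-based); the clone can be moved into a worker thread.
let background = provider.clone();
let handle = thread::spawn(move || background.get_required::<Settings>().port);
assert_eq!(handle.join().unwrap(), 8080);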
§Examples
use ferrous_di::{ServiceCollection, Resolver};
use std::sync::Arc;
struct Database { url: String }
struct UserService { db: Arc<Database> }
let mut collection = ServiceCollection::new();
collection.add_singleton(Database { url: "postgres://localhost".to_string() });
collection.add_transient_factory::<UserService, _>(|resolver| {
UserService { db: resolver.get_required::<Database>() }
});
let provider = collection.build();
let user_service = provider.get_required::<UserService>();
assert_eq!(user_service.db.url, "postgres://localhost");
Implementations§
impl ServiceProvider
pub fn create_scope(&self) -> Scope
Creates a new scope for resolving scoped services.
Scoped services are cached per scope and are ideal for request-scoped dependencies in web applications. Each scope maintains its own cache of scoped services while still accessing singleton services from the root provider.
§Returns
A new Scope that can resolve both scoped and singleton services.
The scope maintains its own cache for scoped services.
§Examples
use ferrous_di::{ServiceCollection, Resolver};
use std::sync::{Arc, Mutex};
#[derive(Debug)]
struct RequestId(String);
let mut collection = ServiceCollection::new();
let counter = Arc::new(Mutex::new(0));
let counter_clone = counter.clone();
collection.add_scoped_factory::<RequestId, _>(move |_| {
let mut c = counter_clone.lock().unwrap();
*c += 1;
RequestId(format!("req-{}", *c))
});
let provider = collection.build();
// Create separate scopes
let scope1 = provider.create_scope();
let scope2 = provider.create_scope();
let req1a = scope1.get_required::<RequestId>();
let req1b = scope1.get_required::<RequestId>(); // Same instance
let req2 = scope2.get_required::<RequestId>(); // Different instance
assert!(Arc::ptr_eq(&req1a, &req1b)); // Same scope, same instance
assert!(!Arc::ptr_eq(&req1a, &req2)); // Different scopes, different instances
pub async fn dispose_all(&self)
Disposes all registered disposal hooks in LIFO order.
This method runs all asynchronous disposal hooks first (in reverse order), followed by all synchronous disposal hooks (in reverse order). This ensures proper cleanup of singleton services.
§Examples
use ferrous_di::{ServiceCollection, Dispose, AsyncDispose, Resolver};
use async_trait::async_trait;
use std::sync::Arc;
struct Cache;
impl Dispose for Cache {
fn dispose(&self) {
println!("Cache disposed");
}
}
struct Client;
#[async_trait]
impl AsyncDispose for Client {
async fn dispose(&self) {
println!("Client disposed");
}
}
let mut services = ServiceCollection::new();
services.add_singleton_factory::<Cache, _>(|r| {
let cache = Arc::new(Cache);
r.register_disposer(cache.clone());
Cache // Return concrete type
});
services.add_singleton_factory::<Client, _>(|r| {
let client = Arc::new(Client);
r.register_async_disposer(client.clone());
Client // Return concrete type
});
let provider = services.build();
// ... use services ...
provider.dispose_all().await;
impl ServiceProvider
pub fn resolve_singleton_fast_cache(
    &self,
    key: &Key,
) -> Option<Arc<dyn Any + Send + Sync>>
Alternative high-performance singleton resolution using FastSingletonCache. This provides an alternative to the embedded OnceCell approach for scenarios where maximum throughput is needed and error handling can be simplified.
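§Examples
A hypothetical sketch; this documentation does not show how a Key for a registered service is obtained, so Key::of::<Cache>() below is an assumption used purely for illustration.
use ferrous_di::{ServiceCollection, Key};
struct Cache { hits: u64 }
let mut collection = ServiceCollection::new();
collection.add_singleton(Cache { hits: 0 });
let provider = collection.build();
// Key::of::<Cache>() is a hypothetical constructor, shown for illustration only.
let key = Key::of::<Cache>();
if let Some(any) = provider.resolve_singleton_fast_cache(&key) {
    // The returned Arc<dyn Any + Send + Sync> must be downcast to the concrete type.
    if let Ok(cache) = any.downcast::<Cache>() {
        assert_eq!(cache.hits, 0);
    }
}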
pub fn discover_tools(
    &self,
    criteria: &ToolSelectionCriteria,
) -> ToolDiscoveryResult
Discovers available tools based on capability requirements.
This is the main entry point for agent planners to find suitable tools for their tasks. Returns matching tools along with partial matches and any unsatisfied requirements.
§Examples
use ferrous_di::{ServiceCollection, ToolSelectionCriteria, CapabilityRequirement};
// ... after registering tools with capabilities ...
let mut services = ServiceCollection::new();
let provider = services.build();
// Find tools that can search the web
let criteria = ToolSelectionCriteria::new()
.require("web_search")
.exclude_tag("experimental")
.max_cost(0.01);
let result = provider.discover_tools(&criteria);
println!("Found {} matching tools", result.matching_tools.len());
for tool in &result.matching_tools {
println!(" - {}: {}", tool.name, tool.description);
}
if !result.unsatisfied_requirements.is_empty() {
println!("Missing capabilities: {:?}", result.unsatisfied_requirements);
}
pub fn list_all_tools(&self) -> Vec<&ToolInfo>
Gets all registered tools with their capability information.
Useful for debugging or building tool catalogs.
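§Examples
A minimal sketch; tool registration is omitted here, and the name and description fields follow the usage shown for discover_tools above.
use ferrous_di::ServiceCollection;
let services = ServiceCollection::new();
let provider = services.build();
// Print a simple catalog of every registered tool and its description.
for tool in provider.list_all_tools() {
    println!("{}: {}", tool.name, tool.description);
}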
pub fn get_tool_info(&self, key: &Key) -> Option<&ToolInfo>
Gets capability information for a specific tool.
impl ServiceProvider
pub async fn ready(
    &self,
) -> Result<ReadinessReport, Box<dyn Error + Send + Sync>>
Performs readiness checks on all prewarmed services.
This method resolves all services marked for prewarming and runs
readiness checks on those that implement ReadyCheck. This is
typically called during application startup to ensure all critical
services are ready before accepting requests.
§Performance
Readiness checks are run in parallel with a configurable concurrency limit to balance startup time with resource usage.
§Examples
use ferrous_di::{ServiceCollection, ReadyCheck};
use async_trait::async_trait;
use std::sync::Arc;
struct DatabaseService;
#[async_trait]
impl ReadyCheck for DatabaseService {
async fn ready(&self) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Test database connection
Ok(())
}
}
let mut services = ServiceCollection::new();
services.add_singleton(DatabaseService);
services.prewarm::<DatabaseService>();
let provider = services.build();
let report = provider.ready().await?;
if report.all_ready() {
println!("All services ready! Starting application...");
} else {
eprintln!("Some services failed readiness checks:");
for failure in report.failures() {
eprintln!(" {}: {}",
failure.key.display_name(),
failure.error.as_deref().unwrap_or("Unknown error"));
}
std::process::exit(1);
}