pub struct ClientFactory;
Factory Pattern: Centralized logic for client creation and configuration
This factory encapsulates the logic of determining which LLM client to use based on model configuration, abstracting provider-specific details from the graph.
Implementations

impl ClientFactory
pub fn supports_reasoning(model: &str) -> bool
Check if a model supports reasoning capabilities
Reasoning models (gpt-5, o1-*) require special handling and use the Responses API
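A minimal sketch of such a check, assuming simple prefix matching on the model families named above (the crate's actual detection logic may differ):

```rust
/// Hypothetical check mirroring the documented behavior: reasoning
/// models are identified by the gpt-5 and o1-* families. Prefix
/// matching here is an assumption, not the crate's actual logic.
fn supports_reasoning(model: &str) -> bool {
    model.starts_with("gpt-5") || model.starts_with("o1-")
}

fn main() {
    // o1-preview is a reasoning model; gpt-4o is not.
    assert!(supports_reasoning("o1-preview"));
    assert!(!supports_reasoning("gpt-4o"));
}
```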
pub fn validate_config(config: &LLMConfig) -> Result<()>
Validate that the given LLM configuration is supported
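A sketch of what validation might cover. The `LLMConfig` fields shown (`model`, `temperature`) are assumptions, and a plain `Result<(), String>` stands in for the crate's `Result<()>` alias:

```rust
/// Hypothetical config with assumed fields; the real LLMConfig
/// likely carries more.
struct LLMConfig {
    model: String,
    temperature: f32,
}

/// Reject configurations that no provider could serve: an empty
/// model name or a sampling temperature outside the usual range.
fn validate_config(config: &LLMConfig) -> Result<(), String> {
    if config.model.is_empty() {
        return Err("model name must not be empty".to_string());
    }
    if !(0.0..=2.0).contains(&config.temperature) {
        return Err(format!("temperature {} out of range", config.temperature));
    }
    Ok(())
}

fn main() {
    let good = LLMConfig { model: "gpt-4o".into(), temperature: 0.7 };
    let bad = LLMConfig { model: String::new(), temperature: 0.7 };
    assert!(validate_config(&good).is_ok());
    assert!(validate_config(&bad).is_err());
}
```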
pub fn should_use_reasoning_api(
    config: &LLMConfig,
    reasoning_client: &Option<Arc<dyn ReasoningClient>>,
) -> bool
Determine if the given client supports reasoning based on the model
This is a runtime check to see if we should attempt to use the Reasoning API
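One plausible reading of that runtime check, sketched with stand-in types (`ReasoningClient`, `LLMConfig`, and the prefix-based model check are all assumptions): use the Reasoning API only when a reasoning client was actually wired in and the configured model supports it.

```rust
use std::sync::Arc;

/// Stand-in trait and config; the real definitions live in the crate.
trait ReasoningClient {}
struct StubReasoner;
impl ReasoningClient for StubReasoner {}

struct LLMConfig {
    model: String,
}

fn supports_reasoning(model: &str) -> bool {
    model.starts_with("gpt-5") || model.starts_with("o1-")
}

/// Hypothetical runtime check: both conditions must hold, so a
/// reasoning-capable model without a client falls back gracefully.
fn should_use_reasoning_api(
    config: &LLMConfig,
    reasoning_client: &Option<Arc<dyn ReasoningClient>>,
) -> bool {
    reasoning_client.is_some() && supports_reasoning(&config.model)
}

fn main() {
    let config = LLMConfig { model: "o1-mini".into() };
    let client: Option<Arc<dyn ReasoningClient>> = Some(Arc::new(StubReasoner));
    assert!(should_use_reasoning_api(&config, &client));
    // No client wired in: never attempt the Reasoning API.
    assert!(!should_use_reasoning_api(&config, &None));
}
```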
pub fn create_client(
    _config: &LLMConfig,
    _api_key: &str,
) -> Result<Arc<dyn LLMClient>>
Future: Create an LLM client from configuration
Currently, clients are created at the application level and passed to the graph. This method is reserved for future use when we might want to create clients dynamically at runtime.
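Since this method is currently a placeholder, the following is only a sketch of what dynamic creation could look like: dispatch on the configured model and return a trait object. Every type and name here (`LLMClient`, `OpenAIClient`, `LLMConfig`) is a stand-in, and a plain `Result<_, String>` replaces the crate's `Result` alias.

```rust
use std::sync::Arc;

/// Stand-ins for the crate's types; names and shapes are assumptions.
trait LLMClient {
    fn name(&self) -> &'static str;
}

struct OpenAIClient;
impl LLMClient for OpenAIClient {
    fn name(&self) -> &'static str { "openai" }
}

struct LLMConfig {
    model: String,
}

/// One plausible factory shape: map the model string to a concrete
/// client, or fail for models no provider handles.
fn create_client(config: &LLMConfig, _api_key: &str) -> Result<Arc<dyn LLMClient>, String> {
    if config.model.starts_with("gpt-") || config.model.starts_with("o1-") {
        Ok(Arc::new(OpenAIClient))
    } else {
        Err(format!("unsupported model: {}", config.model))
    }
}

fn main() {
    let config = LLMConfig { model: "gpt-4o".into() };
    let client = create_client(&config, "sk-test").expect("known model");
    assert_eq!(client.name(), "openai");
    assert!(create_client(&LLMConfig { model: "mystery".into() }, "sk-test").is_err());
}
```

Returning `Arc<dyn LLMClient>` keeps the graph decoupled from concrete providers, which is the point of the factory pattern described above.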