LLM-based Rule Evaluation for Product-FARM
This crate provides LLM (Large Language Model) based rule evaluation, allowing rules to use AI reasoning instead of deterministic JSON Logic.
§Features
- `LlmEvaluatorConfig` - Configuration for LLM calls (model, temperature, prompts)
- `ClaudeLlmEvaluator` - Anthropic Claude implementation (requires the `anthropic` feature)
- `OllamaLlmEvaluator` - Ollama local LLM implementation (requires the `ollama` feature)
- `PromptBuilder` - Build context-rich prompts with rule metadata
- `ParallelLlmExecutor` - Execute LLM rules in parallel
- `RuleEngineLlmConfig` - Environment-based configuration
§Environment Variables
All configuration can be loaded from environment variables with the
`RULE_ENGINE_` prefix. See the `env_config` module for the full list.
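As a minimal illustration of the prefixed-variable convention, the sketch below reads a `RULE_ENGINE_`-prefixed variable with a fallback default using only the standard library. The helper name `env_or_default` is illustrative, not part of this crate's API.

```rust
use std::env;

/// Illustrative helper (not the crate's actual API): read
/// `RULE_ENGINE_<KEY>` from the environment, falling back to a default.
fn env_or_default(key: &str, default: &str) -> String {
    env::var(format!("RULE_ENGINE_{key}")).unwrap_or_else(|_| default.to_string())
}

fn main() {
    env::set_var("RULE_ENGINE_LLM_PROVIDER", "ollama");
    let provider = env_or_default("LLM_PROVIDER", "ollama");
    let model = env_or_default("OLLAMA_MODEL", "qwen2.5:7b");
    println!("provider={provider} model={model}");
}
```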
§Quick Start
```sh
# Use Ollama (default)
export RULE_ENGINE_LLM_PROVIDER=ollama
export RULE_ENGINE_OLLAMA_MODEL=qwen2.5:7b

# Or use Anthropic
export RULE_ENGINE_LLM_PROVIDER=anthropic
export RULE_ENGINE_ANTHROPIC_API_KEY=your-key
```
§Example
```rust
use product_farm_llm_evaluator::{
    LlmEvaluatorConfig, RuleEngineLlmConfig,
    PromptBuilder, RuleEvaluationContext, AttributeInfo,
    ParallelLlmExecutor, ParallelExecutorConfig,
};

// Load config from environment
let config = RuleEngineLlmConfig::from_env();
println!("{}", config.summary());

// Build a context-rich prompt
let context = RuleEvaluationContext::new("calculate-premium")
    .with_description("Calculate insurance premium")
    .add_input(AttributeInfo::new("age").with_description("Driver's age"))
    .add_output(AttributeInfo::new("premium").with_description("Monthly premium"));
let prompt = PromptBuilder::new().build(&context);
```
Re-exports§
- pub use env_config::RuleEngineLlmConfig;
- pub use env_config::GlobalLlmConfig;
- pub use env_config::AnthropicConfig;
- pub use env_config::OllamaConfig;
- pub use env_config::RetryConfig as EnvRetryConfig;
- pub use env_config::ENV_PREFIX;
Modules§
- env_config - Environment-based configuration for LLM evaluators.
Structs§
- AttributeInfo - Metadata about an attribute for prompt context
- ClaudeLlmEvaluator
- LlmEvaluatorConfig - Configuration for LLM-based rule evaluation.
- LlmRuleResult - Result of a single LLM rule evaluation
- OllamaLlmEvaluator
- ParallelExecutorConfig - Configuration for the parallel executor
- ParallelLlmExecutor - Async parallel executor for LLM rules
- PromptBuilder - Builds prompts for LLM evaluation with rich context
- RetryConfig - Retry configuration with exponential backoff
- RuleEvaluationContext - Full context for LLM rule evaluation
- RuleMetadata - Metadata for rule evaluation
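To make the retry behavior concrete, the standalone sketch below shows the exponential-backoff-with-cap strategy that a retry configuration like `RetryConfig` typically parameterizes. The `Backoff` struct and its field names are assumptions for illustration, not this crate's actual definition.

```rust
use std::time::Duration;

/// Illustrative backoff policy (field names are assumptions, not the
/// crate's `RetryConfig`): delay grows as base * factor^attempt, capped.
struct Backoff {
    base: Duration,
    factor: u32,
    max: Duration,
}

impl Backoff {
    /// Delay before retry attempt `n` (0-based), saturating at `max`.
    fn delay(&self, attempt: u32) -> Duration {
        let scaled = self.base.saturating_mul(self.factor.saturating_pow(attempt));
        scaled.min(self.max)
    }
}

fn main() {
    let b = Backoff {
        base: Duration::from_millis(250),
        factor: 2,
        max: Duration::from_secs(8),
    };
    for n in 0..6 {
        println!("attempt {n}: wait {:?}", b.delay(n));
    }
}
```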
Enums§
- LlmEvaluatorError - Errors that can occur during LLM evaluation
- OutputFormat - Output format expected from the LLM
- OutputFormatInstructions
Functions§
- default_system_prompt - Generate a system prompt for rule evaluation