Crate product_farm_llm_evaluator

LLM-based Rule Evaluation for Product-FARM

This crate provides LLM (Large Language Model) based rule evaluation, allowing rules to use AI reasoning instead of deterministic JSON Logic.

§Features

  • LlmEvaluatorConfig - Configuration for LLM calls (model, temperature, prompts)
  • ClaudeLlmEvaluator - Anthropic Claude implementation (requires anthropic feature)
  • OllamaLlmEvaluator - Ollama local LLM implementation (requires ollama feature)
  • PromptBuilder - Build context-rich prompts with rule metadata
  • ParallelLlmExecutor - Execute LLM rules in parallel
  • RuleEngineLlmConfig - Environment-based configuration

§Environment Variables

All configuration can be loaded from environment variables with the RULE_ENGINE_ prefix. See the env_config module for the full list.

§Quick Start

# Use Ollama (default)
export RULE_ENGINE_LLM_PROVIDER=ollama
export RULE_ENGINE_OLLAMA_MODEL=qwen2.5:7b

# Or use Anthropic
export RULE_ENGINE_LLM_PROVIDER=anthropic
export RULE_ENGINE_ANTHROPIC_API_KEY=your-key
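The environment-based pattern above can be illustrated with plain std::env. This is a self-contained sketch of the "prefixed variable with a documented default" idea, not the crate's actual RuleEngineLlmConfig implementation; the helper name env_or_default is hypothetical.

```rust
use std::env;

/// Illustrative helper (not part of the crate): read a RULE_ENGINE_-prefixed
/// variable, falling back to a default when it is unset.
fn env_or_default(key: &str, default: &str) -> String {
    env::var(format!("RULE_ENGINE_{}", key)).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // With nothing exported, the documented defaults apply (ollama is the default provider).
    let provider = env_or_default("LLM_PROVIDER", "ollama");
    let model = env_or_default("OLLAMA_MODEL", "qwen2.5:7b");
    println!("provider={} model={}", provider, model);
}
```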

§Example

use product_farm_llm_evaluator::{
    LlmEvaluatorConfig, RuleEngineLlmConfig,
    PromptBuilder, RuleEvaluationContext, AttributeInfo,
    ParallelLlmExecutor, ParallelExecutorConfig,
};

// Load config from environment
let config = RuleEngineLlmConfig::from_env();
println!("{}", config.summary());

// Build a context-rich prompt
let context = RuleEvaluationContext::new("calculate-premium")
    .with_description("Calculate insurance premium")
    .add_input(AttributeInfo::new("age").with_description("Driver's age"))
    .add_output(AttributeInfo::new("premium").with_description("Monthly premium"));

let prompt = PromptBuilder::new().build(&context);
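ParallelLlmExecutor executes LLM rules in parallel; as a rough illustration of the fan-out/collect pattern (the real executor is async and calls an LLM, while this mock uses std::thread and a stub evaluation), consider:

```rust
use std::thread;

/// Mock evaluation standing in for an LLM call; the crate's executor
/// would instead query the configured provider asynchronously.
fn evaluate_rule(rule_id: &str) -> (String, bool) {
    // Pretend every rule passes; a real call would return the model's verdict.
    (rule_id.to_string(), true)
}

fn main() {
    let rules = vec!["calculate-premium", "check-eligibility", "apply-discount"];
    // Fan out one thread per rule, then join to collect results in order.
    let handles: Vec<_> = rules
        .into_iter()
        .map(|r| thread::spawn(move || evaluate_rule(r)))
        .collect();
    let results: Vec<(String, bool)> = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect();
    println!("{:?}", results);
}
```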

Re-exports§

pub use env_config::RuleEngineLlmConfig;
pub use env_config::GlobalLlmConfig;
pub use env_config::AnthropicConfig;
pub use env_config::OllamaConfig;
pub use env_config::RetryConfig as EnvRetryConfig;
pub use env_config::ENV_PREFIX;

Modules§

env_config
Environment-based configuration for LLM evaluators.

Structs§

AttributeInfo
Metadata about an attribute for prompt context
ClaudeLlmEvaluator
Anthropic Claude implementation of the LLM evaluator (requires the anthropic feature)
LlmEvaluatorConfig
Configuration for LLM-based rule evaluation.
LlmRuleResult
Result of a single LLM rule evaluation
OllamaLlmEvaluator
Ollama local LLM implementation (requires the ollama feature)
ParallelExecutorConfig
Configuration for the parallel executor
ParallelLlmExecutor
Async parallel executor for LLM rules
PromptBuilder
Builds prompts for LLM evaluation with rich context
RetryConfig
Retry configuration with exponential backoff
RuleEvaluationContext
Full context for LLM rule evaluation
RuleMetadata
Metadata for rule evaluation
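The RetryConfig entry above describes exponential backoff. A minimal, self-contained sketch of that delay schedule follows; the struct and field names here (base_delay_ms, multiplier, max_delay_ms) are hypothetical and do not reflect the crate's actual RetryConfig API.

```rust
/// Hypothetical retry parameters mirroring the exponential-backoff idea.
struct Backoff {
    base_delay_ms: u64,
    multiplier: f64,
    max_delay_ms: u64,
}

impl Backoff {
    /// Delay before the given attempt (0-based): grows geometrically, capped at the maximum.
    fn delay_ms(&self, attempt: u32) -> u64 {
        let raw = self.base_delay_ms as f64 * self.multiplier.powi(attempt as i32);
        (raw as u64).min(self.max_delay_ms)
    }
}

fn main() {
    let b = Backoff { base_delay_ms: 100, multiplier: 2.0, max_delay_ms: 1_000 };
    let delays: Vec<u64> = (0..5).map(|a| b.delay_ms(a)).collect();
    println!("{:?}", delays); // [100, 200, 400, 800, 1000]
}
```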

Enums§

LlmEvaluatorError
Errors that can occur during LLM evaluation
OutputFormat
Output format expected from the LLM
OutputFormatInstructions

Functions§

default_system_prompt
Generate a system prompt for rule evaluation

Type Aliases§

LlmEvaluatorResult