pub struct LlmSchemaConfig {
pub enabled: bool,
pub provider: String,
pub model: String,
pub max_vendor_enrichments: usize,
pub enrich_customers: bool,
pub enrich_materials: bool,
pub enrich_findings: bool,
pub max_customer_enrichments: usize,
pub max_material_enrichments: usize,
pub max_finding_enrichments: usize,
}
LLM enrichment configuration.
Controls AI-augmented metadata enrichment using LLM providers. When enabled, vendor names, transaction descriptions, and anomaly explanations are enriched using the configured provider (mock by default).
Fields

enabled: bool
    Whether LLM enrichment is enabled.
provider: String
    Provider type: "mock", "openai", "anthropic", "custom".
model: String
    Model name/ID for the provider.
max_vendor_enrichments: usize
    Maximum number of vendor names to enrich per run.
enrich_customers: bool
    v4.1.1+: also enrich customer names at generate time. Default false preserves v4.1.0 behaviour.
enrich_materials: bool
    v4.1.1+: also enrich material descriptions at generate time. Default false.
enrich_findings: bool
    v4.1.1+: also enrich audit finding titles at generate time (the finding narratives remain on their existing template path because they're richer and locale-specific). Default false.
max_customer_enrichments: usize
    v4.1.1+: upper bound on customer enrichments per run. Matches max_vendor_enrichments semantics.
max_material_enrichments: usize
    v4.1.1+: upper bound on material enrichments per run.
max_finding_enrichments: usize
    v4.1.1+: upper bound on finding enrichments per run.
Trait Implementations

impl Clone for LlmSchemaConfig
    fn clone(&self) -> LlmSchemaConfig
    fn clone_from(&mut self, source: &Self)