//! Ensemble LLM definitions and configuration.
//!
//! An **Ensemble LLM** is a fully-specified configuration of a single upstream
//! language model. It encapsulates:
//!
//! - Model identity (which LLM to use)
//! - Prompt structure (prefix/suffix messages)
//! - Decoding parameters (temperature, top_p, etc.)
//! - Provider preferences and routing
//! - Output mode, reasoning settings, and verbosity
//!
//! # Content-Addressed Identity
//!
//! Ensemble LLMs use **content-addressed identifiers**: their ID is derived
//! deterministically from their full definition using XXHash3-128. This ensures:
//!
//! - Two identical definitions always produce the same ID
//! - IDs can be computed anywhere (server, client, browser via WASM)
//! - No hidden mutation or "latest version" ambiguity
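//!
//! A minimal sketch of the determinism guarantee, assuming some prepared
//! `base: EnsembleLlmBase` value (the conversion and `id` field are the ones
//! shown in the example below; error handling is elided):
//!
//! ```ignore
//! // Converting the same definition twice yields the same ID.
//! let llm_a: EnsembleLlm = base.clone().try_into()?;
//! let llm_b: EnsembleLlm = base.clone().try_into()?;
//! assert_eq!(llm_a.id, llm_b.id);
//! ```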
//!
//! # Normalization
//!
//! Before computing an ID, definitions are normalized via [`EnsembleLlmBase::prepare`]:
//!
//! - Default values are removed (e.g., `temperature: 1.0` becomes `None`)
//! - Empty collections are removed
//! - Collections are sorted for deterministic ordering
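//!
//! For instance, a definition spelled with an explicit default normalizes to
//! the same form as one that omits the field, so both hash to the same ID. A
//! sketch, assuming `prepare` normalizes in place and where `default_base()` is
//! a hypothetical helper producing an otherwise-identical definition:
//!
//! ```ignore
//! let mut a = EnsembleLlmBase { temperature: Some(1.0), ..default_base() };
//! let b = EnsembleLlmBase { temperature: None, ..default_base() };
//! a.prepare(); // removes the default `temperature: 1.0`
//! assert_eq!(a, b); // normalized forms are identical, so IDs match
//! ```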
//!
//! # Example
//!
//! ```ignore
//! use objectiveai::ensemble_llm::{EnsembleLlmBase, EnsembleLlm, OutputMode};
//!
//! let base = EnsembleLlmBase {
//!     model: "gpt-4".to_string(),
//!     output_mode: OutputMode::Instruction,
//!     temperature: Some(0.7),
//!     // ... other fields
//! };
//!
//! let llm: EnsembleLlm = base.try_into()?;
//! println!("ID: {}", llm.id); // Deterministic content-addressed ID
//! ```
pub use *;
pub use *;
pub use *;
pub use *;
pub use *;
pub use *;
pub use *;