# ProofGuard Module API Documentation
Version: 3.0.0
The ProofGuard module triangulates claims across 3+ independent sources to verify
factual accuracy, implementing the three-source rule (CONS-006).
## Table of Contents
1. [Module Overview](#module-overview)
2. [Configuration](#configuration)
3. [Core Types](#core-types)
4. [Methods](#methods)
5. [Usage Examples](#usage-examples)
6. [Error Handling](#error-handling)
7. [Performance Considerations](#performance-considerations)
8. [Integration Notes](#integration-notes)
## Module Overview
ProofGuard enforces rigorous fact verification through multi-source triangulation,
detecting contradictions, ranking source quality, and producing calibrated
verification scores. It implements ReasonKit's core verification protocol.
Key capabilities:
- 3+ source requirement enforcement (CONS-006)
- Contradiction detection across sources
- Source tier ranking and weighting
- Confidence scoring with calibration
- Stance analysis (Support/Contradict/Neutral)
- Automated source credibility assessment
## Configuration
### ProofGuardConfig
Configuration struct controlling ProofGuard verification behavior.
```rust
pub struct ProofGuardConfig {
    pub min_sources: usize,
    pub require_tier1: bool,
    pub min_agreement_ratio: f64,
    pub contradiction_penalty: f64,
    pub timeout_ms: u64,
}
```
#### Fields
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `min_sources` | `usize` | Yes | `3` | Minimum sources required for verification. Enforces triangulation. |
| `require_tier1` | `bool` | Yes | `true` | Require at least one Tier 1 (`Primary`) source. Ensures a quality baseline. |
| `min_agreement_ratio` | `f64` | Yes | `0.6` | Minimum agreement ratio for verification. Range: 0.0-1.0. |
| `contradiction_penalty` | `f64` | Yes | `0.3` | Confidence penalty for contradictions. Range: 0.0-1.0. |
| `timeout_ms` | `u64` | Yes | `15000` | Verification timeout in milliseconds. Prevents hanging operations. |
#### Implementation
```rust
impl Default for ProofGuardConfig {
    fn default() -> Self {
        Self {
            min_sources: 3,
            require_tier1: true,
            min_agreement_ratio: 0.6,
            contradiction_penalty: 0.3,
            timeout_ms: 15000,
        }
    }
}

impl ProofGuardConfig {
    /// Fast verification mode - relaxed requirements
    pub fn fast() -> Self {
        Self {
            min_sources: 2,
            require_tier1: false,
            min_agreement_ratio: 0.5,
            contradiction_penalty: 0.2,
            timeout_ms: 10000,
        }
    }

    /// Strict verification mode - highest standards
    pub fn strict() -> Self {
        Self {
            min_sources: 5,
            require_tier1: true,
            min_agreement_ratio: 0.8,
            contradiction_penalty: 0.5,
            timeout_ms: 30000,
        }
    }
}
```
## Core Types
### ProofGuard
Main module struct implementing the ThinkToolModule trait.
```rust
pub struct ProofGuard {
    config: ThinkToolModuleConfig,
    proofguard_config: ProofGuardConfig,
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `config` | `ThinkToolModuleConfig` | Standard module configuration metadata |
| `proofguard_config` | `ProofGuardConfig` | ProofGuard-specific configuration parameters |
### ProofGuardResult
Structured output from ProofGuard execution containing verification results.
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProofGuardResult {
    pub claim: String,
    pub verification: VerificationRecommendation,
    pub confidence: VerificationConfidence,
    pub sources: Vec<AnalyzedSource>,
    pub contradictions: Vec<Contradiction>,
    pub agreement_ratio: f64,
    pub metadata: VerificationMetadata,
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `claim` | `String` | Original claim being verified |
| `verification` | `VerificationRecommendation` | Recommendation based on evidence |
| `confidence` | `VerificationConfidence` | Confidence level in recommendation |
| `sources` | `Vec<AnalyzedSource>` | Detailed source analysis |
| `contradictions` | `Vec<Contradiction>` | Identified contradictions |
| `agreement_ratio` | `f64` | Ratio of supporting sources |
| `metadata` | `VerificationMetadata` | Execution metadata |
### VerificationRecommendation
High-level verification recommendation based on evidence analysis.
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum VerificationRecommendation {
    StronglySupported,    // High confidence support
    ModeratelySupported,  // Medium confidence support
    WeaklySupported,      // Low confidence support
    InsufficientEvidence, // Not enough evidence
    Contradicted,         // Evidence contradicts claim
    StronglyContradicted, // High confidence contradiction
}
```
### VerificationConfidence
Quantitative confidence measure in verification recommendation.
```rust
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct VerificationConfidence {
    pub score: f64,          // 0.0-1.0 confidence score
    pub calibration: f64,    // Calibration adjustment factor
    pub source_quality: f64, // Average source quality weight
    pub consistency: f64,    // Evidence consistency measure
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `score` | `f64` | Base confidence score (0.0-1.0) |
| `calibration` | `f64` | Calibration adjustment factor |
| `source_quality` | `f64` | Average source quality weight |
| `consistency` | `f64` | Evidence consistency measure |
### AnalyzedSource
Detailed analysis of an individual source's contribution to verification.
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AnalyzedSource {
    pub name: String,
    pub tier: SourceTier,
    pub source_type: SourceType,
    pub stance: Stance,
    pub weight: f64,
    pub confidence: f64,
    pub credibility_factors: Vec<CredibilityFactor>,
    pub evidence_excerpts: Vec<String>,
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `name` | `String` | Source name/title |
| `tier` | `SourceTier` | Quality tier classification |
| `source_type` | `SourceType` | Type of source |
| `stance` | `Stance` | Support/Contradict/Neutral position |
| `weight` | `f64` | Calculated weight based on tier |
| `confidence` | `f64` | Confidence in source credibility |
| `credibility_factors` | `Vec<CredibilityFactor>` | Factors affecting credibility |
| `evidence_excerpts` | `Vec<String>` | Relevant evidence passages |
### SourceTier
Quality ranking of information sources affecting evidence weight.
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum SourceTier {
    Primary,     // Official docs, peer-reviewed papers, primary sources (weight: 1.0)
    Secondary,   // Reputable news, expert blogs, industry reports (weight: 0.7)
    Independent, // Community content, forums (weight: 0.4)
    Unverified,  // Social media, unknown sources (weight: 0.2)
}
```
### SourceType
Classification of source types for specialized credibility assessment.
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum SourceType {
    Academic,      // Peer-reviewed research papers
    Documentation, // Official technical documentation
    News,          // Reputable news organizations
    Expert,        // Industry expert commentary
    Government,    // Official government publications
    Industry,      // Industry whitepapers and reports
    Community,     // Community forums and discussions
    Social,        // Social media posts
    PrimaryData,   // Direct observation or measurement
}
```
### Stance
Position taken by a source regarding the claim under verification.
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum Stance {
    Support,    // Source supports the claim
    Contradict, // Source contradicts the claim
    Neutral,    // Source is neutral/ambiguous
    Partial,    // Source partially supports the claim
}
```
### CredibilityFactor
Individual factor contributing to source credibility assessment.
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CredibilityFactor {
    pub factor: CredibilityFactorType,
    pub weight: f64,
    pub evidence: String,
    pub confidence: f64,
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `factor` | `CredibilityFactorType` | Type of credibility factor |
| `weight` | `f64` | Weight assigned to this factor |
| `evidence` | `String` | Evidence supporting this factor |
| `confidence` | `f64` | Confidence in this factor assessment |
### CredibilityFactorType
Types of factors affecting source credibility.
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum CredibilityFactorType {
    Authority,     // Author/expert credentials
    Recency,       // Publication/timestamp recency
    Citations,     // Reference to other sources
    Methodology,   // Research methodology quality
    Independence,  // Source independence/bias
    Consistency,   // Internal consistency
    Corroboration, // External corroboration
}
```
### Contradiction
Identification of conflicting evidence between sources.
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Contradiction {
    pub source_indices: Vec<usize>,
    pub claim_variants: Vec<String>,
    pub severity: ContradictionSeverity,
    pub resolution_status: ResolutionStatus,
    pub evidence_summary: String,
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `source_indices` | `Vec<usize>` | Indices of contradictory sources |
| `claim_variants` | `Vec<String>` | Different versions of the claim |
| `severity` | `ContradictionSeverity` | Seriousness of contradiction |
| `resolution_status` | `ResolutionStatus` | Current resolution state |
| `evidence_summary` | `String` | Summary of contradictory evidence |
### ContradictionSeverity
Severity classification for contradictions affecting confidence.
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ContradictionSeverity {
    Minor,    // Small discrepancy
    Moderate, // Significant disagreement
    Major,    // Fundamental contradiction
    Critical, // Completely incompatible
}
```
### VerificationMetadata
Execution metadata and performance statistics.
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct VerificationMetadata {
    pub execution_time_ms: u64,
    pub sources_analyzed: usize,
    pub credibility_checks: usize,
    pub contradiction_analysis: usize,
    pub calibration_applied: bool,
}
```
#### Fields
| Field | Type | Description |
|-------|------|-------------|
| `execution_time_ms` | `u64` | Total execution time in milliseconds |
| `sources_analyzed` | `usize` | Number of sources analyzed |
| `credibility_checks` | `usize` | Number of credibility assessments |
| `contradiction_analysis` | `usize` | Number of contradiction checks |
| `calibration_applied` | `bool` | Whether confidence calibration was used |
## Methods
### ProofGuard::new()
Create a new ProofGuard module with default configuration.
```rust
pub fn new() -> Self
```
Returns: `ProofGuard` instance with default settings.
Example:
```rust
let module = ProofGuard::new();
assert_eq!(module.name(), "ProofGuard");
assert_eq!(module.version(), "3.0.0");
```
### ProofGuard::with_config()
Create a new ProofGuard module with custom configuration.
```rust
pub fn with_config(config: ProofGuardConfig) -> Self
```
Parameters:
- `config`: `ProofGuardConfig` - Custom configuration parameters
Returns: `ProofGuard` instance with specified configuration.
Example:
```rust
let config = ProofGuardConfig::strict();
let module = ProofGuard::with_config(config);
```
### ProofGuard::verify_claim()
Direct verification of a claim with provided sources.
```rust
pub fn verify_claim(&self, claim: &str, sources: &[ProofGuardSource]) -> Result<ProofGuardResult>
```
Parameters:
- `claim`: `&str` - Claim to verify
- `sources`: `&[ProofGuardSource]` - Sources supporting/contradicting the claim
Returns: `Result<ProofGuardResult>` - Verification results or error.
Example:
```rust
let module = ProofGuard::new();
let sources = vec![
    ProofGuardSource {
        name: "Rust Book".to_string(),
        tier: "Primary".to_string(),
        source_type: "Documentation".to_string(),
        stance: "Support".to_string(),
        ..Default::default()
    },
    // ... more sources
];
let result = module.verify_claim("Rust is memory-safe", &sources)?;
```
### ProofGuard::execute()
Execute the ProofGuard module synchronously.
```rust
impl ThinkToolModule for ProofGuard {
    fn execute(&self, context: &ThinkToolContext) -> Result<ThinkToolOutput>
}
```
Parameters:
- `context`: `&ThinkToolContext` - Execution context with JSON claim and sources
Returns: `Result<ThinkToolOutput>` - Structured output or error.
Example:
```rust
let module = ProofGuard::new();
let context = ThinkToolContext::new(r#"{
    "claim": "Quantum computers can break RSA encryption",
    "sources": [...]
}"#);
let result = module.execute(&context)?;
```
### ProofGuard::config()
Get the module configuration.
```rust
pub fn config(&self) -> &ProofGuardConfig
```
Returns: `&ProofGuardConfig` - Reference to current configuration.
Example:
```rust
let module = ProofGuard::new();
let config = module.config();
assert_eq!(config.min_sources, 3);
```
## Usage Examples
### Basic Claim Verification
```rust
use reasonkit::thinktool::modules::{ProofGuard, ThinkToolContext, ThinkToolModule};
// Create module with default settings
let module = ProofGuard::new();
// Prepare JSON input with claim and sources
let context = ThinkToolContext::new(r#"{
    "claim": "Rust is memory-safe without a garbage collector",
    "sources": [
        {
            "name": "Rust Book",
            "tier": "Primary",
            "source_type": "Documentation",
            "stance": "Support"
        },
        {
            "name": "ACM Paper on Memory Safety",
            "tier": "Primary",
            "source_type": "Academic",
            "stance": "Support"
        },
        {
            "name": "Tech Blog Analysis",
            "tier": "Secondary",
            "source_type": "News",
            "stance": "Support"
        }
    ]
}"#);
// Execute verification
let result = module.execute(&context)?;
// Check verification recommendation
let verification = result.get_str("verification").unwrap();
println!("Verification result: {}", verification);
// Check confidence score
let confidence_score = result.get("confidence").unwrap().get("score").unwrap().as_f64().unwrap();
println!("Confidence score: {:.2}", confidence_score);
```
### Custom Configuration
```rust
use reasonkit::thinktool::modules::{ProofGuard, ProofGuardConfig, ThinkToolContext};
// Create strict configuration for high-stakes verification
let config = ProofGuardConfig {
    min_sources: 5,
    require_tier1: true,
    min_agreement_ratio: 0.8,
    contradiction_penalty: 0.5,
    timeout_ms: 30000,
};
let module = ProofGuard::with_config(config);
// Complex claim verification
let context = ThinkToolContext::new(r#"{
    "claim": "Artificial general intelligence will emerge by 2030",
    "sources": [
        {"name": "Expert Survey", "tier": "Primary", "source_type": "Academic", "stance": "Support"},
        {"name": "Industry Report", "tier": "Secondary", "source_type": "Industry", "stance": "Support"},
        {"name": "Skeptic Analysis", "tier": "Secondary", "source_type": "Expert", "stance": "Contradict"},
        {"name": "Historical Precedent Study", "tier": "Primary", "source_type": "Academic", "stance": "Neutral"},
        {"name": "Technical Feasibility Analysis", "tier": "Primary", "source_type": "Academic", "stance": "Partial"}
    ]
}"#);
let result = module.execute(&context)?;
// Analyze contradictions
let contradictions = result.get_array("contradictions").unwrap();
if !contradictions.is_empty() {
    println!("Found {} contradictions requiring resolution", contradictions.len());
}
```
### Programmatic Source Addition
```rust
use reasonkit::thinktool::modules::{ProofGuard, ProofGuardSource, ThinkToolContext};
let module = ProofGuard::new();
// Programmatically build sources
let sources = vec![
    ProofGuardSource {
        name: "Official Climate Report".to_string(),
        tier: "Primary".to_string(),
        source_type: "Government".to_string(),
        stance: "Support".to_string(),
        ..Default::default()
    },
    ProofGuardSource {
        name: "Peer-reviewed Climate Study".to_string(),
        tier: "Primary".to_string(),
        source_type: "Academic".to_string(),
        stance: "Support".to_string(),
        ..Default::default()
    },
    ProofGuardSource {
        name: "Industry Whitepaper".to_string(),
        tier: "Secondary".to_string(),
        source_type: "Industry".to_string(),
        stance: "Neutral".to_string(),
        ..Default::default()
    },
];
// Convert to JSON for execution
let json_input = serde_json::json!({
    "claim": "Global temperatures have risen significantly in the past century",
    "sources": sources
});
let context = ThinkToolContext::new(json_input.to_string());
let result = module.execute(&context)?;
```
## Error Handling
ProofGuard defines specific error types for verification failures:
### ProofGuardError
Enumeration of all possible module-specific errors.
```rust
#[derive(Error, Debug, Clone)]
pub enum ProofGuardError {
    #[error("insufficient sources: {provided} provided, {required} required")]
    InsufficientSources { provided: usize, required: usize },
    #[error("missing Tier 1 source: {reason}")]
    MissingTier1Source { reason: String },
    #[error("contradictory evidence: {contradictions} contradictions found")]
    ContradictoryEvidence { contradictions: usize },
    #[error("triangulation failed: {reason}")]
    TriangulationFailed { reason: String },
    #[error("source verification failed for {source}: {error}")]
    SourceVerificationFailed { source: String, error: String },
    #[error("invalid claim format: {message}")]
    ClaimFormatInvalid { message: String },
    #[error("verification timed out after {duration_ms}ms")]
    VerificationTimeout { duration_ms: u64 },
}
```
### Error Descriptions
| Variant | Fields | Description |
|---------|--------|-------------|
| `InsufficientSources` | `provided`, `required` | Fewer than minimum required sources |
| `MissingTier1Source` | `reason` | No Tier 1 sources provided |
| `ContradictoryEvidence` | `contradictions` | Severe contradictions found |
| `TriangulationFailed` | `reason` | Unable to establish verification |
| `SourceVerificationFailed` | `source`, `error` | Error verifying source credibility |
| `ClaimFormatInvalid` | `message` | Malformed claim input |
| `VerificationTimeout` | `duration_ms` | Verification exceeded timeout |
### Error Conversion
All ProofGuardError variants are automatically converted to the standard Error type:
```rust
impl From<ProofGuardError> for Error {
    fn from(err: ProofGuardError) -> Self {
        Error::ThinkToolExecutionError(err.to_string())
    }
}
```
### Handling Errors
```rust
use reasonkit::thinktool::modules::{ProofGuard, ThinkToolContext, ProofGuardError};
let module = ProofGuard::new();
let context = ThinkToolContext::new(r#"{"invalid": "format"}"#); // Malformed JSON
match module.execute(&context) {
    Ok(result) => {
        // Process successful result using the same accessors as earlier examples
        let verification = result.get_str("verification").unwrap_or("unknown");
        println!("Verification completed: {}", verification);
    }
    Err(e) => {
        // Handle specific ProofGuard errors
        if let Some(pg_err) = e.downcast_ref::<ProofGuardError>() {
            match pg_err {
                ProofGuardError::InsufficientSources { provided, required } => {
                    eprintln!("Insufficient sources: {} provided, {} required", provided, required);
                }
                ProofGuardError::MissingTier1Source { reason } => {
                    eprintln!("Missing Tier 1 source: {}", reason);
                }
                ProofGuardError::VerificationTimeout { duration_ms } => {
                    eprintln!("Verification timed out after {}ms", duration_ms);
                }
                _ => eprintln!("ProofGuard error: {}", pg_err),
            }
        } else {
            // Handle other errors
            eprintln!("Other error: {}", e);
        }
    }
}
```
## Performance Considerations
1. **Source Limit**: Control `min_sources` to balance rigor with efficiency
2. **Timeout Settings**: Set appropriate timeouts based on source complexity
3. **Tier Requirements**: Relax `require_tier1` for faster verification when quality is less critical
4. **Agreement Ratio**: Adjust `min_agreement_ratio` for desired consensus threshold
5. **Contradiction Penalty**: Tune `contradiction_penalty` for risk tolerance
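These trade-offs can be illustrated with a standalone sketch. The struct and its default values are copied from the Configuration section so the snippet compiles on its own; the low-latency override values are example choices, not recommendations:

```rust
// Struct and defaults copied from the Configuration section above.
#[derive(Debug)]
struct ProofGuardConfig {
    min_sources: usize,
    require_tier1: bool,
    min_agreement_ratio: f64,
    contradiction_penalty: f64,
    timeout_ms: u64,
}

impl Default for ProofGuardConfig {
    fn default() -> Self {
        Self {
            min_sources: 3,
            require_tier1: true,
            min_agreement_ratio: 0.6,
            contradiction_penalty: 0.3,
            timeout_ms: 15000,
        }
    }
}

fn main() {
    // Relax the tier requirement and timeout for a latency-sensitive path
    // (points 2 and 3) while keeping the three-source rule (point 1) intact.
    let low_latency = ProofGuardConfig {
        require_tier1: false,
        timeout_ms: 5000, // example value
        ..Default::default()
    };
    assert_eq!(low_latency.min_sources, 3); // CONS-006 still enforced
    assert!(low_latency.timeout_ms < ProofGuardConfig::default().timeout_ms);
    println!("{:?}", low_latency);
}
```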
## Integration Notes
When using ProofGuard in protocol execution:
```rust
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};
// ProtocolExecutor handles LLM integration automatically
let executor = ProtocolExecutor::new()?;
let result = executor.execute(
    "proofguard",
    ProtocolInput::json(r#"{"claim": "...", "sources": [...]}"#)
).await?;
```
The protocol-based approach provides:
- Automatic LLM assistance for source credibility assessment
- Streaming output for real-time verification progress
- Built-in retry logic for failed verification steps
- Comprehensive execution tracing and audit trail
- Integration with memory layer for source history tracking