reasonkit-core 0.1.8

The Reasoning Engine — Auditable Reasoning for Production AI | Rust-Native | Turn Prompts into Protocols
# ProofGuard Module API Documentation

Version: 3.0.0

The ProofGuard module verifies factual accuracy by triangulating claims across
three or more independent sources, implementing the three-source rule (CONS-006).

## Table of Contents

1. [Module Overview](#module-overview)
2. [Configuration](#configuration)
3. [Core Types](#core-types)
4. [Methods](#methods)
5. [Usage Examples](#usage-examples)
6. [Error Handling](#error-handling)

## Module Overview

ProofGuard enforces rigorous fact verification through multi-source triangulation,
detecting contradictions, ranking source quality, and producing calibrated
verification scores. It implements ReasonKit's core verification protocol.

Key capabilities:

- 3+ source requirement enforcement (CONS-006)
- Contradiction detection across sources
- Source tier ranking and weighting
- Confidence scoring with calibration
- Stance analysis (Support/Contradict/Neutral)
- Automated source credibility assessment

## Configuration

### ProofGuardConfig

Configuration struct controlling ProofGuard verification behavior.

```rust
pub struct ProofGuardConfig {
    pub min_sources: usize,
    pub require_tier1: bool,
    pub min_agreement_ratio: f64,
    pub contradiction_penalty: f64,
    pub timeout_ms: u64,
}
```

#### Fields

| Field                   | Type    | Required | Default | Description                                                        |
| ----------------------- | ------- | -------- | ------- | ------------------------------------------------------------------ |
| `min_sources`           | `usize` | Yes      | `3`     | Minimum sources required for verification. Enforces triangulation. |
| `require_tier1`         | `bool`  | Yes      | `true`  | Require at least one Tier 1 source. Ensures quality baseline.      |
| `min_agreement_ratio`   | `f64`   | Yes      | `0.6`   | Minimum agreement ratio for verification. Range: 0.0-1.0.          |
| `contradiction_penalty` | `f64`   | Yes      | `0.3`   | Confidence penalty for contradictions. Range: 0.0-1.0.             |
| `timeout_ms`            | `u64`   | Yes      | `15000` | Verification timeout in milliseconds. Prevents hanging operations. |

#### Implementation

```rust
impl Default for ProofGuardConfig {
    fn default() -> Self {
        Self {
            min_sources: 3,
            require_tier1: true,
            min_agreement_ratio: 0.6,
            contradiction_penalty: 0.3,
            timeout_ms: 15000,
        }
    }
}

impl ProofGuardConfig {
    /// Fast verification mode - relaxed requirements
    pub fn fast() -> Self {
        Self {
            min_sources: 2,
            require_tier1: false,
            min_agreement_ratio: 0.5,
            contradiction_penalty: 0.2,
            timeout_ms: 10000,
        }
    }

    /// Strict verification mode - highest standards
    pub fn strict() -> Self {
        Self {
            min_sources: 5,
            require_tier1: true,
            min_agreement_ratio: 0.8,
            contradiction_penalty: 0.5,
            timeout_ms: 30000,
        }
    }
}
```

## Core Types

### ProofGuard

Main module struct implementing the ThinkToolModule trait.

```rust
pub struct ProofGuard {
    config: ThinkToolModuleConfig,
    proofguard_config: ProofGuardConfig,
}
```

#### Fields

| Field               | Type                    | Description                                  |
| ------------------- | ----------------------- | -------------------------------------------- |
| `config`            | `ThinkToolModuleConfig` | Standard module configuration metadata       |
| `proofguard_config` | `ProofGuardConfig`      | ProofGuard-specific configuration parameters |

### ProofGuardResult

Structured output from ProofGuard execution containing verification results.

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProofGuardResult {
    pub claim: String,
    pub verification: VerificationRecommendation,
    pub confidence: VerificationConfidence,
    pub sources: Vec<AnalyzedSource>,
    pub contradictions: Vec<Contradiction>,
    pub agreement_ratio: f64,
    pub metadata: VerificationMetadata,
}
```

#### Fields

| Field             | Type                         | Description                        |
| ----------------- | ---------------------------- | ---------------------------------- |
| `claim`           | `String`                     | Original claim being verified      |
| `verification`    | `VerificationRecommendation` | Recommendation based on evidence   |
| `confidence`      | `VerificationConfidence`     | Confidence level in recommendation |
| `sources`         | `Vec<AnalyzedSource>`        | Detailed source analysis           |
| `contradictions`  | `Vec<Contradiction>`         | Identified contradictions          |
| `agreement_ratio` | `f64`                        | Ratio of supporting sources        |
| `metadata`        | `VerificationMetadata`       | Execution metadata                 |

### VerificationRecommendation

High-level verification recommendation based on evidence analysis.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum VerificationRecommendation {
    StronglySupported,    // High confidence support
    ModeratelySupported,  // Medium confidence support
    WeaklySupported,      // Low confidence support
    InsufficientEvidence, // Not enough evidence
    Contradicted,         // Evidence contradicts claim
    StronglyContradicted, // High confidence contradiction
}
```

### VerificationConfidence

Quantitative confidence measure in verification recommendation.

```rust
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct VerificationConfidence {
    pub score: f64,           // 0.0-1.0 confidence score
    pub calibration: f64,     // Calibration adjustment factor
    pub source_quality: f64,  // Average source quality weight
    pub consistency: f64,     // Evidence consistency measure
}
```

#### Fields

| Field            | Type  | Description                     |
| ---------------- | ----- | ------------------------------- |
| `score`          | `f64` | Base confidence score (0.0-1.0) |
| `calibration`    | `f64` | Calibration adjustment factor   |
| `source_quality` | `f64` | Average source quality weight   |
| `consistency`    | `f64` | Evidence consistency measure    |

### AnalyzedSource

Detailed analysis of an individual source's contribution to verification.

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AnalyzedSource {
    pub name: String,
    pub tier: SourceTier,
    pub source_type: SourceType,
    pub stance: Stance,
    pub weight: f64,
    pub confidence: f64,
    pub credibility_factors: Vec<CredibilityFactor>,
    pub evidence_excerpts: Vec<String>,
}
```

#### Fields

| Field                 | Type                     | Description                         |
| --------------------- | ------------------------ | ----------------------------------- |
| `name`                | `String`                 | Source name/title                   |
| `tier`                | `SourceTier`             | Quality tier classification         |
| `source_type`         | `SourceType`             | Type of source                      |
| `stance`              | `Stance`                 | Support/Contradict/Neutral position |
| `weight`              | `f64`                    | Calculated weight based on tier     |
| `confidence`          | `f64`                    | Confidence in source credibility    |
| `credibility_factors` | `Vec<CredibilityFactor>` | Factors affecting credibility       |
| `evidence_excerpts`   | `Vec<String>`            | Relevant evidence passages          |

### SourceTier

Quality ranking of information sources affecting evidence weight.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum SourceTier {
    Primary,     // Official docs, peer-reviewed papers, primary sources (weight: 1.0)
    Secondary,   // Reputable news, expert blogs, industry reports (weight: 0.7)
    Independent, // Community content, forums (weight: 0.4)
    Unverified,  // Social media, unknown sources (weight: 0.2)
}
```
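The tier weights listed above can be read as a simple lookup. The following is a self-contained sketch of that mapping; the `tier_weight` helper is hypothetical (the real weighting lives inside ProofGuard and is not part of the public API):

```rust
/// Local copy of the tier enum for this standalone sketch.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SourceTier {
    Primary,
    Secondary,
    Independent,
    Unverified,
}

/// Hypothetical helper mirroring the documented tier weights.
fn tier_weight(tier: SourceTier) -> f64 {
    match tier {
        SourceTier::Primary => 1.0,
        SourceTier::Secondary => 0.7,
        SourceTier::Independent => 0.4,
        SourceTier::Unverified => 0.2,
    }
}

fn main() {
    // Three sources: two Primary, one Secondary.
    let tiers = [SourceTier::Primary, SourceTier::Primary, SourceTier::Secondary];
    let total: f64 = tiers.iter().map(|&t| tier_weight(t)).sum();
    println!("total evidence weight: {total:.1}"); // prints "total evidence weight: 2.7"
}
```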

### SourceType

Classification of source types for specialized credibility assessment.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum SourceType {
    Academic,      // Peer-reviewed research papers
    Documentation, // Official technical documentation
    News,          // Reputable news organizations
    Expert,        // Industry expert commentary
    Government,    // Official government publications
    Industry,      // Industry whitepapers and reports
    Community,     // Community forums and discussions
    Social,        // Social media posts
    PrimaryData,   // Direct observation or measurement
}
```

### Stance

Position taken by a source regarding the claim under verification.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum Stance {
    Support,    // Source supports the claim
    Contradict, // Source contradicts the claim
    Neutral,    // Source is neutral/ambiguous
    Partial,    // Source partially supports the claim
}
```
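`ProofGuardResult::agreement_ratio` is described above as the ratio of supporting sources. One plausible tier-weighted variant, shown here as a sketch rather than the module's actual formula, counts `Support` fully, `Partial` at half weight, and excludes `Neutral` from the denominator:

```rust
/// Local copy of the stance enum for this standalone sketch.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Stance {
    Support,
    Contradict,
    Neutral,
    Partial,
}

/// Weighted agreement over (stance, tier weight) pairs.
/// Assumptions: Support counts fully, Partial at half,
/// Neutral is ignored entirely.
fn agreement_ratio(sources: &[(Stance, f64)]) -> f64 {
    let mut support = 0.0;
    let mut total = 0.0;
    for &(stance, weight) in sources {
        match stance {
            Stance::Support => { support += weight; total += weight; }
            Stance::Partial => { support += 0.5 * weight; total += weight; }
            Stance::Contradict => { total += weight; }
            Stance::Neutral => {} // ambiguous evidence is excluded
        }
    }
    if total == 0.0 { 0.0 } else { support / total }
}

fn main() {
    // Two supporting Primary sources (1.0) and one contradicting Secondary (0.7).
    let sources = [
        (Stance::Support, 1.0),
        (Stance::Support, 1.0),
        (Stance::Contradict, 0.7),
    ];
    println!("agreement: {:.2}", agreement_ratio(&sources)); // prints "agreement: 0.74"
}
```

With the default `min_agreement_ratio` of 0.6, this example would still pass the consensus threshold despite the contradicting source.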

### CredibilityFactor

Individual factor contributing to source credibility assessment.

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CredibilityFactor {
    pub factor: CredibilityFactorType,
    pub weight: f64,
    pub evidence: String,
    pub confidence: f64,
}
```

#### Fields

| Field        | Type                    | Description                          |
| ------------ | ----------------------- | ------------------------------------ |
| `factor`     | `CredibilityFactorType` | Type of credibility factor           |
| `weight`     | `f64`                   | Weight assigned to this factor       |
| `evidence`   | `String`                | Evidence supporting this factor      |
| `confidence` | `f64`                   | Confidence in this factor assessment |

### CredibilityFactorType

Types of factors affecting source credibility.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum CredibilityFactorType {
    Authority,      // Author/expert credentials
    Recency,        // Publication/timestamp recency
    Citations,      // Reference to other sources
    Methodology,    // Research methodology quality
    Independence,   // Source independence/bias
    Consistency,    // Internal consistency
    Corroboration,  // External corroboration
}
```

### Contradiction

Identification of conflicting evidence between sources.

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Contradiction {
    pub source_indices: Vec<usize>,
    pub claim_variants: Vec<String>,
    pub severity: ContradictionSeverity,
    pub resolution_status: ResolutionStatus,
    pub evidence_summary: String,
}
```

#### Fields

| Field               | Type                    | Description                       |
| ------------------- | ----------------------- | --------------------------------- |
| `source_indices`    | `Vec<usize>`            | Indices of contradictory sources  |
| `claim_variants`    | `Vec<String>`           | Different versions of the claim   |
| `severity`          | `ContradictionSeverity` | Seriousness of contradiction      |
| `resolution_status` | `ResolutionStatus`      | Current resolution state          |
| `evidence_summary`  | `String`                | Summary of contradictory evidence |

### ContradictionSeverity

Severity classification for contradictions affecting confidence.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ContradictionSeverity {
    Minor,    // Small discrepancy
    Moderate, // Significant disagreement
    Major,    // Fundamental contradiction
    Critical, // Completely incompatible
}
```
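The configuration exposes a single `contradiction_penalty` scalar; how it interacts with severity is not specified here. As an illustrative sketch (not the module's documented formula), one could scale the penalty by a per-severity factor and clamp the result:

```rust
/// Local copy of the severity enum for this standalone sketch.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum ContradictionSeverity {
    Minor,
    Moderate,
    Major,
    Critical,
}

/// Hypothetical severity multiplier applied to the configured penalty.
fn severity_factor(s: ContradictionSeverity) -> f64 {
    match s {
        ContradictionSeverity::Minor => 0.25,
        ContradictionSeverity::Moderate => 0.5,
        ContradictionSeverity::Major => 0.75,
        ContradictionSeverity::Critical => 1.0,
    }
}

/// Apply the penalty once per contradiction, clamping to [0.0, 1.0].
fn penalized_confidence(base: f64, penalty: f64, contradictions: &[ContradictionSeverity]) -> f64 {
    let total: f64 = contradictions.iter().map(|&s| penalty * severity_factor(s)).sum();
    (base - total).clamp(0.0, 1.0)
}

fn main() {
    // Default penalty 0.3, one Major contradiction against a 0.9 base score.
    let c = penalized_confidence(0.9, 0.3, &[ContradictionSeverity::Major]);
    println!("confidence after penalty: {:.3}", c); // prints "confidence after penalty: 0.675"
}
```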

### VerificationMetadata

Execution metadata and performance statistics.

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct VerificationMetadata {
    pub execution_time_ms: u64,
    pub sources_analyzed: usize,
    pub credibility_checks: usize,
    pub contradiction_analysis: usize,
    pub calibration_applied: bool,
}
```

#### Fields

| Field                    | Type    | Description                             |
| ------------------------ | ------- | --------------------------------------- |
| `execution_time_ms`      | `u64`   | Total execution time in milliseconds    |
| `sources_analyzed`       | `usize` | Number of sources analyzed              |
| `credibility_checks`     | `usize` | Number of credibility assessments       |
| `contradiction_analysis` | `usize` | Number of contradiction checks          |
| `calibration_applied`    | `bool`  | Whether confidence calibration was used |

## Methods

### ProofGuard::new()

Create a new ProofGuard module with default configuration.

```rust
pub fn new() -> Self
```

Returns: `ProofGuard` instance with default settings.

Example:

```rust
let module = ProofGuard::new();
assert_eq!(module.name(), "ProofGuard");
assert_eq!(module.version(), "3.0.0");
```

### ProofGuard::with_config()

Create a new ProofGuard module with custom configuration.

```rust
pub fn with_config(config: ProofGuardConfig) -> Self
```

Parameters:

- `config`: `ProofGuardConfig` - Custom configuration parameters

Returns: `ProofGuard` instance with specified configuration.

Example:

```rust
let config = ProofGuardConfig::strict();
let module = ProofGuard::with_config(config);
```

### ProofGuard::verify_claim()

Direct verification of a claim with provided sources.

```rust
pub fn verify_claim(&self, claim: &str, sources: &[ProofGuardSource]) -> Result<ProofGuardResult>
```

Parameters:

- `claim`: `&str` - Claim to verify
- `sources`: `&[ProofGuardSource]` - Sources supporting/contradicting the claim

Returns: `Result<ProofGuardResult>` - Verification results or error.

Example:

```rust
let module = ProofGuard::new();
let sources = vec![
    ProofGuardSource {
        name: "Rust Book".to_string(),
        tier: "Primary".to_string(),
        source_type: "Documentation".to_string(),
        stance: "Support".to_string(),
    },
    // ... more sources
];
let result = module.verify_claim("Rust is memory-safe", &sources)?;
```

### ProofGuard::execute()

Execute the ProofGuard module synchronously.

```rust
impl ThinkToolModule for ProofGuard {
    fn execute(&self, context: &ThinkToolContext) -> Result<ThinkToolOutput>
}
```

Parameters:

- `context`: `&ThinkToolContext` - Execution context with JSON claim and sources

Returns: `Result<ThinkToolOutput>` - Structured output or error.

Example:

```rust
let module = ProofGuard::new();
let context = ThinkToolContext::new(r#"{
    "claim": "Quantum computers can break RSA encryption",
    "sources": [...]
}"#);
let result = module.execute(&context)?;
```

### ProofGuard::config()

Get the module configuration.

```rust
pub fn config(&self) -> &ProofGuardConfig
```

Returns: `&ProofGuardConfig` - Reference to current configuration.

Example:

```rust
let module = ProofGuard::new();
let config = module.config();
assert_eq!(config.min_sources, 3);
```

## Usage Examples

### Basic Claim Verification

```rust
use reasonkit::thinktool::modules::{ProofGuard, ThinkToolContext, ThinkToolModule};

// Create module with default settings
let module = ProofGuard::new();

// Prepare JSON input with claim and sources
let context = ThinkToolContext::new(r#"{
    "claim": "Rust is memory-safe without a garbage collector",
    "sources": [
        {
            "name": "Rust Book",
            "tier": "Primary",
            "source_type": "Documentation",
            "stance": "Support"
        },
        {
            "name": "ACM Paper on Memory Safety",
            "tier": "Primary",
            "source_type": "Academic",
            "stance": "Support"
        },
        {
            "name": "Tech Blog Analysis",
            "tier": "Secondary",
            "source_type": "News",
            "stance": "Support"
        }
    ]
}"#);

// Execute verification
let result = module.execute(&context)?;

// Check verification recommendation
let verification = result.get_str("verification").unwrap();
println!("Verification result: {}", verification);

// Check confidence score
let confidence_score = result.get("confidence").unwrap().get("score").unwrap().as_f64().unwrap();
println!("Confidence score: {:.2}", confidence_score);
```

### Custom Configuration

```rust
use reasonkit::thinktool::modules::{ProofGuard, ProofGuardConfig, ThinkToolContext};

// Create strict configuration for high-stakes verification
let config = ProofGuardConfig {
    min_sources: 5,
    require_tier1: true,
    min_agreement_ratio: 0.8,
    contradiction_penalty: 0.5,
    timeout_ms: 30000,
};

let module = ProofGuard::with_config(config);

// Complex claim verification
let context = ThinkToolContext::new(r#"{
    "claim": "Artificial general intelligence will emerge by 2030",
    "sources": [
        {"name": "Expert Survey", "tier": "Primary", "source_type": "Academic", "stance": "Support"},
        {"name": "Industry Report", "tier": "Secondary", "source_type": "Industry", "stance": "Support"},
        {"name": "Skeptic Analysis", "tier": "Secondary", "source_type": "Expert", "stance": "Contradict"},
        {"name": "Historical Precedent Study", "tier": "Primary", "source_type": "Academic", "stance": "Neutral"},
        {"name": "Technical Feasibility Analysis", "tier": "Primary", "source_type": "Academic", "stance": "Partial"}
    ]
}"#);

let result = module.execute(&context)?;

// Analyze contradictions
let contradictions = result.get_array("contradictions").unwrap();
if !contradictions.is_empty() {
    println!("Found {} contradictions requiring resolution", contradictions.len());
}
```

### Programmatic Source Addition

```rust
use reasonkit::thinktool::modules::{ProofGuard, ProofGuardSource, ThinkToolContext};

let module = ProofGuard::new();

// Programmatically build sources
let sources = vec![
    ProofGuardSource {
        name: "Official Climate Report".to_string(),
        tier: "Primary".to_string(),
        source_type: "Government".to_string(),
        stance: "Support".to_string(),
        ..Default::default()
    },
    ProofGuardSource {
        name: "Peer-reviewed Climate Study".to_string(),
        tier: "Primary".to_string(),
        source_type: "Academic".to_string(),
        stance: "Support".to_string(),
        ..Default::default()
    },
    ProofGuardSource {
        name: "Industry Whitepaper".to_string(),
        tier: "Secondary".to_string(),
        source_type: "Industry".to_string(),
        stance: "Neutral".to_string(),
        ..Default::default()
    },
];

// Convert to JSON for execution
let json_input = serde_json::json!({
    "claim": "Global temperatures have risen significantly in the past century",
    "sources": sources
});

let context = ThinkToolContext::new(json_input.to_string());
let result = module.execute(&context)?;
```

## Error Handling

ProofGuard defines specific error types for verification failures:

### ProofGuardError

Enumeration of all possible module-specific errors.

```rust
#[derive(Error, Debug, Clone)]
pub enum ProofGuardError {
    InsufficientSources { provided: usize, required: usize },
    MissingTier1Source { reason: String },
    ContradictoryEvidence { contradictions: usize },
    TriangulationFailed { reason: String },
    SourceVerificationFailed { source: String, error: String },
    ClaimFormatInvalid { message: String },
    VerificationTimeout { duration_ms: u64 },
}
```

### Error Descriptions

| Error Variant              | Parameters             | Description                         |
| -------------------------- | ---------------------- | ----------------------------------- |
| `InsufficientSources`      | `provided`, `required` | Fewer than minimum required sources |
| `MissingTier1Source`       | `reason`               | No Tier 1 sources provided          |
| `ContradictoryEvidence`    | `contradictions`       | Severe contradictions found         |
| `TriangulationFailed`      | `reason`               | Unable to establish verification    |
| `SourceVerificationFailed` | `source`, `error`      | Error verifying source credibility  |
| `ClaimFormatInvalid`       | `message`              | Malformed claim input               |
| `VerificationTimeout`      | `duration_ms`          | Verification exceeded timeout       |

### Error Conversion

All ProofGuardError variants are automatically converted to the standard Error type:

```rust
impl From<ProofGuardError> for Error {
    fn from(err: ProofGuardError) -> Self {
        Error::ThinkToolExecutionError(err.to_string())
    }
}
```

### Handling Errors

```rust
use reasonkit::thinktool::modules::{ProofGuard, ThinkToolContext, ProofGuardError};

let module = ProofGuard::new();
let context = ThinkToolContext::new(r#"{"invalid": "format"}"#); // Missing "claim" and "sources" fields

match module.execute(&context) {
    Ok(result) => {
        // Process successful result; read the nested confidence score
        // the same way as in the earlier examples.
        if let Some(score) = result
            .get("confidence")
            .and_then(|c| c.get("score"))
            .and_then(|s| s.as_f64())
        {
            println!("Verification completed with confidence: {:.2}", score);
        }
    }
    Err(e) => {
        // Handle specific ProofGuard errors
        if let Some(pg_err) = e.downcast_ref::<ProofGuardError>() {
            match pg_err {
                ProofGuardError::InsufficientSources { provided, required } => {
                    eprintln!("Insufficient sources: {} provided, {} required", provided, required);
                }
                ProofGuardError::MissingTier1Source { reason } => {
                    eprintln!("Missing Tier 1 source: {}", reason);
                }
                ProofGuardError::VerificationTimeout { duration_ms } => {
                    eprintln!("Verification timed out after {}ms", duration_ms);
                }
                _ => eprintln!("ProofGuard error: {}", pg_err),
            }
        } else {
            // Handle other errors
            eprintln!("Other error: {}", e);
        }
    }
}
```

## Performance Considerations

1. **Source Limit**: Control `min_sources` to balance rigor with efficiency
2. **Timeout Settings**: Set appropriate timeouts based on source complexity
3. **Tier Requirements**: Relax `require_tier1` for faster verification when quality is less critical
4. **Agreement Ratio**: Adjust `min_agreement_ratio` for desired consensus threshold
5. **Contradiction Penalty**: Tune `contradiction_penalty` for risk tolerance
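Combining these knobs, a middle ground between `fast()` and `strict()` might look like the following. The struct definition is restated so the sketch compiles on its own, and the `balanced` profile is illustrative, not a documented preset:

```rust
/// Restated from the Configuration section so this sketch is self-contained.
#[allow(dead_code)]
pub struct ProofGuardConfig {
    pub min_sources: usize,
    pub require_tier1: bool,
    pub min_agreement_ratio: f64,
    pub contradiction_penalty: f64,
    pub timeout_ms: u64,
}

/// Illustrative profile between fast() and strict():
/// more rigor than the default, without strict()'s 5-source floor.
fn balanced() -> ProofGuardConfig {
    ProofGuardConfig {
        min_sources: 4,
        require_tier1: true,
        min_agreement_ratio: 0.7,
        contradiction_penalty: 0.4,
        timeout_ms: 20_000,
    }
}

fn main() {
    let cfg = balanced();
    println!("min_sources = {}", cfg.min_sources); // prints "min_sources = 4"
}
```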

## Integration Notes

When using ProofGuard in protocol execution:

```rust
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};

// ProtocolExecutor handles LLM integration automatically
let executor = ProtocolExecutor::new()?;
let result = executor.execute(
    "proofguard",
    ProtocolInput::json(r#"{"claim": "...", "sources": [...]}"#)
).await?;
```

The protocol-based approach provides:

- Automatic LLM assistance for source credibility assessment
- Streaming output for real-time verification progress
- Built-in retry logic for failed verification steps
- Comprehensive execution tracing and audit trail
- Integration with memory layer for source history tracking