reasonkit-core 0.1.8

# ThinkTools API Overview

Version: 3.0.0

This document provides a comprehensive overview of ReasonKit's ThinkTools API,
covering the five core reasoning modules: GigaThink, LaserLogic, BedRock,
ProofGuard, and BrutalHonesty.

## Table of Contents

1. [Introduction](#introduction)
2. [Core Concepts](#core-concepts)
3. [Module Comparison](#module-comparison)
4. [Common Patterns](#common-patterns)
5. [Integration Guide](#integration-guide)
6. [Best Practices](#best-practices)
7. [Version Compatibility](#version-compatibility)

## Introduction

ThinkTools are structured reasoning protocols that transform ad-hoc LLM prompting
into auditable, reproducible reasoning chains. Each module implements a specific
analytical strategy:

| Tool              | Code | Purpose                        | Key Feature           |
| ----------------- | ---- | ------------------------------ | --------------------- |
| **GigaThink**     | `gt` | Expansive creative thinking    | 10+ perspectives      |
| **LaserLogic**    | `ll` | Precision deductive reasoning  | Fallacy detection     |
| **BedRock**       | `br` | First principles decomposition | Core axiom extraction |
| **ProofGuard**    | `pg` | Multi-source verification      | 3+ sources required   |
| **BrutalHonesty** | `bh` | Adversarial self-critique      | Skeptical scoring     |

## Core Concepts

### ThinkToolModule Trait

All modules implement the core `ThinkToolModule` trait:

```rust
pub trait ThinkToolModule: Send + Sync {
    fn config(&self) -> &ThinkToolModuleConfig;
    fn execute(&self, context: &ThinkToolContext) -> Result<ThinkToolOutput>;
    fn name(&self) -> &str { &self.config().name }
    fn version(&self) -> &str { &self.config().version }
    fn description(&self) -> &str { &self.config().description }
    fn confidence_weight(&self) -> f64 { self.config().confidence_weight }
}
```

### ThinkToolContext

Universal execution context for all modules:

```rust
pub struct ThinkToolContext {
    pub query: String,           // Primary input/query
    pub previous_steps: Vec<String>, // Results from prior reasoning steps
}
```

### ThinkToolOutput

Standardized output format:

```rust
pub struct ThinkToolOutput {
    pub module: String,              // Module name
    pub confidence: f64,             // Confidence score (0.0-1.0)
    pub output: serde_json::Value,   // Structured module-specific output
}
```
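
With the trait and the two types above in hand, a custom module only needs `config` and `execute`. The sketch below is self-contained: it re-declares minimal stand-ins for the types so it compiles without the crate (`output` is simplified from `serde_json::Value` to `String`, and the error type to `String`); the `EchoModule` itself is purely illustrative, not part of ReasonKit.

```rust
// Minimal stand-ins mirroring the structs documented above.
pub struct ThinkToolModuleConfig {
    pub name: String,
    pub version: String,
    pub description: String,
    pub confidence_weight: f64,
}

pub struct ThinkToolContext {
    pub query: String,
    pub previous_steps: Vec<String>,
}

pub struct ThinkToolOutput {
    pub module: String,
    pub confidence: f64,
    pub output: String, // simplified from serde_json::Value
}

type Result<T> = std::result::Result<T, String>;

pub trait ThinkToolModule: Send + Sync {
    fn config(&self) -> &ThinkToolModuleConfig;
    fn execute(&self, context: &ThinkToolContext) -> Result<ThinkToolOutput>;
    fn name(&self) -> &str { &self.config().name }
    fn confidence_weight(&self) -> f64 { self.config().confidence_weight }
}

/// A trivial module that echoes the query back (illustrative only).
pub struct EchoModule { config: ThinkToolModuleConfig }

impl EchoModule {
    pub fn new() -> Self {
        Self {
            config: ThinkToolModuleConfig {
                name: "echo".into(),
                version: "0.1.0".into(),
                description: "Echoes the query".into(),
                confidence_weight: 1.0,
            },
        }
    }
}

impl ThinkToolModule for EchoModule {
    fn config(&self) -> &ThinkToolModuleConfig { &self.config }

    fn execute(&self, context: &ThinkToolContext) -> Result<ThinkToolOutput> {
        // Reject empty queries, mirroring validation errors such as
        // GigaThinkError::QueryTooShort described later in this document.
        if context.query.is_empty() {
            return Err("empty query".into());
        }
        Ok(ThinkToolOutput {
            module: self.name().to_string(),
            confidence: 0.5,
            output: context.query.clone(),
        })
    }
}
```

Because `name` and `confidence_weight` have default bodies that delegate to `config()`, an implementor gets them for free once `config` is provided.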

## Module Comparison

### Strategic Focus

| Module        | Expansion | Reduction | Verification | Critique | Foundation |
| ------------- | --------- | --------- | ------------ | -------- | ---------- |
| GigaThink     | ✅ High   | ❌ Low    | ❌ Low       | ❌ Low   | ❌ Low     |
| LaserLogic    | ❌ Low    | ✅ High   | ✅ Medium    | ❌ Low   | ❌ Low     |
| BedRock       | ❌ Low    | ✅ High   | ❌ Low       | ❌ Low   | ✅ High    |
| ProofGuard    | ❌ Low    | ❌ Low    | ✅ High      | ❌ Low   | ❌ Low     |
| BrutalHonesty | ❌ Low    | ❌ Low    | ❌ Low       | ✅ High  | ❌ Low     |

### Confidence Handling

| Module        | Confidence Source     | Adjustment Method             | Calibration              |
| ------------- | --------------------- | ----------------------------- | ------------------------ |
| GigaThink     | Perspective coherence | Average of perspectives       | Cross-validation         |
| LaserLogic    | Logical validity      | Formal proof strength         | Soundness checking       |
| BedRock       | Axiomatic foundation  | Principle reliability weights | Reconstruction quality   |
| ProofGuard    | Source triangulation  | Evidence agreement ratio      | Source quality weighting |
| BrutalHonesty | Flaw analysis         | Severity-based penalties      | Bias-aware adjustment    |

### Execution Characteristics

| Module        | Speed  | Resource Usage | Async Support | LLM Dependency |
| ------------- | ------ | -------------- | ------------- | -------------- |
| GigaThink     | Medium | High           | ✅ Yes        | ✅ High        |
| LaserLogic    | Fast   | Low            | ❌ No         | ❌ Low         |
| BedRock       | Slow   | High           | ✅ Yes        | ✅ High        |
| ProofGuard    | Medium | Medium         | ✅ Yes        | ✅ Medium      |
| BrutalHonesty | Medium | Medium         | ✅ Yes        | ✅ High        |

## Common Patterns

### 1. Sequential Chain Execution

Execute modules in sequence for comprehensive analysis:

```rust
use reasonkit::thinktool::modules::{GigaThink, LaserLogic, ProofGuard, BrutalHonesty};
use reasonkit::thinktool::{ThinkToolContext, ThinkToolModule};

fn comprehensive_analysis(query: &str) -> Result<()> {
    // 1. Generate perspectives (GigaThink)
    let gt = GigaThink::new();
    let gt_result = gt.execute(&ThinkToolContext::new(query))?;

    // 2. Validate logic (LaserLogic)
    let ll = LaserLogic::new();
    let synthesis = gt_result.get_str("synthesis").unwrap_or(query);
    let ll_result = ll.execute(&ThinkToolContext::new(synthesis))?;

    // 3. Verify facts (ProofGuard)
    let pg = ProofGuard::new();
    // Extract claims from previous steps and verify them

    // 4. Challenge assumptions (BrutalHonesty)
    let bh = BrutalHonesty::new();
    let bh_result = bh.execute(&ThinkToolContext::new(synthesis))?;

    Ok(())
}
```

### 2. Parallel Execution

Run multiple modules concurrently for efficiency:

```rust
use tokio::task::spawn_blocking;
use reasonkit::thinktool::modules::{BrutalHonesty, GigaThink, LaserLogic};
use reasonkit::thinktool::{ThinkToolContext, ThinkToolModule};

async fn parallel_analysis(query: &str) -> Result<()> {
    // `execute` is synchronous, so plain `async` blocks would run the
    // calls sequentially on one task. `spawn_blocking` moves each call
    // onto the blocking thread pool for real parallelism (each task
    // needs its own owned context; this assumes `ThinkToolContext: Clone`).
    let gt_ctx = ThinkToolContext::new(query);
    let ll_ctx = gt_ctx.clone();
    let bh_ctx = gt_ctx.clone();

    let (gt_join, ll_join, bh_join) = tokio::join!(
        spawn_blocking(move || GigaThink::new().execute(&gt_ctx)),
        spawn_blocking(move || LaserLogic::new().execute(&ll_ctx)),
        spawn_blocking(move || BrutalHonesty::new().execute(&bh_ctx)),
    );

    // Unwrap the join handles, then propagate any module error.
    let (gt_result, ll_result, bh_result) = (
        gt_join.expect("task panicked")?,
        ll_join.expect("task panicked")?,
        bh_join.expect("task panicked")?,
    );

    // Process results...

    Ok(())
}
```

### 3. Profile-Based Execution

Use predefined reasoning profiles for consistent analysis:

```rust
use reasonkit::thinktool::profiles::{ProfileRegistry, ReasoningProfile};
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};

async fn profiled_analysis(query: &str) -> Result<()> {
    // Load profile registry
    let registry = ProfileRegistry::load_default()?;

    // Get balanced profile (GT + LL + BR + PG)
    let profile = registry.get_profile("balanced")?;

    // Execute with profile
    let executor = ProtocolExecutor::new()?;
    let result = executor.execute_profile(
        profile,
        ProtocolInput::query(query)
    ).await?;

    Ok(())
}
```

### 4. Configuration Patterns

Customize modules for specific use cases:

```rust
use reasonkit::thinktool::modules::{GigaThink, GigaThinkConfig, LaserLogic, LaserLogicConfig};

// Fast analysis configuration
fn fast_config() -> (GigaThink, LaserLogic) {
    let gt = GigaThink::with_config(
        GigaThinkConfig::fast()
    );

    let ll = LaserLogic::with_config(
        LaserLogicConfig::quick()
    );

    (gt, ll)
}

// Deep analysis configuration
fn deep_config() -> (GigaThink, LaserLogic) {
    let gt = GigaThink::with_config(
        GigaThinkConfig::deep()
    );

    let ll = LaserLogic::with_config(
        LaserLogicConfig::deep()
    );

    (gt, ll)
}
```

## Integration Guide

### Direct Module Usage

```rust
use reasonkit::thinktool::modules::GigaThink;
use reasonkit::thinktool::{ThinkToolContext, ThinkToolModule};

// Create module instance
let module = GigaThink::new();

// Prepare execution context
let context = ThinkToolContext::new("Strategic question to analyze");

// Execute module
let result = module.execute(&context)?;

// Access structured output
let confidence = result.confidence;
let perspectives = result.get_array("perspectives").unwrap();

println!("Analysis confidence: {:.2}%", confidence * 100.0);
```

### Protocol Executor Integration

```rust
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};

// Create protocol executor (handles LLM integration)
let executor = ProtocolExecutor::new()?;

// Execute via protocol
let result = executor.execute(
    "gigathink",
    ProtocolInput::query("Question to analyze")
).await?;

// Access results
let module = &result.module;
let confidence = result.confidence;
```

### Custom Error Handling

```rust
use reasonkit::thinktool::modules::{GigaThink, GigaThinkError};
use reasonkit::thinktool::ThinkToolContext;
use reasonkit::error::Error;

fn handle_errors() {
    let module = GigaThink::new();
    let context = ThinkToolContext::new("");

    match module.execute(&context) {
        Ok(result) => {
            println!("Success: confidence {:.2}", result.confidence);
        }
        Err(e) => {
            // Handle module-specific errors
            if let Some(gt_err) = e.downcast_ref::<GigaThinkError>() {
                match gt_err {
                    GigaThinkError::QueryTooShort { length, minimum } => {
                        eprintln!("Query too short: {} < {}", length, minimum);
                    }
                    _ => eprintln!("GigaThink error: {}", gt_err),
                }
            } else {
                // Handle other errors
                eprintln!("Other error: {}", e);
            }
        }
    }
}
```

## Best Practices

### 1. Module Selection Strategy

Choose modules based on analytical needs:

- **Exploration**: Start with GigaThink for broad perspective generation
- **Validation**: Use LaserLogic for logical consistency checking
- **Foundation**: Apply BedRock for fundamental principle analysis
- **Verification**: Employ ProofGuard for factual accuracy confirmation
- **Challenge**: Utilize BrutalHonesty for critical examination
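
The selection strategy above can be encoded as a simple dispatch table. A minimal sketch; the `AnalyticalNeed` enum and `select_module` helper are illustrative, not part of the ReasonKit API:

```rust
/// Analytical needs from the list above (illustrative, not a ReasonKit type).
#[derive(Debug, Clone, Copy, PartialEq)]
enum AnalyticalNeed {
    Exploration,
    Validation,
    Foundation,
    Verification,
    Challenge,
}

/// Return the two-letter module code suited to each need.
fn select_module(need: AnalyticalNeed) -> &'static str {
    match need {
        AnalyticalNeed::Exploration => "gt",  // GigaThink
        AnalyticalNeed::Validation => "ll",   // LaserLogic
        AnalyticalNeed::Foundation => "br",   // BedRock
        AnalyticalNeed::Verification => "pg", // ProofGuard
        AnalyticalNeed::Challenge => "bh",    // BrutalHonesty
    }
}
```

An exhaustive `match` over the enum ensures the compiler flags any need that lacks a mapped module.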

### 2. Confidence Interpretation

Understand confidence scoring nuances:

```rust
fn interpret_confidence(confidence: f64) -> &'static str {
    match confidence {
        c if c >= 0.9 => "Very High - Strong evidence/consistency",
        c if c >= 0.8 => "High - Good evidence/logic",
        c if c >= 0.7 => "Medium - Adequate support",
        c if c >= 0.6 => "Low - Some support but concerns",
        _ => "Very Low - Significant doubts/weaknesses"
    }
}
```

### 3. Error Recovery Patterns

Implement graceful degradation:

```rust
fn resilient_analysis(query: &str) -> Result<ThinkToolOutput> {
    let context = ThinkToolContext::new(query);

    // Try primary analysis
    let module = GigaThink::new();
    match module.execute(&context) {
        Ok(result) => Ok(result),
        Err(e) => {
            // Fall back to simpler analysis
            eprintln!("Primary analysis failed: {}, falling back", e);
            let fallback = LaserLogic::new();
            fallback.execute(&context)
        }
    }
}
```

### 4. Performance Optimization

Balance thoroughness with efficiency:

```rust
// For real-time applications
let fast_modules: Vec<Box<dyn ThinkToolModule>> = vec![
    Box::new(LaserLogic::with_config(LaserLogicConfig::quick())),
    Box::new(BrutalHonesty::with_config(BrutalHonestyConfig::gentle())),
];

// For batch/offline analysis
let deep_modules: Vec<Box<dyn ThinkToolModule>> = vec![
    Box::new(GigaThink::with_config(GigaThinkConfig::deep())),
    Box::new(BedRock::with_config(BedRockConfig::deep())),
    Box::new(ProofGuard::with_config(ProofGuardConfig::strict())),
];
```

### 5. Result Composition

Combine multiple module outputs effectively:

```rust
fn compose_results(results: Vec<ThinkToolOutput>) -> CompositeAnalysis {
    let avg_confidence = results.iter().map(|r| r.confidence).sum::<f64>() / results.len() as f64;

    CompositeAnalysis {
        confidence: avg_confidence,
        perspectives: extract_perspectives(&results),
        logical_validity: extract_validity(&results),
        factual_accuracy: extract_accuracy(&results),
        critical_issues: extract_issues(&results),
    }
}
```
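
Since each module exposes a `confidence_weight()`, a weighted mean is often preferable to the plain average above. A self-contained sketch; pairing each confidence with its module's weight is left to the caller:

```rust
/// Weighted mean of (confidence, weight) pairs.
/// Returns 0.0 for empty input or all-zero weights.
fn weighted_confidence(scored: &[(f64, f64)]) -> f64 {
    let total_weight: f64 = scored.iter().map(|(_, w)| *w).sum();
    if total_weight == 0.0 {
        return 0.0;
    }
    scored.iter().map(|(c, w)| c * w).sum::<f64>() / total_weight
}
```

With weights of 1.0 this reduces to the unweighted average used in `compose_results`; raising a module's weight pulls the composite score toward that module's verdict.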

## Version Compatibility

This documentation covers ThinkTools API version 3.0.0. Key changes from previous versions:

- Enhanced configuration flexibility
- Improved error handling with specific error types
- Better async support across all modules
- Standardized output formats
- Expanded field-level documentation

For migration from earlier versions, refer to individual module API documentation
which includes backward compatibility notes.