Expand description
Β§ReasonKit
Β§The AI Reasoning Engine
βDesigned, not Dreamed.β β From Prompt to Cognitive Engineering.
Auditable Reasoning for Production AI | Rust-Native | Turn Prompts into Protocols
Website | Pro | Docs | Resources | Enterprise | About | GitHub
Β§π Quick Install
curl -fsSL https://get.reasonkit.sh | bashUniversal Installer β’ All Platforms β’ All Shells β’ 30 Seconds
Β§The Problem We Solve
Most AI is a slot machine. Insert prompt β pull lever β unclear, hope for coherence, at the mercy of chance.
ReasonKit is a factory. Input data β apply protocol β deeper logic, auditable result, known probabibility.

The Cost of Wrong Decisions: Without structured reasoning, AI decisions lead to financial loss and missed opportunities. Structured protocols catch errors early and prevent costly mistakes before they compound.
LLMs are fundamentally probabilistic. Same prompt β different outputs. This creates critical failures:
| Failure | Impact | Our Solution |
|---|---|---|
| Inconsistency | Unreliable for production | Deterministic protocol execution |
| Hallucination | Dangerous falsehoods | Multi-source triangulation + adversarial critique |
| Opacity | No audit trail | Complete execution tracing with confidence scores |
We donβt eliminate probability (impossible). We constrain it through structured protocols that force probabilistic outputs into deterministic execution paths.
Β§Quick Start
Already installed? Jump to Choose Your Workflow or How to Use.
Need installation help? See the Installation Guide or Installation Section below.
Β§π€ Choose Your Workflow
Β§π€ Claude Code (Opus 4.5)
Agentic CLI. No API key required.
claude mcp add reasonkit -- rk serve-mcp
claude "Use ReasonKit to analyze: Should we migrate to microservices?"Learn more: Claude Code Integration
Β§π ChatGPT (Browser)
Manual MCP Bridge. Injects the reasoning protocol directly into the chat.
# Generate strict protocol
rk protocol "Should we migrate to microservices?" | pbcopy
# β Paste into ChatGPT: "Execute this protocol..."Learn more: ChatGPT Integration
Β§β‘ Gemini 3.0 Pro (API)
Native CLI integration with Googleβs latest preview.
export GEMINI_API_KEY=AIza...
rk think --model gemini-3.0-pro-preview "Should we migrate to microservices?"Learn more: Google Gemini Integration β’ All Provider Integrations
Note: The
rkcommand is the shorthand alias forrk.
30 seconds to structured reasoning. See How to Use for more examples.
Β§ThinkTools: The 5-Step Reasoning Chain
Each ThinkTool acts as a variance reduction filter, transforming probabilistic outputs into increasingly deterministic reasoning paths.
π Full Documentation: ThinkTools Guide β’ API Reference



| ThinkTool | Operation | What It Does |
|---|---|---|
| GigaThink | Diverge() | Generate 10+ perspectives, explore widely |
| LaserLogic | Converge() | Detect fallacies, validate logic, find gaps |
| BedRock | Ground() | First principles decomposition, identify axioms |
| ProofGuard | Verify() | Multi-source triangulation, require 3+ sources |
| BrutalHonesty | Critique() | Adversarial red team, attack your own reasoning |
Β§Variance Reduction: The Chain Effect
Result: Raw LLM variance ~85% β Protocol-constrained variance ~28%
Β§Reasoning Profiles
Pre-configured chains for different rigor levels. See Reasoning Profiles Guide for detailed documentation.
# Fast analysis (70% confidence target)
rk think --profile quick "Is this email phishing?"
# Standard analysis (80% confidence target)
rk think --profile balanced "Should we use microservices?"
# Thorough analysis (85% confidence target)
rk think --profile deep "Design A/B test for feature X"
# Maximum rigor (95% confidence target)
rk think --profile paranoid "Validate cryptographic implementation"| Profile | Chain | Confidence | Use Case |
|---|---|---|---|
--quick | GigaThink β LaserLogic | 70% | Fast sanity checks |
--balanced | All 5 ThinkTools | 80% | Standard decisions |
--deep | All 5 + meta-cognition | 85% | Complex problems |
--paranoid | All 5 + validation pass | 95% | Critical decisions |
Β§See It In Action

$ rk think --profile balanced "Should we migrate to microservices?"
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
ThinkTool Chain: GigaThink β LaserLogic β BedRock β ProofGuard
Variance: 85% β 72% β 58% β 42% β 28%
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
[GigaThink] 10 PERSPECTIVES GENERATED Variance: 85%
1. OPERATIONAL: Maintenance overhead +40% initially
2. TEAM TOPOLOGY: Conway's Law - do we have the teams?
3. COST ANALYSIS: Infrastructure scales non-linearly
...
β Variance after exploration: 72% (-13%)
[LaserLogic] HIDDEN ASSUMPTIONS DETECTED Variance: 72%
β Assuming network latency is negligible
β Assuming team has distributed tracing expertise
β Logical gap: No evidence microservices solve stated problem
β Variance after validation: 58% (-14%)
[BedRock] FIRST PRINCIPLES DECOMPOSITION Variance: 58%
β’ Axiom: Monoliths are simpler to reason about (empirical)
β’ Axiom: Distributed systems introduce partitions (CAP theorem)
β’ Gap: Cannot prove maintainability improvement without data
β Variance after grounding: 42% (-16%)
[ProofGuard] TRIANGULATION RESULT Variance: 42%
β’ 3/5 sources: Microservices increase complexity initially
β’ 2/5 sources: Some teams report success
β’ Confidence: 0.72 (MEDIUM) - Mixed evidence
β Variance after verification: 28% (-14%)
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
VERDICT: conditional_yes | Confidence: 87% | Duration: 2.3s
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββWhat This Shows:
- Transparency: See exactly where confidence comes from
- Auditability: Every step logged and verifiable
- Deterministic Path: Same protocol β same execution flow
- Variance Reduction: Quantified uncertainty reduction at each stage
Β§Architecture
The ReasonKit architecture uses a Protocol Engine wrapper to enforce deterministic execution over probabilistic LLM outputs.
π Full Documentation: Architecture Guide β’ API Reference


Three-Layer Architecture:
-
Probabilistic LLM (Unavoidable)
- LLMs generate tokens probabilistically
- Same prompt β different outputs
- We cannot eliminate this
-
Deterministic Protocol Engine (Our Innovation)
- Wraps the probabilistic LLM layer
- Enforces strict execution paths
- Validates outputs against schemas
- State machine ensures consistent flow
-
ThinkTool Chain (Variance Reduction)
- Each ThinkTool reduces variance
- Multi-stage validation catches errors
- Confidence scoring quantifies uncertainty
Key Components:
- Protocol Engine: Orchestrates execution with strict state management
- ThinkTools: Modular cognitive operations with defined contracts
- LLM Integration: Unified client (Claude, GPT, Gemini, 18+ providers)
- Telemetry: Local SQLite for execution traces + variance metrics

Architecture (Mermaid Diagram)
flowchart LR
subgraph CLI["ReasonKit CLI (rk)"]
A[User Command<br/>rk think --profile balanced]
end
subgraph PROTOCOL["Deterministic Protocol Engine"]
B1[State Machine<br/>Execution Plan]
B2[ThinkTool Orchestrator]
B3[(SQLite Trace DB)]
end
subgraph LLM["LLM Layer (Probabilistic)"]
C1[Provider Router]
C2[Claude / GPT / Gemini / ...]
end
subgraph TOOLS["ThinkTools Β· Variance Reduction"]
G["GigaThink<br/>Diverge()"]
LZ["LaserLogic<br/>Converge()"]
BR["BedRock<br/>Ground()"]
PG["ProofGuard<br/>Verify()"]
BH["BrutalHonesty<br/>Critique()"]
end
A --> B1 --> B2 --> G --> LZ --> BR --> PG --> BH --> B3
B2 --> C1 --> C2 --> B2
classDef core fill:#030508,stroke:#06b6d4,stroke-width:1px,color:#f9fafb;
classDef tool fill:#0a0d14,stroke:#10b981,stroke-width:1px,color:#f9fafb;
classDef llm fill:#111827,stroke:#a855f7,stroke-width:1px,color:#f9fafb;
class CLI,PROTOCOL core;
class G,LZ,BR,PG,BH tool;
class LLM,llm C1,C2;Β§Built for Production
ReasonKit is written in Rust because reasoning infrastructure demands reliability.
| Capability | What It Means for You |
|---|---|
| Predictable Latency | <5ms orchestration overhead, no GC pauses |
| Memory Safety | Zero crashes from null pointers or buffer overflows |
| Single Binary | Deploy anywhere, no Python environment required |
| Fearless Concurrency | Run 100+ reasoning chains in parallel safely |
| Type Safety | Errors caught at compile time, not runtime |
Benchmarked Performance (view full report β’ online version):
| Operation | Time | Target |
|---|---|---|
| Protocol orchestration | 4.4ms | <10ms |
| RRF Fusion (100 elements) | 33ΞΌs | <5ms |
| Document chunking (10 KB) | 27ΞΌs | <5ms |
| RAPTOR tree traversal (1000 nodes) | 33ΞΌs | <5ms |
Why This Matters:
Your AI reasoning shouldnβt crash in production. It shouldnβt pause for garbage collection during critical decisions. It shouldnβt require complex environment management to deploy.
ReasonKitβs Rust foundation ensures deterministic, auditable execution every timeβthe same engineering choice trusted by Linux, Cloudflare, Discord, and AWS for their most critical infrastructure.
Β§Memory Infrastructure (Optional)
Memory modules (storage, embedding, retrieval, RAPTOR, indexing) are available in the standalone reasonkit-mem crate.
π Documentation: Memory Layer Guide β’ Crates.io β’ Docs.rs
Enable the memory feature to use these modules:
[dependencies]
reasonkit-core = { version = "0.1", features = ["memory"] }Features:
- Qdrant vector database (embedded mode)
- Hybrid search (dense + sparse fusion)
- RAPTOR hierarchical retrieval
- Local embeddings (BGE-M3 ONNX)
- BM25 full-text search (Tantivy)
Β§Installation
Primary Method (Universal - All Platforms & Shells):
curl -fsSL https://get.reasonkit.sh | bashπ Full Installation Guide: docs.reasonkit.sh/getting-started/installation
Platform Support:
- β Linux (all distributions)
- β macOS (Intel & Apple Silicon)
- β Windows (WSL & Native PowerShell)
- β FreeBSD (experimental)
Shell Support:
- β Bash (auto-detected, PATH configured)
- β Zsh (auto-detected, PATH configured)
- β Fish (auto-detected, PATH configured)
- β Nu (Nushell) (auto-detected, PATH configured)
- β PowerShell (cross-platform, PATH configured)
- β Elvish (auto-detected, PATH configured)
- β tcsh/csh/ksh (basic support)
Features:
- π¨ Beautiful terminal UI with progress visualization
- β‘ Fast installation (~30 seconds)
- π Secure (HTTPS-only, checksum verification)
- π§ Smart shell detection and PATH configuration
- π Real-time build progress with ETA
- π Automatic Rust installation if needed
π Learn more: Installation Guide β’ Installation Audit Report
Alternative Methods
# Cargo (Rust) - Recommended for Developers
cargo install reasonkit-core
# From Source (Latest Features)
git clone https://github.com/reasonkit/reasonkit-core
cd reasonkit-core && cargo build --releaseπ¦ Package Links: Crates.io β’ Docs.rs β’ GitHub Releases
Windows (Native PowerShell):
irm https://get.reasonkit.sh/windows | iexPython bindings available via PyO3 (build from source with --features python).
Β§How to Use
Command Structure: rk <command> [options] [arguments]
π Full CLI Reference: CLI Documentation β’ API Reference
Standard Operations:
# Balanced analysis (5-step protocol)
rk think --profile balanced "Should we migrate our monolith to microservices?"
# Quick sanity check (2-step protocol)
rk think --profile quick "Is this email a phishing attempt?"
# Maximum rigor (paranoid mode)
rk think --profile paranoid "Validate this cryptographic implementation"
# Scientific method (research & experiments)
rk think --profile scientific "Design A/B test for feature X"With Memory (RAG):
# Ingest documents
rk ingest document.pdf
# Query with RAG
rk query "What are the key findings in the research papers?"
# View execution traces
rk trace list
rk trace export <id>π Learn more: RAG Guide β’ Memory Layer Documentation
Β§Contributing: The 5 Gates of Quality
We demand excellence. All contributions must pass The 5 Gates of Quality:
π Contributing Guide: CONTRIBUTING.md β’ Quality Gates Documentation

# Clone & Setup
git clone https://github.com/reasonkit/reasonkit-core
cd reasonkit-core
# The 5 Gates (MANDATORY)
cargo build --release # Gate 1: Compilation (Exit 0)
cargo clippy -- -D warnings # Gate 2: Linting (0 errors)
cargo fmt --check # Gate 3: Formatting (Pass)
cargo test --all-features # Gate 4: Testing (100% pass)
cargo bench # Gate 5: Performance (<5% regression)Quality Score Target: 8.0/10 minimum for release.
π Complete Guidelines: CONTRIBUTING.md β’ Quality Metrics
Β§π·οΈ Community Badge
If you use ReasonKit in your project, add our badge:
[](https://reasonkit.sh)π Badge Guide: Community Badges β’ All Variants
Β§π¨ Branding & Design
- Brand Playbook - Complete brand guidelines
- Component Spec - UI component system
- Motion Guidelines - Animation system
- 3D Assets - WebGL integration guide
- Integration Guide - Complete integration instructions
π Online Resources: Brand Guidelines β’ Design System
Β§Design Philosophy: Honest Engineering
We donβt claim to eliminate probability. Thatβs impossible. LLMs are probabilistic by design.
We do claim to constrain it. Through structured protocols, multi-stage validation, and deterministic execution paths, we transform probabilistic token generation into auditable reasoning chains.
| What We Battle | How We Battle It | What Weβre Honest About |
|---|---|---|
| Inconsistency | Deterministic protocol execution | LLM outputs still vary, but execution paths donβt |
| Hallucination | Multi-source triangulation, adversarial critique | Canβt eliminate, but can detect and flag |
| Opacity | Full execution tracing, confidence scoring | Transparency doesnβt guarantee correctness |
| Uncertainty | Explicit confidence metrics, variance reduction | We quantify uncertainty, not eliminate it |
Β§Version & Maturity
| Component | Status | Notes |
|---|---|---|
| ThinkTools Chain | β Stable | Core reasoning protocols production-ready |
| MCP Server | β Stable | Model Context Protocol integration |
| CLI | πΆ Scaffolded | mcp, serve-mcp, completions work; others planned |
| Memory Features | β Stable | Via reasonkit-mem crate |
| Python Bindings | πΆ Beta | Build from source with --features python |
Current Version: v0.1.5 | CHANGELOG | Releases β’ π¦ Crates.io β’ π Docs.rs
Β§Verify Installation
# Check version
rk --version
# Verify MCP server starts
rk serve-mcp --help
# Run a quick test (requires LLM API key)
OPENAI_API_KEY=your-key rk mcpπ Troubleshooting: Installation Issues β’ Common Problems
Β§License
Apache 2.0 - See LICENSE
Open Source Core: All core reasoning protocols and ThinkTools are open source under Apache 2.0.
π License Information: LICENSE β’ License Strategy

ReasonKit β Turn Prompts into Protocols
Designed, Not Dreamed
Website | Pro | Docs | Resources | Enterprise | About | GitHub
AI Thinking Enhancement System - Turn Prompts into Protocols
ReasonKit Core is a pure reasoning engine that improves AI thinking patterns through structured reasoning protocols called ThinkTools. It transforms ad-hoc LLM prompting into auditable, reproducible reasoning chains.
Β§Philosophy
βDesigned, Not Dreamedβ - Structure beats raw intelligence. By imposing systematic reasoning protocols, ReasonKit helps AI models produce more reliable, verifiable, and explainable outputs.
Β§Quick Start
Β§Rust Usage
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Create executor (auto-detects LLM from environment)
let executor = ProtocolExecutor::new()?;
// Run GigaThink for multi-perspective analysis
let result = executor.execute(
"gigathink",
ProtocolInput::query("Should we use microservices?")
).await?;
println!("Confidence: {:.2}", result.confidence);
for perspective in result.perspectives() {
println!("- {}", perspective);
}
Ok(())
}Β§Python Usage
from reasonkit import Reasoner, Profile, run_gigathink
# Quick usage with convenience functions
result = run_gigathink("What factors drive startup success?")
print(result.perspectives)
# Full control with Reasoner class
r = Reasoner()
result = r.think_with_profile(Profile.Balanced, "Should we pivot?")
print(f"Confidence: {result.confidence:.1%}")Β§ThinkTools (Core Reasoning Protocols)
ReasonKit provides five core ThinkTools, each implementing a specific reasoning strategy:
| Tool | Code | Purpose | Output |
|---|---|---|---|
| GigaThink | gt | Expansive creative thinking | 10+ diverse perspectives |
| LaserLogic | ll | Precision deductive reasoning | Validity assessment, fallacy detection |
| BedRock | br | First principles decomposition | Core axioms, rebuilt foundations |
| ProofGuard | pg | Multi-source verification | Triangulated evidence (3+ sources) |
| BrutalHonesty | bh | Adversarial self-critique | Flaws, weaknesses, counter-arguments |
Β§Reasoning Profiles
Profiles chain multiple ThinkTools together for comprehensive analysis:
| Profile | ThinkTools | Min Confidence | Use Case |
|---|---|---|---|
quick | GT, LL | 70% | Fast initial analysis |
balanced | GT, LL, BR, PG | 80% | Standard decision-making |
deep | All 5 | 85% | Complex problems |
paranoid | All 5 + validation | 95% | High-stakes decisions |
Β§Feature Flags
memory- Enable memory layer integration viareasonkit-memaesthetic- Enable UI/UX assessment capabilitiesvibe- Enable VIBE protocol validation systemcode-intelligence- Enable multi-language code analysisarf- Enable Autonomous Reasoning Frameworkminimax- Enable MiniMax M2 model integration
Β§Supported LLM Providers
ReasonKit supports 18+ LLM providers out of the box:
- Major Cloud: Anthropic, OpenAI, Google Gemini, Vertex AI, Azure OpenAI, AWS Bedrock
- Specialized: xAI (Grok), Groq, Mistral, DeepSeek, Cohere, Perplexity, Cerebras
- Inference: Together AI, Fireworks AI, Alibaba Qwen
- Aggregation: OpenRouter (300+ models), Cloudflare AI Gateway
Β§Architecture
+------------------+ +------------------+ +------------------+
| User Query | --> | Protocol Engine | --> | Auditable Output|
+------------------+ +------------------+ +------------------+
|
+-------------+-------------+
| | |
+----v----+ +-----v-----+ +----v----+
| LLM | | ThinkTool | | Profile |
| Client | | Modules | | System |
+---------+ +-----------+ +---------+Β§Modules
thinktool- Core ThinkTool protocols and execution engineengine- High-level async reasoning loop with streamingorchestration- Long-horizon task orchestration (100+ tool calls)error- Error types and result aliasestelemetry- Metrics and observability
Β§Optional Modules (Feature-Gated)
- [
bindings] - Python bindings via PyO3 (requirespython) - [
rag] - Full RAG engine with LLM integration (requiresmemory) - [
aesthetic] - UI/UX assessment system (requiresaesthetic) - [
vibe] - VIBE protocol validation (requiresvibe) - [
code_intelligence] - Multi-language code analysis (requirescode-intelligence)
Re-exportsΒ§
pub use error::Error;pub use error::Result;pub use orchestration::ComponentCoordinator;pub use orchestration::ErrorRecovery;pub use orchestration::LongHorizonConfig;pub use orchestration::LongHorizonOrchestrator;pub use orchestration::LongHorizonResult;pub use orchestration::PerformanceTracker;pub use orchestration::StateManager;pub use orchestration::TaskGraph;pub use orchestration::TaskNode;pub use orchestration::TaskPriority;pub use orchestration::TaskStatus;pub use engine::Decision;pub use engine::MemoryContext;pub use engine::Profile as ReasoningProfile;pub use engine::ReasoningConfig;pub use engine::ReasoningError;pub use engine::ReasoningEvent;pub use engine::ReasoningLoop;pub use engine::ReasoningLoopBuilder;pub use engine::ReasoningSession;pub use engine::ReasoningStep;pub use engine::StepKind;pub use engine::StreamHandle;pub use engine::ThinkToolResult;
ModulesΒ§
- constants
- Global constants and configuration defaults.
- engine
- High-performance async reasoning engine with streaming support.
- error
- Error types and result aliases for ReasonKit operations.
- evaluation
- Evaluation and benchmarking utilities.
- ingestion
- Document ingestion and processing pipeline. Document ingestion module for ReasonKit Core
- llm
- Provider-neutral LLM clients (e.g. Ollama
/api/chat). Provider-neutral LLM infrastructure. - m2
- MiniMax M2 model integration for 100+ tool calling.
- mcp
- MCP (Model Context Protocol) server implementations.
- memory_
interface - Memory interface trait for reasonkit-mem integration.
- orchestration
- Long-horizon task orchestration system.
- processing
- Document processing and transformation utilities. Processing module for ReasonKit Core
- telemetry
- Telemetry, metrics, and observability infrastructure.
- thinktool
- ThinkTool protocol engine - the core of ReasonKit.
- traits
- Core trait definitions for cross-crate integration.
- verification
- Verification and validation utilities. Verification Module - Protocol Delta Implementation
- web
- Web interface and HTTP API components. Web Search Integration for ReasonKit Web
- web_
interface - Web interface handlers and routes. Web Browser Interface Trait for ReasonKit Core
StructsΒ§
- Author
- Author information for document metadata.
- Chunk
- A chunk of text from a document.
- Document
- A document in the knowledge base.
- Document
Content - Document content container.
- Embedding
Ids - References to different embedding types for a chunk.
- Metadata
- Document metadata for indexing and retrieval.
- Processing
Status - Processing status for a document.
- Retrieval
Config Non- memory - Simple retrieval configuration (available without memory feature).
- Search
Result - Search result from a query.
- Source
- Source information for a document.
EnumsΒ§
- Content
Format - Content format enumeration.
- Document
Type - Document type categorization for the knowledge base.
- Match
Source - Source of a search match for hybrid retrieval.
- Processing
State - Processing state enumeration for documents.
- Source
Type - Source type enumeration for document provenance.
ConstantsΒ§
- VERSION
- Crate version string for runtime logging and API responses.