Crate reasonkit

§ReasonKit

§The AI Reasoning Engine

"Designed, not Dreamed." - From Prompt to Cognitive Engineering.

Auditable Reasoning for Production AI | Rust-Native | Turn Prompts into Protocols


Website | Pro | Docs | Resources | Enterprise | About | GitHub


§🚀 Quick Install

curl -fsSL https://get.reasonkit.sh | bash

Universal Installer • All Platforms • All Shells • 30 Seconds

📖 Installation Guide • 📦 Crates.io • 📚 Docs.rs


§The Problem We Solve

Most AI is a slot machine. Insert prompt → pull lever → unpredictable output; you hope for coherence and are at the mercy of chance.

ReasonKit is a factory. Input data → apply protocol → deeper logic, an auditable result, and a known probability.

Wrong Decisions vs Structured Reasoning: Financial Loss & Missed Opportunities vs Errors Caught & Costly Mistakes Prevented

The Cost of Wrong Decisions: Without structured reasoning, AI decisions lead to financial loss and missed opportunities. Structured protocols catch errors early and prevent costly mistakes before they compound.

LLMs are fundamentally probabilistic. Same prompt → different outputs. This creates critical failures:

| Failure | Impact | Our Solution |
|---|---|---|
| Inconsistency | Unreliable for production | Deterministic protocol execution |
| Hallucination | Dangerous falsehoods | Multi-source triangulation + adversarial critique |
| Opacity | No audit trail | Complete execution tracing with confidence scores |

We don't eliminate probability (impossible). We constrain it through structured protocols that force probabilistic outputs into deterministic execution paths.


§Quick Start

Already installed? Jump to Choose Your Workflow or How to Use.

Need installation help? See the Installation Guide or Installation Section below.

§🤖 Choose Your Workflow

§🤖 Claude Code (Opus 4.5)

Agentic CLI. No API key required.

claude mcp add reasonkit -- rk serve-mcp
claude "Use ReasonKit to analyze: Should we migrate to microservices?"

Learn more: Claude Code Integration

§🌐 ChatGPT (Browser)

Manual MCP Bridge. Injects the reasoning protocol directly into the chat.

# Generate strict protocol
rk protocol "Should we migrate to microservices?" | pbcopy

# → Paste into ChatGPT: "Execute this protocol..."

Learn more: ChatGPT Integration

§⚡ Gemini 3.0 Pro (API)

Native CLI integration with Google's latest preview.

export GEMINI_API_KEY=AIza...
rk think --model gemini-3.0-pro-preview "Should we migrate to microservices?"

Learn more: Google Gemini Integration • All Provider Integrations

Note: rk is the shorthand alias for the ReasonKit CLI binary.

30 seconds to structured reasoning. See How to Use for more examples.


§ThinkTools: The 5-Step Reasoning Chain

Each ThinkTool acts as a variance reduction filter, transforming probabilistic outputs into increasingly deterministic reasoning paths.

📖 Full Documentation: ThinkTools Guide • API Reference

Tree-of-Thoughts vs Chain-of-Thought: 74% vs 4% Success Rate (NeurIPS 2023)

ReasonKit Protocol Chain - Turn Prompts into Protocols

ReasonKit Core ThinkTool Chain - Variance Reduction

ReasonKit Variance Reduction Chart

| ThinkTool | Operation | What It Does |
|---|---|---|
| GigaThink | Diverge() | Generate 10+ perspectives, explore widely |
| LaserLogic | Converge() | Detect fallacies, validate logic, find gaps |
| BedRock | Ground() | First principles decomposition, identify axioms |
| ProofGuard | Verify() | Multi-source triangulation, require 3+ sources |
| BrutalHonesty | Critique() | Adversarial red team, attack your own reasoning |
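
For orientation, here is a minimal Rust sketch of invoking two of these ThinkTools in sequence with the `ProtocolExecutor` API shown in the Rust Usage example further down this page. The protocol identifiers (`"gigathink"`, `"laserlogic"`) and the idea of re-running the same query through the converging step are illustrative assumptions, not a documented chaining contract:

```rust
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let executor = ProtocolExecutor::new()?;
    let query = "Should we migrate to microservices?";

    // Diverge() step: generate perspectives with GigaThink.
    let diverged = executor
        .execute("gigathink", ProtocolInput::query(query))
        .await?;
    println!("GigaThink confidence: {:.2}", diverged.confidence);

    // Converge() step: run the same question through LaserLogic.
    // (Protocol identifier assumed from the table above.)
    let converged = executor
        .execute("laserlogic", ProtocolInput::query(query))
        .await?;
    println!("LaserLogic confidence: {:.2}", converged.confidence);
    Ok(())
}
```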

§Variance Reduction: The Chain Effect

Result: Raw LLM variance ~85% → Protocol-constrained variance ~28%


§Reasoning Profiles

Pre-configured chains for different rigor levels. See Reasoning Profiles Guide for detailed documentation.

ReasonKit Core Reasoning Profiles Scale

# Fast analysis (70% confidence target)
rk think --profile quick "Is this email phishing?"

# Standard analysis (80% confidence target)
rk think --profile balanced "Should we use microservices?"

# Thorough analysis (85% confidence target)
rk think --profile deep "Design A/B test for feature X"

# Maximum rigor (95% confidence target)
rk think --profile paranoid "Validate cryptographic implementation"

| Profile | Chain | Confidence | Use Case |
|---|---|---|---|
| --quick | GigaThink → LaserLogic | 70% | Fast sanity checks |
| --balanced | GigaThink → LaserLogic → BedRock → ProofGuard | 80% | Standard decisions |
| --deep | All 5 ThinkTools + meta-cognition | 85% | Complex problems |
| --paranoid | All 5 ThinkTools + validation pass | 95% | Critical decisions |

§See It In Action

ReasonKit Terminal Experience

$ rk think --profile balanced "Should we migrate to microservices?"

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ThinkTool Chain: GigaThink → LaserLogic → BedRock → ProofGuard
Variance:        85% → 72% → 58% → 42% → 28%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

[GigaThink] 10 PERSPECTIVES GENERATED                         Variance: 85%
  1. OPERATIONAL: Maintenance overhead +40% initially
  2. TEAM TOPOLOGY: Conway's Law - do we have the teams?
  3. COST ANALYSIS: Infrastructure scales non-linearly
  ...
  → Variance after exploration: 72% (-13%)

[LaserLogic] HIDDEN ASSUMPTIONS DETECTED                      Variance: 72%
  ⚠ Assuming network latency is negligible
  ⚠ Assuming team has distributed tracing expertise
  ⚠ Logical gap: No evidence microservices solve stated problem
  → Variance after validation: 58% (-14%)

[BedRock] FIRST PRINCIPLES DECOMPOSITION                      Variance: 58%
  • Axiom: Monoliths are simpler to reason about (empirical)
  • Axiom: Distributed systems introduce partitions (CAP theorem)
  • Gap: Cannot prove maintainability improvement without data
  → Variance after grounding: 42% (-16%)

[ProofGuard] TRIANGULATION RESULT                             Variance: 42%
  • 3/5 sources: Microservices increase complexity initially
  • 2/5 sources: Some teams report success
  • Confidence: 0.72 (MEDIUM) - Mixed evidence
  → Variance after verification: 28% (-14%)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
VERDICT: conditional_yes | Confidence: 87% | Duration: 2.3s
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

What This Shows:

  • Transparency: See exactly where confidence comes from
  • Auditability: Every step logged and verifiable
  • Deterministic Path: Same protocol → same execution flow
  • Variance Reduction: Quantified uncertainty reduction at each stage

§Architecture

The ReasonKit architecture uses a Protocol Engine wrapper to enforce deterministic execution over probabilistic LLM outputs.

📖 Full Documentation: Architecture Guide • API Reference

ReasonKit Core Architecture Exploded View

ReasonKit ThinkTool Chain Architecture

Three-Layer Architecture:

  1. Probabilistic LLM (Unavoidable)

    • LLMs generate tokens probabilistically
    • Same prompt → different outputs
    • We cannot eliminate this
  2. Deterministic Protocol Engine (Our Innovation)

    • Wraps the probabilistic LLM layer
    • Enforces strict execution paths
    • Validates outputs against schemas
    • State machine ensures consistent flow
  3. ThinkTool Chain (Variance Reduction)

    • Each ThinkTool reduces variance
    • Multi-stage validation catches errors
    • Confidence scoring quantifies uncertainty

Key Components:

  • Protocol Engine: Orchestrates execution with strict state management
  • ThinkTools: Modular cognitive operations with defined contracts
  • LLM Integration: Unified client (Claude, GPT, Gemini, 18+ providers)
  • Telemetry: Local SQLite for execution traces + variance metrics
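
To make the wrapper idea concrete, here is a purely conceptual Rust sketch (not ReasonKit's actual types): a deterministic step keeps re-invoking a probabilistic generator until its output satisfies a fixed acceptance schema or an attempt budget is exhausted. This is the sense in which the execution path stays deterministic even though individual outputs vary.

```rust
// Conceptual sketch of the protocol-engine idea, not ReasonKit's real API:
// a deterministic wrapper re-invokes a probabilistic generator until the
// output validates against a fixed acceptance rule, or fails loudly.
fn run_step<F>(mut generate: F, max_attempts: u32) -> Result<Vec<String>, String>
where
    F: FnMut() -> String, // stands in for a probabilistic LLM call
{
    for attempt in 1..=max_attempts {
        let raw = generate();
        // "Schema": require at least three numbered perspective lines.
        let perspectives: Vec<String> = raw
            .lines()
            .filter(|l| l.trim_start().starts_with(|c: char| c.is_ascii_digit()))
            .map(|l| l.trim().to_string())
            .collect();
        if perspectives.len() >= 3 {
            return Ok(perspectives); // deterministic acceptance criterion met
        }
        eprintln!("attempt {attempt}: output failed validation, retrying");
    }
    Err("step failed validation after all attempts".into())
}

fn main() {
    // Stub "LLM": deterministic here, but in reality each call differs.
    let fake_llm = || "1. cost\n2. team topology\n3. operational risk".to_string();
    match run_step(fake_llm, 3) {
        Ok(p) => println!("accepted {} perspectives", p.len()),
        Err(e) => eprintln!("{e}"),
    }
}
```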

ReasonKit AI Integration Options: Claude, Gemini, OpenAI, Cursor, VS Code, Any LLM

Architecture (Mermaid Diagram)
flowchart LR
    subgraph CLI["ReasonKit CLI (rk)"]
      A[User Command<br/>rk think --profile balanced]
    end

    subgraph PROTOCOL["Deterministic Protocol Engine"]
      B1[State Machine<br/>Execution Plan]
      B2[ThinkTool Orchestrator]
      B3[(SQLite Trace DB)]
    end

    subgraph LLM["LLM Layer (Probabilistic)"]
      C1[Provider Router]
      C2[Claude / GPT / Gemini / ...]
    end

    subgraph TOOLS["ThinkTools Β· Variance Reduction"]
      G["GigaThink<br/>Diverge()"]
      LZ["LaserLogic<br/>Converge()"]
      BR["BedRock<br/>Ground()"]
      PG["ProofGuard<br/>Verify()"]
      BH["BrutalHonesty<br/>Critique()"]
    end

    A --> B1 --> B2 --> G --> LZ --> BR --> PG --> BH --> B3
    B2 --> C1 --> C2 --> B2

    classDef core fill:#030508,stroke:#06b6d4,stroke-width:1px,color:#f9fafb;
    classDef tool fill:#0a0d14,stroke:#10b981,stroke-width:1px,color:#f9fafb;
    classDef llm fill:#111827,stroke:#a855f7,stroke-width:1px,color:#f9fafb;

    class CLI,PROTOCOL core;
    class G,LZ,BR,PG,BH tool;
    class C1,C2 llm;

§Built for Production

ReasonKit is written in Rust because reasoning infrastructure demands reliability.

| Capability | What It Means for You |
|---|---|
| Predictable Latency | <5ms orchestration overhead, no GC pauses |
| Memory Safety | Zero crashes from null pointers or buffer overflows |
| Single Binary | Deploy anywhere, no Python environment required |
| Fearless Concurrency | Run 100+ reasoning chains in parallel safely |
| Type Safety | Errors caught at compile time, not runtime |

Benchmarked Performance (view full report • online version):

| Operation | Time | Target |
|---|---|---|
| Protocol orchestration | 4.4ms | <10ms |
| RRF Fusion (100 elements) | 33μs | <5ms |
| Document chunking (10 KB) | 27μs | <5ms |
| RAPTOR tree traversal (1000 nodes) | 33μs | <5ms |

Why This Matters:

Your AI reasoning shouldn't crash in production. It shouldn't pause for garbage collection during critical decisions. It shouldn't require complex environment management to deploy.

ReasonKit's Rust foundation ensures deterministic, auditable execution every time: the same engineering choice trusted by Linux, Cloudflare, Discord, and AWS for their most critical infrastructure.


§Memory Infrastructure (Optional)

Memory modules (storage, embedding, retrieval, RAPTOR, indexing) are available in the standalone reasonkit-mem crate.

📖 Documentation: Memory Layer Guide • Crates.io • Docs.rs

Enable the memory feature to use these modules:

[dependencies]
reasonkit-core = { version = "0.1", features = ["memory"] }

Features:

  • Qdrant vector database (embedded mode)
  • Hybrid search (dense + sparse fusion)
  • RAPTOR hierarchical retrieval
  • Local embeddings (BGE-M3 ONNX)
  • BM25 full-text search (Tantivy)

§Installation

Primary Method (Universal - All Platforms & Shells):

curl -fsSL https://get.reasonkit.sh | bash

📖 Full Installation Guide: docs.reasonkit.sh/getting-started/installation

Platform Support:

  • ✅ Linux (all distributions)
  • ✅ macOS (Intel & Apple Silicon)
  • ✅ Windows (WSL & Native PowerShell)
  • ✅ FreeBSD (experimental)

Shell Support:

  • ✅ Bash (auto-detected, PATH configured)
  • ✅ Zsh (auto-detected, PATH configured)
  • ✅ Fish (auto-detected, PATH configured)
  • ✅ Nu (Nushell) (auto-detected, PATH configured)
  • ✅ PowerShell (cross-platform, PATH configured)
  • ✅ Elvish (auto-detected, PATH configured)
  • ✅ tcsh/csh/ksh (basic support)

Features:

  • 🎨 Beautiful terminal UI with progress visualization
  • ⚡ Fast installation (~30 seconds)
  • 🔒 Secure (HTTPS-only, checksum verification)
  • 🧠 Smart shell detection and PATH configuration
  • 📊 Real-time build progress with ETA
  • 🔄 Automatic Rust installation if needed

📖 Learn more: Installation Guide • Installation Audit Report

Alternative Methods
# Cargo (Rust) - Recommended for Developers
cargo install reasonkit-core

# From Source (Latest Features)
git clone https://github.com/reasonkit/reasonkit-core
cd reasonkit-core && cargo build --release

📦 Package Links: Crates.io • Docs.rs • GitHub Releases

Windows (Native PowerShell):

irm https://get.reasonkit.sh/windows | iex

Python bindings available via PyO3 (build from source with --features python).


§How to Use

Command Structure: rk <command> [options] [arguments]

📖 Full CLI Reference: CLI Documentation • API Reference

Standard Operations:

# Balanced analysis (5-step protocol)
rk think --profile balanced "Should we migrate our monolith to microservices?"

# Quick sanity check (2-step protocol)
rk think --profile quick "Is this email a phishing attempt?"

# Maximum rigor (paranoid mode)
rk think --profile paranoid "Validate this cryptographic implementation"

# Scientific method (research & experiments)
rk think --profile scientific "Design A/B test for feature X"

With Memory (RAG):

# Ingest documents
rk ingest document.pdf

# Query with RAG
rk query "What are the key findings in the research papers?"

# View execution traces
rk trace list
rk trace export <id>

📖 Learn more: RAG Guide • Memory Layer Documentation


§Contributing: The 5 Gates of Quality

We demand excellence. All contributions must pass The 5 Gates of Quality:

📖 Contributing Guide: CONTRIBUTING.md • Quality Gates Documentation

ReasonKit Quality Gates Shield

# Clone & Setup
git clone https://github.com/reasonkit/reasonkit-core
cd reasonkit-core

# The 5 Gates (MANDATORY)
cargo build --release        # Gate 1: Compilation (Exit 0)
cargo clippy -- -D warnings  # Gate 2: Linting (0 errors)
cargo fmt --check            # Gate 3: Formatting (Pass)
cargo test --all-features    # Gate 4: Testing (100% pass)
cargo bench                  # Gate 5: Performance (<5% regression)

Quality Score Target: 8.0/10 minimum for release.

📖 Complete Guidelines: CONTRIBUTING.md • Quality Metrics


§🏷️ Community Badge

If you use ReasonKit in your project, add our badge:

[![Reasoned By ReasonKit](https://raw.githubusercontent.com/reasonkit/reasonkit-core/main/brand/badges/reasoned-by.svg)](https://reasonkit.sh)

📖 Badge Guide: Community Badges • All Variants


§🎨 Branding & Design

📖 Online Resources: Brand Guidelines • Design System


§Design Philosophy: Honest Engineering

We don't claim to eliminate probability. That's impossible. LLMs are probabilistic by design.

We do claim to constrain it. Through structured protocols, multi-stage validation, and deterministic execution paths, we transform probabilistic token generation into auditable reasoning chains.

| What We Battle | How We Battle It | What We're Honest About |
|---|---|---|
| Inconsistency | Deterministic protocol execution | LLM outputs still vary, but execution paths don't |
| Hallucination | Multi-source triangulation, adversarial critique | Can't eliminate, but can detect and flag |
| Opacity | Full execution tracing, confidence scoring | Transparency doesn't guarantee correctness |
| Uncertainty | Explicit confidence metrics, variance reduction | We quantify uncertainty, not eliminate it |

§Version & Maturity

| Component | Status | Notes |
|---|---|---|
| ThinkTools Chain | ✅ Stable | Core reasoning protocols production-ready |
| MCP Server | ✅ Stable | Model Context Protocol integration |
| CLI | 🔶 Scaffolded | mcp, serve-mcp, completions work; others planned |
| Memory Features | ✅ Stable | Via reasonkit-mem crate |
| Python Bindings | 🔶 Beta | Build from source with --features python |

Current Version: v0.1.5 | CHANGELOG | Releases • 📦 Crates.io • 📚 Docs.rs

§Verify Installation

# Check version
rk --version

# Verify MCP server starts
rk serve-mcp --help

# Run a quick test (requires LLM API key)
OPENAI_API_KEY=your-key rk mcp

📖 Troubleshooting: Installation Issues • Common Problems


§License

Apache 2.0 - See LICENSE

Open Source Core: All core reasoning protocols and ThinkTools are open source under Apache 2.0.

📖 License Information: LICENSE • License Strategy


ReasonKit Ecosystem Connection

ReasonKit - Turn Prompts into Protocols

Designed, Not Dreamed

Website | Pro | Docs | Resources | Enterprise | About | GitHub

📦 Package Links: Crates.io • Docs.rs • PyPI

# ReasonKit Core

AI Thinking Enhancement System - Turn Prompts into Protocols

ReasonKit Core is a pure reasoning engine that improves AI thinking patterns through structured reasoning protocols called ThinkTools. It transforms ad-hoc LLM prompting into auditable, reproducible reasoning chains.

§Philosophy

"Designed, Not Dreamed" - Structure beats raw intelligence. By imposing systematic reasoning protocols, ReasonKit helps AI models produce more reliable, verifiable, and explainable outputs.

§Quick Start

§Rust Usage

use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create executor (auto-detects LLM from environment)
    let executor = ProtocolExecutor::new()?;

    // Run GigaThink for multi-perspective analysis
    let result = executor.execute(
        "gigathink",
        ProtocolInput::query("Should we use microservices?")
    ).await?;

    println!("Confidence: {:.2}", result.confidence);
    for perspective in result.perspectives() {
        println!("- {}", perspective);
    }
    Ok(())
}

§Python Usage

from reasonkit import Reasoner, Profile, run_gigathink

# Quick usage with convenience functions
result = run_gigathink("What factors drive startup success?")
print(result.perspectives)

# Full control with Reasoner class
r = Reasoner()
result = r.think_with_profile(Profile.Balanced, "Should we pivot?")
print(f"Confidence: {result.confidence:.1%}")

§ThinkTools (Core Reasoning Protocols)

ReasonKit provides five core ThinkTools, each implementing a specific reasoning strategy:

| Tool | Code | Purpose | Output |
|---|---|---|---|
| GigaThink | gt | Expansive creative thinking | 10+ diverse perspectives |
| LaserLogic | ll | Precision deductive reasoning | Validity assessment, fallacy detection |
| BedRock | br | First principles decomposition | Core axioms, rebuilt foundations |
| ProofGuard | pg | Multi-source verification | Triangulated evidence (3+ sources) |
| BrutalHonesty | bh | Adversarial self-critique | Flaws, weaknesses, counter-arguments |

§Reasoning Profiles

Profiles chain multiple ThinkTools together for comprehensive analysis:

| Profile | ThinkTools | Min Confidence | Use Case |
|---|---|---|---|
| quick | GT, LL | 70% | Fast initial analysis |
| balanced | GT, LL, BR, PG | 80% | Standard decision-making |
| deep | All 5 | 85% | Complex problems |
| paranoid | All 5 + validation | 95% | High-stakes decisions |
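
For Rust callers, the crate re-exports `engine::Profile` as `ReasoningProfile`. The sketch below mirrors the Python `think_with_profile` convenience shown above; the `execute_profile` method name and the `Balanced` variant are assumptions for illustration only, so check the API docs for the actual entry point:

```rust
use reasonkit::thinktool::{ProtocolExecutor, ProtocolInput};
use reasonkit::ReasoningProfile; // re-exported from the engine module

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let executor = ProtocolExecutor::new()?;

    // Hypothetical: run the whole chain at a chosen rigor level.
    // `execute_profile` and `ReasoningProfile::Balanced` are assumed names,
    // mirroring the Python `think_with_profile(Profile.Balanced, ...)` call.
    let result = executor
        .execute_profile(
            ReasoningProfile::Balanced,
            ProtocolInput::query("Should we pivot?"),
        )
        .await?;

    println!("Confidence: {:.2}", result.confidence);
    Ok(())
}
```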

§Feature Flags

  • memory - Enable memory layer integration via reasonkit-mem
  • aesthetic - Enable UI/UX assessment capabilities
  • vibe - Enable VIBE protocol validation system
  • code-intelligence - Enable multi-language code analysis
  • arf - Enable Autonomous Reasoning Framework
  • minimax - Enable MiniMax M2 model integration

§Supported LLM Providers

ReasonKit supports 18+ LLM providers out of the box:

  • Major Cloud: Anthropic, OpenAI, Google Gemini, Vertex AI, Azure OpenAI, AWS Bedrock
  • Specialized: xAI (Grok), Groq, Mistral, DeepSeek, Cohere, Perplexity, Cerebras
  • Inference: Together AI, Fireworks AI, Alibaba Qwen
  • Aggregation: OpenRouter (300+ models), Cloudflare AI Gateway

§Architecture

+------------------+     +------------------+     +------------------+
|   User Query     | --> | Protocol Engine  | --> |  Auditable Output|
+------------------+     +------------------+     +------------------+
                                 |
                   +-------------+-------------+
                   |             |             |
              +----v----+  +-----v-----+  +----v----+
              | LLM     |  | ThinkTool |  | Profile |
              | Client  |  | Modules   |  | System  |
              +---------+  +-----------+  +---------+

§Modules

  • thinktool - Core ThinkTool protocols and execution engine
  • engine - High-level async reasoning loop with streaming
  • orchestration - Long-horizon task orchestration (100+ tool calls)
  • error - Error types and result aliases
  • telemetry - Metrics and observability
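
As a rough illustration of how the `engine` module's streaming loop might be driven: every method name in the sketch below is a hypothetical placeholder rather than the documented API; only the re-exported type names (`ReasoningLoopBuilder`, `ReasoningConfig`) are taken from this page.

```rust
use reasonkit::{ReasoningConfig, ReasoningLoopBuilder};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Hypothetical builder usage; method names are illustrative only.
    let mut session = ReasoningLoopBuilder::new()
        .config(ReasoningConfig::default())
        .build()?;

    // Assumed streaming interface: consume progress events as each step finishes.
    let mut stream = session.run("Should we migrate to microservices?").await?;
    while let Some(event) = stream.next_event().await {
        // Assumed: events are Debug-printable progress updates from the loop.
        println!("event: {event:?}");
    }
    Ok(())
}
```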

§Optional Modules (Feature-Gated)

  • [bindings] - Python bindings via PyO3 (requires python)
  • [rag] - Full RAG engine with LLM integration (requires memory)
  • [aesthetic] - UI/UX assessment system (requires aesthetic)
  • [vibe] - VIBE protocol validation (requires vibe)
  • [code_intelligence] - Multi-language code analysis (requires code-intelligence)

Re-exports§

pub use error::Error;
pub use error::Result;
pub use orchestration::ComponentCoordinator;
pub use orchestration::ErrorRecovery;
pub use orchestration::LongHorizonConfig;
pub use orchestration::LongHorizonOrchestrator;
pub use orchestration::LongHorizonResult;
pub use orchestration::PerformanceTracker;
pub use orchestration::StateManager;
pub use orchestration::TaskGraph;
pub use orchestration::TaskNode;
pub use orchestration::TaskPriority;
pub use orchestration::TaskStatus;
pub use engine::Decision;
pub use engine::MemoryContext;
pub use engine::Profile as ReasoningProfile;
pub use engine::ReasoningConfig;
pub use engine::ReasoningError;
pub use engine::ReasoningEvent;
pub use engine::ReasoningLoop;
pub use engine::ReasoningLoopBuilder;
pub use engine::ReasoningSession;
pub use engine::ReasoningStep;
pub use engine::StepKind;
pub use engine::StreamHandle;
pub use engine::ThinkToolResult;

Modules§

constants
Global constants and configuration defaults.
engine
High-performance async reasoning engine with streaming support.
error
Error types and result aliases for ReasonKit operations.
evaluation
Evaluation and benchmarking utilities.
ingestion
Document ingestion and processing pipeline.
llm
Provider-neutral LLM clients and infrastructure (e.g. Ollama /api/chat).
m2
MiniMax M2 model integration for 100+ tool calling.
mcp
MCP (Model Context Protocol) server implementations.
memory_interface
Memory interface trait for reasonkit-mem integration.
orchestration
Long-horizon task orchestration system.
processing
Document processing and transformation utilities.
telemetry
Telemetry, metrics, and observability infrastructure.
thinktool
ThinkTool protocol engine - the core of ReasonKit.
traits
Core trait definitions for cross-crate integration.
verification
Verification and validation utilities (Protocol Delta implementation).
web
Web interface, HTTP API components, and web search integration.
web_interface
Web interface handlers, routes, and the web browser interface trait.

Structs§

Author
Author information for document metadata.
Chunk
A chunk of text from a document.
Document
A document in the knowledge base.
DocumentContent
Document content container.
EmbeddingIds
References to different embedding types for a chunk.
Metadata
Document metadata for indexing and retrieval.
ProcessingStatus
Processing status for a document.
RetrievalConfig
Simple retrieval configuration (available without memory feature).
SearchResult
Search result from a query.
Source
Source information for a document.

Enums§

ContentFormat
Content format enumeration.
DocumentType
Document type categorization for the knowledge base.
MatchSource
Source of a search match for hybrid retrieval.
ProcessingState
Processing state enumeration for documents.
SourceType
Source type enumeration for document provenance.

Constants§

VERSION
Crate version string for runtime logging and API responses.