§every-other-token
A real-time LLM token stream interceptor for token-level interaction research.
This crate sits between the caller and the model and intercepts the token stream as it arrives over SSE. For each token it applies one of five transform strategies at configurable positions, scores model confidence at that position using the OpenAI logprobs API, and routes the enriched events to a terminal renderer, a zero-dependency web UI, and an optional WebSocket collaboration room.
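The SSE interception step comes down to recognising `data:` lines in the event stream and passing their payloads on to the transform pipeline. A minimal, dependency-free sketch of that framing (the function name and payloads are illustrative, not this crate's API):

```rust
/// Extract the payload of an SSE `data:` line, if any.
/// Returns `None` for comments, blank lines, and other fields.
fn sse_data(line: &str) -> Option<&str> {
    let rest = line.strip_prefix("data:")?;
    // Per the SSE spec, a single space after the colon is optional and stripped.
    Some(rest.strip_prefix(' ').unwrap_or(rest))
}

fn main() {
    // A tiny slice of a hypothetical streaming response.
    let stream = "event: token\ndata: {\"t\":\"Hello\"}\n\ndata: [DONE]\n";
    let payloads: Vec<&str> = stream.lines().filter_map(sse_data).collect();
    assert_eq!(payloads, vec!["{\"t\":\"Hello\"}", "[DONE]"]);
    println!("{payloads:?}");
}
```

In a real interceptor the `[DONE]` sentinel ends the stream and each JSON payload is deserialised before the transform and scoring stages run.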
§Feature flags
| Flag | Description |
|---|---|
| `sqlite-log` | Persist experiment runs to a local SQLite database via `store::ExperimentStore`. |
| `self-tune` | Enable the self-improvement telemetry bus and tuning controller. |
| `self-modify` | Enable snapshot-based parameter mutation (requires `self-tune`). |
| `intelligence` | Reserved namespace for future interpretability features. |
| `evolution` | Reserved namespace for future evolutionary optimisation. |
| `helix-bridge` | HTTP bridge that polls `/api/stats` and pushes config patches. |
| `redis-backing` | Write-through Redis persistence for agent memory and snapshots. |
| `wasm` | WASM target bindings via `wasm-bindgen`. |
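As a sketch, a downstream crate might enable a subset of these flags in its `Cargo.toml` (the version requirement is a placeholder; note that `self-modify` requires `self-tune`):

```toml
[dependencies]
every-other-token = { version = "*", features = ["sqlite-log", "self-tune", "self-modify"] }
```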
§Quickstart
```shell
export OPENAI_API_KEY="sk-..."
cargo run -- "What is consciousness?" --visual
cargo run -- "What is consciousness?" --web
cargo run -- "Explain recursion" --research --runs 20 --output results.json
```
Modules§
- cli
- Command-line argument definitions and helper functions.
- collab
- Multiplayer collaboration: room state, participant management, WebSocket handling.
- config
- Optional configuration file support (#16).
- error
- Crate-level error type for Every-Other-Token.
- heatmap
- Per-position token confidence heatmap exporter.
- providers
- Provider plugin system and SSE wire types.
- render
- Terminal rendering helpers extracted from the core `TokenInterceptor`.
- replay
- research
- Headless research mode and batch experiment execution.
- store
- SQLite-backed persistence for experiment sessions and per-run metrics.
- transforms
- Token transform pipeline.
- web
- Embedded web UI server and HTTP request handling.
Structs§
- ResearchSession
- Aggregated statistics from one or more headless inference runs.
- TokenAlternative
- One alternative token and its probability (for top-K logprob display).
- TokenEvent
- A single processed token emitted by the streaming pipeline.
- TokenInterceptor
- The core streaming engine that sits between the caller and the LLM.
Functions§
- run_research_headless
- Run `runs` headless inference calls, collect all `TokenEvent`s, and return an aggregated `ResearchSession`. Call sites must provide a constructed interceptor (no `web_tx` set; events are returned via the mpsc channel).