
Crate every_other_token

Expand description

§every-other-token

A real-time LLM token stream interceptor for token-level interaction research.

This crate sits between the caller and the model. It intercepts the token stream as it arrives over SSE, applies one of five transform strategies to tokens at configurable positions, scores model confidence at each position using the OpenAI logprob API, and routes the enriched events to a terminal renderer, a zero-dependency web UI, and an optional WebSocket collaboration room.
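The position-targeting idea can be sketched in a few lines. This is a minimal, self-contained illustration, not the crate's actual API: the `Positions` enum and `apply_at_positions` helper here are hypothetical stand-ins for the real transform pipeline.

```rust
// Illustrative sketch: apply a transform only to tokens at
// configurable positions, as the interceptor does per token event.
// These types are hypothetical, not the crate's real API.

/// Which token positions a transform applies to.
enum Positions {
    /// Every second token (indices 1, 3, 5, ...).
    EveryOther,
    /// Every n-th token (indices 0, n, 2n, ...).
    EveryNth(usize),
}

/// Apply `f` to the tokens selected by `pos`, passing the rest through.
fn apply_at_positions(
    tokens: &[&str],
    pos: &Positions,
    f: impl Fn(&str) -> String,
) -> Vec<String> {
    tokens
        .iter()
        .enumerate()
        .map(|(i, t)| {
            let hit = match pos {
                Positions::EveryOther => i % 2 == 1,
                Positions::EveryNth(n) => *n > 0 && i % n == 0,
            };
            if hit { f(t) } else { t.to_string() }
        })
        .collect()
}

fn main() {
    let out = apply_at_positions(
        &["a", "b", "c", "d"],
        &Positions::EveryOther,
        |t| t.to_uppercase(),
    );
    println!("{:?}", out); // ["a", "B", "c", "D"]
}
```

In the real pipeline the transform runs as each SSE chunk arrives rather than over a finished slice, but the position-selection logic is the same shape.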

§Feature flags

| Flag | Description |
|------|-------------|
| `sqlite-log` | Persist experiment runs to a local SQLite database via `store::ExperimentStore`. |
| `self-tune` | Enable the self-improvement telemetry bus and tuning controller. |
| `self-modify` | Enable snapshot-based parameter mutation (requires `self-tune`). |
| `intelligence` | Reserved namespace for future interpretability features. |
| `evolution` | Reserved namespace for future evolutionary optimisation. |
| `helix-bridge` | HTTP bridge that polls `/api/stats` and pushes config patches. |
| `redis-backing` | Write-through Redis persistence for agent memory and snapshots. |
| `wasm` | WASM target bindings via wasm-bindgen. |
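Features are enabled in the usual Cargo way. A sketch of a dependent crate's `Cargo.toml`, assuming the package name matches the crate name:

```toml
# Hypothetical consumer manifest; adjust the version to what you use.
[dependencies]
every-other-token = { version = "0.1", features = ["sqlite-log", "self-tune"] }
```

Note that `self-modify` requires `self-tune`, so enable both together if you want snapshot-based mutation.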

§Quickstart

```sh
export OPENAI_API_KEY="sk-..."
cargo run -- "What is consciousness?" --visual
cargo run -- "What is consciousness?" --web
cargo run -- "Explain recursion" --research --runs 20 --output results.json
```

Modules§

cli
Command-line argument definitions and helper functions.
collab
Multiplayer collaboration: room state, participant management, WebSocket handling.
config
Optional configuration file support (#16).
error
Crate-level error type for Every-Other-Token.
heatmap
Per-position token confidence heatmap exporter.
providers
Provider plugin system and SSE wire types.
render
Terminal rendering helpers extracted from the core TokenInterceptor.
replay
research
Headless research mode and batch experiment execution.
store
SQLite-backed persistence for experiment sessions and per-run metrics.
transforms
Token transform pipeline.
web
Embedded web UI server and HTTP request handling.

Structs§

ResearchSession
Aggregated statistics from one or more headless inference runs.
TokenAlternative
One alternative token and its probability (for top-K logprob display).
TokenEvent
A single processed token emitted by the streaming pipeline.
TokenInterceptor
The core streaming engine that sits between the caller and the LLM.

Functions§

run_research_headless
Run `runs` headless inference calls, collect all `TokenEvent`s, and return an aggregated `ResearchSession`. Call sites must provide a constructed interceptor with no `web_tx` set; events are returned via the mpsc channel.
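The channel-based aggregation shape can be sketched as follows. Everything here is illustrative: `TokenEvent` is reduced to a single field, and `run_once` stands in for one headless inference call; neither matches the crate's real signatures.

```rust
// Hypothetical sketch of aggregating per-run events over an mpsc
// channel, as run_research_headless does. Not the crate's real API.
use std::sync::mpsc;

/// Stripped-down stand-in for the crate's TokenEvent.
struct TokenEvent {
    logprob: f64,
}

/// Stand-in for one headless inference run: emits events, then
/// drops the sender so the receiver's iterator terminates.
fn run_once(tx: mpsc::Sender<TokenEvent>) {
    for lp in [-0.1, -0.5] {
        tx.send(TokenEvent { logprob: lp }).unwrap();
    }
}

/// Collect events from `runs` runs and compute a mean logprob,
/// the kind of aggregate a ResearchSession would hold.
fn mean_logprob(runs: usize) -> f64 {
    let mut all = Vec::new();
    for _ in 0..runs {
        let (tx, rx) = mpsc::channel();
        run_once(tx);
        all.extend(rx.iter());
    }
    all.iter().map(|e| e.logprob).sum::<f64>() / all.len() as f64
}

fn main() {
    println!("mean logprob over 2 runs: {}", mean_logprob(2));
}
```

The key detail mirrored from the docs is that events flow back through the channel rather than through a web sink, so the aggregator simply drains the receiver after each run.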