# thoughtchain

`thoughtchain` is a standalone Rust crate for durable agent memory.

It stores semantically typed thoughts in an append-only, hash-chained memory log through a swappable storage-adapter layer. The current default backend is JSONL, but the chain model is no longer tied to that format. Agents can:
- persist important insights, decisions, constraints, and checkpoints
- record retrospectives and lessons learned after hard failures or non-obvious fixes
- relate new thoughts to earlier thoughts with typed graph edges
- query memory by type, role, agent, tags, concepts, text, and importance
- reconstruct context for agent resumption
- export a Markdown memory view that can back MEMORY.md, MCP, REST, or CLI flows
The crate is intentionally independent from cloudllm so it can be embedded in
other agent systems without creating circular dependencies.
## What Is In This Folder

`thoughtchain/` contains:

- the standalone `thoughtchain` library crate
- an optional `server` feature for HTTP MCP and REST servers
- the `thoughtchaind` daemon binary
- dedicated tests under `thoughtchain/tests`
## Build
From inside `thoughtchain/`:
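A plain build is the usual Cargo invocation:

```sh
cargo build
```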
Build with server support:
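Assuming the Cargo feature is named `server`, as described above:

```sh
cargo build --features server
```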
## Test
Run the crate tests:
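```sh
cargo test
```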
Run tests including the server feature:
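Assuming the feature flag is named `server`:

```sh
cargo test --features server
```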
Run rustdoc tests:
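```sh
cargo test --doc
```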
## Generate Docs
Build local Rust documentation:
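```sh
cargo doc --no-deps --open
```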
Include the server API docs:
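Assuming the server API lives behind the `server` feature:

```sh
cargo doc --no-deps --features server --open
```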
## Run The Daemon

The standalone daemon binary is `thoughtchaind`.

Run it with the server feature enabled:
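A likely invocation, assuming the binary target is named `thoughtchaind` and gated behind the `server` feature:

```sh
cargo run --features server --bin thoughtchaind
```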
When it starts, it serves both:
- an MCP server
- a REST server
It prints the active chain directory, default chain key, and bound MCP/REST addresses on startup.
## Daemon Configuration

`thoughtchaind` is configured with environment variables:

- `THOUGHTCHAIN_DIR`: directory where ThoughtChain storage adapters store chain files
- `THOUGHTCHAIN_DEFAULT_KEY`: default `chain_key` used when requests omit one. Default: `borganism-brain`
- `THOUGHTCHAIN_STORAGE_ADAPTER`: storage backend for newly opened chains. Supported values: `jsonl`, `binary`. Default: `jsonl`
- `THOUGHTCHAIN_BIND_HOST`: bind host for both HTTP servers. Default: `127.0.0.1`
- `THOUGHTCHAIN_MCP_PORT`: MCP server port. Default: `9471`
- `THOUGHTCHAIN_REST_PORT`: REST server port. Default: `9472`
Example:

```sh
THOUGHTCHAIN_DIR=/tmp/thoughtchain \
THOUGHTCHAIN_DEFAULT_KEY=borganism-brain \
THOUGHTCHAIN_STORAGE_ADAPTER=jsonl \
THOUGHTCHAIN_BIND_HOST=127.0.0.1 \
THOUGHTCHAIN_MCP_PORT=9471 \
THOUGHTCHAIN_REST_PORT=9472 \
thoughtchaind
```
## Server Surfaces

MCP endpoints:

- `GET /health`
- `POST /`
- `POST /tools/list`
- `POST /tools/execute`

REST endpoints:

- `GET /health`
- `GET /v1/chains`
- `POST /v1/bootstrap`
- `POST /v1/agents`
- `POST /v1/thoughts`
- `POST /v1/retrospectives`
- `POST /v1/search`
- `POST /v1/recent-context`
- `POST /v1/memory-markdown`
- `POST /v1/head`
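As a sketch, a thought could be appended over REST like this. The request body shape is an assumption based on the fields this README mentions (`chain_key`, `thought_type`, tags, importance); `content` is a hypothetical field name. See THOUGHTCHAIN_REST.md for the authoritative schema.

```sh
# Hypothetical request body; field names are assumptions, not the confirmed API.
curl -s -X POST http://127.0.0.1:9472/v1/thoughts \
  -H 'Content-Type: application/json' \
  -d '{
        "chain_key": "borganism-brain",
        "content": "Prefer the JSONL adapter for local development.",
        "thought_type": "Decision",
        "tags": ["storage", "adapter"],
        "importance": 0.8
      }'
```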
## Using With MCP Clients

`thoughtchaind` exposes both:

- a standard streamable HTTP MCP endpoint at `POST /`
- the legacy CloudLLM-compatible MCP endpoints at `POST /tools/list` and `POST /tools/execute`

That means you can:

- use native MCP clients such as Codex and Claude Code against `http://127.0.0.1:9471`
- keep using direct HTTP calls or `cloudllm`'s MCP compatibility layer when needed
### Codex
Codex CLI expects a streamable HTTP MCP server when you use `--url`:
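A likely invocation, assuming your Codex CLI version supports registering HTTP MCP servers with `--url`:

```sh
codex mcp add thoughtchain --url http://127.0.0.1:9471
```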
Useful follow-up commands:
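These subcommand names are a sketch and may vary by Codex CLI version:

```sh
codex mcp list
codex mcp get thoughtchain
codex mcp remove thoughtchain
```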
This connects Codex to the daemon's standard MCP root endpoint.
### Qwen Code
Qwen Code uses the same HTTP MCP transport model:
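A likely invocation, assuming Qwen Code follows the `mcp add --transport http` shape used by similar CLIs:

```sh
qwen mcp add --transport http thoughtchain http://127.0.0.1:9471
```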
Useful follow-up commands:
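Again a sketch; subcommand names may differ between versions:

```sh
qwen mcp list
qwen mcp remove thoughtchain
```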
For user-scoped configuration:
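The `-s user` scope flag here is an assumption based on comparable CLIs; check `qwen mcp add --help` for the exact flag:

```sh
qwen mcp add --transport http -s user thoughtchain http://127.0.0.1:9471
```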
### Claude Code

Claude Code supports MCP servers through its `claude mcp` commands and
project/user MCP config. For a remote HTTP MCP server, the configuration shape
is transport-based:
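A typical registration, assuming a current Claude Code with HTTP transport support:

```sh
claude mcp add --transport http thoughtchain http://127.0.0.1:9471
```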
Useful follow-up commands:
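```sh
claude mcp list
claude mcp get thoughtchain
claude mcp remove thoughtchain
```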
Claude Code also supports JSON config files such as `.mcp.json`. A ThoughtChain
HTTP MCP config looks like this:
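A minimal sketch of such a config, assuming the `mcpServers`/`type: "http"` shape Claude Code documents for remote servers:

```json
{
  "mcpServers": {
    "thoughtchain": {
      "type": "http",
      "url": "http://127.0.0.1:9471"
    }
  }
}
```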
Important:

- `/mcp` inside Claude Code is mainly for managing or authenticating MCP servers that are already configured
- the server itself must already be running at the configured URL
### GitHub Copilot CLI

GitHub Copilot CLI can also connect to `thoughtchaind` as a remote HTTP MCP
server.
From interactive mode:

- Run `/mcp add`
- Set `Server Name` to `thoughtchain`
- Set `Server Type` to `HTTP`
- Set `URL` to `http://127.0.0.1:9471`
- Leave headers empty unless you add auth later
- Save the config
You can also configure it manually in `~/.copilot/mcp-config.json`:
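A minimal sketch of that file; the `type`/`url` key names are assumptions based on the interactive prompts above:

```json
{
  "mcpServers": {
    "thoughtchain": {
      "type": "http",
      "url": "http://127.0.0.1:9471"
    }
  }
}
```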
## Retrospective Memory

ThoughtChain supports a dedicated retrospective workflow for lessons learned.

- Use `thoughtchain_append` for ordinary durable facts, constraints, decisions, plans, and summaries.
- Use `thoughtchain_append_retrospective` after a repeated failure, a long snag, or a non-obvious fix, when future agents should avoid repeating the same struggle.
The retrospective helper:

- defaults `thought_type` to `LessonLearned`
- always stores the thought with `role = Retrospective`
- still supports tags, concepts, confidence, importance, and `refs` to earlier thoughts such as the original mistake or correction
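As a sketch, a retrospective append through the legacy `POST /tools/execute` endpoint might look like this. The envelope and argument names are assumptions based on the fields listed above; see THOUGHTCHAIN_MCP.md for the authoritative schema.

```json
{
  "name": "thoughtchain_append_retrospective",
  "arguments": {
    "chain_key": "borganism-brain",
    "content": "Rebuilding the index without flushing the log corrupts the head pointer.",
    "tags": ["index", "corruption"],
    "confidence": 0.9,
    "importance": 0.95,
    "refs": ["id-of-the-original-mistake-thought"]
  }
}
```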
## Shared-Chain Multi-Agent Use

Multiple agents can write to the same `chain_key`.

Each stored thought carries:

- `agent_id`
- `agent_name`
- optional `agent_owner`
That allows a shared chain to represent memory from:
- multiple agents in one workflow
- multiple named roles in one orchestration system
- multiple tenants or owners writing to the same chain namespace
Queries can filter by:

- `agent_id`
- `agent_name`
- `agent_owner`
## Related Docs

At the repository root:

- `THOUGHTCHAIN_MCP.md`
- `THOUGHTCHAIN_REST.md`
- `thoughtchain/WHITEPAPER.md`
- `thoughtchain/changelog.txt`