# thoughtchain
thoughtchain is a standalone Rust crate for durable agent memory.
It stores semantically typed thoughts in an append-only, hash-chained memory log through a swappable storage adapter layer. The current default backend is JSONL, but the chain model is no longer tied to that format. Agents can:
- persist important insights, decisions, constraints, and checkpoints
- relate new thoughts to earlier thoughts with typed graph edges
- query memory by type, role, agent, tags, concepts, text, and importance
- reconstruct context for agent resumption
- export a Markdown memory view that can back `MEMORY.md`, MCP, REST, or CLI flows
The crate is intentionally independent from cloudllm so it can be embedded in
other agent systems without creating circular dependencies.
## What Is In This Folder

`thoughtchain/` contains:
- the standalone `thoughtchain` library crate
- an optional `server` feature for HTTP MCP and REST servers
- the `thoughtchaind` daemon binary
- dedicated tests under `thoughtchain/tests`
## Build
From inside thoughtchain/:
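A default build is likely the standard Cargo invocation (a sketch, assuming no extra flags are required):

```shell
# Build the thoughtchain library crate with default features.
cargo build
```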
Build with server support:
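Assuming the optional feature is named `server` as described above:

```shell
# Enable the optional server feature (HTTP MCP and REST servers).
cargo build --features server
```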
## Test
Run the crate tests:
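Assuming standard Cargo test conventions:

```shell
# Run the unit and integration tests, including those under thoughtchain/tests.
cargo test
```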
Run tests including the server feature:
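With the optional feature enabled, a likely shape is:

```shell
# Also compile and test the server feature.
cargo test --features server
```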
Run rustdoc tests:
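Cargo runs rustdoc tests with the `--doc` flag:

```shell
# Run only the documentation (rustdoc) tests.
cargo test --doc
```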
## Generate Docs
Build local Rust documentation:
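A typical invocation, assuming no special doc flags are needed:

```shell
# Build HTML docs for this crate only and open them in a browser.
cargo doc --no-deps --open
```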
Include the server API docs:
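Assuming the server API lives behind the `server` feature:

```shell
# Document the server feature's API as well.
cargo doc --no-deps --features server --open
```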
## Run The Daemon

The standalone daemon binary is `thoughtchaind`.
Run it with the server feature enabled:
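Assuming the binary target is named `thoughtchaind` and gated behind the `server` feature, a likely command shape is:

```shell
# Build and start the daemon; requires the server feature.
cargo run --features server --bin thoughtchaind
```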
When it starts, it serves both:
- an MCP server
- a REST server
It prints the active chain directory, default chain key, and bound MCP/REST addresses on startup.
## Daemon Configuration

`thoughtchaind` is configured with environment variables:

- `THOUGHTCHAIN_DIR`: Directory where ThoughtChain storage adapters store chain files.
- `THOUGHTCHAIN_DEFAULT_KEY`: Default `chain_key` used when requests omit one. Default: `borganism-brain`
- `THOUGHTCHAIN_STORAGE_ADAPTER`: Storage backend for newly opened chains. Supported values: `jsonl`, `binary`. Default: `jsonl`
- `THOUGHTCHAIN_BIND_HOST`: Bind host for both HTTP servers. Default: `127.0.0.1`
- `THOUGHTCHAIN_MCP_PORT`: MCP server port. Default: `9471`
- `THOUGHTCHAIN_REST_PORT`: REST server port. Default: `9472`
Example:
```shell
THOUGHTCHAIN_DIR=/tmp/thoughtchain \
THOUGHTCHAIN_DEFAULT_KEY=borganism-brain \
THOUGHTCHAIN_STORAGE_ADAPTER=jsonl \
THOUGHTCHAIN_BIND_HOST=127.0.0.1 \
THOUGHTCHAIN_MCP_PORT=9471 \
THOUGHTCHAIN_REST_PORT=9472 \
thoughtchaind
```
## Server Surfaces

MCP endpoints:
- `GET /health`
- `POST /tools/list`
- `POST /tools/execute`

REST endpoints:
- `GET /health`
- `POST /v1/bootstrap`
- `POST /v1/thoughts`
- `POST /v1/search`
- `POST /v1/recent-context`
- `POST /v1/memory-markdown`
- `POST /v1/head`
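As a quick smoke test, the health endpoints can be probed directly; the ports below are the defaults listed under Daemon Configuration:

```shell
# Liveness checks against the default MCP and REST ports.
curl http://127.0.0.1:9471/health
curl http://127.0.0.1:9472/health
```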
## Using With MCP Clients

`thoughtchaind` currently exposes a CloudLLM-compatible MCP-like HTTP surface:
- `POST /tools/list`
- `POST /tools/execute`
That is enough for `cloudllm`, local testing, and direct HTTP calls, but it is
not yet a standard streamable HTTP MCP transport endpoint. That distinction is
important:
- you can test it today with `curl`, `cloudllm`, or another direct client
- you cannot yet register `http://127.0.0.1:9471` as a native remote MCP server in Codex or Claude Code and expect it to work as a standard MCP transport
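Direct HTTP calls against the current surface might look like this; the `tools/execute` payload shape (and the tool name `some_tool`) is an assumption, not a documented schema:

```shell
# Enumerate the tools the daemon exposes.
curl -X POST http://127.0.0.1:9471/tools/list

# Invoke a tool (hypothetical request body).
curl -X POST http://127.0.0.1:9471/tools/execute \
  -H 'Content-Type: application/json' \
  -d '{"name": "some_tool", "arguments": {}}'
```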
When `thoughtchaind` grows a standard MCP transport, these are the command
shapes you will use.

### Codex

Codex CLI expects a streamable HTTP MCP server when you use `--url`.
Example command shape:
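A sketch of the future registration, assuming Codex's `mcp add` subcommand, the default MCP port, and `thoughtchain` as an arbitrary server name:

```shell
# Hypothetical: only valid once thoughtchaind speaks a standard
# streamable HTTP MCP transport.
codex mcp add thoughtchain --url http://127.0.0.1:9471
```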
Useful follow-up commands:
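Plausible management commands, assuming conventional `codex mcp` subcommand names:

```shell
codex mcp list                 # show configured MCP servers
codex mcp get thoughtchain     # inspect this server's entry
codex mcp remove thoughtchain  # unregister it
```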
Important:
- this is the correct Codex command form
- it will only work once `thoughtchaind` exposes a standard MCP transport
- against the current `/tools/list` and `/tools/execute` surface, direct HTTP calls are still the correct way to test
### Claude Code

Claude Code supports MCP servers through its `claude mcp` commands and
project/user MCP config. For a remote HTTP MCP server, the configuration shape
is transport-based.
Example command shape for a future standard HTTP transport:
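A sketch using Claude Code's transport-based `claude mcp add` form; the server name `thoughtchain` is an arbitrary choice:

```shell
# Hypothetical: valid only once thoughtchaind exposes a standard
# HTTP MCP transport.
claude mcp add --transport http thoughtchain http://127.0.0.1:9471
```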
Useful follow-up commands:
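Typical follow-ups:

```shell
claude mcp list                 # show configured servers
claude mcp get thoughtchain     # inspect the entry
claude mcp remove thoughtchain  # unregister it
```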
Claude Code also supports JSON config files such as .mcp.json. A future
ThoughtChain HTTP MCP config would look like this:
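A sketch of such an entry, assuming Claude Code's `mcpServers` config shape:

```json
{
  "mcpServers": {
    "thoughtchain": {
      "type": "http",
      "url": "http://127.0.0.1:9471"
    }
  }
}
```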
Important:
- `/mcp` inside Claude Code is mainly for managing or authenticating MCP servers that are already configured
- it is not the step that turns the current ThoughtChain daemon into a standard MCP transport
- until ThoughtChain exposes standard MCP HTTP or SSE transport, use its current HTTP endpoints directly
## Shared-Chain Multi-Agent Use

Multiple agents can write to the same `chain_key`.
Each stored thought carries:
- `agent_id`
- `agent_name`
- optional `agent_owner`
That allows a shared chain to represent memory from:
- multiple agents in one workflow
- multiple named roles in one orchestration system
- multiple tenants or owners writing to the same chain namespace
Queries can filter by:
- `agent_id`
- `agent_name`
- `agent_owner`
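For example, a filtered search over the REST surface might look like this; aside from the documented `chain_key` and agent filter fields, the request-body shape and the value `agent-1` are assumptions:

```shell
# Hypothetical body shape for POST /v1/search.
curl -X POST http://127.0.0.1:9472/v1/search \
  -H 'Content-Type: application/json' \
  -d '{"chain_key": "borganism-brain", "agent_id": "agent-1"}'
```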
## Related Docs

At the repository root:
- `THOUGHTCHAIN_MCP.md`
- `THOUGHTCHAIN_REST.md`
- `thoughtchain/changelog.txt`