# tdln-brain

Deterministic Cognitive Layer for LogLine OS.

NL → TDLN `SemanticUnit` → canonical bytes (via `json_atomic`) → happy Gate → verifiable execution.
## What is this?
`tdln-brain` is the cognitive shim between LLMs and the LogLine kernel. It:

- Renders a typed `CognitiveContext` (system directive, recall, constraints) into LLM-ready messages
- Parses model output into a strict `SemanticUnit` — or returns a hard error
- Separates reasoning from action (free-form text vs. strict JSON)
## Invariants
- Strict output: JSON that parses into a `SemanticUnit`, or it's a `BrainError::Hallucination`
- Kernel awareness: constraints (policies) are visible before generation, reducing Gate rejections
- Deterministic canon: one source of truth for canonical bytes (delegates to `json_atomic`)
## Quickstart

```rust
// Sketch only: exact item paths depend on the crate's public API.
use tdln_brain::{CognitiveContext, NeuralBackend};
use serde_json::json;

async fn run() { /* render context → generate → parse SemanticUnit */ }
```
## API Overview
### Core Types

```rust
// Cognitive context for prompt rendering
pub struct CognitiveContext { /* system directive, recall, constraints */ }

// Chat message
pub struct ChatMessage { /* … */ }

// Parsed decision
pub struct SemanticUnit { /* … */ }
```
### NeuralBackend Trait

Implement this to plug in any LLM:
### Parser

```rust
// Extract JSON from raw LLM output, parse into SemanticUnit
pub fn parse(raw: &str) -> Result<SemanticUnit, BrainError>; // signature illustrative
```
Handles:

- Clean JSON: `{"kind":"grant",...}`
- Fenced blocks: `` ```json {...} ``` ``
- Mixed prose + JSON
## Error Model

| Error | Meaning |
|---|---|
| `Provider(msg)` | Transport/API error |
| `Hallucination(msg)` | Output not valid TDLN JSON |
| `ContextOverflow` | Context window exceeded |
| `Parsing(msg)` | Malformed JSON |
## Features

- `default` — Core functionality
- `http-drivers` — Includes `reqwest` for HTTP-based backends
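In a consumer crate, enabling the HTTP backends might look like this (the version is a placeholder):

```toml
[dependencies]
tdln-brain = { version = "*", features = ["http-drivers"] }
```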
Security
#![forbid(unsafe_code)]- No implicit decisions — invalid output = hard error
- Canon chain downstream (hash at compile/proof stage, not here)
## License

MIT — See LICENSE