§Laminae — The Missing Layer Between Raw LLMs and Production AI
Laminae is a modular SDK that adds personality, voice, safety, learning, and containment to any AI application. Each layer works independently or together as a full stack.
§The Layers
| Layer | Crate | What It Does |
|---|---|---|
| Psyche | [laminae-psyche] | Multi-agent cognitive pipeline (Id + Superego → Ego) |
| Persona | [laminae-persona] | Voice extraction and style enforcement |
| Cortex | [laminae-cortex] | Self-improving learning loop from user edits |
| Shadow | [laminae-shadow] | Adversarial red-teaming of AI output |
| Ironclad | [laminae-ironclad] | Process-level execution sandbox |
| Glassbox | [laminae-glassbox] | Input/output containment layer |
Plus [laminae-ollama] for local LLM inference via Ollama.
§Quick Start
```toml
[dependencies]
laminae = "0.4"
```

Use individual crates for fine-grained control, or this meta-crate for the full stack.
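For fine-grained control, the layer crates can be declared individually instead of the meta-crate. A sketch, assuming the layer crates follow the same `0.4` version line as `laminae` (check each crate's page for its actual latest version):

```toml
[dependencies]
# Hypothetical minimal stack: cognitive pipeline plus containment,
# without pulling in the other layers.
laminae-psyche = "0.4"
laminae-glassbox = "0.4"
```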
§Re-exports
```rust
pub use laminae_persona as persona;
pub use laminae_cortex as cortex;
pub use laminae_glassbox as glassbox;
pub use laminae_psyche as psyche;
pub use laminae_shadow as shadow;
pub use laminae_ironclad as ironclad;
pub use laminae_ollama as ollama;
```