# laminae

The missing layer between raw LLMs and production AI.
Meta-crate that re-exports all Laminae layers. Add this one dependency to get the full stack.
## Installation

```toml
[dependencies]
laminae = "0.4"
tokio = { version = "1", features = ["full"] }
```
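Equivalently, from the command line (versions as above):

```shell
cargo add laminae@0.4
cargo add tokio@1 --features full
```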
Or pick individual layers (crate names follow the `laminae-<layer>` pattern of the modules below):

```toml
laminae-psyche = "0.4"    # Multi-agent cognitive pipeline
laminae-persona = "0.4"   # Voice extraction & enforcement
laminae-cortex = "0.4"    # Self-improving learning loop
laminae-shadow = "0.4"    # Adversarial red-teaming
laminae-glassbox = "0.4"  # I/O containment
laminae-ironclad = "0.4"  # Process sandbox
laminae-ollama = "0.4"    # Ollama client
```
## The Layers
| Layer | Module | What It Does |
|---|---|---|
| Psyche | `laminae::psyche` | Id + Superego shape the Ego's response with invisible context |
| Persona | `laminae::persona` | Voice extraction from samples, style enforcement, AI phrase detection |
| Cortex | `laminae::cortex` | Tracks user edits, detects patterns, learns reusable instructions |
| Shadow | `laminae::shadow` | Automated security auditing of AI output |
| Ironclad | `laminae::ironclad` | Command whitelist, network sandbox, resource watchdog |
| Glassbox | `laminae::glassbox` | Input/output validation, rate limiting, path protection |
Plus `laminae::ollama` for local LLM inference.
## Quick Example

A minimal sketch of the intended flow (module paths per the table above; exact constructor and method names may differ, see the crate docs):

```rust
use laminae::ollama::OllamaClient;

#[tokio::main]
async fn main() {
    // Hypothetical setup: connect to a local Ollama instance and route
    // prompts through the Laminae stack.
    let _client = OllamaClient::default();
    // ...
}
```
See the examples for Claude API, OpenAI API, Shadow auditing, and full-stack integration.
## License

Apache-2.0 - Copyright 2026 Orel Ohayon.