# portcullis

A quotient lattice for AI agent permissions that prevents the "uninhabitable state".
## The Uninhabitable State
The uninhabitable state describes three capabilities that, when combined in an AI agent, create critical security vulnerabilities:
- Private Data Access - reading files, credentials, secrets
- Untrusted Content Exposure - web search, fetching URLs, processing external input
- Exfiltration Vector - git push, PR creation, API calls, shell commands
When an agent has all three at autonomous levels, prompt injection attacks can exfiltrate private data without human oversight.
```
┌─────────────────────────────────────────────────────────────────┐
│                     THE UNINHABITABLE STATE                     │
│                                                                 │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │   Private    │    │  Untrusted   │    │ Exfiltration │       │
│  │     Data     │ +  │   Content    │ +  │    Vector    │       │
│  │    Access    │    │   Exposure   │    │              │       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│         ↓                   ↓                   ↓               │
│  read_files ≥ LowRisk  web_fetch ≥ LowRisk   git_push ≥ LowRisk │
│                        web_search ≥ LowRisk create_pr ≥ LowRisk │
│                                              run_bash ≥ LowRisk │
│                                                                 │
│  When ALL THREE are autonomous → Prompt injection = Data theft  │
└─────────────────────────────────────────────────────────────────┘
```
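The detection condition itself is small. A minimal, self-contained sketch (illustrative only; the type and function names below are not this crate's API):

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Level {
    Never,   // 0
    LowRisk, // 1
    Always,  // 2
}

#[derive(Clone, Copy)]
struct Caps {
    read_files: Level, // private data access
    web_fetch: Level,  // untrusted content exposure
    git_push: Level,   // exfiltration vector
}

// All three capability groups at autonomous levels => the uninhabitable state.
fn is_uninhabitable(c: Caps) -> bool {
    c.read_files >= Level::LowRisk
        && c.web_fetch >= Level::LowRisk
        && c.git_push >= Level::LowRisk
}

fn main() {
    let dangerous = Caps {
        read_files: Level::Always,
        web_fetch: Level::LowRisk,
        git_push: Level::LowRisk,
    };
    // Dropping any one leg makes the configuration safe again.
    let safe = Caps { git_push: Level::Never, ..dangerous };
    assert!(is_uninhabitable(dangerous));
    assert!(!is_uninhabitable(safe));
}
```

Note that the predicate is monotone: lowering any one capability can only move a configuration out of the unsafe region, never into it.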
## Solution: The Nucleus Operator

This crate models permissions as a product lattice `L` with a portcullis operator that projects onto the quotient lattice `L'` of safe configurations:

```
L  = Capabilities × Obligations × Paths × Budget × Commands × Time
L' = { x ∈ L : ν(x) = x }   — the quotient of safe configurations
```

The portcullis ν satisfies:

- Idempotent: `ν(ν(x)) = ν(x)`
- Deflationary: `ν(x) ≤ x`
- Meet-preserving: `ν(x ∧ y) = ν(x) ∧ ν(y)`
When the uninhabitable state is detected, exfiltration operations gain approval obligations. The quotient L' contains only configurations where this invariant holds.
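A toy model of ν makes the first two laws concrete (hedged: this sketch illustrates the laws on a tiny hand-rolled lattice and is not the crate's implementation). Detecting the uninhabitable state attaches an approval obligation to the exfiltration leg, which only moves the configuration downward in the order:

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Level { Never, LowRisk, Always }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Perm {
    read: Level,
    fetch: Level,
    push: Level,
    // Obligations lower a configuration: more approvals = less autonomy.
    push_needs_approval: bool,
}

// Order: componentwise on levels; an added obligation can only decrease.
fn leq(a: Perm, b: Perm) -> bool {
    a.read <= b.read
        && a.fetch <= b.fetch
        && a.push <= b.push
        && (a.push_needs_approval || !b.push_needs_approval)
}

// The portcullis ν: if all three legs are autonomous, gate the push.
fn nu(p: Perm) -> Perm {
    let trifecta = p.read >= Level::LowRisk
        && p.fetch >= Level::LowRisk
        && p.push >= Level::LowRisk;
    Perm { push_needs_approval: p.push_needs_approval || trifecta, ..p }
}

fn main() {
    let x = Perm {
        read: Level::Always,
        fetch: Level::LowRisk,
        push: Level::LowRisk,
        push_needs_approval: false,
    };
    assert_eq!(nu(nu(x)), nu(x));       // idempotent
    assert!(leq(nu(x), x));             // deflationary
    assert!(nu(x).push_needs_approval); // exfiltration is now gated
}
```

The fixed points of this `nu` are exactly the configurations where the trifecta either does not hold or is already gated behind an approval, matching the invariant described above.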
See THREAT_MODEL.md for what this crate prevents and what it does NOT prevent.
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
portcullis = "1"
```
Basic usage (API names reconstructed from the types described below):

```rust
use portcullis::{PermissionLattice, CapabilityLevel::{Always, LowRisk}};

// Create a permission set with dangerous capabilities
let mut perms = PermissionLattice::default();
perms.capabilities.read_files = Always; // Private data
perms.capabilities.web_fetch = LowRisk; // Untrusted content
perms.capabilities.git_push = LowRisk;  // Exfiltration

// The meet operation applies the portcullis and adds approval obligations
let safe = perms.meet(&perms);
assert!(!safe.obligations.is_empty());
```
## Capability Levels

Permissions use a three-level lattice for autonomous capability:

| Level | Value | Description |
|---|---|---|
| `Never` | 0 | Never allow |
| `LowRisk` | 1 | Auto-approve for low-risk operations |
| `Always` | 2 | Always auto-approve |

Approval requirements are modeled separately as Obligations. The meet operation takes the minimum of two capability levels (most restrictive).
## Lattice Properties

The permission lattice satisfies the algebraic lattice properties:

- Commutative: `a ∧ b = b ∧ a`
- Associative: `(a ∧ b) ∧ c = a ∧ (b ∧ c)`
- Idempotent: `a ∧ a = a`
- Absorption: `a ∧ (a ∨ b) = a`
- Monotonic delegation: `delegate(parent, request) ≤ parent`

These properties are verified by property-based tests using proptest.
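On the three-level chain the laws can even be checked exhaustively. A self-contained sketch of what the property tests verify (not the tests themselves): on a chain, meet is `min` and join is `max`.

```rust
fn meet(a: u8, b: u8) -> u8 { a.min(b) } // ∧ on the chain 0 ≤ 1 ≤ 2
fn join(a: u8, b: u8) -> u8 { a.max(b) } // ∨ on the chain

fn main() {
    let levels = [0u8, 1, 2]; // Never, LowRisk, Always
    for &a in &levels {
        for &b in &levels {
            assert_eq!(meet(a, b), meet(b, a)); // commutative
            assert_eq!(meet(a, a), a);          // idempotent
            assert_eq!(meet(a, join(a, b)), a); // absorption
            for &c in &levels {
                // associative
                assert_eq!(meet(meet(a, b), c), meet(a, meet(b, c)));
            }
        }
    }
}
```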
## Components

### CapabilityLattice

Controls what tools/operations the agent can use autonomously:

```rust
use portcullis::{CapabilityLattice, CapabilityLevel::{Always, Never}};

let caps = CapabilityLattice {
    read_files: Always,
    web_fetch: Never,
    ..CapabilityLattice::default()
};
```
### Obligations

Obligations specify which operations require human approval:

```rust
use portcullis::{Obligations, Obligation};

let mut obligations = Obligations::default();
obligations.insert(Obligation::ApproveGitPush);
obligations.insert(Obligation::ApproveShellCommands);
```
### PathLattice

Controls file system access with glob patterns and sandboxing:

```rust
use portcullis::PathLattice;
use std::path::Path;

// Block sensitive files
let paths = PathLattice::block_sensitive();
assert!(!paths.allows(Path::new(".env")));
assert!(!paths.allows(Path::new("id_rsa")));
assert!(paths.allows(Path::new("src/main.rs")));

// Sandbox to a directory
let sandboxed = PathLattice::with_work_dir(Path::new("/home/user/project"));
```
### BudgetLattice

Controls cost and token limits with precision arithmetic:

```rust
use portcullis::BudgetLattice;

let mut budget = BudgetLattice::with_cost_limit(100);
assert!(budget.try_charge(30)); // Returns true (within budget)
assert_eq!(budget.remaining(), 70);
```
### CommandLattice

Controls shell command execution with proper parsing. Supports both string allow/block rules and structured (program + argv) patterns:

```rust
use portcullis::CommandLattice;

let cmds = CommandLattice::default();
assert!(cmds.allows("ls -la"));
assert!(!cmds.allows("rm -rf /"));
assert!(!cmds.allows("r\"m\" -rf /")); // Quoting bypass blocked

// Structured rules (program + args) for precision
let mut structured = CommandLattice::permissive();
structured.allow_rule("git", &["status"]);
structured.block_rule("git", &["push"]);
assert!(structured.allows("git status"));
assert!(!structured.allows("git push"));
```
### TimeLattice

Controls temporal validity windows:

```rust
use portcullis::TimeLattice;

let time = TimeLattice::hours(8);
assert!(time.is_valid_now());
assert!(!time.is_expired());
```
## Delegation

Safely delegate permissions to subagents (monotonicity guaranteed):

```rust
use portcullis::PermissionLattice;

let parent = PermissionLattice::permissive();
let requested = PermissionLattice::default();
match parent.delegate_to(&requested) {
    Ok(delegated) => { /* delegated ≤ parent holds by construction */ }
    Err(_) => { /* the request exceeded the parent's permissions */ }
}
```
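Why monotonicity is guaranteed: if delegation is implemented as a meet with the parent, the result can never exceed the parent. A self-contained sketch on the three-level chain (illustrative, not the crate's code):

```rust
// Delegated level = parent ∧ request: the child never exceeds the parent.
fn delegate(parent: u8, request: u8) -> u8 {
    parent.min(request)
}

fn main() {
    // Exhaustive check of delegate(parent, request) ≤ parent on the chain.
    for parent in 0u8..=2 {
        for request in 0u8..=2 {
            assert!(delegate(parent, request) <= parent);
        }
    }
}
```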
## Builder Pattern

Use the builder for fluent construction:

```rust
use portcullis::{PermissionLattice, CapabilityLattice, BudgetLattice};

let perms = PermissionLattice::builder()
    .description("CI agent")
    .capabilities(CapabilityLattice::default())
    .budget(BudgetLattice::with_cost_limit(100))
    .uninhabitable_constraint(true)
    .created_by("ops-team")
    .build();
```
## Preset Configurations

Common permission configurations:

```rust
use portcullis::PermissionLattice;

// Read-only: file reading and search only
let readonly = PermissionLattice::read_only();

// Filesystem read-only: read + search with sensitive paths blocked
let fs_readonly = PermissionLattice::filesystem_readonly();

// Web research: read + web search/fetch
let research = PermissionLattice::web_research();

// Code review: read + limited web search
let review = PermissionLattice::code_review();

// Edit-only: write + edit without exec or web
let edit_only = PermissionLattice::edit_only();

// Local dev: write + shell without web
let local_dev = PermissionLattice::local_dev();

// Fix issue: write + bash + git commit (PR requires approval)
let fix = PermissionLattice::fix_issue();

// Release: git push/PR with approvals
let release = PermissionLattice::release();

// Network-only: web access only
let network_only = PermissionLattice::network_only();

// Database client: CLI access for db tools
let db_client = PermissionLattice::database_client();

// Permissive: for trusted contexts (portcullis still enforced!)
let trusted = PermissionLattice::permissive();

// Restrictive: minimal permissions
let minimal = PermissionLattice::restrictive();
```
## Mathematical Framework Extensions
The crate provides advanced mathematical structures for principled permission modeling:
### Frame Theory and Nucleus Operators

The nucleus operator is formalized as a proper frame-theoretic construct, enabling type-safe quotient lattices:

```rust
use portcullis::{Nucleus, QuotientLattice, PermissionLattice};

// Create the uninhabitable-state quotient nucleus
let nucleus = Nucleus::new();

// Project through the nucleus to get a compile-time safety guarantee
let dangerous = PermissionLattice::permissive();
let safe = QuotientLattice::from_nucleus(&nucleus, dangerous);

// The inner lattice is guaranteed to be uninhabitable-state-safe
assert!(safe.inner().is_safe());
```

The nucleus satisfies:

- Idempotent: `j(j(x)) = j(x)`
- Deflationary: `j(x) ≤ x`
- Meet-preserving: `j(x ∧ y) = j(x) ∧ j(y)`
### Heyting Algebra (Intuitionistic Implication)

Conditional permissions using the Heyting adjunction `(c ∧ a) ≤ b ⟺ c ≤ (a → b)`:

```rust
use portcullis::{CapabilityLattice, CapabilityLevel::{Always, LowRisk}};

let current = CapabilityLattice { read_files: Always, ..CapabilityLattice::default() };
let target = CapabilityLattice { read_files: LowRisk, web_fetch: LowRisk, ..CapabilityLattice::default() };

// Check entailment (does current imply target?)
assert!(!current.entails(&target)); // web_fetch is missing

// Compute what's missing to reach target
let gap = current.permission_gap(&target);
```
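On a chain the Heyting implication has a closed form: `a → b` is the top element when `a ≤ b`, and `b` otherwise. A self-contained check of the adjunction (illustrative; independent of the crate's API):

```rust
const TOP: u8 = 2; // Always

fn meet(a: u8, b: u8) -> u8 { a.min(b) }

// Heyting implication on the chain: the largest c with c ∧ a ≤ b.
fn imp(a: u8, b: u8) -> u8 {
    if a <= b { TOP } else { b }
}

fn main() {
    // Adjunction: (c ∧ a) ≤ b ⟺ c ≤ (a → b), checked exhaustively.
    for a in 0u8..=TOP {
        for b in 0u8..=TOP {
            for c in 0u8..=TOP {
                assert_eq!(meet(c, a) <= b, c <= imp(a, b));
            }
        }
    }
    // A "gap" toward a target that is already entailed is ⊤ (no gap).
    assert_eq!(imp(1, 2), TOP);
    // Otherwise the gap is the target level itself.
    assert_eq!(imp(2, 1), 1);
}
```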
### Galois Connections (Trust Domain Translation)

Principled security label translation across trust domains:

```rust
use portcullis::{GaloisBridge, PermissionLattice};

// Create a bridge between internal and external domains
let bridge = GaloisBridge::internal_external();

// Translate permissions (security-preserving)
let internal = PermissionLattice::permissive();
let external = bridge.to_target(&internal);

// Round-trip shows information loss (deflationary closure)
let round_trip = bridge.round_trip(&internal);
assert!(round_trip <= internal);
```

Available presets: `internal_external`, `production_staging`, `human_agent`, `read_only`.
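A Galois connection in miniature (illustrative; the crate's actual bridges are not shown): an abstraction map `f` into a coarser domain and a concretization map `g` back, related by the adjunction `f(x) ≤ y ⟺ x ≤ g(y)`.

```rust
// Internal domain: fine-grained levels 0..=4; external domain: coarse 0..=2.
fn f(x: u8) -> u8 { x / 2 }               // abstraction (lower adjoint)
fn g(y: u8) -> u8 { (2 * y + 1).min(4) }  // concretization (upper adjoint)

fn main() {
    // Adjunction: f(x) ≤ y ⟺ x ≤ g(y), checked exhaustively.
    for x in 0u8..=4 {
        for y in 0u8..=2 {
            assert_eq!(f(x) <= y, x <= g(y));
        }
    }
    // Round trips: g∘f ≥ id (may over-approximate), f∘g ≤ id.
    for x in 0u8..=4 { assert!(g(f(x)) >= x); }
    for y in 0u8..=2 { assert!(f(g(y)) <= y); }
    // Information loss: level 0 comes back as 1 after crossing domains.
    assert_eq!(g(f(0)), 1);
}
```

The adjunction is what makes the translation "principled": neither direction can be tweaked independently without breaking the round-trip bounds.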
### Modal Operators (Necessity and Possibility)

Distinguish what is guaranteed (□) from what is achievable (◇):

```rust
use portcullis::{ModalContext, PermissionLattice};

let perms = PermissionLattice::fix_issue();
let context = ModalContext::new(&perms);

// Necessity: what can be done without approval
let guaranteed = context.necessary();

// Possibility: what could be achieved with escalation
let achievable = context.possible();

// Check if escalation is required
if context.requires_escalation() {
    // route the request to a human approver
}
```
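One plausible semantics for □ and ◇, as a self-contained sketch (hedged: this is an illustration, not the crate's model): necessity strips anything gated behind an approval, possibility assumes every required approval is granted.

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Level { Never, LowRisk, Always }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Perm {
    push: Level,
    push_needs_approval: bool,
}

// □: what is guaranteed with no human in the loop.
fn necessary(p: Perm) -> Perm {
    if p.push_needs_approval {
        Perm { push: Level::Never, push_needs_approval: false }
    } else {
        p
    }
}

// ◇: what is achievable if every required approval is granted.
fn possible(p: Perm) -> Perm {
    Perm { push_needs_approval: false, ..p }
}

// Escalation is needed exactly when □ and ◇ disagree.
fn requires_escalation(p: Perm) -> bool {
    necessary(p) != possible(p)
}

fn main() {
    let gated = Perm { push: Level::LowRisk, push_needs_approval: true };
    assert_eq!(necessary(gated).push, Level::Never);  // nothing guaranteed
    assert_eq!(possible(gated).push, Level::LowRisk); // reachable via approval
    assert!(requires_escalation(gated));

    let free = Perm { push: Level::LowRisk, push_needs_approval: false };
    assert!(!requires_escalation(free));
}
```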
### Graded Monad (Composable Risk Tracking)

Track risk through computation chains with proper monad laws:

```rust
use portcullis::{Graded, RiskGrade, PermissionLattice, evaluate_with_risk};

// Pure computation (no risk)
let safe: Graded<i32> = Graded::pure(1);

// Chain computations, risk accumulates via composition
let result = safe
    .and_then(|x| Graded::with_grade(RiskGrade::Low, x + 1))
    .and_then(|x| Graded::with_grade(RiskGrade::High, x * 2));
assert_eq!(result.grade(), RiskGrade::High); // max of grades

// Evaluate permission profile with automatic risk grading
let perms = PermissionLattice::fix_issue();
let graded = evaluate_with_risk(&perms);
```

The graded monad satisfies:

- Left identity: `pure(a).and_then(f) = f(a)`
- Right identity: `m.and_then(pure) = m`
- Associativity: `(m.and_then(f)).and_then(g) = m.and_then(|x| f(x).and_then(g))`
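A self-contained graded-monad sketch (illustrative, not the crate's types): values carry a risk grade, `and_then` composes grades with `max`, and the monad laws hold because `(Risk, max, None)` is a monoid.

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Risk { None, Low, High }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Graded<T> {
    grade: Risk,
    value: T,
}

// Unit: a pure value carries no risk (the identity of the grade monoid).
fn pure<T>(value: T) -> Graded<T> {
    Graded { grade: Risk::None, value }
}

impl<T> Graded<T> {
    // Grades compose via max: a chain is as risky as its riskiest step.
    fn and_then<U>(self, f: impl FnOnce(T) -> Graded<U>) -> Graded<U> {
        let next = f(self.value);
        Graded { grade: self.grade.max(next.grade), value: next.value }
    }
}

fn main() {
    let result = pure(1)
        .and_then(|x| Graded { grade: Risk::Low, value: x + 1 })
        .and_then(|x| Graded { grade: Risk::High, value: x * 2 });
    assert_eq!(result.grade, Risk::High); // max of grades
    assert_eq!(result.value, 4);

    // Left identity: pure(a).and_then(f) = f(a)
    let f = |x: i32| Graded { grade: Risk::Low, value: x + 1 };
    assert_eq!(pure(1).and_then(f), f(1));
    // Right identity: m.and_then(pure) = m
    let m = Graded { grade: Risk::Low, value: 7 };
    assert_eq!(m.and_then(pure), m);
}
```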
## Running the Examples

### Type-Safe Enforcement

Use `PermissionGuard` for compile-time enforcement:

```rust
use portcullis::{PermissionGuard, GuardedAction};

// The GuardedAction type cannot be constructed without passing checks
```
## Security Model

### What We Prevent
- Uninhabitable state completion at autonomous levels
- Privilege escalation via delegation
- Budget inflation via negative charges
- Path traversal attacks
- Command injection via quoting
- Deserialization bypass of constraints
- Permission tampering (checksum verification)
### What We Do NOT Prevent
- Human approval of malicious actions (social engineering)
- Attacks within a single capability
- Side-channel attacks
- Kernel-level escapes
See THREAT_MODEL.md for the complete security model.
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT license (LICENSE-MIT)
at your option.
## References

### Security

- The Lethal Trifecta for AI Agents - Simon Willison
- Container Hardening Against Agentic AI
- Lattice-Based Access Control - Denning (1976), Sandhu (1993)

### Mathematical Foundations

- Nuclei in Locale Theory - nLab
- Heyting Algebra - nLab
- Galois Connections - Wikipedia
- Graded Monads - nLab
- Modal Logic S4 - Stanford Encyclopedia of Philosophy
- Sheaf Semantics for Noninterference - Sterling & Harper