<div align="center">
# plato-instinct
**18 Agent Instincts with Constraint Grammar**
*MUST, SHOULD, CANNOT, MAY — formal behavioral constraints for AI agents.*
[crates.io](https://crates.io/crates/plato-instinct)
[docs.rs](https://docs.rs/plato-instinct)
[CI](https://github.com/cocapn/plato-instinct/actions)
[License](LICENSE)
</div>
## What?
`plato-instinct` provides a **formal grammar for agent behavioral constraints**. Instead of vague instructions ("be careful"), agents get typed, enforceable instincts:
- **MUST** — Hard constraint. Violation is a critical failure.
- **SHOULD** — Soft constraint. Violation is logged and scored.
- **CANNOT** — Negative constraint. The agent is structurally unable to perform the action.
- **MAY** — Optional capability. The agent can but isn't required to.
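The four levels map naturally onto a plain enum. The sketch below is an illustrative model of that mapping, not the crate's actual definition (variant names and the `is_blocking` helper are assumptions for the example):

```rust
/// RFC 2119-style constraint levels (illustrative sketch, not the crate's API).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Severity {
    Must,   // Hard requirement: violation is a critical failure
    Should, // Soft requirement: violation is logged and scored
    Cannot, // Negative constraint: the action is structurally blocked
    May,    // Optional capability: no obligation either way
}

impl Severity {
    /// Whether violating this level should block the action outright.
    pub fn is_blocking(self) -> bool {
        matches!(self, Severity::Must | Severity::Cannot)
    }
}

fn main() {
    // Only the hard levels block; the soft and optional levels do not.
    assert!(Severity::Must.is_blocking());
    assert!(Severity::Cannot.is_blocking());
    assert!(!Severity::Should.is_blocking());
    assert!(!Severity::May.is_blocking());
}
```

Splitting "blocking" from "non-blocking" this way is what lets soft constraints be scored rather than enforced.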
## Quick Start
```toml
[dependencies]
plato-instinct = "0.1"
```
```rust
use plato_instinct::*;

let mut engine = InstinctEngine::new();

// Define instincts with formal severity
engine.add(Instinct::new(
    "exfiltrate_credentials",
    Severity::CANNOT,   // Negative constraint: never leak secrets
    Enforcement::Hard,  // Structurally enforced
    "Never expose fleet credentials to external channels",
));

engine.add(Instinct::new(
    "push_every_30min",
    Severity::SHOULD,   // Soft: preferred but not critical
    Enforcement::Soft,  // Scored but not blocked
    "Push work to vessel repos every 30 minutes",
));

// Evaluate a candidate agent action against all registered instincts
let result = engine.evaluate(&action);
```
## Core API
| Type | Description |
|------|-------------|
| `Instinct` | A named behavioral constraint with severity and enforcement level |
| `Severity` | `MUST`, `SHOULD`, `CANNOT`, `MAY` — RFC 2119-style constraint levels |
| `Enforcement` | `Hard` (blocks action) or `Soft` (scores and logs) |
| `Assertion` | A checked behavioral claim against an instinct |
| `State` | Current instinct satisfaction state for an agent |
| `Reflex` | Automatic response triggered by instinct evaluation |
| `Thresholds` | Configurable thresholds for soft enforcement scoring |
| `InstinctEngine` | The full engine managing all instincts for an agent |
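To make the table concrete, here is a minimal, self-contained sketch of how an engine can turn the `Hard`/`Soft` split into evaluation verdicts. Every name and signature below is an assumption made for illustration, keeping only the fields needed to show the split; the crate's real types carry more (severity, descriptions, reflexes):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Enforcement {
    Hard, // Violation blocks the action
    Soft, // Violation is scored and logged
}

/// A named constraint (illustrative: real instincts also carry severity, etc.).
struct Instinct {
    name: &'static str,
    enforcement: Enforcement,
}

/// Outcome of checking one action against the registered instincts.
#[derive(Debug, PartialEq)]
enum Verdict {
    Blocked { instinct: &'static str },
    Scored { instinct: &'static str, penalty: f32 },
    Allowed,
}

struct InstinctEngine {
    instincts: Vec<Instinct>,
}

impl InstinctEngine {
    fn new() -> Self {
        Self { instincts: Vec::new() }
    }

    fn add(&mut self, instinct: Instinct) {
        self.instincts.push(instinct);
    }

    /// A hard violation blocks the action; a soft one only accrues a penalty.
    fn evaluate(&self, violated: &str) -> Verdict {
        for i in &self.instincts {
            if i.name == violated {
                return match i.enforcement {
                    Enforcement::Hard => Verdict::Blocked { instinct: i.name },
                    Enforcement::Soft => Verdict::Scored { instinct: i.name, penalty: 1.0 },
                };
            }
        }
        Verdict::Allowed
    }
}

fn main() {
    let mut engine = InstinctEngine::new();
    engine.add(Instinct { name: "exfiltrate_credentials", enforcement: Enforcement::Hard });
    engine.add(Instinct { name: "push_every_30min", enforcement: Enforcement::Soft });

    // A hard constraint blocks outright; a soft one is merely scored.
    assert_eq!(
        engine.evaluate("exfiltrate_credentials"),
        Verdict::Blocked { instinct: "exfiltrate_credentials" }
    );
    assert_eq!(
        engine.evaluate("push_every_30min"),
        Verdict::Scored { instinct: "push_every_30min", penalty: 1.0 }
    );
    assert_eq!(engine.evaluate("unrelated_action"), Verdict::Allowed);
}
```

The key design point the sketch shows: enforcement is decided by typed data on the instinct, not by ad-hoc logic at each call site.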
## Design Philosophy
Agent behavioral constraints should be **machine-checkable**, not just human-readable. By encoding instincts as typed data structures with formal enforcement semantics, we get:
1. **Verifiable compliance** — `InstinctEngine::audit()` checks all instincts
2. **Composable constraints** — instincts can reference and override each other
3. **Scored behavior** — soft constraints produce measurable compliance scores
4. **Emergent discipline** — hard constraints are structurally impossible to violate
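As a toy illustration of point 3, a compliance score for soft constraints could be the fraction of SHOULD-level instincts satisfied, checked against a configurable minimum. This is a hypothetical sketch; the real `Thresholds` type and audit logic may work differently:

```rust
/// Hypothetical threshold configuration for soft-enforcement scoring.
struct Thresholds {
    min_compliance: f64,
}

/// Fraction of soft instincts the agent satisfied (1.0 when none apply).
fn compliance_score(satisfied: usize, total: usize) -> f64 {
    if total == 0 { 1.0 } else { satisfied as f64 / total as f64 }
}

/// Pass the audit when the score meets the configured minimum.
fn audit(satisfied: usize, total: usize, t: &Thresholds) -> bool {
    compliance_score(satisfied, total) >= t.min_compliance
}

fn main() {
    let t = Thresholds { min_compliance: 0.8 };
    // 4 of 5 soft instincts satisfied: score 0.8, which meets the threshold
    assert!(audit(4, 5, &t));
    // 3 of 5 satisfied: score 0.6, which fails it
    assert!(!audit(3, 5, &t));
}
```

Because the score is a plain number, fleet operators can track it over time or alert on regressions without inspecting individual actions.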
## Part of PLATO
Part of the [PLATO ecosystem](https://github.com/cocapn) — formal knowledge and behavioral constraints for AI agent fleets.
## License
MIT — [Cocapn Fleet](https://github.com/cocapn)