normcore 0.1.1

Rust implementation baseline for NormCore normative admissibility evaluator

NormCore (Rust)

NormCore is a deterministic normative admissibility evaluator for agent speech acts.

It answers one question only:

Was the agent allowed to speak in this form, given what it observed?

It does not evaluate semantic truth, task correctness, or answer quality.

Specification

NormCore tracks the IETF Internet-Draft:

Notes:

  • This is an Internet-Draft (work in progress), not an RFC.
  • Axiom labels in this crate (A4, A5, A6, A7) follow that draft.
  • If draft wording changes, behavior may be updated in future releases.

Install

Library:

cargo add normcore

CLI:

cargo install normcore

How It Works

NormCore evaluates normative form and grounding, not semantic truth:

  1. Extract normative statements from the assistant output.
  2. Detect modality (for example, assertive, conditional, or refusal).
  3. Build grounding only from externally observed evidence (tool results + optional external grounds).
  4. Link statements to cited grounds ([@citation_key]).
  5. Apply axioms (A4-A7) lexicographically.

A single axiom violation is enough to make the final result inadmissible.
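The lexicographic pass in step 5 can be sketched as follows. This is a minimal illustration only, not the crate's internals; the Statement and Verdict types and the axiom checks below are hypothetical.

```rust
// Hypothetical sketch of lexicographic axiom application (NOT NormCore's
// actual internals): axioms are checked in a fixed order, and the first
// violation makes the overall result inadmissible.

#[derive(Debug, PartialEq)]
enum Verdict {
    Admissible,
    Inadmissible(&'static str), // name of the first violated axiom
}

struct Statement {
    is_assertive: bool,
    has_citation: bool,
}

fn evaluate_statement(stmt: &Statement) -> Verdict {
    // Each function stands in for one axiom check; real checks would
    // inspect modality, grounding, and citation links.
    let axioms: [(&'static str, fn(&Statement) -> bool); 2] = [
        // illustrative rule: an assertive must cite a ground
        ("A4", |s: &Statement| !s.is_assertive || s.has_citation),
        // placeholder for a further check
        ("A5", |_s: &Statement| true),
    ];
    for (name, check) in axioms {
        if !check(stmt) {
            // Lexicographic: the first failing axiom decides the result.
            return Verdict::Inadmissible(name);
        }
    }
    Verdict::Admissible
}

fn main() {
    let ungrounded = Statement { is_assertive: true, has_citation: false };
    let grounded = Statement { is_assertive: true, has_citation: true };
    println!("{:?}", evaluate_statement(&ungrounded)); // Inadmissible("A4")
    println!("{:?}", evaluate_statement(&grounded));   // Admissible
}
```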

Hard Invariants

  • Agent text cannot license itself.
  • Grounding must come from externally observed evidence.
  • Citations link claims to grounds by key ([@key]).
  • Personalization/memory/profile data is non-epistemic and not grounding.
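The citation-linking invariant can be illustrated with a small standalone sketch. The parser below is hypothetical and not the crate's extractor: it pulls [@key] markers from agent text and reports any key with no matching externally observed ground.

```rust
use std::collections::HashSet;

// Hypothetical sketch of citation linking (NOT NormCore's actual parser):
// extract [@key] markers from agent text and check each against the set
// of keys for externally observed grounds.
fn cited_keys(text: &str) -> Vec<String> {
    let mut keys = Vec::new();
    let mut rest = text;
    while let Some(start) = rest.find("[@") {
        rest = &rest[start + 2..];
        if let Some(end) = rest.find(']') {
            keys.push(rest[..end].to_string());
            rest = &rest[end + 1..];
        } else {
            break;
        }
    }
    keys
}

// Returns cited keys that have no corresponding ground.
fn unlinked_keys(text: &str, grounds: &HashSet<&str>) -> Vec<String> {
    cited_keys(text)
        .into_iter()
        .filter(|k| !grounds.contains(k.as_str()))
        .collect()
}

fn main() {
    let grounds: HashSet<&str> = ["callWeatherNYC"].into_iter().collect();
    let text = "Carry an umbrella [@callWeatherNYC]. Deploy now [@unknownKey].";
    println!("{:?}", unlinked_keys(text, &grounds)); // ["unknownKey"]
}
```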

Status Model

Top-level AdmissibilityJudgment.status is one of:

  • acceptable
  • conditionally_acceptable
  • violates_norm
  • unsupported
  • ill_formed
  • underdetermined
  • no_normative_content
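A caller that needs to branch on the status string can mirror the seven documented values in a local enum. This is a consumer-side sketch; the Status type below is not part of the normcore API.

```rust
// Consumer-side sketch: mirror the seven documented status strings in a
// local enum so downstream code can match exhaustively. This type is NOT
// part of the normcore API.
#[derive(Debug, PartialEq)]
enum Status {
    Acceptable,
    ConditionallyAcceptable,
    ViolatesNorm,
    Unsupported,
    IllFormed,
    Underdetermined,
    NoNormativeContent,
}

fn parse_status(s: &str) -> Option<Status> {
    match s {
        "acceptable" => Some(Status::Acceptable),
        "conditionally_acceptable" => Some(Status::ConditionallyAcceptable),
        "violates_norm" => Some(Status::ViolatesNorm),
        "unsupported" => Some(Status::Unsupported),
        "ill_formed" => Some(Status::IllFormed),
        "underdetermined" => Some(Status::Underdetermined),
        "no_normative_content" => Some(Status::NoNormativeContent),
        _ => None, // unknown string, e.g. from a future release
    }
}

fn main() {
    println!("{:?}", parse_status("violates_norm")); // Some(ViolatesNorm)
}
```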

Public API

use normcore::{evaluate, EvaluateInput};

let judgment = evaluate(EvaluateInput {
    agent_output: Some("If deployment is blocked, we should roll back.".to_string()),
    conversation: None,
    grounds: None,
})
.expect("evaluation should succeed");

assert_eq!(judgment.status.as_str(), "conditionally_acceptable");

Inputs

evaluate() accepts:

  • agent_output (optional): assistant output string
  • conversation (optional): full chat history as JSON messages; last message must be assistant
  • grounds (optional): external grounds as OpenAI-style annotations or normalized grounds

At least one of agent_output or conversation is required. If both are provided, agent_output must exactly match the last assistant content in conversation.
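The two rules above (at least one input present; exact match when both are given) can be sketched as a standalone check. The function and its signature are illustrative, not the crate's own validation code.

```rust
// Illustrative sketch of the documented input rules (NOT the crate's own
// validation code): at least one of agent_output / conversation must be
// present, and if both are given, agent_output must exactly equal the
// last assistant message's content.
fn validate_inputs(
    agent_output: Option<&str>,
    last_assistant_content: Option<&str>, // from the conversation, if provided
) -> Result<(), &'static str> {
    match (agent_output, last_assistant_content) {
        (None, None) => Err("at least one of agent_output or conversation is required"),
        (Some(out), Some(last)) if out != last => {
            Err("agent_output must exactly match the last assistant message")
        }
        _ => Ok(()),
    }
}

fn main() {
    println!("{:?}", validate_inputs(Some("We should deploy now."), None)); // Ok(())
    println!("{:?}", validate_inputs(None, None));                          // Err(..)
}
```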

Minimal CLI Usage

normcore evaluate --agent-output "We should deploy now."

Unlicensed assertive -> expected violates_norm.

normcore evaluate --agent-output "If the deployment is blocked, we should roll back."

Conditional phrasing -> expected conditionally_acceptable.

normcore evaluate --conversation '[
  {"role":"assistant","content":"","tool_calls":[{"id":"callWeatherNYC","type":"function","function":{"name":"get_weather","arguments":"{\"city\":\"New York\"}"}}]},
  {"role":"tool","tool_call_id":"callWeatherNYC","content":"{\"weather_id\":\"nyc_2026-02-07\"}"},
  {"role":"assistant","content":"You should carry an umbrella [@callWeatherNYC]."}
]'

Grounded assertive with citation -> expected acceptable.

Repository Development

Repository-focused scripts, test tracks, and from-source workflows are documented in the root README: