
Module template


Prompt-template engine for .harn.prompt assets and the render / render_prompt builtins.

§Surface

{{ name }}                                 interpolation
{{ user.name }} / {{ items[0] }}           nested path access
{{ name | upper | default: "anon" }}       filter pipeline
{{ if expr }}..{{ elif expr }}..{{ else }}..{{ end }}
{{ for x in xs }}..{{ else }}..{{ end }}   else = empty-iterable fallback
{{ for k, v in dict }}..{{ end }}
{{ include "partial.harn.prompt" }}
{{ include "partial.harn.prompt" with { x: name } }}
{{ section "task" }}..{{ endsection }}
{{# comment — stripped at parse time #}}
{{ raw }}..literal {{braces}}..{{ endraw }}
{{- x -}}                                  whitespace-trim markers
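A small combined example may make the surface clearer. This is an illustrative template only (the `user` and `items` bindings are hypothetical), showing how the constructs above nest:

```
{{# greeting partial, assumes `user` and `items` bindings #}}
{{ section "greeting" }}
Hello {{ user.name | default: "anon" }}!
{{ for item in items }}
- {{ item }}
{{ else }}
(no items)
{{ end }}
{{ endsection }}
```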

Back-compat: a bare {{ident}} that fails to resolve falls through silently and writes the literal text back into the output on a miss, preserving the pre-v2 contract. All new constructs surface a TemplateError on parse or evaluation failure.
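The legacy miss-keeps-literal behavior can be sketched in miniature. This is a hypothetical stand-in, not the crate's parser; it only shows the write-back-on-miss contract for bare interpolations:

```rust
use std::collections::HashMap;

/// Sketch of the pre-v2 bare-interpolation contract: a `{{ident}}`
/// that resolves is substituted; a miss writes the literal
/// `{{ident}}` text back into the output unchanged.
fn legacy_bare_interp(template: &str, bindings: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = template;
    while let Some(start) = rest.find("{{") {
        out.push_str(&rest[..start]);
        match rest[start..].find("}}") {
            Some(rel_end) => {
                let end = start + rel_end;
                let ident = rest[start + 2..end].trim();
                match bindings.get(ident) {
                    Some(v) => out.push_str(v), // hit: substitute the binding
                    None => out.push_str(&rest[start..end + 2]), // miss: keep literal
                }
                rest = &rest[end + 2..];
            }
            None => break, // unterminated marker: emit the tail verbatim
        }
    }
    out.push_str(rest);
    out
}
```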

Modules§

lint
AST surface that harn-lint consumes to enforce .harn.prompt drift-prevention rules (#1669).

Structs§

BranchDecision
One conditional or section decision recorded during a template render. Powers the “variant resolution” trace surfaced in the portal so on-call engineers can answer “which capability branch fired for this model?” without re-running the template. Recorded deterministically — same llm snapshot + bindings always produce the same trace, which is what makes replay reproducible (#1668).
LlmRenderContext
Resolved provider/model identity plus the corresponding capability snapshot, materialized at LLM-frame entry and injected as the llm binding during any render() call inside that frame.
LlmRenderContextGuard
RAII guard that pushes a context on construction and pops on drop. Use this in Rust hosts (e.g. llm_call_impl) so the stack stays balanced across ?-shortcircuits and panics.
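The guard pattern described above can be sketched without the crate's types. A hypothetical miniature (a thread-local `Vec<String>` standing in for the real context stack) showing push-on-construct, pop-on-drop balance:

```rust
use std::cell::RefCell;

thread_local! {
    // Illustrative stand-in for the ambient render-context stack.
    static CTX_STACK: RefCell<Vec<String>> = RefCell::new(Vec::new());
}

/// Mini version of the RAII guard pattern: push on construction,
/// pop on drop, so the stack stays balanced even when the enclosing
/// function unwinds early (`?` or panic).
struct CtxGuard;

impl CtxGuard {
    fn push(ctx: String) -> CtxGuard {
        CTX_STACK.with(|s| s.borrow_mut().push(ctx));
        CtxGuard
    }
}

impl Drop for CtxGuard {
    fn drop(&mut self) {
        CTX_STACK.with(|s| {
            s.borrow_mut().pop();
        });
    }
}

fn depth() -> usize {
    CTX_STACK.with(|s| s.borrow().len())
}

/// Returns (depth while the guard is alive, depth after it drops).
fn demo() -> (usize, usize) {
    let inside;
    {
        let _g = CtxGuard::push("model-frame".into());
        inside = depth();
    } // guard dropped here, stack popped
    (inside, depth())
}
```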
PromptSourceSpan
One byte-range in a rendered prompt mapped back to its source template. Foundation for the prompt-provenance UX (burin-code #93): hover a chunk of the live prompt in the debugger and jump to the .harn.prompt line that produced it.
RegisteredPrompt

Enums§

BranchKind
PromptSpanKind

Functions§

current_llm_render_context
Return a clone of the active frame, or None if no LLM context is in scope. Render entry-points use this to decide whether to inject the llm binding.
lookup_prompt_consumers
Return every span across every registered prompt that overlaps a template range. Powers the inverse “which rendered ranges consumed this template region?” navigation.
lookup_prompt_span
Resolve an output byte offset to its originating template span. Returns the innermost matching Expr / LegacyBareInterp span when one exists, falling back to broader structural spans (If / For / Include) so a click anywhere in a rendered loop iteration still navigates somewhere useful.
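The innermost-match-with-fallback resolution can be illustrated with a self-contained sketch. The `Span` type here is hypothetical, not the crate's `PromptSourceSpan`; it only shows the narrowest-containing-span rule:

```rust
/// Hypothetical span over the rendered output, tagged with the
/// kind of template node that produced it.
#[derive(Debug, Clone)]
struct Span {
    start: usize,
    end: usize,
    label: &'static str,
}

/// Pick the narrowest span containing `offset`. Because a broad
/// structural span (If / For / Include) also contains the offset,
/// it naturally serves as the fallback when no Expr span matches.
fn lookup_innermost(spans: &[Span], offset: usize) -> Option<Span> {
    spans
        .iter()
        .filter(|s| s.start <= offset && offset < s.end)
        .min_by_key(|s| s.end - s.start) // narrowest containing span wins
        .cloned()
}
```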
next_prompt_render_ordinal
Produce the next monotonic ordinal for a render-mark. Pipelines invoke the prompt_mark_rendered builtin which calls this to obtain a sequence number without having to know about per-session event counters. The IDE scrubber orders matching consumers by this ordinal when the emitted_at_ms timestamps collide.
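A monotonic ordinal of this shape is typically a single atomic counter. A minimal sketch, assuming a process-wide counter (the real implementation may scope it per session):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative stand-in for the render-ordinal counter.
static NEXT_ORDINAL: AtomicU64 = AtomicU64::new(0);

/// Hand out strictly increasing ordinals so consumers can still be
/// ordered when wall-clock timestamps collide.
fn next_ordinal() -> u64 {
    NEXT_ORDINAL.fetch_add(1, Ordering::Relaxed)
}
```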
pop_llm_render_context
Pop the most recently pushed frame. Returns None (rather than panicking) if the stack was empty, since the host may legitimately unwind through a balanced push/pop sequence.
prompt_render_indices
Fetch every event index where prompt_id was rendered. Called by the DAP adapter to populate the eventIndices list in the burin/promptConsumers response.
push_llm_render_context
Push a frame onto the ambient render-context stack. Pair with pop_llm_render_context (or use LlmRenderContextGuard) so the stack stays balanced even on the unwind path.
record_prompt_render_index
Record a render event index against a prompt_id (#106). The scrubber’s jump-to-render action walks this map to move the playhead to the AgentEvent where the template was consumed. Stored as a Vec so re-renders of the same prompt id accumulate.
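The accumulate-on-re-render behavior amounts to an append-only multimap. A hypothetical sketch (names are illustrative, not the crate's API):

```rust
use std::collections::HashMap;

/// Each render of a prompt id appends its event index, so repeated
/// renders of the same template all remain reachable from the scrubber.
fn record_render(map: &mut HashMap<String, Vec<usize>>, prompt_id: &str, event_idx: usize) {
    map.entry(prompt_id.to_string()).or_default().push(event_idx);
}
```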
render_template_to_string
Render a template for callers outside the VM crate that need the same prompt-template semantics as render(...) / render_prompt(...).
render_template_to_string_with_branch_trace
Render a template and return the capability branch trace that drove logical-section materialization. This is the deterministic counterpart to the template.render transcript event and is used by prompt evals that need to score section shape without scraping JSONL artifacts.
validate_template_syntax
Parse-only validation for lint/preflight. Returns a human-readable error message when the template body is syntactically invalid; Ok(()) when the template would parse. Does not resolve {{ include }} targets — those are validated at render time with their own error reporting.