# agentprompt
LLM prompt templates with Jinja2 syntax. Render system / user / assistant turns into a typed message list ready for the Anthropic or OpenAI SDK.
```toml
[dependencies]
agentprompt = "0.1"
```
## Why
pydantic-ai issue #921 (22 reactions) is the question everyone keeps re-asking: how do I template my prompts cleanly? In Python, Jinja2 is the de facto answer (jxnl/instructor and many other Python libs ship with it). Rust LLM users get nothing. agentprompt is the smallest possible primitive: minijinja under the hood, role-aware turns on top.
## Quick start
```rust
// `ChatPrompt` is used as the builder's name here, inferred from context.
use agentprompt::ChatPrompt;
use serde_json::json;

let messages = ChatPrompt::new()
    .system("You are a concise assistant.")
    .user("Question: {{q}}")
    .render(&json!({ "q": "what is 2+2?" }))
    .unwrap();

// messages[0] = { role: System, content: "You are a concise assistant." }
// messages[1] = { role: User, content: "Question: what is 2+2?" }
```
`Message` derives `Serialize` with a lowercase `role` field — drop straight into Anthropic's `messages` payload or OpenAI's `messages` array.
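For illustration, the rendered list from the quick start serializes to JSON like this (field values taken from the example above):

```json
[
  { "role": "system", "content": "You are a concise assistant." },
  { "role": "user", "content": "Question: what is 2+2?" }
]
```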
## With your provider SDK
Anthropic example (pseudo):
```rust
let messages = ChatPrompt::new()
    .system("You are a concise assistant.")
    .user("Question: {{q}}")
    .render(&json!({ "q": question }))?;

let resp = anthropic_client
    .messages()
    .create()
    .model("claude-sonnet-4-20250514")
    .max_tokens(1024)
    .messages(messages) // already shaped right
    .send()
    .await?;
```
## Single-template form
If you don't need role splitting, use `Prompt` directly:
```rust
use agentprompt::Prompt;
use serde_json::json;

// Template text here is illustrative.
let p = Prompt::new("Hello {{name}}!").unwrap();
let out = p.render(&json!({ "name": "world" })).unwrap();
assert_eq!(out, "Hello world!");
```
## Strictness
Missing variables fail loud (strict mode is always on). We'd rather break tests than silently send the model a prompt with a literal `{{q}}` in it.
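To make the contract concrete, here is a stand-alone sketch of the same fail-loud behavior. agentprompt itself delegates this to minijinja's strict handling of undefined variables; `render_strict` and its error strings below are purely illustrative:

```rust
use std::collections::HashMap;

// Substitute {{name}} placeholders from a map, and return an error instead
// of emitting the placeholder when a variable is missing.
fn render_strict(template: &str, vars: &HashMap<&str, &str>) -> Result<String, String> {
    let mut out = String::new();
    let mut rest = template;
    while let Some(start) = rest.find("{{") {
        let Some(end_rel) = rest[start..].find("}}") else {
            return Err("unclosed '{{'".to_string());
        };
        let end = start + end_rel;
        out.push_str(&rest[..start]);
        let name = rest[start + 2..end].trim();
        match vars.get(name) {
            // Known variable: splice its value in.
            Some(v) => out.push_str(v),
            // Unknown variable: fail loud rather than render "{{name}}".
            None => return Err(format!("missing variable: {name}")),
        }
        rest = &rest[end + 2..];
    }
    out.push_str(rest);
    Ok(out)
}
```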
## What it doesn't do
- Doesn't `{% include %}` other templates from disk in v0.1 (you can use `Prompt::new(file_contents)` after reading them yourself).
- Doesn't autoescape (these aren't HTML).
- Doesn't hand-craft per-provider message shapes — it gives you `Vec<Message>`; you map to the SDK's type.
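That mapping is a few lines in practice. A minimal sketch, where `Message`/`Role` stand in for the crate's output and `SdkMessage` for whatever your provider SDK expects (all three names are illustrative):

```rust
// Stand-in for agentprompt's output type.
enum Role { System, User, Assistant }
struct Message { role: Role, content: String }

// Stand-in for the provider SDK's message type.
struct SdkMessage { role: String, content: String }

// Map the crate's typed roles onto the SDK's string-keyed shape.
fn to_sdk(messages: Vec<Message>) -> Vec<SdkMessage> {
    messages
        .into_iter()
        .map(|m| SdkMessage {
            role: match m.role {
                Role::System => "system",
                Role::User => "user",
                Role::Assistant => "assistant",
            }
            .to_string(),
            content: m.content,
        })
        .collect()
}
```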
## License
MIT