greentic-designer 0.6.0

Greentic Designer — orchestrator that powers Adaptive Card design via the adaptive-card-mcp toolkit

Greentic Designer is a small Rust binary that orchestrates Adaptive Card design. It ships a web UI and a single-command CLI that drive an LLM tool-calling loop against the tools exposed by adaptive-card-core. The LLM designs Adaptive Cards v1.6 from scratch by calling tools such as validate_card, analyze_card, optimize_card, and transform_card.

This repo used to be a 3-layer template engine (themes + primitives + presets) with Handlebars rendering, JSON Schema validation, and external template packs. All of that has been removed. There are no presets, no Handlebars, no preset catalog — the LLM writes the JSON directly and the orchestrator hands the result back to the caller.

Status: v0.1. The chat, validate, simulate-http, examples, and pack endpoints are live; /api/generate and /api/deploy are 501 Not Implemented stubs reserved for the orchestration pipeline. The knowledge base ships empty by design — advanced sample cards will be curated in adaptive-card-core.

Architecture

┌─────────┐      ┌──────────┐      ┌────────────────────┐      ┌────────────┐
│ browser │ ───▶ │  gtc ui  │ ───▶ │ POST /api/chat     │ ───▶ │  OpenAI    │
└─────────┘      └──────────┘      │  (axum handler)    │      └─────┬──────┘
     ▲                             └──────────┬─────────┘            │
     │                                        │                      │
     │                                        ▼                      │
     │                         ┌──────────────────────────┐          │
     │                         │ tool_bridge::dispatch    │ ◀────────┘
     │                         │ (12 tools, adaptive-     │   tool_calls
     │                         │  card-core backend)      │
     │                         └──────────┬───────────────┘
     │                                    │
     └────────────── Adaptive Card JSON ◀─┘

tool_bridge::dispatch is the single entry point for tool execution; every tool call the LLM makes is routed through it into adaptive_card_core.
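The routing can be pictured as a single match over tool names. The sketch below is hypothetical (this README does not show the real signature, error type, or the adaptive_card_core calls), but it illustrates the one-entry-point shape:

```rust
// Hypothetical sketch of the dispatch shape. The function signature,
// return type, and the routing arms are assumptions; only the tool
// names come from this README.
fn dispatch(tool: &str, args_json: &str) -> Result<String, String> {
    match tool {
        // Each arm would forward into the matching adaptive_card_core
        // API; here we just echo the routing decision.
        "validate_card" | "analyze_card" | "check_accessibility"
        | "optimize_card" | "transform_card" | "template_card"
        | "data_to_card" | "list_examples" | "get_example"
        | "suggest_layout" => Ok(format!("routed {tool} with {args_json}")),
        // Orchestration stubs: return a structured 501-style error
        // until the orchestration backends land.
        "pack_card" | "deploy_pack" => Err(format!("501: {tool} not implemented")),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    println!("{:?}", dispatch("validate_card", "{}"));
    println!("{:?}", dispatch("pack_card", "{}"));
}
```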

Quick Start

cargo run -- ui --openai-api-key sk-...
# Opens http://127.0.0.1:<random-port> in your browser

Optional flags:

cargo run -- ui \
  --openai-api-key sk-... \
  --model gpt-4o-mini \
  --port 3000

Note: The React UI in assets/ui/ is the pre-refactor frontend and still references the old /api/templates and /api/render endpoints. It will be rewired in a follow-up PR. Until then, drive the backend directly via curl against the endpoints below.

API

Endpoint                            Purpose
POST /api/chat                      LLM multi-turn tool-calling loop — the main entry point
POST /api/validate                  Validate an Adaptive Card (passthrough to validate_card)
POST /api/simulate-http             Proxy an outbound HTTP request for the UI simulator (CORS shield + secret-safe)
GET /api/examples                   List knowledge base entries (filter via ?category=)
GET /api/examples/{id}              Fetch a single knowledge base entry
GET /api/examples/suggest?q=...     Keyword search against the knowledge base
POST /api/pack                      Build a .gtpack (and optional .gtbundle) from cards; injects component-http nodes when present
POST /api/generate                  501 stub — reserved for one-shot card generation
POST /api/deploy                    501 stub — reserved for the deploy pipeline
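Since the shipped UI is not yet rewired, the endpoints can be exercised directly with curl. The request bodies below are sketches, not documented schemas: the `messages` array and the `card` key are assumptions, and port 3000 matches the Quick Start flags.

```shell
# Assumed request shapes; the exact JSON bodies are not documented
# here, so treat these as sketches. Start the server first:
#   cargo run -- ui --openai-api-key sk-... --port 3000
BASE=http://127.0.0.1:3000

# Multi-turn chat (OpenAI-style messages array is an assumption)
CHAT_BODY='{"messages": [{"role": "user", "content": "Design a welcome card"}]}'
curl -s -X POST "$BASE/api/chat" \
  -H 'Content-Type: application/json' \
  -d "$CHAT_BODY"

# Validate a card (passthrough to validate_card; the "card" key is an assumption)
CARD_BODY='{"card": {"type": "AdaptiveCard", "version": "1.6", "body": []}}'
curl -s -X POST "$BASE/api/validate" \
  -H 'Content-Type: application/json' \
  -d "$CARD_BODY" || echo "is the server running?"
```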

Simulating HTTP nodes

Flows authored by the LLM can mix Adaptive Card nodes with HTTP nodes (type: "http", with config.url / method / body_mapping). When the UI is in Graph mode you can click an HTTP node to open the config editor and press Test request to execute the current spec live. In Demo mode the walkthrough reaches each HTTP node, calls /api/simulate-http, merges the response into the running form-data namespace (${node_id.field}), and then advances to the next entry.
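The same endpoint the Demo mode walkthrough calls can be hit by hand. The request body below is an assumption inferred from the http-node config fields (config.url / method / body_mapping); the real schema may differ.

```shell
# Sketch of a /api/simulate-http call; the body shape is assumed, not
# taken from a documented schema. Requires the server from Quick Start.
SIM_BODY='{"url": "https://httpbin.org/post", "method": "POST", "body": {"name": "Ada"}}'
curl -s -X POST http://127.0.0.1:3000/api/simulate-http \
  -H 'Content-Type: application/json' \
  -d "$SIM_BODY" || echo "is the server running?"
```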

Session history

The web UI persists every editing session (chat messages, active card or flow, graph selection, demo walkthrough) to IndexedDB. Use the Sessions dropdown in the header to switch between saved sessions, rename the current one, or start a fresh session — state is restored automatically on refresh.

Tools Available to the LLM

The LLM calling /api/chat has access to 12 tools, all backed by adaptive-card-core:

  • Core (10): validate_card, analyze_card, check_accessibility, optimize_card, transform_card, template_card, data_to_card, list_examples, get_example, suggest_layout
  • Orchestration stubs (2): pack_card, deploy_pack — return a structured 501 until the orchestration backends land

Development

# Full local CI (what husky pre-push runs)
./ci/local_check.sh

# Standard Rust commands
cargo build
cargo test --workspace
cargo fmt --all -- --check
cargo clippy --workspace --all-targets -- -D warnings

See CLAUDE.md for the module map, request flow, and contribution conventions.

License

MIT — see LICENSE.