greentic-flow-builder 0.2.0

CI License: MIT

Greentic Flow Builder is a small Rust binary that orchestrates Adaptive Card design. It ships a web UI and a single-command CLI that drive an LLM tool-calling loop against the tools exposed by adaptive-card-core. The LLM designs Adaptive Cards v1.6 from scratch by calling tools such as validate_card, analyze_card, optimize_card, and transform_card.

This repo used to be a 3-layer template engine (themes + primitives + presets) with Handlebars rendering, JSON Schema validation, and external template packs. All of that has been removed. There are no presets, no Handlebars, no preset catalog — the LLM writes the JSON directly and the orchestrator hands the result back to the caller.

Status: v0.2. The chat, validate, and examples endpoints are live; /api/generate, /api/pack, and /api/deploy are 501 Not Implemented stubs reserved for the orchestration pipeline. The knowledge base ships empty by design — advanced sample cards will be curated in adaptive-card-core.

Architecture

┌─────────┐      ┌──────────┐      ┌────────────────────┐      ┌────────────┐
│ browser │ ───▶ │  gtc ui  │ ───▶ │ POST /api/chat     │ ───▶ │  OpenAI    │
└─────────┘      └──────────┘      │  (axum handler)    │      └─────┬──────┘
     ▲                             └──────────┬─────────┘            │
     │                                        │                      │
     │                                        ▼                      │
     │                         ┌──────────────────────────┐           │
     │                         │ tool_bridge::dispatch    │ ◀─────────┘
     │                         │ (12 tools, adaptive-     │   tool_calls
     │                         │  card-core backend)      │
     │                         └──────────┬───────────────┘
     │                                    │
     └────────────── Adaptive Card JSON ◀─┘

tool_bridge::dispatch is the single entry point for tool execution; every tool call the LLM makes is routed through it into adaptive_card_core.

Quick Start

cargo run -- ui --openai-api-key sk-...
# Opens http://127.0.0.1:<random-port> in your browser

Optional flags:

cargo run -- ui \
  --openai-api-key sk-... \
  --model gpt-4o-mini \
  --port 3000

Note: The React UI in assets/ui/ is the pre-refactor frontend and still references the old /api/templates and /api/render endpoints. It will be rewired in a follow-up PR. Until then, drive the backend directly with curl against the endpoints below.
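For example, a chat request might look like this. The `messages` field name and message shape are assumptions based on the usual OpenAI-style chat contract — check the axum handler for the actual request schema, and substitute the port your server printed at startup:

```shell
# Hypothetical request shape for POST /api/chat; the "messages" array
# mirrors the OpenAI chat format but is an assumption, not the confirmed
# contract. Port 3000 assumes the server was started with --port 3000.
curl -s http://127.0.0.1:3000/api/chat \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "user", "content": "Design a two-column profile card with an avatar and a Submit action."}
        ]
      }'
```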

API

Endpoint                          Purpose
POST /api/chat                    LLM multi-turn tool-calling loop — the main entry point
POST /api/validate                Validate an Adaptive Card (passthrough to validate_card)
GET  /api/examples                List knowledge base entries (filter via ?category=)
GET  /api/examples/{id}           Fetch a single knowledge base entry
GET  /api/examples/suggest?q=...  Keyword search against the knowledge base
POST /api/generate                501 stub — reserved for one-shot card generation
POST /api/pack                    501 stub — reserved for cards → .gtpack packaging
POST /api/deploy                  501 stub — reserved for the deploy pipeline
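As a sketch, validating a minimal card could look like the following. Whether the handler accepts the raw card JSON as the request body (as shown) or expects a wrapper object is an assumption — the card itself is standard Adaptive Cards JSON:

```shell
# Minimal Adaptive Card sent to POST /api/validate. Posting the raw card
# as the body is an assumption; the handler may expect a wrapper field.
curl -s http://127.0.0.1:3000/api/validate \
  -H 'Content-Type: application/json' \
  -d '{
        "type": "AdaptiveCard",
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.6",
        "body": [
          {"type": "TextBlock", "text": "Hello from greentic-flow-builder", "wrap": true}
        ]
      }'
```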

Tools Available to the LLM

The LLM behind /api/chat has access to 12 tools, all backed by adaptive-card-core:

  • Core (10): validate_card, analyze_card, check_accessibility, optimize_card, transform_card, template_card, data_to_card, list_examples, get_example, suggest_layout
  • Orchestration stubs (2): pack_card, deploy_pack — return a structured 501 until the orchestration backends land
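Until the backends land, the stub routes can be smoke-tested directly. The exact error body is an assumption; only the 501 status is documented behavior:

```shell
# /api/pack and /api/deploy are expected to answer 501 Not Implemented.
# -i prints the status line so the 501 is visible; port 3000 assumes the
# server was started with --port 3000.
curl -i -X POST http://127.0.0.1:3000/api/pack \
  -H 'Content-Type: application/json' \
  -d '{}'
```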

Development

# Full local CI (what husky pre-push runs)
./ci/local_check.sh

# Standard Rust commands
cargo build
cargo test --workspace
cargo fmt --all -- --check
cargo clippy --workspace --all-targets -- -D warnings

See CLAUDE.md for the module map, request flow, and contribution conventions.

License

MIT — see LICENSE.