juglans 0.2.17

Compiler and runtime for Juglans Workflow Language

Others write code to draw graphs. Juglans writes graphs as code. Your workflow file is a directed acyclic graph of typed nodes and edges — the compiler parses it, validates it, and runs it. No DAG-builder boilerplate, no state-machine glue, no Python harness.

# router.jg — classify user input, then branch

[assistant]: { "model": "gpt-4o-mini", "temperature": 0.0, "system_prompt": "Classify user input. Return JSON with key 'intent' set to 'question' or 'task'." }

[classify]: chat(agent=assistant, message=input.query, format="json")
[answer]:   print(message="Answering: " + input.query)
[execute]:  print(message="Executing: " + input.query)
[fallback]: print(message="I did not understand.")

[assistant] -> [classify]

[classify] -> switch output.intent {
    "question": [answer]
    "task":     [execute]
    default:    [fallback]
}

juglans router.jg --input '{"query": "What is a DAG?"}'

That file IS the architecture diagram. The branching, routing, and convergence are explicit in the syntax.

Why Juglans?

| Approach | Problem Juglans solves |
| --- | --- |
| Airflow / Prefect | Python code generates the DAG; the graph is a second-class artifact. |
| LangGraph / CrewAI | State machines between agents; no true topological composition. |
| Terraform | Declarative graph, but no control flow, no functions, no AI. |
| BPMN / XML | Verbose, not composable, no runtime. |
| Juglans | Graph topology is the program — composable, verifiable, executable in one step. |

Features

  • Declarative DAG — conditional edges, switch routing, foreach / while loops, on error handlers, [name(params)]: { ... } function definitions
  • Inline agents — agents are JSON map nodes defined alongside the workflow that uses them; no separate file needed
  • 100+ expression functions — Python-like syntax: len, map, filter, reduce, sort_by, group_by, zip, regex_*, json, uuid, date helpers, lambdas
  • Embedded HTTP backend — serve() turns a workflow into an Axum handler; every URL hits the workflow as an axum fallback
  • Native LLM providers — OpenAI, Anthropic, DeepSeek, Google Gemini, Qwen, xAI, ByteDance Ark (no broker, no proxy)
  • Python ecosystem bridge — declare python: ["pandas", "sklearn"] and call modules directly, with object references for non-serializable types
  • MCP integration — plug in any Model Context Protocol server as a tool source
  • Package registry — juglans pack / publish / add to share reusable libraries
  • Bot adapters — Telegram, Feishu, WeChat — one flag to turn a workflow into a chatbot
  • Cross-platform — macOS, Linux, Windows, and WASM (full engine runs in the browser)
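
The [name(params)]: { ... } function form mentioned above can be sketched by extrapolating from the router example — the call syntax and body layout here are assumptions for illustration, so check the language docs before relying on them:

# greet.jg — hypothetical sketch of a reusable function node
[greet(name)]: {
    [say]: print(message="Hello, " + name + "!")
}

[first]:  greet(name="Ada")
[second]: greet(name="Grace")

[first] -> [second]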

Install

# Prebuilt binary (recommended) — latest GitHub release
curl -fsSL https://raw.githubusercontent.com/juglans-ai/juglans/main/install.sh | sh

# From source — requires Rust 1.80+
git clone https://github.com/juglans-ai/juglans.git
cd juglans && cargo install --path .

Verify with juglans --version.

30-Second Quick Start

cat > hello.jg <<'EOF'
[greet]: print(message="Hello, " + input.name + "!")
[done]:  print(message="Workflow complete.")
[greet] -> [done]
EOF

juglans hello.jg --input '{"name": "World"}'

Next: read the Quick Start guide and Tutorial 1.

CLI

# Run & validate
juglans <file>              # Execute a .jg or .jgx file
juglans check [path]        # Validate syntax (like cargo check)
juglans test [path]         # Run test_* nodes across the project
juglans doctest [path]      # Validate code blocks in markdown docs

# Dev loop
juglans web       --port 3000      # Local HTTP server with SSE streaming
juglans serve     --port 3000      # Unified web API + all configured bot adapters
juglans chat      --agent path.jg  # Interactive TUI
juglans cron      &lt;file&gt; --schedule # Run on a cron schedule
juglans lsp                        # Language Server Protocol
juglans bot       <platform>       # Telegram / Feishu / WeChat adapter

# Packages
juglans init <name>       # Scaffold a new project
juglans install           # Install jgpackage.toml dependencies
juglans add <pkg>         # Add a package dependency
juglans remove <pkg>      # Remove a package dependency
juglans pack              # Build a .tar.gz archive
juglans publish           # Publish to the registry
juglans skills            # Sync Agent Skills from GitHub

# Deploy & account
juglans deploy    [--tag] [--push]  # Build a Docker image and run it
juglans whoami                      # Show current account info

Run juglans --help or juglans <cmd> --help for every flag.

Architecture

┌──────────────────────────────────────────────────────┐
│                      Juglans CLI                     │
├──────────────────────────────────────────────────────┤
│     .jg Parser                     .jgx Parser       │
│          │                              │            │
│          ▼                              ▼            │
│  ┌────────────────────────────────────────────────┐  │
│  │            Workflow Executor (DAG)             │  │
│  │      cycle check · variable resolve · run      │  │
│  └──────────────────────┬─────────────────────────┘  │
│           ┌─────────────┼─────────────┬─────────┐    │
│           ▼             ▼             ▼         ▼    │
│       Builtins    LLM Providers   MCP Tools  Python  │
│      (chat, p,      (OpenAI,    (filesystem, (pandas,│
│       bash, db,    Anthropic,      github,  sklearn,│
│       http, ...)  DeepSeek...)    browser)   numpy)  │
└──────────────────────────────────────────────────────┘

Documentation

Contributing

Issues, PRs, and discussions are welcome. See CONTRIBUTING.md for build steps and code conventions.

License

MIT