telic 0.1.0

Engine-agnostic game AI framework: command trees, utility scoring, coordinated assignment, and TrueSkill evaluation.

telic

(adjective) directed toward a definite end; purposive.

telic is an engine-agnostic game AI framework for Rust. It gives you:

  • a clean interface contract between game and agent — a command-tree API that makes invalid commands unrepresentable;
  • an AI toolkit with utility scoring, coordinated assignment, and GOAP/HTN planning primitives (optional — agents can use any, all, or none);
  • an evaluation arena with TrueSkill ratings and pairwise head-to-head tournaments for comparing AI strategies objectively.

Built across five example games — turn-based strategy, card games, poker, and real-time squad combat — and tuned against 30,000+ tournament games.

Install

[dependencies]
telic = "0.1"

Quick start

A telic game is wired up through three traits. The game side implements GameState and CommandProvider:

use telic::arena::{GameState, GameView, CommandProvider, CommandTree, PlayerIndex};

impl GameState for MyGame { /* apply_command, view_for, is_terminal, ... */ }
impl CommandProvider for MyGameCommands { /* command_tree for each player */ }

An agent picks from the tree of valid commands:

use telic::arena::{GameAgent, CommandTree};

impl GameAgent<MyView, MyCommand> for MyAgent {
    fn decide(&mut self, view: &MyView, tree: &CommandTree<MyCommand>) -> Option<MyCommand> {
        tree.argmax(|cmd| self.score(cmd, view))
    }
    // ... lifecycle hooks ...
}

Evaluate against other agents:

use telic::arena::{MultiPlayerArena, ClosureFactory};

let report = MultiPlayerArena::new(2)
    .with_games(1000)
    .add_agent_type(ClosureFactory::new("mine", || Box::new(MyAgent::new())))
    .add_agent_type(ClosureFactory::new("baseline", || Box::new(RandomAgent::new())))
    .run::<MyGame, MyGameCommands>(|_| MyGame::new());
report.print_summary();

Why a command tree?

Good game-AI systems have long enforced valid-action-only decisions — legal_moves() in chess engines, legal_actions() in OpenSpiel, preconditions on behavior-tree nodes and GOAP actions, action-masking in RL policies. Where many hand-rolled custom-game AIs go wrong is propose-and-hope: the agent returns any command; the game rejects it if invalid. Scoring-based agents are especially prone to this — they cheerfully score "move unit X" without noticing X already moved, or "attack enemy Y" when Y is out of range. The rejected command either breaks the game loop or wastes retries.

telic treats the valid-action set as a first-class framework primitive, in the tradition of OpenSpiel. The CommandProvider enumerates every valid command as a tree:

Layer("actions")
├── "end_turn" → Leaf(EndTurn)
├── "attack"   → Layer
│     ├── "unit_1" → Leaf(Attack { unit_id: 1, target: (3,5) })
│     └── "unit_2" → Leaf(Attack { ... })
├── "capture"  → Layer(...)
└── "move"     → Layer(...)

The agent picks from leaves. By construction it cannot return a command the game would reject.
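The valid-by-construction property can be sketched in a few lines. The types below are illustrative stand-ins, not telic's actual CommandTree API — just the core idea: the game enumerates commands as a tree, and the agent's argmax only ever sees leaves the game produced.

```rust
// Illustrative stand-ins for the command-tree idea; not telic's real types.

#[derive(Debug, Clone, PartialEq)]
enum Cmd {
    EndTurn,
    Attack { unit_id: u32, target: (i32, i32) },
}

enum Tree {
    Leaf(Cmd),
    Layer(Vec<(&'static str, Tree)>),
}

impl Tree {
    /// Collect every leaf command. By construction these are the only
    /// commands an agent can ever return.
    fn leaves(&self) -> Vec<Cmd> {
        match self {
            Tree::Leaf(c) => vec![c.clone()],
            Tree::Layer(children) => {
                children.iter().flat_map(|(_, t)| t.leaves()).collect()
            }
        }
    }

    /// Pick the leaf maximizing a score function (an `argmax` analogue).
    fn argmax<F: Fn(&Cmd) -> f64>(&self, score: F) -> Option<Cmd> {
        self.leaves()
            .into_iter()
            .max_by(|a, b| score(a).partial_cmp(&score(b)).unwrap())
    }
}

fn main() {
    // Mirrors the tree drawn above: the game only enumerates valid commands.
    let tree = Tree::Layer(vec![
        ("end_turn", Tree::Leaf(Cmd::EndTurn)),
        (
            "attack",
            Tree::Layer(vec![(
                "unit_1",
                Tree::Leaf(Cmd::Attack { unit_id: 1, target: (3, 5) }),
            )]),
        ),
    ]);

    // Prefer attacking; whatever wins, it is a command the game offered.
    let picked = tree.argmax(|c| match c {
        Cmd::Attack { .. } => 1.0,
        Cmd::EndTurn => 0.1,
    });
    println!("{:?}", picked);
}
```

A scoring bug here can only produce a suboptimal choice, never an invalid one — the failure mode the propose-and-hope pattern cannot rule out.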

The tree supports:

  • Structural sharing via Arc<CommandTree<C>> — reuse unchanged subtrees across ticks.
  • Laziness via LazyLayer — branches enumerate on first access; agents that never descend into a branch never pay the cost.
  • Continuous parameters via Parametric leaves with ParamDomain::Continuous { min, max } — for aim angles, rotation velocities, move vectors, and other non-enumerable inputs.
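The laziness and continuous-parameter ideas are easy to sketch. Everything below is an illustrative stand-in (the names echo LazyLayer and ParamDomain::Continuous from the list above, but these are not telic's actual types): children enumerate once on first access, and a continuous domain resolves any desired value to an in-range one by clamping.

```rust
use std::cell::OnceCell;
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times children were actually enumerated.
static BUILDS: AtomicUsize = AtomicUsize::new(0);

/// Children enumerate on first access only (a `LazyLayer` analogue).
struct LazyLayer {
    build: fn() -> Vec<&'static str>,
    cache: OnceCell<Vec<&'static str>>,
}

impl LazyLayer {
    fn new(build: fn() -> Vec<&'static str>) -> Self {
        Self { build, cache: OnceCell::new() }
    }
    fn children(&self) -> &[&'static str] {
        self.cache.get_or_init(self.build)
    }
}

/// A continuous domain (a `ParamDomain::Continuous { min, max }` analogue):
/// clamping the agent's desired value makes the result valid by construction.
fn resolve(min: f64, max: f64, desired: f64) -> f64 {
    desired.clamp(min, max)
}

fn main() {
    let layer = LazyLayer::new(|| {
        BUILDS.fetch_add(1, Ordering::SeqCst);
        vec!["unit_1", "unit_2"]
    });
    // Never descended into: zero enumeration cost so far.
    assert_eq!(BUILDS.load(Ordering::SeqCst), 0);
    layer.children();
    layer.children();
    // Enumerated exactly once despite two accesses.
    assert_eq!(BUILDS.load(Ordering::SeqCst), 1);

    // An out-of-range aim angle resolves to the nearest valid one.
    assert_eq!(resolve(-1.5, 1.5, 3.0), 1.5);
}
```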

See docs/command_tree.md for the full API and agent patterns (random, utility, hierarchical, FPS aim).

Toolkit highlights

  • UtilityAction<S> — compose multi-factor scorers with response curves (Linear, Inverse, Threshold, Custom) over arbitrary state S.
  • AssignmentStrategy — multi-entity task assignment with four built-in strategies: Greedy (with coordination callback), Hungarian (Kuhn-Munkres optimal), RoundRobin, WeightedRandom (softmax-sampled).
  • BeliefSet<S> — named boolean/numeric queries over state, for GOAP preconditions or utility considerations.
  • GoapPlanner — backward-chaining search (A*, DFS, Bidirectional).
  • Task<S> — HTN hierarchical decomposition.
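To make the utility-scoring highlight concrete, here is a self-contained sketch of multi-factor scoring with response curves. The curve names mirror the list above (Linear, Inverse, Threshold), but the types are illustrative stand-ins, not telic's actual UtilityAction API. Factor scores multiply, so any zero-scoring consideration vetoes the action.

```rust
// Illustrative stand-ins for multi-factor utility scoring; not telic's API.

enum Curve {
    Linear,         // score = x
    Inverse,        // score = 1 - x
    Threshold(f64), // score = 1 if x >= t, else 0
}

impl Curve {
    fn apply(&self, x: f64) -> f64 {
        let x = x.clamp(0.0, 1.0);
        match self {
            Curve::Linear => x,
            Curve::Inverse => 1.0 - x,
            Curve::Threshold(t) => if x >= *t { 1.0 } else { 0.0 },
        }
    }
}

/// One consideration: extract a normalized factor from state, then shape it.
struct Consideration<S> {
    input: fn(&S) -> f64,
    curve: Curve,
}

/// Multiplies its considerations' shaped scores; a zero factor vetoes.
struct UtilityAction<S> {
    considerations: Vec<Consideration<S>>,
}

impl<S> UtilityAction<S> {
    fn score(&self, state: &S) -> f64 {
        self.considerations
            .iter()
            .map(|c| c.curve.apply((c.input)(state)))
            .product()
    }
}

struct Unit { health: f64, ammo: f64 } // both normalized to 0..1

fn main() {
    // "Retreat" scores high when health is low, but only with ammo to cover.
    let retreat = UtilityAction {
        considerations: vec![
            Consideration { input: |u: &Unit| u.health, curve: Curve::Inverse },
            Consideration { input: |u: &Unit| u.ammo, curve: Curve::Threshold(0.1) },
        ],
    };
    let wounded = Unit { health: 0.2, ammo: 0.5 };
    println!("{}", retreat.score(&wounded)); // 0.8 * 1.0 = 0.8
}
```

Scoring several such actions over the same state and taking the maximum is the usual selection step; in telic that pairs naturally with the command tree's argmax.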

Examples

Game           Genre                                      Scale
simple_wars    Turn-based strategy (Advance Wars micro)   16×16 grid, fog of war
splendor       Engine-building card game                  Real cards, 2 players
love_letter    Hidden-info card game                      8-card deck, deduction
poker          Texas Hold'em                              Deep-stack heads-up, escalating blinds
arena_combat   Real-time squad combat                     60 fps, tick-based

Run any tournament with cargo run --release -p <example>-example.

Empirical findings

Summary of what works across ~6000 tournament games per genre:

  • Utility scoring is universally effective — the top-1 or top-2 AI in 4 of 5 games.
  • Coordinated assignment (Greedy with coordination callback) adds ~20% win rate for multi-unit strategy games.
  • Opponent modeling (learning raise-honesty from showdowns) adds a 23-point TrueSkill bump in poker.
  • GOAP works best as a weight modifier on utility scoring, not as an action filter.
  • HTN is a performance optimization (it skips planning search), not a capability difference.

See docs/findings.md for the full data.

Documentation

  • API reference: docs.rs
  • docs/command_tree.md — command-tree API and agent patterns
  • docs/findings.md — tournament data and empirical findings

License

Licensed under the Apache License, Version 2.0. See LICENSE.