# arboriter-mcts

A flexible, well-documented Monte Carlo Tree Search (MCTS) implementation for Rust, built for game AI and decision-making processes.
## Features

- Generic implementation that works with any game or decision process
- Multiple selection policies (UCB1, UCB1-Tuned, PUCT) for balancing exploration and exploitation
- Customizable simulation strategies to match your domain knowledge
- Detailed search statistics and visualization for debugging and analysis
- Comprehensive test suite ensuring correctness and reliability
- Thorough documentation with examples for easy integration
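The UCB1 policy listed above scores each child node by its mean reward plus an exploration bonus that shrinks as the child accumulates visits. A minimal standalone sketch of the formula (independent of this crate's API):

```rust
// UCB1 score: exploitation term (mean reward) plus exploration bonus.
// `c` is the exploration constant; sqrt(2) is the classic choice.
fn ucb1(total_reward: f64, visits: u32, parent_visits: u32, c: f64) -> f64 {
    if visits == 0 {
        return f64::INFINITY; // unvisited children are always tried first
    }
    let mean = total_reward / visits as f64;
    mean + c * ((parent_visits as f64).ln() / visits as f64).sqrt()
}

fn main() {
    // A well-explored child with a good mean vs. a rarely tried child:
    // the exploration bonus can make the rarely tried one score higher.
    let well_explored = ucb1(60.0, 100, 1000, 2f64.sqrt());
    let under_explored = ucb1(2.0, 5, 1000, 2f64.sqrt());
    println!("{well_explored:.3} vs {under_explored:.3}");
}
```

This is why MCTS does not simply hammer the current best move: children with few visits keep a large bonus until they have been sampled enough to trust their mean.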
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
arboriter-mcts = "0.1.0"
```
## Basic Usage

Here's a sketch of typical usage (values are illustrative; see the API docs for exact type and method signatures):

```rust
use arboriter_mcts::{MCTS, MCTSConfig};

// Implement GameState for your game, then:

// Create a configuration for the search
let config = MCTSConfig::default()
    .with_exploration_constant(1.414)
    .with_max_iterations(10_000);

// Create the MCTS searcher with the initial state
let mut mcts = MCTS::new(initial_state, config);

// Find the best action
let best_action = mcts.search()?;

// Get search statistics
println!("{:?}", mcts.statistics());
```
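The usage above assumes your game type implements the crate's `GameState` trait. As a standalone illustration of what such an abstraction typically needs (this is my own hypothetical trait, not necessarily the crate's exact surface):

```rust
// Hypothetical sketch of a game-state abstraction: cloneable states,
// enumerable actions, deterministic transitions, terminal test, reward.
trait GameState: Clone {
    type Action: Clone;
    fn legal_actions(&self) -> Vec<Self::Action>;
    fn apply(&self, action: &Self::Action) -> Self;
    fn is_terminal(&self) -> bool;
    fn reward(&self) -> f64;
}

// A trivial counting game: add 1 or 2 per move; win by landing exactly on 5.
#[derive(Clone)]
struct CountGame { count: u32 }

impl GameState for CountGame {
    type Action = u32;
    fn legal_actions(&self) -> Vec<u32> { vec![1, 2] }
    fn apply(&self, a: &u32) -> Self { CountGame { count: self.count + a } }
    fn is_terminal(&self) -> bool { self.count >= 5 }
    fn reward(&self) -> f64 { if self.count == 5 { 1.0 } else { 0.0 } }
}

fn main() {
    let s = CountGame { count: 0 }.apply(&2).apply(&2);
    assert!(!s.is_terminal());
    let t = s.apply(&1);
    println!("terminal: {}, reward: {}", t.is_terminal(), t.reward());
}
```

Immutable `apply` (returning a fresh state) keeps tree nodes independent; a mutating `apply` with an undo operation is the usual alternative when states are expensive to clone.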
## Running the Examples
The repository includes complete examples for common games that demonstrate the MCTS algorithm in action:
- Tic-Tac-Toe: A simple 3x3 game where you can play against the AI
- Connect Four: A more complex 7x6 game with stronger tactical elements
To run the examples:

```sh
# Play Tic-Tac-Toe against the AI
cargo run --example tic_tac_toe

# Play Connect Four against the AI
cargo run --example connect_four
```
## How MCTS Works
Monte Carlo Tree Search combines tree search with random sampling to find optimal decisions:
1. **Selection**: Starting from the root, select successive child nodes down to a leaf node using a selection policy that balances exploration and exploitation.
2. **Expansion**: If the leaf node is not terminal and has untried actions, create one or more child nodes by applying those actions.
3. **Simulation**: From the new node, simulate a game to completion using a default policy (often random play).
4. **Backpropagation**: Update the statistics (visit counts and rewards) for all nodes on the path from the new node back to the root.
This process is repeated many times, gradually building a tree of game states and improving the value estimates for each action.
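The four phases can be sketched end-to-end in a self-contained toy. The snippet below uses its own tiny game, tree representation, and PRNG rather than this crate's API, purely to make the loop concrete:

```rust
use std::collections::HashMap;

// Toy single-player game: start at 0, add 1 or 2 per move; the game ends
// once the count reaches 5 or more, with reward 1.0 only for exactly 5.
fn legal(s: u32) -> Vec<u32> { if s >= 5 { Vec::new() } else { vec![1, 2] } }
fn reward(s: u32) -> f64 { if s == 5 { 1.0 } else { 0.0 } }

#[derive(Default)]
struct Stats { visits: u32, total: f64 }

// Tiny xorshift PRNG so the demo is deterministic.
fn rng(seed: &mut u64) -> u64 {
    *seed ^= *seed << 13; *seed ^= *seed >> 7; *seed ^= *seed << 17; *seed
}

fn ucb1(s: &Stats, parent_visits: u32) -> f64 {
    if s.visits == 0 { return f64::INFINITY; }
    s.total / s.visits as f64
        + (2.0 * (parent_visits as f64).ln() / s.visits as f64).sqrt()
}

fn main() {
    // The tree: statistics per node, keyed by the action path from the root.
    let mut tree: HashMap<Vec<u32>, Stats> = HashMap::new();
    tree.insert(Vec::new(), Stats::default());
    let mut seed = 0x2545_F491_u64;

    for _ in 0..2000 {
        let (mut path, mut state) = (Vec::new(), 0u32);
        loop {
            let acts = legal(state);
            if acts.is_empty() { break; }
            let untried: Vec<u32> = acts.iter().copied().filter(|&a| {
                let mut p = path.clone(); p.push(a);
                !tree.contains_key(&p)
            }).collect();
            if !untried.is_empty() {
                // 2. Expansion: add one untried child, then stop descending.
                let a = untried[rng(&mut seed) as usize % untried.len()];
                path.push(a); state += a;
                tree.insert(path.clone(), Stats::default());
                break;
            }
            // 1. Selection: descend to the child with the best UCB1 score.
            let parent_visits = tree[&path].visits;
            let mut best = (f64::NEG_INFINITY, acts[0]);
            for &a in &acts {
                let mut p = path.clone(); p.push(a);
                let score = ucb1(&tree[&p], parent_visits);
                if score > best.0 { best = (score, a); }
            }
            path.push(best.1); state += best.1;
        }
        // 3. Simulation: random playout from the reached state.
        let mut s = state;
        loop {
            let acts = legal(s);
            if acts.is_empty() { break; }
            s += acts[rng(&mut seed) as usize % acts.len()];
        }
        let r = reward(s);
        // 4. Backpropagation: update every node on the path, root included.
        for i in 0..=path.len() {
            let node = tree.get_mut(&path[..i]).unwrap();
            node.visits += 1; node.total += r;
        }
    }

    // The recommended move is conventionally the most-visited root child.
    let best = *legal(0).iter().max_by_key(|&&a| tree[&vec![a]].visits).unwrap();
    println!("best first action: {best}");
}
```

Note the asymmetry the loop produces: promising lines are revisited and deepened while weak ones stay shallow, which is exactly the "gradually building a tree" behavior described above.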
## Advanced Customization
You can customize all aspects of the MCTS algorithm to match your specific needs (the policy arguments below are placeholders; see the API docs for the concrete policy types):

```rust
let mut mcts = MCTS::new(initial_state, config)
    // Use UCB1 for selection with a custom exploration constant
    .with_selection_policy(ucb1_policy)
    // Use domain-specific heuristics for simulation
    .with_simulation_policy(heuristic_policy)
    // Use a weighted backpropagation policy
    .with_backpropagation_policy(weighted_policy);

// Configure a time-based search limit
let action = mcts.search_for_time(time_budget)?;
```
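As an illustration of what a domain-specific simulation policy might do, the standalone sketch below biases rollouts toward a cheap heuristic while keeping some randomness. Both `pick_rollout_move` and its signature are hypothetical, not this crate's API:

```rust
// Epsilon-greedy rollout choice: with probability `epsilon` pick a random
// move, otherwise pick the move the heuristic scores highest.
// `rand01` is a uniform sample in [0, 1) supplied by the caller.
fn pick_rollout_move(
    moves: &[u32],
    score_move: impl Fn(u32) -> f64,
    rand01: f64,
    epsilon: f64,
) -> u32 {
    if rand01 < epsilon {
        // Exploration branch: reuse the sample to pick a uniform index.
        moves[(rand01 / epsilon * moves.len() as f64) as usize % moves.len()]
    } else {
        // Exploitation branch: best move under the heuristic.
        *moves
            .iter()
            .max_by(|a, b| score_move(**a).partial_cmp(&score_move(**b)).unwrap())
            .unwrap()
    }
}

fn main() {
    let moves = [1, 2, 3];
    // Heuristic: prefer larger moves.
    let best = pick_rollout_move(&moves, |m| m as f64, 0.9, 0.2);
    println!("greedy pick: {best}");
}
```

Keeping some randomness in rollouts matters: a fully greedy playout policy can systematically misjudge positions the heuristic gets wrong, while a small epsilon preserves diversity at low cost.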
## Documentation
For detailed documentation and API reference, visit docs.rs/arboriter-mcts.
## Contributing
Contributions are welcome! See CONTRIBUTING.md for guidelines.
## License
This crate is licensed under the MIT license. See LICENSE for details.
## Acknowledgments
- Built on the arboriter tree traversal primitive
- Inspired by Tyler Glaiel's blog post on tree traversal primitives
- Influenced by successful MCTS implementations in AlphaGo and other game AI systems