# arboriter-mcts
A flexible, well-documented Monte Carlo Tree Search (MCTS) implementation for Rust, built for game AI and decision-making processes.
## Features
- 🌳 Built on arboriter tree traversal primitives - showcases how arboriter simplifies complex tree operations throughout the algorithm
- 🧰 Generic implementation that works with any game or decision process
- 🔄 Multiple selection policies (UCB1, UCB1-Tuned, PUCT) for optimal exploration/exploitation
- 🎲 Customizable simulation strategies to match your domain knowledge
- 🚀 Memory-efficient node pooling for improved performance in sequential searches
- 📊 Detailed search statistics and visualization for debugging and analysis
- 🔍 Elegant, readable code that doubles as a reference implementation of the algorithm
- 📝 Thorough documentation with examples for easy integration
## Installation
Add this to your `Cargo.toml`:

```toml
[dependencies]
arboriter-mcts = "0.2.1"
```
## Basic Usage
Here's a simple example of how to use the library (the argument values shown are representative; see the API documentation for exact signatures):

```rust
use arboriter_mcts::{MCTS, MCTSConfig};

// Implement GameState for your game
// (see the examples below for complete implementations)

// Create a configuration for the search
let config = MCTSConfig::default()
    .with_exploration_constant(1.414)
    .with_max_iterations(10_000);

// Create the MCTS searcher with the initial state
let mut mcts = MCTS::new(initial_state, config);

// Find the best action
let best_action = mcts.search()?;

// Get search statistics
println!("{}", mcts.statistics());
```
## Running the Examples
The repository includes complete examples for common games that demonstrate the MCTS algorithm in action:
- Tic-Tac-Toe: A simple 3x3 game where you can play against the AI
- Connect Four: A more complex 7x6 game with stronger tactical elements
To run the examples:

```sh
# Play Tic-Tac-Toe against the AI
cargo run --example tic_tac_toe

# Play Connect Four against the AI
cargo run --example connect_four
```
## How MCTS Works
Monte Carlo Tree Search combines tree search with random sampling to find optimal decisions:
1. **Selection**: Starting from the root, select successive child nodes down to a leaf node using a selection policy that balances exploration and exploitation.
2. **Expansion**: If the leaf node is not terminal and has untried actions, create one or more child nodes by applying those actions.
3. **Simulation**: From the new node, simulate a game to completion using a default policy (often random play).
4. **Backpropagation**: Update the statistics (visit counts and rewards) for every node on the path from the simulated node back to the root.
This process is repeated many times, gradually building a tree of game states and improving the value estimates for each action.
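The selection step is usually driven by the UCB1 score: each child is ranked by its average reward plus an exploration bonus that shrinks as the child accumulates visits. Here is a minimal standalone sketch of that formula (illustrative only, not arboriter-mcts's internal API):

```rust
/// UCB1 score for a child node: mean reward plus an exploration bonus.
/// Unvisited children score infinity, so every action is tried at least once.
fn ucb1(total_reward: f64, visits: u32, parent_visits: u32, c: f64) -> f64 {
    if visits == 0 {
        return f64::INFINITY;
    }
    let exploit = total_reward / visits as f64; // average reward so far
    let explore = c * ((parent_visits as f64).ln() / visits as f64).sqrt();
    exploit + explore
}

fn main() {
    // A well-rewarded but heavily visited child vs. a rarely tried one:
    let a = ucb1(60.0, 100, 120, 1.414); // mean reward 0.6, small bonus
    let b = ucb1(2.0, 5, 120, 1.414);    // mean reward 0.4, large bonus
    println!("a = {a:.3}, b = {b:.3}");
}
```

With these numbers the rarely visited child outscores the well-rewarded one, which is exactly the exploration pressure that keeps the search from fixating on early winners.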
Our implementation includes memory-efficient node pooling which recycles nodes during sequential searches instead of constantly allocating and deallocating memory, significantly improving performance especially for multi-step decision processes.
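The pooling idea can be sketched as a small arena with a free list (a standalone illustration of the technique, not the crate's actual internals): nodes live in one `Vec`, and released slots are handed back out before the backing storage grows, so a reset-and-search cycle performs few new allocations.

```rust
/// Illustrative search-tree node payload.
struct Node {
    visits: u32,
    total_reward: f64,
}

/// Arena-style pool: nodes are stored contiguously and freed slots
/// are recycled via a free list instead of being deallocated.
struct NodePool {
    nodes: Vec<Node>,
    free: Vec<usize>, // indices of released slots, reused LIFO
}

impl NodePool {
    fn with_capacity(cap: usize) -> Self {
        Self { nodes: Vec::with_capacity(cap), free: Vec::new() }
    }

    /// Reuse a freed slot if one exists; otherwise grow the arena.
    fn alloc(&mut self) -> usize {
        let fresh = Node { visits: 0, total_reward: 0.0 };
        if let Some(i) = self.free.pop() {
            self.nodes[i] = fresh;
            i
        } else {
            self.nodes.push(fresh);
            self.nodes.len() - 1
        }
    }

    /// Return a node's slot to the pool (e.g. when the root is reset).
    fn release(&mut self, i: usize) {
        self.free.push(i);
    }
}

fn main() {
    let mut pool = NodePool::with_capacity(16);
    let a = pool.alloc();
    let _b = pool.alloc();
    pool.release(a);      // resetting the root returns its slot...
    let c = pool.alloc(); // ...and the next search reuses it
    println!("a = {a}, c = {c}"); // same index: no new allocation
}
```

Because slots are plain indices, child links stay valid across recycling as long as released subtrees are never referenced again, which is why pooling pairs naturally with root resets between sequential searches.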
## Advanced Customization
You can customize all aspects of the MCTS algorithm to match your specific needs (the policy names and arguments below are illustrative; see the API documentation for the exact types):

```rust
// Standard configuration
let mut mcts = MCTS::new(initial_state, config)
    // Use UCB1 for selection with a custom exploration constant
    .with_selection_policy(SelectionPolicy::UCB1 { exploration_constant: 1.414 })
    // Use domain-specific heuristics for simulation
    .with_simulation_policy(SimulationPolicy::Custom(my_heuristic))
    // Use a weighted backpropagation policy
    .with_backpropagation_policy(BackpropagationPolicy::Weighted);

// Configure time-based search limits
let action = mcts.search_for_time(Duration::from_millis(500))?;

// Enable node pooling for better performance
let config_with_pooling = MCTSConfig::default()
    .with_max_iterations(10_000)
    .with_node_pool_config(1024, 256); // Initial pool size and chunk size

// Create MCTS with node pooling
let mut mcts_with_pool = MCTS::with_node_pool(initial_state, config_with_pooling);

// For sequential searches (e.g., playing a full game), you can reset the root
// to reuse nodes instead of creating a new MCTS instance each time
mcts_with_pool.reset_root(new_state);
```
Node pooling provides significant performance benefits by reducing memory allocation overhead, especially for:
- Games with large branching factors
- Sequential searches during gameplay
- Deep search trees
- Time-critical applications
The statistics output will show node pool usage details when enabled.
## Documentation
For detailed documentation and an API reference, visit [docs.rs/arboriter-mcts](https://docs.rs/arboriter-mcts).
## Contributing
Contributions are welcome! See CONTRIBUTING.md for guidelines.
## License
This crate is licensed under the MIT license. See LICENSE for details.
## Acknowledgments
- Built on the arboriter tree traversal primitive
- Inspired by Tyler Glaiel's blog post on tree traversal primitives
- Influenced by successful MCTS implementations in AlphaGo and other game AI systems