multipool 0.2.6

A configurable thread pool with optional work-stealing support and task priority scheduling.
# TODO: Features and Enhancements for Multipool


The following features and enhancements were suggested by ChatGPT and are worth considering for the Multipool library:

## High Priority


- [x] **Task Prioritization**

  - Allow tasks to have different priority levels (e.g., high, medium, low).
  - Implement scheduling logic to execute higher-priority tasks first (the ordering idea is sketched after this list).

- [ ] **Dynamic Scaling**

  - Implement dynamic thread scaling based on workload.
  - Add configuration for min/max thread limits (one possible shape is sketched after this list).

- [ ] **Timeouts and Cancellation**

  - Add support for task timeouts to cancel tasks exceeding a specified duration.
  - Provide APIs for canceling pending tasks (a cooperative-cancellation sketch follows this list).

- [x] **Monitoring and Metrics**

  - Track active threads, queued tasks, and completed tasks.
  - Add metrics hooks for users to gather performance insights.

- [ ] **Error Handling and Recovery**
  - Provide mechanisms to retry failed tasks or execute fallback tasks.
  - Expose hooks for logging and error reporting.
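
Since **Task Prioritization** is already in place, the snippet below only illustrates the ordering idea behind it: a max-heap keyed on priority hands out higher-priority tasks first. It is a self-contained sketch, not multipool's actual internals or API.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Priority levels; the discriminant order makes High compare greatest.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Priority {
    Low,
    Medium,
    High,
}

// A queued task: a priority plus the closure to run.
struct PrioritizedTask {
    priority: Priority,
    job: Box<dyn FnOnce() + Send>,
}

impl PartialEq for PrioritizedTask {
    fn eq(&self, other: &Self) -> bool {
        self.priority == other.priority
    }
}
impl Eq for PrioritizedTask {}
impl PartialOrd for PrioritizedTask {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for PrioritizedTask {
    fn cmp(&self, other: &Self) -> Ordering {
        // BinaryHeap is a max-heap, so a higher priority pops first.
        (self.priority as u8).cmp(&(other.priority as u8))
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(PrioritizedTask { priority: Priority::Low, job: Box::new(|| println!("low")) });
    queue.push(PrioritizedTask { priority: Priority::High, job: Box::new(|| println!("high")) });
    queue.push(PrioritizedTask { priority: Priority::Medium, job: Box::new(|| println!("medium")) });

    // Prints "high", "medium", "low": higher-priority tasks run first.
    while let Some(task) = queue.pop() {
        (task.job)();
    }
}
```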
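
For **Dynamic Scaling**, one possible shape for a min/max configuration plus a simple scale-up rule is sketched below; `ScalingConfig` and the backlog heuristic are assumptions for illustration, not an existing multipool type.

```rust
// Illustrative sketch only: a min/max scaling configuration and a scale-up rule.
#[derive(Debug, Clone, Copy)]
struct ScalingConfig {
    min_threads: usize,
    max_threads: usize,
    /// Spawn another worker once the queue backlog per thread exceeds this value.
    backlog_per_thread: usize,
}

fn desired_threads(cfg: ScalingConfig, queued_tasks: usize, current_threads: usize) -> usize {
    let wanted = if current_threads == 0 {
        cfg.min_threads
    } else if queued_tasks / current_threads > cfg.backlog_per_thread {
        current_threads + 1
    } else {
        current_threads
    };
    // Never scale outside the configured limits.
    wanted.clamp(cfg.min_threads, cfg.max_threads)
}

fn main() {
    let cfg = ScalingConfig { min_threads: 2, max_threads: 8, backlog_per_thread: 16 };
    // 100 queued tasks across 2 workers is a backlog of 50 > 16, so scale up to 3.
    assert_eq!(desired_threads(cfg, 100, 2), 3);
    // With an empty queue the pool keeps its current size (within limits).
    assert_eq!(desired_threads(cfg, 0, 4), 4);
    println!("scaling decisions look sane");
}
```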
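
For **Timeouts and Cancellation**, a minimal cooperative-cancellation sketch using a shared `AtomicBool` follows; the eventual multipool API may look quite different.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let cancelled = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&cancelled);

    let worker = thread::spawn(move || {
        let start = Instant::now();
        // A long-running task that periodically checks for cancellation.
        while !flag.load(Ordering::Relaxed) {
            thread::sleep(Duration::from_millis(10));
            if start.elapsed() > Duration::from_secs(5) {
                break; // safety net: give up after 5 s even without a cancel signal
            }
        }
        flag.load(Ordering::Relaxed)
    });

    // Simulate a timeout: cancel the task after 50 ms.
    thread::sleep(Duration::from_millis(50));
    cancelled.store(true, Ordering::Relaxed);

    let was_cancelled = worker.join().unwrap();
    println!("task observed cancellation: {was_cancelled}");
}
```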

## Medium Priority


- [ ] **Round-Robin and Least-Loaded-First Scheduling**

  - Add these as simple, configurable modes for non-priority tasks.

- [ ] **Hash-Based and Affinity-Based Scheduling**

  - Offer these for systems with shared state or specific hardware requirements.

- [ ] **Feedback-Driven or Hierarchical Load Balancing**

  - Explore these more advanced approaches to increase the library's adaptability for dynamic or large-scale systems.

- [ ] **Thread Affinity**

  - Allow tasks to bind to specific threads for performance optimizations.

- [ ] **Asynchronous API**

  - Support `async`/`await` syntax for tasks.

- [ ] **Idle Thread Timeout**

  - Implement configurable timeout to shut down idle threads.

- [ ] **Queue Depth Control**

  - Add a configuration option to limit the maximum size of the task queue (see the bounded-queue sketch after this list).

- [ ] **Pluggable Queue Strategies**
  - Allow users to customize the queue implementation (e.g., FIFO, LIFO, lock-free); a possible trait shape is sketched after this list.
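
For **Queue Depth Control**, the sketch below uses the standard library's bounded `sync_channel` as a stand-in for a capped task queue: once the queue is full, further submissions are rejected (exactly which ones depends on how quickly the worker drains). A real pool would wire the bound into its own workers rather than a raw channel.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;

// A unit of work the pool would execute.
type Task = Box<dyn FnOnce() + Send>;

fn main() {
    // At most 2 tasks may be queued at once.
    let (tx, rx) = sync_channel::<Task>(2);

    let worker = thread::spawn(move || {
        while let Ok(task) = rx.recv() {
            task();
        }
    });

    for i in 0..5 {
        let task: Task = Box::new(move || println!("task {i}"));
        match tx.try_send(task) {
            Ok(()) => {}
            // Queue full: the caller can block, retry, or drop the task.
            Err(TrySendError::Full(_)) => eprintln!("queue full, task {i} rejected"),
            Err(TrySendError::Disconnected(_)) => break,
        }
    }

    drop(tx); // close the channel so the worker exits
    worker.join().unwrap();
}
```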
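
For **Pluggable Queue Strategies**, one way a user-facing queue trait could look is sketched below; the `TaskQueue` trait and the FIFO/LIFO implementations are assumptions for illustration only.

```rust
use std::collections::VecDeque;

// A unit of work the pool would execute.
type Task = Box<dyn FnOnce() + Send>;

// The pluggable part: any strategy that can accept and hand out tasks.
trait TaskQueue {
    fn push(&mut self, task: Task);
    fn pop(&mut self) -> Option<Task>;
}

/// First-in, first-out queue backed by a ring buffer.
struct FifoQueue(VecDeque<Task>);

impl TaskQueue for FifoQueue {
    fn push(&mut self, task: Task) { self.0.push_back(task); }
    fn pop(&mut self) -> Option<Task> { self.0.pop_front() }
}

/// Last-in, first-out queue backed by a plain stack.
struct LifoQueue(Vec<Task>);

impl TaskQueue for LifoQueue {
    fn push(&mut self, task: Task) { self.0.push(task); }
    fn pop(&mut self) -> Option<Task> { self.0.pop() }
}

// A worker loop only needs the trait, not the concrete queue type.
fn drain(queue: &mut dyn TaskQueue) {
    while let Some(task) = queue.pop() {
        task();
    }
}

fn main() {
    let mut fifo = FifoQueue(VecDeque::new());
    fifo.push(Box::new(|| println!("first")));
    fifo.push(Box::new(|| println!("second")));
    drain(&mut fifo); // prints "first", then "second"

    let mut lifo = LifoQueue(Vec::new());
    lifo.push(Box::new(|| println!("first")));
    lifo.push(Box::new(|| println!("second")));
    drain(&mut lifo); // prints "second", then "first"
}
```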

## Low Priority


- [ ] **Task Batching**

  - Support submitting and processing batches of tasks efficiently.

- [ ] **Fairness Mechanisms**

  - Ensure no task starvation, even with high-priority tasks.

- [ ] **Thread Local Storage**

  - Allow threads to maintain state across tasks (see the thread-local sketch after this list).

- [ ] **Warm-Up and Pre-Warming**
  - Pre-spawn and initialize threads for reduced task latency.
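
For **Thread Local Storage**, the sketch below uses the standard library's `thread_local!` macro to keep per-thread state across tasks; it shows the underlying mechanism a pool could build on, not an existing multipool feature.

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // Counts how many tasks the current thread has run.
    static TASKS_RUN: Cell<u64> = Cell::new(0);
}

fn run_task(name: &str) {
    TASKS_RUN.with(|count| {
        count.set(count.get() + 1);
        println!(
            "{name} on {:?}: {} task(s) so far on this thread",
            thread::current().id(),
            count.get()
        );
    });
}

fn main() {
    // Two tasks on the main thread share the same counter...
    run_task("a");
    run_task("b");

    // ...while a task on another thread starts from zero.
    thread::spawn(|| run_task("c")).join().unwrap();
}
```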

## Advanced Features


- [ ] **Custom Schedulers**

  - Provide an interface for user-defined scheduling algorithms (a possible trait shape is sketched after this list).

- [ ] **Rate Limiting and Throttling**

  - Add support for controlling task execution rates (a minimal throttle is sketched after this list).

- [ ] **Cross-Thread Communication**

  - Include built-in mechanisms for inter-task communication (e.g., channels).

- [ ] **Task Dependency Management**

  - Allow users to specify dependencies between tasks.

- [ ] **Resource Limiting**

  - Enable users to define memory or CPU time limits for tasks.

- [ ] **Worker Specialization**

  - Define specialized worker roles (e.g., I/O-intensive, computation-intensive).

- [ ] **Priority Pools**

  - Implement multiple priority-based sub-pools for segregating tasks.

- [ ] **Integration with Rust Ecosystem**
  - Ensure compatibility with libraries like `tokio` and `async-std`.
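
For **Custom Schedulers**, one possible user-facing interface is sketched below; the `Scheduler` trait, the `TaskMeta` type, and both example schedulers are assumptions, not an existing multipool API.

```rust
/// Metadata a scheduler can inspect when picking the next task.
#[derive(Debug, Clone, Copy)]
struct TaskMeta {
    id: u64,
    priority: u8,
    submitted_at_ms: u64,
}

/// A scheduler chooses which pending task (by index) runs next.
trait Scheduler {
    fn pick_next(&mut self, pending: &[TaskMeta]) -> Option<usize>;
}

/// Picks the oldest task, ignoring priority (pure FIFO).
struct FifoScheduler;

impl Scheduler for FifoScheduler {
    fn pick_next(&mut self, pending: &[TaskMeta]) -> Option<usize> {
        pending
            .iter()
            .enumerate()
            .min_by_key(|(_, m)| m.submitted_at_ms)
            .map(|(i, _)| i)
    }
}

/// Picks the highest priority, breaking ties in favor of the older task.
struct PriorityScheduler;

impl Scheduler for PriorityScheduler {
    fn pick_next(&mut self, pending: &[TaskMeta]) -> Option<usize> {
        pending
            .iter()
            .enumerate()
            .max_by_key(|(_, m)| (m.priority, std::cmp::Reverse(m.submitted_at_ms)))
            .map(|(i, _)| i)
    }
}

fn main() {
    let pending = vec![
        TaskMeta { id: 1, priority: 0, submitted_at_ms: 10 },
        TaskMeta { id: 2, priority: 9, submitted_at_ms: 20 },
        TaskMeta { id: 3, priority: 9, submitted_at_ms: 15 },
    ];

    let fifo_pick = FifoScheduler.pick_next(&pending).unwrap();
    let prio_pick = PriorityScheduler.pick_next(&pending).unwrap();

    // FIFO picks the oldest task (id 1); the priority scheduler picks the
    // older of the two high-priority tasks (id 3).
    println!(
        "fifo -> task {}, priority -> task {}",
        pending[fifo_pick].id, pending[prio_pick].id
    );
}
```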
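
For **Rate Limiting and Throttling**, the sketch below shows a minimal "at most N submissions per second" throttle; a production version would more likely use a token bucket and avoid blocking the submitting thread.

```rust
use std::thread;
use std::time::{Duration, Instant};

struct Throttle {
    min_interval: Duration,
    last_run: Option<Instant>,
}

impl Throttle {
    fn new(max_per_second: u32) -> Self {
        Self {
            min_interval: Duration::from_secs(1) / max_per_second,
            last_run: None,
        }
    }

    /// Blocks just long enough to keep the submission rate under the limit.
    fn wait(&mut self) {
        if let Some(last) = self.last_run {
            let elapsed = last.elapsed();
            if elapsed < self.min_interval {
                thread::sleep(self.min_interval - elapsed);
            }
        }
        self.last_run = Some(Instant::now());
    }
}

fn main() {
    let mut throttle = Throttle::new(4); // at most ~4 submissions per second
    let start = Instant::now();
    for i in 0..4 {
        throttle.wait();
        println!("submitted task {i} at {:?}", start.elapsed());
    }
}
```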

## Documentation and Examples


- [ ] Update documentation to include detailed descriptions of new features.
- [ ] Add examples for:
  - Task prioritization
  - Dynamic scaling
  - Using async tasks
  - Custom queue strategies

## Testing and Validation


- [ ] Add unit tests for new features.
- [ ] Add benchmarks for performance-critical features (e.g., work-stealing, dynamic scaling).
- [ ] Validate compatibility with major Rust versions.