Module enhanced

§Enhanced Module - EnhancedThreadShare

This module provides EnhancedThreadShare<T>, a powerful extension of ThreadShare<T> that adds automatic thread management capabilities.

§🚀 Overview

EnhancedThreadShare<T> eliminates the need for manual thread management by providing:

  • Automatic Thread Spawning: Spawn threads with a single method call
  • Built-in Thread Tracking: Monitor active thread count and status
  • Automatic Thread Joining: Wait for all threads to complete with join_all()
  • Thread Naming: Give meaningful names to threads for debugging
  • All ThreadShare Features: Inherits all capabilities from ThreadShare<T>

§Key Benefits

§🎯 Simplified Thread Management

// Old way: Manual thread management
use thread_share::share;
use std::thread;

let data = share!(vec![1, 2, 3]);
let clone1 = data.clone();
let clone2 = data.clone();

let handle1 = thread::spawn(move || clone1.update(|v| v.push(4)));
let handle2 = thread::spawn(move || clone2.update(|v| v.push(5)));

handle1.join().expect("Failed to join");
handle2.join().expect("Failed to join");

// New way: Enhanced thread management
use thread_share::enhanced_share;

let enhanced = enhanced_share!(vec![1, 2, 3]);

enhanced.spawn("worker1", |data| { /* logic */ });
enhanced.spawn("worker2", |data| { /* logic */ });

enhanced.join_all().expect("Failed to join");

§📊 Real-time Monitoring

use thread_share::enhanced_share;

let enhanced = enhanced_share!(vec![1, 2, 3]);

enhanced.spawn("processor", |data| { /* logic */ });
enhanced.spawn("validator", |data| { /* logic */ });

println!("Active threads: {}", enhanced.active_threads());

// Wait for completion
enhanced.join_all().expect("Failed to join");

assert!(enhanced.is_complete());

§Architecture

EnhancedThreadShare<T> wraps a ThreadShare<T> and adds the following fields (see the sketch after this list):

  • inner: ThreadShare<T> - The underlying shared data
  • threads: Arc<Mutex<HashMap<String, JoinHandle<()>>>> - Thread tracking
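
The two fields above can be composed roughly as follows. This is a minimal sketch, not the crate's actual source: the trait bounds, the exact signature of spawn, and the way the closure receives the shared data are assumptions made for illustration, and it presumes ThreadShare<T> is cloneable and importable from the crate root.

use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread::{self, JoinHandle};
use thread_share::ThreadShare;

struct EnhancedThreadShare<T> {
    // The underlying shared data
    inner: ThreadShare<T>,
    // Named handles for every spawned thread
    threads: Arc<Mutex<HashMap<String, JoinHandle<()>>>>,
}

impl<T: Send + Sync + 'static> EnhancedThreadShare<T> {
    fn spawn<F>(&self, name: &str, f: F)
    where
        F: FnOnce(ThreadShare<T>) + Send + 'static,
    {
        // Hand the worker its own handle to the shared data, run it on a
        // new thread, and remember the JoinHandle under the given name.
        let data = self.inner.clone();
        let handle = thread::spawn(move || f(data));
        self.threads.lock().unwrap().insert(name.to_string(), handle);
    }
}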

§Thread Lifecycle

  1. Creation: EnhancedThreadShare::new(data) or enhanced_share!(data)
  2. Spawning: enhanced.spawn(name, function) creates named threads
  3. Execution: Threads run with access to shared data
  4. Monitoring: Track active threads with active_threads()
  5. Completion: Wait for all threads with join_all() (the full cycle is sketched below)
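
The sketch below walks through all five steps with a single worker. The thread name "doubler" and its closure body are illustrative only; creation uses the enhanced_share! macro, which is equivalent to EnhancedThreadShare::new.

use thread_share::enhanced_share;

// 1. Creation (equivalently: EnhancedThreadShare::new(vec![1, 2, 3]))
let enhanced = enhanced_share!(vec![1, 2, 3]);

// 2. Spawning: a named thread that gets access to the shared data
enhanced.spawn("doubler", |data| {
    // 3. Execution: this closure runs on its own thread
    data.update(|v| v.iter_mut().for_each(|x| *x *= 2));
});

// 4. Monitoring
println!("Active threads: {}", enhanced.active_threads());

// 5. Completion
enhanced.join_all().expect("Failed to join");
assert!(enhanced.is_complete());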

§Example Usage

§Basic Thread Management

use thread_share::enhanced_share;

let data = enhanced_share!(vec![3, 1, 2]);

// Spawn individual threads
data.spawn("sorter", |data| {
    data.update(|v| v.sort());
});

data.spawn("validator", |data| {
    // Runs concurrently with "sorter", so only check an order-independent property
    assert_eq!(data.get().len(), 3);
});

// Wait for completion, then verify the result
data.join_all().expect("Failed to join");
assert!(data.get().is_sorted());

§Using Macros

use thread_share::share;

let data = share!(String::from("Hello"));
let clone = data.clone();

// Spawn a simple thread
let handle = std::thread::spawn(move || {
    clone.update(|s| s.push_str(" World"));
});

// Wait for the thread to finish, then check the result
handle.join().expect("Failed to join");
println!("Updated: {}", data.get());

§Real-world Example

use thread_share::share;

#[derive(Clone)]
struct Server {
    port: u16,
    is_running: bool,
    connections: u32,
}

let server = share!(Server {
    port: 8080,
    is_running: false,
    connections: 0,
});

let server_clone = server.clone();

// Spawn a simple server thread
let handle = std::thread::spawn(move || {
    server_clone.update(|s| {
        s.is_running = true;
        s.connections = 5;
    });
});

// Wait for the thread to finish, then check the result
handle.join().expect("Failed to join");
let final_state = server.get();
println!("Server running: {}, connections: {}", final_state.is_running, final_state.connections);

§Performance Characteristics

  • Thread Spawning: Minimal overhead over standard thread::spawn
  • Thread Tracking: Constant-time operations for thread management
  • Memory Usage: Small overhead for thread tracking structures
  • Scalability: Efficient for up to hundreds of threads

§Best Practices

  1. Use descriptive thread names for easier debugging
  2. Keep thread functions focused on single responsibilities
  3. Always call join_all() to ensure proper cleanup
  4. Monitor thread count with active_threads() for debugging
  5. Handle errors gracefully from join_all() and spawn()

§Error Handling

use thread_share::share;

let data = share!(0);
let clone = data.clone();

// Spawn thread with error handling
let handle = std::thread::spawn(move || {
    clone.update(|x| *x = *x + 1);
});

// Handle join errors
if let Err(e) = handle.join() {
    eprintln!("Thread execution failed: {:?}", e);
}
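
With EnhancedThreadShare, the same concern is handled in one place: join_all() returns a Result (its error type already supports Debug-style printing, as the expect calls above imply), so a single check covers every spawned thread. The thread name and closure below are illustrative.

use thread_share::enhanced_share;

let enhanced = enhanced_share!(0);

enhanced.spawn("incrementer", |data| {
    data.update(|x| *x += 1);
});

// One check covers all spawned threads
if let Err(e) = enhanced.join_all() {
    eprintln!("One or more threads failed: {:?}", e);
}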

§Thread Safety

EnhancedThreadShare<T> automatically implements Send and Sync traits when T implements them, making it safe to use across thread boundaries.
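
One way to verify this is with a compile-time check: the helper function below (assert_send_sync is a hypothetical name, not part of the crate) only accepts types that are Send + Sync, so the call compiles precisely because the payload type here (i32) satisfies both.

use thread_share::enhanced_share;

// Accepts only values whose type is Send + Sync
fn assert_send_sync<T: Send + Sync>(_value: &T) {}

let enhanced = enhanced_share!(0i32);
assert_send_sync(&enhanced);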

§Integration with Macros

This module works seamlessly with the library’s macros:

  • enhanced_share! - Creates EnhancedThreadShare<T> instances
  • spawn_workers! - Spawns multiple threads with a single macro call

§Comparison with Manual Thread Management

Aspect          | Manual Management          | EnhancedThreadShare
----------------|----------------------------|----------------------------
Thread Creation | thread::spawn() calls      | enhanced.spawn()
Thread Tracking | Manual JoinHandle storage  | Automatic tracking
Thread Joining  | Manual join() calls        | join_all()
Error Handling  | Per-thread error handling  | Centralized error handling
Debugging       | No thread identification   | Named threads
Code Complexity | High                       | Low

Structs§

EnhancedThreadShare
Enhanced ThreadShare with built-in thread management