§Trash Parallelism
A high-performance Rust library providing comprehensive async, threading, memory management, and utility functions for building efficient applications.
§Overview
Trash Parallelism is a batteries-included Rust library designed for high-performance applications requiring efficient async operations, parallel processing, memory management, and system-level utilities. It is built with performance and ergonomics in mind.
§Key Features
- 🚀 Async Operations: High-performance async utilities with smol, futures-lite, and crossfire
- ⚡ Threading: Parallel processing with work-stealing schedulers and fork-join patterns
- 💾 Memory Management: Efficient allocation with mimalloc and custom memory pools
- 📡 Channels: Advanced channel communication with monitoring and serialization
- 🖥️ System Utilities: Time handling, environment variables, file system operations
- 📊 Data Processing: Parsing, serialization, and data manipulation
- 🔄 I/O Operations: File and network I/O with async support
- 📝 Logging: Structured logging with performance monitoring
- 🛠️ Utilities: Compression, hashing, JSON handling, and more
§Quick Start
§Basic Usage
```rust
use trash_utilities::*;

// Async task spawning
spawn_task!("my_task", async {
    println!("Hello from async task!");
    Ok(())
});

// Parallel processing
let data = vec![1, 2, 3, 4, 5];
let result = parallel_map(data, |x| x * 2);
assert_eq!(result, vec![2, 4, 6, 8, 10]);

// System utilities
let now = current_utc_time();
let home = read_env_var("HOME").unwrap_or_default();
println!("Current time: {}, Home: {}", now, home);
```
§Advanced Example: High-Performance Data Processing
```rust
use trash_utilities::*;
use smol;

// Parallel data processing pipeline. The `.await` calls need an async
// context, so the pipeline is driven with `smol::block_on`.
smol::block_on(async {
    let raw_data = vec![1u32; 10000];

    // Process in parallel
    let processed = parallel_map(raw_data, |x| x * x + 1);

    // Serialize to JSON
    let json_data = serde::serialize_to_json(&processed).unwrap();

    // Compress (Brotli, quality 6) and save; use a .br extension to match
    // the Brotli format rather than .gz
    let compressed = io::utils::compress_data_brotli(json_data.as_bytes(), 6).await.unwrap();
    io::utils::write_file_async("processed_data.br", &compressed).await.unwrap();

    println!("Processed {} elements", processed.len());
});
```
§Memory Management Example
```rust
use trash_utilities::memory::*;

// Custom memory pool allocation
let pool_name = "processing_pool";
create_memory_pool(pool_name, 1024 * 1024).unwrap(); // 1 MiB pool

// Allocate from pool
let ptr = alloc_from_pool!(pool_name, 1024).unwrap();

// Use allocated memory safely
unsafe {
    std::ptr::write_bytes(ptr, 0, 1024); // Initialize to zero
}

// Pool automatically manages cleanup
```
§Channel-Based Communication
```rust
use trash_utilities::channels::*;
use smol;

// The `.await` calls need an async context, provided here by `smol::block_on`.
smol::block_on(async {
    // Create monitored channel
    let (tx, rx, monitor) = create_monitored_channel::<String>(10);

    // Send messages
    for i in 0..5 {
        send_async(&tx, format!("Message {}", i)).await.unwrap();
    }

    // Receive and process
    for _ in 0..5 {
        let msg = recv_async(&rx).await.unwrap();
        println!("Received: {}", msg);
    }

    // Check performance stats
    let stats = monitor.get_stats();
    println!("Processed {} messages", stats.messages_sent);
});
```
§Module Organization
§Core Functionality
- async: Comprehensive async utilities with the smol runtime
- parallel: Threading and parallelism with work stealing
- memory: Memory management and custom pools
- channels: Advanced channel communication
§Data & I/O
- serde: Serialization utilities (JSON, base64, etc.)
- io: Async file operations and compression
- data: Data parsing and manipulation
- chars: String processing and encoding
§System Integration
- sys: Time handling, environment variables, and file system operations
§Performance Characteristics
- Zero-Copy Operations: Where possible, avoids unnecessary allocations
- Async-First Design: Built for non-blocking I/O and concurrency
- Memory Efficient: Custom allocators and pooling reduce overhead
- Parallel Processing: Automatic parallelization for CPU-bound tasks
- Monitoring: Built-in performance tracking and statistics
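To make the automatic-parallelization point concrete, here is a minimal, std-only sketch of what a `parallel_map`-style helper can do under the hood: split the input into one chunk per available core, map each chunk on its own scoped thread, and reassemble the results in order. This is an illustrative stand-in, not the library's actual implementation (which uses a work-stealing scheduler); the name `parallel_map_sketch` is hypothetical.

```rust
use std::thread;

// Hypothetical, simplified stand-in for a parallel map: chunk the input,
// map each chunk on its own scoped thread, then reassemble in order.
fn parallel_map_sketch<T, U, F>(data: Vec<T>, f: F) -> Vec<U>
where
    T: Send + Sync,
    U: Send,
    F: Fn(&T) -> U + Sync,
{
    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let chunk_size = data.len().div_ceil(workers).max(1);
    let f = &f;

    thread::scope(|s| {
        // Spawn one worker per chunk; scoped threads may borrow `data` and `f`.
        let handles: Vec<_> = data
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().map(f).collect::<Vec<U>>()))
            .collect();
        // Joining in spawn order preserves the original element order.
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let doubled = parallel_map_sketch(vec![1, 2, 3, 4, 5], |x| x * 2);
    assert_eq!(doubled, vec![2, 4, 6, 8, 10]);
    println!("{doubled:?}");
}
```

Chunking amortizes the per-thread cost across many elements, which is why this pattern pays off only for CPU-bound work on reasonably large inputs.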
§Safety & Reliability
- Memory Safe: All operations are memory-safe with no undefined behavior
- Thread Safe: Concurrent operations are properly synchronized
- Error Handling: Comprehensive error propagation with context
- Resource Management: Automatic cleanup and RAII patterns
- Testing: Extensive test coverage for reliability
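The RAII claim above can be illustrated with a small, self-contained sketch: a guard type that releases its resource in `Drop`, so cleanup runs even on early returns or panics. The names here (`PoolGuard`, `LIVE_BYTES`) are illustrative only and are not part of the trash_utilities API.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative only: tracks how many pool bytes are currently "held".
static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);

// A guard that owns an allocation and releases it when dropped (RAII).
struct PoolGuard {
    bytes: usize,
}

impl PoolGuard {
    fn acquire(bytes: usize) -> Self {
        LIVE_BYTES.fetch_add(bytes, Ordering::SeqCst);
        PoolGuard { bytes }
    }
}

impl Drop for PoolGuard {
    fn drop(&mut self) {
        // Runs automatically when the guard goes out of scope,
        // including on early return or unwinding.
        LIVE_BYTES.fetch_sub(self.bytes, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _guard = PoolGuard::acquire(1024);
        assert_eq!(LIVE_BYTES.load(Ordering::SeqCst), 1024);
    } // guard dropped here: bytes released without an explicit free call
    assert_eq!(LIVE_BYTES.load(Ordering::SeqCst), 0);
    println!("all pool bytes released");
}
```

Tying cleanup to scope exit like this is what lets a pool "automatically manage cleanup" without requiring callers to remember a matching free.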
Re-exports§
pub use parallel::parallel_for_each;
pub use parallel::parallel_map;
pub use sys::current_utc_time;
pub use sys::read_env_var;