# cpu_load : Real-time CPU Monitoring with Intelligent Core Selection
## Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Usage](#usage)
- [API Reference](#api-reference)
- [Design](#design)
- [Tech Stack](#tech-stack)
- [Project Structure](#project-structure)
- [History](#history)
## Overview
cpu_load is a high-performance Rust library for real-time CPU load monitoring and intelligent core selection, built on the **compio** async ecosystem.
Traditional CPU load APIs (like `sysinfo`) require waiting ~200ms between calls to get accurate readings. This library solves that problem by running background sampling tasks, allowing instant access to pre-collected metrics without blocking.
The library continuously samples CPU usage in background tasks, maintaining up-to-date load metrics for global CPU and individual cores. Through atomic operations and lock-free data structures, it delivers high-performance monitoring suitable for concurrent applications.
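The idea behind instant, non-blocking reads can be sketched with standard-library primitives. This is a simplified stand-in, not the crate's implementation: the real library samples via `sysinfo` inside compio tasks, and `sample_cpu_percent` here is a hypothetical stub.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU8, Ordering};
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for a real sampler (the crate uses sysinfo).
fn sample_cpu_percent() -> u8 {
    42 // pretend measurement
}

fn main() {
    let load = Arc::new(AtomicU8::new(0));

    // Background task: refresh the shared metric on a fixed interval.
    let writer = Arc::clone(&load);
    thread::spawn(move || loop {
        writer.store(sample_cpu_percent(), Ordering::Relaxed);
        thread::sleep(Duration::from_millis(100));
    });

    // Reader: instant, lock-free access to the latest pre-collected sample,
    // with no ~200ms wait on the calling path.
    thread::sleep(Duration::from_millis(150));
    println!("Global: {}%", load.load(Ordering::Relaxed));
}
```

The reader never blocks on sampling: it only performs an atomic load, which is the property the library builds on.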
## Features
- Real-time background sampling of CPU metrics
- Thread-safe atomic operations for concurrent access
- Intelligent core selection via lazy round-robin algorithm
- Standard Rust iterator pattern support
- Zero runtime allocation with pre-allocated structures
- Seamless compio async runtime integration
## Usage
### Basic Monitoring
```rust
use cpu_load::CpuLoad;

// Use the default 1s sampling interval
let monitor = CpuLoad::new();

// Global CPU load (0-100)
let global = monitor.global();
println!("Global: {global}%");

// Specific core load
if let Some(load) = monitor.core(0) {
    println!("Core 0: {load}%");
}

// Core count
println!("Cores: {}", monitor.len());
```
### Custom Sampling Interval
```rust
use std::time::Duration;
use cpu_load::CpuLoad;
// Custom 500ms sampling interval
let monitor = CpuLoad::init(Duration::from_millis(500));
```
### Intelligent Core Selection
```rust
use cpu_load::CpuLoad;
let monitor = CpuLoad::new();
// Get idlest core index for task assignment
let core = monitor.idlest();
println!("Idlest core: {core}");
```
### Iterator Pattern
```rust
use cpu_load::CpuLoad;

let monitor = CpuLoad::new();

// Iterate all core loads
for (i, load) in monitor.into_iter().enumerate() {
    println!("Core {i}: {load}%");
}

// Collect to a vector
let loads: Vec<u8> = monitor.into_iter().collect();

// Filter high-load cores (indexes of cores above 80%)
let high: Vec<usize> = monitor
    .into_iter()
    .enumerate()
    .filter(|&(_, load)| load > 80)
    .map(|(i, _)| i)
    .collect();
```
## Design
The diagram below shows the two halves of the design: the background sampling task spawned at construction, and the lazy round-robin path taken by `idlest()`.
```mermaid
graph TD
A[CpuLoad::new/init] --> B[Create Instance]
B --> C[Spawn Background Task]
B --> D[Return CpuLoad]
C --> E[Initial Delay 100ms]
E --> F[Sample CPU Metrics]
F --> G[Sleep Interval]
G --> H{Stop Signal?}
H -->|No| F
H -->|Yes| I[Exit Task]
J[idlest Call] --> K{cursor >= n?}
K -->|No| L[Return rank at cursor]
K -->|Yes| M{Acquire Sort Lock CAS}
M -->|Fail| N[Spin Wait]
N --> L
M -->|Success| O[Sort by Load]
O --> P[Reset cursor]
P --> Q[Return rank 0]
```
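The sampling-task lifecycle in the diagram (spawn, initial delay, sample/sleep loop, stop signal) can be approximated with std threads. This is a sketch under assumptions: the crate itself spawns a compio future, and the `AtomicBool` stop flag and stubbed sampler here are illustrative, not the crate's internals.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicU8, Ordering};
use std::thread;
use std::time::Duration;

struct Monitor {
    load: Arc<AtomicU8>,
    stop: Arc<AtomicBool>,
    handle: Option<thread::JoinHandle<()>>,
}

impl Monitor {
    fn new(interval: Duration) -> Self {
        let load = Arc::new(AtomicU8::new(0));
        let stop = Arc::new(AtomicBool::new(false));
        let (l, s) = (Arc::clone(&load), Arc::clone(&stop));
        let handle = thread::spawn(move || {
            thread::sleep(Duration::from_millis(100)); // initial delay
            while !s.load(Ordering::Relaxed) {         // stop signal?
                l.store(7, Ordering::Relaxed);         // sample (stubbed)
                thread::sleep(interval);               // sleep interval
            }
        });
        Monitor { load, stop, handle: Some(handle) }
    }
}

impl Drop for Monitor {
    fn drop(&mut self) {
        self.stop.store(true, Ordering::Relaxed); // raise stop signal
        if let Some(h) = self.handle.take() {
            let _ = h.join(); // let the sampling loop exit cleanly
        }
    }
}

fn main() {
    let m = Monitor::new(Duration::from_millis(50));
    thread::sleep(Duration::from_millis(300));
    println!("load = {}", m.load.load(Ordering::Relaxed));
}
```

Dropping the monitor raises the stop flag and joins the task, mirroring the `Stop Signal?` branch in the diagram.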
### Key Principles
**Separation of Concerns**: Background sampling isolated from API access.
**Lock-free Design**: Atomic operations prevent thread contention.
**Lazy Evaluation**: Core sorting occurs only when cursor exhausts rank array.
**Memory Efficiency**: Pre-allocated `Box<[Atomic]>` prevents runtime allocation.
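The lazy-evaluation principle can be illustrated in miniature. Names and structure here are assumptions for demonstration: the real crate works over atomic per-core loads and a lock-free CAS/spin scheme, whereas this sketch uses a `Mutex` and a fixed load snapshot for brevity.

```rust
use std::sync::Mutex;

// Shared state: a rank of core indices sorted by load, plus a cursor.
struct Idlest {
    loads: Vec<u8>,                    // per-core load snapshot (0-100)
    state: Mutex<(Vec<usize>, usize)>, // (rank, cursor)
}

impl Idlest {
    fn new(loads: Vec<u8>) -> Self {
        let n = loads.len();
        // Cursor starts "exhausted" so the first call triggers a sort.
        Idlest { loads, state: Mutex::new((vec![0; n], n)) }
    }

    // Hand out cores round-robin from the rank; re-sort only
    // when the rank array is exhausted (the lazy step).
    fn idlest(&self) -> usize {
        let mut guard = self.state.lock().unwrap();
        let (rank, cursor) = &mut *guard;
        if *cursor >= rank.len() {
            // Rebuild the rank on demand, idlest core first.
            let mut idx: Vec<usize> = (0..self.loads.len()).collect();
            idx.sort_by_key(|&i| self.loads[i]);
            *rank = idx;
            *cursor = 0;
        }
        let core = rank[*cursor];
        *cursor += 1;
        core
    }
}

fn main() {
    // Core 1 carries the least load, so it is handed out first.
    let s = Idlest::new(vec![30, 5, 80]);
    println!("idlest core: {}", s.idlest()); // → 1
}
```

Spreading consecutive calls across the ranked cores amortizes the sort cost, which is the trade-off the round-robin cursor buys over keeping a continuously sorted list.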
## Tech Stack
| Component | Role |
|---|---|
| Rust 2024 | Modern language features |
| compio | Async runtime (io-uring/IOCP) |
| sysinfo | Cross-platform system info |
| Atomic ops | Lock-free synchronization |
### compio Ecosystem
compio leverages platform-specific I/O primitives:
- **Linux**: io-uring for high-throughput I/O
- **Windows**: IOCP for optimal performance
- **Cross-platform**: Consistent API
## Project Structure
```
cpu_load/
├── src/
│   ├── lib.rs       # Main implementation
│   └── iter.rs      # Iterator
├── tests/
│   └── main.rs      # Integration tests
├── readme/
│   ├── en.md        # English docs
│   └── zh.md        # Chinese docs
└── Cargo.toml
```
## History
CPU load monitoring traces back to early Unix systems where `load average` measured system demand—average processes in run queue over 1, 5, and 15 minutes.
The mid-2000s shift from single-core to multi-core processors created the need for per-core tracking. This library builds on that evolution with real-time per-core metrics.
The lazy round-robin algorithm draws inspiration from distributed system load balancers. Rather than maintaining continuously sorted lists (computationally expensive), it uses on-demand sorting triggered by access patterns.
Atomic operations in monitoring became prominent with multi-threaded applications. Traditional lock-based approaches caused significant overhead in high-frequency scenarios. This library embraces lock-free patterns standard in high-performance systems programming.
Fun fact: The term "load average" was coined by the TENEX operating system in the early 1970s at BBN Technologies. TENEX later influenced Unix development, and the concept persists in modern systems via `/proc/loadavg` on Linux.