
# Yazi Performance Patterns & Best Practices
This document captures the performance philosophy behind Yazi, a blazing-fast terminal file manager, extracted for use in the `fast-fs` and `fast-preview` libraries.
## Core Philosophy
**Never block the UI. Never render unnecessarily. Batch everything.**
## Key Patterns
### 1. Atomic Render Flag (Conditional Rendering)
Yazi uses a global atomic flag to track if a render is needed:
```rust
// yazi-macro/src/render.rs (simplified)
use std::sync::atomic::{AtomicBool, Ordering};
static NEED_RENDER: AtomicBool = AtomicBool::new(false);
macro_rules! render {
    () => {
        NEED_RENDER.store(true, Ordering::Relaxed);
    };
    ($cond:expr) => {
        if $cond { render!(); }
    };
}
```
**Best Practice**: Only set the render flag when state actually changes.
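As a minimal, self-contained sketch (the `List`/`arrow` names are hypothetical, not Yazi's code), a cursor-move handler can pass the "did anything change?" check straight into the macro, so no-op key presses never schedule a redraw:
```rust
// Illustrative sketch: the flag and macro are repeated here so the example
// compiles on its own; `List` and `arrow` are hypothetical names.
use std::sync::atomic::{AtomicBool, Ordering};
static NEED_RENDER: AtomicBool = AtomicBool::new(false);
macro_rules! render {
    () => { NEED_RENDER.store(true, Ordering::Relaxed); };
    ($cond:expr) => { if $cond { render!(); } };
}
struct List {
    cursor: usize,
    len: usize,
}
impl List {
    fn arrow(&mut self, step: isize) {
        let old = self.cursor;
        self.cursor = self
            .cursor
            .saturating_add_signed(step)
            .min(self.len.saturating_sub(1));
        // Flag a render only if the cursor actually moved
        render!(self.cursor != old);
    }
}
fn main() {
    let mut list = List { cursor: 0, len: 3 };
    list.arrow(1); // 0 -> 1: render flagged
    assert!(NEED_RENDER.swap(false, Ordering::Relaxed));
    list.arrow(1); // 1 -> 2 (last index): render flagged
    NEED_RENDER.store(false, Ordering::Relaxed);
    list.arrow(1); // already at the end: no change, no render requested
    assert!(!NEED_RENDER.load(Ordering::Relaxed));
}
```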
### 2. Render Throttling (10ms minimum)
The main loop enforces at least 10ms between renders (~100 fps max):
```rust
// yazi-fm/src/app/app.rs (simplified)
let mut last_render = Instant::now();
// Render only if at least 10ms has passed since the last render
let timeout = Duration::from_millis(10).checked_sub(last_render.elapsed());
if timeout.is_none() {
    render();
    last_render = Instant::now();
}
```
**Best Practice**: Throttle UI updates to avoid excessive redraws.
### 3. Event Batching (recv_many)
Process multiple events before considering a render:
```rust
let mut events = Vec::with_capacity(50);
rx.recv_many(&mut events, 50).await;
for event in events.drain(..) {
    dispatcher.dispatch(event)?;
}
```
**Best Practice**: Batch event processing, render once after batch.
### 4. I/O Result Batching (chunks_timeout)
Use the `chunks_timeout` adapter from `tokio_stream::StreamExt` to batch async results:
```rust
let stream = UnboundedReceiverStream::new(rx)
    .chunks_timeout(50000, Duration::from_millis(500));
tokio::pin!(stream);
while let Some(chunk) = stream.next().await {
    // Process up to 50,000 items or 500ms worth
    FilesOp::Part(wd.clone(), chunk, ticket).emit();
}
```
**Best Practice**: Batch I/O results with time/count limits.
### 5. Versioned State (Deferred Operations)
Only sort/filter when actually needed:
```rust
pub struct Files {
    revision: u64,  // Incremented on changes
    version: u64,   // Last sorted version
}
pub fn catchup_revision(&mut self) -> bool {
    if self.version == self.revision {
        return false;  // No changes, skip
    }
    self.version = self.revision;
    self.sorter.sort(&mut self.items);
    true
}
```
**Best Practice**: Defer expensive operations until display time.
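Below is a minimal, self-contained sketch of the same idea with illustrative names (`Lister`, `push`); it is not the fast-fs API. Mutations only bump `revision`, and the sort runs once, lazily, right before display:
```rust
// Illustrative sketch: insertions are cheap, sorting is deferred to display time.
#[derive(Default)]
struct Lister {
    items: Vec<String>,
    revision: u64, // bumped on every mutation
    version: u64,  // revision we last sorted at
}
impl Lister {
    fn push(&mut self, item: String) {
        self.items.push(item);
        self.revision += 1; // record the change; do NOT sort here
    }
    /// Called right before rendering; sorts only if something changed.
    fn catchup_revision(&mut self) -> bool {
        if self.version == self.revision {
            return false;
        }
        self.version = self.revision;
        self.items.sort();
        true
    }
}
fn main() {
    let mut lister = Lister::default();
    for name in ["b.txt", "a.txt", "c.txt"] {
        lister.push(name.to_string()); // no per-insert sort
    }
    assert!(lister.catchup_revision());  // sorted once, at display time
    assert!(!lister.catchup_revision()); // nothing changed since, skipped
    assert_eq!(lister.items, ["a.txt", "b.txt", "c.txt"]);
}
```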
### 6. Cancellation Tokens
Cancel previous operations before starting new ones:
```rust
pub struct Preview {
    previewer_ct: Option<CancellationToken>,
}
pub fn go(&mut self, file: File) {
    // Cancel any ongoing preview
    self.previewer_ct.take().map(|ct| ct.cancel());
    // Start new preview
    self.previewer_ct = isolate::peek(&previewer.run, file);
}
```
**Best Practice**: Always cancel stale async work.
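The Yazi snippet shows the cancelling side only. Here is a hedged, self-contained sketch of the observing side using `tokio_util::sync::CancellationToken` (requires the `tokio` and `tokio-util` crates; `preview_task` is an illustrative name):
```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;
async fn preview_task(ct: CancellationToken, file: &'static str) {
    tokio::select! {
        // Stop promptly if this request was superseded...
        _ = ct.cancelled() => eprintln!("preview of {file} cancelled"),
        // ...instead of finishing work nobody will display.
        _ = tokio::time::sleep(Duration::from_millis(200)) => {
            eprintln!("preview of {file} finished");
        }
    }
}
#[tokio::main]
async fn main() {
    // First request: remember its token so it can be cancelled later.
    let ct = CancellationToken::new();
    let first = tokio::spawn(preview_task(ct.clone(), "a.txt"));
    // Second request arrives: cancel the stale preview before starting a new one.
    ct.cancel();
    let ct2 = CancellationToken::new();
    let second = tokio::spawn(preview_task(ct2, "b.txt"));
    let _ = tokio::join!(first, second);
}
```
Pairing `cancel()` on the requester side with `cancelled()` in a `select!` on the worker side keeps cancellation cooperative and cheap; merely dropping the token does not stop the task.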
### 7. Ticket-Based Updates
Prevent stale results from overwriting current data:
```rust
pub fn update_part(&mut self, files: Vec<File>, ticket: Id) {
    if ticket != self.ticket {
        return;  // Stale data from old request
    }
    self.items.extend(files);
}
```
**Best Practice**: Track request IDs to discard stale responses.
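As a self-contained sketch of where tickets come from (the `Folder`/`begin_load` names are illustrative, and `Id` is just a `u64` here, not the Yazi type): navigation bumps the ticket, so any batch still in flight for the previous directory is silently dropped:
```rust
type Id = u64;
#[derive(Default)]
struct Folder {
    ticket: Id,
    items: Vec<String>,
}
impl Folder {
    /// Called when the user navigates: invalidate in-flight loads by
    /// bumping the ticket, then hand the new ticket to the loader task.
    fn begin_load(&mut self) -> Id {
        self.ticket += 1;
        self.items.clear();
        self.ticket
    }
    fn update_part(&mut self, files: Vec<String>, ticket: Id) {
        if ticket != self.ticket {
            return; // stale batch from an older request
        }
        self.items.extend(files);
    }
}
fn main() {
    let mut folder = Folder::default();
    let old = folder.begin_load(); // start loading directory A
    let new = folder.begin_load(); // user navigates to B before A finishes
    folder.update_part(vec!["a1".into()], old); // dropped: stale ticket
    folder.update_part(vec!["b1".into()], new); // accepted
    assert_eq!(folder.items, ["b1"]);
}
```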
## Integration Guide for fast-fs
### Recommended Event Loop Pattern
```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::{Duration, Instant};
use tokio::sync::mpsc;
static NEED_RENDER: AtomicBool = AtomicBool::new(false);
pub fn request_render() {
    NEED_RENDER.store(true, Ordering::Relaxed);
}
async fn event_loop(mut rx: mpsc::UnboundedReceiver<Event>) {
    let mut events = Vec::with_capacity(50);
    let mut last_render = Instant::now();
    loop {
        // Batch receive events
        let count = rx.recv_many(&mut events, 50).await;
        if count == 0 { break; }
        // Process all events
        for event in events.drain(..) {
            handle_event(event);
        }
        // Throttled render: check the clock before clearing the flag so a
        // pending render request isn't lost inside the 10ms window
        if last_render.elapsed() >= Duration::from_millis(10)
            && NEED_RENDER.swap(false, Ordering::Relaxed)
        {
            render_ui();
            last_render = Instant::now();
        }
    }
}
```
### Recommended File Loading Pattern
```rust
use std::path::Path;
use std::time::Duration;
use fast_fs::FileList;
use tokio_stream::StreamExt;
use tokio_stream::wrappers::UnboundedReceiverStream;
async fn load_directory(path: &Path) -> FileList {
    let (tx, rx) = tokio::sync::mpsc::unbounded_channel();
    let mut list = FileList::new();
    // Spawn background reader
    let path = path.to_owned();
    tokio::spawn(async move {
        // Use standalone function
        let entries = fast_fs::read_dir(&path).await.unwrap_or_default();
        for entry in entries {
            tx.send(entry).ok();
        }
    });
    // Batch receive with timeout
    let stream = UnboundedReceiverStream::new(rx)
        .chunks_timeout(1000, Duration::from_millis(100));
    tokio::pin!(stream);
    while let Some(batch) = stream.next().await {
        list.push_batch(batch);
        request_render();  // Flag render needed
    }
    list.catchup();  // Sort once at end
    list
}
```
## Performance Checklist
- [ ] Never call render directly from I/O callbacks
- [ ] Use atomic flag to request renders
- [ ] Throttle renders to ≤100fps (10ms minimum)
- [ ] Batch event processing (50+ at a time)
- [ ] Batch I/O results with chunks_timeout
- [ ] Use versioned state to defer sorting
- [ ] Cancel stale async operations
- [ ] Use tickets to discard stale responses
- [ ] Pre-allocate collections with capacity hints (Vec::with_capacity)
