# Architecture
## On Threading vs Async
This codebase is intentionally synchronous and threaded. It exists partly as an experiment
to understand what async runtimes solve and at what cost.
### What we do instead of async
| Async (tokio) | This codebase |
| --- | --- |
| `tokio::spawn` | `std::thread::spawn` |
| `mpsc` / channels | `std::sync::mpsc` |
| Oneshot response channels | `mpsc::sync_channel(1)` per request |
| `select!` on multiple futures | Single event handler thread draining a unified `mpsc` channel |
| Non-blocking I/O | Blocking I/O on dedicated threads |
| `await` yield points | Per-event `thread::spawn` for blocking work (HTTP calls, scripts) |
### Where this hurts
- **Shared state:** the handler thread and the TUI thread both need `State`, which forces
  `Arc<Mutex<State>>`. A single-threaded async executor with cooperative scheduling would
  allow plain `&self` access instead.
- **Boilerplate:** every cross-thread interaction requires explicit channel setup, sender
cloning, and careful lifetime management that async handles implicitly.
## Directory Structure
```
src/
main.rs Entry point, CLI dispatch
cli.rs CLI subcommands
client.rs HTTP client wrapper
logging.rs Logging setup
config/
mod.rs Config loading and saving
api/ Raw wire types, deserialization only
alert.rs
device.rs
function.rs
mode.rs
profile.rs Profile PUT shim
sse.rs
status.rs
mod.rs
domain/ Domain types and business logic created from API data
alert.rs
device.rs
mode.rs
profile.rs
mod.rs
cctv/ Leader election and IPC
mod.rs cctv::init, Handle, Cctv trait
daemon_stub/
mod.rs DaemonStub — IPC client stand-in for CLI commands
ipc/
mod.rs IpcRequest / IpcResponse wire types
daemon/ Runtime state and coordination
mod.rs Daemon struct, init, tick, wait_for_it
state.rs Mutable state, SSE handlers
sse.rs SSE stream reader
ipc_server.rs IPC connection acceptor (forwards to Daemon via mpsc)
task.rs Task, Trigger, Action types
tui/
mod.rs Render and input loop
pages/
mod.rs Page trait, shared state
alert.rs
device.rs
mode.rs
profile.rs Profile editor
task.rs Task editor
```
## Overview
These diagrams are generated. I keep them because they seem to help agents and they
look nice, but they can be hard to read and may occasionally be out of date.
```
┌─────────────────────────────────────────────────────────────┐
│ cctv binary │
│ │
│ ┌───────┐ ┌────────┐ ┌──────────┐ │
│ │ CLI │──►│ Config │──►│cctv::init│ │
│ │ parse │ │ load │ └────┬─────┘ │
│ └───────┘ └────────┘ │ │
│ ┌──────┴──────┐ │
│ ┌────▼────┐ ┌────▼──────┐ │
│ │ Daemon │ │DaemonStub │ │
│ │ │ │ (IPC) │ │
│ └────┬────┘ └────┬────┘ │
│ │ │ │
│ TUI / Daemonize ActivateMode / │
│ ActivatePrev / Dump │
└─────────────────────────────────────────────────────────────┘
│
HTTP + SSE
│
┌───────────▼──────────┐
│ coolercontrold │
│ (system daemon) │
└──────────────────────┘
```
## Layered Architecture
```
┌──────────────────────────────────────────────────────────────┐
│ TUI Layer tui/ │
│ ┌────────┐ ┌─────────┐ ┌──────┐ ┌───────┐ ┌──────┐ │
│ │ Device │ │ Profile │ │ Mode │ │ Alert │ │ Task │ │
│ │ Page │ │ Page │ │ Page │ │ Page │ │ Page │ │
│ └───┬────┘ └────┬────┘ └──┬───┘ └───┬───┘ └──┬───┘ │
│ │ │ │ │ │ │
│ └───────────┴────┬────┴─────────┴─────────┘ │
│ ▼ │
│ daemon.state.lock() (read) │
│ daemon.change_mode() (write) │
│ daemon.update_profile() (write) │
├──────────────────────────────────────────────────────────────┤
│ Daemon Layer daemon/ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Daemon { client, config, profiles, modes, state } │ │
│ │ │ │ │ │
│ │ │ Arc<Mutex<State>> │ │
│ │ │ ┌─────────────────┐ │ │
│ │ │ │ devices, alerts, │ │ │
│ │ │ │ current_mode_uid,│ │ │
│ │ │ │ tasks │ │ │
│ │ │ └────────▲─────────┘ │ │
│ │ │ │ Event │ │
│ │ │ ┌────────┴─────────┐ │ │
│ │ │ │ event handler │ │ │
│ │ │ │ thread │ │ │
│ │ │ └────────▲─────────┘ │ │
│ │ │ │ mpsc │ │
│ │ │ ┌─────────────┼──────────┐ │ │
│ │ │ ┌───┴───┐ ┌──────┴──┐ ┌─────┴──┐ │ │
│ │ │ │ SSE │ │ SSE │ │ SSE │ │ │
│ │ │ │ modes │ │ status │ │ alerts │ │ │
│ │ │ └───┬───┘ └────┬────┘ └────┬───┘ │ │
│ │ │ └───────────┼───────────┘ │ │
│ └───────────┼─────────────────────┼──────────────────┘ │
├──────────────┼─────────────────────┼─────────────────────────┤
│ Domain Layer │ domain/ │ │
│ │ │ │
│ Device::load() Profile::load() Mode::load() Alert::load()
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ api structs ──► domain structs (richer models) │
├──────────────────────────────────────────────────────────────┤
│ API Layer api/ │
│ │
│ Raw serde types: Devices, CCStatus, Profiles, Functions, │
│ Modes, Alerts, Sse │
│ │
│ ┌───────────────────────────────────────────┐ │
│ │ Client { agent, url, cookie } │ │
│ │ GET/POST/PUT ──► coolercontrold HTTP API │ │
│ └───────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
```
## Event Flow
All external inputs (SSE streams, IPC requests, promotion) are unified into a single
`mpsc::channel::<Message>()` drained by one handler thread. Blocking work (HTTP calls,
scripts) is dispatched to per-operation `thread::spawn` calls so the handler never stalls.
```
coolercontrold cctv
───────────── ──────────────────────
GET /sse/modes ◄──────────── SSE thread (modes) ──┐
GET /sse/status ◄──────────── SSE thread (status) ──┤
GET /sse/alerts ◄──────────── SSE thread (alerts) ──┤
│ Message::Sse
IPC client ─────────────────► ipc_server thread ──┤ Message::Ipc
(per-connection thread) │
leader socket won ──────────► unix_listener_worker ──┤ Message::Promoted
│
▼
┌──────────────────┐
│ message handler │
│ thread │
└────────┬─────────┘
│
┌───────────────────────────────┤
▼ ▼ ▼
Message::Sse Message::Ipc Message::Promoted
│ │ │
▼ ▼ ▼
lock State, Dump: read set is_leader,
update, serialized_ spawn ipc_server
unlock daemon thread
│ │
▼ ▼
matching tasks ChangeMode:
(leader only) thread::spawn
│ HTTP POST
▼
thread::spawn
per action
(Script / HTTP)
```
## Task Automation
```
┌─────────────┐ ┌───────────────────┐ ┌──────────────────────┐
│ Trigger │ │ Task │ │ Action │
├─────────────┤ ├───────────────────┤ ├──────────────────────┤
│ Alert │────►│ name │────►│ Script { path } │
│ { uid, │ │ state (lifecycle) │ │ ChangeMode { uid } │
│ state } │ │ trigger │ │ ActivatePreviousMode │
│ │ │ action │ └──────────────────────┘
│ ModeChange │ └───────────────────┘
│ { uid } │
└─────────────┘
Task lifecycle: Applied ──► Draft ──► Unapplied ──► Applied
└──► Deleted
Leader election: abstract unix socket "cctv-{url}"
only the leader instance executes tasks
TUI and Daemonize always get their own Daemon
regardless of whether a leader exists
```
## Config Resolution
```
┌──────────────────────────────────────┐
│ /etc/coolercontrol/config.toml │ daemon_address, port
└──────────────────┬───────────────────┘
▼
┌──────────────────────────────────────┐
│ $XDG_CONFIG_HOME/coolercontrol/ │ time_range_s, username,
│ cctv.json │ skip_splash, tasks,
│ (or $CCTV_CONFIG_FILEPATH) │ address/port overrides
└──────────────────┬───────────────────┘
▼
┌──────────────────────────────────────┐
│ Environment variables │ CCTV_DAEMON_PASSWORD
└──────────────────┬───────────────────┘
▼
┌──────────────────────────────────┐
│ Config │
│ { url, │
│ time_range, username, password, │
│ skip_splash, tasks } │
└──────────────────────────────────┘
```
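The three layers merge in order: system config, then user config, then environment. A minimal sketch of that precedence, with the field set trimmed and the merge logic illustrative rather than the real implementation:

```rust
use std::env;

// Trimmed-down Config; the real one also carries time_range, username,
// skip_splash, and tasks.
#[derive(Debug)]
struct Config {
    url: String,
    password: Option<String>,
}

fn resolve(system_port: u16, user_port_override: Option<u16>) -> Config {
    // 1. /etc/coolercontrol/config.toml supplies the default port.
    // 2. The user's cctv.json may override it.
    let port = user_port_override.unwrap_or(system_port);
    Config {
        url: format!("http://localhost:{port}"),
        // 3. The environment wins for the password.
        password: env::var("CCTV_DAEMON_PASSWORD").ok(),
    }
}

fn main() {
    // Port numbers here are arbitrary examples.
    let cfg = resolve(8080, None);
    assert_eq!(cfg.url, "http://localhost:8080");
    let cfg = resolve(8080, Some(9090));
    assert_eq!(cfg.url, "http://localhost:9090");
}
```

Later layers only fill or override individual fields; they never replace the whole struct, which is why partial user configs work.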