FreeCycle
GPU-aware Ollama lifecycle manager for Windows 11. Runs as a system tray application that monitors NVIDIA GPU usage and game processes, automatically enabling and disabling networked Ollama access when the GPU is available for LLM compute workloads.
What It Does
FreeCycle sits in your Windows system tray and watches for GPU-intensive games and applications. When a game or other GPU-intensive application is detected (or non-whitelisted processes exceed the VRAM threshold), Ollama is shut down immediately. When the GPU is free (including a 30-minute cooldown after the game exits), Ollama is started and exposed to the local network so other machines and agentic workflows can use your GPU for LLM inference.
Key features:
- Monitors for 10 preconfigured game executables (configurable)
- Tracks VRAM usage from non-whitelisted processes (50% threshold, configurable)
- Starts/stops Ollama automatically with network exposure (`OLLAMA_HOST=0.0.0.0:11434`)
- Auto-downloads and updates required models (`llama3.1:8b-instruct-q4_K_M`, `nomic-embed-text`)
- Streams model pull progress into the tray tooltip with live percentage updates when Ollama reports byte totals
- Lets the local user unlock remote model installs from the tray for one hour at a time, then auto-locks again
- Waits 60 seconds after Windows resume before re-enabling Ollama (configurable)
- Provides an HTTP API (port 7443) for external agents to signal GPU task start/stop
- Color-coded tray icon: green (available), red (blocked), blue (agent task), yellow (downloading), grey (error)
- Self-registers for Windows auto-start; disables Ollama's own auto-start
- Single-instance enforcement via lockfile
Requirements
- Windows 11 x86_64
- NVIDIA GPU with drivers installed (NVML)
- Ollama installed
Installation
From source
cargo install --path .
From crates.io
cargo install freecycle
Manual build
cargo build --release
The binary is at target\release\freecycle.exe.
Usage
freecycle # Start normally (warnings/errors to stderr)
freecycle -v # Start with verbose debug logging to ~/freecycle-verbose.log
freecycle --help # Show help
FreeCycle registers itself to start automatically when you log into Windows. The tray icon appears in the system tray notification area.
Tray Icon States
| Color | Meaning |
|---|---|
| Green | GPU available, Ollama running |
| Red | Game detected, cooldown active, or wake delay active |
| Blue | External agent task in progress |
| Yellow | Downloading/updating models |
| Grey | Error or initializing |
While a model pull or update is active, the tray tooltip prefers concise progress lines such as `Downloading llama3.1:8b-instruct-q4_K_M: 42%` so the current percentage stays visible within the Windows tooltip length limit.
Right-Click Menu
- Status: Shows current state (read-only label)
- Force Enable Ollama: Override and start Ollama immediately; the override clears when a new blocked state is detected or when a post-resume wake delay takes effect
- Force Disable Ollama: Override and stop Ollama
- Enable Remote Model Installs (1 Hour): Allows remote agents to call `POST /models/install` for the next hour. The menu text flips to a disable action while the window is open, and FreeCycle auto-locks it again after one hour.
- Open Logs: Opens the verbose log in Notepad
- Open Config: Opens config.toml in Notepad
- Exit FreeCycle: Shuts down FreeCycle and Ollama
Configuration
Configuration file: %APPDATA%\FreeCycle\config.toml (created on first run with defaults).
[]
= 5000
= 2000
= 1800
= 50
= 300
= 3
= 1
= 60
[]
= "0.0.0.0"
= 11434
= 10
[]
= ["llama3.1:8b-instruct-q4_K_M", "nomic-embed-text"]
= 5
= 24
[]
= [
"VRChat.exe", "vrchat.exe", "Cyberpunk2077.exe", "HELLDIVERS2.exe",
"GenshinImpact.exe", "ZenlessZoneZero.exe", "Overwatch.exe",
"VALORANT.exe", "eldenring.exe", "MonsterHunterWilds.exe"
]
[]
= [
"ollama_llama_server", "ollama_llama_server.exe",
"ollama.exe", "ollama", "dwm.exe", "csrss.exe"
]
[]
= 7443
= "0.0.0.0"
Agent Signal API
External agentic workflows can signal task start/stop to FreeCycle via HTTP. This is the right path for direct integrations and custom jobs that do not go through the shipped MCP tools.
Endpoints
GET /status
Returns the current FreeCycle status as JSON. Status responses also include:
- `remote_model_installs_unlocked`: `true` while the tray-controlled install window is open
- `remote_model_installs_expires_in_seconds`: seconds remaining before the install window auto-locks again, or `null` when locked
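For example, while the install window is open, a `/status` reply might include the following (values illustrative; any other status fields are omitted here):

```json
{
  "remote_model_installs_unlocked": true,
  "remote_model_installs_expires_in_seconds": 3412
}
```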
POST /task/start
Turns the tray icon blue and shows the task in the tooltip.
POST /task/stop
Clears the task and reverts the icon to green.
GET /health
Returns JSON health data such as {"ok": true, "message": "FreeCycle is running"}.
POST /models/install
Installs any Ollama model that the local server can resolve, but only while the tray menu unlock is active. If the user has not enabled the tray toggle, FreeCycle returns 403 Forbidden with guidance to unlock it locally first. To browse installable model names, check the Ollama Library.
Example
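A minimal client sketch in Python, using only the endpoints documented above. The `task` payload field name and the exact response shapes are assumptions, not part of the documented API:

```python
import json
import urllib.request

FREECYCLE = "http://127.0.0.1:7443"  # agent API port from the default config

def call(path, payload=None):
    """POST `payload` as JSON if given, otherwise GET, and decode the JSON reply."""
    data = None if payload is None else json.dumps(payload).encode()
    headers = {"Content-Type": "application/json"} if data else {}
    req = urllib.request.Request(FREECYCLE + path, data=data, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Signal a GPU task: the tray turns blue while the work runs.
# call("/task/start", {"task": "nightly-embedding-job"})  # field name assumed
# ... run inference against Ollama on port 11434 ...
# call("/task/stop", {})
# call("/health")  # e.g. {"ok": true, "message": "FreeCycle is running"}
```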
MCP Server
The repository includes a Node.js MCP server in mcp-server/ that exposes FreeCycle and Ollama as tools for Claude Code, OpenAI Codex, and other MCP-compatible clients. The server loads configuration from mcp-server/freecycle-mcp.config.json.
When a local MCP tool is invoked, the server checks Ollama first. If Ollama is not responding and wake-on-LAN is enabled in the config, it sends magic packets to the configured FreeCycle machine and polls FreeCycle every 30 seconds for up to 15 minutes. If local inference is still unavailable after that window, the MCP tool reports local unavailability so workflows can fall back to cloud models.
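The wake path described above can be sketched as follows. This is a minimal illustration of the standard Wake-on-LAN magic-packet format plus a polling loop, not the MCP server's actual code:

```python
import socket
import time
import urllib.request

def magic_packet(mac: str) -> bytes:
    """Wake-on-LAN payload: 6 x 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake_and_wait(mac, status_url, poll_s=30, timeout_s=900):
    """Broadcast the packet, then poll FreeCycle until it answers or time runs out."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), ("255.255.255.255", 9))
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(status_url, timeout=5):
                return True  # FreeCycle answered; local inference is back
        except OSError:
            time.sleep(poll_s)
    return False  # still unreachable; caller falls back to cloud models
```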
The long-running local MCP tools (freecycle_pull_model, freecycle_generate, freecycle_chat, freecycle_embed, freecycle_benchmark) automatically call /task/start and /task/stop so the FreeCycle tray reflects active MCP work. freecycle_pull_model routes through FreeCycle's /models/install endpoint instead of calling Ollama directly, so the local user must unlock remote model installs from the tray first. The unlock expires after one hour.
Prerequisites
- Node.js version 18 or later
Install and Build
- Install dependencies and build the MCP server:
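Assuming the standard npm workflow (the build script name is an assumption; check mcp-server/package.json for the authoritative scripts):

```shell
cd mcp-server
npm install      # fetch dependencies
npm run build    # compile TypeScript into dist/ (assumed script name)
```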
The compiled MCP server is now in mcp-server/dist/index.js.
Configure
Edit mcp-server/freecycle-mcp.config.json to set your FreeCycle and Ollama host/port addresses:
Key fields (see mcp-server/references/config-and-timeouts.md for full schema):
- `freecycle.host`, `freecycle.port`: Network address of the FreeCycle server
- `ollama.host`, `ollama.port`: Network address of the Ollama API
- `wakeOnLan.enabled`: Set to `true` if you want the MCP server to wake the FreeCycle machine on demand
- `wakeOnLan.macAddress`: MAC address of the FreeCycle machine (required if WoL is enabled)
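A sketch of the config shape implied by the fields above (host values and the MAC address are placeholders; see mcp-server/references/config-and-timeouts.md for the authoritative schema):

```json
{
  "freecycle": { "host": "192.168.1.50", "port": 7443 },
  "ollama": { "host": "192.168.1.50", "port": 11434 },
  "wakeOnLan": {
    "enabled": true,
    "macAddress": "AA:BB:CC:DD:EE:FF"
  }
}
```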
Register with Claude Code
If you have built the MCP server and configured it, register it with Claude Code:
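For example, with the Claude Code CLI (the server name `freecycle` is arbitrary):

```shell
claude mcp add freecycle -- node /path/to/FreeCycle/mcp-server/dist/index.js
```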
Adjust the paths to match your FreeCycle repository location.
Register with OpenAI Codex
Add the server to your Codex cline_mcp_settings.json or equivalent config file:
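A typical entry, following the common `mcpServers` config shape used by MCP clients (the exact file location and top-level key depend on your client):

```json
{
  "mcpServers": {
    "freecycle": {
      "command": "node",
      "args": ["/path/to/FreeCycle/mcp-server/dist/index.js"]
    }
  }
}
```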
First-Run Verification
After registering the MCP server, ask Claude Code or your MCP client to call the freecycle_status tool. A successful response means the MCP server can reach FreeCycle and is ready to use.
More Information
For detailed configuration, environment variable overrides, timeouts, tool descriptions, and troubleshooting, see:
- mcp-server/RAG_INDEX.md: Token-conscious index of all MCP reference files
- mcp-server/references/setup-and-registration.md: Detailed setup and registration steps
- mcp-server/references/config-and-timeouts.md: Complete configuration schema
- mcp-server/references/tools-and-routing.md: Tool list and readiness behavior
- mcp-server/references/failure-recovery.md: Troubleshooting guide
Development
cargo check # Type check
cargo build # Debug build
cargo build --release # Release build
cargo test # Run tests
cargo clippy # Lint
cargo fmt # Format
cargo run -- -v # Run with verbose logging
License
Apache 2.0