aidaemon
Website · Documentation · GitHub · Discord
A personal AI agent that runs as a background daemon, accessible via Telegram, Slack, or Discord, with tool use, MCP integration, web research, scheduled tasks, and persistent memory.
I built this because I wanted to control my computer from my phone, from anywhere. I also wanted it to run on cheap hardware - a Raspberry Pi, an old laptop, a $5/month VPS - without eating all the RAM just to sit idle waiting for messages.
Why Rust?
aidaemon runs 24/7 as a background daemon. It needs to be small, fast, and run on anything:
- Runs on cheap/old hardware - a lightweight Rust binary. On a Raspberry Pi or a $5 VPS with 512 MB RAM, it runs comfortably where heavier runtimes won't.
- Single binary, zero runtime - one binary; copy it to any machine and run it. Install with `curl -sSfL https://get.aidaemon.ai | bash` or `cargo install aidaemon`.
- Startup in milliseconds - restarts after a crash are near-instant, which matters for the auto-recovery retry loop.
- No garbage collector - predictable latency. No GC pauses between receiving the LLM response and sending the reply.
If you don't care about resource usage and want more channels (WhatsApp, Signal, iMessage) or a web canvas, check out OpenClaw which does similar things in TypeScript.
Features
Channels
- Telegram interface - chat with your AI assistant from any device
- Slack integration - Socket Mode support with threads, file sharing, and inline approvals
- Discord integration - bot with slash commands and thread support
- Dynamic bot management - add or list bots at runtime via the `/connect` and `/bots` commands, no restart needed
- Multi-bot support - run multiple Telegram, Slack, and Discord bots from a single daemon
LLM Providers
- Multiple providers - native Anthropic, Google Gemini, DeepSeek, and OpenAI-compatible (OpenAI, OpenRouter, Ollama, etc.)
- ExecutionPolicy routing - risk-based model selection using tool capabilities (read-only, side-effects, high-impact writes), uncertainty scoring, and mid-loop adaptation
- Token/cost tracking - per-session and daily usage statistics with optional budget limits
Tools & Agents
- 40+ tools - file operations (read, write, edit, search), git info/commit, terminal, system info, web research, browser, HTTP requests, and more
- Dynamic MCP management - add, remove, and configure MCP servers at runtime via the `manage_mcp` tool
- Browser tool - headless Chrome with screenshot, click, fill, and JS execution
- Web research - search (DuckDuckGo/Brave) and fetch tools for internet access
- HTTP requests - authenticated API calls with OAuth 1.0a, Bearer, Header, and Basic auth profiles
- Sub-agent spawning - recursive agents with configurable depth, iteration limit, and dynamic budget extension
- CLI agent delegation - delegate tasks to claude, gemini, codex, aider, copilot (auto-discovered via `which`)
- Goal tracking - long-running goals with task breakdown, scheduled runs, blockers, and diagnostic tracing
- Channel history - read recent Slack channel messages with time filtering and user resolution
- Skills system - trigger-based markdown instructions with dynamic management, remote registries, and auto-promotion from successful procedures
- Tool capability registry - each tool declares read_only, external_side_effect, needs_approval, idempotent, high_impact_write for risk-based filtering
OAuth & API Integration
- OAuth 2.0 PKCE - built-in flows for Twitter/X and GitHub, plus custom providers
- OAuth 1.0a - legacy API support (e.g., Twitter v1.1)
- HTTP auth profiles - pre-configured auth for external APIs (Bearer, Header, Basic, OAuth)
- Token management - tokens stored in OS keychain, automatic refresh, connection tracking
Memory & State
- Persistent memory - SQLite-backed conversation history + facts table, with fast in-memory working memory
- Memory consolidation - background fact extraction with vector embeddings (AllMiniLML6V2) for semantic recall
- Evidence-gated learning - stricter thresholds for auto-promoting procedures to skills (7+ successes, 90%+ success rate)
- Context window management - role-based token quotas with sliding window summarization
- People intelligence - organic contact management with auto-extracted facts, relationship tracking, and privacy controls
- Database encryption - SQLCipher AES-256 encryption at rest enabled by default, with automatic plaintext migration
Automation
- Scheduled tasks - cron-style task scheduling with natural language time parsing
- HeartbeatCoordinator - unified background task scheduler with jitter, semaphore-bounded concurrency, and exponential backoff
- Bounded auto-tuning - adaptive uncertainty threshold that adjusts based on task failure ratios
- Email triggers - IMAP IDLE monitors your inbox and notifies you on new emails
- Background task registry - track and cancel long-running tasks
File Transfer
- File sharing - send and receive files through your chat channel
- Configurable inbox/outbox - control where files are stored and which directories the agent can access
Security & Config
- Config manager - LLM can read/update `config.toml` with automatic backup, restore, and secrets redaction
- Command approval flow - inline keyboard (Allow Once / Allow Always / Deny) for unapproved terminal commands
- HTTP write approval - POST/PUT/PATCH/DELETE requests require user approval with risk classification
- Secrets management - OS keychain integration + environment variable support for API keys
Operations
- Web dashboard - built-in status page with usage stats, active sessions, and task monitoring
- Channel commands - `/model`, `/models`, `/auto`, `/reload`, `/restart`, `/clear`, `/cost`, `/tasks`, `/cancel`, `/connect`, `/bots`, `/help`
- Auto-retry with backoff - exponential backoff (5s -> 10s -> 20s -> 40s -> 60s cap) for dispatcher crashes
- Health endpoint - HTTP `/health` for monitoring
- Service installer - one command to install as a systemd or launchd service
- Setup wizard - interactive first-run setup, no manual config editing needed
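The auto-retry backoff listed above (5s doubling to a 60s cap) can be sketched as follows; `backoff_secs` is a hypothetical name for illustration, not aidaemon's actual function:

```rust
/// Exponential backoff for dispatcher restarts: 5s, 10s, 20s, 40s,
/// then capped at 60s. Name and signature are illustrative.
fn backoff_secs(attempt: u32) -> u64 {
    let base: u64 = 5;
    // 5 * 2^attempt, saturating, capped at 60 seconds
    base.saturating_mul(1u64 << attempt.min(6)).min(60)
}

fn main() {
    let delays: Vec<u64> = (0..6).map(backoff_secs).collect();
    println!("{:?}", delays); // [5, 10, 20, 40, 60, 60]
}
```

Capping the delay keeps a persistently crashing dispatcher from backing off indefinitely, so recovery attempts continue at a steady one-per-minute floor.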
Quick Start
One-line install (any VPS / Linux / macOS)
```bash
curl -sSfL https://get.aidaemon.ai | bash
```

Downloads the latest binary, verifies its SHA256 checksum, and installs to /usr/local/bin.
Homebrew (macOS / Linux)
Cargo
Build from source
The wizard will guide you through:
- Selecting your LLM provider (OpenAI, OpenRouter, Ollama, Google AI Studio, Anthropic, etc.)
- Entering your API key
- Selecting and setting up one or more channels (Telegram, Slack, Discord)
Configuration
All settings live in config.toml (generated by the wizard). See config.toml.example for the full reference.
Secrets Management
API keys and tokens can be specified in three ways (resolution order):
1. `"keychain"` – reads from the OS credential store (macOS Keychain, Windows Credential Manager, Linux Secret Service)
2. `"${ENV_VAR}"` – reads from an environment variable (for Docker/CI)
3. Plain value – used as-is (not recommended for production)
The setup wizard stores secrets in the OS keychain automatically.
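As a rough sketch, the resolution order above can be modeled like this; the `Secret` enum and `classify` function are illustrative names, not aidaemon's actual API, and the keychain lookup itself is stubbed out:

```rust
/// Illustrative three-way secret resolution: "keychain" → OS store,
/// "${VAR}" → environment, anything else → literal value.
enum Secret {
    Keychain,
    Env(String),
    Plain(String),
}

fn classify(raw: &str) -> Secret {
    if raw == "keychain" {
        Secret::Keychain
    } else if let Some(var) = raw.strip_prefix("${").and_then(|s| s.strip_suffix('}')) {
        Secret::Env(var.to_string())
    } else {
        Secret::Plain(raw.to_string())
    }
}

fn main() {
    for raw in ["keychain", "${AIDAEMON_API_KEY}", "sk-live-123"] {
        match classify(raw) {
            Secret::Keychain => println!("{raw}: OS keychain lookup"),
            Secret::Env(v) => println!("{raw}: read env var {v}"),
            Secret::Plain(_) => println!("{raw}: literal value"),
        }
    }
}
```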
Provider
```toml
[provider]
kind = "openai_compatible"   # "openai_compatible" (default), "google_genai", or "anthropic"
api_key = "keychain"         # or "${AIDAEMON_API_KEY}" or plain value
base_url = "https://openrouter.ai/api/v1"

[models]
primary = "openai/gpt-4o"
fast = "openai/gpt-4o-mini"
smart = "anthropic/claude-sonnet-4"
```
The `kind` field selects the provider protocol:
- `openai_compatible` (default) – works with OpenAI, OpenRouter, Ollama, DeepSeek, or any OpenAI-compatible API
- `google_genai` – native Google Generative AI API (Gemini models)
- `anthropic` – native Anthropic Messages API (Claude models)
The three model tiers (fast, primary, smart) are used by the smart router. Simple messages (greetings, short lookups) route to fast, complex tasks (code, multi-step reasoning) route to smart, and everything else goes to primary.
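As an illustration of tier routing, a complexity score in [0, 1] could map to the three tiers like this; the scoring heuristic, thresholds, and function name are assumptions, not aidaemon's actual router logic:

```rust
/// Illustrative tier selection: low complexity → fast, high → smart,
/// everything else → primary. Thresholds are made up for the sketch.
fn pick_tier(complexity: f32) -> &'static str {
    match complexity {
        c if c < 0.3 => "fast",    // greetings, short lookups
        c if c < 0.7 => "primary", // typical requests
        _ => "smart",              // code, multi-step reasoning
    }
}

fn main() {
    println!("{}", pick_tier(0.1)); // fast
    println!("{}", pick_tier(0.5)); // primary
    println!("{}", pick_tier(0.9)); // smart
}
```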
Telegram
```toml
[telegram]
bot_token = "keychain"          # or "${TELOXIDE_TOKEN}" or plain value
allowed_user_ids = [123456789]
```
Slack
Enabled by default in standard builds:
```toml
[slack]
app_token = "keychain"            # xapp-... Socket Mode token
bot_token = "keychain"            # xoxb-... Bot token
allowed_user_ids = ["U12345678"]  # Slack user IDs (strings)
reply_in_threads = true           # Reply in threads (default: true)
```
Slack is activated automatically when both app_token and bot_token are set.
If you want a minimal binary, build with --no-default-features and re-enable only what you need.
Terminal Tool
```toml
[terminal]
# Set to ["*"] to allow all commands (only if you trust the LLM fully)
allowed_prefixes = ["ls", "cat", "head", "tail", "echo", "date", "whoami", "pwd", "find", "grep"]
```
MCP Servers
MCP servers can be configured statically or added at runtime via the manage_mcp tool.
```toml
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```
The manage_mcp tool supports runtime management:
- add – add and start a new MCP server (allowed commands: `npx`, `uvx`, `node`, `python`, `python3`)
- list – list all registered servers and their tools
- remove – remove a server
- set_env – store API keys for a server in the OS keychain
- restart – restart a server with a fresh environment from the keychain
Browser
```toml
[browser]
enabled = true
headless = true
window_width = 1280
window_height = 720
# Use an existing Chrome profile to inherit cookies/sessions
# user_data_dir = "~/Library/Application Support/Google/Chrome"
# profile = "Default"
```
Sub-agents
```toml
[sub_agents]
enabled = true
max_depth = 3             # max nesting levels
max_iterations = 10       # initial agentic loop iterations per sub-agent
max_iterations_cap = 25   # max iterations even with dynamic budget extension
max_tokens = 8000
timeout_secs = 300        # 5 minute timeout per sub-agent
```
Sub-agents can request additional iterations via the request_more_iterations tool, up to max_iterations_cap.
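The budget extension rule can be sketched as a small state machine; the `Budget` struct and method names below are illustrative, not aidaemon's actual implementation, though the numbers mirror the config defaults above:

```rust
/// Illustrative iteration budget: starts at max_iterations, extendable
/// on request, never exceeding max_iterations_cap in total.
struct Budget {
    remaining: u32,
    cap: u32,
    used: u32,
}

impl Budget {
    fn new(initial: u32, cap: u32) -> Self {
        Self { remaining: initial, cap, used: 0 }
    }

    /// Spend one iteration; returns false when the budget is exhausted.
    fn consume(&mut self) -> bool {
        if self.remaining == 0 {
            return false;
        }
        self.remaining -= 1;
        self.used += 1;
        true
    }

    /// Grant up to `extra` more iterations without exceeding the cap;
    /// returns how many were actually granted.
    fn request_more(&mut self, extra: u32) -> u32 {
        let headroom = self.cap.saturating_sub(self.used + self.remaining);
        let granted = extra.min(headroom);
        self.remaining += granted;
        granted
    }
}

fn main() {
    let mut b = Budget::new(10, 25);
    // Asking for 20 more only grants 15: 10 initial + 15 = the cap of 25.
    println!("granted: {}", b.request_more(20));
    while b.consume() {}
    println!("total used: {}", b.used);
}
```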
CLI Agents
For the smoothest unattended agent-to-agent workflows, run aidaemon on a dedicated machine (small VPS, mini PC, or spare laptop) and interact with it remotely from chat. This keeps your day-to-day workstation separate while letting delegated CLI agents run with minimal friction.
If you prefer running on your primary machine, you can still use CLI agents with more conservative flags.
Recommended unattended profile (dedicated host):
```toml
[cli_agents]
enabled = true
timeout_secs = 600
max_output_tokens = 16000

# Tools are auto-discovered via `which`. Override or add your own:
[cli_agents.claude]
command = "claude"
args = ["-p", "--dangerously-skip-permissions", "--output-format", "stream-json", "--verbose"]

[cli_agents.gemini]
command = "gemini"
args = ["--sandbox=false", "--yolo", "--output-format", "stream-json"]

[cli_agents.codex]
command = "codex"
args = ["exec", "--json", "--dangerously-bypass-approvals-and-sandbox"]
```
When cli_agent runs without a specific tool, aidaemon auto-selects the best installed default in this order: claude, gemini, codex, copilot, aider.
Conservative profile (primary machine):
```toml
[cli_agents.codex]
command = "codex"
args = ["exec", "--json", "--full-auto"]
```
Skills
Skills are trigger-based markdown instructions that guide the agent's behavior. They can be loaded from a directory, added from URLs, created inline, or installed from remote registries.
```toml
[skills]
enabled = true
directory = "skills"   # relative to config.toml location
# Optional: remote registries for browsing and installing community skills
registries = [
  "https://example.com/skills/registry.json"
]
```
The manage_skills tool supports runtime management:
- add – add a skill from a URL
- add_inline – create a skill from raw markdown with YAML frontmatter
- list – list all loaded skills with their status and triggers
- remove/enable/disable – manage individual skills
- browse – search remote skill registries
- install – install a skill from a registry
- update – re-fetch a skill from its source URL
Successful procedures (>= 7 uses, >= 90% success rate) are automatically promoted to skills every 12 hours via evidence-gated learning.
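The evidence gate can be expressed as a simple predicate; the function name and exact arithmetic are illustrative, assuming "uses" counts both successes and failures:

```rust
/// Illustrative evidence gate for auto-promotion: at least 7 successful
/// uses and a success rate of 90% or better.
fn qualifies_for_promotion(successes: u32, failures: u32) -> bool {
    let total = successes + failures;
    successes >= 7 && total > 0 && (successes as f64 / total as f64) >= 0.90
}

fn main() {
    println!("{}", qualifies_for_promotion(9, 1)); // exactly 90% → true
    println!("{}", qualifies_for_promotion(7, 2)); // ~78% → false
}
```

Requiring both an absolute floor and a ratio prevents a procedure that succeeded once or twice from being promoted on a 100% rate alone.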
OAuth
OAuth enables the agent to authenticate with external services like Twitter/X and GitHub. Built-in providers require no URL configuration – just enable OAuth and set credentials.
```toml
[oauth]
enabled = true
redirect_base = "http://localhost:8080"   # must match your OAuth app's redirect URI

# Optional: add custom OAuth providers beyond the built-in Twitter/GitHub
[oauth.providers.stripe]
kind = "oauth2_pkce"
auth_url = "https://connect.stripe.com/oauth/authorize"
token_url = "https://connect.stripe.com/oauth/token"
scopes = ["read_write"]
allowed_domains = ["api.stripe.com"]
```
Built-in providers (no URL config needed):
- twitter (alias: x) – tweet read/write, user info, offline access
- github – user info, repository access
The manage_oauth tool handles the full lifecycle:
- providers – list available providers and credential status
- set_credentials – store client_id/client_secret in the OS keychain
- connect – start the OAuth flow (displays authorize URL, waits for callback)
- list – show connected services with token expiry
- refresh – refresh an expired access token
- remove – disconnect a service
HTTP Auth Profiles
Pre-configured auth profiles for external APIs, used by the http_request tool. Supports four auth types:
```toml
# Bearer token (OAuth 2.0 or API key)
[http_auth.stripe]
type = "bearer"
allowed_domains = ["api.stripe.com"]
token = "keychain"

# OAuth 1.0a (e.g., Twitter v1.1 API)
[http_auth.twitter_v1]
type = "oauth1a"
allowed_domains = ["api.twitter.com"]
consumer_key = "keychain"
consumer_secret = "keychain"
access_token = "keychain"
access_token_secret = "keychain"

# Custom header auth
[http_auth.example]
type = "header"
allowed_domains = ["api.example.com"]
header_name = "X-API-Key"
header_value = "keychain"

# Basic auth
[http_auth.internal]
type = "basic"
allowed_domains = ["internal.company.com"]
username = "service_account"
password = "keychain"
```
All credential fields support `"keychain"` for OS keychain storage. The `allowed_domains` field is required and enforces domain restrictions on each profile.
OAuth connections established via manage_oauth automatically create auth profiles – no manual config needed for built-in providers.
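Conceptually, the domain restriction boils down to a host check before a profile's credentials are attached to a request. This exact-match sketch is an assumption; the real implementation may also normalize or handle subdomains:

```rust
/// Illustrative allowed_domains check: the request host must match an
/// entry in the profile's allowlist (case-insensitive, exact match).
fn domain_allowed(host: &str, allowed: &[&str]) -> bool {
    allowed.iter().any(|d| d.eq_ignore_ascii_case(host))
}

fn main() {
    let allowed = ["api.stripe.com"];
    println!("{}", domain_allowed("api.stripe.com", &allowed)); // true
    println!("{}", domain_allowed("evil.example.com", &allowed)); // false
}
```

Tying credentials to a domain list means a prompt-injected URL pointing elsewhere cannot silently carry the profile's secret.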
People Intelligence
Organic contact management that learns about people from conversations. Disabled by default.
```toml
[people]
enabled = true
auto_extract = true          # learn facts from conversations
categories = [               # categories to auto-extract
  "birthday", "preference", "interest",
  "work", "family", "important_date"
]
restricted_categories = [    # never auto-extracted
  "health", "finance", "political", "religious"
]
fact_retention_days = 180    # auto-delete stale facts
reconnect_after_days = 30    # suggest reconnecting after inactivity
```
The manage_people tool provides manual control:
- add/list/view/update/remove – manage person records
- add_fact/remove_fact – manage facts about a person
- link – link a platform identity (e.g., `slack:U123`, `telegram:456`)
- export/purge – export or delete all data for a person
- audit/confirm – review and verify auto-extracted facts
Privacy model:
- Owner sees the full contact graph in DMs
- Non-owners get communication style adaptation only
- Public channels receive no personal fact injection
- Restricted categories are never auto-extracted
Background tasks run daily: stale fact pruning, upcoming date reminders (14-day window), and reconnect suggestions.
Email Triggers
```toml
[email_trigger]
server = "imap.gmail.com"
port = 993
username = "you@gmail.com"
password = "keychain"   # or "${AIDAEMON_EMAIL_PASSWORD}"
folder = "INBOX"
```
Web Search
```toml
[web_search]
provider = "duck_duck_go"    # "duck_duck_go" (default, no API key) or "brave"
brave_api_key = "keychain"   # required for Brave Search
```
Scheduled Tasks
```toml
[scheduler]
enabled = true
check_interval_secs = 30   # how often to check for due tasks

[[tasks]]
name = "daily-summary"
schedule = "every day at 9am"   # natural language or cron syntax
prompt = "Summarize my unread emails"
one_shot = false   # if true, runs once then deletes
trusted = false    # if true, skips terminal approval
```
File Transfer
```toml
[file_transfer]
enabled = true
inbox_dir = "~/.aidaemon/files/inbox"   # where received files are stored
allowed_dirs = ["~"]                    # directories the agent can send files from
max_file_size_mb = 10
retention_hours = 24                    # auto-delete received files after this time
```
Daemon & Dashboard
```toml
[daemon]
port = 8080
bind = "127.0.0.1"   # bind address for health endpoint (default: 127.0.0.1)
dashboard = true     # enable web dashboard (default: true)
```
The dashboard provides a web UI at http://127.0.0.1:8080/ with status, usage stats, active sessions, and task monitoring. Authentication uses a token stored in the OS keychain.
State
```toml
[state]
db_path = "aidaemon.db"
working_memory_size = 50
consolidation_interval_hours = 6   # how often to run memory consolidation
max_facts_in_prompt = 100          # max facts to include in system prompt
daily_token_limit = 1000000        # optional daily token limit (resets at midnight UTC)
# encryption_key = "keychain"      # optional override; defaults to AIDAEMON_ENCRYPTION_KEY from .env
```
On startup, aidaemon enforces encrypted state by default:
- If `AIDAEMON_ENCRYPTION_KEY` is missing and the DB is new or plaintext, a key is generated and written to `.env`.
- Existing plaintext SQLite DBs are migrated automatically to SQLCipher with backup + integrity checks.
- Set `AIDAEMON_ALLOW_PLAINTEXT_DB=1` only for emergency recovery scenarios.

Encryption at rest protects against database-file exposure, but if an attacker obtains both the DB and the key, the data can be decrypted.
Channel Commands
These commands work in Telegram, Slack, and Discord:
| Command | Description |
|---|---|
| `/model` | Show current model |
| `/model <name>` | Switch to a specific model (disables auto-routing) |
| `/models` | List available models from provider |
| `/auto` | Re-enable automatic model routing by query complexity |
| `/reload` | Reload config.toml (applies model changes, re-enables auto-routing) |
| `/restart` | Restart the daemon (picks up new binary, config, MCP servers) |
| `/clear` | Clear conversation context and start fresh |
| `/cost` | Show token usage statistics for the current session |
| `/tasks` | List running and recent background tasks |
| `/cancel <id>` | Cancel a running background task |
| `/connect <channel> <token>` | Add a new bot at runtime (Telegram, Slack, Discord) |
| `/bots` | List all connected bots (config-based and dynamic) |
| `/help` | Show available commands |
Running as a Service
# macOS (launchd)
# Linux (systemd)
Security Model
Where to Run It
aidaemon works great on any dedicated machine – an old laptop, a Mac Mini, a Raspberry Pi, or a $5/mo VPS. Docker works too, but it is not required. For the best long-running setup, give aidaemon its own machine and treat your everyday workstation as a separate environment.
Application-Level Protections
- User authentication – `allowed_user_ids` is enforced on every message and callback query. Unauthorized users are silently ignored.
- Role-based access control – Owner, Guest, and Public roles with different tool access levels. Scheduled task management is restricted to owners.
- Terminal allowlist – commands must match an `allowed_prefixes` entry using word-boundary matching (`"ls"` allows `ls -la` but not `lsblk`). Set to `["*"]` to allow all.
- Shell operator detection – commands containing `;`, `|`, `` ` ``, `&&`, `||`, `$(`, `>(`, `<(`, or newlines always require approval, regardless of prefix match.
- Command approval flow – unapproved commands trigger an inline keyboard (Allow Once / Allow Always / Deny). The agent blocks until you respond.
- Persistent approvals – "Allow Always" choices are persisted across restarts. Use `permission_mode = "cautious"` to make all approvals session-only.
- Path verification – file-modifying commands are blocked unless the target paths were first observed via read-only commands (e.g., `ls`, `cat`).
- Stall detection – consecutive same-tool loops, alternating tool patterns, and hard iteration caps prevent runaway agent execution.
- HTTP request approval – write operations (POST, PUT, PATCH, DELETE) and authenticated requests require user approval with risk classification.
- SSRF protection – HTTP requests, redirects, and MCP server additions validate URLs against private IP ranges, localhost, and metadata endpoints.
- HTTPS enforcement – the `http_request` tool only allows HTTPS URLs.
- Domain allowlists – each HTTP auth profile restricts which domains it can authenticate against.
- Input sanitization – external content (tool outputs, web fetches, trigger payloads, skill bodies) is stripped of prompt injection patterns and invisible Unicode before reaching the LLM.
- Untrusted trigger sessions – sessions originating from automated sources (e.g., email triggers, scheduled tasks with `trusted = false`) require terminal approval for every command.
- Sub-agent isolation – sub-agents inherit the parent's user role (no privilege escalation) and share the parent's path verification tracker.
- MCP environment scrubbing – MCP server subprocesses start with a minimal environment; credentials are not forwarded unless explicitly configured.
- Config secrets redaction – when the LLM reads config via the config manager tool, sensitive keys (`api_key`, `password`, `bot_token`, etc.) are replaced with `[REDACTED]`.
- Config change approval – sensitive config modifications (API keys, allowed users, terminal wildcards) require explicit user approval.
- OAuth token security – OAuth tokens and dynamic bot tokens are stored in the OS keychain, never in config files or chat history.
- Encrypted state by default – database contents are encrypted at rest; startup auto-migrates legacy plaintext DBs with rollback-safe backup.
- Public channel protection – public-facing channels use a minimal system prompt with no internal architecture details, and output is sanitized to redact secrets.
- Dashboard security – bearer token authentication with rate limiting, token expiration (24h), and constant-time comparison.
- File permissions – config backups are written with `0600` (owner-only read/write) on Unix.
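The terminal allowlist and shell-operator rules above can be sketched roughly as follows; the function names are illustrative, and any behavior beyond what the README states (e.g., how whitespace boundaries are detected) is an assumption:

```rust
/// Shell operators that always force approval, regardless of prefix match.
fn has_shell_operators(cmd: &str) -> bool {
    const OPS: [&str; 8] = [";", "|", "`", "&&", "||", "$(", ">(", "<("];
    cmd.contains('\n') || OPS.iter().any(|op| cmd.contains(op))
}

/// Word-boundary prefix match: "ls" allows "ls -la" but not "lsblk".
fn prefix_allows(prefix: &str, cmd: &str) -> bool {
    cmd == prefix
        || cmd
            .strip_prefix(prefix)
            .map_or(false, |rest| rest.starts_with(char::is_whitespace))
}

/// A command needs approval if it contains shell operators, or if no
/// allowlist entry (or wildcard) matches it.
fn needs_approval(cmd: &str, allowed: &[&str]) -> bool {
    has_shell_operators(cmd)
        || !(allowed.contains(&"*") || allowed.iter().any(|p| prefix_allows(p, cmd)))
}

fn main() {
    let allowed = ["ls", "cat"];
    println!("{}", needs_approval("ls -la", &allowed));      // false
    println!("{}", needs_approval("lsblk", &allowed));       // true
    println!("{}", needs_approval("ls; rm -rf /", &allowed)); // true
}
```

Note that the operator check runs before the wildcard check, so even an `["*"]` allowlist still routes chained or substituted commands through approval.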
Inspired by OpenClaw
aidaemon was inspired by OpenClaw (GitHub), a personal AI assistant that runs on your own devices and connects to channels like WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and more.
Both projects share the same goal: a self-hosted AI assistant you control. The key differences:
| | aidaemon | OpenClaw |
|---|---|---|
| Language | Rust | TypeScript/Node.js |
| Channels | Telegram, Slack, Discord | WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Teams, and more |
| Scope | Lightweight daemon with web dashboard | Full-featured platform with web UI, canvas, TTS, browser control |
| Config | Single config.toml with keychain secrets | JSON5 config with hot-reload and file watching |
| Error recovery | Inline error classification per HTTP status, model fallback, config backup rotation | Multi-layer retry policies, auth profile cooldowns, provider rotation, restart sentinels |
| State | SQLite + in-memory working memory (encrypted by default) | Pluggable storage with session management |
| Install | `curl -sSfL https://get.aidaemon.ai \| bash` | npm/Docker |
| Dependencies | ~30 crates, single static binary | Node.js ecosystem |
aidaemon is designed for users who want a lightweight daemon in Rust with essential features. If you need more channels (WhatsApp, Signal, iMessage) or a richer plugin ecosystem, check out OpenClaw.
Architecture
```
Channels ──► Agent ──► ExecutionPolicy ──► Router ──► LLM Provider
(Telegram,    │        (risk gate,         (profile   (OpenAI-compatible /
 Slack,       │         uncertainty,        → model    Anthropic /
 Discord)     │         tool filtering)     mapping)   Google Gemini)
              │
              ├── Tools (40+, with ToolCapabilities)
              │   ├── File ops (read, write, edit, search, project inspect)
              │   ├── Terminal / RunCommand (with approval flow)
              │   ├── Git (info, commit)
              │   ├── Browser (headless Chrome)
              │   ├── Web research (search + fetch)
              │   ├── HTTP requests (with auth profiles + OAuth)
              │   ├── MCP servers (JSON-RPC over stdio, dynamic management)
              │   ├── Sub-agents / CLI agents (claude, gemini, codex, aider)
              │   ├── Goals & tasks (manage, schedule, trace, blockers)
              │   ├── People intelligence (contact management)
              │   ├── Skills (use, manage, resources)
              │   └── OAuth, config, health probe, diagnostics
              │
              ├── State
              │   ├── SQLite (messages, facts, episodes, goals, procedures)
              │   └── In-memory working memory (VecDeque, capped)
              │
              ├── Memory Manager
              │   ├── Fact extraction (evidence-gated consolidation)
              │   ├── Vector embeddings (AllMiniLML6V2)
              │   ├── Context window (role-based token quotas)
              │   └── People intelligence (organic fact learning)
              │
              ├── HeartbeatCoordinator (unified background tasks)
              │
              └── Skills (trigger-based, with registries + auto-promotion)

Triggers ──► EventBus ──► Agent ──► Channel notification
  ├── IMAP IDLE (email)
  └── Goal scheduler (60s tick)

Health server (axum) ──► GET /health + Web Dashboard + OAuth callbacks
```
- Agent loop: user message → ExecutionPolicy (risk score + uncertainty) → Router (profile → model) → call LLM → tool execution with capability filtering → mid-loop adaptation → final response
- Working memory: `VecDeque<Message>` in RAM, capped at N messages, hydrated from SQLite on cold start
- Session ID = channel-specific chat/thread ID
- MCP: spawns server subprocesses, communicates via JSON-RPC over stdio. Servers can be added/removed at runtime.
- Memory consolidation: periodically extracts durable facts from conversations, stores them with vector embeddings for semantic retrieval
- People intelligence: auto-extracts contact facts during consolidation, runs daily background tasks for date reminders and reconnect suggestions
- Token tracking: per-request usage logged to SQLite, queryable via the `/cost` command or dashboard