aidaemon 0.7.6

# aidaemon

[Website](https://aidaemon.ai/) · [Documentation](https://docs.aidaemon.ai/) · [GitHub](https://github.com/davo20019/aidaemon) · [Discord](https://discord.gg/JCCPtEEy) · [𝕏](https://x.com/aidaemon_ai)

A personal AI agent that runs as a background daemon, accessible via Telegram, Slack, or Discord, with tool use, MCP integration, web research, scheduled tasks, and persistent memory.

I built this because I wanted to control my computer from my phone, from anywhere. I also wanted it to run on cheap hardware - a Raspberry Pi, an old laptop, a $5/month VPS - without eating all the RAM just to sit idle waiting for messages.

## Why Rust?

aidaemon runs 24/7 as a background daemon. It needs to be small, fast, and run on anything:

- **Runs on cheap/old hardware** - a lightweight Rust binary. On a Raspberry Pi or a $5 VPS with 512 MB RAM, it runs comfortably where heavier runtimes won't.
- **Single binary, zero runtime** - one binary, copy it to any machine and run it. Install with `curl -sSfL https://get.aidaemon.ai | bash` or `cargo install aidaemon`.
- **Startup in milliseconds** - restarts after a crash are near-instant, which matters for the auto-recovery retry loop.
- **No garbage collector** - predictable latency. No GC pauses between receiving the LLM response and sending the reply.

If you don't care about resource usage and want more channels (WhatsApp, Signal, iMessage) or a web canvas, check out [OpenClaw](https://openclaw.ai) which does similar things in TypeScript.

## Features

### Channels
- **Telegram interface** - chat with your AI assistant from any device
- **Slack integration** - Socket Mode support with threads, file sharing, and inline approvals
- **Discord integration** - bot with slash commands and thread support
- **Dynamic bot management** - add or list bots at runtime via `/connect` and `/bots` commands, no restart needed
- **Multi-bot support** - run multiple Telegram, Slack, and Discord bots from a single daemon

### LLM Providers
- **Multiple providers** - native Anthropic, Google Gemini, DeepSeek, and OpenAI-compatible (OpenAI, OpenRouter, Ollama, etc.)
- **ExecutionPolicy routing** - risk-based model selection using tool capabilities (read-only, side-effects, high-impact writes), uncertainty scoring, and mid-loop adaptation
- **Token/cost tracking** - per-session and daily usage statistics with optional budget limits

### Tools & Agents
- **40+ tools** - file operations (read, write, edit, search), git info/commit, terminal, system info, web research, browser, HTTP requests, and more
- **Dynamic MCP management** - add, remove, and configure MCP servers at runtime via the `manage_mcp` tool
- **Browser tool** - headless Chrome with screenshot, click, fill, and JS execution
- **Web research** - search (DuckDuckGo/Brave) and fetch tools for internet access
- **HTTP requests** - authenticated API calls with OAuth 1.0a, Bearer, Header, and Basic auth profiles
- **Sub-agent spawning** - recursive agents with configurable depth, iteration limit, and dynamic budget extension
- **CLI agent delegation** - delegate tasks to claude, gemini, codex, aider, copilot (auto-discovered via `which`)
- **Goal tracking** - long-running goals with task breakdown, scheduled runs, blockers, and diagnostic tracing
- **Channel history** - read recent Slack channel messages with time filtering and user resolution
- **Skills system** - trigger-based markdown instructions with dynamic management, remote registries, and auto-promotion from successful procedures
- **Tool capability registry** - each tool declares read_only, external_side_effect, needs_approval, idempotent, high_impact_write for risk-based filtering

### OAuth & API Integration
- **OAuth 2.0 PKCE** - built-in flows for Twitter/X and GitHub, plus custom providers
- **OAuth 1.0a** - legacy API support (e.g., Twitter v1.1)
- **HTTP auth profiles** - pre-configured auth for external APIs (Bearer, Header, Basic, OAuth)
- **Token management** - tokens stored in OS keychain, automatic refresh, connection tracking

### Memory & State
- **Persistent memory** - SQLite-backed conversation history + facts table, with fast in-memory working memory
- **Memory consolidation** - background fact extraction with vector embeddings (AllMiniLML6V2) for semantic recall
- **Evidence-gated learning** - stricter thresholds for auto-promoting procedures to skills (7+ successes, 90%+ success rate)
- **Context window management** - role-based token quotas with sliding window summarization
- **People intelligence** - organic contact management with auto-extracted facts, relationship tracking, and privacy controls
- **Database encryption** - SQLCipher AES-256 encryption at rest enabled by default, with automatic plaintext migration

### Automation
- **Scheduled tasks** - cron-style task scheduling with natural language time parsing
- **HeartbeatCoordinator** - unified background task scheduler with jitter, semaphore-bounded concurrency, and exponential backoff
- **Bounded auto-tuning** - adaptive uncertainty threshold that adjusts based on task failure ratios
- **Email triggers** - IMAP IDLE monitors your inbox and notifies you on new emails
- **Background task registry** - track and cancel long-running tasks

### File Transfer
- **File sharing** - send and receive files through your chat channel
- **Configurable inbox/outbox** - control where files are stored and which directories the agent can access

### Security & Config
- **Config manager** - LLM can read/update `config.toml` with automatic backup, restore, and secrets redaction
- **Command approval flow** - inline keyboard (Allow Once / Allow Always / Deny) for unapproved terminal commands
- **HTTP write approval** - POST/PUT/PATCH/DELETE requests require user approval with risk classification
- **Secrets management** - OS keychain integration + environment variable support for API keys

### Operations
- **Web dashboard** - built-in status page with usage stats, active sessions, and task monitoring
- **Channel commands** - `/model`, `/models`, `/auto`, `/reload`, `/restart`, `/clear`, `/cost`, `/tasks`, `/cancel`, `/connect`, `/bots`, `/help`
- **Auto-retry with backoff** - exponential backoff (5s -> 10s -> 20s -> 40s -> 60s cap) for dispatcher crashes
- **Health endpoint** - HTTP `/health` for monitoring
- **Service installer** - one command to install as a systemd or launchd service
- **Setup wizard** - interactive first-run setup, no manual config editing needed
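The retry schedule above can be sketched in a few lines of Rust (an illustration of the documented delays, not the daemon's actual code):

```rust
/// Delay in seconds before the nth restart attempt (0-based):
/// 5s -> 10s -> 20s -> 40s -> 60s cap, as described above.
fn backoff_delay_secs(attempt: u32) -> u64 {
    let base: u64 = 5;
    // Clamp the shift so large attempt counts can't overflow.
    let delay = base.saturating_mul(1u64 << attempt.min(6));
    delay.min(60)
}

fn main() {
    let delays: Vec<u64> = (0..6).map(backoff_delay_secs).collect();
    println!("{:?}", delays); // [5, 10, 20, 40, 60, 60]
}
```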

## Quick Start

### One-line install (any VPS / Linux / macOS)

```bash
curl -sSfL https://get.aidaemon.ai | bash
```

Downloads the latest binary, verifies its SHA256 checksum, and installs to `/usr/local/bin`.

### Homebrew (macOS / Linux)

```bash
brew install davo20019/tap/aidaemon
aidaemon  # launches the setup wizard on first run
```

### Cargo

```bash
cargo install aidaemon
aidaemon
```

### Build from source

```bash
cargo build --release
./target/release/aidaemon
```

The wizard will guide you through:
1. Selecting your LLM provider (OpenAI, OpenRouter, Ollama, Google AI Studio, Anthropic, etc.)
2. Entering your API key
3. Selecting and setting up one or more channels (Telegram, Slack, Discord)

## Configuration

All settings live in `config.toml` (generated by the wizard). See [`config.toml.example`](config.toml.example) for the full reference.

### Secrets Management

API keys and tokens can be specified in three ways (resolution order):

1. **`"keychain"`** — reads from OS credential store (macOS Keychain, Windows Credential Manager, Linux Secret Service)
2. **`"${ENV_VAR}"`** — reads from environment variable (for Docker/CI)
3. **Plain value** — used as-is (not recommended for production)

The setup wizard stores secrets in the OS keychain automatically.
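The resolution logic can be illustrated with a small Rust sketch (`resolve_secret` is a hypothetical helper; the real daemon additionally reads the OS keychain):

```rust
use std::env;

/// Sketch of how a config value like api_key = "..." might be resolved.
/// The keychain branch is stubbed out here; the real code queries the
/// OS credential store.
fn resolve_secret(raw: &str) -> Result<String, String> {
    if raw == "keychain" {
        return Err("keychain lookup not available in this sketch".into());
    }
    // "${VAR}" reads from the environment.
    if let Some(var) = raw.strip_prefix("${").and_then(|s| s.strip_suffix('}')) {
        return env::var(var).map_err(|_| format!("env var {var} not set"));
    }
    Ok(raw.to_string()) // plain value, used as-is
}

fn main() {
    // PATH is set on any normal system, so this resolves.
    assert!(resolve_secret("${PATH}").is_ok());
    assert_eq!(resolve_secret("sk-plain").unwrap(), "sk-plain");
    assert!(resolve_secret("keychain").is_err());
}
```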

### Provider

```toml
[provider]
kind = "openai_compatible"  # "openai_compatible" (default), "google_genai", or "anthropic"
api_key = "keychain"        # or "${AIDAEMON_API_KEY}" or plain value
base_url = "https://openrouter.ai/api/v1"

[provider.models]
primary = "openai/gpt-4o"
fast = "openai/gpt-4o-mini"
smart = "anthropic/claude-sonnet-4"
```

The `kind` field selects the provider protocol:
- `openai_compatible` (default) โ€” works with OpenAI, OpenRouter, Ollama, DeepSeek, or any OpenAI-compatible API
- `google_genai` โ€” native Google Generative AI API (Gemini models)
- `anthropic` โ€” native Anthropic Messages API (Claude models)

The three model tiers (`fast`, `primary`, `smart`) are used by the smart router. Simple messages (greetings, short lookups) route to `fast`, complex tasks (code, multi-step reasoning) route to `smart`, and everything else goes to `primary`.
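A toy version of the tier decision might look like this (the keyword and length heuristics here are invented for illustration; the real router scores complexity differently):

```rust
/// Illustrative tier selection for the smart router described above.
/// The cutoffs and keywords are assumptions, not the daemon's heuristics.
fn route_tier(message: &str) -> &'static str {
    let words = message.split_whitespace().count();
    let lower = message.to_lowercase();
    let looks_complex = message.contains("```")
        || ["refactor", "implement", "debug", "multi-step"]
            .iter()
            .any(|kw| lower.contains(kw));
    if looks_complex {
        "smart" // code, multi-step reasoning
    } else if words <= 4 {
        "fast" // greetings, short lookups
    } else {
        "primary" // everything else
    }
}

fn main() {
    assert_eq!(route_tier("hi there"), "fast");
    assert_eq!(route_tier("please refactor this module into smaller files"), "smart");
    assert_eq!(route_tier("what is the weather like in Berlin today"), "primary");
}
```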

### Telegram

```toml
[telegram]
bot_token = "keychain"           # or "${TELOXIDE_TOKEN}" or plain value
allowed_user_ids = [123456789]
```

### Slack
Enabled by default in standard builds:

```toml
[slack]
app_token = "keychain"           # xapp-... Socket Mode token
bot_token = "keychain"           # xoxb-... Bot token
allowed_user_ids = ["U12345678"] # Slack user IDs (strings)
use_threads = true               # Reply in threads (default: true)
```

Slack is activated automatically when both `app_token` and `bot_token` are set.
If you want a minimal binary, build with `--no-default-features` and re-enable only what you need.

### Terminal Tool

```toml
[terminal]
# Set to ["*"] to allow all commands (only if you trust the LLM fully)
allowed_prefixes = ["ls", "cat", "head", "tail", "echo", "date", "whoami", "pwd", "find", "grep"]
```

### MCP Servers

MCP servers can be configured statically or added at runtime via the `manage_mcp` tool.

```toml
[mcp.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```

The `manage_mcp` tool supports runtime management:
- **add** — add and start a new MCP server (allowed commands: `npx`, `uvx`, `node`, `python`, `python3`)
- **list** — list all registered servers and their tools
- **remove** — remove a server
- **set_env** — store API keys for a server in the OS keychain
- **restart** — restart a server with fresh environment from keychain

### Browser

```toml
[browser]
enabled = true
headless = true
screenshot_width = 1280
screenshot_height = 720
# Use an existing Chrome profile to inherit cookies/sessions
# user_data_dir = "~/Library/Application Support/Google/Chrome"
# profile = "Default"
```

### Sub-agents

```toml
[subagents]
enabled = true
max_depth = 3              # max nesting levels
max_iterations = 10        # initial agentic loop iterations per sub-agent
max_iterations_cap = 25    # max iterations even with dynamic budget extension
max_response_chars = 8000
timeout_secs = 300         # 5 minute timeout per sub-agent
```

Sub-agents can request additional iterations via the `request_more_iterations` tool, up to `max_iterations_cap`.
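The budget extension rule can be sketched as follows (`IterationBudget` is a hypothetical type for illustration; extensions are granted only up to `max_iterations_cap`):

```rust
/// Hypothetical sketch of the sub-agent iteration budget described above.
struct IterationBudget {
    limit: u32, // current iteration limit (starts at max_iterations)
    cap: u32,   // max_iterations_cap: hard ceiling on extensions
}

impl IterationBudget {
    /// Grant more iterations, clamped at the cap; returns the new limit.
    fn request_more(&mut self, extra: u32) -> u32 {
        self.limit = self.limit.saturating_add(extra).min(self.cap);
        self.limit
    }
}

fn main() {
    let mut budget = IterationBudget { limit: 10, cap: 25 };
    assert_eq!(budget.request_more(10), 20);
    assert_eq!(budget.request_more(10), 25); // clamped at max_iterations_cap
}
```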

### CLI Agents

For the smoothest unattended agent-to-agent workflows, run `aidaemon` on a dedicated machine (small VPS, mini PC, or spare laptop) and interact with it remotely from chat. This keeps your day-to-day workstation separate while letting delegated CLI agents run with minimal friction.

If you prefer running on your primary machine, you can still use CLI agents with more conservative flags.

Recommended unattended profile (dedicated host):

```toml
[cli_agents]
enabled = true
timeout_secs = 600
max_output_chars = 16000

# Tools are auto-discovered via `which`. Override or add your own:
[cli_agents.tools.claude]
command = "claude"
args = ["-p", "--dangerously-skip-permissions", "--output-format", "stream-json", "--verbose"]

[cli_agents.tools.gemini]
command = "gemini"
args = ["--sandbox=false", "--yolo", "--output-format", "stream-json"]

[cli_agents.tools.codex]
command = "codex"
args = ["exec", "--json", "--dangerously-bypass-approvals-and-sandbox"]
```

When `cli_agent` runs without specifying a `tool`, aidaemon auto-selects the best installed default in this order: `claude`, `gemini`, `codex`, `copilot`, `aider`.

Conservative profile (primary machine):

```toml
[cli_agents.tools.codex]
command = "codex"
args = ["exec", "--json", "--full-auto"]
```

### Skills

Skills are trigger-based markdown instructions that guide the agent's behavior. They can be loaded from a directory, added from URLs, created inline, or installed from remote registries.

```toml
[skills]
enabled = true
dir = "skills"    # relative to config.toml location

# Optional: remote registries for browsing and installing community skills
registries = [
    "https://example.com/skills/registry.json"
]
```

The `manage_skills` tool supports runtime management:
- **add** — add a skill from a URL
- **add_inline** — create a skill from raw markdown with YAML frontmatter
- **list** — list all loaded skills with their status and triggers
- **remove/enable/disable** — manage individual skills
- **browse** — search remote skill registries
- **install** — install a skill from a registry
- **update** — re-fetch a skill from its source URL

Successful procedures (>= 7 uses, >= 90% success rate) are automatically promoted to skills every 12 hours via evidence-gated learning.
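The evidence gate can be expressed as a one-line predicate (a sketch of the stated thresholds; the real implementation may weigh additional signals):

```rust
/// Evidence gate for auto-promoting a procedure to a skill,
/// mirroring the thresholds above: >= 7 successful uses and
/// >= 90% success rate. Sketch only.
fn qualifies_for_promotion(successes: u32, attempts: u32) -> bool {
    attempts > 0
        && successes >= 7
        && (successes as f64 / attempts as f64) >= 0.90
}

fn main() {
    assert!(qualifies_for_promotion(7, 7));   // 100% over 7 uses
    assert!(qualifies_for_promotion(9, 10));  // 90% over 10 uses
    assert!(!qualifies_for_promotion(6, 6));  // too few uses
    assert!(!qualifies_for_promotion(7, 10)); // only 70% success rate
}
```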

### OAuth

OAuth enables the agent to authenticate with external services like Twitter/X and GitHub. Built-in providers require no URL configuration — just enable OAuth and set credentials.

```toml
[oauth]
enabled = true
callback_url = "http://localhost:8080"  # must match your OAuth app's redirect URI

# Optional: add custom OAuth providers beyond the built-in Twitter/GitHub
[oauth.providers.stripe]
auth_type = "oauth2_pkce"
authorize_url = "https://connect.stripe.com/oauth/authorize"
token_url = "https://connect.stripe.com/oauth/token"
scopes = ["read_write"]
allowed_domains = ["api.stripe.com"]
```

Built-in providers (no URL config needed):
- **twitter** (alias: **x**) — Tweet read/write, user info, offline access
- **github** — User info, repository access
- **google** — Gmail, Calendar, Tasks (PKCE flow)

The `manage_oauth` tool handles the full lifecycle:
- **providers** — list available providers and credential status
- **set_credentials** — store client_id/client_secret in OS keychain
- **connect** — start OAuth flow (displays authorize URL, waits for callback)
- **list** — show connected services with token expiry
- **refresh** — refresh an expired access token
- **remove** — disconnect a service

### HTTP Auth Profiles

Pre-configured auth profiles for external APIs, used by the `http_request` tool. Supports four auth types:

```toml
# Bearer token (OAuth 2.0 or API key)
[http_auth.stripe]
auth_type = "bearer"
allowed_domains = ["api.stripe.com"]
token = "keychain"

# OAuth 1.0a (e.g., Twitter v1.1 API)
[http_auth.twitter_v1]
auth_type = "oauth1a"
allowed_domains = ["api.twitter.com"]
api_key = "keychain"
api_secret = "keychain"
access_token = "keychain"
access_token_secret = "keychain"

# Custom header auth
[http_auth.custom_api]
auth_type = "header"
allowed_domains = ["api.example.com"]
header_name = "X-API-Key"
header_value = "keychain"

# Basic auth
[http_auth.internal]
auth_type = "basic"
allowed_domains = ["internal.company.com"]
username = "service_account"
password = "keychain"
```

All credential fields support `"keychain"` for OS keychain storage. The `allowed_domains` field is required and enforces domain restrictions on each profile.

OAuth connections established via `manage_oauth` automatically create auth profiles — no manual config needed for built-in providers.

### People Intelligence

Organic contact management that learns about people from conversations. Disabled by default.

```toml
[people]
enabled = true
auto_extract = true                    # learn facts from conversations
auto_extract_categories = [            # categories to auto-extract
    "birthday", "preference", "interest",
    "work", "family", "important_date"
]
restricted_categories = [              # never auto-extracted
    "health", "finance", "political", "religious"
]
fact_retention_days = 180              # auto-delete stale facts
reconnect_reminder_days = 30           # suggest reconnecting after inactivity
```

The `manage_people` tool provides manual control:
- **add/list/view/update/remove** — manage person records
- **add_fact/remove_fact** — manage facts about a person
- **link** — link a platform identity (e.g., `slack:U123`, `telegram:456`)
- **export/purge** — export or delete all data for a person
- **audit/confirm** — review and verify auto-extracted facts

Privacy model:
- Owner sees the full contact graph in DMs
- Non-owners get communication style adaptation only
- Public channels receive no personal fact injection
- Restricted categories are never auto-extracted

Background tasks run daily: stale fact pruning, upcoming date reminders (14-day window), and reconnect suggestions.

### Email Triggers

```toml
[triggers.email]
host = "imap.gmail.com"
port = 993
username = "you@gmail.com"
password = "keychain"          # or "${AIDAEMON_EMAIL_PASSWORD}"
folder = "INBOX"
```

### Web Search

```toml
[search]
backend = "duck_duck_go"       # "duck_duck_go" (default, no API key) or "brave"
api_key = "keychain"           # required for Brave Search
```

### Scheduled Tasks

```toml
[scheduler]
enabled = true
tick_interval_secs = 30        # how often to check for due tasks

[[scheduler.tasks]]
name = "daily-summary"
schedule = "every day at 9am"  # natural language or cron syntax
prompt = "Summarize my unread emails"
oneshot = false                # if true, runs once then deletes
trusted = false                # if true, skips terminal approval
```

### File Transfer

```toml
[files]
enabled = true
inbox_dir = "~/.aidaemon/files/inbox"  # where received files are stored
outbox_dirs = ["~"]                     # directories the agent can send files from
max_file_size_mb = 10
retention_hours = 24                    # auto-delete received files after this time
```

### Daemon & Dashboard

```toml
[daemon]
health_port = 8080
health_bind = "127.0.0.1"      # bind address for health endpoint (default: 127.0.0.1)
dashboard_enabled = true       # enable web dashboard (default: true)
```

The dashboard provides a web UI at `http://127.0.0.1:8080/` with status, usage stats, active sessions, and task monitoring. Authentication uses a token stored in the OS keychain.

### State

```toml
[state]
db_path = "aidaemon.db"
working_memory_cap = 50
consolidation_interval_hours = 6   # how often to run memory consolidation
max_facts = 100                    # max facts to include in system prompt
task_token_budget = 500000         # max tokens per task (default 500k, 0 = unlimited)
daily_token_budget = 1000000       # optional daily token limit (resets at midnight UTC)
# encryption_key = "keychain"      # optional override; defaults to AIDAEMON_ENCRYPTION_KEY from .env
```

On startup, aidaemon enforces encrypted state by default:
- If `AIDAEMON_ENCRYPTION_KEY` is missing and the DB is new/plaintext, a key is generated and written to `.env`.
- Existing plaintext SQLite DBs are migrated automatically to SQLCipher with backup + integrity checks.
- Set `AIDAEMON_ALLOW_PLAINTEXT_DB=1` only for emergency recovery scenarios.
Encryption at rest protects against database-file exposure, but if an attacker obtains both the DB and key, the data can be decrypted.

## Channel Commands

These commands work in Telegram, Slack, and Discord:

| Command | Description |
|---|---|
| `/model` | Show current model |
| `/model <name>` | Switch to a specific model (disables auto-routing) |
| `/models` | List available models from provider |
| `/auto` | Re-enable automatic model routing by query complexity |
| `/reload` | Reload `config.toml` (applies model changes, re-enables auto-routing) |
| `/restart` | Restart the daemon (picks up new binary, config, MCP servers) |
| `/clear` | Clear conversation context and start fresh |
| `/cost` | Show token usage statistics for current session |
| `/tasks` | List running and recent background tasks |
| `/cancel <id>` | Cancel a running background task |
| `/connect <channel> <token>` | Add a new bot at runtime (Telegram, Slack, Discord) |
| `/bots` | List all connected bots (config-based and dynamic) |
| `/help` | Show available commands |

## Running as a Service

```bash
# macOS (launchd)
aidaemon install-service
launchctl load ~/Library/LaunchAgents/ai.aidaemon.plist

# Linux (systemd)
sudo aidaemon install-service
sudo systemctl enable --now aidaemon
```

## Security Model

### Where to Run It

aidaemon runs well on any dedicated machine — an old laptop, a Mac Mini, a Raspberry Pi, or a $5/mo VPS. Docker works too, but it is not required. For the best long-running setup, give aidaemon its own machine (any spare computer works) and treat your everyday workstation as a separate environment.

### Application-Level Protections

- **User authentication** — `allowed_user_ids` is enforced on every message and callback query. Unauthorized users are silently ignored.
- **Role-based access control** — Owner, Guest, and Public roles with different tool access levels. Scheduled task management is restricted to owners.
- **Terminal allowlist** — commands must match an `allowed_prefixes` entry using word-boundary matching (`"ls"` allows `ls -la` but not `lsblk`). Set to `["*"]` to allow all.
- **Shell operator detection** — commands containing `;`, `|`, `` ` ``, `&&`, `||`, `$(`, `>(`, `<(`, or newlines always require approval, regardless of prefix match.
- **Command approval flow** — unapproved commands trigger an inline keyboard (Allow Once / Allow Always / Deny). The agent blocks until you respond.
- **Persistent approvals** — "Allow Always" choices are persisted across restarts. Use `permission_mode = "cautious"` to make all approvals session-only.
- **Path verification** — file-modifying commands are blocked unless the target paths were first observed via read-only commands (e.g., `ls`, `cat`).
- **Stall detection** — consecutive same-tool loops, alternating tool patterns, and hard iteration caps prevent runaway agent execution.
- **HTTP request approval** — write operations (POST, PUT, PATCH, DELETE) and authenticated requests require user approval with risk classification.
- **SSRF protection** — HTTP requests, redirects, and MCP server additions validate URLs against private IP ranges, localhost, and metadata endpoints.
- **HTTPS enforcement** — the `http_request` tool only allows HTTPS URLs.
- **Domain allowlists** — each HTTP auth profile restricts which domains it can authenticate against.
- **Input sanitization** — external content (tool outputs, web fetches, trigger payloads, skill bodies) is stripped of prompt injection patterns and invisible Unicode before reaching the LLM.
- **Untrusted trigger sessions** — sessions originating from automated sources (e.g. email triggers, scheduled tasks with `trusted = false`) require terminal approval for every command.
- **Sub-agent isolation** — sub-agents inherit the parent's user role (no privilege escalation) and share the parent's path verification tracker.
- **MCP environment scrubbing** — MCP server sub-processes start with a minimal environment; credentials are not forwarded unless explicitly configured.
- **Config secrets redaction** — when the LLM reads config via the config manager tool, sensitive keys (`api_key`, `password`, `bot_token`, etc.) are replaced with `[REDACTED]`.
- **Config change approval** — sensitive config modifications (API keys, allowed users, terminal wildcards) require explicit user approval.
- **OAuth token security** — OAuth tokens and dynamic bot tokens are stored in the OS keychain, never in config files or chat history.
- **Encrypted state by default** — database contents are encrypted at rest; startup auto-migrates legacy plaintext DBs with rollback-safe backup.
- **Public channel protection** — public-facing channels use a minimal system prompt with no internal architecture details, and output is sanitized to redact secrets.
- **Dashboard security** — bearer token authentication with rate limiting, token expiration (24h), and constant-time comparison.
- **File permissions** — config backups are written with `0600` (owner-only read/write) on Unix.
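The terminal allowlist and shell-operator rules above can be sketched in Rust (illustrative only; the daemon's actual matcher may differ in detail):

```rust
/// Sketch of the pre-approval check: word-boundary prefix matching
/// ("ls" allows "ls -la" but not "lsblk"), "*" allows everything, and
/// shell operators always force the approval flow.
fn command_is_preapproved(cmd: &str, allowed: &[&str]) -> bool {
    const OPERATORS: [&str; 8] = [";", "|", "`", "&&", "||", "$(", ">(", "<("];
    if cmd.contains('\n') || OPERATORS.iter().any(|op| cmd.contains(op)) {
        return false; // operators always require approval
    }
    allowed.iter().any(|p| {
        *p == "*"
            || cmd == *p
            // word boundary: prefix must be followed by a space
            || cmd.strip_prefix(p).map_or(false, |rest| rest.starts_with(' '))
    })
}

fn main() {
    assert!(command_is_preapproved("ls -la", &["ls"]));
    assert!(!command_is_preapproved("lsblk", &["ls"]));
    assert!(!command_is_preapproved("cat a.txt | mail x", &["cat"]));
    assert!(command_is_preapproved("rm -rf /tmp/x", &["*"]));
}
```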

## Inspired by OpenClaw

aidaemon was inspired by [OpenClaw](https://openclaw.ai) ([GitHub](https://github.com/openclaw/openclaw)), a personal AI assistant that runs on your own devices and connects to channels like WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and more.

Both projects share the same goal: a self-hosted AI assistant you control. The key differences:

| | aidaemon | OpenClaw |
|---|---|---|
| **Language** | Rust | TypeScript/Node.js |
| **Channels** | Telegram, Slack, Discord | WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Teams, and more |
| **Scope** | Lightweight daemon with web dashboard | Full-featured platform with web UI, canvas, TTS, browser control |
| **Config** | Single `config.toml` with keychain secrets | JSON5 config with hot-reload and file watching |
| **Error recovery** | Inline error classification per HTTP status, model fallback, config backup rotation | Multi-layer retry policies, auth profile cooldowns, provider rotation, restart sentinels |
| **State** | SQLite + in-memory working memory (encrypted by default) | Pluggable storage with session management |
| **Install** | `curl -sSfL https://get.aidaemon.ai \| bash` | npm/Docker |
| **Dependencies** | ~30 crates, single static binary | Node.js ecosystem |

aidaemon is designed for users who want a lightweight daemon in Rust with essential features. If you need more channels (WhatsApp, Signal, iMessage) or a richer plugin ecosystem, check out OpenClaw.

## Architecture

```
Channels ──→ Agent ──→ ExecutionPolicy ──→ Router ──→ LLM Provider
(Telegram,     │        (risk gate,         (profile    (OpenAI-compatible /
 Slack,        │         uncertainty,        → model      Anthropic /
 Discord)      │         tool filtering)     mapping)     Google Gemini)
               │
               ├──→ Tools (40+, with ToolCapabilities)
               │     ├── File ops (read, write, edit, search, project inspect)
               │     ├── Terminal / RunCommand (with approval flow)
               │     ├── Git (info, commit)
               │     ├── Browser (headless Chrome)
               │     ├── Web research (search + fetch)
               │     ├── HTTP requests (with auth profiles + OAuth)
               │     ├── MCP servers (JSON-RPC over stdio, dynamic management)
               │     ├── Sub-agents / CLI agents (claude, gemini, codex, aider)
               │     ├── Goals & tasks (manage, schedule, trace, blockers)
               │     ├── People intelligence (contact management)
               │     ├── Skills (use, manage, resources)
               │     └── OAuth, config, health probe, diagnostics
               │
               ├──→ State
               │     ├── SQLite (messages, facts, episodes, goals, procedures)
               │     └── In-memory working memory (VecDeque, capped)
               │
               ├──→ Memory Manager
               │     ├── Fact extraction (evidence-gated consolidation)
               │     ├── Vector embeddings (AllMiniLML6V2)
               │     ├── Context window (role-based token quotas)
               │     └── People intelligence (organic fact learning)
               │
               ├──→ HeartbeatCoordinator (unified background tasks)
               │
               └──→ Skills (trigger-based, with registries + auto-promotion)

Triggers ──→ EventBus ──→ Agent ──→ Channel notification
├── IMAP IDLE (email)
└── Goal scheduler (60s tick)

Health server (axum) ──→ GET /health + Web Dashboard + OAuth callbacks
```

- **Agent loop**: user message → ExecutionPolicy (risk score + uncertainty) → Router (profile → model) → call LLM → tool execution with capability filtering → mid-loop adaptation → return final response
- **Working memory**: `VecDeque<Message>` in RAM, capped at N messages, hydrated from SQLite on cold start
- **Session ID** = channel-specific chat/thread ID
- **MCP**: spawns server subprocesses, communicates via JSON-RPC over stdio. Servers can be added/removed at runtime.
- **Memory consolidation**: periodically extracts durable facts from conversations, stores with vector embeddings for semantic retrieval
- **People intelligence**: auto-extracts contact facts during consolidation, runs daily background tasks for date reminders and reconnect suggestions
- **Token tracking**: per-request usage logged to SQLite, queryable via `/cost` command or dashboard
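The capped working memory can be sketched as follows (a minimal illustration; the real buffer holds full message structs and hydrates from SQLite on cold start):

```rust
use std::collections::VecDeque;

/// Capped in-RAM working memory: pushing past the cap evicts the oldest entry.
struct WorkingMemory {
    cap: usize,
    buf: VecDeque<String>,
}

impl WorkingMemory {
    fn new(cap: usize) -> Self {
        Self { cap, buf: VecDeque::with_capacity(cap) }
    }

    fn push(&mut self, msg: String) {
        if self.buf.len() == self.cap {
            self.buf.pop_front(); // drop the oldest message
        }
        self.buf.push_back(msg);
    }
}

fn main() {
    let mut wm = WorkingMemory::new(2);
    for m in ["a", "b", "c"] {
        wm.push(m.to_string());
    }
    assert_eq!(wm.buf.len(), 2);
    assert_eq!(wm.buf.front().map(String::as_str), Some("b"));
}
```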

## License

[MIT](LICENSE)