aidaemon
Website · Documentation · Discord
A self-hosted AI agent that runs on your machine and talks to you on Telegram, Slack, or Discord. It can run commands, browse the web, manage files, remember things across conversations, and learn your workflows over time.
I built this because I wanted to control my computer from my phone, from anywhere. Send a message on Telegram, and it runs a command, checks a log, deploys something, or researches a topic, then replies with the result. I also wanted it to run on cheap hardware: a Raspberry Pi, an old laptop, a $5/month VPS.
What makes it different
- It's yours: runs on your machine, your API keys, your data. Nothing goes to a third party beyond the LLM API calls you choose to make.
- It remembers you: persistent memory with fact extraction, vector embeddings, and semantic recall. It learns your preferences, your projects, the people you mention. Not a shallow "memory" feature but a full knowledge graph that grows over time.
- It's always on: runs as a background daemon. Message it on Telegram at 3am from your phone. Schedule tasks with cron syntax or natural language. It's not an app you open; it's an assistant that's always available.
- It can act: 40+ tools for terminal commands, file operations, git, web browsing, web research, HTTP requests with OAuth, and MCP servers. It doesn't just answer questions; it does things.
- It's a single binary: no Docker, no Node.js, no Python environment. One binary; copy it anywhere and run it. Starts in milliseconds, uses minimal RAM.
- It works with any LLM: OpenAI, xAI (Grok), Anthropic, Google Gemini, DeepSeek, Moonshot, MiniMax, Ollama, OpenRouter, Cloudflare AI Gateway, or any OpenAI-compatible API. Switch providers without changing anything else.
Quick Start
Install with the one-line installer, Homebrew, or Cargo.
Then run aidaemon; the setup wizard walks you through picking a provider, entering your API key, and connecting a channel.
Features
Channels
Talk to it on Telegram, Slack (Socket Mode), or Discord. Run multiple bots, add new ones at runtime with /connect, no restart needed.
Telegram owners can also run /terminal [agent] [working_dir] to launch the hosted terminal Mini App (https://terminal.aidaemon.ai) for direct CLI-agent sessions with automatic secure daemon pairing.
Tools & Agents
40+ built-in tools including terminal execution, file operations, git, web browser (headless Chrome), web search, HTTP requests with auth profiles, and MCP server integration. Tools declare their risk level: read-only operations run freely; side effects require approval.
Delegate complex tasks to sub-agents or external CLI agents (Claude, Gemini, Codex, Aider โ auto-discovered).
Memory
SQLite-backed persistent memory with vector embeddings (AllMiniLML6V2). Background consolidation extracts durable facts from conversations. Semantic recall surfaces relevant context automatically. Database encrypted at rest by default.
Skills
Trigger-based instructions that teach it new workflows. Load from files, URLs, or remote registries. Successful procedures auto-promote to skills after enough consistent use.
Automation
Cron-style scheduled tasks with natural language time parsing. Email triggers via IMAP IDLE. Long-running goal tracking with task breakdown and multi-schedule support.
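To give a sense of what a scheduled task carries, here is a purely illustrative sketch. The key names below are invented for this example and are not taken from the project's schema; real tasks are normally created in chat or per the documentation.

```toml
# Hypothetical shape of a scheduled task -- key names are illustrative only.
[[task]]
name = "nightly-backup-check"
cron = "0 3 * * *"   # standard five-field cron; natural language like "every day at 3am" also parses
prompt = "Check last night's backup log and summarize any failures"
```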
Security
Terminal command allowlists, inline approval flow (Allow Once / Allow Always / Deny), SSRF protection, input sanitization, encrypted state, secrets in OS keychain, and role-based access control. See the security docs for the full model.
Running as a Service
Service setup differs by platform; see the documentation for the macOS and Linux instructions.
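On Linux, one common approach is a systemd unit. A minimal sketch, assuming the binary lives at /usr/local/bin/aidaemon; the paths and user are placeholders, not taken from the project docs:

```ini
# Illustrative systemd unit -- adjust paths, user, and hardening to taste.
[Unit]
Description=aidaemon AI agent
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/aidaemon
WorkingDirectory=/home/youruser
User=youruser
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now aidaemon.service`. On macOS the equivalent mechanism is a launchd plist under ~/Library/LaunchAgents.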
Configuration
All settings live in config.toml, generated by the setup wizard. See config.toml.example for the full reference and the documentation for detailed guides on each feature. The provider config supports both model-level fallbacks on the primary backend and ordered cross-provider fallbacks via [[provider.fallbacks]].
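As an illustration of the fallback shape: only the `[[provider.fallbacks]]` table name is confirmed above; the other field names here are assumptions, so check config.toml.example for the real schema.

```toml
[provider]
kind = "openai"                     # primary backend (assumed field name)
model = "gpt-4o"
fallback_models = ["gpt-4o-mini"]   # model-level fallback on the primary backend (assumed)

[[provider.fallbacks]]              # ordered cross-provider fallback
kind = "anthropic"
model = "claude-sonnet-4"
```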
The built-in policy profiles cheap, balanced, and strong are execution presets, not separate model tiers. They all use your configured default model; the differences are context budget, tool budget, verification depth, and approval behavior. The open-source default auto-routing now floors at balanced for better reliability, while cheap remains available as a lower-budget preset for custom forks and policy tweaks.
Secrets (API keys, bot tokens) are stored in your OS keychain by default, not in config files.
If you prefer env-only secrets, set AIDAEMON_NO_KEYCHAIN=1. In that mode:
- aidaemon reads secrets from environment variables / .env
- secret writes (including OAuth token refresh rotation) are persisted back to the env file
- set AIDAEMON_ENV_FILE=/absolute/path/.env to control which env file is used (default: .env in the working directory)
For env-only mode, keep the env file private (chmod 600) and out of version control.
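A minimal sketch of setting up env-only mode; the directory path is a placeholder, not a project convention.

```shell
# Env-only secrets mode (illustrative path; adjust to your setup).
export AIDAEMON_NO_KEYCHAIN=1                   # disable OS keychain, use env/.env instead
export AIDAEMON_ENV_FILE="$HOME/aidaemon/.env"  # which env file aidaemon reads and writes back to
mkdir -p "$(dirname "$AIDAEMON_ENV_FILE")"
touch "$AIDAEMON_ENV_FILE"
chmod 600 "$AIDAEMON_ENV_FILE"                  # private, and keep it out of version control
```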
For external APIs, the built-in auth flow is:
- manage_api for a single deterministic connect + learn + verify flow
- manage_oauth for OAuth services, including custom OAuth 2.0 PKCE, authorization-code, and client-credentials providers you register at runtime
- manage_http_auth for API keys, bearer tokens, custom headers, basic auth, or OAuth 1.0a credentials
- manage_skills with learn_api to ingest the official docs or OpenAPI/Swagger URL into a reusable API guide after auth is set up
learn_api now prefers OpenAPI/Swagger when available, crawls multiple docs pages on the same host when needed, discovers linked specs from docs, bundles remote OpenAPI $ref documents, and can switch to GraphQL schema introspection when a GraphQL endpoint is available.
manage_api can now auto-derive a safe verification probe from a learned OpenAPI spec, and for GraphQL onboarding it can reuse introspection as the verification probe when you didn't provide a separate verify_url.
Both paths keep secrets in the keychain by default, or in the configured .env file when AIDAEMON_NO_KEYCHAIN=1.
Building from Source
See CLAUDE.md for architecture details, module map, and contributor guidance.