# i-self

A personal developer companion CLI: scans your repos, indexes your code, runs LLMs over it, watches your activity, and moves agent sessions between tools (Claude Code ↔ Aider ↔ Goose ↔ Codex CLI ↔ Continue.dev). One Rust binary, local-first state in `~/.i-self/`.
Looking for the full feature tour? FEATURES.md walks every subcommand with worked examples and known limitations.
## What it does

- Profile your code — clone & scan GitHub / GitLab / Bitbucket repos, extract languages, patterns, and commit history.
- Semantic search & RAG — index code into embeddings and answer questions about it (`query`, `ask`, `search`).
- Code review — LLM review with your own coding patterns as context.
- Activity monitor — track keystrokes, mouse, active window, screenshots; surface suggestions / fire automation rules.
- Cloud backup — push `~/.i-self/` to any S3-compatible bucket (AWS S3, MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2).
- Vulnerability scan — check `Cargo.lock` / `package-lock.json` against the OSV database.
- Skills & learning — extract a skill profile from your code, gap-analyze against job descriptions, suggest learning paths.
- Share & continue agent sessions across tools — list / export / upload / import sessions from Claude Code, Aider, Goose, OpenAI Codex CLI, Continue.dev, OpenCode, ChatGPT exports, and any OpenAI-format JSON. Hand a transcript off from one agent and continue in another.
## Install

Requires rustc 1.88+ (the MSRV is pinned by the aws-sdk-s3 1.51.x line).

Building from source works too:

```sh
git clone <repo-url> i-self
cd i-self && cargo install --path .
```
## Quick start

```sh
# Tell it about you
i-self setup

# Index a project for semantic search (requires OPENAI_API_KEY for real embeddings)
i-self index <path>

# Ask it a question
i-self ask "where is the retry logic?"

# Scan dependencies for CVEs
i-self vuln <path>

# Start the dashboard (http://127.0.0.1:8080 by default, loopback only)
i-self dashboard
```
## Subcommand reference

| Command | What it does |
|---|---|
| `setup` | First-run config wizard — auth tokens, scan defaults |
| `query` / `ask` / `search` | Semantic search and RAG over the indexed code |
| `index` | Generate embeddings for a directory of source |
| `track` | Record coding sessions and work patterns |
| `skills` | Skill profile + gap analysis vs. job descriptions |
| `learn` | Suggest learning paths |
| `review` | LLM code review |
| `monitor` | Activity monitor (keystrokes, mouse, screenshots) |
| `dashboard` | Axum web UI + JSON API on 127.0.0.1:8080 |
| `api` | Alias for `dashboard`, default port 3000 |
| `sync` | Push / pull `~/.i-self/` to S3-compatible storage |
| `automate` | Trigger-based action rules (idle → notify, etc.) |
| `snippet` | Local code snippet manager |
| `vuln` | OSV-backed dependency CVE scanner |
| `message` | Telegram / WhatsApp messaging integration |
| `team` | Aggregate multi-developer profiles |
| `share` | List / export / upload / import AI-agent sessions across tools (see FEATURES.md) |
| `plugin list` | List built-in analyzers (no dynamic loading; see "Analyzers" below) |
## Configuration

Edit `~/.i-self/config.toml`:

```toml
github_token = "ghp_..."
gitlab_token = "..."
bitbucket_token = "..."

[ai]
provider = "openai"            # openai | anthropic | gemini | litellm
model = "gpt-4o-mini"
openai_api_key = "sk-..."      # or set OPENAI_API_KEY in env
anthropic_api_key = "sk-ant-..."
gemini_api_key = "AIza..."
max_tokens = 4096
temperature = 0.7

[monitor]
screenshot_interval = 5        # seconds between screenshots
idle_threshold = 300           # seconds before "idle"
track_keyboard = true
track_mouse = true

[sync]
# Cloud sync — see "Cloud sync" below for env-var equivalents.
bucket = "my-iself-backup"
region = "us-east-1"
# access_key / secret_key prefer env vars; see below.
```
## LLM providers

| Provider | Env var | Default model | Streaming |
|---|---|---|---|
| OpenAI | `OPENAI_API_KEY` | `gpt-4o-mini` | ✅ |
| Anthropic | `ANTHROPIC_API_KEY` | `claude-3-haiku` | ✅ |
| Gemini | `GEMINI_API_KEY` | `gemini-pro` | ✅ |
| LiteLLM | `LITELLM_API_KEY` | `gpt-3.5-turbo` | depends on backing model |

LiteLLM is a proxy that fronts 100+ models via the OpenAI API shape; configure `litellm_base_url` to point at your proxy.
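As a sketch, routing through a local LiteLLM proxy might look like the following in `~/.i-self/config.toml` — `litellm_base_url` is the documented key; the section and other field names are illustrative:

```toml
[ai]
provider = "litellm"
litellm_base_url = "http://localhost:4000"   # wherever your LiteLLM proxy listens
model = "gpt-3.5-turbo"                      # passed through to the backing model
```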
## Embeddings (semantic search)

Semantic search uses OpenAI's `text-embedding-3-small` at 384 dimensions when `OPENAI_API_KEY` is set. If no key is set, it falls back to a keyword-bucket hash embedder and logs a warning at startup — you'll get something working offline, but it isn't real semantic similarity, so plan to set the key.
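The fallback is essentially feature hashing: each token is hashed into one of 384 buckets and counted, then ranked with the same cosine scoring used for real embeddings. A minimal std-only sketch (not the shipped code — function names here are illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const DIMS: usize = 384;

/// Hash each token into one of DIMS buckets and count hits —
/// a bag-of-words vector, not a semantic embedding.
fn keyword_bucket_embed(text: &str) -> Vec<f32> {
    let mut v = vec![0.0f32; DIMS];
    for token in text.split(|c: char| !c.is_alphanumeric()).filter(|t| !t.is_empty()) {
        let mut h = DefaultHasher::new();
        token.to_lowercase().hash(&mut h);
        v[(h.finish() as usize) % DIMS] += 1.0;
    }
    v
}

/// Cosine similarity — the same scoring applies to real embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn main() {
    let a = keyword_bucket_embed("fn parse_lockfile(path: &Path)");
    let b = keyword_bucket_embed("parse the lockfile at a path");
    let c = keyword_bucket_embed("websocket reconnect backoff");
    // Shared keywords score higher than unrelated text...
    assert!(cosine(&a, &b) > cosine(&a, &c));
    // ...but synonyms share no buckets — that is exactly why it isn't semantic.
    println!("ok");
}
```

This is why toggling the key between sessions breaks search (see "Status" below): hash-bucket vectors and OpenAI vectors live in incompatible spaces, so reindex after switching.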
## Cloud sync (S3-compatible)

One backend serves all S3-compatible providers — the endpoint selects which:
| Provider | Endpoint |
|---|---|
| AWS S3 | (leave unset) |
| MinIO | http://localhost:9000 (or your URL) |
| Cloudflare R2 | https://<account-id>.r2.cloudflarestorage.com |
| DigitalOcean Spaces | https://<region>.digitaloceanspaces.com |
| Backblaze B2 | https://s3.<region>.backblazeb2.com |
Credentials follow the standard AWS chain — env vars (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) → ~/.aws/credentials → IAM role.
```sh
# MinIO example (ISELF_SYNC_ENDPOINT shown here follows the ISELF_SYNC_* naming; check `i-self sync --help`)
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export ISELF_SYNC_ENDPOINT=http://localhost:9000
i-self sync push
```
Other env-var overrides: ISELF_SYNC_REGION, ISELF_SYNC_PREFIX, ISELF_SYNC_VHOST_STYLE=1 (force virtual-hosted-style addressing instead of path-style; only needed for AWS).
## Dashboard / API auth

`i-self dashboard` and `i-self api` start the same axum server. Routes:

- `GET /healthz` — always public
- `GET /` — HTML dashboard
- `/api/profile`, `/api/stats`, `/api/search`, `/api/ai/{ask,explain,generate}`, `/api/teams/*` — JSON
Default bind is 127.0.0.1. Auth is loopback-trust by default — any local process can call the API.
To expose on the network, set both ISELF_BIND and ISELF_API_TOKEN:

```sh
ISELF_BIND=0.0.0.0:8080 ISELF_API_TOKEN=$(openssl rand -hex 32) i-self dashboard
# Clients send: Authorization: Bearer $ISELF_API_TOKEN
```
The server refuses to start on a non-loopback address without a token — exposing /api/ai/* publicly without auth would let anyone burn your LLM budget.
The token can also live in ~/.i-self/api_token (one line, no whitespace).
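Conceptually the gate is just a bearer-token comparison on the `Authorization` header. A minimal sketch (the real middleware lives in `src/web/auth.rs`; the function here is hypothetical), using a length-guarded XOR fold so the comparison doesn't short-circuit on the first mismatched byte:

```rust
/// Illustrative bearer check: strip the "Bearer " prefix, then compare
/// against the expected token without early exit on mismatch.
fn authorized(header: Option<&str>, expected: &str) -> bool {
    match header.and_then(|h| h.strip_prefix("Bearer ")) {
        Some(token) => {
            token.len() == expected.len()
                && token
                    .bytes()
                    .zip(expected.bytes())
                    .fold(0u8, |acc, (a, b)| acc | (a ^ b))
                    == 0
        }
        None => false,
    }
}

fn main() {
    assert!(authorized(Some("Bearer s3cr3t"), "s3cr3t"));
    assert!(!authorized(Some("Bearer wrong!"), "s3cr3t"));
    assert!(!authorized(None, "s3cr3t"));
    println!("ok");
}
```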
## Vulnerability scanner

```sh
# Scan a project's lockfiles
i-self vuln <project-dir>

# Quick check a single package version
i-self vuln <package> <version>
```
- Supports `Cargo.lock` and `package-lock.json` (lockfile versions 1, 2, 3). Other lockfile names are detected but only those two are parsed today.
- Backed by OSV. Severity is taken from `database_specific.severity` when present, otherwise computed from a CVSS v3 vector.
- Exit code 2 if any per-package OSV lookup fails — the scan is incomplete and "0 vulns" is not safe to trust. Failures are listed on stderr.
## Activity monitor

Uses `device_query` to poll keyboard / mouse / active window every 50 ms, plus `screencapture` (macOS) / `gnome-screenshot` (Linux) / PowerShell (Windows) for periodic screenshots.
macOS: the first run will prompt for Accessibility permission so the system can report keyboard/mouse state to a non-foreground app. Without it, counters stay at zero.
## Automation rules

Trigger-based rules persist in `~/.i-self/automation.toml`.
### Triggers

| Type | Description |
|---|---|
| `idle` | User idle ≥ N seconds |
| `meeting-silence` | No meeting activity for N seconds |
| `no-activity` | No keyboard/mouse for N seconds |
| `activity-spike` | Keystrokes-per-window above threshold |
| `incoming-call` | Detect incoming call (system events; platform-dependent) |
### Actions

| Action | Status | What it does |
|---|---|---|
| `notify` | ✅ implemented | Desktop notification (osascript / notify-send / PowerShell) |
| `telegram` | ✅ implemented | POST to Telegram Bot API; needs TELEGRAM_BOT_TOKEN + TELEGRAM_CHAT_ID |
| `whatsapp` | ✅ implemented | Cloud WhatsApp Business API; needs WHATSAPP_API_KEY + phone numbers |
| `log` | ✅ implemented | Log to tracing |
| `exit-meeting` | ⚠️ stub | Logs only — no real meeting-app integration yet |
| `start-recording` | ⚠️ stub | Logs only |
| `stop-recording` | ⚠️ stub | Logs only |
| `send-message` | ⚠️ stub | Logs only — distinct from telegram/whatsapp |
| `run-command` | ⚠️ stub | Logs the command but doesn't execute it |
Stubbed actions still appear in automate test output so you can verify trigger matching, but they don't do the named action.
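As a shape reference, a rule that fires a desktop notification after ten idle minutes might look like this — the field names are illustrative, not the authoritative schema (check what `i-self automate` writes to `~/.i-self/automation.toml`):

```toml
[[rules]]
name = "idle-nudge"
trigger = "idle"          # one of the trigger types above
threshold_secs = 600
action = "notify"
message = "Idle for 10 minutes — take a break or get back to it?"
```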
## Sharing agent sessions across tools

`i-self share` discovers AI-agent transcripts on disk, exports them to a portable format, and imports them into a different agent so you can hand off a conversation between tools.
### Provider matrix
| Tool | List | Load | Import (write) | Storage location |
|---|---|---|---|---|
| Claude Code | ✅ | ✅ | ✅ | ~/.claude/projects/<encoded-cwd>/*.jsonl |
| Aider | ✅ | ✅ | ✅ | <project>/.aider.chat.history.md |
| Goose (Block) | ✅ | ✅ | ✅ | ~/.config/goose/sessions/*.jsonl |
| OpenAI Codex CLI | ✅ | ✅ | ✅ | ~/.codex/sessions/YYYY/MM/DD/rollout-*.jsonl |
| Continue.dev | ✅ | ✅ | ✅ | ~/.continue/sessions/*.json |
| OpenCode | ✅ | ✅ | — (use clipboard) | ~/.local/share/opencode/storage/session/ |
| Generic OpenAI JSON | ✅* | ✅ | ✅ | any dir set via ISELF_GENERIC_DIR |
| Clipboard (paste-into) | — | — | ✅ | stdout (pipe to pbcopy/xclip/Set-Clipboard) |
| Copilot Chat / Cline / Cursor | ❌ | ❌ | ✅ via clipboard | not file-addressable |
*The generic provider only enumerates when ISELF_GENERIC_DIR is set, to
avoid scanning random JSON files in your home dir. Point it at a directory
of conversation files (e.g. an unzipped ChatGPT data export).
Each provider's data dir can be overridden with an env var: ISELF_GOOSE_DIR,
ISELF_CODEX_DIR, ISELF_CONTINUE_DIR, ISELF_OPENCODE_DIR,
ISELF_AIDER_SEARCH_ROOTS (colon-separated), ISELF_GENERIC_DIR.
### Discovering and exporting

```sh
# List every session across every supported provider, newest first
i-self share list

# Filter
i-self share list --provider claude-code

# Export one session to JSON (the canonical interchange format)
i-self share export <session-id> --out session.json

# Render to Markdown or self-contained HTML for human reading / sharing
i-self share export <session-id> --format markdown
i-self share export <session-id> --format html

# Strip secrets that look like API keys, tokens, passwords first
i-self share export <session-id> --redact --out session.json

# Upload to your S3 bucket and get a 24-hour presigned URL
i-self share upload <session-id>
```
### Importing into another agent

```sh
# Continue a Claude Code session in Aider
i-self share import <input> --to aider

# Continue an Aider session in Claude Code (writes a synthetic JSONL transcript)
i-self share import <input> --to claude-code

# Cross-tool: hand off to Goose, Codex CLI, or Continue.dev
i-self share import <input> --to goose
i-self share import <input> --to codex
i-self share import <input> --to continue

# Generic OpenAI JSON — feed mods / fabric / any OpenAI-SDK consumer
i-self share import <input> --to generic

# For agents without addressable on-disk storage (Copilot, Cline, Cursor),
# use the clipboard target and paste into the chat box
i-self share import <input> --to clipboard | pbcopy

# `<input>` can also be an HTTPS URL (e.g. a presigned `share upload` link)
# or `-` for stdin
```
Imported sessions get a leading provenance message:

```
[i-self import] Continued from <provider> session <id>. <N> prior messages follow.
```

so the recipient agent (and you, scrolling back) can tell at a glance where the transcript came from.
Fidelity caveat: tool-call structure flattens to text when crossing agent boundaries (each agent has its own tool catalog). The recipient gets a faithful, readable transcript they can continue from — not a wire-compatible session that re-executes the source agent's tool calls verbatim.
## Snippets

`i-self snippet` manages local code snippets (see the subcommand reference); the VS Code extension below can add the current selection as a snippet.
## Analyzers (formerly "plugins")

There is no dynamic plugin loader. The earlier `plugin load --directory` command was misleading — it walked `.so` files but never `dlopen`ed them. The three built-in analyzers (security patterns, performance patterns, documentation) are compiled in. To add another, implement `AnalyzerPlugin` in `src/plugins/mod.rs` and rebuild.
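The extension point can be pictured like this — a minimal sketch with hypothetical trait and type shapes (the real `AnalyzerPlugin` in `src/plugins/mod.rs` is authoritative):

```rust
/// Hypothetical finding shape for illustration.
struct Finding {
    line: usize,
    message: String,
}

/// Sketch of the compile-time extension point: implement the trait,
/// register the analyzer in the source, rebuild — no dynamic loading.
trait AnalyzerPlugin {
    fn name(&self) -> &'static str;
    fn analyze(&self, path: &str, source: &str) -> Vec<Finding>;
}

struct TodoAnalyzer; // hypothetical fourth analyzer

impl AnalyzerPlugin for TodoAnalyzer {
    fn name(&self) -> &'static str {
        "todo-comments"
    }
    fn analyze(&self, _path: &str, source: &str) -> Vec<Finding> {
        source
            .lines()
            .enumerate()
            .filter(|(_, l)| l.contains("TODO"))
            .map(|(i, l)| Finding { line: i + 1, message: format!("TODO found: {}", l.trim()) })
            .collect()
    }
}

fn main() {
    let findings = TodoAnalyzer.analyze("demo.rs", "fn f() {}\n// TODO: handle errors\n");
    assert_eq!(findings.len(), 1);
    assert_eq!(findings[0].line, 2);
    println!("ok");
}
```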
## VS Code extension

A thin TypeScript wrapper that shells out to the CLI binary. Build it:

```sh
cd <extension-dir>
npm install && npm run compile
```
Commands: list/search snippets, add selected code as snippet, start/stop activity tracking, code review panel.
## Architecture

```
src/
├── main.rs        # clap dispatcher (~20 subcommands)
├── ai/            # {openai,claude,gemini,litellm}.rs — LLM provider clients (all stream)
├── analyzer/      # local code stats + git history
├── automation/    # trigger → action engine
├── config/        # ~/.i-self/config.toml
├── github/, vcs/  # vcs/{gitlab,bitbucket}.rs — repo scanning
├── monitor/       # {input,screenshot,notification}.rs — device_query poller, screencapture
├── plugins/       # built-in code analyzers (NOT a dynamic loader)
├── semantic/      # {index,search,storage}.rs — embeddings + cosine similarity
├── storage/       # ~/.i-self/ filesystem layout, profiles, KB
├── sync/          # aws-sdk-s3 backed cloud sync
├── vuln/          # OSV API + Cargo/npm lockfile parsers + CVSS v3
└── web/           # {auth,handlers,routes,state}.rs — axum dashboard + bearer auth
```
State is in `~/.i-self/`:

```
config.toml        # main config
sync_config.json   # cloud sync settings
api_token          # optional — bearer token for the dashboard/API
automation.toml    # automation rules
profile.json       # developer profile
embeddings/        # semantic index
screenshots/       # activity monitor captures
```
## Tech stack

- Rust (rustc 1.88, MSRV pinned by the aws-sdk-s3 1.51 line)
- tokio, axum, reqwest
- `octocrab` (GitHub), `git2` (libgit2), `walkdir`
- `aws-sdk-s3` for cloud sync (S3-compatible)
- `device_query` for global input polling
- OpenAI `text-embedding-3-small` for semantic search
- OSV REST API for vulnerability data
## Status

This is a working prototype, not a polished release. Known sharp edges:

- Some automation actions (`exit-meeting`, `run-command`, etc.) log instead of executing.
- Embeddings are not tagged with the model that produced them, so toggling `OPENAI_API_KEY` between sessions can produce nonsensical search scores until you reindex.
- No rate limiting on the API server. Bearer auth gates who can call; once authenticated, calls are unbounded.
- Fallback embeddings (when `OPENAI_API_KEY` is unset) are keyword-bucket hashes, not semantic.
Contributions welcome.
## License

MIT