# i-self
A personal developer companion CLI: scans your repos, indexes your code, runs LLMs over it, watches your activity, and **moves agent sessions between tools** (Claude Code ↔ Aider ↔ Goose ↔ Codex CLI ↔ Continue.dev). One Rust binary, local-first state in `~/.i-self/`.
> **Looking for the full feature tour?** [FEATURES.md](FEATURES.md) walks every subcommand with worked examples and known limitations.
## What it does
- **Profile your code** — clone & scan GitHub / GitLab / Bitbucket repos, extract languages, patterns, and commit history.
- **Semantic search & RAG** — index code into embeddings and answer questions about it (`query`, `ask`, `search`).
- **Code review** — LLM review with your own coding patterns as context.
- **Activity monitor** — track keystrokes, mouse, active window, screenshots; surface suggestions / fire automation rules.
- **Cloud backup** — push `~/.i-self/` to any S3-compatible bucket (AWS S3, MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2).
- **Vulnerability scan** — check `Cargo.lock` / `package-lock.json` against the OSV database.
- **Skills & learning** — extract skill profile from your code, gap-analyze against job descriptions, suggest learning paths.
- **Share & continue agent sessions across tools** — list / export / upload / **import** sessions from Claude Code, Aider, Goose, OpenAI Codex CLI, Continue.dev, OpenCode, ChatGPT exports, and any OpenAI-format JSON. Hand a transcript off from one agent and continue in another.
## Install
```bash
cargo install i-self
```
Requires rustc 1.88+ (the MSRV is pinned by the `aws-sdk-s3 1.51.x` line).
Building from source works too:
```bash
git clone <this-repo> && cd i-self
cargo build --release
./target/release/i-self --help
```
## Quick start
```bash
# Tell it about you
i-self setup
# Index a project for semantic search (requires OPENAI_API_KEY for real embeddings)
i-self index ./my-project
# Ask it a question
i-self ask "where do we handle retry logic?"
# Scan dependencies for CVEs
i-self vuln scan --path ./my-project
# Start the dashboard (http://127.0.0.1:8080 by default, loopback only)
i-self dashboard
```
## Subcommand reference
| Subcommand | What it does |
|---|---|
| `setup` | First-run config wizard — auth tokens, scan defaults |
| `query` / `ask` / `search` | Semantic search and RAG over the indexed code |
| `index` | Generate embeddings for a directory of source |
| `track` | Record coding sessions and work patterns |
| `skills` | Skill profile + gap analysis vs. job descriptions |
| `learn` | Suggest learning paths |
| `review` | LLM code review |
| `monitor` | Activity monitor (keystrokes, mouse, screenshots) |
| `dashboard` | Axum web UI + JSON API on `127.0.0.1:8080` |
| `api` | Alias for `dashboard`, default port 3000 |
| `sync` | Push / pull `~/.i-self/` to S3-compatible storage |
| `automate` | Trigger-based action rules (idle → notify, etc.) |
| `snippet` | Local code snippet manager |
| `vuln` | OSV-backed dependency CVE scanner |
| `message` | Telegram / WhatsApp messaging integration |
| `team` | Aggregate multi-developer profiles |
| `share` | List / export / upload / **import** AI-agent sessions across tools (see [FEATURES.md](FEATURES.md#cross-agent-session-sharing)) |
| `plugin list` | List built-in analyzers (no dynamic loading; see "Analyzers" below) |
## Configuration
Edit `~/.i-self/config.toml`:
```toml
github_token = "ghp_..."
gitlab_token = "..."
bitbucket_token = "..."
[llm]
openai_api_key = "sk-..." # or set OPENAI_API_KEY in env
anthropic_api_key = "sk-ant-..."
gemini_api_key = "AIza..."
max_tokens = 4096
temperature = 0.7
[monitor]
screenshot_interval = 5 # seconds between screenshots
idle_threshold = 300 # seconds before "idle"
capture_keyboard = true
capture_mouse = true
[cloud]
# Cloud sync — see "Cloud sync" below for env-var equivalents.
bucket = "my-iself-backup"
region = "us-east-1"
# access_key / secret_key prefer env vars; see below.
```
### LLM providers
| Provider | Env var | Default model | Streaming |
|---|---|---|---|
| OpenAI | `OPENAI_API_KEY` | `gpt-4o-mini` | ✅ |
| Anthropic | `ANTHROPIC_API_KEY` | `claude-3-haiku` | ✅ |
| Gemini | `GEMINI_API_KEY` | `gemini-pro` | ✅ |
| LiteLLM | `LITELLM_API_KEY` | `gpt-3.5-turbo` | depends on backing model |
LiteLLM is a proxy that fronts 100+ models behind the OpenAI API shape; set `litellm_base_url` to your proxy's URL.
### Embeddings (semantic search)
Semantic search uses **OpenAI's `text-embedding-3-small` at 384 dimensions** when `OPENAI_API_KEY` is set. If no key is set, it falls back to a keyword-bucket hash embedder and logs a warning at startup — you'll get *something* working offline, but it isn't real semantic similarity, so plan to set the key.
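For intuition, the fallback can be sketched roughly like this (an illustrative toy, not the project's actual embedder — `hash_embed` and `cosine` are made-up names): hash each token into one of 384 buckets, count occurrences, and normalize.

```rust
// Illustrative keyword-bucket hash embedder (NOT the project's real code):
// each token lands in one of 384 buckets; L2-normalizing the counts makes
// cosine similarity a plain dot product.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const DIMS: usize = 384;

fn hash_embed(text: &str) -> Vec<f32> {
    let mut v = vec![0f32; DIMS];
    for token in text
        .split(|c: char| !c.is_alphanumeric())
        .filter(|t| !t.is_empty())
    {
        let mut h = DefaultHasher::new();
        token.to_lowercase().hash(&mut h);
        v[(h.finish() as usize) % DIMS] += 1.0;
    }
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for x in &mut v {
            *x /= norm;
        }
    }
    v
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```

Identical strings score 1.0 and disjoint token sets usually score near 0 — but unrelated tokens can collide into the same bucket, which is exactly why this is not real semantic similarity.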
## Cloud sync (S3-compatible)
One backend serves all S3-compatible providers; the `endpoint` setting selects which one:
| Provider | `endpoint` |
|---|---|
| AWS S3 | (leave unset) |
| MinIO | `http://localhost:9000` (or your URL) |
| Cloudflare R2 | `https://<account-id>.r2.cloudflarestorage.com` |
| DigitalOcean Spaces | `https://<region>.digitaloceanspaces.com` |
| Backblaze B2 | `https://s3.<region>.backblazeb2.com` |
Credentials follow the standard AWS chain — env vars (`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`) → `~/.aws/credentials` → IAM role.
```bash
# MinIO example
export ISELF_SYNC_BUCKET=my-iself
export ISELF_SYNC_ENDPOINT=http://localhost:9000
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio-secret
i-self sync push # upload ~/.i-self/ to bucket
i-self sync pull # download bucket back to ~/.i-self/
i-self sync status # show effective config
```
Other env-var overrides: `ISELF_SYNC_REGION`, `ISELF_SYNC_PREFIX`, `ISELF_SYNC_VHOST_STYLE=1` (force virtual-hosted-style addressing instead of path-style; only needed for AWS).
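The two addressing styles differ only in where the bucket name goes. A minimal sketch of the distinction `ISELF_SYNC_VHOST_STYLE` toggles (hypothetical helper, not the project's code):

```rust
// Path-style keeps the bucket in the URL path (what MinIO and most
// S3-compatibles expect); virtual-hosted style moves it into the hostname
// (what AWS prefers). Illustrative only.
fn object_url(endpoint: &str, bucket: &str, key: &str, vhost: bool) -> String {
    if vhost {
        // https://<bucket>.<host>/<key>
        let (scheme, host) = endpoint.split_once("://").unwrap_or(("https", endpoint));
        format!("{scheme}://{bucket}.{host}/{key}")
    } else {
        // https://<host>/<bucket>/<key>
        format!("{}/{}/{}", endpoint.trim_end_matches('/'), bucket, key)
    }
}
```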
## Dashboard / API auth
`i-self dashboard` and `i-self api` start the same axum server. Routes:
- `GET /healthz` — always public
- `GET /` — HTML dashboard
- `/api/profile`, `/api/stats`, `/api/search`, `/api/ai/{ask,explain,generate}`, `/api/teams/*` — JSON
**Default bind is `127.0.0.1`.** Auth is loopback-trust by default — any local process can call the API.
To expose on the network, set both `ISELF_BIND` and `ISELF_API_TOKEN`:
```bash
export ISELF_BIND=0.0.0.0
export ISELF_API_TOKEN=$(openssl rand -hex 32)
i-self dashboard --port 8080
# Clients send: Authorization: Bearer $ISELF_API_TOKEN
```
The server **refuses to start on a non-loopback address without a token** — exposing `/api/ai/*` publicly without auth would let anyone burn your LLM budget.
The token can also live in `~/.i-self/api_token` (one line, no whitespace).
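The auth rule above boils down to two small checks, sketched here (illustrative functions, not the project's handlers — a real server should also compare tokens in constant time):

```rust
use std::net::IpAddr;

// Accept only `Authorization: Bearer <token>` with an exact token match.
fn is_authorized(auth_header: Option<&str>, expected: &str) -> bool {
    auth_header.and_then(|h| h.strip_prefix("Bearer ")) == Some(expected)
}

// Startup rule: a non-loopback bind address is only allowed with a token.
fn bind_allowed(bind: &str, has_token: bool) -> bool {
    let loopback = bind
        .parse::<IpAddr>()
        .map(|ip| ip.is_loopback())
        .unwrap_or(false);
    loopback || has_token
}
```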
## Vulnerability scanner
```bash
# Scan a project's lockfiles
i-self vuln scan --path ./my-project
# Quick check a single package version
i-self vuln check lodash 4.17.20 --ecosystem npm
```
- Supports `Cargo.lock` and `package-lock.json` (lockfile versions 1, 2, 3). Other lockfile names are detected but only those two are parsed today.
- Backed by [OSV](https://osv.dev). Severity is taken from `database_specific.severity` when present, otherwise computed from a CVSS v3 vector.
- **Exit code 2** if any per-package OSV lookup fails — the scan is incomplete and "0 vulns" is not safe to trust. Failures are listed on stderr.
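The score-to-severity mapping follows the standard CVSS v3.1 qualitative buckets; a sketch (the project's exact function may differ):

```rust
// CVSS v3.1 qualitative severity ratings:
// 0.0 None, 0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical.
fn severity_label(score: f32) -> &'static str {
    match score {
        s if s <= 0.0 => "None",
        s if s < 4.0 => "Low",
        s if s < 7.0 => "Medium",
        s if s < 9.0 => "High",
        _ => "Critical",
    }
}
```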
## Activity monitor
```bash
i-self monitor start # spawns the background poller
i-self monitor stop
i-self monitor suggestions
```
Uses `device_query` to poll keyboard / mouse / active window every 50ms, plus `screencapture` (macOS) / `gnome-screenshot` (Linux) / PowerShell (Windows) for periodic screenshots.
**macOS:** the first run will prompt for Accessibility permission so the system can report keyboard/mouse state to a non-foreground app. Without it, counters stay at zero.
## Automation rules
Trigger-based rules persist in `~/.i-self/automation.toml`.
```bash
i-self automate init # write default rules
i-self automate ls
i-self automate add --name "Idle alert" --trigger idle --duration 300 \
--action notify --message "You've been idle 5 min"
i-self automate test --event idle --value 600 # actually fires the matched actions
```
### Triggers
| Trigger | Fires when |
|---|---|
| `idle` | User idle ≥ N seconds |
| `meeting-silence`| No meeting activity for N seconds |
| `no-activity` | No keyboard/mouse for N seconds |
| `activity-spike` | Keystrokes-per-window above threshold |
| `incoming-call` | Detect incoming call (system events; platform-dependent) |
### Actions
| Action | Status | Behavior |
|---|---|---|
| `notify` | ✅ implemented | Desktop notification (`osascript` / `notify-send` / PowerShell) |
| `telegram` | ✅ implemented | POST to Telegram Bot API; needs `TELEGRAM_BOT_TOKEN` + `TELEGRAM_CHAT_ID` |
| `whatsapp` | ✅ implemented | Cloud WhatsApp Business API; needs `WHATSAPP_API_KEY` + phone numbers |
| `log` | ✅ implemented | Log to tracing |
| `exit-meeting` | ⚠️ stub | Logs only — no real meeting-app integration yet |
| `start-recording` | ⚠️ stub | Logs only |
| `stop-recording` | ⚠️ stub | Logs only |
| `send-message` | ⚠️ stub | Logs only — distinct from `telegram`/`whatsapp` |
| `run-command` | ⚠️ stub | Logs the command but doesn't execute it |
Stubbed actions still appear in `automate test` output so you can verify trigger matching, but they don't do the named action.
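Trigger matching reduces to comparing an observed value against a rule's threshold; a minimal sketch (hypothetical types, not the engine's real ones):

```rust
// A rule fires when the event name matches and the observed value crosses
// the configured threshold — e.g. `idle` with threshold 300 fires at 600
// idle seconds. Illustrative only.
struct Rule {
    trigger: &'static str,
    threshold: u64,
}

fn rule_fires(rule: &Rule, event: &str, value: u64) -> bool {
    rule.trigger == event && value >= rule.threshold
}
```

This mirrors the `automate test --event idle --value 600` example above: value 600 ≥ threshold 300, so the "Idle alert" rule matches.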
## Sharing agent sessions across tools
`i-self share` discovers AI-agent transcripts on disk, exports them to a
portable format, and **imports** them into a different agent so you can
hand off a conversation between tools.
### Provider matrix
| Provider | List | Export | Import | Session files |
|---|---|---|---|---|
| Claude Code | ✅ | ✅ | ✅ | `~/.claude/projects/<encoded-cwd>/*.jsonl` |
| Aider | ✅ | ✅ | ✅ | `<project>/.aider.chat.history.md` |
| Goose (Block) | ✅ | ✅ | ✅ | `~/.config/goose/sessions/*.jsonl` |
| OpenAI Codex CLI | ✅ | ✅ | ✅ | `~/.codex/sessions/YYYY/MM/DD/rollout-*.jsonl` |
| Continue.dev | ✅ | ✅ | ✅ | `~/.continue/sessions/*.json` |
| OpenCode | ✅ | ✅ | — (use `clipboard`) | `~/.local/share/opencode/storage/session/` |
| Generic OpenAI JSON | ✅* | ✅ | ✅ | any dir set via `ISELF_GENERIC_DIR` |
| Clipboard (paste-into) | — | — | ✅ | stdout (pipe to `pbcopy`/`xclip`/`Set-Clipboard`) |
| Copilot Chat / Cline / Cursor | ❌ | ❌ | ✅ via clipboard | not file-addressable |
*The generic provider only enumerates when `ISELF_GENERIC_DIR` is set, to
avoid scanning random JSON files in your home dir. Point it at a directory
of conversation files (e.g. an unzipped ChatGPT data export).
Each provider's data dir can be overridden with an env var: `ISELF_GOOSE_DIR`,
`ISELF_CODEX_DIR`, `ISELF_CONTINUE_DIR`, `ISELF_OPENCODE_DIR`,
`ISELF_AIDER_SEARCH_ROOTS` (colon-separated), `ISELF_GENERIC_DIR`.
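The override pattern is plain env-var-or-default; a sketch (helper names are made up):

```rust
use std::env;
use std::path::PathBuf;

// Use the ISELF_* variable when set, else the provider's default directory.
fn provider_dir(var: &str, default: PathBuf) -> PathBuf {
    env::var(var).map(PathBuf::from).unwrap_or(default)
}

// ISELF_AIDER_SEARCH_ROOTS is colon-separated; empty segments are dropped.
fn aider_search_roots(raw: &str) -> Vec<PathBuf> {
    raw.split(':').filter(|s| !s.is_empty()).map(PathBuf::from).collect()
}
```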
### Discovering and exporting
```bash
# List every session across every supported provider, newest first
i-self share ls
# Filter
i-self share ls --provider claude-code
i-self share ls --since 7d # h / d / w suffixes
# Export one session to JSON (the canonical interchange format)
i-self share export <session-id> --format json --output session.json
# Render to Markdown or self-contained HTML for human reading / sharing
i-self share export <session-id> --format markdown
i-self share export <session-id> --format html
# Strip secrets that look like API keys, tokens, passwords first
i-self share export <session-id> --format markdown --redact
# Upload to your S3 bucket and get a 24-hour presigned URL
i-self share upload <session-id> --format html --expires-in 86400
```
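A `--since` value like `7d` decomposes into a number and an `h`/`d`/`w` suffix; here is one way such a parser could look (a sketch — the real flag's grammar may accept more forms):

```rust
// Parse "<n><unit>" into seconds, where unit is h (hours), d (days), or
// w (weeks). Returns None for anything else. Illustrative only.
fn parse_since(s: &str) -> Option<u64> {
    let (num, unit) = s.split_at(s.len().saturating_sub(1));
    let n: u64 = num.parse().ok()?;
    match unit {
        "h" => Some(n * 3_600),
        "d" => Some(n * 86_400),
        "w" => Some(n * 604_800),
        _ => None,
    }
}
```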
### Importing into another agent
```bash
# Continue a Claude Code session in Aider
i-self share export <claude-session-id> --format json --output s.json
i-self share import s.json --target aider --project /path/to/repo
# Continue an Aider session in Claude Code (writes a synthetic JSONL transcript)
i-self share import s.json --target claude-code
# Cross-tool: hand off to Goose, Codex CLI, or Continue.dev
i-self share import s.json --target goose
i-self share import s.json --target codex
i-self share import s.json --target continue
# Generic OpenAI JSON — feed mods / fabric / any OpenAI-SDK consumer
i-self share import s.json --target generic-openai
# For agents without addressable on-disk storage (Copilot, Cline, Cursor),
# use the clipboard target and paste into the chat box
i-self share import s.json --target clipboard | Set-Clipboard # Windows PowerShell
# `<input>` can also be an HTTPS URL (e.g. a presigned `share upload` link)
# or `-` for stdin
i-self share import https://s3.example.com/.../session.json --target aider
i-self share import - --target claude-code < session.json
```
Imported sessions get a leading provenance message:
`[i-self import] Continued from <provider> session <id>. <N> prior messages follow.`
so the recipient agent (and you, scrolling back) can tell at a glance where
the transcript came from.
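For reference, building that provenance line is a one-liner (hypothetical helper name, following the format quoted above):

```rust
// Format the provenance message prepended to every imported session.
fn provenance_header(provider: &str, session_id: &str, prior: usize) -> String {
    format!(
        "[i-self import] Continued from {provider} session {session_id}. \
         {prior} prior messages follow."
    )
}
```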
**Fidelity caveat:** tool-call structure flattens to text when crossing
agent boundaries (each agent has its own tool catalog). The recipient gets
a faithful, readable transcript they can continue from — not a wire-compatible
session that re-executes the source agent's tool calls verbatim.
## Snippets
```bash
i-self snippet add "My Snippet" "fn main() {}" rust --tags "async,tokio"
i-self snippet ls --language rust
i-self snippet ls --tag tokio --favorites
i-self snippet search "http"
i-self snippet show <id>
```
## Analyzers (formerly "plugins")
```bash
i-self plugin list
```
There is **no dynamic plugin loader**. The earlier `plugin load --directory` command was misleading — it walked `.so` files but never `dlopen`ed them. The three built-in analyzers (security patterns, performance patterns, documentation) are compiled in. To add another, implement `AnalyzerPlugin` in [src/plugins/mod.rs](src/plugins/mod.rs) and rebuild.
## VS Code extension
A thin TypeScript wrapper that shells out to the CLI binary. Build it:
```bash
cd vscode-extension
npm install
npm run vscode:prepublish
```
Commands: list/search snippets, add selected code as snippet, start/stop activity tracking, code review panel.
## Architecture
```
src/
├── main.rs # clap dispatcher (~20 subcommands)
├── ai/ {openai,claude,gemini,litellm}.rs # LLM provider clients (all stream)
├── analyzer/ # local code stats + git history
├── automation/ # trigger → action engine
├── config/ # ~/.i-self/config.toml
├── github/ + vcs/{gitlab,bitbucket}.rs # repo scanning
├── monitor/ {input,screenshot,notification}.rs # device_query poller, screencapture
├── plugins/ # built-in code analyzers (NOT a dynamic loader)
├── semantic/ {index,search,storage}.rs # embeddings + cosine similarity
├── storage/ # ~/.i-self/ filesystem layout, profiles, KB
├── sync/ # aws-sdk-s3 backed cloud sync
├── vuln/ # OSV API + Cargo/npm lockfile parsers + CVSS v3
└── web/ {auth,handlers,routes,state}.rs # axum dashboard + bearer auth
```
State is in `~/.i-self/`:
```
config.toml # main config
sync_config.json # cloud sync settings
api_token # optional — bearer token for the dashboard/API
automation.toml # automation rules
profile.json # developer profile
embeddings/ # semantic index
screenshots/ # activity monitor captures
```
## Tech stack
- Rust (rustc 1.88, MSRV pinned by aws-sdk-s3 1.51 line)
- tokio, axum, reqwest
- `octocrab` (GitHub), `git2` (libgit2), `walkdir`
- `aws-sdk-s3` for cloud sync (S3-compatible)
- `device_query` for global input polling
- OpenAI `text-embedding-3-small` for semantic search
- OSV REST API for vulnerability data
## Status
This is a working prototype, not a polished release. Known sharp edges:
- Some automation actions (`exit-meeting`, `run-command`, etc.) log instead of executing.
- Embeddings are not tagged with the model that produced them, so toggling `OPENAI_API_KEY` between sessions can produce nonsensical search scores until you reindex.
- No rate limiting on the API server. Bearer auth gates *who* can call; once authenticated, calls are unbounded.
- No-op fallback embeddings (when `OPENAI_API_KEY` is unset) are keyword-bucket hashes, not semantic.
Contributions welcome.
## License
MIT