A single Rust binary that aggregates 11 search providers into one unified search interface. Designed from day one for AI agents — structured JSON output, semantic exit codes, machine-readable capabilities, and auto-JSON when piped.
Works with OpenClaw, Claude Code, Codex CLI, Gemini CLI, or any agent framework that can shell out to a command.
One command: it auto-detects your intent, fans out to the right providers in parallel, deduplicates the results, and returns them in under 2 seconds.
## Why
Every search API is good at something different. Brave has its own 35-billion page index. Serper gives you raw Google results plus Scholar, Patents, and Places. Exa does neural/semantic search and finds LinkedIn profiles. Perplexity gives AI-synthesized answers with citations using Sonar Pro. Jina reads any URL into clean markdown. Firecrawl renders JavaScript-heavy pages. xAI searches X/Twitter via Grok.
search routes your query to the right combination automatically — or lets you pick exactly which providers to use.
## Install
Cargo (recommended):
One-liner (macOS / Linux):
Homebrew:
From source:
Binary size: ~6MB. Startup: ~2ms. Memory: ~5MB. No Python, no Node, no Docker.
## Quick Start

```sh
# 1. Set your API keys (any combination works — even just one)
# Or use environment variables

# 2. Search
```
## Usage

```sh
# Auto-detect mode (recommended — just type what you want)

# Force a specific mode

# Pick specific providers

# Control output
```
Auto-JSON: Output is automatically JSON when piped to another program. Human-readable tables when you're in a terminal.
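Under the hood this is the standard TTY check on stdout. A minimal Python sketch of the same decision (illustrative only; the actual binary is Rust):

```python
import sys

def choose_format(stdout_is_tty: bool) -> str:
    """Auto-JSON decision: human-readable table on an interactive
    terminal, machine-readable JSON when piped or redirected."""
    return "table" if stdout_is_tty else "json"

# Interactive shell: sys.stdout.isatty() is True, so a table is rendered.
# Piped (e.g. `search ... | jq`): isatty() is False, so JSON is emitted.
print(choose_format(sys.stdout.isatty()))
```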
## Modes

| Mode | What it does | Providers used |
|---|---|---|
| `auto` | Detects intent from your query | varies |
| `general` | Broad web search | Brave + Serper + Exa + Jina + Tavily + Perplexity |
| `news` | Breaking news, current events | Brave News + Serper News + Tavily + Perplexity |
| `academic` | Research papers, studies | Exa + Serper + Tavily + Perplexity |
| `people` | LinkedIn profiles, bios | Exa |
| `deep` | Maximum coverage | Exa + Serper + Tavily + Perplexity |
| `scholar` | Google Scholar | Serper + SerpApi |
| `patents` | Patent search | Serper |
| `images` | Image search | Serper |
| `places` | Local businesses, maps | Serper |
| `extract` | Full text from a URL | Stealth -> Jina -> Firecrawl -> Browserless |
| `scrape` | Page scraping | Stealth -> Jina -> Firecrawl -> Browserless |
| `similar` | Find similar pages to a URL | Exa |
| `social` | X/Twitter social search | xAI (Grok) |
## Providers
| Provider | Strength | Best for |
|---|---|---|
| Brave | Independent 35B-page index, not Google | Web search, news, privacy-focused results |
| Serper | Raw Google SERP + specialist endpoints | Scholar, patents, images, places, fact-checking |
| Exa | Neural/semantic search, category filters | Research papers, LinkedIn people, finding similar sites |
| Jina | Fast URL-to-markdown, 500 RPM free tier | Reading article content, quick extraction |
| Firecrawl | JavaScript rendering, structured extraction | Dynamic pages, SPAs, data extraction |
| Tavily | General, news, academic, deep search | Broad coverage, research-oriented queries |
| SerpApi | 80+ engines: Google, Bing, YouTube, Baidu | Scholar, multi-engine coverage |
| Perplexity | AI-powered answers with citations (Sonar Pro) | Complex queries, synthesized answers with sources |
| Browserless | Cloud browser for Cloudflare/JS-heavy pages | Anti-bot bypass, dynamic page rendering |
| Stealth | Anti-bot stealth scraper | Extracting content from protected pages |
| xAI | X/Twitter search via Grok AI | Tweets, trending topics, social sentiment |
## Agent Integration

Built for AI agents from day one. Every command supports `--json` and structured error codes.
```sh
# Discover capabilities programmatically

# Structured JSON with metadata
# {
#   "version": "1",
#   "status": "success",
#   "query": "...",
#   "mode": "general",
#   "results": [...],
#   "metadata": {
#     "elapsed_ms": 1542,
#     "result_count": 10,
#     "providers_queried": ["brave", "serper", "exa"],
#     "providers_failed": []
#   }
# }

# Errors are also structured JSON
# {
#   "status": "error",
#   "error": {
#     "code": "no_providers",
#     "message": "No providers configured for mode 'general'",
#     "suggestion": "Configure at least one provider API key"
#   }
# }
```
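An agent consuming this envelope can branch on `status` before touching `results`. A small Python sketch against the schema shown above (the sample envelope is abbreviated):

```python
import json

def results_from(envelope_text: str) -> list:
    """Parse a search envelope: return `results` on success,
    raise with the structured error details otherwise."""
    env = json.loads(envelope_text)
    if env["status"] != "success":
        err = env["error"]
        raise RuntimeError(f"{err['code']}: {err['message']} ({err['suggestion']})")
    return env["results"]

# Abbreviated success envelope matching the schema above.
ok = """{"version": "1", "status": "success", "query": "rust async", "mode": "general",
         "results": [{"url": "https://example.com"}],
         "metadata": {"elapsed_ms": 1542, "result_count": 1,
                      "providers_queried": ["brave"], "providers_failed": []}}"""
print(results_from(ok))
```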
Exit codes are semantic:
| Code | Meaning | Agent action |
|---|---|---|
| 0 | Success | Process results |
| 1 | Runtime error | Retry might help |
| 2 | Config error | Fix configuration |
| 3 | Auth missing | Set API key |
| 4 | Rate limited | Back off and retry |
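The table above maps directly onto a retry policy. A sketch of an agent-side dispatcher (the action labels are mine, not part of the tool):

```python
def action_for(exit_code: int) -> str:
    """Map semantic `search` exit codes to an agent-side next step."""
    return {
        0: "process",        # success: parse the JSON envelope
        1: "retry",          # runtime error: a retry might help
        2: "fix-config",     # config error: retrying will not help
        3: "set-api-key",    # auth missing: surface to the operator
        4: "backoff-retry",  # rate limited: wait, then retry
    }.get(exit_code, "abort")  # unknown codes: fail loudly
```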
## Configuration

The config file lives at `~/.config/search/config.toml` (Linux) or `~/Library/Application Support/search/config.toml` (macOS).

Environment variables override the config file. Prefix: `SEARCH_KEYS_`.
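A hypothetical sketch of what that could look like in practice; the `[keys]` table name and the specific key names below are assumptions for illustration, not a confirmed schema:

```toml
# ~/.config/search/config.toml (layout is an assumption, shown for illustration)
[keys]
brave  = "your-brave-api-key"
serper = "your-serper-api-key"
```

Under the stated prefix, the environment override for a `brave` key would presumably be `SEARCH_KEYS_BRAVE`.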
## How It Works

- **Parse** — Clap parses your query, mode, provider filter, and output preferences
- **Classify** — If mode is `auto`, a regex-based intent classifier picks the right mode
- **Route** — Mode determines which providers to query (or you override with `-p`)
- **Fan out** — `tokio::JoinSet` fires all providers in parallel with per-provider timeouts
- **Collect** — Results stream in as providers respond (no waiting for the slowest)
- **Dedup** — URL normalization removes duplicates across providers
- **Log** — Every search is logged to `~/Library/Application Support/search/logs/` (JSONL)
- **Render** — JSON envelope or colored terminal table, auto-detected from context
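The dedup step comes down to reducing each URL to a canonical key. A simplified Python sketch of one plausible normalization (the actual Rust implementation may normalize differently):

```python
from urllib.parse import urlsplit

def url_key(url: str) -> str:
    """Canonical key for dedup: lowercase host, drop scheme, fragment,
    trailing slash, and a leading 'www.'."""
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/")
    query = f"?{parts.query}" if parts.query else ""
    return f"{host}{path}{query}"

def dedup(results: list[dict]) -> list[dict]:
    """Keep the first result per canonical URL, preserving arrival order."""
    seen, out = set(), []
    for r in results:
        key = url_key(r["url"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out
```

Keeping the first hit per key preserves arrival order, so in a streaming collect the faster providers win ties.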
## Updating
## Building from Source

```sh
# Binary at target/release/search
```
## License
MIT
Created by Boris Djordjevic at 199 Biotechnologies.