# OpenClaw Research Tool

Web search for OpenClaw agents, powered by OpenRouter. Ask questions in natural language, get accurate answers with cited sources. Defaults to GPT-5.2, which excels at documentation lookups and citation-heavy research.
Note: Even low-effort queries may take 1 minute or more to complete. High/xhigh reasoning can take 10+ minutes depending on complexity. This is normal — the model is searching the web, reading pages, and synthesizing an answer.
**For OpenClaw agents:** run `research-tool` in a sub-agent so your main session stays responsive while the search runs:

```
sessions_spawn task:"research-tool 'your query here'"
```

⚠️ Never set a timeout on `exec` when running `research-tool`. Queries routinely take 1-10+ minutes. Use `yieldMs` to background it, then poll, but do NOT set `timeout` or the process will be killed mid-search.
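For example, the background-then-poll pattern might look like the following sketch (the parameter syntax mirrors the `sessions_spawn` example above and is illustrative, not exact):

```
exec command:"research-tool 'your query here'" yieldMs:30000   # yield to background after 30s; no timeout set
# ...then poll the backgrounded process for output until the search finishes
```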
Built on OpenRouter, which gives any model live web search via the `:online` suffix. The default model is `openai/gpt-5.2:online`, but you can use any model OpenRouter supports.
## Install
## Setup
Get an API key from OpenRouter and set it in your environment:
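For example (the `OPENROUTER_API_KEY` variable name follows the common OpenRouter convention; substitute your real key):

```bash
# Hypothetical key shown; replace with your own from openrouter.ai
export OPENROUTER_API_KEY="sk-or-v1-..."
```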
Or add it to a `.env` file in your working directory.
## Usage
Search in natural language — write your query the way you'd ask a person:
```bash
# Simple question
research-tool "What are the best practices for Rust error handling?"

# Deep analysis
research-tool -e xhigh "Compare the major Rust async runtimes and when to pick each"

# Quick fact check
research-tool -e low "What is the latest stable Rust release?"

# Custom persona
research-tool -s "You are a skeptical security reviewer" "Should Redis ever be exposed to the public internet?"

# Use a different model
research-tool -m google/gemini-2.5-pro:online "Summarize the current state of WebGPU support"

# Pipe from stdin
cat question.txt | research-tool --stdin

# Save output (response goes to stdout, metadata to stderr)
research-tool "Explain HTTP/3 0-RTT resumption" > answer.md
```
## Options
| Flag | Short | Default | Description |
|---|---|---|---|
| `--model` | `-m` | `openai/gpt-5.2:online` | Model to use. Defaults to GPT-5.2, great for cited answers and docs. Append `:online` to any model for web search. |
| `--effort` | `-e` | `low` | Reasoning effort: `low`, `medium`, `high`, `xhigh` |
| `--system` | `-s` | Research assistant | Custom system prompt / persona |
| `--max-tokens` | | `12800` | Max response tokens |
| `--timeout` | | none | Optional request timeout in seconds (no timeout by default) |
| `--stdin` | | | Read query from stdin |
## How it works
- Your query is sent to OpenRouter's chat completions API
- The `:online` model variant enables live web search: the model browses the web, reads pages, and synthesizes an answer
- Response text goes to stdout (pipe-friendly); reasoning traces and token stats go to stderr
- Connection status is printed so you know if the search is still running or failed
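For orientation, the request body is a standard OpenAI-style chat-completions payload. A minimal sketch of the JSON that would be POSTed to OpenRouter's `/api/v1/chat/completions` endpoint (the exact field set the tool sends is an assumption; the model name comes from the default above):

```bash
# Sketch of the request body (field names follow OpenRouter's
# OpenAI-compatible API; the tool's exact payload is an assumption)
BODY='{"model":"openai/gpt-5.2:online","messages":[{"role":"user","content":"When was Rust 1.0 released?"}]}'
echo "$BODY"
```

Sending it yourself is just `curl` with that body plus an `Authorization: Bearer $OPENROUTER_API_KEY` header.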
## Tips
- Write naturally. "What are the best practices for Rust error handling?" works better than keyword-style queries.
- Provide context. The model starts from zero — the more detail you give, the better the answer. A 200-word question with background context will outperform a 5-word question.
- Use effort levels. `--effort low` for quick lookups (~1-3 min); `--effort xhigh` for deep research (5-20+ min).
- Any model works with `:online`. Try `anthropic/claude-opus-4-6:online` or `google/gemini-2.5-pro:online` for different perspectives.
## Output

```
🔍 Researching with openai/gpt-5.2:online (effort: high)...
✅ Connected — waiting for response...

[response text to stdout]

📊 Tokens: 4470 prompt + 184 completion = 4654 total | ⏱ 5s
```
## Cost
Roughly $0.01–0.05 per query depending on response length and reasoning effort. Token usage is printed after each query.
## License
MIT