# dv — Dataverse CLI
A fast Rust CLI for querying real-time social media data from X/Twitter and Reddit, powered by the Bittensor SN13 decentralized data network.
> [!NOTE]
> Dataverse CLI is currently in Beta. We'd love your feedback — please open an issue or submit a PR.
## Features at a Glance
- Real-Time Search — Query X/Twitter and Reddit posts by keyword, username, or URL via decentralized Bittensor miners
- Large-Scale Collection — Gravity tasks collect data continuously for up to 7 days across the miner network
- Dataset Export — Build downloadable Parquet datasets from collected data
- Multiple Output Formats — Table, JSON, and CSV output for terminal, scripting, and analysis
- Agent/LLM Friendly — `dv commands` emits a full JSON schema of all commands for tool integration
- Dry-Run Mode — Preview exact API requests without executing or consuming credits
- Secure Config — API keys stored with 0600 permissions, masked in output
## Install
### Cargo (Rust)
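If the binary is published on crates.io, installation is a one-liner. The crate name `dv` below is an assumption — confirm the actual published name on crates.io or the project's releases page:

```shell
# crate name assumed; confirm on crates.io before running
cargo install dv
```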
### From Source
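A typical from-source build with Cargo looks like the sketch below; the repository URL is a placeholder you'll need to substitute:

```shell
# clone the repository (replace <repo-url> with the project's actual URL)
git clone <repo-url>
cd dv

# build in release mode and install the binary into ~/.cargo/bin
cargo install --path .
```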
### Manual
Download the binary for your platform from Releases, and place `dv` in your `$PATH`.
## Setup
Get a free API key at app.macrocosmos.ai, then:
```sh
# Interactive setup (recommended — input is masked)
dv auth

# Or via environment variable
export MACROCOSMOS_API_KEY=your-key-here

# Verify configuration
dv status
```
API key resolution order: `--api-key` flag > `MC_API` env > `MACROCOSMOS_API_KEY` env > config file.
## Global Flags
```sh
# JSON output (for scripting and agents)
dv search x -k bitcoin -o json | jq '.'

# CSV export
dv search x -k bitcoin -o csv > results.csv

# Dry-run mode (shows the API request without executing it)
dv search x -k bitcoin --dry-run    # flag name assumed; see `dv search --help`

# Custom timeout
dv search x -k bitcoin --timeout 60    # flag name assumed; see `dv --help`
```
All data commands support `-o json` and `-o csv`. Diagnostics go to stderr; stdout is always clean data.
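Because stdout carries only data, diagnostics can be captured separately without corrupting a pipeline. A small sketch (the search arguments are illustrative, and `jq` is an external tool):

```shell
# keep progress/diagnostic messages in a log file while piping clean JSON
dv search x -k bitcoin -o json 2>dv.log | jq '.'
```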
## Commands

### `dv search` — Real-Time Social Data
Search X/Twitter or Reddit posts in real-time via the Bittensor SN13 miner network.
```sh
# Search X by keyword
dv search x -k bitcoin

# Search by username (X only)
dv search x -u someuser

# Multiple keywords with AND mode
dv search x -k "bitcoin,etf" --mode all

# Search Reddit (first keyword is the subreddit)
dv search reddit -k "CryptoCurrency,bitcoin"

# Search by URL
dv search x --url "https://x.com/i/status/..."
```
| Flag | Default | Description |
|---|---|---|
| `source` | — | Required. `x`, `twitter`, or `reddit` |
| `-k, --keywords` | — | Keywords, comma-separated (up to 5). For Reddit, first item is the subreddit |
| `-u, --usernames` | — | Usernames, comma-separated (up to 5, X only) |
| `--from` | 24h ago | Start date (`YYYY-MM-DD` or ISO 8601) |
| `--to` | now | End date (`YYYY-MM-DD` or ISO 8601) |
| `-l, --limit` | 100 | Max results (1–1000) |
| `--mode` | `any` | Keyword match mode: `any` (OR) or `all` (AND) |
| `--url` | — | Search by URL instead of keywords |
### `dv gravity create` — Start Data Collection
Create a Gravity task that collects social data from the Bittensor miner network for up to 7 days.
| Flag | Default | Description |
|---|---|---|
| `-p, --platform` | — | Required. `x`, `twitter`, or `reddit` |
| `-t, --topic` | — | Topic to track. X: `#hashtag` or `$cashtag`. Reddit: `r/subreddit` |
| `-k, --keyword` | — | Additional keyword filter |
| `-n, --name` | — | Task name |
| `--email` | — | Notification email on completion |
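Putting the flags above together, two example invocations (topic, name, and email values are illustrative):

```shell
# track a hashtag on X and get an email when the task completes
dv gravity create -p x -t '#bitcoin' -n btc-task --email you@example.com

# track a subreddit with an extra keyword filter
dv gravity create -p reddit -t r/CryptoCurrency -k etf -n reddit-etf
```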
### `dv gravity status` — Monitor Tasks
List all tasks or check a specific task. Always use `--crawlers` to see record counts and data sizes.
```sh
# List all tasks with collection stats
dv gravity status --crawlers

# Check a specific task
dv gravity status <task-id> --crawlers
```
| Flag | Default | Description |
|---|---|---|
| `task_id` | — | Omit to list all tasks |
| `--crawlers` | false | Include record counts and data sizes |
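Since collection runs for hours to days, a simple polling loop can keep an eye on progress. A sketch — the interval is arbitrary, and `-o json` is assumed to apply here as it does to the other data commands:

```shell
# check collection stats every 10 minutes
while true; do
  dv gravity status --crawlers -o json
  sleep 600
done
```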
### `dv gravity build` — Build Dataset

Build a downloadable Parquet dataset from a crawler.

> [!WARNING]
> This stops the crawler and deregisters it from the network. Only build when you have enough data.
| Flag | Default | Description |
|---|---|---|
| `crawler_id` | — | Required. Crawler ID |
| `--max-rows` | 10000 | Maximum rows in the dataset |
### `dv gravity dataset` — Dataset Status
Check dataset build progress and get download links.
### `dv gravity cancel` / `dv gravity cancel-dataset`

Cancel a running collection task or an in-progress dataset build, respectively.
### `dv auth` — Configure API Key
Interactive setup that validates your key against the SN13 network and saves to config.
### `dv status` — Check Connection
Shows API key source and tests connectivity to the SN13 network.
## Agent / LLM Integration
Dataverse CLI is designed for use by AI agents and LLMs.
```sh
# Full JSON schema of all commands, flags, types, and examples
dv commands
```
The hidden `dv commands` subcommand outputs a machine-readable catalog for tool integration. See `AGENTS.md` for the full integration guide, including response schemas, workflow tips, and common patterns.
## Gravity Workflow
1. Create a task → `dv gravity create -p x -k bitcoin -n "my task"`
2. Monitor → `dv gravity status --crawlers`
3. Wait → let miners collect data (hours to days)
4. Build the dataset → `dv gravity build crawler-0-multicrawler-... --max-rows 50000`
5. Check progress → `dv gravity dataset dataset-...`
6. Download → Parquet files with download URLs
> [!TIP]
> Don't build too early. If a task has very few records, the dataset will be empty. Let it collect for at least a few hours.
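The steps above can be strung together as one script. This is a sketch, not a turnkey pipeline: the `<crawler-id>` and `<dataset-id>` placeholders must be copied from the `status` and `build` output, and the waiting steps are manual:

```shell
#!/usr/bin/env sh
# 1) start a collection task on X
dv gravity create -p x -k bitcoin -n "my task"

# 2-3) monitor while miners collect (re-run over hours to days)
dv gravity status --crawlers

# 4) build a dataset once enough data has accumulated
dv gravity build <crawler-id> --max-rows 50000

# 5) check build progress; download URLs appear when the build finishes
dv gravity dataset <dataset-id>
```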
## Development

### Tech Stack
| Crate | Purpose |
|---|---|
| clap | CLI argument parsing with derive API |
| reqwest | Async HTTP/2 client with rustls |
| serde | JSON serialization/deserialization |
| tokio | Async runtime |
| tabled | Terminal table formatting |
| colored | Terminal colors |
| dialoguer | Interactive prompts |
## License
MIT — see LICENSE.