# Manx - Lightning-Fast Documentation Finder

Blazing-fast CLI tool for developers to find documentation, code snippets, and answers instantly.

Quick Start • Documentation • Configuration

## What Makes Manx Special?

Manx is the fastest way to find documentation and code snippets from your terminal, with four levels of capability:
### Default Mode

Works immediately, no setup:

- **Hash Embeddings** - built-in algorithm (0ms)
- **Official Docs** - Context7 integration
- **Keyword Search** - strong exact matching
- **Zero Storage** - no downloads needed
### Enhanced Mode

Better search, one-command setup:

- **Neural Embeddings** - HuggingFace models (87-400MB)
- **Semantic Understanding** - "database" matches "data storage"
- **Intent Matching** - superior result relevance
- **Easy Installation** - `manx embedding download`
### RAG Mode

Your docs + AI, local setup:

- **Private Documents** - searches only your indexed files
- **Semantic + AI Search** - your knowledge plus LLM synthesis
- **Multi-format Support** - PDF, Markdown, DOCX, URLs
- **Usage** - `manx search "topic" --rag`
### AI Mode

Full synthesis, API key setup:

- **Neural + AI Analysis** - best of both worlds
- **Comprehensive Answers** - code + explanations + citations
- **Multi-Provider Support** - OpenAI, Anthropic, Groq, and more
- **Fine-grained Control** - per-command AI toggle

**Start with Default → upgrade to Enhanced → index your docs (RAG) → add AI when needed.**
## How Manx Works Under the Hood

### Search Architecture Flow

```mermaid
graph TD
    A[User Query] --> B{Search Command}
    B --> C[snippet/search/doc]
    C --> D[Query Processing]
    D --> E{Embedding Provider}
    E -->|Default| F[Hash Algorithm]
    E -->|Enhanced| G[Neural Model]
    E -->|API| H[OpenAI/HF API]
    F --> I[Vector Generation]
    G --> I
    H --> I
    I --> J{Data Sources}
    J -->|Official| K[Context7 API]
    J -->|Local| L[Indexed Docs]
    J -->|Cache| M[Local Cache]
    K --> N[Semantic Search]
    L --> N
    M --> N
    N --> O[Result Ranking]
    O --> P{AI Enhancement}
    P -->|Disabled| Q[Documentation Results]
    P -->|Enabled| R[LLM Analysis]
    R --> S[Enhanced Response]
    Q --> T[Terminal Output]
    S --> T
```
### Embedding System Architecture

```mermaid
graph LR
    A[User Query] --> B{Embedding Config}
    B -->|hash| C[Hash Provider<br/>384D, 0ms, 0MB]
    B -->|onnx:model| D[ONNX Provider<br/>384-768D, local, 87-400MB]
    B -->|openai:model| E[OpenAI Provider<br/>1536-3072D, ~100ms, API]
    B -->|ollama:model| F[Ollama Provider<br/>Variable, ~50ms, Local]
    C --> G[Word Hashing<br/>+ N-gram Features]
    D --> H[Neural Network<br/>Inference]
    E --> I[REST API Call]
    F --> J[Local Model Server]
    G --> K[Vector Output]
    H --> K
    I --> K
    J --> K
    K --> L[Cosine Similarity<br/>Search]
    L --> M[Ranked Results]
```
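To make the default pipeline concrete, here is a small Python sketch (an illustrative simplification, not Manx's actual Rust code) of a hash-based embedding built from word and character-trigram features, followed by cosine-similarity ranking:

```python
import hashlib
import math

DIM = 384  # same dimensionality the hash provider reports

def hash_embed(text, dim=DIM):
    """Toy hash embedding: each word and each character trigram bumps
    one dimension chosen by a stable hash. No model, no downloads."""
    vec = [0.0] * dim
    text = text.lower()
    features = text.split() + [text[i:i + 3] for i in range(len(text) - 2)]
    for feature in features:
        h = int(hashlib.md5(feature.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are pre-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

docs = ["connect to a database", "parse markdown files",
        "database connection pooling"]
query = hash_embed("database")
ranked = sorted(docs, key=lambda d: cosine(query, hash_embed(d)), reverse=True)
```

This also shows the hash provider's limitation: it scores shared tokens and trigrams, so "database" ranks documents containing that literal word, while only a neural model would also surface "data storage".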
### Configuration Workflow

```mermaid
sequenceDiagram
    participant U as User
    participant C as CLI
    participant M as Model Manager
    participant P as Provider
    participant S as Search Engine
    Note over U,S: Initial Setup (Optional)
    U->>C: manx embedding list --available
    C->>U: Show HuggingFace models
    U->>C: manx embedding download model-name
    C->>M: Download from HuggingFace
    M->>M: Extract dimensions from config.json
    M->>C: Model installed + metadata saved
    U->>C: manx config --embedding-provider onnx:model-name
    C->>M: Load model metadata
    M->>C: Dimension: 768, Path: ~/.cache/manx/models/
    C->>C: Update config with detected dimension
    Note over U,S: Daily Usage
    U->>C: manx snippet react "hooks"
    C->>P: Initialize provider from config
    P->>P: Load model (onnx) or use algorithm (hash)
    P->>S: Generate embeddings
    S->>U: Search results with semantic ranking
```
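The "extract dimensions from config.json" step can be sketched as below. This assumes, as for typical HuggingFace transformer models, that the dimension is stored under `hidden_size`; Manx's exact metadata handling may differ:

```python
import json
import os
import tempfile

def detect_dimension(config_path):
    """Read the embedding dimension from a downloaded model's config.json.
    HuggingFace transformer configs usually expose it as `hidden_size`."""
    with open(config_path) as f:
        cfg = json.load(f)
    return cfg.get("hidden_size") or cfg.get("d_model")

# Demo with a minimal config.json like the ones shipped alongside models
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump({"hidden_size": 768}, f)

dim = detect_dimension(path)  # the value later written into the manx config
```

Detecting the dimension up front matters because stored vectors and query vectors must have identical dimensionality for cosine similarity to be defined.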
### Data Flow & Storage

```mermaid
graph TB
    subgraph "Local Storage"
        A[~/.config/manx/<br/>config.json]
        B[~/.cache/manx/models/<br/>ONNX files + metadata]
        C[~/.cache/manx/rag/<br/>Indexed documents]
        D[~/.cache/manx/cache/<br/>API responses]
    end
    subgraph "External APIs"
        E[Context7<br/>Official Docs]
        F[HuggingFace<br/>Model Downloads]
        G[OpenAI/Anthropic<br/>AI Synthesis]
        H[Ollama<br/>Local LLM Server]
    end
    subgraph "Core Engine"
        I[Embedding Providers]
        J[Search Algorithm]
        K[Result Processor]
        L[Terminal Renderer]
    end
    A --> I
    B --> I
    C --> J
    D --> J
    E --> J
    F --> B
    G --> K
    H --> I
    I --> J
    J --> K
    K --> L
    L --> M[User Terminal]
```
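The `~/.cache/manx/cache/` layer in the diagram is essentially a content-addressed response store: hash the query, and reuse the saved response if the file exists. A minimal sketch of the idea (illustrative; Manx's real cache format and eviction policy are internal):

```python
import hashlib
import json
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # stand-in for ~/.cache/manx/cache/

def cache_path(query):
    # One JSON file per query, named by a stable hash of the query text.
    return os.path.join(CACHE_DIR, hashlib.sha256(query.encode()).hexdigest() + ".json")

def cached_fetch(query, fetch):
    """Return a cached API response if present, else call fetch() and store it."""
    path = cache_path(query)
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    result = fetch(query)
    with open(path, "w") as f:
        json.dump(result, f)
    return result

calls = []
def fake_api(q):
    calls.append(q)
    return {"query": q, "results": ["doc1", "doc2"]}

first = cached_fetch("react hooks", fake_api)
second = cached_fetch("react hooks", fake_api)  # served from disk, no second call
```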
## Core Features

### Lightning-Fast Documentation Search

Get instant access to documentation and code examples:

- **Web Documentation Search** - instant access to official docs and tutorials via DuckDuckGo
- **Official Documentation Browser** - real-time official documentation with examples
- **Code Snippet Search** - working code examples with clear explanations
- **Local Document Search (RAG)** - semantic search through your indexed documents
### Beautiful Terminal Experience

Every response features:

- **Clear Documentation** - well-formatted, readable content
- **Code Examples** - syntax-highlighted, runnable code
- **Quick Results** - instant access to what you need
- **Source Links** - direct links to official documentation
### Optional AI Enhancement

Add AI analysis when you need deeper insights (completely optional). Supported providers:

- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Groq (ultra-fast inference)
- OpenRouter (multi-model access)
- HuggingFace (open-source models)
- Custom endpoints (self-hosted models)
### Local Document Search (RAG)

Index and search your own documentation and code files:

```bash
# 1. Index your documents
# 2. Enable local search
# 3. Search your indexed content
```

Benefits:

- **Private & Offline** - your documents never leave your machine
- **Semantic Search** - uses the same embedding models as web search
- **AI Integration** - optional LLM synthesis from your own docs
- **File Formats** - supports `.md`, `.txt`, `.pdf`, `.docx`, plus web URLs
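Under the hood, RAG indexing boils down to three steps: split documents into chunks, embed each chunk, and rank chunks by similarity at query time. A toy sketch of that pipeline (illustrative only: the file name and chunk size are made up, and the real embedder is hash- or ONNX-based):

```python
import hashlib
import math

def embed(text, dim=384):
    # Toy bag-of-words hash embedding, normalized for cosine scoring.
    vec = [0.0] * dim
    for word in text.lower().split():
        word = word.strip(".,!?")
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    n = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / n for x in vec]

def chunk(text, size=12):
    """Split a document into overlapping word windows before embedding."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

index = []  # (source, chunk, vector) triples: stand-in for ~/.cache/manx/rag/
doc = ("To rotate the API key, open the team vault and run the rotation "
       "script. Database backups run nightly and are stored for thirty days.")
for c in chunk(doc):
    index.append(("team-notes.md", c, embed(c)))

def search(query, k=1):
    qv = embed(query)
    scored = sorted(index,
                    key=lambda e: sum(a * b for a, b in zip(qv, e[2])),
                    reverse=True)
    return scored[:k]

top = search("how do I rotate the API key")[0]
```

Overlapping windows keep sentences that straddle a chunk boundary retrievable from at least one chunk.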
## Quick Start

### 1. Installation

```bash
# Using Cargo (recommended)
# Using the shell script installer
# Manual download from releases:
# https://github.com/neur0map/manx/releases/latest
```

### 2. Core Commands

```bash
# Search web documentation instantly
# Browse official documentation
# Find working code snippets
# Index your personal documentation (optional)
```

### 3. Context7 API Configuration (Recommended)

```bash
# Get higher rate limits for documentation access
# Test that everything is working
# Optional: add AI enhancement
```
## Complete Command Reference

### Search Commands

#### Web Search (DuckDuckGo-powered)

#### Documentation Browser

#### Code Snippets

#### Result Retrieval

### Knowledge Management

```bash
# Index local documents
# Deep crawl documentation sites (NEW!)
# Manage indexed sources
# Cache management
```

### Configuration

```bash
# View current settings
# Context7 API (for official docs - recommended)
# AI provider configuration (optional)
# Switch between models
# Remove API keys / disable AI
# Other settings
```
## Personal Knowledge Base

Index your documentation and notes for instant search:

### Index Your Knowledge

```bash
# Personal development notes
# Team knowledge base
# Web documentation (single page)
# Deep crawl entire documentation sites
```

### Unified Search Experience

Returns:

- Official docs (FastAPI, OAuth, JWT guides)
- Your notes (team auth procedures, troubleshooting)
- Direct links to source documentation and files
### Security Features

- **PDF Security**: validates PDFs for malicious content
- **Content Sanitization**: cleans and validates all indexed content
- **Local Processing**: RAG runs entirely locally
- **Privacy Control**: core functionality works entirely offline

### Supported Formats

- **Documents**: `.md`, `.txt`, `.docx`, `.pdf`
- **Web Content**: HTML pages with automatic text extraction
- **Code Files**: syntax-aware indexing
- **URLs**: single page or deep crawl of entire documentation sites
- **Deep Crawling**: automatically discovers and indexes interconnected documentation pages
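Deep crawling is, at its core, a breadth-first walk over same-domain links. The sketch below illustrates the idea against an in-memory stand-in for a site; a real crawler (such as spider-rs, which Manx uses) fetches and parses HTML instead:

```python
from collections import deque
from urllib.parse import urljoin, urlparse

# A stand-in site: page URL -> links found on that page.
SITE = {
    "https://docs.example.com/": ["/guide", "/api", "https://elsewhere.com/"],
    "https://docs.example.com/guide": ["/", "/api"],
    "https://docs.example.com/api": ["/guide"],
}

def deep_crawl(start, max_pages=50):
    """BFS over same-domain links: the core of deep-crawl indexing."""
    domain = urlparse(start).netloc
    seen, queue, pages = set(), deque([start]), []
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        pages.append(url)
        for link in SITE.get(url, []):
            absolute = urljoin(url, link)  # resolve relative links
            # Stay on the starting domain; skip already-visited pages.
            if urlparse(absolute).netloc == domain and absolute not in seen:
                queue.append(absolute)
    return pages

crawled = deep_crawl("https://docs.example.com/")
```

The same-domain check is what keeps a documentation crawl from wandering onto unrelated sites, and `max_pages` bounds the walk.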
## Optional AI Features

### Enhanced Analysis (When Enabled)

When you configure an AI provider, responses include deeper analysis:

```text
┌─────────────────────────────────┐
│ Documentation Results           │
└─────────────────────────────────┘
1. React Hooks Introduction
   https://reactjs.org/docs/hooks-intro.html
2. useState Hook Documentation
   https://reactjs.org/docs/hooks-state.html

┌─────────────────────────────────┐
│ AI Analysis (Optional)          │
└─────────────────────────────────┘
❯ Quick Summary
  React hooks allow you to use state and lifecycle
  features in functional components.

❯ Key Insights
  • useState manages component state
  • useEffect handles side effects
  • Custom hooks enable logic reuse
```
### Provider-Specific Features

#### OpenAI

- GPT-4, GPT-3.5-turbo
- Function calling support
- Streaming responses
- High-quality synthesis

#### Anthropic

- Claude 3.5 Sonnet
- Large context windows
- Excellent code understanding
- Safety-focused responses

#### Groq

- Ultra-fast inference
- Llama 3.1 models
- Cost-effective
- Low latency
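Multi-provider support typically comes down to one small interface that each backend implements, so the search pipeline never cares which vendor is configured. A hypothetical sketch (the class and method names here are illustrative, not Manx's actual API):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class LlmResponse:
    text: str
    provider: str

class Provider(Protocol):
    # Every backend (OpenAI, Anthropic, Groq, ...) exposes this one method.
    def complete(self, prompt: str) -> LlmResponse: ...

class StubProvider:
    """Stands in for a real vendor client behind the common interface."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> LlmResponse:
        return LlmResponse(text=f"[{self.name}] summary of: {prompt}",
                           provider=self.name)

def synthesize(results, provider: Provider) -> LlmResponse:
    # Build one prompt from the ranked documentation results.
    prompt = "Summarize these docs: " + "; ".join(results)
    return provider.complete(prompt)

answer = synthesize(["React Hooks Intro", "useState docs"], StubProvider("groq"))
```

Swapping providers then means constructing a different object, which is what makes per-command toggling cheap.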
### Fine-grained Control

```bash
# Global AI settings
# Per-command control
```
## Context7 Integration

Access real-time official documentation.

### Rate Limiting Solutions

```bash
# Without an API key: shared rate limits (very restrictive);
# you may hit limits after a few searches
# With an API key: dedicated access (recommended)
```
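Without a dedicated key, clients typically cope with HTTP 429 responses by retrying with exponential backoff. A minimal sketch of that pattern (illustrative; this is the generic technique, not Manx's documented retry policy):

```python
import time

def with_backoff(call, retries=4, base_delay=0.01):
    """Retry a request on 429s, doubling the wait after each attempt."""
    for attempt in range(retries):
        status, body = call()
        if status != 429:
            return body
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    raise RuntimeError("rate limited after retries")

# Simulate a shared-limit endpoint that rejects twice before succeeding.
responses = iter([(429, None), (429, None), (200, "docs payload")])
result = with_backoff(lambda: next(responses))
```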
### Get Your Context7 API Key

1. Visit the Context7 Dashboard
2. Create an account or sign in
3. Generate an API key (starts with `sk-`)
4. Configure it:

```bash
manx config --api-key "sk-your-key"
```
## Performance & Features

### Performance

- **Search Speed**: < 1 second (snippets), < 2 seconds (web search)
- **Binary Size**: 5.4MB single file
- **Memory Usage**: < 15MB RAM
- **Startup Time**: < 50ms
- **Cache Support**: smart auto-caching

### Technical Features

- **Multi-threading**: parallel search processing
- **Smart Embeddings**: hash-based (default) plus ONNX neural models
- **Vector Storage**: local file-based RAG system
- **HTTP/2**: modern API communication
- **Cross-platform**: Linux, macOS, Windows
## Semantic Search & Embeddings

Manx features a flexible embedding system that automatically chooses the best search method:

### Getting Started (3 Commands)

```bash
# 1. Works well immediately (no setup)
# 2. Optional: install better search (one-time setup)
# 3. Enjoy superior semantic search
```
### Capability Comparison

| Feature | Hash (Default) | Neural Models |
|---|---|---|
| Setup | None required | One command |
| Speed | Instant | Fast (after model load) |
| Storage | 0MB | 87-400MB |
| Understanding | Keyword matching | Semantic + contextual |
| Privacy | 100% offline | 100% local processing |
| Quality | Good for exact terms | Excellent for concepts |
### Advanced Configuration

```bash
# Management commands
# Provider switching (instant)
```

A locally installed HuggingFace model is recommended: it gives the best search quality with full privacy and no API costs.
## Real-World Use Cases

### Individual Developer

```bash
# Morning workflow: check React patterns
# Returns: official React docs + your optimization notes

# Debug session: memory leak investigation
# Returns: MDN docs + Stack Overflow + your debugging notes

# Learning: new framework exploration
# Returns: official Svelte docs with clear examples
```

### Development Team

```bash
# Onboard a new developer
# Returns: official CI/CD docs + team procedures

# Solve a production issue
# Returns: K8s docs + team runbooks + troubleshooting guides
```

### Privacy-Focused Usage

```bash
# Index sensitive documentation locally
# Pure local search - works completely offline
# Team knowledge stays private
# Uses only local knowledge + official docs (no AI calls)
```
## Installation Options

### Cargo Installation (Recommended)

### Shell Script Installer

### Manual Binary Download

1. Download the binary for your platform
2. Install it

### From Source

### Configuration File Location

### Full Configuration Example

### Environment Variables

```bash
# Disable colors
# Custom cache dir
# Context7 API key
# Enable debug logging
```
## Common Issues

### Want to Add AI Analysis?

```bash
# Check current configuration
# Set up an AI provider (optional)
# Test enhanced functionality
```

### Managing AI Configuration

```bash
# Switch between providers
# Disable AI completely
# Remove specific API keys
```

### "No results found"

```bash
# Check Context7 API key setup
# Clear cache and retry
```

### Rate Limiting Issues

```bash
# Without a Context7 API key, you'll hit shared limits quickly;
# a key provides much higher rate limits
```

### Local RAG Not Finding Documents

```bash
# Check indexed sources
# Re-index if needed
```

### Debug Mode

```bash
# Enable detailed logging
# Check configuration
# View cache stats
```
## Contributing

We welcome contributions! Areas where help is needed:

- **Performance** - make search even faster
- **Document Parsers** - support for more file formats
- **Terminal UI** - enhance the visual experience
- **Testing** - expand test coverage
- **Documentation** - improve guides and examples

### Development Setup
## License

MIT License - see LICENSE for details.
## Acknowledgments

### Core Search Infrastructure

- **Context7** - MCP documentation API providing real-time access to official documentation
- **DuckDuckGo** - privacy-focused search engine powering web search
- **Spider-rs** - high-performance web crawler enabling deep documentation-site indexing
### AI & Embedding Systems

- **HuggingFace** - transformers and embedding models for semantic search
- **ONNX Runtime** - cross-platform ML inference for local embedding models
- **Ollama** - local LLM server integration
- **OpenAI & Anthropic** - AI analysis and synthesis capabilities

### Core Rust Libraries

- **Tokio** - async runtime powering all network operations
- **Reqwest** - HTTP client for API communication
- **Scraper** - HTML parsing and content extraction
- **Clap** - command-line argument parsing
- **Serde** - serialization/deserialization framework
- **Colored** - terminal color output
- **Anyhow** - error handling and context
- **Fuzzy-Matcher** - fuzzy string matching for enhanced search
- **Indicatif** - progress bars and spinners for user feedback

### Document Processing

- **docx-rs** - Microsoft Word document processing
- **WalkDir** - recursive directory traversal
- **UUID** - unique identifier generation

### Community & Contributors

- **Rust Community** - outstanding ecosystem, tooling, and documentation
- **Contributors** - making Manx better every day through feedback and contributions
- **Open Source Maintainers** - all the library authors who make projects like this possible
## Roadmap & TODOs

### Cost & Usage Tracking

- Add cost calculation functionality to the `LlmResponse` struct
- Implement per-provider pricing models and cost tracking
- Add usage statistics and cost-reporting commands
- Implement a token-count breakdown (input/output/cached tokens)
- Implement local LLM support

Built with ❤️ for developers who need answers fast.

*Lightning-fast documentation search, right in your terminal.*