manx-cli 0.4.0

A blazing-fast CLI documentation finder powered by Context7 MCP

🚀 Manx - Lightning-Fast Documentation Finder

Blazing-fast CLI tool for developers to find documentation, code snippets, and answers instantly


🚀 Quick Start • 📚 Documentation • ⚙️ Configuration

✨ What Makes Manx Special?

Manx is the fastest way to find documentation and code snippets from your terminal, with four levels of capability:

🚀 Default Mode

Works immediately - no setup

⚡ Hash Embeddings
Built-in algorithm (0ms)

📚 Official Docs
Context7 integration

🔍 Keyword Search
Great exact matching

💾 Zero Storage
No downloads needed

🧠 Enhanced Mode

Better search - 1 command setup

🤖 Neural Embeddings
HuggingFace models (87-400MB)

🎯 Semantic Understanding
"database" = "data storage"

📊 Intent Matching
Superior result relevance

🔄 Easy Installation
manx embedding download

📂 RAG Mode

Your docs + AI - local setup

🔒 Private Documents
Your indexed files only

🎯 Semantic + AI Search
Your knowledge + LLM synthesis

📁 Multi-format Support
PDF, Markdown, DOCX, URLs

🔍 Use --rag flag
manx search "topic" --rag

🤖 AI Mode

Full synthesis - API key setup

🧠 Neural + AI Analysis
Best of both worlds

💬 Comprehensive Answers
Code + explanations + citations

🌐 Multi-Provider Support
OpenAI, Anthropic, Groq, etc.

🎛️ Fine-grained Control
Per-command AI toggle

Start with Default → Upgrade to Enhanced → Index your docs (RAG) → Add AI when needed

🔧 How Manx Works Under the Hood

📊 Search Architecture Flow

graph TD
    A[🔍 User Query] --> B{Search Command}
    B --> C[snippet/search/doc]
    C --> D[Query Processing]

    D --> E{Embedding Provider}
    E -->|Default| F[🔥 Hash Algorithm]
    E -->|Enhanced| G[🧠 Neural Model]
    E -->|API| H[☁️ OpenAI/HF API]

    F --> I[Vector Generation]
    G --> I
    H --> I

    I --> J{Data Sources}
    J -->|Official| K[📚 Context7 API]
    J -->|Local| L[📁 Indexed Docs]
    J -->|Cache| M[💾 Local Cache]

    K --> N[Semantic Search]
    L --> N
    M --> N

    N --> O[Result Ranking]
    O --> P{AI Enhancement}
    P -->|Disabled| Q[📝 Documentation Results]
    P -->|Enabled| R[🤖 LLM Analysis]

    R --> S[🎯 Enhanced Response]
    Q --> T[📱 Terminal Output]
    S --> T

โš™๏ธ Embedding System Architecture

graph LR
    A[User Query] --> B{Embedding Config}
    
    B -->|hash| C[🔥 Hash Provider<br/>384D, 0ms, 0MB]
    B -->|onnx:model| D[🧠 ONNX Provider<br/>384-768D, 0ms, 87-400MB]
    B -->|openai:model| E[☁️ OpenAI Provider<br/>1536-3072D, ~100ms, API]
    B -->|ollama:model| F[🏠 Ollama Provider<br/>Variable, ~50ms, Local]
    
    C --> G[Word Hashing<br/>+ N-gram Features]
    D --> H[Neural Network<br/>Inference]
    E --> I[REST API Call]
    F --> J[Local Model Server]
    
    G --> K[Vector Output]
    H --> K
    I --> K
    J --> K
    
    K --> L[Cosine Similarity<br/>Search]
    L --> M[Ranked Results]
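The hash provider's pipeline above can be sketched in a few lines of Rust: hash each word into one of a fixed number of buckets, L2-normalize, and rank by cosine similarity. This is a hypothetical simplification for illustration only (the bucket weighting is assumed, and the n-gram features are omitted), not manx's actual implementation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const DIM: usize = 384; // matches the 384D hash provider above

/// Map each word to a bucket via hashing, producing a fixed-size
/// vector with no model files on disk (hence "0MB, 0ms").
fn hash_embed(text: &str) -> Vec<f32> {
    let mut v = vec![0.0f32; DIM];
    for word in text.to_lowercase().split_whitespace() {
        let mut h = DefaultHasher::new();
        word.hash(&mut h);
        v[(h.finish() as usize) % DIM] += 1.0;
    }
    // L2-normalize so the dot product below equals cosine similarity
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for x in &mut v {
            *x /= norm;
        }
    }
    v
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let q = hash_embed("react state management");
    let d1 = hash_embed("managing state in react components");
    let d2 = hash_embed("python file io tutorial");
    // Shared words land in the same buckets, so d1 outranks d2
    assert!(cosine(&q, &d1) > cosine(&q, &d2));
    println!("ok");
}
```

This also shows why the hash provider excels at exact terms but not concepts: "database" and "data storage" share no words, so they hash to different buckets.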

🔄 Configuration Workflow

sequenceDiagram
    participant U as User
    participant C as CLI
    participant M as Model Manager
    participant P as Provider
    participant S as Search Engine
    
    Note over U,S: Initial Setup (Optional)
    U->>C: manx embedding list --available
    C->>U: Show HuggingFace models
    
    U->>C: manx embedding download model-name
    C->>M: Download from HuggingFace
    M->>M: Extract dimensions from config.json
    M->>C: Model installed + metadata saved
    
    U->>C: manx config --embedding-provider onnx:model-name
    C->>M: Load model metadata
    M->>C: Dimension: 768, Path: ~/.cache/manx/models/
    C->>C: Update config with detected dimension
    
    Note over U,S: Daily Usage
    U->>C: manx snippet react "hooks"
    C->>P: Initialize provider from config
    P->>P: Load model (onnx) or use algorithm (hash)
    P->>S: Generate embeddings
    S->>U: Search results with semantic ranking

💾 Data Flow & Storage

graph TB
    subgraph "๐Ÿ  Local Storage"
        A[~/.config/manx/<br/>config.json]
        B[~/.cache/manx/models/<br/>ONNX files + metadata]
        C[~/.cache/manx/rag/<br/>Indexed documents]
        D[~/.cache/manx/cache/<br/>API responses]
    end
    
    subgraph "๐ŸŒ External APIs"
        E[Context7<br/>Official Docs]
        F[HuggingFace<br/>Model Downloads]  
        G[OpenAI/Anthropic<br/>AI Synthesis]
        H[Ollama<br/>Local LLM Server]
    end
    
    subgraph "🔧 Core Engine"
        I[Embedding Providers]
        J[Search Algorithm]
        K[Result Processor]
        L[Terminal Renderer]
    end
    
    A --> I
    B --> I
    C --> J
    D --> J
    
    E --> J
    F --> B
    G --> K
    H --> I
    
    I --> J
    J --> K
    K --> L
    L --> M[🖥️ User Terminal]

🌟 Core Features

🚀 Lightning-Fast Documentation Search

Get instant access to documentation and code examples:

๐Ÿ” Web Documentation Search

manx search "rust async programming"

Returns: Instant access to official docs and tutorials via DuckDuckGo

📚 Official Documentation Browser

manx doc python "async functions"

Returns: Real-time official documentation with examples

💡 Code Snippet Search

manx snippet react "useEffect cleanup"

Returns: Working code examples with clear explanations

๐Ÿ“ Local Document Search (RAG)

manx search "authentication" --rag

Returns: Semantic search through your indexed documents

🎨 Beautiful Terminal Experience

Every response features:

  • 📖 Clear Documentation - Well-formatted, readable content
  • 💡 Code Examples - Syntax-highlighted, runnable code
  • 📊 Quick Results - Instant access to what you need
  • 🔗 Source Links - Direct links to official documentation

🤖 Optional AI Enhancement

Add AI analysis when you need deeper insights (completely optional):

# OpenAI (GPT-4, GPT-3.5)
manx config --openai-api "sk-your-openai-key"

# Anthropic (Claude)
manx config --anthropic-api "sk-ant-your-anthropic-key"  

# Groq (Ultra-fast inference)
manx config --groq-api "gsk-your-groq-key"

# OpenRouter (Multi-model access)
manx config --openrouter-api "sk-or-your-openrouter-key"

# HuggingFace (Open-source models)
manx config --huggingface-api "hf-your-huggingface-key"

# Custom endpoints (Self-hosted models)
manx config --custom-endpoint "http://localhost:8000/v1"

📂 Local Document Search (RAG)

Index and search your own documentation and code files:

# 1. Index your documents
manx index /path/to/your/docs
manx index /path/to/your/code

# 2. Enable local search
manx config --rag-enabled

# 3. Search your indexed content
manx search "authentication patterns" --rag
manx snippet python "async database" --rag  
manx doc fastapi "middleware setup" --rag

Benefits:

  • 🔒 Private & Offline - Your documents never leave your machine
  • 🎯 Semantic Search - Uses the same embedding models as web search
  • 🤖 AI Integration - Optional LLM synthesis from your own docs
  • 📁 File Formats - Supports .md, .txt, .pdf, .docx + web URLs

🚀 Quick Start

1. Installation

# Using Cargo (Recommended)
cargo install manx-cli

# Using shell script
curl -fsSL https://raw.githubusercontent.com/neur0map/manx/main/install.sh | bash

# Manual download from releases
# https://github.com/neur0map/manx/releases/latest

2. Core Commands

# ๐Ÿ” Search web documentation instantly
manx search "docker compose production setup"

# ๐Ÿ“š Browse official documentation
manx doc fastapi "authentication middleware"

# ๐Ÿ’ก Find working code snippets
manx snippet react "custom hooks patterns"

# ๐Ÿ“ Index your personal documentation (optional)
manx index ~/dev-notes/                               # Local directory
manx index https://docs.fastapi.tiangolo.com --crawl  # Deep crawl documentation site
manx search "team coding standards" --rag

3. Context7 API Configuration (Recommended)

# Get higher rate limits for documentation access
manx config --api-key "sk-your-context7-key"

# Test that everything is working
manx snippet python "list comprehensions"

# Optional: Add AI enhancement
manx config --openai-api "sk-your-openai-key"
manx search "topic"  # Now includes AI analysis when helpful

📋 Complete Command Reference

🔍 Search Commands

Web Search (DuckDuckGo-powered)

manx search "kubernetes deployment"
manx search "react hooks patterns"
manx search "python async" --limit 5

Documentation Browser

manx doc fastapi "authentication"
manx doc react@18 "useState patterns"
manx doc python "async functions"

Code Snippets

manx snippet react "useEffect cleanup"  
manx snippet fastapi "middleware setup"
manx snippet python "decorators"

Result Retrieval

manx get doc-3                # Get specific result
manx get snippet-7 -o code.md # Export to file

๐Ÿ“ Knowledge Management

# Index local documents
manx index ~/documentation/          # Directory
manx index ./README.md               # Single file  
manx index https://docs.example.com  # Web URL

# Deep crawl documentation sites (NEW!)
manx index https://docs.rust-lang.org --crawl                    # Discover all linked pages
manx index https://fastapi.tiangolo.com --crawl --max-depth 2    # Limited depth crawling
manx index https://docs.python.org --crawl --max-pages 10        # Limited page count

# Manage indexed sources
manx sources list                    # View all sources
manx sources clear                   # Clear all indexed docs

# Cache management
manx cache stats                     # Show cache info
manx cache clear                     # Clear cache

โš™๏ธ Configuration

# View current settings
manx config --show

# Context7 API (for official docs - recommended)
manx config --api-key "sk-context7-key"

# AI Provider Configuration (optional)
manx config --openai-api "sk-key"       # OpenAI
manx config --anthropic-api "sk-key"    # Anthropic  
manx config --groq-api "gsk-key"        # Groq
manx config --llm-provider "groq"       # Set preferred provider
manx config --llm-model "llama-3.1-8b"  # Set specific model

# Switch between models
manx config --llm-provider "openai" --llm-model "gpt-4"
manx config --llm-provider "anthropic" --llm-model "claude-3-sonnet"

# Remove API keys / Disable AI
manx config --openai-api ""             # Remove OpenAI key
manx config --llm-provider ""           # Disable AI entirely
manx config --anthropic-api ""          # Remove Anthropic key

# Other Settings
manx config --cache-dir ~/my-cache      # Custom cache location
manx config --auto-cache off            # Disable auto-caching

🧠 Personal Knowledge Base

Index your documentation and notes for instant search:

📚 Index Your Knowledge

# Personal development notes
manx index ~/coding-notes/
manx index ~/project-documentation/

# Team knowledge base  
manx index ~/company-wiki/
manx index ~/internal-procedures/

# Web documentation (single page)
manx index https://your-team-docs.com
manx index https://internal-api-docs.example.com

# Deep crawl entire documentation sites
manx index https://docs.your-framework.com --crawl              # Discover all pages automatically
manx index https://internal-wiki.company.com --crawl --max-depth 3  # Limit crawl depth
manx index https://team-knowledge.com --crawl --max-pages 50    # Limit total pages crawled

๐Ÿ” Unified Search Experience

manx snippet "authentication setup"

Returns:

  • ๐ŸŒ Official docs (FastAPI, OAuth, JWT guides)
  • ๐Ÿ“ Your notes (team auth procedures, troubleshooting)
  • ๐Ÿ”— Direct links to source documentation and files

๐Ÿ›ก๏ธ Security Features

  • PDF Security: Validates PDFs for malicious content
  • Content Sanitization: Cleans and validates all indexed content
  • Local Processing: RAG runs entirely locally
  • Privacy Control: Core functionality works entirely offline

💾 Supported Formats

  • Documents: .md, .txt, .docx, .pdf
  • Web Content: HTML pages with automatic text extraction
  • Code Files: Syntax-aware indexing
  • URLs: Single page or deep crawl entire documentation sites
  • Deep Crawling: Automatically discovers and indexes interconnected documentation pages
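Deep crawling as described above amounts to a breadth-first traversal of a site's link graph, bounded by the --max-depth and --max-pages limits. A toy sketch of that bounding logic; the link-extraction closure here stands in for real HTML fetching (which manx delegates to Spider-rs), so treat it as an illustration, not manx's actual crawler:

```rust
use std::collections::{HashSet, VecDeque};

/// Breadth-first crawl from `start`, visiting at most `max_pages`
/// pages and following links at most `max_depth` hops deep.
fn crawl(
    start: &str,
    links: impl Fn(&str) -> Vec<String>,
    max_depth: usize,
    max_pages: usize,
) -> Vec<String> {
    let mut seen: HashSet<String> = HashSet::new();
    let mut queue = VecDeque::from([(start.to_string(), 0usize)]);
    let mut visited = Vec::new();
    while let Some((url, depth)) = queue.pop_front() {
        // Skip once the page budget is spent or the URL was seen
        if visited.len() >= max_pages || !seen.insert(url.clone()) {
            continue;
        }
        visited.push(url.clone());
        if depth < max_depth {
            for next in links(&url) {
                queue.push_back((next, depth + 1));
            }
        }
    }
    visited
}

fn main() {
    // Toy link graph standing in for a documentation site
    let links = |u: &str| match u {
        "/" => vec!["/a".into(), "/b".into()],
        "/a" => vec!["/a/1".into()],
        _ => vec![],
    };
    // Depth 1 reaches /a and /b but not /a/1
    assert_eq!(crawl("/", links, 1, 10), vec!["/", "/a", "/b"]);
    println!("ok");
}
```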

🤖 Optional AI Features

🎯 Enhanced Analysis (When Enabled)

When you configure an AI provider, responses include deeper analysis:

┌─────────────────────────────────────────┐
│ 📖 Documentation Results                │
└─────────────────────────────────────────┘

1. React Hooks Introduction
   https://reactjs.org/docs/hooks-intro.html

2. useState Hook Documentation
   https://reactjs.org/docs/hooks-state.html

┌─────────────────────────────────────────┐
│ 🤖 AI Analysis (Optional)               │
└─────────────────────────────────────────┘
  ❯ Quick Summary
  React hooks allow you to use state and lifecycle
  features in functional components.

  ❯ Key Insights
  • useState manages component state
  • useEffect handles side effects
  • Custom hooks enable logic reuse

🔧 Provider-Specific Features

OpenAI

  • GPT-4, GPT-3.5-turbo
  • Function calling support
  • Streaming responses
  • High-quality synthesis

Anthropic

  • Claude 3.5 Sonnet
  • Large context windows
  • Excellent code understanding
  • Safety-focused responses

Groq

  • Ultra-fast inference
  • Llama 3.1 models
  • Cost-effective
  • Low latency

๐ŸŽ›๏ธ Fine-grained Control

# Global AI settings
manx config --llm-provider "anthropic"
manx config --llm-model "claude-3-sonnet"

# Per-command control
manx search "topic"              # Fast documentation search
manx search "topic" --no-llm     # Force no AI analysis
manx snippet react hooks        # Code examples with optional AI insights
manx snippet react --no-llm     # Just the documentation

🔗 Context7 Integration

Access real-time official documentation:

⚡ Rate Limiting Solutions

# Without API key: Shared rate limits (very restrictive)
manx snippet react hooks
# May hit rate limits after few searches

# With API key: Dedicated access (recommended)
manx config --api-key "sk-your-context7-key"
manx snippet react hooks  # Much higher limits

🔑 Get Your Context7 API Key

  1. Visit Context7 Dashboard
  2. Create account or sign in
  3. Generate API key (starts with sk-)
  4. Configure: manx config --api-key "sk-your-key"

📊 Performance & Features

⚡ Performance

  • Search Speed: < 1 second (snippets), < 2 seconds (web search)
  • Binary Size: 5.4MB single file
  • Memory Usage: < 15MB RAM
  • Startup Time: < 50ms
  • Cache Support: Smart auto-caching

🔧 Technical Features

  • Multi-threading: Parallel search processing
  • Smart Embeddings: Hash-based (default) + ONNX neural models
  • Vector Storage: Local file-based RAG system
  • HTTP/2: Modern API communication
  • Cross-platform: Linux, macOS, Windows

🧠 Semantic Search & Embeddings

Manx features a flexible embedding system that automatically chooses the best search method:

🚀 Getting Started (3 Commands)

# 1. Works great immediately (no setup)
manx snippet react "state management"

# 2. Optional: Install better search (one-time setup)
manx embedding download sentence-transformers/all-MiniLM-L6-v2
manx config --embedding-provider onnx:sentence-transformers/all-MiniLM-L6-v2

# 3. Now enjoy superior semantic search
manx snippet react "state management"  # Much smarter results

📊 Capability Comparison

Feature       | Hash (Default)       | Neural Models
Setup         | None required        | 1 command
Speed         | 0ms (instant)        | 0ms (after loading)
Storage       | 0MB                  | 87-400MB
Understanding | Keyword matching     | Semantic + contextual
Privacy       | 100% offline         | 100% local processing
Quality       | Good for exact terms | Excellent for concepts

โš™๏ธ Advanced Configuration

# Management commands
manx embedding list --available     # See available models
manx embedding status               # Check current setup
manx embedding test "your query"    # Test search quality

# Provider switching (instant)
manx config --embedding-provider hash                    # Default algorithm
manx config --embedding-provider onnx:all-MiniLM-L6-v2   # Local neural model  
manx config --embedding-provider openai:text-embedding-3 # API-based (requires key)

HuggingFace installation recommended - best search quality + privacy + no API costs.
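The --embedding-provider values above follow a provider:model syntax ("hash", "onnx:<model>", "openai:<model>", "ollama:<model>"). A sketch of how such a value might be parsed; the enum and behavior are hypothetical illustrations, not manx's actual code:

```rust
#[derive(Debug, PartialEq)]
enum Provider {
    Hash,
    Onnx(String),
    OpenAi(String),
    Ollama(String),
}

/// Parse an --embedding-provider value of the form "hash" or
/// "provider:model"; unknown providers are rejected.
fn parse_provider(s: &str) -> Option<Provider> {
    match s.split_once(':') {
        None if s == "hash" => Some(Provider::Hash),
        Some(("onnx", m)) => Some(Provider::Onnx(m.to_string())),
        Some(("openai", m)) => Some(Provider::OpenAi(m.to_string())),
        Some(("ollama", m)) => Some(Provider::Ollama(m.to_string())),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_provider("hash"), Some(Provider::Hash));
    assert_eq!(
        parse_provider("onnx:all-MiniLM-L6-v2"),
        Some(Provider::Onnx("all-MiniLM-L6-v2".into()))
    );
    assert_eq!(parse_provider("bogus"), None);
    println!("ok");
}
```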


🎯 Real-World Use Cases

👨‍💻 Individual Developer

# Morning workflow: Check React patterns
manx snippet react "performance optimization"
# Returns: Official React docs + your optimization notes

# Debug session: Memory leak investigation  
manx search "javascript memory leaks"
# Returns: MDN docs + Stack Overflow + your debugging notes

# Learning: New framework exploration
manx doc svelte "component lifecycle"  
# Returns: Official Svelte docs with clear examples

👥 Development Team

# Onboard new developer
manx index ~/team-handbook/
manx index ~/coding-standards/
manx snippet "deployment process"
# Returns: Official CI/CD docs + team procedures

# Solve production issue
manx search "kubernetes pod restart loops"
# Returns: K8s docs + team runbooks + troubleshooting guides

🔒 Privacy-Focused Usage

# Index sensitive documentation locally
manx index ~/classified-procedures/
manx snippet "security protocols"
# Pure local search - works completely offline

# Team knowledge stays private
manx snippet "internal processes"
# Uses only local knowledge + official docs (no AI calls)

๐Ÿ› ๏ธ Installation Options

Cargo Installation (Recommended)

cargo install manx-cli
manx --version

Shell Script Installer

curl -fsSL https://raw.githubusercontent.com/neur0map/manx/main/install.sh | bash

Manual Binary Download

  1. Download for your platform:

  2. Install:

    chmod +x manx-*
    sudo mv manx-* /usr/local/bin/manx
    

From Source

git clone https://github.com/neur0map/manx.git
cd manx
cargo build --release
sudo cp target/release/manx /usr/local/bin/

Configuration File Location

~/.config/manx/config.json

Full Configuration Example

{
  "api_key": "sk-your-context7-key",
  "cache_dir": null,
  "default_limit": 10,
  "offline_mode": false,
  "color_output": true,
  "auto_cache_enabled": true,
  "cache_ttl_hours": 24,
  "max_cache_size_mb": 100,
  "rag": {
    "enabled": true,
    "index_path": "~/.cache/manx/rag_index",
    "max_results": 10,
    "allow_pdf_processing": false
  },
  "llm": {
    "openai_api_key": "sk-your-openai-key",
    "anthropic_api_key": "sk-ant-your-anthropic-key",
    "groq_api_key": "gsk-your-groq-key",
    "openrouter_api_key": "sk-or-your-openrouter-key",
    "huggingface_api_key": "hf-your-huggingface-key",
    "custom_endpoint": "http://localhost:8000/v1",
    "preferred_provider": "OpenAI",
    "model_name": "gpt-4"
  }
}
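The cache_ttl_hours setting above implies an age check along these lines: a cached API response is reused only while it is younger than the configured TTL. This is a sketch of the idea, not manx's actual cache code:

```rust
use std::time::{Duration, SystemTime};

/// True while a cache entry written at `written` is still within
/// the configured TTL (cache_ttl_hours in config.json).
fn is_fresh(written: SystemTime, ttl_hours: u64) -> bool {
    match written.elapsed() {
        Ok(age) => age < Duration::from_secs(ttl_hours * 3600),
        // Clock skew: entry appears to be from the future; keep it
        Err(_) => true,
    }
}

fn main() {
    let just_now = SystemTime::now();
    let two_days_ago = SystemTime::now() - Duration::from_secs(48 * 3600);
    assert!(is_fresh(just_now, 24));
    assert!(!is_fresh(two_days_ago, 24)); // older than the 24h default
    println!("ok");
}
```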

Environment Variables

export NO_COLOR=1                    # Disable colors
export MANX_CACHE_DIR=~/cache        # Custom cache dir
export MANX_API_KEY=sk-xxx           # Context7 API key
export MANX_DEBUG=1                  # Enable debug logging
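A common CLI convention, and presumably what MANX_CACHE_DIR does here, is that an environment variable overrides the corresponding config-file value. That precedence is an assumption; a minimal sketch:

```rust
/// Resolve a setting: the environment variable (if set) wins over
/// the value from config.json. Precedence is assumed, not verified.
fn effective(env_val: Option<String>, config_val: Option<String>) -> Option<String> {
    env_val.or(config_val)
}

fn main() {
    // Env var set: it shadows the config file's cache_dir
    assert_eq!(
        effective(Some("/tmp/manx".into()), Some("~/.cache/manx".into())),
        Some("/tmp/manx".into())
    );
    // Env var unset: fall back to the config file
    assert_eq!(
        effective(None, Some("~/.cache/manx".into())),
        Some("~/.cache/manx".into())
    );
    // Real code would read the variable like this:
    let _from_env = std::env::var("MANX_CACHE_DIR").ok();
    println!("ok");
}
```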

Common Issues

Want to Add AI Analysis?

# Check current configuration
manx config --show

# Set up an AI provider (optional)
manx config --openai-api "sk-your-key"

# Test enhanced functionality
manx snippet python "functions"

Managing AI Configuration

# Switch between providers
manx config --llm-provider "anthropic"
manx config --llm-model "claude-3-sonnet"

# Disable AI completely
manx config --llm-provider ""

# Remove specific API keys
manx config --openai-api ""

"No results found"

# Check Context7 API key setup
manx config --api-key "sk-your-context7-key"

# Clear cache and retry
manx cache clear
manx snippet fastapi

Rate Limiting Issues

# Without Context7 API key, you'll hit shared limits quickly
manx config --api-key "sk-your-context7-key"

# This provides much higher rate limits

Local RAG Not Finding Documents

# Check indexed sources
manx sources list

# Re-index if needed
manx sources clear
manx index ~/your-docs/

Debug Mode

# Enable detailed logging
manx --debug snippet react hooks 2>&1 | tee debug.log

# Check configuration
manx config --show

# View cache stats
manx cache stats

๐Ÿค Contributing

We welcome contributions! Areas where help is needed:

  • โšก Performance - Make search even faster
  • ๐Ÿ“„ Document Parsers - Support for more file formats
  • ๐ŸŽจ Terminal UI - Enhance the visual experience
  • ๐Ÿงช Testing - Expand test coverage
  • ๐Ÿ“– Documentation - Improve guides and examples

Development Setup

git clone https://github.com/neur0map/manx.git
cd manx
cargo build
cargo test
./target/debug/manx --help

📜 License

MIT License - see LICENSE for details.

๐Ÿ™ Acknowledgments

๐Ÿ” Core Search Infrastructure

  • Context7 - Excellent MCP documentation API providing real-time access to official documentation
  • DuckDuckGo - Privacy-focused search engine powering our web search functionality
  • Spider-rs - High-performance web crawler enabling our deep documentation site indexing

🧠 AI & Embedding Systems

  • HuggingFace - Transformers and embedding models for semantic search
  • ONNX Runtime - Cross-platform ML inference for local embedding models
  • Ollama - Local LLM server integration
  • OpenAI & Anthropic - AI analysis and synthesis capabilities

โš™๏ธ Core Rust Libraries

  • Tokio - Async runtime powering all network operations
  • Reqwest - HTTP client for API communications
  • Scraper - HTML parsing and content extraction
  • Clap - Command-line argument parsing
  • Serde - Serialization/deserialization framework
  • Colored - Terminal color output
  • Anyhow - Error handling and context
  • Fuzzy-Matcher - Fuzzy string matching for enhanced search
  • Indicatif - Progress bars and spinners for user feedback

📄 Document Processing

  • docx-rs - Microsoft Word document processing
  • WalkDir - Recursive directory traversal
  • UUID - Unique identifier generation

🌟 Community & Contributors

  • Rust Community - Outstanding ecosystem, tooling, and documentation
  • Contributors - Making Manx better every day through feedback and contributions
  • Open Source Maintainers - All the library authors who make projects like this possible

🚧 Roadmap & TODOs

💰 Cost & Usage Tracking

  • Add cost calculation functionality to LlmResponse struct
  • Implement per-provider pricing models and cost tracking
  • Add usage statistics and cost reporting commands
  • Implement token count breakdown (input/output/cached tokens)
  • Implement local LLM support

Built with ❤️ for developers who need answers fast

โฌ†๏ธ Back to Top

Manx Demo

Lightning-fast documentation search - right in your terminal