A fast, Rust-based command-line tool for interacting with Large Language Models.
## Quick Start
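A typical first session looks like the following. The crate name, repository URL, and exact subcommand syntax here are illustrative; `lc --help` and the Installation Guide are authoritative:

```bash
# Option 1: Install from crates.io (when published)
cargo install lc

# Option 2: Install from source
git clone <repository-url>
cd lc
cargo install --path .

# Add a provider (illustrative syntax: name + base URL)
lc providers add openai https://api.openai.com/v1

# Set your API key
lc keys add openai

# Start chatting
lc chat

# Set default provider and model
lc config set provider openai
lc config set model gpt-4o

# Direct prompt with specific model
lc -m gpt-4o "Explain Rust ownership in one paragraph"
```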
## Key Features
- 🚀 Lightning Fast - ~3ms cold start (50x faster than Python alternatives)
- 🔧 Universal - Works with any OpenAI-compatible API
- 🧠 Smart - Built-in vector database and RAG support
- 🛠️ Tools - Model Context Protocol (MCP) support for extending LLM capabilities
- 🔍 Web Search - Integrated web search with multiple providers (Brave, Exa, Serper) for enhanced context
- 👁️ Vision Support - Process and analyze images with vision-capable models
- 📄 PDF Support - Read and process PDF files with optional dependency
- 🔐 Secure - Encrypted configuration sync
- 💬 Intuitive - Simple commands with short aliases
- 🎨 Flexible Templates - Configure request/response formats for any LLM API
- ⚡ Shell Completion - Tab completion for commands, providers, models, and more
## Shell Completion

lc supports comprehensive tab completion for all major shells (Bash, Zsh, Fish, PowerShell, Elvish), with both static and dynamic completion.
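For example (the `completions` subcommand name is an assumption; the Shell Completion Guide below has the exact commands):

```bash
# Generate completion script for your shell
lc completions zsh > ~/.zfunc/_lc

# Dynamic provider completion
lc -p <TAB>        # completes configured provider names

# Command completion
lc pro<TAB>        # completes matching subcommands
```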
For detailed setup instructions, see Shell Completion Guide.
## Documentation

For comprehensive documentation, visit lc.viwq.dev

### Quick Links
- Installation Guide
- Quick Start Tutorial
- Command Reference
- Provider Setup
- Vector Database & RAG
- Model Context Protocol (MCP)
- Template System - Configure custom request/response formats
## Supported Providers

Any OpenAI-compatible API can be used with lc. Here are some popular providers; Anthropic, Gemini, and Amazon Bedrock are also supported:
- ai21 - https://api.ai21.com/studio/v1 (API Key: ✓)
- amazon_bedrock - https://bedrock-runtime.us-east-1.amazonaws.com (API Key: ✓) - See Bedrock Setup
- cerebras - https://api.cerebras.ai/v1 (API Key: ✓)
- chub - https://inference.chub.ai/v1 (API Key: ✓)
- chutes - https://llm.chutes.ai/v1 (API Key: ✓)
- claude - https://api.anthropic.com/v1 (API Key: ✓)
- cohere - https://api.cohere.com/v2 (API Key: ✓)
- deepinfra - https://api.deepinfra.com/v1/openai (API Key: ✓)
- digitalocean - https://inference.do-ai.run/v1 (API Key: ✓)
- fireworks - https://api.fireworks.ai/inference/v1 (API Key: ✓)
- gemini - https://generativelanguage.googleapis.com (API Key: ✓)
- github - https://models.github.ai (API Key: ✓)
- github-copilot - https://api.individual.githubcopilot.com (API Key: ✓)
- grok - https://api.x.ai/v1 (API Key: ✓)
- groq - https://api.groq.com/openai/v1 (API Key: ✓)
- huggingface - https://router.huggingface.co/v1 (API Key: ✓)
- hyperbolic - https://api.hyperbolic.xyz/v1 (API Key: ✓)
- kilo - https://kilocode.ai/api/openrouter (API Key: ✓)
- meta - https://api.llama.com/v1 (API Key: ✓)
- mistral - https://api.mistral.ai/v1 (API Key: ✓)
- nebius - https://api.studio.nebius.com/v1 (API Key: ✓)
- novita - https://api.novita.ai/v3/openai (API Key: ✓)
- nscale - https://inference.api.nscale.com/v1 (API Key: ✓)
- nvidia - https://integrate.api.nvidia.com/v1 (API Key: ✓)
- ollama - http://localhost:11434/v1 (API Key: ✓)
- openai - https://api.openai.com/v1 (API Key: ✓)
- openrouter - https://openrouter.ai/api/v1 (API Key: ✓)
- perplexity - https://api.perplexity.ai (API Key: ✓)
- poe - https://api.poe.com/v1 (API Key: ✓)
- requesty - https://router.requesty.ai/v1 (API Key: ✓)
- sambanova - https://api.sambanova.ai/v1 (API Key: ✓)
- together - https://api.together.xyz/v1 (API Key: ✓)
- venice - https://api.venice.ai/api/v1 (API Key: ✓)
- vercel - https://ai-gateway.vercel.sh/v1 (API Key: ✓)
### Amazon Bedrock Setup

Amazon Bedrock requires special configuration because it uses different endpoints for model listing and chat completions.
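A sketch of the setup flow (the endpoint-override flag and subcommand names are illustrative; see the provider documentation for the real syntax):

```bash
# Add Bedrock provider with different endpoints (flag names illustrative)
lc providers add amazon_bedrock \
  https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse \
  --models-url https://bedrock.us-east-1.amazonaws.com/foundation-models

# Set your AWS Bearer Token
lc keys add amazon_bedrock

# List available models
lc models -p amazon_bedrock

# Use Bedrock models
lc -p amazon_bedrock -m amazon.nova-pro-v1:0 "Hello from Bedrock"

# Interactive chat with Bedrock
lc chat -p amazon_bedrock -m amazon.nova-pro-v1:0
```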
Key differences for Bedrock:

- Models endpoint: uses `https://bedrock.us-east-1.amazonaws.com/foundation-models`
- Chat endpoint: uses `https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse`
- Authentication: requires an AWS Bearer Token for Bedrock
- Model names: use full Bedrock model identifiers (e.g., `amazon.nova-pro-v1:0`)
The {model_name} placeholder in the chat URL is automatically replaced with the actual model name when making requests.
## Example Usage
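The session below is illustrative: flag names like `-v` (vector database), `-t` (MCP tools), and `--search` are assumptions, so check the Command Reference for the real syntax:

```bash
# Direct prompt with specific model
lc -m gpt-4o "Summarize the Rust borrow checker"

# Interactive chat session
lc chat

# Create embeddings
lc embed -v mydb "Rust is a systems programming language"

# Search similar content
lc similar -v mydb "memory safety"

# RAG-enhanced chat
lc -v mydb "How does Rust guarantee memory safety?"

# Use MCP tools for internet access
lc -t fetch "What are today's top Hacker News stories?"

# Multiple MCP tools
lc -t fetch,filesystem "Fetch this page and save a summary to notes.md"

# Web search integration
lc --search "What changed in the latest stable Rust release?"

# Search with specific query
lc search "Rust 1.80 changelog"

# Generate images from text prompts
lc image "A crab writing code at a desk"
```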
## Web Search Integration

lc supports web search integration to enhance prompts with real-time information.
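A configuration sketch (the `search` subcommand group and `--search` flag are assumptions; see the documentation for the real syntax):

```bash
# Configure Brave Search
lc search add brave
lc keys add brave

# Configure Exa (AI-powered search)
lc search add exa

# Configure Serper (Google Search API)
lc search add serper

# Set default search provider
lc search default brave

# Direct search
lc search "Rust async runtimes compared"

# Use search results as context
lc --search "What is the latest stable Rust version?"

# Search with custom query
lc --search "Rust 1.80 release notes" "Summarize what changed"
```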
## Image Generation

lc supports text-to-image generation using compatible providers.
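An illustrative session; the `image` subcommand name and `img` alias are assumptions, while the flags follow the parameter list below:

```bash
# Basic image generation
lc image "A lighthouse at dawn, watercolor style"

# Generate with specific model and size
lc image -m dall-e-3 -s 1024x1024 "A lighthouse at dawn"

# Generate multiple images
lc image -c 3 "Logo concepts for a Rust CLI tool"

# Save to specific directory
lc image -o ./generated "A lighthouse at dawn"

# Use short alias
lc img "A lighthouse at dawn"

# Generate with specific provider
lc image -p openai -m dall-e-3 "A lighthouse at dawn"

# Debug mode to see API requests
lc image --debug "A lighthouse at dawn"
```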
Supported Parameters:

- `-m, --model`: image generation model (e.g., `dall-e-2`, `dall-e-3`)
- `-p, --provider`: provider to use (`openai`, etc.)
- `-s, --size`: image size (`256x256`, `512x512`, `1024x1024`, `1792x1024`, `1024x1792`)
- `-c, --count`: number of images to generate (1-10, default: 1)
- `-o, --output`: output directory for saved images (default: current directory)
- `--debug`: enable debug mode to see API requests
Note: Image generation is currently supported by OpenAI-compatible providers. Generated images are automatically saved with timestamps and descriptive filenames.
## Vision/Image Support

lc supports image inputs for vision-capable models across multiple providers.
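Illustrative invocations (the `-i` image flag and `--vision` filter are assumptions):

```bash
# Single image analysis
lc -m gpt-4o -i photo.jpg "What is in this image?"

# Multiple images
lc -m gpt-4o -i before.png -i after.png "What changed between these two?"

# Image from URL
lc -m gpt-4o -i https://example.com/diagram.png "Explain this diagram"

# Interactive chat with images
lc chat -m gpt-4o -i photo.jpg

# Find vision-capable models
lc models --vision

# Combine with other features
lc -m gpt-4o -i chart.png -v mydb "Relate this chart to my stored notes"
```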
Supported formats: JPG, PNG, GIF, WebP (max 20MB per image)
## Model Context Protocol (MCP)

lc supports MCP servers to extend LLM capabilities with external tools.
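A sketch of the flow, using the reference filesystem MCP server as the example (the `mcp` subcommand and `-t` flag names are assumptions):

```bash
# Add an MCP server
lc mcp add fs "npx -y @modelcontextprotocol/server-filesystem /tmp"

# List available functions
lc mcp functions fs

# Use tools in prompts
lc -t fs "List the files in /tmp and describe them"

# Interactive chat with tools
lc chat -t fs
```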
Platform Support for MCP Daemon:

- Unix systems (Linux, macOS, WSL2): full MCP daemon support with persistent connections via Unix sockets (enabled by default with the `unix-sockets` feature)
- Windows: MCP daemon functionality is not available due to the lack of Unix socket support; direct MCP connections without the daemon work on all platforms
- WSL2: full Unix compatibility, including MCP daemon support (works exactly like Linux)
To build without Unix socket support (keeping the default `pdf` feature):
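```bash
cargo build --release --no-default-features --features pdf
```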
Learn more about MCP in our documentation.
## File Attachments and PDF Support

lc can process and analyze various file types, including PDFs.
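Illustrative invocations, assuming an `-a`/`--attach` style flag (the exact flag name is an assumption):

```bash
# Attach text files to your prompt
lc -a notes.txt "Summarize these notes"

# Process PDF files (requires PDF feature)
lc -a report.pdf "What are the key findings in this report?"

# Multiple file attachments
lc -a intro.md -a outline.md "Merge these into a single document plan"

# Combine with other features
lc -a spec.txt -v mydb "Compare this spec against my stored docs"

# Combine images with text attachments
lc -m gpt-4o -i diagram.png -a spec.txt "Does the diagram match the spec?"
```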
Note: PDF support requires the `pdf` feature (enabled by default). Standard Cargo feature flags control it:
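```bash
# Build without PDF support (keeps Unix sockets)
cargo build --release --no-default-features --features unix-sockets

# Explicitly enable PDF support
cargo build --release --features pdf
```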
## Template System

lc supports configurable request/response templates, allowing you to work with any LLM API format without code changes:
```toml
# Fix GPT-5's max_completion_tokens and temperature requirement
[]
= """
{
"model": "{{ model }}",
"messages": {{ messages | json }}{% if max_tokens %},
"max_completion_tokens": {{ max_tokens }}{% endif %},
"temperature": 1{% if tools %},
"tools": {{ tools | json }}{% endif %}{% if stream %},
"stream": {{ stream }}{% endif %}
}
"""
```
See Template System Documentation and config_samples/templates_sample.toml for more examples.
## Features

lc supports several optional features that can be enabled or disabled during compilation:

### Default Features

- `pdf`: enables PDF file processing and analysis
- `unix-sockets`: enables Unix domain socket support for the MCP daemon (Unix systems only)
### Build Options
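These are standard Cargo invocations using the feature names listed above:

```bash
# Build with all default features
cargo build --release

# Build with minimal features (no PDF, no Unix sockets)
cargo build --release --no-default-features

# Build with only PDF support (no Unix sockets)
cargo build --release --no-default-features --features pdf

# Build with only Unix socket support (no PDF)
cargo build --release --no-default-features --features unix-sockets

# Explicitly enable all features
cargo build --release --all-features
```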
Note: the `unix-sockets` feature is only functional on Unix-like systems (Linux, macOS, BSD, WSL2). On native Windows (Command Prompt/PowerShell) it has no effect, and MCP daemon functionality is unavailable regardless of the feature flag; WSL2 provides full Unix compatibility.
| Feature | Windows | macOS | Linux | WSL2 |
|---|---|---|---|---|
| MCP Daemon | ❌ | ✅ | ✅ | ✅ |
| Direct MCP | ✅ | ✅ | ✅ | ✅ |
## Contributing

Contributions are welcome! Please see our Contributing Guide.

## License

MIT License - see LICENSE file for details.
For detailed documentation, examples, and guides, visit lc.viwq.dev