LLM Client (lc)
A fast, Rust-based command-line tool for interacting with Large Language Models.
Quick Start
Installation
# Option 1: One-liner install script (recommended)
# Option 2: Install from crates.io
# Option 3: Install from source
# Add a provider
# Set your API key
# Start chatting
# Set default provider and model
# Direct prompt with specific model
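As a sketch, the quick-start steps above could look like the following. Only the `lc -m provider:model "prompt"` form is confirmed elsewhere in this README; the `providers`, `keys`, and `config` subcommand names are assumptions, so check the Command Reference for the canonical syntax:

```shell
# Add a provider and set its API key (subcommand names are assumptions)
lc providers add openai https://api.openai.com/v1
lc keys add openai

# Start chatting (bare invocation is an assumption)
lc "Hello!"

# Direct prompt with a specific model (form confirmed elsewhere in this README)
lc -m openai:gpt-4 "Summarize the Rust ownership model"
```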
System Requirements
Before building from source, ensure you have the required system dependencies:
- Linux (Ubuntu/Debian): `sudo apt install -y pkg-config libssl-dev build-essential`
- Linux (RHEL/CentOS/Fedora): `sudo yum install -y pkgconfig openssl-devel gcc` (or `dnf`)
- macOS: `xcode-select --install` (plus Homebrew if needed: `brew install pkg-config openssl@3`)
- Windows: Visual Studio Build Tools with C++ support
These dependencies are required for Rust crates that link against OpenSSL and native libraries.
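With those dependencies in place, a source build is a standard Cargo workflow. The repository URL below is a placeholder; substitute the real one:

```shell
git clone https://github.com/example/lc.git   # placeholder URL
cd lc
cargo build --release
./target/release/lc --help
```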
📖 Full installation instructions: Installation Guide

🔧 Having build issues? See Troubleshooting Guide
Key Features
- 🚀 Lightning Fast - ~3ms cold start (50x faster than Python alternatives)
- 🔧 Universal - Works with any OpenAI-compatible API
- 🧠 Smart - Built-in vector database and RAG support
- 🛠️ Tools - Model Context Protocol (MCP) support for extending LLM capabilities
- 🔍 Web Search - Integrated web search with multiple providers (Brave, Exa, Serper) for enhanced context
- 👁️ Vision Support - Process and analyze images with vision-capable models
- 📄 PDF Support - Read and process PDF files via an optional dependency
- 🔐 Secure - Encrypted configuration sync
- 💬 Intuitive - Simple commands with short aliases
- 🎨 Flexible Templates - Configure request/response formats for any LLM API
- ⚡ Shell Completion - Tab completion for commands, providers, models, and more
Documentation
For comprehensive documentation, visit lc.viwq.dev
Quick Links
- Installation Guide
- Quick Start Tutorial
- Command Reference
- Provider Setup
- Vector Database & RAG
- Model Context Protocol (MCP)
- Template System - Configure custom request/response formats
Example Usage
# Direct prompt with specific model
# Interactive chat session
# Find embedding models
# Create embeddings for your text
# Embed files with intelligent chunking
# The command above creates a vector database holding your knowledge
# List all vector databases
# Get details of a vector database
# Search similar content
# RAG-enhanced chat
# Add an MCP server
# List MCP servers
# List all functions in an MCP server
# Invoke an MCP function
# Use Playwright tools with chat
# Web search integration
# Search with specific query
# Generate images from text prompts
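Concrete invocations for a few of the steps above. Only the `-m` and `embed` forms appear elsewhere in this README; the other subcommand names (`chat`, `vectors`, `mcp`) are assumptions, so verify them against the Command Reference:

```shell
# Direct prompt with a specific model (form confirmed elsewhere in this README)
lc -m openai:gpt-4 "Explain Rust lifetimes in one paragraph"

# Create embeddings for your text (form confirmed elsewhere in this README)
lc embed -m openai:text-embedding-3-small "test text"

# The commands below are guesses at the remaining steps - check the docs:
lc chat            # hypothetical: interactive chat session
lc vectors list    # hypothetical: list vector databases
lc mcp list        # hypothetical: list configured MCP servers
```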
TLS Configuration and Debugging
lc uses secure HTTPS connections by default with proper certificate verification. For development and debugging scenarios, you may need to disable TLS verification:
# macOS/Linux/Unix - Disable TLS certificate verification for development/debugging
# ⚠️ WARNING: Only use this for development with tools like Proxyman, Charles, etc.
export LC_DISABLE_TLS_VERIFY=1
# or inline:
LC_DISABLE_TLS_VERIFY=1 lc -m openai:gpt-4 "Hello world"
LC_DISABLE_TLS_VERIFY=1 lc embed -m openai:text-embedding-3-small "test text"
REM Windows Command Prompt
set LC_DISABLE_TLS_VERIFY=1
lc -m openai:gpt-4 "Hello world"
lc embed -m openai:text-embedding-3-small "test text"
# Windows PowerShell
$env:LC_DISABLE_TLS_VERIFY="1"
lc -m openai:gpt-4 "Hello world"
# or inline:
$env:LC_DISABLE_TLS_VERIFY=1; lc embed -m openai:text-embedding-3-small "test text"
Common Use Cases:
- HTTP Debugging Tools: When using Proxyman, Charles, Wireshark, or similar tools that intercept HTTPS traffic
- Corporate Networks: Behind corporate firewalls with custom certificates
- Development Environments: Testing with self-signed certificates
- Local Development: Working with local API servers without proper certificates
⚠️ Security Warning: The LC_DISABLE_TLS_VERIFY environment variable should NEVER be used in production environments as it disables important security checks that protect against man-in-the-middle attacks.
Alternative Solutions:
- Install Root Certificates: Install your debugging tool's root certificate in the system keychain
- Bypass Specific Domains: Configure your debugging tool to exclude specific APIs from interception
- Use System Certificates: Ensure your system's certificate store is up to date
Platform Support for MCP Daemon:
- Unix systems (Linux, macOS, WSL2): Full MCP daemon support with persistent connections via Unix sockets (enabled by default with the `unix-sockets` feature)
- Windows: MCP daemon functionality is not available due to the lack of Unix socket support. Direct MCP connections without the daemon work on all platforms.
- WSL2: Full Unix compatibility including MCP daemon support (works exactly like Linux)
To build without Unix socket support:
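Using the feature names listed in the Features section of this README, the invocation is presumably a standard Cargo feature selection:

```shell
# Drop unix-sockets while keeping the other default features
cargo build --release --no-default-features --features pdf,s3-sync
```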
File Attachments and PDF Support
lc can process and analyze various file types, including PDFs:
# Attach text files to your prompt
# Process PDF files (requires PDF feature)
# Multiple file attachments
# Combine with other features
# Combine images with text attachments
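A sketch of the attachment steps above. The `-a` flag is purely an assumption (this README does not show the actual attachment syntax), so consult the Command Reference for the real flag:

```shell
# Attach a text file to your prompt (-a is hypothetical)
lc -a notes.txt "Summarize this file"

# Process a PDF file (requires the pdf feature; -a is hypothetical)
lc -a report.pdf "What are the key findings?"

# Multiple file attachments (-a repeated is hypothetical)
lc -a chapter1.txt -a chapter2.txt "Compare these chapters"
```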
Note: PDF support requires the pdf feature (enabled by default). To build without PDF support:
To explicitly enable PDF support:
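Assuming standard Cargo feature flags and the feature names listed under Features below:

```shell
# Build without PDF support
cargo build --release --no-default-features --features unix-sockets,s3-sync

# Explicitly enable PDF support
cargo build --release --features pdf
```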
Template System
lc supports configurable request/response templates, allowing you to work with any LLM API format without code changes:
# Fix GPT-5's max_completion_tokens and temperature requirement
[]
= """
{
"model": "{{ model }}",
"messages": {{ messages | json }}{% if max_tokens %},
"max_completion_tokens": {{ max_tokens }}{% endif %},
"temperature": 1{% if tools %},
"tools": {{ tools | json }}{% endif %}{% if stream %},
"stream": {{ stream }}{% endif %}
}
"""
See Template System Documentation and config_samples/templates_sample.toml for more examples.
Features
lc supports several optional features that can be enabled or disabled during compilation:
Default Features
- `pdf`: Enables PDF file processing and analysis
- `unix-sockets`: Enables Unix domain socket support for the MCP daemon (Unix systems only)
- `s3-sync`: Enables cloud synchronization support (S3 and S3-compatible storage)
Build Options
# Build with all default features (includes PDF, Unix sockets, and S3 sync)
# Build with minimal features (no PDF, no Unix sockets, no S3 sync)
# Build with only PDF support
# Build with PDF and S3 sync (no Unix sockets)
# Explicitly enable all features
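The build variants above map to standard Cargo feature flags, using the feature names from the Default Features list:

```shell
# All default features (PDF, Unix sockets, S3 sync)
cargo build --release

# Minimal build (no PDF, no Unix sockets, no S3 sync)
cargo build --release --no-default-features

# Only PDF support
cargo build --release --no-default-features --features pdf

# PDF and S3 sync, no Unix sockets
cargo build --release --no-default-features --features pdf,s3-sync

# Explicitly enable all features
cargo build --release --features pdf,unix-sockets,s3-sync
```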
Note: The unix-sockets feature is only functional on Unix-like systems (Linux, macOS, BSD, WSL2). On Windows native command prompt/PowerShell, this feature has no effect and MCP daemon functionality is not available regardless of the feature flag. WSL2 provides full Unix compatibility.
Windows-Specific Build Information
Compilation on Windows
S3 sync is now enabled by default on all platforms. On Windows, ensure you have:
- Visual Studio 2019 or later with C++ build tools
- Windows SDK installed
# Standard build for Windows (includes S3 sync)
# Build without S3 sync if you encounter compilation issues
# Run tests
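On Windows these are again ordinary Cargo invocations; the reduced build assumes the feature names from the Features section:

```shell
# Standard build for Windows (includes S3 sync)
cargo build --release

# Build without S3 sync if you encounter compilation issues
cargo build --release --no-default-features --features pdf

# Run tests
cargo test
```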
Feature Availability
| Feature | Windows | macOS | Linux | WSL2 |
|---|---|---|---|---|
| MCP Daemon | ❌ | ✅ | ✅ | ✅ |
| Direct MCP | ✅ | ✅ | ✅ | ✅ |
| S3 Sync | ✅* | ✅ | ✅ | ✅ |
| PDF Processing | ✅ | ✅ | ✅ | ✅ |
| Vision/Images | ✅ | ✅ | ✅ | ✅ |
| Web Search | ✅ | ✅ | ✅ | ✅ |
| Vector DB/RAG | ✅ | ✅ | ✅ | ✅ |
*S3 Sync on Windows requires Visual Studio C++ build tools.
Contributing
Contributions are welcome! Please see our Contributing Guide.
License
MIT License - see LICENSE file for details.
For detailed documentation, examples, and guides, visit lc.viwq.dev