lc-cli 0.1.3

LLM Client - A fast Rust-based LLM CLI tool with provider management and chat sessions

LLM Client (lc)

A fast, Rust-based command-line tool for interacting with Large Language Models.

Quick Start

Installation

# Option 1: One-liner install script (recommended)
curl -fsSL https://raw.githubusercontent.com/rajashekar/lc/main/install.sh | bash

# Option 2: Install from crates.io
cargo install lc-cli

# Option 3: Install from source
git clone https://github.com/rajashekar/lc.git
cd lc
cargo build --release
# the binary will be at ./target/release/lc

# Add a provider
lc providers add openai https://api.openai.com/v1
# or:
lc providers install openai

# Set your API key
lc keys add openai

# Start chatting
lc -m openai:gpt-4 "What is the capital of France?"
# Or set a default provider and model
lc config set provider openai
lc config set model gpt-4
# then prompt without specifying a model
lc "What is the capital of France?"

System Requirements

Before building from source, ensure you have the required system dependencies:

  • Linux (Ubuntu/Debian): sudo apt install -y pkg-config libssl-dev build-essential
  • Linux (RHEL/CentOS/Fedora): sudo yum install -y pkgconfig openssl-devel gcc (or dnf)
  • macOS: xcode-select --install (+ Homebrew if needed: brew install pkg-config openssl@3)
  • Windows: Visual Studio Build Tools with C++ support

These dependencies are required for Rust crates that link against OpenSSL and native libraries.
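
As a quick sanity check before building, you can verify that the basic toolchain is discoverable on your PATH (a minimal POSIX-shell sketch; exact package names vary by distribution):

```shell
# Verify the build prerequisites before running cargo.
# `cc` stands in for a C toolchain; `pkg-config` is used to locate OpenSSL.
for tool in cc pkg-config cargo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Any `missing:` line points at the dependency to install from the list above.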

📖 Full installation instructions: Installation Guide
🔧 Having build issues? See the Troubleshooting Guide

Key Features

  • 🚀 Lightning Fast - ~3ms cold start (50x faster than Python alternatives)
  • 🔧 Universal - Works with any OpenAI-compatible API
  • 🧠 Smart - Built-in vector database and RAG support
  • 🛠️ Tools - Model Context Protocol (MCP) support for extending LLM capabilities
  • 🔍 Web Search - Integrated web search with multiple providers (Brave, Exa, Serper) for enhanced context
  • 👁️ Vision Support - Process and analyze images with vision-capable models
  • PDF Support - Read and process PDF files via an optional build feature (enabled by default)
  • 🔐 Secure - Encrypted configuration sync
  • 💬 Intuitive - Simple commands with short aliases
  • 🎨 Flexible Templates - Configure request/response formats for any LLM API
  • Shell Completion - Tab completion for commands, providers, models, and more

Documentation

For comprehensive documentation, visit lc.viwq.dev


Example Usage

# Direct prompt with specific model
lc -m openai:gpt-4 "Explain quantum computing"

# Interactive chat session
lc chat -m anthropic:claude-3.5-sonnet

# Find embedding models
lc models embed
# or use the short alias:
lc m e

# Create embeddings for your text
lc embed -m text-embedding-3-small -v knowledge "Machine learning is a subset of AI"
lc embed -m text-embedding-3-small -v knowledge "Deep learning uses neural networks"
lc embed -m text-embedding-3-small -v knowledge "Python is popular for data science"

# Embed files with intelligent chunking
lc embed -m text-embedding-3-small -v docs -f README.md
lc embed -m text-embedding-3-small -v docs -f "*.md"
lc embed -m text-embedding-3-small -v docs -f "/path/to/docs/*.txt"

# The commands above create a vector database named "knowledge"
# List all vector databases:
lc vectors list

# Show stats for a vector database:
lc vectors stats knowledge

# Search similar content
lc similar -v knowledge "What is neural network programming?"

# RAG-enhanced chat
lc chat -v knowledge -m openai:gpt-4
lc -m openai:gpt-4 -v knowledge "Explain the relationship between AI and programming languages"

# Add an MCP server
lc mcp add playwright "npx @playwright/mcp@latest" --type stdio

# List configured MCP servers
lc mcp list

# List the functions exposed by an MCP server
lc mcp functions playwright

# Invoke an MCP function directly
lc mcp invoke playwright browser_navigate url=https://google.com

# Use Playwright tools in a chat
lc -m openai:gpt-4o-mini -t playwright "Go to google.com and search for Model context protocol"

# Web search integration
lc search provider add brave https://api.search.brave.com/res/v1/web/search -t brave
lc search provider set brave X-Subscription-Token YOUR_API_KEY

lc --use-search brave "What are the latest developments in quantum computing?"

# Search with specific query
lc --use-search "brave:quantum computing 2024" "Summarize the findings"

# Generate images from text prompts
lc image "A futuristic city with flying cars" -m dall-e-3 -s 1024x1024 -o /tmp
lc img "Abstract art with vibrant colors" -c 2 -o ./generated_images

TLS Configuration and Debugging

lc uses secure HTTPS connections by default with proper certificate verification. For development and debugging scenarios, you may need to disable TLS verification:

# macOS/Linux/Unix - Disable TLS certificate verification for development/debugging
# ⚠️  WARNING: Only use this for development with tools like Proxyman, Charles, etc.
LC_DISABLE_TLS_VERIFY=1 lc -m openai:gpt-4 "Hello world"
LC_DISABLE_TLS_VERIFY=1 lc embed -m openai:text-embedding-3-small "test text"
LC_DISABLE_TLS_VERIFY=1 lc chat -m anthropic:claude-3.5-sonnet

REM Windows Command Prompt
set LC_DISABLE_TLS_VERIFY=1
lc -m openai:gpt-4 "Hello world"
lc embed -m openai:text-embedding-3-small "test text"

# Windows PowerShell
$env:LC_DISABLE_TLS_VERIFY="1"
lc -m openai:gpt-4 "Hello world"
# or inline:
$env:LC_DISABLE_TLS_VERIFY=1; lc embed -m openai:text-embedding-3-small "test text"

Common Use Cases:

  • HTTP Debugging Tools: When using Proxyman, Charles, Wireshark, or similar tools that intercept HTTPS traffic
  • Corporate Networks: Behind corporate firewalls with custom certificates
  • Development Environments: Testing with self-signed certificates
  • Local Development: Working with local API servers without proper certificates

⚠️ Security Warning: The LC_DISABLE_TLS_VERIFY environment variable should NEVER be used in production environments as it disables important security checks that protect against man-in-the-middle attacks.

Alternative Solutions:

  • Install Root Certificates: Install your debugging tool's root certificate in the system keychain
  • Bypass Specific Domains: Configure your debugging tool to exclude specific APIs from interception
  • Use System Certificates: Ensure your system's certificate store is up to date
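
For example, on Debian/Ubuntu you can add a debugging proxy's root CA to the system trust store instead of disabling verification (a sketch; the `.crt` filename here is a placeholder for whatever certificate your tool exports):

```shell
# Hypothetical example: trust a debugging proxy's root CA system-wide
# (Debian/Ubuntu; update-ca-certificates only picks up .crt files here).
sudo cp proxyman-root.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates   # rebuilds /etc/ssl/certs
```

After this, HTTPS interception by that tool works without setting LC_DISABLE_TLS_VERIFY.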

Platform Support for MCP Daemon:

  • Unix systems (Linux, macOS, WSL2): Full MCP daemon support with persistent connections via Unix sockets (enabled by default with the unix-sockets feature)
  • Windows: MCP daemon functionality is not available due to lack of Unix socket support. Direct MCP connections without the daemon work on all platforms.
  • WSL2: Full Unix compatibility including MCP daemon support (works exactly like Linux)

To build without Unix socket support (keeping the other default features):

cargo build --release --no-default-features --features "pdf,s3-sync"

File Attachments and PDF Support

lc can process and analyze various file types, including PDFs:

# Attach text files to your prompt
lc -a document.txt "Summarize this document"

# Process PDF files (requires PDF feature)
lc -a report.pdf "What are the key findings in this report?"

# Multiple file attachments
lc -a file1.txt -a data.pdf -a config.json "Analyze these files"

# Combine with other features
lc -a research.pdf -v knowledge "Compare this with existing knowledge"

# Combine images with text attachments
lc -m gpt-4-vision-preview -i chart.png -a data.csv "Analyze this chart against the CSV data"

Note: PDF support requires the pdf feature (enabled by default). To build without PDF support:

cargo build --release --no-default-features

To explicitly enable PDF support:

cargo build --release --features pdf

Template System

lc supports configurable request/response templates, allowing you to work with any LLM API format without code changes:

# Fix GPT-5's max_completion_tokens and temperature requirement
[chat_templates."gpt-5.*"]
request = """
{
  "model": "{{ model }}",
  "messages": {{ messages | json }}{% if max_tokens %},
  "max_completion_tokens": {{ max_tokens }}{% endif %},
  "temperature": 1{% if tools %},
  "tools": {{ tools | json }}{% endif %}{% if stream %},
  "stream": {{ stream }}{% endif %}
}
"""

See Template System Documentation and config_samples/templates_sample.toml for more examples.

Features

lc supports several optional features that can be enabled or disabled during compilation:

Default Features

  • pdf: Enables PDF file processing and analysis
  • unix-sockets: Enables Unix domain socket support for MCP daemon (Unix systems only)
  • s3-sync: Enables cloud synchronization support (S3 and S3-compatible storage)

Build Options

# Build with all default features (includes PDF, Unix sockets, and S3 sync)
cargo build --release

# Build with minimal features (no PDF, no Unix sockets, no S3 sync)
cargo build --release --no-default-features

# Build with only PDF support
cargo build --release --no-default-features --features pdf

# Build with PDF and S3 sync (no Unix sockets)
cargo build --release --no-default-features --features "pdf,s3-sync"

# Explicitly enable all features
cargo build --release --features "pdf,unix-sockets,s3-sync"

Note: The unix-sockets feature is only functional on Unix-like systems (Linux, macOS, BSD, WSL2). On Windows native command prompt/PowerShell, this feature has no effect and MCP daemon functionality is not available regardless of the feature flag. WSL2 provides full Unix compatibility.

Windows-Specific Build Information

Compilation on Windows

S3 sync is now enabled by default on all platforms. On Windows, ensure you have:

  • Visual Studio 2019 or later with C++ build tools
  • Windows SDK installed
# Standard build for Windows (includes S3 sync)
cargo build --release

# Build without S3 sync if you encounter compilation issues
cargo build --release --no-default-features --features "pdf,unix-sockets"

# Run tests
cargo test

Feature Availability

Feature          Windows  macOS  Linux  WSL2
MCP Daemon       ❌       ✅     ✅     ✅
Direct MCP       ✅       ✅     ✅     ✅
S3 Sync          ✅*      ✅     ✅     ✅
PDF Processing   ✅       ✅     ✅     ✅
Vision/Images    ✅       ✅     ✅     ✅
Web Search       ✅       ✅     ✅     ✅
Vector DB/RAG    ✅       ✅     ✅     ✅

*S3 Sync on Windows requires Visual Studio C++ build tools.

Contributing

Contributions are welcome! Please see our Contributing Guide.

License

MIT License - see LICENSE file for details.


For detailed documentation, examples, and guides, visit lc.viwq.dev