[![Documentation](https://img.shields.io/badge/docs-lc.viwq.dev-blue)](https://lc.viwq.dev)
[![License](https://img.shields.io/badge/license-MIT-green)](LICENSE)
[![Rust](https://img.shields.io/badge/rust-1.70%2B-orange)](https://www.rust-lang.org)

<h1 align="center">LLM Client (lc)</h1>

<p align="center">
<img src="docs-site/static/img/social-card.png" alt="LLM Client" width="450" />
</p>


A fast, Rust-based command-line tool for interacting with Large Language Models. 

## Quick Start

```bash
# Option 1: Install from crates.io (when published)
cargo install lc-cli

# Option 2: Install from source
git clone <repository-url>
cd lc
cargo build --release

# Add a provider
lc providers add openai https://api.openai.com/v1

# Set your API key
lc keys add openai

# Start chatting
lc -m openai:gpt-4 "What is the capital of France?"

# Or set a default provider and model
lc config set provider openai
lc config set model gpt-4

# Then prompt using the defaults
lc "What is the capital of France?"
```

## Key Features

- 🚀 **Lightning Fast** - ~3ms cold start (50x faster than Python alternatives)
- 🔧 **Universal** - Works with any OpenAI-compatible API
- 🧠 **Smart** - Built-in vector database and RAG support
- 🛠️ **Tools** - Model Context Protocol (MCP) support for extending LLM capabilities
- 🔍 **Web Search** - Integrated web search with multiple providers (Brave, Exa, Serper) for enhanced context
- 👁️ **Vision Support** - Process and analyze images with vision-capable models
- 📄 **PDF Support** - Read and process PDF files with optional dependency
- 🔐 **Secure** - Encrypted configuration sync
- 💬 **Intuitive** - Simple commands with short aliases
- 🎨 **Flexible Templates** - Configure request/response formats for any LLM API
- ⌨️ **Shell Completion** - Tab completion for commands, providers, models, and more

## Shell Completion

`lc` supports comprehensive tab completion for all major shells (Bash, Zsh, Fish, PowerShell, Elvish) with both static and dynamic completion:

```bash
# Generate completion script for your shell
lc completions bash > ~/.local/share/bash-completion/completions/lc
lc completions zsh > ~/.local/share/zsh/site-functions/_lc
lc completions fish > ~/.config/fish/completions/lc.fish

# Dynamic provider completion
lc -p <TAB>                 # Shows all configured providers
lc -p g<TAB>                # Shows providers starting with "g"

# Command completion
lc providers <TAB>          # Shows provider subcommands
lc config set <TAB>         # Shows configuration options
```

For detailed setup instructions, see [Shell Completion Guide](docs/shell-completion.md).

## Documentation

For comprehensive documentation, visit **[lc.viwq.dev](https://lc.viwq.dev)**

### Quick Links

- [Installation Guide](https://lc.viwq.dev/getting-started/installation)
- [Quick Start Tutorial](https://lc.viwq.dev/getting-started/quick-start)
- [Command Reference](https://lc.viwq.dev/commands/overview)
- [Provider Setup](https://lc.viwq.dev/features/providers)
- [Vector Database & RAG](https://lc.viwq.dev/advanced/vector-database)
- [Model Context Protocol (MCP)](https://lc.viwq.dev/advanced/mcp)
- [Template System](docs/TEMPLATE_SYSTEM.md) - Configure custom request/response formats

## Supported Providers

Any OpenAI-compatible API can be used with `lc`; Anthropic, Gemini, and Amazon Bedrock are also supported. Here are some popular providers:
  - ai21 - https://api.ai21.com/studio/v1 (API Key: ✓)
  - amazon_bedrock - https://bedrock-runtime.us-east-1.amazonaws.com (API Key: ✓) - See [Bedrock Setup](#amazon-bedrock-setup)
  - cerebras - https://api.cerebras.ai/v1 (API Key: ✓)
  - chub - https://inference.chub.ai/v1 (API Key: ✓)
  - chutes - https://llm.chutes.ai/v1 (API Key: ✓)
  - claude - https://api.anthropic.com/v1 (API Key: ✓)
  - cohere - https://api.cohere.com/v2 (API Key: ✓)
  - deepinfra - https://api.deepinfra.com/v1/openai (API Key: ✓)
  - digitalocean - https://inference.do-ai.run/v1 (API Key: ✓)
  - fireworks - https://api.fireworks.ai/inference/v1 (API Key: ✓)
  - gemini - https://generativelanguage.googleapis.com (API Key: ✓)
  - github - https://models.github.ai (API Key: ✓)
  - github-copilot - https://api.individual.githubcopilot.com (API Key: ✓)
  - grok - https://api.x.ai/v1 (API Key: ✓)
  - groq - https://api.groq.com/openai/v1 (API Key: ✓)
  - huggingface - https://router.huggingface.co/v1 (API Key: ✓)
  - hyperbolic - https://api.hyperbolic.xyz/v1 (API Key: ✓)
  - kilo - https://kilocode.ai/api/openrouter (API Key: ✓)
  - meta - https://api.llama.com/v1 (API Key: ✓)
  - mistral - https://api.mistral.ai/v1 (API Key: ✓)
  - nebius - https://api.studio.nebius.com/v1 (API Key: ✓)
  - novita - https://api.novita.ai/v3/openai (API Key: ✓)
  - nscale - https://inference.api.nscale.com/v1 (API Key: ✓)
  - nvidia - https://integrate.api.nvidia.com/v1 (API Key: ✓)
  - ollama - http://localhost:11434/v1 (API Key: ✓)
  - openai - https://api.openai.com/v1 (API Key: ✓)
  - openrouter - https://openrouter.ai/api/v1 (API Key: ✓)
  - perplexity - https://api.perplexity.ai (API Key: ✓)
  - poe - https://api.poe.com/v1 (API Key: ✓)
  - requesty - https://router.requesty.ai/v1 (API Key: ✓)
  - sambanova - https://api.sambanova.ai/v1 (API Key: ✓)
  - together - https://api.together.xyz/v1 (API Key: ✓)
  - venice - https://api.venice.ai/api/v1 (API Key: ✓)
  - vercel - https://ai-gateway.vercel.sh/v1 (API Key: ✓)

### Amazon Bedrock Setup

Amazon Bedrock requires special configuration because it uses different endpoints for model listing and chat completions:

```bash
# Add Bedrock provider with different endpoints
lc providers add bedrock https://bedrock-runtime.us-east-1.amazonaws.com \
  -m /foundation-models \
  -c "https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse"

# Set your AWS Bearer Token
lc keys add bedrock

# List available models
lc providers models bedrock

# Use Bedrock models
lc -m bedrock:amazon.nova-pro-v1:0 "Hello, how are you?"

# Interactive chat with Bedrock
lc chat -m bedrock:amazon.nova-pro-v1:0
```

**Key differences for Bedrock:**
- **Models endpoint**: Uses `https://bedrock.us-east-1.amazonaws.com/foundation-models`
- **Chat endpoint**: Uses `https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse`
- **Authentication**: Requires AWS Bearer Token for Bedrock
- **Model names**: Use full Bedrock model identifiers (e.g., `amazon.nova-pro-v1:0`)

The `{model_name}` placeholder in the chat URL is automatically replaced with the actual model name when making requests.
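In other words, the configured chat URL acts as a per-request template. A minimal Rust sketch of this substitution (illustrative only, not `lc`'s actual implementation):

```rust
// Illustrative sketch of the `{model_name}` substitution described above.
// This is not lc's actual code, just the same string replacement in miniature.
fn converse_url(template: &str, model: &str) -> String {
    template.replace("{model_name}", model)
}

fn main() {
    let template = "https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse";
    // → https://bedrock-runtime.us-east-1.amazonaws.com/model/amazon.nova-pro-v1:0/converse
    println!("{}", converse_url(template, "amazon.nova-pro-v1:0"));
}
```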

## Example Usage

```bash
# Direct prompt with specific model
lc -m openai:gpt-4 "Explain quantum computing"

# Interactive chat session
lc chat -m anthropic:claude-3.5-sonnet

# Create embeddings
lc embed -m openai:text-embedding-3-small -v knowledge "Important information"

# Search similar content
lc similar -v knowledge "related query"

# RAG-enhanced chat
lc -v knowledge "What do you know about this topic?"

# Use MCP tools for internet access
lc -t fetch "What's the latest news about AI?"

# Multiple MCP tools
lc -t fetch,playwright "Navigate to example.com and analyze its content"

# Web search integration
lc --use-search brave "What are the latest developments in quantum computing?"

# Search with specific query
lc --use-search "brave:quantum computing 2024" "Summarize the findings"

# Generate images from text prompts
lc image "A futuristic city with flying cars" -m dall-e-3 -s 1024x1024
lc img "Abstract art with vibrant colors" -c 2 -o ./generated_images
```

### Web Search Integration

`lc` supports web search integration to enhance prompts with real-time information:

```bash
# Configure Brave Search
lc search provider add brave https://api.search.brave.com/res/v1/web/search -t brave
lc search provider set brave X-Subscription-Token YOUR_API_KEY

# Configure Exa (AI-powered search)
lc search provider add exa https://api.exa.ai -t exa
lc search provider set exa x-api-key YOUR_API_KEY

# Configure Serper (Google Search API)
lc search provider add serper https://google.serper.dev -t serper
lc search provider set serper X-API-KEY YOUR_API_KEY

# Set default search provider
lc config set search brave

# Direct search
lc search query brave "rust programming language" -f json
lc search query exa "machine learning best practices" -n 10
lc search query serper "latest AI developments" -f md

# Use search results as context
lc --use-search brave "What are the latest AI breakthroughs?"
lc --use-search exa "Explain transformer architecture"
lc --use-search serper "What are the current trends in quantum computing?"

# Search with custom query
lc --use-search "brave:specific search terms" "Analyze these results"
lc --use-search "exa:neural networks 2024" "Summarize recent advances"
lc --use-search "serper:GPT-4 alternatives 2024" "Compare the latest language models"
```

### Image Generation

`lc` supports text-to-image generation using compatible providers:

```bash
# Basic image generation
lc image "A beautiful sunset over mountains"

# Generate with specific model and size
lc image "A futuristic robot" -m dall-e-3 -s 1024x1024

# Generate multiple images
lc image "Abstract geometric patterns" -c 4

# Save to specific directory
lc image "A cozy coffee shop" -o ./my_images

# Use short alias
lc img "A magical forest" -m dall-e-2 -s 512x512

# Generate with specific provider
lc image "Modern architecture" -p openai -m dall-e-3

# Debug mode to see API requests
lc image "Space exploration" --debug
```

**Supported Parameters:**
- `-m, --model`: Image generation model (e.g., dall-e-2, dall-e-3)
- `-p, --provider`: Provider to use (openai, etc.)
- `-s, --size`: Image size (256x256, 512x512, 1024x1024, 1792x1024, 1024x1792)
- `-c, --count`: Number of images to generate (1-10, default: 1)
- `-o, --output`: Output directory for saved images (default: current directory)
- `--debug`: Enable debug mode to see API requests

**Note:** Image generation is currently supported by OpenAI-compatible providers. Generated images are automatically saved with timestamps and descriptive filenames.
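To illustrate what "timestamps and descriptive filenames" can look like, here is a hypothetical Rust sketch of such a naming scheme (the helper and format are assumptions for illustration; `lc`'s actual filenames may differ):

```rust
// Hypothetical sketch of a timestamped, descriptive output filename.
// Not lc's actual naming code; the slug rules and format are assumptions.
fn output_name(prompt: &str, timestamp: u64, index: u32) -> String {
    // Build a short slug: lowercase, non-alphanumeric chars become '_',
    // capped at 24 characters.
    let slug: String = prompt
        .to_ascii_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '_' })
        .take(24)
        .collect();
    format!("{}_{}_{}.png", slug, timestamp, index)
}

fn main() {
    // → a_cozy_coffee_shop_1700000000_1.png
    println!("{}", output_name("A cozy coffee shop", 1_700_000_000, 1));
}
```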

### Vision/Image Support

`lc` supports image inputs for vision-capable models across multiple providers:

```bash
# Single image analysis
lc -m gpt-4-vision-preview -i photo.jpg "What's in this image?"

# Multiple images
lc -m claude-3-opus-20240229 -i before.jpg -i after.jpg "Compare these images"

# Image from URL
lc -m gemini-pro-vision -i https://example.com/image.jpg "Describe this image"

# Interactive chat with images
lc chat -m gpt-4-vision-preview -i screenshot.png

# Find vision-capable models
lc models --vision

# Combine with other features
lc -m gpt-4-vision-preview -i diagram.png -a notes.txt "Explain this diagram with the context from my notes"
```

Supported formats: JPG, PNG, GIF, WebP (max 20MB per image)
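If you script around `lc`, a pre-flight check against these documented limits lets you fail fast before sending a request. A hedged Rust sketch (the format list and 20 MB cap come from the line above; the helper itself is hypothetical, not part of `lc`):

```rust
use std::path::Path;

// Pre-flight check against the documented limits: JPG, PNG, GIF, WebP,
// at most 20 MB per image. Hypothetical helper, not part of lc itself.
fn is_supported_image(path: &str, size_bytes: u64) -> bool {
    const MAX_BYTES: u64 = 20 * 1024 * 1024;
    let ext = Path::new(path)
        .extension()
        .and_then(|e| e.to_str())
        .map(|e| e.to_ascii_lowercase());
    let ok_ext = matches!(ext.as_deref(), Some("jpg" | "jpeg" | "png" | "gif" | "webp"));
    ok_ext && size_bytes <= MAX_BYTES
}

fn main() {
    println!("{}", is_supported_image("photo.jpg", 1_000_000)); // true
    println!("{}", is_supported_image("scan.bmp", 1_000));      // false: unsupported format
}
```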

### Model Context Protocol (MCP)

`lc` supports MCP servers to extend LLM capabilities with external tools:

```bash
# Add an MCP server
lc mcp add fetch "uvx mcp-server-fetch" --type stdio

# List available functions
lc mcp functions fetch

# Use tools in prompts
lc -t fetch "Get the current weather in Tokyo"

# Interactive chat with tools
lc chat -m gpt-4 -t fetch
```

**Platform Support for MCP Daemon:**
- **Unix systems** (Linux, macOS, WSL2): Full MCP daemon support with persistent connections via Unix sockets (enabled by default with the `unix-sockets` feature)
- **Windows**: MCP daemon functionality is not available due to lack of Unix socket support. Direct MCP connections without the daemon work on all platforms.
- **WSL2**: Full Unix compatibility including MCP daemon support (works exactly like Linux)

To build without Unix socket support:
```bash
cargo build --release --no-default-features --features pdf
```

Learn more about MCP in our [documentation](https://lc.viwq.dev/advanced/mcp).

### File Attachments and PDF Support

`lc` can process and analyze various file types, including PDFs:

```bash
# Attach text files to your prompt
lc -a document.txt "Summarize this document"

# Process PDF files (requires PDF feature)
lc -a report.pdf "What are the key findings in this report?"

# Multiple file attachments
lc -a file1.txt -a data.pdf -a config.json "Analyze these files"

# Combine with other features
lc -a research.pdf -v knowledge "Compare this with existing knowledge"

# Combine images with text attachments
lc -m gpt-4-vision-preview -i chart.png -a data.csv "Analyze this chart against the CSV data"
```

**Note:** PDF support requires the `pdf` feature (enabled by default). To build without PDF support:

```bash
cargo build --release --no-default-features
```

To explicitly enable PDF support:

```bash
cargo build --release --features pdf
```

### Template System

`lc` supports configurable request/response templates, allowing you to work with any LLM API format without code changes:

```toml
# Fix GPT-5's max_completion_tokens and temperature requirement
[chat_templates."gpt-5.*"]
request = """
{
  "model": "{{ model }}",
  "messages": {{ messages | json }}{% if max_tokens %},
  "max_completion_tokens": {{ max_tokens }}{% endif %},
  "temperature": 1{% if tools %},
  "tools": {{ tools | json }}{% endif %}{% if stream %},
  "stream": {{ stream }}{% endif %}
}
"""
```

See [Template System Documentation](docs/TEMPLATE_SYSTEM.md) and [config_samples/templates_sample.toml](config_samples/templates_sample.toml) for more examples.

## Features

`lc` supports several optional features that can be enabled or disabled during compilation:

### Default Features

- `pdf`: Enables PDF file processing and analysis
- `unix-sockets`: Enables Unix domain socket support for MCP daemon (Unix systems only)

### Build Options

```bash
# Build with all default features
cargo build --release

# Build with minimal features (no PDF, no Unix sockets)
cargo build --release --no-default-features

# Build with only PDF support (no Unix sockets)
cargo build --release --no-default-features --features pdf

# Build with only Unix socket support (no PDF)
cargo build --release --no-default-features --features unix-sockets

# Explicitly enable all features
cargo build --release --features "pdf,unix-sockets"
```

**Note:** The `unix-sockets` feature is only functional on Unix-like systems (Linux, macOS, BSD, WSL2). On Windows native command prompt/PowerShell, this feature has no effect and MCP daemon functionality is not available regardless of the feature flag. WSL2 provides full Unix compatibility.


| Feature | Windows | macOS | Linux | WSL2 |
|---------|---------|-------|-------|------|
| MCP Daemon | ❌ | ✅ | ✅ | ✅ |
| Direct MCP | ✅ | ✅ | ✅ | ✅ |

## Contributing

Contributions are welcome! Please see our [Contributing Guide](https://lc.viwq.dev/development/contributing).

## License

MIT License - see [LICENSE](LICENSE) file for details.

---

For detailed documentation, examples, and guides, visit **[lc.viwq.dev](https://lc.viwq.dev)**