# squid 🦑

An AI-powered command-line tool for code reviews and suggestions. Privacy-focused and local-first - your code never leaves your hardware when using local models.

## Features

- 🤖 Chat with LLMs via OpenAI-compatible APIs
- 📄 Provide file context for AI analysis
- 🔍 AI-powered code reviews with language-specific prompts
- 🔧 Tool calling support (file read/write/search/bash operations) with multi-layered security
- 🕐 **Datetime awareness** - LLM can access current date and time (UTC or local)
- 🔒 Path validation (whitelist/blacklist) and .squidignore support
- 🛡️ User approval required for all tool executions (read/write files)
- 🌊 Streaming support for real-time responses
- 🎨 **Enhanced UI** with styled prompts, emoji icons, color-coded information
- 🦑 Friendly squid assistant personality with professional responses
- ⚙️ Configurable via environment variables
- 🔌 Works with LM Studio, OpenAI, Ollama, Mistral, and other compatible services

## Privacy & Local-First

**Your code never leaves your hardware** when using local LLM services (LM Studio, Ollama, etc.).

- 🔒 **Complete Privacy** - Run models entirely on your own machine
- 🏠 **Local-First** - No data sent to external servers with local models
- 🛡️ **You Control Your Data** - Choose between local models (private) or cloud APIs (convenient)
- 🔐 **Secure by Default** - Multi-layered security prevents unauthorized file access

**Privacy Options:**
- **Maximum Privacy**: Use LM Studio or Ollama - everything runs locally, no internet required for inference
- **Cloud Convenience**: Use OpenAI or other cloud providers - data sent to their servers for processing
- **Your Choice**: Squid works with both - you decide based on your privacy needs

All file operations require your explicit approval, regardless of which LLM service you use.

## Prerequisites

Before you begin, you'll need:

1. **Rust toolchain** (for building squid)
   ```bash
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   ```

2. **An OpenAI-compatible LLM service** (choose one):

<details open>
<summary><b>Option A: LM Studio (Recommended for Local Development)</b></summary>

[LM Studio](https://lmstudio.ai/) provides a user-friendly interface for running local LLMs.

1. **Download and install** LM Studio from https://lmstudio.ai/
2. **Download a model** - We recommend **Qwen2.5-Coder** for code-related tasks:
   - In LM Studio, search for: `lmstudio-community/Qwen2.5-Coder-7B-Instruct-MLX-4bit`
   - Or browse: https://huggingface.co/lmstudio-community/Qwen2.5-Coder-7B-Instruct-MLX-4bit
   - Click download and wait for it to complete
3. **Load the model** - Select the downloaded model in LM Studio
4. **Start the local server**:
   - Click the "Local Server" tab (↔️ icon on the left)
   - Click "Start Server"
   - Default endpoint: `http://127.0.0.1:1234/v1`
   - Note: No API key required for local server

**Alternative models in LM Studio:**
- `Meta-Llama-3.1-8B-Instruct` - General purpose
- `deepseek-coder` - Code-focused
- Any other model compatible with your hardware

</details>

<details>
<summary><b>Option B: Ollama (Lightweight CLI Option)</b></summary>

[Ollama](https://ollama.com/) is a lightweight, command-line tool for running LLMs.

1. **Install Ollama**:
   ```bash
   # macOS
   brew install ollama
   
   # Linux
   curl -fsSL https://ollama.com/install.sh | sh
   
   # Or download from https://ollama.com/
   ```

2. **Start Ollama service**:
   ```bash
   ollama serve
   ```

3. **Pull the recommended model** - **Qwen2.5-Coder**:
   ```bash
   ollama pull qwen2.5-coder
   ```
   - Model page: https://ollama.com/library/qwen2.5-coder
   - Available sizes: 0.5B, 1.5B, 3B, 7B, 14B, 32B
   - Default (7B) is recommended for most use cases

4. **Verify it's running**:
   ```bash
   ollama list  # Should show qwen2.5-coder
   curl http://localhost:11434/api/tags  # API check
   ```

**Alternative models in Ollama:**
- `codellama` - Code generation
- `deepseek-coder` - Code understanding
- `llama3.1` - General purpose
- See all at https://ollama.com/library

</details>

<details>
<summary><b>Option C: OpenAI API</b></summary>

Use OpenAI's cloud API for access to GPT models:

1. **Get an API key** from https://platform.openai.com/api-keys
2. **Add credits** to your OpenAI account
3. **Choose a model**: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, etc.

</details>

<details>
<summary><b>Option D: Mistral API</b></summary>

Use Mistral's cloud API for access to their powerful models:

1. **Get an API key** from https://console.mistral.ai/
2. **Choose a model**: `devstral-2512`, `mistral-large-latest`, `mistral-small-latest`, etc.
3. **Configure**: Mistral API is OpenAI-compatible, so it works seamlessly with Squid

</details>

<details>
<summary><b>Option E: Other OpenAI-Compatible Services</b></summary>

Squid works with any OpenAI-compatible REST API:
- **OpenRouter** (https://openrouter.ai/) - Access to multiple LLM providers
- **Together AI** (https://together.ai/) - Fast inference
- **Anyscale** (https://anyscale.com/) - Enterprise solutions
- **Local APIs** - Any custom OpenAI-compatible endpoint

</details>

## Installation

### From crates.io (Recommended)

```bash
cargo install squid-rs
```

This installs the `squid` command globally, so you can run `squid` from anywhere.

### From Source

Clone the repository and install locally:

```bash
git clone https://github.com/DenysVuika/squid.git
cd squid
cargo install --path .
```

### For Development

```bash
cargo build --release
```

For development, use `cargo run --` instead of `squid` in the examples below.
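
After a release build, you can also run the binary directly from the target directory. A quick sanity check (assuming the standard `--help` flag):

```bash
# Run the freshly built binary from the target directory
./target/release/squid --help
```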

## Configuration

You can configure squid in two ways:

### Option 1: Interactive Setup (Recommended)

Use the `init` command to create a `squid.config.json` file:

#### Interactive Mode (Default)

```bash
# Initialize in current directory
squid init

# Initialize in a specific directory
squid init ./my-project
squid init /path/to/project
```

This will prompt you for:
- **API URL**: The base URL for your LLM service (e.g., `http://127.0.0.1:1234/v1`)
- **API Model**: The model identifier (e.g., `local-model`, `qwen2.5-coder`, `gpt-4`)
- **API Key**: Optional API key (leave empty for local models like LM Studio or Ollama)
- **Log Level**: Logging verbosity (`error`, `warn`, `info`, `debug`, `trace`)

**Example session:**
```
$ squid init
INFO: Initializing squid configuration in "."...
? API URL: http://127.0.0.1:1234/v1
? API Model: local-model
? API Key (optional, press Enter to skip): 
? Log Level: error

Configuration saved to: "squid.config.json"
  API URL: http://127.0.0.1:1234/v1
  API Model: local-model
  API Key: [not set]
  Log Level: error

✓ Default permissions configured
  Allowed: ["now"]

✓ Created .squidignore with default patterns
  Edit this file to customize which files squid should ignore
```

**Re-running init on existing config:**

When you run `squid init` on a directory that already has a config file, it will:
- Use existing values as defaults in prompts
- **Smart merge permissions**: Preserve your custom permissions + add new defaults
- Update version to match current app version

```
$ squid init --url http://127.0.0.1:1234/v1 --model local-model --api-key "" --log-level info
Found existing configuration, using current values as defaults...

Configuration saved to: "./squid.config.json"
  API URL: http://127.0.0.1:1234/v1
  API Model: local-model
  API Key: [configured]
  Log Level: info

✓ Added new default permissions: ["now"]

✓ Current tool permissions:
  Allowed: ["bash:git status", "bash:ls", "now"]
  Denied: ["write_file"]

✓ Using existing .squidignore file
```

In this example:
- User's existing permissions (`bash:git status`, `bash:ls`, `write_file` denial) are preserved
- New default permission (`now`) was automatically added
- Config version was updated to match the current app version

#### Non-Interactive Mode

You can also provide configuration values via command-line arguments to skip the interactive prompts:

```bash
# Initialize with all parameters
squid init --url http://127.0.0.1:1234/v1 --model local-model --log-level error

# Initialize in a specific directory with parameters
squid init ./my-project --url http://localhost:11434/v1 --model qwen2.5-coder --log-level error

# Partial parameters (will prompt for missing values)
squid init --url http://127.0.0.1:1234/v1 --model gpt-4
# Will still prompt for API Key and Log Level

# Include API key for cloud services
squid init --url https://api.openai.com/v1 --model gpt-4 --api-key sk-your-key-here --log-level error
```

**Available options:**
- `--url <URL>` - API URL (e.g., `http://127.0.0.1:1234/v1`)
- `--model <MODEL>` - API Model (e.g., `local-model`, `qwen2.5-coder`, `gpt-4`)
- `--api-key <KEY>` - API Key (optional for local models)
- `--log-level <LEVEL>` - Log Level (`error`, `warn`, `info`, `debug`, `trace`)

The configuration is saved to `squid.config.json` in the specified directory (or current directory if not specified). This file can be committed to your repository to share project settings with your team.
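
For reference, you can also write the config file by hand. The sketch below is illustrative only: apart from `api_key` and `permissions`, which this README references directly, the field names are assumptions, so compare against a file generated by `squid init`:

```bash
# Illustrative squid.config.json written via heredoc; key names other than
# "api_key" and "permissions" are assumptions - verify against `squid init` output.
cat > squid.config.json <<'EOF'
{
  "api_url": "http://127.0.0.1:1234/v1",
  "api_model": "local-model",
  "log_level": "error",
  "permissions": {
    "allow": ["now"],
    "deny": []
  }
}
EOF
```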

### Option 2: Manual Configuration

Create a `.env` file in the project root:

```bash
# OpenAI API Configuration (for LM Studio or OpenAI)
API_URL=http://127.0.0.1:1234/v1
API_MODEL=local-model
API_KEY=not-needed
```
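
If you prefer not to create a file, the same variables can usually be exported in the shell instead (this assumes squid falls back to the process environment, as dotenv-based tools typically do):

```bash
# Assumes squid also picks up variables from the process environment
export API_URL=http://127.0.0.1:1234/v1
export API_MODEL=local-model
export API_KEY=not-needed
squid ask "Hello"
```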

**Important Notes:**
- `squid.config.json` takes precedence over `.env` variables. If both exist, the config file will be used.
- **Commit `squid.config.json`** to your repository to share project settings with your team
- **Keep `.env` private** - it should contain sensitive information like API keys and is excluded from git
- For cloud API services (OpenAI, etc.), store the actual API key in `.env` and omit `api_key` from `squid.config.json`

### Configuration Options

- `API_URL`: The base URL for the API endpoint
  - LM Studio: `http://127.0.0.1:1234/v1` (default)
  - Ollama: `http://localhost:11434/v1`
  - OpenAI: `https://api.openai.com/v1`
  - Other: Your provider's base URL
  
- `API_MODEL`: The model to use
  - LM Studio: `local-model` (uses whatever model is loaded)
  - Ollama: `qwen2.5-coder` (recommended) or any pulled model
  - OpenAI: `gpt-4`, `gpt-3.5-turbo`, etc.
  - Other: Check your provider's model names
  
- `API_KEY`: Your API key
  - LM Studio: `not-needed` (no authentication required)
  - Ollama: `not-needed` (no authentication required)
  - OpenAI: Your actual API key (e.g., `sk-...`)
  - Other: Your provider's API key

- `LOG_LEVEL`: Logging verbosity (optional, default: `error`)
  - `error`: Only errors (default)
  - `warn`: Warnings and errors
  - `info`: Informational messages
  - `debug`: Detailed debugging information
  - `trace`: Very verbose output

- `permissions`: Tool execution permissions (optional)
  - `allow`: Array of tool names that run without confirmation (default: `["now"]`)
  - `deny`: Array of tool names that are completely blocked (default: `[]`)
  - **Granular bash permissions**: Use `"bash:command"` format for specific commands
    - `"bash"` - allows all bash commands (dangerous patterns still blocked)
    - `"bash:ls"` - allows only `ls` commands (ls, ls -la, etc.)
    - `"bash:git status"` - allows only `git status` commands
  - ⚠️ **Important**: Dangerous bash commands (`rm`, `sudo`, `chmod`, `dd`, `curl`, `wget`, `kill`) are **always blocked** regardless of permissions
  - Example:
    ```json
    "permissions": {
      "allow": ["now", "read_file", "grep", "bash:ls", "bash:git status"],
      "deny": ["write_file", "bash:rm"]
    }
    ```
  - When prompted for tool approval, you can choose:
    - **Yes (this time)** - Allow once, ask again next time
    - **No (skip)** - Deny once, ask again next time
    - **Always** - Add to allow list and auto-save config (bash commands save as `bash:command`)
    - **Never** - Add to deny list and auto-save config (bash commands save as `bash:command`)
  - See [Security Documentation](docs/SECURITY.md#-tool-permissions-allowdeny-lists) for details

## Usage

> **Note:** The examples below use the `squid` command (after installation with `cargo install squid-rs` or `cargo install --path .`).  
> For development, replace `squid` with `cargo run --` (e.g., `cargo run -- ask "question"`).

### Ask a Question

```bash
# Basic question (streaming by default)
squid ask "What is Rust?"

# With additional context using -m
squid ask "Explain Rust" -m "Focus on memory safety"

# Use a custom system prompt
squid ask "Explain Rust" -p custom-prompt.md

# Disable streaming for complete response at once (useful for scripting)
squid ask "Explain async/await in Rust" --no-stream
```

By default, responses are streamed in real-time, displaying tokens as they are generated. Use `--no-stream` to get the complete response at once (useful for piping or scripting).
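
For example, a minimal scripting pattern built on the documented `--no-stream` flag:

```bash
# Capture the complete response in a variable for later use
answer=$(squid ask --no-stream "Summarize Rust's ownership model in one sentence")
echo "$answer" >> notes.md
```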

### Ask About a File

```bash
# Basic file question (streams by default)
squid ask -f sample-files/sample.txt "What are the key features mentioned?"

# With additional context using -m
squid ask -f src/main.rs "What does this do?" -m "Focus on error handling"

# Use a custom system prompt for specialized analysis
squid ask -f src/main.rs "Review this" -p expert-reviewer-prompt.md

# Disable streaming for complete response
squid ask -f code.rs --no-stream "Explain what this code does"
```

This will read the file content and include it in the prompt, allowing the AI to answer questions based on the file's content.

### Review Code

```bash
# Review a file with language-specific prompts (streams by default)
squid review src/main.rs

# Focus on specific aspects
squid review styles.css -m "Focus on performance issues"

# Get complete review at once (no streaming)
squid review app.ts --no-stream
```

The review command automatically selects the appropriate review prompt based on file type:
- **Rust** (`.rs`) - Ownership, safety, idioms, error handling
- **TypeScript/JavaScript** (`.ts`, `.js`, `.tsx`, `.jsx`) - Type safety, modern features, security
- **HTML** (`.html`, `.htm`) - Semantics, accessibility, SEO
- **CSS** (`.css`, `.scss`, `.sass`) - Performance, responsive design, maintainability
- **Other files** - Generic code quality and best practices
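
A shell loop makes batch reviews straightforward; an illustrative sketch using only the documented command and flag:

```bash
# Review every Rust file under src/ and save each report
mkdir -p reviews
for f in src/*.rs; do
  squid review "$f" --no-stream > "reviews/$(basename "$f").md"
done
```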

### Tool Calling (with Multi-Layered Security)

Squid exposes tools that the LLM can call when needed; the model decides when to read, write, or search files based on your questions.

**Security Layers:**
1. **Path Validation** - Automatically blocks system directories (`/etc`, `/root`, `~/.ssh`, etc.)
2. **Ignore Patterns** - `.squidignore` file blocks specified files/directories (like `.gitignore`)
3. **User Approval** - Manual confirmation required for each operation

For details, see [Security Features](docs/SECURITY.md).

```bash
# LLM intelligently reads files when you ask about them
squid ask "Read the README.md file and summarize it"
squid ask "What dependencies are in Cargo.toml?"
squid ask "Analyze the main.rs file for me"
# You'll be prompted: "Allow reading file: [filename]? (Y/n)"

# LLM can write files
squid ask "Create a hello.txt file with 'Hello, World!'"
# You'll be prompted with a preview: "Allow writing to file: hello.txt?"

# Use custom prompts with tool calling
squid ask -p expert-coder.md "Read Cargo.toml and suggest optimizations"

# LLM can search for patterns in files using grep
squid ask "Search for all TODO comments in the src directory"
squid ask "Find all function definitions in src/main.rs"
squid ask "Search for 'API_URL' in the project"
squid ask "Find all uses of 'unwrap' in the codebase"
squid ask "Show me all error handling patterns in src/tools.rs"
# You'll be prompted: "Allow searching for pattern '...' in: [path]? (Y/n)"
# Results show file path, line number, and matched content

# LLM can get current date and time
squid ask "What time is it now?"
squid ask "What's the current date?"
# You'll be prompted: "Allow getting current date and time? (Y/n)"
# Returns datetime in RFC 3339 format

# LLM can execute safe bash commands
squid ask "What files are in this directory?"
squid ask "Show me the git status"
squid ask "List all .rs files in src/"
# You'll be prompted: "Allow executing bash command: [command]? (Y/n)"
# Dangerous commands (rm, sudo, chmod, dd, curl, wget, kill) are automatically blocked

# Use --no-stream for non-streaming mode
squid ask --no-stream "Read Cargo.toml and list all dependencies"
```

**Available Tools:**
- 📖 **read_file** - Read file contents from the filesystem
- 📝 **write_file** - Write content to files
- 🔍 **grep** - Search for patterns in files using regex (supports directories and individual files)
- 🕐 **now** - Get current date and time in RFC 3339 format (UTC or local timezone)
- 💻 **bash** - Execute safe, non-destructive bash commands (ls, git status, cat, etc.)

**Key Features:**
- 🤖 **Intelligent tool usage** - LLM understands when to read/write/search files from natural language
- 🛡️ **Path validation** - Automatic blocking of system and sensitive directories
- 📂 **Ignore patterns** - `.squidignore` file for project-specific file blocking
- 🔒 **Security approval** - All tool executions require user confirmation
- 📋 **Content preview** - File write operations show what will be written
- ⌨️ **Simple controls** - Press `Y` to allow or `N` to skip
- 📝 **Full logging** - All tool calls are logged for transparency
- 🔍 **Regex support** - Grep tool supports regex patterns with configurable case sensitivity
- 💻 **Bash execution** - Run safe, read-only commands for system inspection (dangerous commands **always** blocked, even with permissions)
- 🔐 **Privacy preserved** - With local models (LM Studio/Ollama), all file operations happen locally on your machine

**Using .squidignore:**

Create a `.squidignore` file in your project root to block specific files and directories:

```bash
# .squidignore - Works like .gitignore
*.log
.env
target/
node_modules/
__pycache__/
```

Patterns are automatically enforced - the LLM cannot access ignored files even if approved.
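
For instance, adding a pattern takes effect on the next run (the `secrets/` directory and file here are hypothetical, and the exact refusal message may differ):

```bash
# Block an entire directory, then ask squid to read from it
echo "secrets/" >> .squidignore
squid ask "Read secrets/credentials.txt and summarize it"
# squid refuses the read: the path matches an ignore pattern
```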

## Documentation

- **[Quick Start Guide](docs/QUICKSTART.md)** - Get started in 5 minutes
- **[Security Features](docs/SECURITY.md)** - Tool approval and security safeguards
- **[System Prompts Reference](docs/PROMPTS.md)** - Guide to all system prompts and customization
- **[Examples](docs/EXAMPLES.md)** - Comprehensive usage examples and workflows
- **[Changelog](CHANGELOG.md)** - Version history and release notes
- **[Sample File](sample-files/sample.txt)** - Test file for trying out the file context feature
- **[Example Files](sample-files/README.md)** - Test files for code review prompts
- **[AI Agents Guide](AGENTS.md)** - Instructions for AI coding assistants working on this project

### Testing

Try the code review and security features with the provided test scripts:

```bash
# Test code reviews (automated)
./tests/test-reviews.sh

# Test security approval (interactive)
./tests/test-security.sh

# Or test individual examples
squid review sample-files/example.rs
squid review sample-files/example.ts --no-stream
squid review sample-files/example.html -m "Focus on accessibility"
```

See **[tests/README.md](tests/README.md)** for complete testing documentation and **[sample-files/README.md](sample-files/README.md)** for details on each example file.

## Examples

<details open>
<summary><b>Using with LM Studio</b></summary>

1. Download and install LM Studio from https://lmstudio.ai/
2. Download the recommended model: `lmstudio-community/Qwen2.5-Coder-7B-Instruct-MLX-4bit`
3. Load the model in LM Studio
4. Start the local server (↔️ icon → "Start Server")
5. Set up your `.env`:
   ```bash
   API_URL=http://127.0.0.1:1234/v1
   API_MODEL=local-model
   API_KEY=not-needed
   ```
6. Run:
   ```bash
   squid ask "Write a hello world program in Rust"
   # Or with a file
   squid ask -f sample-files/sample.txt "What is this document about?"
   # Use --no-stream for complete response at once
   squid ask --no-stream "Quick question"
   ```

</details>

<details>
<summary><b>Using with Ollama</b></summary>

1. Install Ollama from https://ollama.com/
2. Start Ollama service:
   ```bash
   ollama serve
   ```
3. Pull the recommended model:
   ```bash
   ollama pull qwen2.5-coder
   ```
4. Set up your `.env`:
   ```bash
   API_URL=http://localhost:11434/v1
   API_MODEL=qwen2.5-coder
   API_KEY=not-needed
   ```
5. Run:
   ```bash
   squid ask "Write a hello world program in Rust"
   # Or with a file
   squid ask -f mycode.rs "Explain this code"
   # Use --no-stream if needed
   squid ask --no-stream "Quick question"
   ```

</details>

<details>
<summary><b>Using with OpenAI</b></summary>

1. Get your API key from https://platform.openai.com/api-keys
2. Set up your `.env`:
   ```bash
   API_URL=https://api.openai.com/v1
   API_MODEL=gpt-4
   API_KEY=sk-your-api-key-here
   ```
3. Run:
   ```bash
   squid ask "Explain the benefits of Rust"
   # Or analyze a file
   squid ask -f mycode.rs "Review this code for potential improvements"
   # Use --no-stream for scripting
   result=$(squid ask --no-stream "Generate a function name")
   ```

</details>

<details>
<summary><b>Using with Mistral API</b></summary>

1. Get your API key from https://console.mistral.ai/
2. Set up your `.env`:
   ```bash
   API_URL=https://api.mistral.ai/v1
   API_MODEL=devstral-2512
   API_KEY=your-mistral-api-key-here
   ```
3. Run:
   ```bash
   squid ask "Write a function to parse JSON"
   # Or use code review
   squid review myfile.py
   # Mistral models work great for code-related tasks
   ```

</details>

## License

Apache-2.0 License. See `LICENSE` file for details.