# RustShell Architecture

*nlsh 0.1.0: AI-powered cross-platform shell utility with natural language command processing*

## Overview
RustShell is a cross-platform terminal application that bridges the gap between natural language input and OS-specific commands. The system transforms user intent expressed in natural language into appropriate commands for the target operating system.

## Current Architecture

### Core Components

```
┌─────────────────────────────────────────────────────────────┐
│                    RustShell Application                    │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │  Main CLI   │  │ Interactive │  │   Command Parser    │  │
│  │   Handler   │  │    Mode     │  │   & Dispatcher      │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │   Alias     │  │  Tab        │  │   Command History   │  │
│  │  Manager    │  │ Completion  │  │     & Hints         │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
├─────────────────────────────────────────────────────────────┤
│                    ShellCommand Trait                       │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────────────────────────────────────────────────┐   │
│  │              Command Implementations                 │   │
│  │  • MakeDir    • CopyFile     • ListDir             │   │
│  │  • MakeFile   • MoveFile     • ExecuteCommand      │   │
│  │  • RemoveFile • RemoveDir    • ShowFile            │   │
│  │  • ChangeDir  • CurrentPath  • FindFiles           │   │
│  │  • CompressFiles • AliasCommand • PipeCommand      │   │
│  └──────────────────────────────────────────────────────┘   │
├─────────────────────────────────────────────────────────────┤
│             Cross-Platform Command Execution                │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │   Windows   │  │    Linux    │  │       macOS         │  │
│  │PowerShell/  │  │   sh/bash   │  │      sh/bash        │  │
│  │    cmd      │  │  commands   │  │     commands        │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
```
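The diagram hinges on the `ShellCommand` trait as the seam between the CLI layer and the concrete command implementations. The exact signature in the codebase may differ; the sketch below assumes a hypothetical `CommandOutput` result type to show the shape of the abstraction:

```rust
// Stand-in for whatever result type the real implementation uses.
struct CommandOutput {
    stdout: String,
    success: bool,
}

// Hypothetical trait: every command (MakeDir, CopyFile, ...) implements
// this, so the dispatcher can treat them uniformly.
trait ShellCommand {
    fn name(&self) -> &str;
    fn execute(&self, args: &[String]) -> CommandOutput;
}

// Example implementation for illustration only.
struct CurrentPath;

impl ShellCommand for CurrentPath {
    fn name(&self) -> &str {
        "current_path"
    }

    fn execute(&self, _args: &[String]) -> CommandOutput {
        let path = std::env::current_dir()
            .map(|p| p.display().to_string())
            .unwrap_or_default();
        CommandOutput { stdout: path, success: true }
    }
}
```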

## Proposed LLM-Enhanced Architecture

### Enhanced System Flow

```
┌─────────────────────────────────────────────────────────────┐
│                     User Input Layer                        │
│    Natural Language: "create a directory and navigate to it"│
└─────────────────────────┬───────────────────────────────────┘
┌─────────────────────────▼───────────────────────────────────┐
│                   LLM Integration Layer                     │
│  ┌─────────────────────────────────────────────────────────┐ │
│  │              LLM API Client                             │ │
│  │  • OpenAI GPT-4/3.5  • Anthropic Claude               │ │
│  │  • Local Models      • Custom Endpoints                │ │
│  └─────────────────────────────────────────────────────────┘ │
│  ┌─────────────────────────────────────────────────────────┐ │
│  │            Prompt Engineering                           │ │
│  │  • OS Context       • Command Templates                │ │
│  │  • Safety Rules     • Output Formatting                │ │
│  └─────────────────────────────────────────────────────────┘ │
└─────────────────────────┬───────────────────────────────────┘
┌─────────────────────────▼───────────────────────────────────┐
│                Command Processing Layer                     │
│  ┌─────────────────────────────────────────────────────────┐ │
│  │            Command Validator                            │ │
│  │  • Safety Checks    • Syntax Validation                │ │
│  │  • Permission Req.  • Confirmation Prompts             │ │
│  └─────────────────────────────────────────────────────────┘ │
│  ┌─────────────────────────────────────────────────────────┐ │
│  │           OS-Specific Translator                        │ │
│  │  • Windows Commands • Linux Commands                   │ │
│  │  • macOS Commands   • Command Chaining                 │ │
│  └─────────────────────────────────────────────────────────┘ │
└─────────────────────────┬───────────────────────────────────┘
┌─────────────────────────▼───────────────────────────────────┐
│                 Execution Layer (Existing)                  │
│              Current RustShell Implementation               │
└─────────────────────────────────────────────────────────────┘
```

### Key Architectural Decisions

#### 1. Language Choice: Rust (Recommended to Keep)
**Pros:**
- Excellent cross-platform support
- Memory safety and performance
- Strong ecosystem for CLI tools
- Zero-cost abstractions
- Mature async and HTTP libraries (tokio runtime, reqwest client)

**Cons:**
- Longer compilation times
- Steeper learning curve

**Alternative: Python/TypeScript**
- Python: Easier AI/ML integration, faster prototyping
- TypeScript: Good for web APIs, familiar syntax
- Both have mature HTTP libraries

**Recommendation: Stay with Rust** for performance and system-level operations.

#### 2. LLM Integration Strategy

**Primary Approach: API-First**
```rust
struct LLMClient {
    provider: LLMProvider,
    api_key: String,
    endpoint: String,
    model: String,
}

enum LLMProvider {
    OpenAI,
    Anthropic,
    Local(String), // Local model endpoint
    Custom(String),
}
```
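One way to wire the enum to concrete endpoints is a default-resolution helper: `Local` and `Custom` carry their own URL, while the hosted defaults below (the providers' public chat endpoints at the time of writing) would in practice come from configuration. The enum is repeated so the snippet stands alone:

```rust
enum LLMProvider {
    OpenAI,
    Anthropic,
    Local(String), // Local model endpoint
    Custom(String),
}

// Sketch: resolve a provider to a chat endpoint. The hosted URLs are
// assumptions that belong in configuration, not hard-coded constants.
fn default_endpoint(provider: &LLMProvider) -> String {
    match provider {
        LLMProvider::OpenAI => "https://api.openai.com/v1/chat/completions".to_string(),
        LLMProvider::Anthropic => "https://api.anthropic.com/v1/messages".to_string(),
        LLMProvider::Local(url) | LLMProvider::Custom(url) => url.clone(),
    }
}
```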

**Fallback Approach: Local Models**
- Integration with local LLM runners (Ollama, llama.cpp)
- Offline capability
- Privacy-focused option
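The fallback order can be made explicit: prefer the configured hosted provider when an API key is present, drop to a local runner (e.g. an Ollama endpoint) when one is reachable, and fail over to traditional command parsing when neither is available. A sketch of that decision, with illustrative names:

```rust
#[derive(Debug, PartialEq)]
enum Backend {
    Hosted,
    Local,
    Traditional,
}

// Hypothetical selection logic: hosted API if a non-empty key is
// configured, local model if a runner is reachable, otherwise fall
// back to RustShell's traditional command parsing.
fn select_backend(api_key: Option<&str>, local_available: bool) -> Backend {
    match (api_key, local_available) {
        (Some(key), _) if !key.is_empty() => Backend::Hosted,
        (_, true) => Backend::Local,
        _ => Backend::Traditional,
    }
}
```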

#### 3. Prompt Engineering Framework

```rust
struct PromptTemplate {
    system_prompt: String,
    os_context: String,
    safety_rules: Vec<String>,
    output_format: String,
}

impl PromptTemplate {
    fn build_prompt(&self, user_input: &str, os: &OsType) -> String {
        format!(
            "{}\n\nOS: {}\nUser Request: {}\n\nRules:\n{}\n\nOutput Format:\n{}",
            self.system_prompt,
            os, // assumes OsType implements Display
            user_input,
            self.safety_rules.join("\n"),
            self.output_format
        )
    }
}
```
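A self-contained version of the template in use; `OsType` is assumed here to be a simple enum with a `Display` impl, and the unused `os_context` field is omitted for brevity:

```rust
use std::fmt;

enum OsType {
    Windows,
    Linux,
    MacOs,
}

impl fmt::Display for OsType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            OsType::Windows => write!(f, "Windows"),
            OsType::Linux => write!(f, "Linux"),
            OsType::MacOs => write!(f, "macOS"),
        }
    }
}

struct PromptTemplate {
    system_prompt: String,
    safety_rules: Vec<String>,
    output_format: String,
}

impl PromptTemplate {
    fn build_prompt(&self, user_input: &str, os: &OsType) -> String {
        format!(
            "{}\n\nOS: {}\nUser Request: {}\n\nRules:\n{}\n\nOutput Format:\n{}",
            self.system_prompt,
            os,
            user_input,
            self.safety_rules.join("\n"),
            self.output_format
        )
    }
}
```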

#### 4. Safety and Validation Layer

```rust
use regex::Regex; // assumes the `regex` crate

struct CommandValidator {
    dangerous_patterns: Vec<Regex>,
    require_confirmation: Vec<String>,
    blocked_commands: Vec<String>,
}

enum ValidationResult {
    Allowed,
    NeedsConfirmation(String),
    Blocked(String),
}

impl CommandValidator {
    fn validate(&self, command: &str) -> ValidationResult {
        // Block dangerous patterns and known-harmful commands outright
        if self.dangerous_patterns.iter().any(|p| p.is_match(command))
            || self.blocked_commands.iter().any(|b| command.starts_with(b.as_str()))
        {
            return ValidationResult::Blocked(command.to_string());
        }
        // Require confirmation for destructive operations
        if self.require_confirmation.iter().any(|c| command.contains(c.as_str())) {
            return ValidationResult::NeedsConfirmation(command.to_string());
        }
        ValidationResult::Allowed
    }
}
```

## Data Flow

1. **User Input** → Natural language or traditional command
2. **Intent Detection** → Determine if LLM processing is needed
3. **LLM Processing** → Convert natural language to OS-specific commands
4. **Validation** → Safety checks and confirmation prompts
5. **Translation** → OS-specific command generation
6. **Execution** → Use existing RustShell command system
7. **Feedback** → Display results and learn from interactions
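Step 2 (intent detection) can start as a cheap heuristic: if the input begins with a known built-in command, skip the LLM entirely and dispatch it directly; anything else is treated as natural language. A sketch, with an illustrative subset of built-in names:

```rust
// Illustrative subset of RustShell's built-in command names.
const BUILTINS: &[&str] = &["mkdir", "cd", "ls", "cp", "mv", "rm", "cat", "pwd"];

// Step 2 of the data flow: decide whether input needs LLM processing.
// Input starting with a known builtin is dispatched directly; everything
// else goes to the LLM integration layer.
fn needs_llm(input: &str) -> bool {
    let first = input.split_whitespace().next().unwrap_or("");
    !BUILTINS.contains(&first)
}
```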

## Configuration System

```toml
[llm]
provider = "openai"  # openai, anthropic, local, custom
model = "gpt-3.5-turbo"
api_key_env = "OPENAI_API_KEY"
endpoint = "https://api.openai.com/v1/chat/completions"
timeout = 30  # seconds
max_tokens = 150

[safety]
require_confirmation = ["rm", "delete", "format", "sudo"]
dangerous_patterns = ["rm -rf /", "format c:", "sudo rm"]
enable_dry_run = true

[features]
enable_llm = true
fallback_to_traditional = true
cache_responses = true
offline_mode = false
```
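The `api_key_env` field means the key itself never lives in the config file, only the name of the environment variable that holds it. Resolution at startup might look like the following sketch, which treats an unset or blank variable as "no key" so callers can fall back to local/offline mode:

```rust
use std::env;

// Normalize an env-var lookup: unset and blank both read as "no key".
fn normalize_api_key(raw: Option<String>) -> Option<String> {
    raw.filter(|v| !v.trim().is_empty())
}

// Resolve the key named by `api_key_env` in the [llm] section,
// e.g. resolve_api_key("OPENAI_API_KEY").
fn resolve_api_key(api_key_env: &str) -> Option<String> {
    normalize_api_key(env::var(api_key_env).ok())
}
```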

## File Structure Enhancement

```
rustshell/
├── src/
│   ├── main.rs                 # Entry point
│   ├── cli/
│   │   ├── mod.rs
│   │   ├── interactive.rs      # Interactive mode
│   │   └── command_mode.rs     # Direct command execution
│   ├── llm/
│   │   ├── mod.rs
│   │   ├── client.rs           # LLM API clients
│   │   ├── prompts.rs          # Prompt templates
│   │   └── providers/          # Different LLM providers
│   │       ├── openai.rs
│   │       ├── anthropic.rs
│   │       └── local.rs
│   ├── commands/
│   │   ├── mod.rs              # Existing command system
│   │   ├── validator.rs        # Command validation
│   │   └── translator.rs       # OS-specific translation
│   ├── config/
│   │   ├── mod.rs
│   │   └── settings.rs         # Configuration management
│   └── utils/
│       ├── mod.rs
│       ├── os_detection.rs
│       └── cache.rs            # Response caching
├── config/
│   └── rustshell.toml          # Default configuration
├── prompts/                    # Prompt templates
│   ├── system.txt
│   ├── windows.txt
│   ├── linux.txt
│   └── macos.txt
└── tests/
    ├── integration/
    └── unit/
```

## Global Installation Strategy

1. **Cargo Installation**
   ```bash
   cargo install --path .
   ```

2. **Binary Distribution**
   - GitHub Releases with pre-built binaries
   - Cross-compilation for Windows, Linux, macOS

3. **Package Managers**
   - Homebrew (macOS/Linux)
   - Chocolatey (Windows)
   - Snap (Linux)

4. **IDE Integration**
   - VS Code extension
   - Terminal integration scripts

## Performance Considerations

- **Caching**: Cache LLM responses for repeated queries
- **Local Fallback**: Traditional command parsing for speed
- **Async Operations**: Non-blocking LLM API calls
- **Batching**: Group multiple operations when possible
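The caching point above can start as an in-memory map, consulted before any API call, keyed on normalized input plus target OS so trivially different phrasings hit the same entry. A minimal sketch:

```rust
use std::collections::HashMap;

// Minimal in-memory response cache: "List Files" and "list files"
// on the same OS resolve to the same entry.
struct ResponseCache {
    entries: HashMap<String, String>,
}

impl ResponseCache {
    fn new() -> Self {
        ResponseCache { entries: HashMap::new() }
    }

    fn key(input: &str, os: &str) -> String {
        format!("{}::{}", os, input.trim().to_lowercase())
    }

    fn get(&self, input: &str, os: &str) -> Option<&String> {
        self.entries.get(&Self::key(input, os))
    }

    fn put(&mut self, input: &str, os: &str, command: String) {
        self.entries.insert(Self::key(input, os), command);
    }
}
```

A persistent variant (the `utils/cache.rs` module in the proposed layout) could serialize this map to disk and add a TTL so stale translations expire.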

## Security Considerations

- **API Key Management**: Secure storage and environment variables
- **Command Validation**: Multi-layer safety checks
- **Sandboxing**: Optional dry-run mode
- **User Confirmation**: For destructive operations

## Scalability

- **Plugin System**: Extensible command modules
- **Custom Providers**: Support for enterprise LLM endpoints
- **Configuration Profiles**: Different setups for different contexts
- **Telemetry**: Optional usage analytics for improvement