# SCUD CLI (Rust)
Fast, simple task master for AI-driven development - Rust implementation.
## Overview
This is a high-performance Rust rewrite of the SCUD task management system. It replaces the external `task-master` CLI with a fast, single-binary solution that offers:
- ⚡ **50x faster** startup time (~10ms vs ~500ms)
- 🎯 **42x token reduction** (~500 tokens vs ~21k tokens per operation)
- 📦 **Simple distribution** - single binary, no dependencies
- 🔧 **Direct LLM integration** - no MCP overhead
## Architecture
```
scud (Rust Binary)
├── Core Commands (No AI - Instant)
│   ├── init         # Initialize .taskmaster/
│   ├── tags         # List tags
│   ├── use-tag      # Switch active tag
│   ├── list         # List tasks with filters
│   ├── show         # Show task details
│   ├── set-status   # Update task status
│   ├── next         # Find next available task (--claim for dynamic-wave)
│   ├── stats        # Show statistics
│   └── doctor       # [EXPERIMENTAL] Diagnose stuck states
│
├── AI Commands (Direct Anthropic API)
│   ├── parse-prd            # Parse PRD markdown into tasks
│   ├── analyze-complexity   # Analyze task complexity
│   ├── expand               # Break down complex tasks
│   └── research             # AI-powered research
│
└── Storage (SCG)
    └── .scud/tasks/tasks.scg
```
## Building
### Development
```bash
cargo build
```
### Release (Optimized)
```bash
cargo build --release
```
## Usage
### Core Commands
```bash
# Initialize SCUD
scud init
# List tags
scud tags
# Switch to a tag
scud use-tag auth
# List tasks
scud list
scud list --status pending
# Show task details
scud show 3
# Update task status
scud set-status 3 in-progress
# Find next available task
scud next
# Show statistics
scud stats
```
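Under the hood, `next` amounts to a readiness scan over the dependency graph: a task is available once it is pending and every one of its dependencies is done. The sketch below shows that selection with simplified field types (the full `Task` model appears under Data Models); scud's actual tie-breaking logic is an assumption here.
```rust
use std::collections::HashSet;

// Sketch: a task is "available" when it is pending and all of its
// dependencies are done. Simplified from the Task model in this README
// (priority collapsed to a numeric rank); real scud logic may differ.
#[derive(PartialEq)]
enum TaskStatus {
    Pending,
    InProgress,
    Done,
}

struct Task {
    id: String,
    status: TaskStatus,
    priority: u8, // lower value = higher priority in this sketch
    dependencies: Vec<String>,
}

fn next_available(tasks: &[Task]) -> Option<&Task> {
    // Collect IDs of finished tasks for O(1) dependency lookups.
    let done: HashSet<&str> = tasks
        .iter()
        .filter(|t| t.status == TaskStatus::Done)
        .map(|t| t.id.as_str())
        .collect();

    tasks
        .iter()
        .filter(|t| t.status == TaskStatus::Pending)
        .filter(|t| t.dependencies.iter().all(|d| done.contains(d.as_str())))
        .min_by_key(|t| t.priority) // prefer the highest-priority ready task
}
```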
### [EXPERIMENTAL] Dynamic-Wave Mode
Dynamic-wave mode allows agents to auto-claim tasks and maintain workflow health:
```bash
# Find and auto-claim the next available task
scud next --claim --name agent-1
# Release all tasks claimed by an agent
scud next --release --name agent-1
```
**IMPORTANT:** When using `--claim`, agents MUST run `scud set-status <id> done` when finishing a task. This ensures dependent tasks become unblocked.
### [EXPERIMENTAL] Doctor Command
Diagnose stuck workflow states:
```bash
# Check for issues in all tags
scud doctor
# Check specific tag with custom stale threshold
scud doctor --tag auth --stale-hours 12
# Auto-fix recoverable issues (stale locks, orphan tasks)
scud doctor --fix
```
The doctor command detects:
- Stale locks (tasks locked >24h by default; the check is sketched after this list)
- Tasks blocked by cancelled/missing dependencies
- Orphan in-progress tasks (not locked, stale)
- Missing active tag
- Corrupt storage files
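The stale-lock check itself reduces to comparing a lock's age against the threshold (24h by default, overridable with `--stale-hours`). A standard-library-only sketch; the lock representation here is an assumption, not scud's actual storage format.
```rust
use std::time::{Duration, SystemTime};

// Sketch: a claimed task records which agent locked it and when. Doctor
// flags the lock as stale once its age exceeds the threshold. The struct
// shape is an assumption for illustration.
struct TaskLock {
    agent: String,
    locked_at: SystemTime,
}

fn is_stale(lock: &TaskLock, stale_hours: u64) -> bool {
    let threshold = Duration::from_secs(stale_hours * 3600);
    match lock.locked_at.elapsed() {
        Ok(age) => age > threshold,
        // A lock timestamp in the future (clock skew) is not treated as stale.
        Err(_) => false,
    }
}
```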
### AI Commands
**Requires:** API key environment variable (see [Provider Configuration](#provider-configuration))
```bash
# Parse PRD into tasks
scud parse-prd docs/features/auth.md --tag auth
# Analyze complexity
scud analyze-complexity # All tasks
scud analyze-complexity --task 5 # Specific task
# Expand complex tasks
scud expand 7 # Specific task
scud expand --all # All tasks >13 complexity
# Research a topic
scud research "OAuth 2.0 best practices"
```
## Performance Comparison
| Operation | `task-master` | `scud` | Improvement |
|---|---|---|---|
| Startup | ~500ms | ~10ms | **50x faster** |
| List tasks | ~100ms | ~5ms | **20x faster** |
| Parse PRD | ~3-5s | ~2-3s | ~40% faster |
| Token overhead | ~21k tokens | ~500 tokens | **42x reduction** |
## Provider Configuration
SCUD supports multiple LLM providers: **xAI (Grok)**, **Anthropic (Claude)**, **OpenAI (GPT)**, and **OpenRouter**.
### Quick Start
```bash
# Initialize with xAI (Grok) - recommended for fast code generation
scud init --provider xai
export XAI_API_KEY=your-key
# Or initialize with Anthropic (Claude)
scud init --provider anthropic
export ANTHROPIC_API_KEY=your-key
# Interactive mode - prompt for provider
scud init
```
### Configuration File
The configuration is stored in `.taskmaster/config.toml`:
```toml
[llm]
provider = "xai"
model = "grok-code-fast-1"
max_tokens = 4096
```
For complete provider documentation, see [PROVIDERS.md](./PROVIDERS.md).
### Supported Providers
| Provider | API Key Variable | Default Model |
|---|---|---|
| xAI | `XAI_API_KEY` | `grok-code-fast-1` |
| Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514` |
| OpenAI | `OPENAI_API_KEY` | `gpt-4-turbo` |
| OpenRouter | `OPENROUTER_API_KEY` | `anthropic/claude-sonnet-4` |
## Data Models
### Task
```rust
struct Task {
    id: String,
    title: String,
    description: String,
    status: TaskStatus,        // pending, in-progress, done, etc.
    complexity: u32,           // Fibonacci scale: 1, 2, 3, 5, 8, 13, 21
    priority: Priority,        // high, medium, low
    dependencies: Vec<String>, // Task IDs this depends on
    details: Option<String>,   // Technical details
    test_strategy: Option<String>,
    complexity_analysis: Option<String>,
    created_at: Option<String>,
    updated_at: Option<String>,
}
```
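The `TaskStatus` and `Priority` enums referenced above are not reproduced in this document. A plausible sketch, assuming `serde` derives with case renaming to match the CLI's `in-progress` spelling; the exact variant set is an assumption (the comment above says "etc."):
```rust
use serde::{Deserialize, Serialize};

// Sketch of the enums referenced by Task. Variant sets and serde
// attributes are assumptions, not the actual scud definitions.
#[derive(Serialize, Deserialize, Clone, Copy, PartialEq, Debug)]
#[serde(rename_all = "kebab-case")]
enum TaskStatus {
    Pending,
    InProgress, // serialized as "in-progress", matching the CLI
    Done,
    Cancelled,
}

#[derive(Serialize, Deserialize, Clone, Copy, PartialEq, Debug)]
#[serde(rename_all = "lowercase")]
enum Priority {
    High,
    Medium,
    Low,
}
```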
### Phase
```rust
struct Phase {
    name: String,
    tasks: Vec<Task>,
}
```
### Config
```toml
[llm]
provider = "xai"
model = "grok-code-fast-1"
max_tokens = 4096
```
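Loading this file maps naturally onto `serde` plus the `toml` crate. A minimal sketch; the struct names and error handling are assumptions, not scud's actual implementation:
```rust
use serde::Deserialize;

// Sketch: deserialize .taskmaster/config.toml into typed config.
// Field names mirror the TOML shown above; defaults and validation in
// the real CLI are assumptions.
#[derive(Deserialize)]
struct Config {
    llm: LlmConfig,
}

#[derive(Deserialize)]
struct LlmConfig {
    provider: String,
    model: String,
    max_tokens: u32,
}

fn load_config() -> Result<Config, Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string(".taskmaster/config.toml")?;
    Ok(toml::from_str(&raw)?)
}
```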
## LLM Integration
### Direct Anthropic API
- No MCP server overhead
- Simple HTTP requests
- Minimal token usage
- Fast response times
### Prompt Templates
Located in `src/llm/prompts.rs`:
- `parse_prd()` - Converts markdown to structured tasks
- `analyze_complexity()` - Scores task difficulty
- `expand_task()` - Breaks down complex tasks
- `research_topic()` - AI research assistant (sketched below)
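Each template is an ordinary function that renders a prompt string. A hypothetical sketch of `research_topic()`; the real template text in `src/llm/prompts.rs` will differ:
```rust
// Sketch only: illustrative wording, not the actual template from
// src/llm/prompts.rs.
pub fn research_topic(topic: &str) -> String {
    format!(
        "You are a research assistant for a software team.\n\
         Research the following topic and summarize actionable findings:\n\
         {topic}\n\
         Keep the answer concise and note trade-offs where relevant."
    )
}
```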
## Integration with SCUD
The Rust CLI integrates seamlessly with the existing SCUD system:
1. `bin/scud.js` detects and delegates to Rust binary
2. Falls back to debug build if release not available
3. Auto-builds if binary not found
4. All agents and slash commands work unchanged
## Development
### Project Structure
```
scud-cli/
├── Cargo.toml
└── src/
    ├── main.rs                  # CLI entry point
    ├── commands/
    │   ├── mod.rs
    │   ├── init.rs              # Core commands
    │   ├── tags.rs
    │   ├── ...
    │   └── ai/                  # AI commands
    │       ├── parse_prd.rs
    │       ├── analyze_complexity.rs
    │       ├── expand.rs
    │       └── research.rs
    ├── models/
    │   ├── task.rs
    │   └── phase.rs
    ├── storage/
    │   └── mod.rs               # JSON I/O
    └── llm/
        ├── client.rs            # Anthropic API
        └── prompts.rs           # Prompt templates
```
### Adding New Commands
1. Add the command to the `Commands` enum in `main.rs` (sketched after this list)
2. Create handler in `src/commands/`
3. Add to `rustCommands` array in `bin/scud.js`
4. Update help text
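Assuming the CLI uses clap's derive API (an assumption; check `main.rs`), step 1 looks roughly like this for a hypothetical `export` subcommand:
```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand)]
enum Commands {
    // ...existing scud commands elided...

    /// Export tasks for the active tag (hypothetical example command)
    Export {
        /// Output path for the exported file
        #[arg(long)]
        output: String,
    },
}
```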
### Adding New LLM Prompts
1. Add prompt function to `src/llm/prompts.rs`
2. Create command handler in `src/commands/ai/`
3. Use `LLMClient::complete()` or `complete_json()` (see the sketch below)
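A skeletal handler tying the steps together. Only `complete()` and `complete_json()` are named by this document; the constructor and `anyhow`-based error handling below are assumptions:
```rust
use crate::llm::{client::LLMClient, prompts};

// Sketch: wire a prompt template (step 1) into a command handler
// (steps 2-3). new_from_config() and the call signatures are assumed.
pub fn run_research(topic: &str) -> anyhow::Result<()> {
    let client = LLMClient::new_from_config()?; // assumed constructor
    let prompt = prompts::research_topic(topic);
    let answer = client.complete(&prompt)?;
    println!("{answer}");
    Ok(())
}
```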
## Testing
```bash
# Build and test
cargo build
cargo test
# Test specific command
cargo run -- init
cargo run -- tags
cargo run -- --help
# Test AI commands (requires API key)
export ANTHROPIC_API_KEY=sk-...
cargo run -- parse-prd test.md --tag test
```
## Distribution
### As Standalone Binary
```bash
cargo build --release
# Binary: target/release/scud
# Copy to /usr/local/bin or similar
```
### As Part of npm Package
The SCUD npm package includes the Rust binary:
- Pre-built binaries for major platforms
- Auto-built on first use if needed
- Seamless integration via `bin/scud.js`
## Future Enhancements
- [ ] Cross-compilation for multiple platforms
- [ ] Pre-built binaries in npm package
- [ ] Configuration file support
- [ ] Additional LLM providers (OpenAI, etc.)
- [ ] Offline mode for core commands
- [ ] Task export/import
- [ ] Custom prompt templates
- [ ] Parallel task execution analysis
- [ ] Integration tests with real API calls
## License
MIT
## Contributing
See main SCUD repository for contribution guidelines.