run
a.k.a. runtool: the bridge between human and AI tooling
Define functions in a Runfile. Your AI agent discovers and executes them via the built-in MCP server. You run them from the terminal too with instant startup and tab completion. Shell, Python, Node—whatever fits the task.
This project is a specialized fork of sporto/run-rust, optimized for Model Context Protocol (MCP) integration, enabling automatic tool discovery for AI agents.
Quick Start
Here's a simple example:
```sh
# Runfile

# @desc Deploy to an environment
# @arg env Target environment (staging|prod)
# @arg version Version to deploy (defaults to "latest")
deploy(env: str, version: str = "latest") {
  echo "Deploying $version to $env"
}
```
Run it from your terminal:
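Both invocations below assume the `deploy` function above; the second passes an explicit version:

```sh
run deploy staging
run deploy prod v1.2.3
```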
Or point your AI agent at the Runfile, and it can discover and execute this tool automatically.
Table of Contents
- AI Agent Integration (MCP)
- Installation
- The Runfile Syntax
- Configuration
- Real-World Examples
- License
AI Agent Integration (MCP)
run has built-in support for the Model Context Protocol (MCP), allowing AI agents like Claude to discover and execute your Runfile functions as tools.
MCP Server Mode
Configure in your AI client. For Claude Desktop, add this configuration to:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
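A minimal configuration sketch in the standard `mcpServers` format; the `--mcp` flag is an assumption (check `run --help` for the exact MCP flag in your installed version), and the `command` may need to be an absolute path:

```json
{
  "mcpServers": {
    "run": {
      "command": "run",
      "args": ["--mcp"]
    }
  }
}
```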
run automatically discovers your Runfile by searching up from the current working directory. MCP output files (.run-output) are written next to your Runfile. To override the output location explicitly, set RUN_MCP_OUTPUT_DIR in the environment.
Now your AI agent can discover and call your tools automatically.
For debugging purposes, you can start run as an MCP server yourself:
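The exact flag is version-dependent; `--mcp` here is an assumption, so consult `run --help`:

```sh
run --mcp
```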
Describing Tools for AI
Use @desc to describe what a function does, and declare parameters in the function signature:
```sh
# @desc Search the codebase for a regex pattern
search(pattern: str) {
  #!/usr/bin/env python3
  import sys, re, pathlib

  regex = re.compile(sys.argv[1])
  for path in pathlib.Path(".").rglob("*.py"):
      for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
          if regex.search(line):
              print(f"{path}:{lineno}: {line.strip()}")
}

# @desc Deploy the application to a specific environment
deploy(env: str) {
  echo "Deploying to $env"
}
```
Functions with @desc are automatically exposed as MCP tools with typed parameters.
Inspect Tool Schema
View the generated JSON schema for all MCP-enabled functions:
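The `--inspect` flag (referenced again later in this README) dumps the schema to stdout:

```sh
run --inspect
```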
This outputs the tool definitions that AI agents will see—useful for debugging and validation.
Built-in MCP Tools
In addition to your Runfile functions, run provides built-in tools for managing execution context:
- `set_cwd(path: string)`: Change the current working directory. Useful for multi-project workflows where the agent needs to switch between different project contexts.
- `get_cwd()`: Get the current working directory. Helps agents understand their current execution context.
These tools allow AI agents to navigate your filesystem and work with multiple projects in a single MCP session.
Installation
Recommended
macOS/Linux (Homebrew)
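A typical Homebrew invocation would look like the following; the tap name `nihilok/tap` is an assumption, so check the project's releases page for the canonical tap:

```sh
brew tap nihilok/tap
brew install runtool
```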
Windows (Scoop)
```sh
scoop bucket add nihilok https://github.com/nihilok/scoop-bucket
scoop install runtool
```
Alternative: Cargo
Works on all platforms:
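Assuming the crate is published under the `runtool` name (matching the Scoop package above):

```sh
cargo install runtool
```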
Tab Completions
Auto-detect your shell and install completions:
Supports bash, zsh, fish, and powershell.
The Runfile Syntax
run parses your Runfile to find function definitions. The syntax is designed to be familiar to anyone who has used bash or sh.
Basic Functions
For simple, one-line commands, you don't need braces.
```sh
# Usage: run dev
dev() npm run dev
```
Block Syntax
Use {} for multi-statement functions. This avoids the need for trailing backslashes.
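A sketch, with illustrative commands:

```sh
build() {
  echo "Building release binary..."
  cargo build --release
}
```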
Function Signatures
Declare parameters directly in the function signature for cleaner, self-documenting code:
```sh
# @desc Deploy to an environment
deploy(env: str, version: str = "latest") {
  echo "Deploying $version to $env"
}

# @desc Resize an image
resize(file: str, width: int = 800, keep_aspect: bool = true) {
  echo "Resizing $file to width $width (keep aspect: $keep_aspect)"
}
```
Type annotations (str, int, bool) are used for:
- MCP JSON schema generation (AI agents see typed parameters)
- Self-documenting functions
- Optional runtime validation
Default values make parameters optional:
```sh
# version defaults to "latest" if not provided
deploy(env: str, version: str = "latest") {
  echo "Deploying $version to $env"
}
```
Legacy positional syntax still works for simple cases:
```sh
# Access arguments as $1, $2, $@
greet() {
  echo "Hello, $1!"
}
```
Combining with @arg for descriptions:
```sh
# @desc Deploy the application
# @arg env Target environment (staging|prod)
# @arg version Version tag to deploy
deploy(env: str, version: str = "latest") {
  echo "Deploying $version to $env"
}
```
When both signature params and @arg exist, the signature defines names/types/defaults, and @arg provides descriptions for MCP.
Syntax Guide
run allows you to embed Python, Node.js, or other interpreted languages directly inside shell functions using shebang detection. When run encounters a shebang line (e.g., #!/usr/bin/env python) as the first line of a function body, it automatically:
- Detects the interpreter from the shebang path
- Extracts the function body (excluding the shebang line itself)
- Executes the content using that interpreter
Key behaviors:
- Argument passing: Shell arguments (like `$1`, `$2`) are passed to the script as command-line arguments. In Python, access them via `sys.argv[1]`, `sys.argv[2]`, etc. In Node.js, use `process.argv[2]`, `process.argv[3]`, etc.
- Function signature integration: If your function declares typed parameters (e.g., `analyze(file: str)`), these are still accessed positionally in the embedded script.
- Shebang precedence: If both a shebang and a `@shell` attribute are present, `@shell` takes precedence.
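As a standalone sketch of the positional mapping (plain Python, runnable outside run; the `first_arg` helper is hypothetical):

```python
import sys

def first_arg(argv):
    # argv[0] is the script path; user-supplied arguments start at argv[1],
    # which is where run places the first shell argument ($1)
    return argv[1] if len(argv) > 1 else None

if __name__ == "__main__":
    print(first_arg(sys.argv))
```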
Example:
```sh
# @desc Analyze a JSON file
analyze(file: str) {
  #!/usr/bin/env python3
  # $file becomes sys.argv[1]
  import sys, json

  with open(sys.argv[1]) as f:
    data = json.load(f)
  print(f"Found {len(data)} records")
}
```
When you run `run analyze data.json`, run detects the Python shebang and executes the function body as a Python script, with `data.json` passed as `sys.argv[1]`.
This polyglot approach lets you mix shell orchestration with specialized scripting languages in a single Runfile without needing separate script files.
Attributes & Polyglot Scripts
You can use comment attributes (# @key value) or shebang lines to modify function behaviour and select interpreters.
Platform Guards (@os)
Restrict functions to specific operating systems. This allows you to define platform-specific implementations of the same task.
```sh
# @os unix
clean() rm -rf target

# @os windows
clean() Remove-Item -Recurse -Force target
```
When you run `run clean`, only the variant matching your current OS will execute.
You can also use inline platform guards within a single function to define OS-aware implementations:
# @desc Run the build process (OS-aware)
run evaluates these guards at runtime and executes only the block matching your current platform.
Interpreter Selection
There are two ways to specify a custom interpreter:
1. Shebang detection
The first line of your function body can be a shebang, just like standalone scripts:
```sh
analyze() {
  #!/usr/bin/env python
  import sys, json
  with open(sys.argv[1]) as f:
    data = json.load(f)
  print(f"Found {len(data)} records")
}

server() {
  #!/usr/bin/env node
  const port = process.argv[2] || 3000;
  require('http').createServer((req, res) => {
    res.end('Hello!');
  }).listen(port);
}
```
2. Attribute syntax (@shell):
Use comment attributes for explicit control or when you need to override a shebang:
```sh
# @shell python3
count() {
  import sys, json
  data = json.load(open(sys.argv[1]))
  print(len(data))
}
```
Precedence: If both are present, @shell takes precedence over the shebang.
Supported interpreters: python, python3, node, ruby, pwsh, bash, sh
Nested Namespaces
Organise related commands using colons. run parses name:subname as a single identifier.
Execute them with spaces:
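A sketch of both sides (function names and bodies are illustrative):

```sh
# Runfile definitions
docker:build() docker build -t myapp .
docker:up() docker compose up -d

# Terminal invocation (the colon maps to a space):
#   run docker build
#   run docker up
```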
Function Composition
Functions can call other functions defined in the same Runfile, enabling task composition and code reuse without duplication.
```sh
# Base tasks
build() cargo build --release
test() cargo test

# Deploy depends on successful build
deploy() {
  build || exit 1
  echo "Deploying..."
}

ci() {
  test || exit 1
  deploy || exit 1
}
```
When you run `run ci`, all compatible functions are automatically injected into the execution scope, so you can call them directly without spawning new processes.
Key features:
- Functions can call sibling functions defined in the same file
- Exit codes are properly propagated (use `|| exit 1` to stop on failure)
- Works across different shells when interpreters are compatible
- Top-level variables are also available to all functions
Configuration
Environment Variables
run supports the following environment variables to customize its behavior:
RUN_SHELL
Override the default shell interpreter for executing functions.
Default behavior:
- Unix/Linux/macOS: `sh`
- Windows: `pwsh` (if available), otherwise `powershell`
Usage:
```sh
# One-time override
RUN_SHELL=zsh run deploy staging

# Set for your session
export RUN_SHELL=zsh
```
Note: Commands in your Runfile must be compatible with the configured shell, unless an explicit interpreter (e.g., # @shell python) is defined for that function.
RUN_MCP_OUTPUT_DIR
Specify where to write output files when running in MCP mode.
Default behavior:
- Outputs are written to a `.run-output/` directory next to your Runfile
- If the Runfile location cannot be determined, falls back to the system temp directory
Usage:
```sh
# Set output directory
export RUN_MCP_OUTPUT_DIR=/tmp/run-output
```

Or configure it in your Claude Desktop MCP settings (standard `mcpServers` format; the binary path may need to be absolute):

```json
{
  "mcpServers": {
    "run": {
      "command": "run",
      "env": {
        "RUN_MCP_OUTPUT_DIR": "/tmp/run-output"
      }
    }
  }
}
```
Why this matters: When running in MCP mode, outputs longer than 32 lines are automatically truncated in the response to the AI agent, with the full output saved to a file. This prevents overwhelming the AI with massive outputs while still preserving access to the complete data.
Global Runfile
Create a ~/.runfile in your home directory to define global commands available anywhere.
```sh
# ~/.runfile

# Usage: run update
update() {
  # illustrative body
  sudo apt update && sudo apt upgrade -y
}

# Usage: run clone <repo>
clone() git clone "https://github.com/$1.git"
```
If a local ./Runfile exists, run looks there first. If the command isn't found locally, it falls back to ~/.runfile.
MCP Output Files
When running in MCP mode, outputs longer than 32 lines are truncated and the full text is saved under a .run-output directory next to your Runfile. The directory is created automatically. To override the location (e.g., for sandboxing), set RUN_MCP_OUTPUT_DIR before starting the server:
```sh
export RUN_MCP_OUTPUT_DIR=/tmp/run-output
```
Real-World Examples
Here are practical examples demonstrating how to use run for real-world workflows.
Docker Management
Organize Docker commands with nested namespaces:
```sh
# @desc Build the Docker image
docker:build() docker build -t myapp .

# @desc Start all services in detached mode
docker:up() docker compose up -d

# @desc View logs for a service
# @arg service The service name (defaults to "app")
docker:logs(service: str = "app") {
  docker compose logs -f "$service"
}

# @desc Open a shell in a container
# @arg service The service name (defaults to "app")
docker:shell(service: str = "app") {
  docker compose exec "$service" /bin/sh
}
```
Run them like this:
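For example (service names are illustrative):

```sh
run docker build
run docker up
run docker logs app
run docker shell app
```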
CI/CD Pipeline with Function Composition
Build reusable tasks that compose together:
```sh
# @desc Run linting checks
lint() cargo clippy -- -D warnings

# @desc Run all tests
test() cargo test

# @desc Build release binary
build() cargo build --release

# @desc Run the complete CI pipeline
ci() {
  lint || exit 1
  test || exit 1
  build || exit 1
}

# @desc Deploy to an environment
# @arg env Target environment (staging|prod)
deploy(env: str) {
  # Ensure build succeeds first
  build || exit 1
  echo "Deploying to $env"
}
```
Polyglot Scripts - Python Data Analysis
Embed Python directly in your Runfile for data processing:
```sh
# @desc Analyze a JSON file and print statistics
# @arg file Path to the JSON file
analyze(file: str) {
  #!/usr/bin/env python3
  import sys, json

  with open(sys.argv[1]) as f:
      data = json.load(f)

  print(f"Records: {len(data)}")
  if isinstance(data, list) and len(data) > 0:
      first = data[0]
      print(f"Keys: {', '.join(first.keys())}" if isinstance(first, dict) else "Items are not objects")
}

# @desc Convert CSV to JSON
# @arg input Input CSV file
# @arg output Output JSON file
csv2json(input: str, output: str) {
  #!/usr/bin/env python3
  import sys, csv, json

  with open(sys.argv[1]) as f:
      rows = list(csv.DictReader(f))

  with open(sys.argv[2], "w") as f:
      json.dump(rows, f, indent=2)

  print(f"Wrote {len(rows)} rows to {sys.argv[2]}")
}
```
Secure Database Access for AI Agents
Important security feature: AI agents only see function descriptions and parameters via MCP—never the implementation details. This means secrets in your function bodies remain hidden.
```sh
# @desc Run a read-only query against the database
# @arg query The SQL query to execute
# @arg env The target environment (local|staging|prod)
query(query: str, env: str = "local") {
  #!/usr/bin/env python3
  # AI agents can't see this implementation!
  # Connection strings can be hardcoded or loaded from environment
  import os
  import sys

  import psycopg2

  db_url = os.environ.get(f"DB_URL_{sys.argv[2].upper()}")
  # Or: db_url = "postgresql://user:pass@localhost/mydb"
  if not db_url:
      sys.exit(f"No connection string configured for {sys.argv[2]}")

  conn = psycopg2.connect(db_url)
  cur = conn.cursor()
  cur.execute(sys.argv[1])
  for row in cur.fetchall():
      print(row)
  conn.close()
}

# @desc Show column information for a table
# @arg table The table name
schema(table: str) {
  #!/usr/bin/env python3
  import os
  import sys

  import psycopg2

  conn = psycopg2.connect(os.environ["DATABASE_URL"])
  cur = conn.cursor()
  # Query information_schema for column info
  cur.execute(
      "SELECT column_name, data_type FROM information_schema.columns"
      " WHERE table_name = %s",
      (sys.argv[1],),
  )
  for name, dtype in cur.fetchall():
      print(f"{name}: {dtype}")
  conn.close()
}
```
Security Benefit: When you run `run --inspect`, the AI sees:
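Something along these lines (the exact schema layout may differ, and the function name `query` is assumed); only the metadata appears, never the body:

```json
{
  "name": "query",
  "description": "Run a read-only query against the database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "The SQL query to execute" },
      "env": { "type": "string", "description": "The target environment (local|staging|prod)" }
    }
  }
}
```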
The function body—including any credentials, API keys, or implementation logic—is never exposed to the AI agent. This allows you to safely hardcode secrets or reference them from your environment without risk of leakage.
Platform-Specific Commands
Define different implementations for different operating systems:
```sh
# @desc Clean build artifacts
# @os windows
clean() {
  Remove-Item -Recurse -Force dist
}

# @desc Clean build artifacts
# @os unix
clean() {
  rm -rf dist
}
```

You can also use inline platform guards within a single function, for example to open the project in the default editor per OS.
Node.js Web Server
Embed a Node.js server directly in your Runfile:
```sh
# @desc Start a development web server
# @arg port Port number to listen on
serve(port: int = 3000) {
  #!/usr/bin/env node
  const http = require('http');
  const port = process.argv[2] || 3000;

  http.createServer((req, res) => {
    res.end('Hello from run!');
  }).listen(port, () => console.log(`Listening on ${port}`));
}
```
License
MIT