🚀 Quick Start
VS Code Extension (Syntax Highlighting + LSP)
For full IDE support including syntax highlighting, IntelliSense, and error diagnostics:
# Install the extension from the marketplace (extension ID below is an assumption)
code --install-extension nervosys.aethershell
# Build the Language Server (for IntelliSense)
cargo build --release
# The extension will auto-detect the LSP server
Features: Syntax highlighting, autocompletion, hover docs, go-to-definition, error diagnostics.
Installation
From Source (recommended for latest features):
git clone https://github.com/nervosys/AetherShell && cd AetherShell && cargo install --path .
From Cargo:
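A plausible invocation, assuming the crate is published as `aether` (the binary is `ae`; the crate name is an assumption):

```bash
cargo install aether
```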
From Homebrew (macOS/Linux):
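The tap and formula names below are assumptions:

```bash
brew install nervosys/tap/aether
```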
Usage:
# Launch interactive TUI (recommended)
ae tui
# Or classic REPL (bare invocation assumed)
ae
# Run a script file (subcommand assumed)
ae run script.ae
# Evaluate inline expression (flag assumed)
ae -e "1 + 2"
# JSON output mode (flag assumed)
ae --json -e "ls()"
# Type INFERENCE – types are automatically inferred
name = "AetherShell" # inferred as String
count = 42 # inferred as Int
scores = [95, 87, 92, 88] # inferred as Array<Int>
# Type ANNOTATIONS – explicit when needed for clarity
config: Record = {host: "localhost", port: 8080}
handler: fn(Int) -> Int = fn(x) => x * 2
# Typed pipelines – structured data, not text streams
[1, 2, 3, 4, 5] | map(fn(x) => x * 2) | sum() # => 30
# Pattern matching
match type_of(count) {
"Int" => "Integer: ${count}",
"String" => "Text",
_ => "Unknown"
}
# AI query with vision
ai("What's in this image?", {images: ["photo.jpg"]})
# Autonomous agent with tool access
agent("Find security issues in src/", ["ls", "cat", "grep"])
# Agent-to-Agent (A2A) protocol for multi-agent collaboration
a2a_send("analyzer", {task: "review code", files: ls("./src")})
# NANDA consensus for distributed agent decisions
nanda_propose("deployment", {version: "2.0", approve_threshold: 0.7})
📝 Note: set `OPENAI_API_KEY` for AI features: `export OPENAI_API_KEY="sk-..."`
✨ Features
🤖 AI-Native Shell
- Multi-modal AI: Images, audio, video analysis
- Autonomous agents with tool access
- MCP Protocol: 130+ tools across 27 categories
- A2A Protocol: Agent-to-agent communication
- A2UI Protocol: Agent-to-user interface
- NANDA: Distributed consensus for agent networks
- Multi-provider: OpenAI, Ollama, local models
- RAG & Knowledge Graphs built-in
🔗 Typed Pipelines
- Hindley-Milner type inference
- Structured data: Records, Arrays, Tables
- First-class functions and lambdas
- Pattern matching expressions
🧠 ML & Enterprise
- Neural networks creation & evolution
- Reinforcement learning (Q-Learning, DQN)
- Enterprise RBAC with role-based access
- Audit logging & compliance reporting
- SSO integration (SAML, OAuth, OIDC)
- Cluster management for distributed AI
🎨 Developer Experience
- Interactive TUI with tabs & themes
- Language Server Protocol (LSP)
- VS Code extension with IntelliSense
- Plugin system with TOML manifests (sketch after this list)
- WASM support for browser REPL
- Package management & imports
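The manifest format is not documented in this README; a hypothetical example of what a plugin manifest could look like (every field name below is an assumption):

```toml
# plugin.toml – hypothetical AetherShell plugin manifest
[plugin]
name = "hello"
version = "0.1.0"
description = "Example greeting plugin"

[exports]
functions = ["greet"]
```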
🎯 What Makes AetherShell Unique?
AetherShell is the only shell combining these capabilities:
| Feature | AetherShell | Traditional Shells | Nushell |
|---|---|---|---|
| AI Agents with Tools | ✅ | ❌ | ❌ |
| Multi-modal AI (Vision/Audio/Video) | ✅ | ❌ | ❌ |
| MCP Protocol (130+ tools) | ✅ | ❌ | ❌ |
| A2A (Agent-to-Agent) | ✅ | ❌ | ❌ |
| A2UI (Agent-to-User Interface) | ✅ | ❌ | ❌ |
| NANDA Consensus Protocol | ✅ | ❌ | ❌ |
| Neural Networks Built-in | ✅ | ❌ | ❌ |
| Hindley-Milner Types | ✅ | ❌ | ❌ |
| Typed Pipelines | ✅ | ❌ | ✅ |
| Enterprise (RBAC, Audit, SSO) | ✅ | ❌ | ❌ |
| Language Server Protocol (LSP) | ✅ | ❌ | ✅ |
Bash vs AetherShell: A Quick Comparison
Find large Rust files and show their sizes:
# Bash: text parsing, fragile, hard to read (one possible incantation)
find ./src -name "*.rs" -size +1k -printf "%s %f\n" | sort -rn | head -5
# AetherShell: Typed, composable, readable
ls("./src")
| where(fn(f) => f.ext == ".rs" && f.size > 1024)
| map(fn(f) => {name: f.name, size: f.size})
| sort_by(fn(f) => f.size, "desc")
| take(5)
Analyze JSON API response:
# Bash: requires jq and string manipulation (illustrative)
curl -s https://api.github.com/repos/nervosys/AetherShell | jq -r '"Stars: \(.stargazers_count), Forks: \(.forks_count)"'
# AetherShell: Native JSON, type-safe field access
repo = http_get("https://api.github.com/repos/nervosys/AetherShell")
print("Stars: ${repo.stargazers_count}, Forks: ${repo.forks_count}")
Ask AI to explain an error:
# Bash: Not possible without external scripts
# AetherShell: Built-in AI with context
error_log = cat("error.log") | where(fn(l) => contains(l, "FATAL")) | first()
ai("Explain this error and suggest a fix:", {context: error_log})
📖 Language Features at a Glance
AetherShell is a typed functional language with 215+ built-in functions across these categories:
Types & Literals
- `Int` → `42`, `-7`
- `Float` → `3.14`, `2.0`
- `String` → `"hello"`, `"${var}"`
- `Bool` → `true`, `false`
- `Null` → `null`
- `Array` → `[1, 2, 3]`
- `Record` → `{a: 1, b: 2}`
- `Lambda` → `fn(x) => x * 2`
Operators
- Arithmetic: `+`, `-`, `*`, `/`, `%`, `**`
- Comparison: `==`, `!=`, `<`, `<=`, `>`, `>=`
- Logical: `&&`, `||`, `!`
- Pipeline: `|`
- Member: `.`
Control Flow
- `match` expressions
- Pattern guards
- Wildcard `_` patterns
- Lambda functions
- Pipeline chaining
Builtin Categories (215+ functions)
| Category | Examples | Count |
|---|---|---|
| Core | help, print, echo, type_of, len | 15 |
| Functional | map, where, reduce, take, any, all, first | 12 |
| String | split, join, trim, upper, lower, replace | 10 |
| Array | flatten, reverse, slice, range, zip, push | 8 |
| Math | abs, min, max, sqrt, pow, floor, ceil | 8 |
| Aggregate | sum, avg, product, unique, values, keys | 6 |
| File System | ls, cat, pwd, cd, exists, mkdir, rm | 11 |
| Config | config, config_get, config_set, themes | 7 |
| Debugging | debug, dbg, trace, assert, type_assert, inspect | 7 |
| Async | async, await, futures support | 3 |
| Errors | try/catch, throw, is_error | 4 |
| AI | ai, agent, swarm, rag_query, finetune_start | 20+ |
| Enterprise | role_create, audit_log, sso_init, compliance_check | 22 |
| Distributed | cluster_create, job_submit, aggregate_results | 15 |
| Platform | platform, is_windows, is_linux, features | 12 |
| MCP Protocol | mcp_tools, mcp_call, 130+ tool integrations | 130+ |
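As a taste of how these categories compose, here is a small sketch combining String, Functional, and Core builtins (behavior inferred from the examples later in this README, not verified against the implementation):

```
# Top-3 longest words in a sentence
words = split("the quick brown fox jumps", " ")
longest = words
  | map(fn(w) => {word: w, chars: len(w)})
  | sort_by(fn(w) => w.chars, "desc")
  | take(3)
print(longest)
```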
📚 Examples
Core Syntax – Type Inference & Annotations
AetherShell uses Hindley-Milner type inference with optional explicit annotations:
# TYPE INFERENCE – compiler infers types automatically
age = 42 # inferred: Int
pi = 3.14159 # inferred: Float
name = "AetherShell" # inferred: String
active = true # inferred: Bool
# TYPE ANNOTATIONS – explicit when clarity is needed
config: Record = {host: "localhost", port: 8080, debug: true}
scores: Array<Int> = [95, 87, 92, 88]
matrix: Array<Array<Int>> = [[1, 2], [3, 4]]
# String interpolation (type inferred)
greeting = "Hello, ${name}! You're ${age} years old."
# Records – structured data with field access
user = {name: "Alice", age: 30, admin: true} # inferred: Record
print(user.name) # => "Alice"
# Lambdas – annotate for complex signatures
double = fn(x) => x * 2 # inferred: fn(Int) -> Int
add: fn(Int, Int) -> Int = fn(a, b) => a + b # explicit return type
greet = fn(s) => "Hi, ${s}!" # inferred: fn(String) -> String
print(double(21)) # => 42
print(add(10, 20)) # => 30
Strong Types – Runtime Safety
# Type inspection (no annotation needed)
type_of(42) # => "Int"
type_of(3.14) # => "Float"
type_of("hello") # => "String"
type_of([1, 2, 3]) # => "Array"
type_of({a: 1}) # => "Record"
type_of(fn(x) => x) # => "Lambda"
# Type assertions for validation
type_assert(42, "Int") # Passes
type_assert("hello", "String") # Passes
type_assert([1,2,3], "Array") # Passes
# Pattern matching on types (inference works here too)
process = fn(val) => match type_of(val) {
"Int" => val * 2,
"String" => upper(val),
"Array" => len(val),
_ => null
}
process(21) # => 42
process("hello") # => "HELLO"
process([1,2,3,4,5]) # => 5
Functional Pipelines – Structured Data, Not Text
Unlike traditional shells that pipe text, AetherShell pipes typed values:
# Transform: map applies a function to each element
numbers = [1, 2, 3, 4, 5] # inferred: Array<Int>
squared = numbers | map(fn(x) => x * x) # => [1, 4, 9, 16, 25]
# Filter: where keeps elements matching a predicate
evens = numbers | where(fn(x) => x % 2 == 0) # => [2, 4]
# Aggregate: reduce combines elements into one value
total = numbers | reduce(fn(acc, x) => acc + x, 0) # => 15
# Chain operations – type flows through the pipeline
result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
| where(fn(x) => x % 2 == 0) # [2, 4, 6, 8, 10]
| map(fn(x) => x ** 2) # [4, 16, 36, 64, 100]
| reduce(fn(a, b) => a + b, 0) # 220
# Array manipulation (types inferred)
reversed = [1, 2, 3, 4, 5] | reverse() # => [5, 4, 3, 2, 1]
flat = [[1, 2], [3, 4]] | flatten() # => [1, 2, 3, 4]
sliced = [1, 2, 3, 4, 5] | slice(1, 4) # => [2, 3, 4]
# Predicate checks
has_large = [1, 2, 3, 4, 5] | any(fn(x) => x > 4) # => true
all_even = [2, 4, 6, 8] | all(fn(x) => x % 2 == 0) # => true
Pattern Matching – Exhaustive Type-Safe Control Flow
# Match on values with range patterns (inference works)
grade = fn(score) => match score {
100 => "Perfect!",
90..99 => "A",
80..89 => "B",
70..79 => "C",
_ => "Keep trying"
}
grade(95) # => "A"
grade(100) # => "Perfect!"
# Match with guards for complex conditions
classify = fn(n) => match n {
x if x < 0 => "negative",
0 => "zero",
x if x > 0 => "positive"
}
classify(-5) # => "negative"
classify(42) # => "positive"
# Type-based dispatch – annotate for polymorphic functions
describe: fn(Any) -> String = fn(val) => match type_of(val) {
"Int" => "Integer: ${val}",
"Float" => "Decimal: ${val}",
"String" => "Text (${len(val)} chars): ${val}",
"Array" => "Collection of ${len(val)} items",
"Record" => "Object with keys: ${keys(val)}",
_ => "Unknown type"
}
describe(42) # => "Integer: 42"
describe("hello") # => "Text (5 chars): hello"
describe([1, 2, 3]) # => "Collection of 3 items"
describe({x: 1, y: 2}) # => "Object with keys: [x, y]"
String Operations – Built-in Text Processing
# Manipulation
split("a,b,c", ",") # => ["a", "b", "c"]
join(["a", "b", "c"], "-") # => "a-b-c"
trim(" hello ") # => "hello"
upper("hello") # => "HELLO"
lower("WORLD") # => "world"
replace("foo bar foo", "foo", "baz") # => "baz bar baz"
# Queries
contains("hello world", "world") # => true
starts_with("hello", "hel") # => true
ends_with("hello", "lo") # => true
len("hello") # => 5
Math Operations – Scientific Computing
# Basic math
abs(-42) # => 42
min(5, 3) # => 3
max(5, 3) # => 5
pow(2, 10) # => 1024
sqrt(16) # => 4.0
# Rounding
floor(3.7) # => 3
ceil(3.2) # => 4
round(3.5) # => 4
# Statistical (on arrays)
sum([1, 2, 3, 4, 5]) # => 15
avg([10, 20, 30]) # => 20
product([2, 3, 4]) # => 24
unique([1, 2, 2, 3, 3, 3]) # => [1, 2, 3]
Error Handling – Try/Catch/Throw
# Safe operations with try/catch
result = try {
risky_operation()
} catch {
"default_value"
}
# Catch with error binding
result = try {
parse_config("invalid.toml")
} catch e {
print("Error: ${e}")
default_config()
}
# Throw custom errors
validate = fn(x) => {
if x < 0 {
throw "Value must be non-negative"
}
x
}
# Check for errors
is_error(try { throw "oops" } catch e { e }) # => true
Async/Await – Concurrent Operations
# Define async functions (type inferred from return)
fetch_data = async fn(url) => http_get(url)
# Await results
data = await fetch_data("https://api.example.com/data")
# Parallel operations with futures (types flow through)
urls = ["https://api1.com", "https://api2.com", "https://api3.com"]
futures = urls | map(fn(u) => async fn() => http_get(u))
results = futures | map(fn(f) => await f())
# When explicit types help readability:
timeout: Duration = 30s
response: Result<Record, Error> = await http_get_with_timeout(url, timeout)
Debugging – Development Tools
# Debug prints value with type and returns it (for chaining)
[1, 2, 3] | debug() | map(fn(x) => x * 2)
# Prints: [Debug] Array<Int>: [1, 2, 3]
# Returns: [2, 4, 6]
# Trace with labels for pipeline debugging
[1, 2, 3, 4, 5]
| trace("input")
| where(fn(x) => x > 2) | trace("filtered")
| map(fn(x) => x * 2) | trace("doubled")
# Prints each stage with labels
# Assertions for testing
assert(1 + 1 == 2)
assert(len("hello") == 5, "Length should be 5")
# Type assertions (explicit check)
type_assert(42, "Int")
type_assert([1, 2, 3], "Array")
# Deep inspection (inference works)
info = inspect([1, 2, 3])
# => {type: "Array", len: 3, values: [1, 2, 3]}
File System – Structured Output
# List files with structured data (inference handles types)
files = ls("./src")
| where(fn(f) => f.size > 1000)
| map(fn(f) => {name: f.name, kb: f.size / 1024})
| take(5)
# Read and process files
line_count = cat("config.toml") | split("\n") | len()
# Check existence (type inferred)
file_exists = exists("./src/main.rs") # => true
# Get current directory
cwd = pwd() # => "/home/user/project"
Configuration System – XDG-Compliant
# Get full configuration as Record
config()
# Get specific values with dot notation (types inferred)
theme = config_get("colors.theme") # => "tokyo-night"
max_history = config_get("history.max_size") # => 10000
# Set values persistently
config_set("colors.theme", "dracula")
config_set("editor.tab_size", 4)
# Get all paths (XDG Base Directory compliant)
paths = config_path()
print(paths.config_file) # ~/.config/aether/config.toml
print(paths.data_dir) # ~/.local/share/aether
# List built-in themes (38 total); show the first 8
available_themes = themes() | take(8)
# => ["catppuccin", "dracula", "github-dark", "gruvbox",
# "monokai", "nord", "one-dark", "tokyo-night"]
AI Agents with Tool Access
# Simple agent with goal and tools
agent("Find all files larger than 1MB in src/", ["ls", "du"])
# Agent with full configuration
agent({
goal: "Identify and fix code style violations",
tools: ["ls", "cat", "grep", "git"],
max_steps: 20,
dry_run: true, # Preview actions before executing
model: "openai:gpt-4o"
})
# Multi-agent swarm for complex tasks
swarm({
coordinator: "Orchestrate a full security audit",
agents: [
{role: "scanner", goal: "Find vulnerable dependencies"},
{role: "reviewer", goal: "Check for SQL injection"},
{role: "reporter", goal: "Generate findings report"}
],
tools: ["ls", "cat", "grep", "cargo"]
})
Hierarchical Agent Swarms – Complex Task Decomposition
# Coordinator agent spawns specialized subagents for a large codebase refactor
refactor_swarm = swarm_create({
name: "codebase_modernizer",
coordinator: {
goal: "Modernize legacy codebase to async/await patterns",
strategy: "divide_and_conquer",
model: "openai:gpt-4o"
}
})
# Coordinator analyzes scope and spawns specialized subagents dynamically
swarm_spawn(refactor_swarm, {
role: "analyzer",
goal: "Map all sync functions that could be async",
tools: ["grep", "cat", "ast_parse"],
on_complete: fn(results) => {
# Spawn worker agents for each module discovered
results.modules | map(fn(mod) => {
swarm_spawn(refactor_swarm, {
role: "refactorer",
goal: "Convert ${mod.name} to async/await",
tools: ["cat", "edit", "git"],
context: mod,
parent: "analyzer"
})
})
}
})
# Monitor swarm progress in real-time
swarm_status(refactor_swarm)
# => {active: 5, completed: 12, pending: 3, failed: 0}
# Stream progress updates
swarm_watch(refactor_swarm, fn(event) => {
match event.type {
"spawn" => print("๐ ${event.agent.role}: ${event.agent.goal}"),
"progress" => print("โณ ${event.agent.role}: ${event.progress}%"),
"complete" => print("โ
${event.agent.role} finished: ${event.summary}"),
"error" => print("โ ${event.agent.role} failed: ${event.error}")
}
})
# Wait for full completion with timeout
final_result = swarm_await(refactor_swarm, {timeout: 30m})
print("Refactored ${final_result.files_changed} files across ${final_result.modules} modules")
Long-Running Task Orchestration
# Complex ML pipeline with checkpoint/resume
ml_pipeline = swarm_create({
name: "training_pipeline",
persistence: "checkpoint", # Auto-save progress
resume_on_failure: true
})
# Phase 1: Data preparation (spawns subagents per data source)
swarm_spawn(ml_pipeline, {
role: "data_coordinator",
goal: "Prepare training data from multiple sources",
on_start: fn() => {
data_sources = ["s3://bucket/raw", "postgres://db/features", "local://cache"]
data_sources | map(fn(src) => {
swarm_spawn(ml_pipeline, {
role: "data_worker",
goal: "Extract and clean data from ${src}",
tools: ["s3", "sql", "pandas"],
context: {source: src},
checkpoint_interval: 5m # Save progress every 5 minutes
})
})
}
})
# Phase 2: Model training (auto-spawns after Phase 1)
swarm_spawn(ml_pipeline, {
role: "trainer",
goal: "Train model on prepared data",
depends_on: ["data_coordinator"], # Wait for all data workers
tools: ["pytorch", "tensorboard", "gpu"],
resources: {gpu: 4, memory: "64GB"},
max_runtime: 4h
})
# Phase 3: Evaluation & deployment
swarm_spawn(ml_pipeline, {
role: "evaluator",
goal: "Validate model and deploy if metrics pass",
depends_on: ["trainer"],
tools: ["pytest", "mlflow", "k8s"],
on_complete: fn(metrics) => {
if metrics.accuracy > 0.95 {
swarm_spawn(ml_pipeline, {
role: "deployer",
goal: "Deploy model to production",
tools: ["docker", "k8s", "istio"]
})
}
}
})
# Start the pipeline
swarm_start(ml_pipeline)
# Check detailed status
status = swarm_status(ml_pipeline, {detailed: true})
status.agents | map(fn(a) => "${a.role}: ${a.status} (${a.progress}%)")
Multi-Modal AI
# Analyze images
ai("What's in this screenshot?", {images: ["screenshot.png"]})
# Process audio
ai("Transcribe and summarize this meeting", {audio: ["meeting.mp3"]})
# Video analysis
ai("Extract the key steps from this tutorial", {video: ["tutorial.mp4"]})
Typed Functional Pipelines
# File system operations return typed Records, not text
large_rust_files = ls("./src")
| where(fn(f) => f.ext == ".rs" && f.size > 1000)
| map(fn(f) => {name: f.name, kb: f.size / 1024})
| sort_by(fn(f) => f.kb, "desc")
| take(5)
# Statistical operations (types flow through)
scores = [85, 92, 78, 95, 88]
total = scores | sum() # => 438
average = scores | avg() # => 87.6
unique_ids = [1, 2, 1, 3, 2] | unique() # => [1, 2, 3]
record_values = {a: 1, b: 2} | values() # => [1, 2]
Agentic Protocols – MCP, A2A, A2UI, NANDA
AetherShell provides first-class support for modern agent communication protocols:
MCP (Model Context Protocol)
# 130+ tools across 27 categories
all_tools = mcp_tools()
print(len(all_tools)) # => 130
# Filter by category
mcp_tools({category: "development"}) # git, cargo, npm, etc.
mcp_tools({category: "machinelearning"}) # ollama, tensorboard, etc.
mcp_tools({category: "kubernetes"}) # kubectl, helm, k9s, etc.
# Execute tools via MCP protocol
mcp_call("git", {command: "status"})
mcp_call("cargo", {command: "build --release"})
# Register custom MCP server
mcp_register("my-tools", {
endpoint: "http://localhost:8080",
capabilities: ["code-review", "test-gen"]
})
A2A (Agent-to-Agent Protocol)
# Direct agent communication
a2a_send("analyzer", {
task: "Review this code for security issues",
payload: code_snippet,
priority: "high"
})
# Receive responses from other agents
response = a2a_receive("analyzer", {timeout: 30s})
# Broadcast to all agents in swarm
a2a_broadcast({
type: "status_update",
status: "phase_1_complete",
results: analysis_results
})
# Subscribe to agent channels
a2a_subscribe("security-alerts", fn(msg) => {
if msg.severity == "critical" {
alert_user(msg.details)
}
})
A2UI (Agent-to-User Interface)
# Rich notifications
a2ui_notify("Analysis Complete", {
body: "Found 3 security issues",
type: "warning",
actions: ["View", "Dismiss"]
})
# Interactive prompts
choice = a2ui_prompt("Select deployment target:", {
options: ["staging", "production", "canary"],
default: "staging"
})
# Render structured data in TUI
a2ui_render({
type: "table",
title: "Scan Results",
columns: ["File", "Issue", "Severity"],
rows: scan_results
})
# Progress indicators
task_id = a2ui_progress("Processing files...", {total: 100})
a2ui_progress_update(task_id, 50) # 50% complete
NANDA (Networked Agent Negotiation & Decision Architecture)
# Multi-agent consensus for critical decisions
proposal = nanda_propose({
action: "deploy_to_production",
rationale: "All tests pass, security scan clean",
required_votes: 3
})
# Agents vote on proposals
nanda_vote(proposal.id, {
decision: "approve",
confidence: 0.95,
conditions: ["monitoring_enabled"]
})
# Wait for consensus
result = nanda_consensus(proposal.id, {timeout: 60s})
if result.approved {
deploy()
}
# Dispute resolution
nanda_escalate(proposal.id, {
reason: "Conflicting requirements detected",
evidence: conflict_log
})
Neural Networks & Evolution
# Create a neural network with layer sizes
brain = nn_create("agent", [4, 8, 2]) # 4 inputs, 8 hidden, 2 outputs
# Evolutionary optimization
pop = population(100, {genome_size: 10})
evolved = evolve(pop, fitness_fn, {generations: 50})
# Reinforcement learning
learner = rl_agent("learner", 16, 4)
🌍 Real-World Use Cases
DevOps: Log Analysis Pipeline
# Parse and analyze application logs
error_logs = cat("/var/log/app.log")
| split("\n")
| where(fn(line) => contains(line, "ERROR"))
| map(fn(line) => {
timestamp: line | slice(0, 19),
level: "ERROR",
message: line | slice(27, len(line))
})
| take(10)
# Count errors by hour
error_counts = error_logs
| map(fn(e) => e.timestamp | slice(0, 13)) # Extract hour
| unique()
| map(fn(hour) => {
hour: hour,
count: error_logs | where(fn(e) => starts_with(e.timestamp, hour)) | len()
})
Data Science: CSV Processing
# Process CSV data with type-safe pipelines
raw_data = cat("sales.csv") | split("\n")
headers = raw_data | first()
rows = raw_data | slice(1, len(raw_data)) | map(fn(row) => split(row, ","))
# Parse into Records (type annotation for complex transformations)
sales: Array<Record> = rows | map(fn(r) => {
date: r[0],
product: r[1],
quantity: r[2] + 0, # Convert to Int
price: r[3] + 0.0 # Convert to Float
})
# Statistical analysis
total_revenue = sales | map(fn(s) => s.quantity * s.price) | sum()
avg_order = sales | map(fn(s) => s.quantity) | avg()
top_products = sales
| map(fn(s) => s.product)
| unique()
| take(5)
print("Total Revenue: $${total_revenue}")
print("Average Order Size: ${avg_order} units")
Security: Automated Code Audit
# AI-powered security scan
agent({
goal: "Find potential security vulnerabilities in the codebase",
tools: ["grep", "cat", "ls"],
max_steps: 20
})
# Search for hardcoded secrets
ls("./src")
| where(fn(f) => ends_with(f.name, ".rs"))
| map(fn(f) => {file: f.name, content: cat(f.path)})
| where(fn(f) => contains(f.content, "password") || contains(f.content, "secret"))
System Administration: Disk Usage Report
# Generate disk usage report (types flow through pipeline)
ls("/home")
| map(fn(d) => {
name: d.name,
size_mb: d.size / (1024 * 1024),
files: len(ls(d.path))
})
| where(fn(d) => d.size_mb > 100)
| map(fn(d) => "${d.name}: ${round(d.size_mb)}MB (${d.files} files)")
AI-Assisted Development
# Generate documentation from code
code = cat("src/main.rs")
docs = ai("Generate comprehensive API documentation for this Rust code:", {
context: code,
model: "openai:gpt-4o"
})
# Intelligent code review
agent({
goal: "Review the recent git changes and suggest improvements for:
- Performance optimizations
- Security issues
- Code style consistency",
tools: ["git", "cat", "grep"],
max_steps: 15
})
# Generate tests with context awareness
module_code = cat("src/utils.rs")
test_code = ai("Write comprehensive unit tests covering edge cases:", {
context: module_code,
model: "openai:gpt-4o"
})
# Explain complex code
complex_fn = cat("src/parser.rs") | slice(100, 200)
ai("Explain what this function does in simple terms:", {context: complex_fn})
Infrastructure: Kubernetes Monitoring
# List pods with structured output (types flow through)
pods = mcp_call("kubectl", {command: "get pods -o json"})
| map(fn(pod) => {
name: pod.metadata.name,
status: pod.status.phase,
restarts: pod.status.containerStatuses[0].restartCount
})
| where(fn(p) => p.restarts > 0)
Enterprise: RBAC & Compliance
# Create roles with typed permissions
permissions = [
{resource: "reports", actions: ["read", "export"]},
{resource: "dashboards", actions: ["read", "create"]}
]
role_create("data_analyst", permissions, "Data analytics team role")
# Grant roles to users
role_grant("user_123", "data_analyst")
# Check permissions before operations
can_export = check_permission("user_123", "reports", "export")
if can_export {
audit_log("report_export", {user: "user_123", report: "Q4_sales"})
# ... export the report
}
# Compliance reporting
compliance_result = compliance_check("GDPR")
report = compliance_report("SOC2", "json")
AI: Fine-tuning & RAG
# Start model fine-tuning
finetune_start("gpt-4o-mini", "training_data.jsonl", {
epochs: 3,
learning_rate: 0.0001
})
# Check fine-tuning status
finetune_status("ft-abc123")
# Build knowledge base with RAG
rag_index("project_docs", ["README.md", "docs/*.md"])
rag_query("project_docs", "How do I configure themes?")
# Knowledge graphs
kg_add("AetherShell", "language", "Rust")
kg_relate("AetherShell", "has_feature", "typed_pipelines")
kg_query({entity: "AetherShell"})
Distributed Computing
# Create a compute cluster
cluster_create("ml_cluster", {max_nodes: 10})
# Add worker nodes
cluster_add_node("ml_cluster", "worker_1", {capabilities: ["gpu", "ml"]})
cluster_add_node("ml_cluster", "worker_2", {capabilities: ["gpu", "ml"]})
# Submit distributed jobs
job_submit("ml_cluster", "train_model", {
model: "neural_net",
data: "training_set.csv"
})
# Monitor cluster status
cluster_status("ml_cluster")
Interactive Data Exploration
# Explore JSON APIs (types inferred from response)
response = http_get("https://api.github.com/repos/nervosys/AetherShell")
print("Stars: ${response.stargazers_count}")
print("Forks: ${response.forks_count}")
print("Language: ${response.language}")
# Transform API data
topics_upper = response.topics | map(fn(t) => upper(t)) | join(", ")
# Build a dashboard from multiple endpoints
repos = http_get("https://api.github.com/users/nervosys/repos")
stats = repos | map(fn(r) => {
name: r.name,
stars: r.stargazers_count,
lang: r.language
}) | where(fn(r) => r.stars > 0) | sort_by(fn(r) => r.stars, "desc")
Git Workflow Automation
# Get recent commits with structured data
commits = mcp_call("git", {command: "log --oneline -10"})
| split("\n")
| map(fn(line) => {
hash: line | slice(0, 7),
message: line | slice(8, len(line))
})
# Find commits by pattern
bug_fixes = commits | where(fn(c) => contains(lower(c.message), "fix"))
# List unique commits touching a file (the first git-blame column is the commit hash, not the author)
blame = mcp_call("git", {command: "blame src/main.rs"})
hashes = blame | split("\n")
| map(fn(l) => l | split(" ") | first())
| unique()
Build & Deploy Automation
# Platform-aware build script
build_cmd = match platform() {
"windows" => "cargo build --release --target x86_64-pc-windows-msvc",
"linux" => "cargo build --release --target x86_64-unknown-linux-gnu",
"macos" => "cargo build --release --target aarch64-apple-darwin",
_ => "cargo build --release"
}
# Conditional feature flags
enabled_features = features()
build_with_ai = if has_feature("ai") { "--features ai" } else { "" }
# Multi-platform detection
if is_windows() {
print("Building for Windows...")
} else if is_linux() {
print("Building for Linux...")
} else if is_macos() {
print("Building for macOS...")
}
Monitoring & Alerting
# Check system health and alert (annotate function for clarity)
health_check: fn() -> Record = fn() => {
cpu = mcp_call("system", {metric: "cpu_usage"})
memory = mcp_call("system", {metric: "memory_usage"})
disk = mcp_call("system", {metric: "disk_usage"})
{cpu: cpu, memory: memory, disk: disk}
}
status = health_check()
# Alert on high resource usage
if status.cpu > 90 || status.memory > 85 {
alert = ai("Generate an alert message for high resource usage:", {
context: "CPU: ${status.cpu}%, Memory: ${status.memory}%"
})
print(alert)
}
🎮 TUI Interface
Launch the interactive terminal UI with `ae tui`:
| Tab | Description |
|---|---|
| Chat | Conversational AI with multi-modal support |
| Agents | Deploy and monitor AI agent swarms |
| Media | View images, play audio, preview videos |
| Help | Quick reference and documentation |
Keyboard shortcuts:
- `Tab` – Switch tabs
- `Enter` – Send message / activate
- `Space` – Select media files
- `q` – Quit
- `Ctrl+C` – Force quit
📖 Full guide: docs/TUI_GUIDE.md
📦 Installation
From Source (Recommended)
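The repository URL is taken from the API examples above; the install step assumes a standard Cargo workspace:

```bash
git clone https://github.com/nervosys/AetherShell
cd AetherShell
cargo install --path .
```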
From Crates.io
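Assuming the crate name `aether` (unverified; the installed binary is `ae`):

```bash
cargo install aether
```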
VS Code Extension
Get syntax highlighting, snippets, and integrated REPL:
# Directory and scripts below are assumptions for a typical VS Code extension layout
cd editors/vscode && npm install && npm run compile
# Press F5 to test
⚙️ Configuration
Environment Variables
# AI Provider (required for AI features)
export OPENAI_API_KEY="sk-..."
# Agent permissions (variable name is an assumption)
export AETHER_AGENT_TOOLS="ls,cat,grep"
# Alternative AI backend (variable name is an assumption)
export AETHER_AI_PROVIDER="ollama" # or "openai"
Secure Key Storage
# Store keys in the OS credential manager (recommended; subcommand names are assumptions)
ae keys set openai
# View stored keys (masked)
ae keys list
📚 Documentation
| Document | Description |
|---|---|
| Quick Reference | One-page syntax guide |
| TUI Guide | Terminal UI documentation |
| Type System | Type inference details |
| MCP Servers | Tool integration guide |
| AI Backends | Provider configuration |
| Security | Security assessment |
Example Scripts
| File | Topic |
|---|---|
| 00_hello.ae | Basic syntax |
| 01_pipelines.ae | Typed pipelines |
| 02_tables.ae | Table operations |
| 04_match.ae | Pattern matching |
| 05_ai.ae | AI integration |
| 06_agent.ae | Agent deployment |
| 09_tui_multimodal.ae | Multi-modal TUI |
Coverage Test Scripts
| File | Topic |
|---|---|
| syntax_comprehensive.ae | All AST constructs |
| builtins_core.ae | Core functions |
| builtins_functional.ae | Functional ops |
| builtins_string.ae | String operations |
| builtins_array.ae | Array operations |
| builtins_math.ae | Math functions |
| builtins_aggregate.ae | Aggregate functions |
| builtins_config.ae | Config & themes |
🧪 Testing
AetherShell has comprehensive test coverage with 100% pass rate.
# Run the full test coverage suite (test target name is an assumption)
cargo test --test coverage
# Run specific test categories
cargo test type_inference
# Run all library tests
cargo test --lib
Test Coverage Summary
| Category | Tests | Status |
|---|---|---|
| Builtins Coverage | 23 | ✅ |
| Theme System | 6 | ✅ |
| Core Builtins | 2 | ✅ |
| Evaluator | 6 | ✅ |
| Pipelines | 1 | ✅ |
| Type Inference | 10 | ✅ |
| Smoke Tests | 4 | ✅ |
| .ae Syntax Tests | 8 files | ✅ |
Test files: See TESTING.md for the complete testing strategy and tests/coverage/ for syntax coverage tests.
🛣️ Roadmap
See ROADMAP.md for the complete development roadmap with detailed progress tracking.
✅ Completed (January 2026)
- 215+ builtins with comprehensive test coverage
- 38 built-in color themes with XDG-compliant config
- Neural network primitives & evolutionary algorithms
- 130+ MCP tools with protocol compliance
- Multi-modal AI (images, audio, video)
- Reinforcement learning (Q-Learning, DQN, Actor-Critic)
- Distributed agent swarms & cluster management
- Language Server Protocol (LSP) for IDE integration
- VS Code extension v0.2.0 with IntelliSense
- Enterprise features (RBAC, Audit, SSO, Compliance)
- Fine-tuning API for custom model training
- RAG & knowledge graphs
- Plugin system with TOML manifests
- WASM support (browser-based shell)
- Package management & module imports
- 100% test pass rate
🚀 Coming Soon
- Advanced video streaming
- Mobile platform support
🤝 Contributing
We welcome contributions! See our development setup:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
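A typical pass through those steps, using only standard git and Cargo commands:

```bash
git clone https://github.com/<you>/AetherShell && cd AetherShell
git checkout -b feature/my-change
cargo test                            # keep the suite green
git push origin feature/my-change     # then open a pull request
```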
📄 License
Licensed under the Apache License 2.0.