aether_shell 0.1.0


Æther Shell (æ) 🚀

The world's first multi-agent shell with typed functional pipelines and multi-modal AI. Built in Rust for safety and performance, featuring revolutionary AI protocols found nowhere else.

"What if your shell could coordinate teams of AI agents, negotiate consensus, and process images, audio, and videoโ€”all with type-safe functional pipelines?"


⚡ Quick Start

# Install

git clone https://github.com/nervosys/AetherShell && cd AetherShell

cargo install --path . --bin ae


# Launch interactive TUI

ae --tui


# Or classic REPL

ae

# Typed pipelines - not text streams!
[1,2,3,4,5] | map(fn(x) => x * 2) | reduce(fn(a,b) => a + b, 0)
# => 30

# AI query (set AETHER_AI=openai and OPENAI_API_KEY for real responses)
ai("Explain quantum computing in simple terms")

# AI with vision
ai("Describe this image", {images: ["photo.jpg"]})

# AI agent with tools
agent("Find all TODO comments in src/", ["ls", "cat", "grep"])

# Neural network creation
let brain = nn_create("agent", [4, 8, 2])

Set your API key: export OPENAI_API_KEY="sk-..." or ae ai keys store openai sk-...

📖 Full Documentation | 🎮 TUI Guide | 📚 Examples


🎯 What Makes AetherShell Unique?

AetherShell is the ONLY shell in the world that combines:

🥇 Exclusive Features

  1. AI Agents with Tool Access 🤖

    • Deploy AI agents that can use shell tools (ls, cat, grep, etc.)
    • Goal-directed task execution with step limits
    • Dry-run mode for previewing actions
    • Multi-agent swarm orchestration coming soon!
  2. AI Communication Protocols 💬

    • MCP: Model Context Protocol with 130+ tools across 27 categories
    • A2A: Agent-to-Agent messaging framework
    • NANDA: Negotiation And Dynamic Agents for consensus
    • AgenticBinary: Binary protocol for information density
    • Syntax KB: Knowledge base for protocol discovery
  3. Multi-Modal AI Native 🎨

    • Analyze images with ai("prompt", {images: [...]})
    • Process audio with ai("prompt", {audio: [...]})
    • Analyze video with ai("prompt", {video: [...]})
    • Mix multiple media types in single queries
  4. Typed Functional Pipelines 💎

    • Hindley-Milner type inference (like Haskell, OCaml)
    • Structured data: Records, Arrays, Tables, not text streams
    • First-class functions, pattern matching
    • Type safety prevents shell scripting errors
  5. 🧠 Neural Networks & Evolutionary Learning

    • In-shell neural network creation and mutation
    • Consensus networks for distributed decision making
    • Evolutionary algorithms with population optimization
    • NEAT topology evolution (see the sketch after this list)
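
A minimal sketch of these primitives in shell code: nn_create appears in the Quick Start, while nn_forward and nn_mutate are hypothetical names for the forward-pass and mutation operations.

# nn_create is documented; nn_forward/nn_mutate are hypothetical names
let brain = nn_create("agent", [4, 8, 2])
let output = nn_forward(brain, [0.1, 0.5, 0.3, 0.9])  # hypothetical forward pass
let mutant = nn_mutate(brain, {rate: 0.05})           # hypothetical mutation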

✨ Revolutionary Features

🧠 AI Integration

  • AI Queries: Direct AI queries with ai() function
  • Vision AI: Analyze images and screenshots
  • Audio Processing: Transcribe speech and audio files
  • Video Analysis: Process video content with AI-powered insights
  • Smart Agents: Deploy specialized AI agents with tool access and reasoning
  • Protocol Support: MCP, A2A, and NANDA for advanced agent coordination
  • 🆕 Model Management: OpenRouter-style API server with local model management and format conversion
  • 🆕 Neural Networks: Create and evolve neural networks directly in the shell
  • 🆕 Evolutionary Learning: Population-based optimization and coevolution

🎨 Beautiful Terminal UI (TUI)

  • Interactive Interface: Modern, responsive terminal GUI with real-time updates
  • Media Viewer: Display images, play audio, and preview videos in terminal
  • Chat Interface: Conversational AI with context-aware responses
  • Agent Dashboard: Monitor and control your AI agent swarms
  • Multimodal Sessions: Seamlessly mix text, images, audio in conversations

💪 Advanced Programming Features

  • Typed Pipelines: Pass structured records/tables, not just raw text
  • Rust-Grade Safety: Memory-safe runtime with zero-cost abstractions
  • Strong Type System: Hindley–Milner inference with algebraic data types
  • Metaprogramming: Hygienic macros and AST manipulation
  • Async/Await: Built-in structured concurrency and cancellation
  • POSIX Compatibility: Run existing tools seamlessly

🔄 Seamless Interoperability

  • Bash Compatibility: Transpile and run existing .sh scripts
  • Command Integration: Auto-wrap unknown commands in safe shells
  • Multi-Backend AI: Support for OpenAI, Anthropic, and local providers via unified API
  • OS Tools Database: Cross-platform native command integration
  • 🆕 XDG Compliance: Standards-compliant local storage and configuration management

📁 Project Structure

The repository uses a clean directory layout:

  • 📂 src/ - Core Rust source code
  • 📂 docs/ - Documentation, specs, and development notes
  • 📂 examples/ - AetherShell example scripts
  • 📂 demos/ - Showcase demos and advanced examples
  • 📂 test-scripts/ - Manual test scripts (builtins & integration)
  • 📂 tests/ - Rust unit and integration tests
  • 📂 web/ - Web terminal components
  • 📂 temp/ - Temporary files (gitignored)

See PROJECT_STRUCTURE.md for detailed organization information.


🚀 Quick Start Guide

Installation

git clone https://github.com/nervosys/AetherShell

cd AetherShell


# Install both binaries

cargo install --path . --bins


# Or install individually

cargo install --path . --bin ae       # Main Aether Shell

cargo install --path . --bin aimodel  # AI Model Management CLI (deprecated, use 'ae ai' instead)

VS Code Extension

Get professional IDE support for AetherShell:

  • Syntax highlighting for .ae files
  • Code snippets for agents, swarms, MCP servers
  • Run code directly from editor (Ctrl+Shift+R)
  • Auto-completion for built-in functions
  • Hover documentation for AI features
  • Integrated REPL and TUI

📖 Quick Reference: See QUICK_REFERENCE.md for all syntax, patterns, and snippets!

Install:

cd vscode-extension

npm install

npm run compile

# Press F5 to test, or package for distribution

See vscode-extension/README.md for details.

Launch Options

Classic REPL Mode:

ae

🎨 Interactive TUI Mode (Recommended):

ae --tui

Note: TUI requires a terminal with full ANSI support (Windows Terminal, native PowerShell, or modern terminal emulators). VS Code integrated terminal may have limited support. See TUI Guide for details.

Run Scripts:

ae script.ae          # Run Aether script

ae --bash script.sh   # Run Bash script in compatibility mode

REPL Commands

  • exit / quit: Exit the REPL
  • Ctrl+D: Exit the REPL (EOF)
  • Ctrl+C: Interrupt current operation

🎯 Experience the Magic

🤖 AI Agents

Create an AI agent with tool access:

# Agent with goal and allowed tools
# Set AETHER_AI=openai and OPENAI_API_KEY for real AI responses
agent("Find all TODO comments in the codebase", ["ls", "cat", "grep"])

# Agent with configuration record
agent({
  goal: "Analyze the project structure",
  tools: ["ls", "cat"],
  max_steps: 5,
  dry_run: true  # Preview without executing
})

AI queries with multi-modal support:

# Simple text query
ai("Explain the difference between TCP and UDP")

# Query with images
ai("What's in this screenshot?", {images: ["screenshot.png"]})

# Query with multiple images
ai("Compare these diagrams", {images: ["diagram1.png", "diagram2.png"]})

🔮 Coming Soon: Multi-Agent Swarms

The full swarm syntax for coordinating multiple agents with different models is under active development. Current agent functionality uses single-agent mode.

🔧 MCP Tool Use (Model Context Protocol)

Access 130+ tools via the built-in MCP server:

# List all available MCP tools
let tools = mcp_tools()
print(len(tools))  # => 130

# Search for specific tools
let git_tools = mcp_tools({search: "git"})
# => [{name: "git", description: "Distributed version control", ...}, ...]

# Filter by category
let dev_tools = mcp_tools({category: "development"})
let ml_tools = mcp_tools({category: "machinelearning"})
let k8s_tools = mcp_tools({category: "kubernetes"})

# Execute tools via MCP protocol
let result = mcp_call("git", {command: "status"})
print(result.content)  # Git status output

let files = mcp_call("curl", {url: "https://api.github.com"})
print(files.is_error)  # false on success

# Get MCP server information
let server = mcp_server()
print(server.tool_count)       # 130
print(server.protocol_version) # "2024-11-05"

# List available resources and prompts
let resources = mcp_resources()  # tools, categories, system-info
let prompts = mcp_prompts()      # find-tool, explain-tool

Tool Categories (27 total):

  • Development: git, cargo, npm, go, rustc, make, gradle, maven
  • Containers: docker, kubectl, helm, k9s, minikube, kind
  • MachineLearning: ollama, huggingface-cli, tensorboard, wandb, mlflow
  • Cloud: aws, az, gcloud, gsutil, terraform, packer
  • Security: openssl, gpg, ssh, vault, sops, age
  • Data: jq, yq, duckdb, sqlite3, pgcli, redis-cli
  • Media: ffmpeg, imagemagick, yt-dlp, pandoc, sox
  • And more: FileSystem, NetworkTools, Archives, Monitoring...

🔧 Local MCP Servers (Safe Tool Access)

Start custom MCP servers:

# Start filesystem MCP server (safe, controlled access)
fs_server := mcp_server_start({
  name: "filesystem",
  type: "builtin",
  config: {
    allowed_paths: ["./", "~/Projects"],
    excluded_patterns: [".git/", "node_modules/"]
  }
})
print(fs_server.endpoint)  # http://localhost:3xxx

# Start multiple MCP servers
git_server := mcp_server_start({name: "git", type: "builtin"})
docker_server := mcp_server_start({name: "docker", type: "builtin"})

# Agent with MCP tool access
devops := agent_with_mcp(
  "Check git status and list recent commits",
  ["mcp:git_status", "mcp:git_log"],
  git_server.endpoint
)
print(devops.result)

🎨 Multi-Modal AI (Images, Audio, Video)

Analyze images with AI:

# Single image
ai("What do you see in this image?", {images: ["screenshot.png"]})

# Compare multiple images
ai("Compare these photos and find similarities", {
  images: ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
})

# Batch process with typed pipelines and in() operator
ls("./photos")
  | where(fn(f) => f.ext | in([".jpg", ".png"]))
  | map(fn(photo) => {
      path: photo.path,
      description: ai("Describe briefly", {images: [photo.path]})
    })
  | save_json("photo_catalog.json")

Audio transcription and analysis:

# Transcribe audio
ai("Transcribe this audio", {audio: ["meeting.mp3"]})

# Analyze sentiment
ai("What is the speaker's tone and sentiment?", {
  audio: ["interview.mp3"]
})

# Summarize podcast
ai("Extract key takeaways from this podcast", {
  audio: ["episode_42.mp3"]
})

Video content processing:

# Summarize video
ai("Summarize the key points from this video", {
  video: ["presentation.mp4"]
})

# Extract tutorial steps
ai("List the step-by-step instructions from this tutorial", {
  video: ["coding_tutorial.mp4"]
})

Multi-modal combinations:

# Analyze presentation with slides + audio
ai("Create comprehensive summary of this presentation", {
  images: ["slide1.png", "slide2.png", "slide3.png"],
  audio: ["narration.mp3"]
})

# Meeting minutes from multiple sources
ai("Generate meeting minutes with action items", {
  audio: ["meeting_audio.mp3"],
  images: ["whiteboard_photo.jpg"],
  video: ["screen_share.mp4"]
})

💎 Typed Functional Pipelines

Structured data, not text streams:

# ls returns typed records with name, path, ext, is_dir, size, modified
ls(".")
  | where(fn(f) => f.size > 1000 && f.ext == ".rs")
  | map(fn(f) => {
      name: f.name,
      size_kb: f.size / 1024,
      age_days: (now() - f.modified) / 86400
    })
  | sort_by(fn(f) => f.size_kb, "desc")
  | take(10)

Type-safe with Hindley-Milner inference:

# Types are inferred automatically
numbers = [1, 2, 3, 4, 5]  # Array<Int>
doubled = numbers | map(fn(x) => x * 2)  # Array<Int>
sum = doubled | reduce(fn(a, b) => a + b, 0)  # Int

# Complex types work seamlessly
employees = [
  {name: "Alice", age: 30, salary: 75000.0},
  {name: "Bob", age: 25, salary: 65000.0}
]  # Array<Record<name: String, age: Int, salary: Float>>

high_earners = employees
  | where(fn(e) => e.salary > 70000.0)  # Note: use 70000.0 for Float comparison
  | map(fn(e) => {name: e.name, monthly: e.salary / 12.0})

First-class functions:

# Functions are values - pass lambdas to higher-order functions
double = fn(x) => x * 2
triple = fn(x) => x * 3
square = fn(x) => x * x

[1,2,3,4,5] | map(double)   # => [2,4,6,8,10]
[1,2,3,4,5] | map(triple)   # => [3,6,9,12,15]
[1,2,3,4,5] | map(square)   # => [1,4,9,16,25]

# Chain operations
[1,2,3,4,5] | map(fn(x) => x + 1) | map(fn(x) => x * x)  # => [4,9,16,25,36]

🔥 Powerful Real-World Examples

🤖 AI-Assisted Code Review

# Use an AI agent to review code for issues
# Set AETHER_AI=openai and OPENAI_API_KEY for real AI responses
code_content = read_text("src/main.rs")
ai("Review this Rust code for potential bugs, security issues, and improvements:\n" + code_content)

📊 Intelligent Data Processing Pipeline

# Type-safe data transformation with AI insights
# Process files and get AI analysis using the typed ext field
files := ls("./src") | where(fn(f) => f.ext == ".rs")
file_info := files | map(fn(f) => {name: f.name, size: f.size})
print(file_info)

# Get AI insights on project structure
ai("Based on a Rust project with these files, suggest improvements: " + to_json(file_info))

🎨 Multi-Modal Content Analysis

# Analyze images with AI
ai("What's shown in this image? List the main objects.", {
  images: ["photo.jpg"]
})

# Compare multiple images
ai("Compare these two screenshots and describe the differences", {
  images: ["before.png", "after.png"]
})

# Analyze with audio context
ai("Transcribe and summarize this audio recording", {
  audio: ["meeting.mp3"]
})

๐Ÿ” Smart File Organization with AI Vision

# Analyze and describe images in a directory
ls("./images")
  | where(fn(f) => f.ext == ".jpg" || f.ext == ".png")
  | map(fn(photo) => {
      path: photo.path,
      name: photo.name,
      description: ai("Briefly describe this image", {images: [photo.path]})
    })
  | save_json("photo_analysis.json")

🧠 Core Language Features

Basic Syntax

Comments:

# Line comments use hash (shell style - preferred)
# Comments are ignored during execution

print("Hello") # Inline comments also work

// C-style comments are also supported for compatibility

Hello World:

print("Hello, Aether!")

Variables:

# Simple = for type inference (recommended)
name = "world"         # Type inferred as String
count = 42             # Type inferred as Int
items = [1, 2, 3]      # Type inferred as Array<Int>

# Walrus operator := also supported (same as =)
timestamp := now()     # Get current Unix timestamp
result := sh(["git", "status"])  # Run shell command

# Mutable variables
mut counter = 0        # Mutable
mut total = 100        # Also mutable

# Alternative syntax (explicit)
let name = "world"     # Explicit let keyword
let mut counter = 0    # Traditional mutable

Structured Pipelines:

[1,2,3,4] | map(fn(x) => x*x) | reduce(fn(a,b) => a+b, 0)
# → 30

Pattern Matching:

# Match on arrays and literals
let nums = [1, 2, 3]
match nums {
  [] => print("empty"),
  [x] => print("single"),
  [x, y] => print("pair"),
  _ => print("multiple elements")
}

# Match with Option types
let val = Some(42)
print(val)  # => {_tag: "Some", _value: 42}
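
A hedged sketch of destructuring the Option itself, assuming Some/None patterns follow the literal and array patterns shown above:

# Assumed pattern syntax; mirrors the documented match forms
match val {
  Some(x) => print(x),
  None => print("empty")
}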

Typed HTTP:

resp = http_get("https://api.github.com")
print(resp.status)
print(resp.body)
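
Because http_get returns a typed record, responses compose with the same pipeline operators used elsewhere. A small sketch reusing resp from above, with save_json as documented earlier:

# Collect endpoint health into a typed record and persist it
[resp]
  | map(fn(r) => {status: r.status, ok: r.status == 200})
  | save_json("health_check.json")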

🤖 AI Model Management System

New: OpenRouter-Style API Server

Aether Shell now includes a comprehensive AI model management system with an OpenRouter-compatible API server, XDG-compliant local storage, and advanced model format conversion capabilities.

🚀 Key Features

  • 🔌 Multi-Provider Support: Seamlessly integrate OpenAI, Anthropic, and local models through a unified API
  • 📁 XDG-Compliant Storage: Local model storage following the XDG Base Directory specification
  • 🔄 Format Conversion: Convert between GGUF, SafeTensors, PyTorch, ONNX, and TensorFlow formats
  • 📥 Model Downloads: Direct integration with Hugging Face Hub and custom model repositories
  • 🌐 HTTP API Server: OpenAI-compatible REST API with Swagger documentation
  • ⚙️ CLI Management: Comprehensive command-line interface for all operations

๐Ÿ› ๏ธ AI Model CLI (ae ai)

Start the API Server:

# Start server with default settings

ae ai serve


# Custom host and port with CORS enabled

ae ai serve --host 0.0.0.0 --port 3000 --cors

Model Management:

# List all available models (local + remote providers)

ae ai list


# List models from specific provider

ae ai list --provider openai


# List only local models

ae ai list --local


# Download models locally

ae ai download microsoft/DialoGPT-medium

API Key Management:

# Store API key securely in OS credential store

ae ai keys store openai --key sk-your-key-here


# Get API key (shows masked version)

ae ai keys get openai


# Delete API key

ae ai keys delete openai


# List all stored API key providers

ae ai keys list

Configuration:

# Show current AI configuration

ae ai config

Note: The old aimodel command is deprecated but still available for backward compatibility. It will show a deprecation warning and suggest using ae ai instead.

Advanced features (coming soon): Model format conversion, storage management, provider configuration, and LLM backend management will be integrated into ae ai in future releases.

Supported LLM Backends:

  • 🔥 vLLM: High-performance inference with PagedAttention (http://localhost:8000)
  • ⚡ TensorRT-LLM: NVIDIA GPU-optimized inference (http://localhost:8001)
  • 🌊 SGLang: High-throughput serving with RadixAttention (http://localhost:30000)
  • 🦙 llama.cpp: CPU/GPU inference for GGUF models (http://localhost:8080)

๐ŸŒ HTTP API Endpoints

Once the server is running, you can access these OpenAI-compatible endpoints:

# List available models

curl http://localhost:8080/v1/models


# Chat completions with different providers

# Using OpenAI

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-3.5-turbo",
    "provider": "openai",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'


# Using vLLM backend

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "microsoft/DialoGPT-medium",
    "provider": "vllm",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'


# Using llama.cpp backend

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-2-7b-chat",
    "provider": "llama.cpp",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'


# Generate embeddings

curl -X POST http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-ada-002",
    "provider": "openai",
    "input": "Text to embed"
  }'


# Server health check

curl http://localhost:8080/health


# API documentation

open http://localhost:8080/swagger-ui

๐Ÿ“ Local Storage Structure

Models are stored following the XDG Base Directory specification:

~/.local/share/ai-models/          # Linux/macOS
%APPDATA%/ai-models/               # Windows
├── models/
│   ├── gguf/                     # GGUF format models
│   ├── safetensors/              # SafeTensors format
│   ├── pytorch/                  # PyTorch models
│   └── onnx/                     # ONNX models
├── metadata/                     # Model metadata and index
└── cache/                        # Temporary download cache

~/.config/ai-models/               # Configuration directory
├── config.toml                   # Main configuration
├── providers.toml                # Provider settings
└── aliases.json                  # Model aliases

โš™๏ธ LLM Backend Configuration

Configure your LLM backends in ~/.config/aether/providers.toml:

[llm_backends]

# vLLM configuration
vllm_endpoint = "http://localhost:8000"
vllm_auto_start = true
vllm_gpu_memory_utilization = 0.9
vllm_tensor_parallel_size = 1

# TensorRT-LLM configuration
tensorrt_endpoint = "http://localhost:8001"
tensorrt_auto_start = false
tensorrt_max_batch_size = 8
tensorrt_max_input_len = 2048
tensorrt_max_output_len = 1024

# SGLang configuration
sglang_endpoint = "http://localhost:30000"
sglang_auto_start = true
sglang_mem_fraction_static = 0.8
sglang_tp_size = 1

# llama.cpp configuration
llamacpp_endpoint = "http://localhost:8080"
llamacpp_auto_start = true
llamacpp_context_size = 4096
llamacpp_gpu_layers = -1  # Use all GPU layers available

🔧 Integration with Aether Shell

The AI model system integrates seamlessly with Aether Shell's existing AI features:

# Use different LLM backends seamlessly
# Set AETHER_AI environment variable to configure provider
vllm_response = ai("Hello, how are you?")

# Use MCP tools for model management
let tools = mcp_tools()
print(len(tools))  # 130 tools available

# Model information via HTTP
models = http_get("http://localhost:8080/v1/models")
print(models.status)

# Batch processing with pipelines
backends = ["openai", "ollama", "compat"]
results = backends | map(fn(backend) => {name: backend, available: true})

🎮 TUI Interface Guide

Navigation

  • Tab: Switch between Chat, Agents, Media, Help tabs
  • Arrow Keys: Navigate lists and selections
  • Space: Select/deselect media files
  • Enter: Send messages, activate agents
  • Esc: Exit to normal mode or quit application
  • q: Quit application (from normal mode)
  • Ctrl+C: Force quit application

Media Tab Features

  • Image Viewer: Render images directly in the terminal
  • Audio Player: Play audio files with waveform visualization
  • Video Preview: Extract frames and metadata from video files
  • Format Support: 20+ media formats including PNG, JPG, MP3, MP4, WEBM, GIF
  • Batch Selection: Select multiple files for multimodal AI analysis

Agent Management

  • Create Agents: Spawn AI agents with specific capabilities
  • Monitor Status: Real-time agent status and task progress
  • Swarm Coordination: Deploy coordinated teams of agents
  • Task Assignment: Distribute work across agent networks
  • Strategy Selection: Choose from Round-Robin, Load-Balanced, or Specialized coordination

Chat Interface

  • Multimodal Messages: Include text, images, audio, and video in conversations
  • Context Awareness: AI remembers conversation history and attached media
  • Export Options: Save conversations as Markdown or JSON
  • Session Management: Multiple chat sessions with persistent history
  • Auto-Summarization: Intelligent conversation summarization

๐Ÿ› ๏ธ Advanced Features

Bash Compatibility

Run old scripts seamlessly:

ae --bash script.sh

Or pipe Bash from stdin:

echo 'echo hello | wc -l' | ae -b

Transpiler magic turns this:

echo hello | wc -l

into Aether (conceptual transpiler output):

# Conceptual sketch: echo maps to the echo() builtin; external commands
# like wc are auto-wrapped into sh() calls (see Command Integration above)
echo("hello") | sh(["wc", "-l"])

🎨 AI & Media Configuration

Supported AI Backends

AetherShell supports multiple AI inference backends for maximum flexibility:

  • OpenAI (openai:gpt-4o-mini) - Cloud API for GPT-4V vision, GPT-4 text, Whisper audio
  • Ollama (ollama:llama3) - Local server for LLaVA vision, Llama/Mistral text
  • vLLM (vllm:meta-llama/Llama-3-8B) - High-performance local inference with PagedAttention
  • llama.cpp (llamacpp:model) - Efficient CPU/GPU inference with GGUF models
  • TGI (tgi:mixtral) - HuggingFace Text Generation Inference
  • OpenAI-Compatible (compat:mixtral) - Any OpenAI-compatible API server

📖 See docs/AI_BACKENDS.md for detailed backend configuration guide
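
Backend selection is environment-driven; a hedged sketch, assuming the AETHER_AI variable accepts the backend:model identifiers listed above:

# e.g. launch the shell with AETHER_AI="ollama:llama3" (or "openai:gpt-4o-mini")
ai("Summarize this repository in three bullet points")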

Media Format Support

  • Images: PNG, JPG, JPEG, WEBP, GIF, BMP, TIFF, ICO, SVG
  • Audio: MP3, WAV, FLAC, OGG, M4A, AAC, WMA
  • Video: MP4, AVI, MOV, MKV, WEBM, FLV, WMV

Environment Setup

# For OpenAI integration

export OPENAI_API_KEY="your-api-key"


# For agent command permissions  

export AGENT_ALLOW_CMDS="ls,git,curl,python"


# For custom AI backends

export AI_BACKEND="ollama"  # or "openai" or "custom"


🚀 Example Workflows

1. Document Analysis Pipeline

ae --tui

# 1. Load PDFs, images, audio recordings in Media tab

# 2. Select multiple files (Space key)

# 3. Chat: "Analyze these documents and create a summary report"

# 4. AI processes all media types and generates comprehensive analysis

2. Content Creation Swarm

# Deploy specialized agents for blog creation (requires AI config)
# Set AETHER_AI=openai and OPENAI_API_KEY for real responses
swarm({
  goal: "create tech blog post",
  tools: ["ls", "cat", "grep"],
  max_steps: 10,
  dry_run: true
})

3. Interactive Media Analysis

# Batch process files with AI descriptions (conceptual - requires AI config)
# Set AETHER_AI=openai and OPENAI_API_KEY for real AI responses
files = ls("./photos") | where(fn(f) => !f.is_dir)
print(len(files))

# Process each file
file_info = files | map(fn(f) => {name: f.name, path: f.path, size: f.size})
print(file_info)

4. Voice-Controlled Automation

# Audio processing with AI (requires AETHER_AI config)
# Example: transcribe audio and execute commands
# audio_result = ai("transcribe this audio", {audio: ["recording.mp3"]})

# Execute tools via tool_exec (198 tools available)
result = tool_exec("git", ["status"])
print(result.success)  # true
print(result.stdout)   # Git status output

🧪 Developer Features

Type System

  • Hindley-Milner inference: Automatic type deduction
  • Algebraic data types: Option<T>, Result<T,E>, custom enums (see the sketch after this list)
  • Strong safety: Compile-time error prevention
  • Generic programming: Parametric polymorphism
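
A hedged sketch of Result<T,E> in practice, assuming Ok/Err constructors mirror the documented Some(...) syntax and the same tagged-record representation:

# Hypothetical: a division that returns Result<Int, String>
safe_div = fn(a, b) => match b {
  0 => Err("division by zero"),
  _ => Ok(a / b)
}
print(safe_div(10, 2))  # => {_tag: "Ok", _value: 5}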

Metaprogramming

  • Hygienic macros: Safe code generation
  • AST manipulation: Runtime code transformation
  • Quoting/splicing: Embed code as data

Concurrency

  • Async/await: Built-in structured concurrency (see the sketch after this list)
  • Cancellation: Graceful task termination
  • Pipelines: Parallel data processing
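
The async surface syntax is not documented in this README; a hypothetical sketch, assuming spawn/await builtins for structured concurrency:

# Hypothetical API: run two AI queries concurrently, then join both
a = spawn(fn() => ai("Summarize src/main.rs"))
b = spawn(fn() => ai("Summarize src/lib.rs"))
print(await(a))
print(await(b))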

OS Tools Integration

  • Cross-platform database: 200+ native OS tools (Linux/Windows/macOS)
  • 27 categories: Development, Containers, ML, Cloud, Security, Data, Media, etc.
  • Safety levels: Safe, Caution, Dangerous, Critical classification
  • MCP protocol: Standardized tool discovery and execution
  • Platform filtering: OS-specific tool availability

📊 Performance & Testing

Benchmarks

  • Memory safe: Zero buffer overflows or memory leaks
  • Fast execution: Rust-powered performance
  • Concurrent pipelines: Multi-core utilization
  • Efficient AI calls: Batched multimodal requests

Test Coverage

  • 140+ tests: Comprehensive test suite
  • Unit tests: 90 library tests for core functionality
  • MCP tests: 40 tests for tool use and protocol compliance
  • Integration tests: End-to-end workflow testing
  • TUI tests: Interactive interface validation
  • AI tests: Multimodal backend testing
  • OS Tools tests: 13 tests for cross-platform command database
  • Neural/Evolution tests: ML primitive validation

📚 Documentation & Examples

🧪 Test Examples

  • Type system: See tests/typecheck.rs for comprehensive examples
  • Bash compatibility: Check tests/transpile_bash.rs for transpilation rules
  • AI integration: Explore tests/multimodal_ai.rs for backend implementation
  • TUI features: Review tests/tui_*.rs for interface testing
  • OS Tools: Examine tests/os_tools.rs for cross-platform tool usage

🔒 Security

AetherShell implements comprehensive security controls to protect your credentials, data, and system:

Secure API Key Management

OS Credential Store Integration 🔐

API keys are stored securely in your operating system's native credential manager:

  • Windows: Windows Credential Manager
  • macOS: Keychain
  • Linux: Secret Service API (libsecret)

# Store your API key securely

ae keys store openai sk-your-key-here


# View stored keys (masked for security)

ae keys get openai

# Output: sk-...key...1234


# List all stored providers

ae keys list


# Migrate from environment variables

ae keys migrate openai

Memory Protection 🛡️

API keys are protected in memory using:

  • Secret<String> wrapping prevents accidental exposure
  • Automatic zeroization on drop clears memory
  • No key exposure in debug output, logs, or error messages
  • Temporary auth headers are automatically zeroized after use

Best Practices:

# ✅ DO: Use secure credential store

ae keys store openai $OPENAI_API_KEY


# ✅ DO: Remove from environment after migration

unset OPENAI_API_KEY


# โŒ DON'T: Keep keys in shell history or environment

export OPENAI_API_KEY="sk-..."  # Insecure!

Additional Security Features

  • Path Traversal Prevention: Symlink validation and path sanitization
  • SSRF Protection: Blocks access to internal IPs (AWS metadata, private networks)
  • Resource Limits: File size limits (100MB default), memory quotas
  • TLS Hardening: TLS 1.2+ enforcement with secure cipher suites
  • Input Validation: Comprehensive sanitization of user input and AI prompts
  • Command Whitelisting: Configurable allowlist for agent tool use (see the example after this list)
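
Whitelisting composes with the agent API shown earlier; this example restricts an agent to two tools and previews its actions with dry_run:

# Constrain tool access and preview without executing
agent({
  goal: "Audit shell scripts for unsafe patterns",
  tools: ["ls", "cat"],
  dry_run: true
})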

Security Documentation

Security Status: 40% risk reduction achieved (6.8/10 → 4.1/10)


๐Ÿ›ฃ๏ธ Roadmap

Recently Completed ✅ (January 2026)

  • ✅ Neural Network Primitives: In-shell neural network creation, forward pass, mutation, crossover
  • ✅ Consensus Networks: Multi-agent distributed decision making with message passing
  • ✅ Evolutionary Algorithms: Population-based optimization with configurable strategies
  • ✅ Coevolution: Multi-population coevolution for protocol learning
  • ✅ NEAT Support: Topology-evolving neuroevolution
  • ✅ AI Model Management: OpenRouter-style API server with multi-provider support
  • ✅ Local Model Storage: XDG-compliant storage with format conversion
  • ✅ Model Downloads: Hugging Face integration and CLI management tools
  • ✅ Streaming AI responses: Real-time token streaming via SSE in API server
  • ✅ Reinforcement Learning: Q-Learning, SARSA, Policy Gradient, Actor-Critic, DQN
  • ✅ Distributed agents: Network-connected agent swarms with latency/geo/cost optimization
  • ✅ IDE integration: VS Code extension and LSP language server

Near-term (Q1 2026)

  • Plugin system: Extensible architecture for custom backends
  • Advanced media: Video streaming and real-time audio processing
  • Mobile TUI: Touch-friendly interface adaptations
  • WASM support: Browser-based shell via WebAssembly

Long-term (2026+)

  • Module system: Package management and imports
  • Advanced AI strategies: Multi-modal reasoning and planning
  • Cloud deployment: Hosted agent swarms

๐Ÿค Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository
  2. Check the test suite: cargo test --tests
  3. Add your feature with corresponding tests
  4. Ensure TUI compatibility if UI changes are involved
  5. Submit a pull request with clear description

Development Setup

git clone https://github.com/nervosys/AetherShell

cd AetherShell

cargo build --release

cargo test --tests --all-features


📜 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Ready to experience the future of shell interaction? Start with ae --tui and prepare to be amazed! 🚀