Æther Shell (æ)
The world's first multi-agent shell with typed functional pipelines and multi-modal AI. Built in Rust for safety and performance, featuring revolutionary AI protocols found nowhere else.
"What if your shell could coordinate teams of AI agents, negotiate consensus, and process images, audio, and video, all with type-safe functional pipelines?"
Quick Start
# Install (see the Installation section below)

# Launch interactive TUI
ae --tui

# Or classic REPL
ae
# Typed pipelines - not text streams!
[1,2,3,4,5] | map(fn(x) => x * 2) | reduce(fn(a,b) => a + b, 0)
# => 30
# AI query (set AETHER_AI=openai and OPENAI_API_KEY for real responses)
ai("Explain quantum computing in simple terms")
# AI with vision
ai("Describe this image", {images: ["photo.jpg"]})
# AI agent with tools
agent("Find all TODO comments in src/", ["ls", "cat", "grep"])
# Neural network creation
let brain = nn_create("agent", [4, 8, 2])
Set your API key: export OPENAI_API_KEY="sk-..." or ae ai keys store openai sk-...
Full Documentation | TUI Guide | Examples
What Makes AetherShell Unique?
AetherShell is the ONLY shell in the world that combines:
Exclusive Features
- AI Agents with Tool Access
  - Deploy AI agents that can use shell tools (ls, cat, grep, etc.)
  - Goal-directed task execution with step limits
  - Dry-run mode for previewing actions
  - Multi-agent swarm orchestration coming soon!
- AI Communication Protocols
  - MCP: Model Context Protocol with 130+ tools across 27 categories
  - A2A: Agent-to-Agent messaging framework
  - NANDA: Negotiation And Dynamic Agents for consensus
  - AgenticBinary: Binary protocol for information density
  - Syntax KB: Knowledge base for protocol discovery
- Multi-Modal AI Native
  - Analyze images with ai("prompt", {images: [...]})
  - Process audio with ai("prompt", {audio: [...]})
  - Analyze video with ai("prompt", {video: [...]})
  - Mix multiple media types in single queries
- Typed Functional Pipelines
  - Hindley-Milner type inference (like Haskell, OCaml)
  - Structured data: Records, Arrays, Tables, not text streams
  - First-class functions, pattern matching
  - Type safety prevents shell scripting errors
- Neural Networks & Evolutionary Learning (see the sketch after this list)
  - In-shell neural network creation and mutation
  - Consensus networks for distributed decision making
  - Evolutionary algorithms with population optimization
  - NEAT topology evolution
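A minimal sketch of these primitives; nn_create appears in the Quick Start above, while nn_forward and nn_mutate are illustrative (hypothetical) names for the forward-pass and mutation builtins:
let brain = nn_create("agent", [4, 8, 2])             # 4 inputs, 8 hidden, 2 outputs
let output = nn_forward(brain, [0.1, 0.5, 0.3, 0.9])  # hypothetical forward-pass builtin
let child = nn_mutate(brain, {rate: 0.05})            # hypothetical mutation builtin
print(output)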
Revolutionary Features
AI Integration
- AI Queries: Direct AI queries with the ai() function
- Vision AI: Analyze images and screenshots
- Audio Processing: Transcribe speech and audio files
- Video Analysis: Process video content with AI-powered insights
- Smart Agents: Deploy specialized AI agents with tool access and reasoning
- Protocol Support: MCP, A2A, and NANDA for advanced agent coordination
- Model Management: OpenRouter-style API server with local model management and format conversion
- Neural Networks: Create and evolve neural networks directly in the shell
- Evolutionary Learning: Population-based optimization and coevolution
Beautiful Terminal UI (TUI)
- Interactive Interface: Modern, responsive terminal GUI with real-time updates
- Media Viewer: Display images, play audio, and preview videos in terminal
- Chat Interface: Conversational AI with context-aware responses
- Agent Dashboard: Monitor and control your AI agent swarms
- Multimodal Sessions: Seamlessly mix text, images, audio in conversations
Advanced Programming Features
- Typed Pipelines: Pass structured records/tables, not just raw text
- Rust-Grade Safety: Memory-safe runtime with zero-cost abstractions
- Strong Type System: HindleyโMilner inference with algebraic data types
- Metaprogramming: Hygienic macros and AST manipulation
- Async/Await: Built-in structured concurrency and cancellation
- POSIX Compatibility: Run existing tools seamlessly
Seamless Interoperability
- Bash Compatibility: Transpile and run existing .sh scripts
- Command Integration: Auto-wrap unknown commands in safe shells
- Multi-Backend AI: Support for OpenAI, Anthropic, and local providers via unified API
- OS Tools Database: Cross-platform native command integration
- XDG Compliance: Standards-compliant local storage and configuration management
Project Structure
This project is now organized with a clean directory structure:
- src/ - Core Rust source code
- docs/ - Documentation, specs, and development notes
- examples/ - AetherShell example scripts
- demos/ - Showcase demos and advanced examples
- test-scripts/ - Manual test scripts (builtins & integration)
- tests/ - Rust unit and integration tests
- web/ - Web terminal components
- temp/ - Temporary files (gitignored)
See PROJECT_STRUCTURE.md for detailed organization information.
Quick Start Guide
Installation
# Install both binaries
# Or install individually
VS Code Extension
Get professional IDE support for AetherShell:
- Syntax highlighting for .ae files
- Code snippets for agents, swarms, MCP servers
- Run code directly from editor (Ctrl+Shift+R)
- Auto-completion for built-in functions
- Hover documentation for AI features
- Integrated REPL and TUI
Quick Reference: See QUICK_REFERENCE.md for all syntax, patterns, and snippets!
Install:
# Press F5 to test, or package for distribution
See vscode-extension/README.md for details.
Launch Options
Classic REPL Mode:
ae

Interactive TUI Mode (Recommended):
ae --tui
Note: TUI requires a terminal with full ANSI support (Windows Terminal, native PowerShell, or modern terminal emulators). VS Code integrated terminal may have limited support. See TUI Guide for details.
Run Scripts:
REPL Commands
- exit / quit: Exit the REPL
- Ctrl+D: Exit the REPL (EOF)
- Ctrl+C: Interrupt current operation
Experience the Magic
AI Agents
Create an AI agent with tool access:
# Agent with goal and allowed tools
# Set AETHER_AI=openai and OPENAI_API_KEY for real AI responses
agent("Find all TODO comments in the codebase", ["ls", "cat", "grep"])
# Agent with configuration record
agent({
goal: "Analyze the project structure",
tools: ["ls", "cat"],
max_steps: 5,
dry_run: true # Preview without executing
})
AI queries with multi-modal support:
# Simple text query
ai("Explain the difference between TCP and UDP")
# Query with images
ai("What's in this screenshot?", {images: ["screenshot.png"]})
# Query with multiple images
ai("Compare these diagrams", {images: ["diagram1.png", "diagram2.png"]})
Coming Soon: Multi-Agent Swarms
The full swarm syntax for coordinating multiple agents with different models is under active development. Current agent functionality uses single-agent mode.
MCP Tool Use (Model Context Protocol)
Access 130+ tools via the built-in MCP server:
# List all available MCP tools
let tools = mcp_tools()
print(len(tools)) # => 130
# Search for specific tools
let git_tools = mcp_tools({search: "git"})
# => [{name: "git", description: "Distributed version control", ...}, ...]
# Filter by category
let dev_tools = mcp_tools({category: "development"})
let ml_tools = mcp_tools({category: "machinelearning"})
let k8s_tools = mcp_tools({category: "kubernetes"})
# Execute tools via MCP protocol
let result = mcp_call("git", {command: "status"})
print(result.content) # Git status output
let files = mcp_call("curl", {url: "https://api.github.com"})
print(files.is_error) # false on success
# Get MCP server information
let server = mcp_server()
print(server.tool_count) # 130
print(server.protocol_version) # "2024-11-05"
# List available resources and prompts
let resources = mcp_resources() # tools, categories, system-info
let prompts = mcp_prompts() # find-tool, explain-tool
Tool Categories (27 total):
- Development: git, cargo, npm, go, rustc, make, gradle, maven
- Containers: docker, kubectl, helm, k9s, minikube, kind
- MachineLearning: ollama, huggingface-cli, tensorboard, wandb, mlflow
- Cloud: aws, az, gcloud, gsutil, terraform, packer
- Security: openssl, gpg, ssh, vault, sops, age
- Data: jq, yq, duckdb, sqlite3, pgcli, redis-cli
- Media: ffmpeg, imagemagick, yt-dlp, pandoc, sox
- And more: FileSystem, NetworkTools, Archives, Monitoring...
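Because mcp_tools() returns ordinary records (each with at least name and description, as shown above), tool discovery composes with typed pipelines. A small conceptual sketch:
# Preview the first five development tools as trimmed records
mcp_tools({category: "development"})
| map(fn(t) => {name: t.name, description: t.description})
| take(5)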
Local MCP Servers (Safe Tool Access)
MCP tool discovery and execution:
# Query and execute from 130+ built-in tools
let tools := mcp_tools()
print(len(tools)) # 130
# Filter tools by category
let dev_tools := mcp_tools({category: "development"})
print(len(dev_tools)) # 23
# Execute tools via MCP protocol
let result := mcp_call("git", {command: "status"})
print(result.is_error) # false
# Get server information
let server := mcp_server()
print(server.tool_count) # 130
Start custom MCP servers:
# Start filesystem MCP server (safe, controlled access)
fs_server := mcp_server_start({
name: "filesystem",
type: "builtin",
config: {
allowed_paths: ["./", "~/Projects"],
excluded_patterns: [".git/", "node_modules/"]
}
})
print(fs_server.endpoint) # http://localhost:3xxx
# Start multiple MCP servers
git_server := mcp_server_start({name: "git", type: "builtin"})
docker_server := mcp_server_start({name: "docker", type: "builtin"})
# Agent with MCP tool access
devops := agent_with_mcp(
"Check git status and list recent commits",
["mcp:git_status", "mcp:git_log"],
git_server.endpoint
)
print(devops.result)
Multi-Modal AI (Images, Audio, Video)
Analyze images with AI:
# Single image
ai("What do you see in this image?", {images: ["screenshot.png"]})
# Compare multiple images
ai("Compare these photos and find similarities", {
images: ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
})
# Batch process with typed pipelines and in() operator
ls("./photos")
| where(fn(f) => f.ext | in([".jpg", ".png"]))
| map(fn(photo) => {
path: photo.path,
description: ai("Describe briefly", {images: [photo.path]})
})
| save_json("photo_catalog.json")
Audio transcription and analysis:
# Transcribe audio
ai("Transcribe this audio", {audio: ["meeting.mp3"]})
# Analyze sentiment
ai("What is the speaker's tone and sentiment?", {
audio: ["interview.mp3"]
})
# Summarize podcast
ai("Extract key takeaways from this podcast", {
audio: ["episode_42.mp3"]
})
Video content processing:
# Summarize video
ai("Summarize the key points from this video", {
video: ["presentation.mp4"]
})
# Extract tutorial steps
ai("List the step-by-step instructions from this tutorial", {
video: ["coding_tutorial.mp4"]
})
Multi-modal combinations:
# Analyze presentation with slides + audio
ai("Create comprehensive summary of this presentation", {
images: ["slide1.png", "slide2.png", "slide3.png"],
audio: ["narration.mp3"]
})
# Meeting minutes from multiple sources
ai("Generate meeting minutes with action items", {
audio: ["meeting_audio.mp3"],
images: ["whiteboard_photo.jpg"],
video: ["screen_share.mp4"]
})
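The same record-pipeline pattern used for photos above also works for audio. A conceptual batch example using only builtins shown elsewhere in this README (requires AI configuration):
# Summarize every recording in a folder
ls("./recordings")
| where(fn(f) => f.ext == ".mp3")
| map(fn(f) => {
    file: f.name,
    summary: ai("Summarize this recording in two sentences", {audio: [f.path]})
  })
| save_json("recording_summaries.json")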
Typed Functional Pipelines
Structured data, not text streams:
# ls returns typed records with name, path, ext, is_dir, size, modified
ls(".")
| where(fn(f) => f.size > 1000 && f.ext == ".rs")
| map(fn(f) => {
name: f.name,
size_kb: f.size / 1024,
age_days: (now() - f.modified) / 86400
})
| sort_by(fn(f) => f.size_kb, "desc")
| take(10)
Type-safe with Hindley-Milner inference:
# Types are inferred automatically
numbers = [1, 2, 3, 4, 5] # Array<Int>
doubled = numbers | map(fn(x) => x * 2) # Array<Int>
sum = doubled | reduce(fn(a, b) => a + b, 0) # Int
# Complex types work seamlessly
employees = [
{name: "Alice", age: 30, salary: 75000.0},
{name: "Bob", age: 25, salary: 65000.0}
] # Array<Record<name: String, age: Int, salary: Float>>
high_earners = employees
| where(fn(e) => e.salary > 70000.0) # Note: use 70000.0 for Float comparison
| map(fn(e) => {name: e.name, monthly: e.salary / 12.0})
First-class functions:
# Functions are values - pass lambdas to higher-order functions
double = fn(x) => x * 2
triple = fn(x) => x * 3
square = fn(x) => x * x
[1,2,3,4,5] | map(double) # => [2,4,6,8,10]
[1,2,3,4,5] | map(triple) # => [3,6,9,12,15]
[1,2,3,4,5] | map(square) # => [1,4,9,16,25]
# Chain operations
[1,2,3,4,5] | map(fn(x) => x + 1) | map(fn(x) => x * x) # => [4,9,16,25,36]
Powerful Real-World Examples
AI-Assisted Code Review
# Use an AI agent to review code for issues
# Set AETHER_AI=openai and OPENAI_API_KEY for real AI responses
code_content = read_text("src/main.rs")
ai("Review this Rust code for potential bugs, security issues, and improvements:\n" + code_content)
Intelligent Data Processing Pipeline
# Type-safe data transformation with AI insights
# Process files and get AI analysis - now with ext field!
files := ls("./src") | where(fn(f) => f.ext == ".rs")
file_info := files | map(fn(f) => {name: f.name, size: f.size})
print(file_info)
# Get AI insights on project structure
ai("Based on a Rust project with these files, suggest improvements: " + to_json(file_info))
Multi-Modal Content Analysis
# Analyze images with AI
ai("What's shown in this image? List the main objects.", {
images: ["photo.jpg"]
})
# Compare multiple images
ai("Compare these two screenshots and describe the differences", {
images: ["before.png", "after.png"]
})
# Analyze with audio context
ai("Transcribe and summarize this audio recording", {
audio: ["meeting.mp3"]
})
Smart File Organization with AI Vision
# Analyze and describe images in a directory
ls("./images")
| where(fn(f) => f.ext == ".jpg" || f.ext == ".png")
| map(fn(photo) => {
path: photo.path,
name: photo.name,
description: ai("Briefly describe this image", {images: [photo.path]})
})
| save_json("photo_analysis.json")
Core Language Features
Basic Syntax
Comments:
# Line comments use hash (shell style - preferred)
# Comments are ignored during execution
print("Hello") # Inline comments also work
// C-style comments are also supported for compatibility
Hello World:
print("Hello, Aether!")
Variables:
# Simple = for type inference (recommended)
name = "world" # Type inferred as String
count = 42 # Type inferred as Int
items = [1, 2, 3] # Type inferred as Array<Int>
# Walrus operator := also supported (same as =)
timestamp := now() # Get current Unix timestamp
result := sh(["git", "status"]) # Run shell command
# Mutable variables
mut counter = 0 # Mutable
mut total = 100 # Also mutable
# Alternative syntax (explicit)
let name = "world" # Explicit let keyword
let mut counter = 0 # Traditional mutable
Structured Pipelines:
[1,2,3,4] | map(fn(x) => x*x) | reduce(fn(a,b) => a+b, 0)
# => 30
Pattern Matching:
# Match on arrays and literals
let nums = [1, 2, 3]
match nums {
[] => print("empty"),
[x] => print("single"),
[x, y] => print("pair"),
_ => print("multiple elements")
}
# Match with Option types
let val = Some(42)
print(val) # => {_tag: "Some", _value: 42}
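Matching on the Option payload itself works the same way (conceptual sketch; assumes constructor patterns behave like the array patterns above):
match val {
  Some(x) => print(x),
  None => print("no value")
}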
Typed HTTP:
resp = http_get("https://api.github.com")
print(resp.status)
print(resp.body)
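Because the response is a plain typed record, it can be reshaped like any other value (only the fields shown above are used here):
info = {ok: resp.status == 200, status: resp.status}
print(info)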
AI Model Management System
New: OpenRouter-Style API Server
Aether Shell now includes a comprehensive AI model management system with an OpenRouter-compatible API server, XDG-compliant local storage, and advanced model format conversion capabilities.
Key Features
- Multi-Provider Support: Seamlessly integrate OpenAI, Anthropic, and local models through a unified API
- XDG-Compliant Storage: Local model storage following XDG Base Directory specification
- Format Conversion: Convert between GGUF, SafeTensors, PyTorch, ONNX, and TensorFlow formats
- Model Downloads: Direct integration with Hugging Face Hub and custom model repositories
- HTTP API Server: OpenAI-compatible REST API with Swagger documentation
- CLI Management: Comprehensive command-line interface for all operations
AI Model CLI (ae ai)
Start the API Server:
# Start server with default settings
# Custom host and port with CORS enabled
Model Management:
# List all available models (local + remote providers)
# List models from specific provider
# List only local models
# Download models locally
API Key Management:
# Store API key securely in OS credential store
# Get API key (shows masked version)
# Delete API key
# List all stored API key providers
Configuration:
# Show current AI configuration
Note: The old ai model command is deprecated but still available for backward compatibility. It will show a deprecation warning and suggest using ae ai instead.

Advanced features (coming soon): Model format conversion, storage management, provider configuration, and LLM backend management will be integrated into ae ai in future releases.
Supported LLM Backends:
- vLLM: High-performance inference with PagedAttention (http://localhost:8000)
- TensorRT-LLM: NVIDIA GPU-optimized inference (http://localhost:8001)
- SGLang: High-throughput serving with RadixAttention (http://localhost:30000)
- llama.cpp: CPU/GPU inference for GGUF models (http://localhost:8080)
HTTP API Endpoints
Once the server is running, you can access these OpenAI-compatible endpoints:
# List available models
# Chat completions with different providers
# Using OpenAI
# Using vLLM backend
# Using llama.cpp backend
# Generate embeddings
# Server health check
# API documentation
Local Storage Structure
Models are stored following XDG Base Directory specification:
LLM Backend Configuration
Configure your LLM backends in ~/.config/aether/providers.toml:
# NOTE: key names below are illustrative; check your providers.toml for the exact schema

# vLLM configuration
[providers.vllm]
endpoint = "http://localhost:8000"
enabled = true
gpu_memory_utilization = 0.9
tensor_parallel_size = 1

# TensorRT-LLM configuration
[providers.tensorrt_llm]
endpoint = "http://localhost:8001"
enabled = false
max_batch_size = 8
max_input_len = 2048
max_output_len = 1024

# SGLang configuration
[providers.sglang]
endpoint = "http://localhost:30000"
enabled = true
mem_fraction_static = 0.8
tp_size = 1

# llama.cpp configuration
[providers.llamacpp]
endpoint = "http://localhost:8080"
enabled = true
context_size = 4096
n_gpu_layers = -1   # Use all GPU layers available
Integration with Aether Shell
The AI model system integrates seamlessly with Aether Shell's existing AI features:
# Use different LLM backends seamlessly
# Set AETHER_AI environment variable to configure provider
vllm_response = ai("Hello, how are you?")
# Use MCP tools for model management
let tools = mcp_tools()
print(len(tools)) # 130 tools available
# Model information via HTTP
models = http_get("http://localhost:8080/v1/models")
print(models.status)
# Batch processing with pipelines
backends = ["openai", "ollama", "compat"]
results = backends | map(fn(backend) => {name: backend, available: true})
TUI Interface Guide
Navigation
- Tab: Switch between Chat, Agents, Media, Help tabs
- Arrow Keys: Navigate lists and selections
- Space: Select/deselect media files
- Enter: Send messages, activate agents
- Esc: Exit to normal mode or quit application
- q: Quit application (from normal mode)
- Ctrl+C: Force quit application
Media Tab Features
- Image Viewer: Display images directly in terminal using advanced algorithms
- Audio Player: Play audio files with waveform visualization
- Video Preview: Extract frames and metadata from video files
- Format Support: 20+ media formats including PNG, JPG, MP3, MP4, WEBM, GIF
- Batch Selection: Select multiple files for multimodal AI analysis
Agent Management
- Create Agents: Spawn AI agents with specific capabilities
- Monitor Status: Real-time agent status and task progress
- Swarm Coordination: Deploy coordinated teams of agents
- Task Assignment: Distribute work across agent networks
- Strategy Selection: Choose from Round-Robin, Load-Balanced, or Specialized coordination
Chat Interface
- Multimodal Messages: Include text, images, audio, and video in conversations
- Context Awareness: AI remembers conversation history and attached media
- Export Options: Save conversations as Markdown or JSON
- Session Management: Multiple chat sessions with persistent history
- Auto-Summarization: Intelligent conversation summarization
Advanced Features
Bash Compatibility
Run old scripts seamlessly:
Or pipe Bash from stdin:
Transpiler magic - turns Bash such as echo hello into Aether (conceptual transpiler output):
# Transpiled bash commands use echo() builtin and pipelines
echo("hello")
AI & Media Configuration
Supported AI Backends
AetherShell supports multiple AI inference backends for maximum flexibility:
- OpenAI (openai:gpt-4o-mini) - Cloud API for GPT-4V vision, GPT-4 text, Whisper audio
- Ollama (ollama:llama3) - Local server for LLaVA vision, Llama/Mistral text
- vLLM (vllm:meta-llama/Llama-3-8B) - High-performance local inference with PagedAttention
- llama.cpp (llamacpp:model) - Efficient CPU/GPU inference with GGUF models
- TGI (tgi:mixtral) - HuggingFace Text Generation Inference
- OpenAI-Compatible (compat:mixtral) - Any OpenAI-compatible API server
See docs/AI_BACKENDS.md for a detailed backend configuration guide.
Media Format Support
- Images: PNG, JPG, JPEG, WEBP, GIF, BMP, TIFF, ICO, SVG
- Audio: MP3, WAV, FLAC, OGG, M4A, AAC, WMA
- Video: MP4, AVI, MOV, MKV, WEBM, FLV, WMV
Environment Setup
# For OpenAI integration
export OPENAI_API_KEY="sk-..."

# For agent command permissions

# For custom AI backends
export AETHER_AI=ollama   # or "openai" or "custom"
Example Workflows
1. Document Analysis Pipeline
# 1. Load PDFs, images, audio recordings in Media tab
# 2. Select multiple files (Space key)
# 3. Chat: "Analyze these documents and create a summary report"
# 4. AI processes all media types and generates comprehensive analysis
2. Content Creation Swarm
# Deploy specialized agents for blog creation (requires AI config)
# Set AETHER_AI=openai and OPENAI_API_KEY for real responses
swarm({
goal: "create tech blog post",
tools: ["ls", "cat", "grep"],
max_steps: 10,
dry_run: true
})
3. Interactive Media Analysis
# Batch process files with AI descriptions (conceptual - requires AI config)
# Set AETHER_AI=openai and OPENAI_API_KEY for real AI responses
files = ls("./photos") | where(fn(f) => !f.is_dir)
print(len(files))
# Process each file
file_info = files | map(fn(f) => {name: f.name, path: f.path, size: f.size})
print(file_info)
4. Voice-Controlled Automation
# Audio processing with AI (requires AETHER_AI config)
# Example: transcribe audio and execute commands
# audio_result = ai("transcribe this audio", {audio: ["recording.mp3"]})
# Execute tools via tool_exec (198 tools available)
result = tool_exec("git", ["status"])
print(result.success) # true
print(result.stdout) # Git status output
Developer Features
Type System
- Hindley-Milner inference: Automatic type deduction
- Algebraic data types: Option<T>, Result<T,E>, custom enums
- Strong safety: Compile-time error prevention
- Generic programming: Parametric polymorphism (see the example below)
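A small illustration of parametric polymorphism and inferred types, using only syntax already shown in this README (conceptual output in comments):
id = fn(x) => x            # inferred generically: works for any type
print(id(42))              # Int
print(id("aether"))        # String
wrap = fn(x) => [x]        # wraps any value in an Array
print(wrap(true))          # => [true]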
Metaprogramming
- Hygienic macros: Safe code generation
- AST manipulation: Runtime code transformation
- Quoting/splicing: Embed code as data
Concurrency
- Async/await: Built-in structured concurrency
- Cancellation: Graceful task termination
- Pipelines: Parallel data processing
OS Tools Integration
- Cross-platform database: 200+ native OS tools (Linux/Windows/macOS)
- 27 categories: Development, Containers, ML, Cloud, Security, Data, Media, etc.
- Safety levels: Safe, Caution, Dangerous, Critical classification
- MCP protocol: Standardized tool discovery and execution
- Platform filtering: OS-specific tool availability
Performance & Testing
Benchmarks
- Memory safe: Zero buffer overflows or memory leaks
- Fast execution: Rust-powered performance
- Concurrent pipelines: Multi-core utilization
- Efficient AI calls: Batched multimodal requests
Test Coverage
- 140+ tests: Comprehensive test suite
- Unit tests: 90 library tests for core functionality
- MCP tests: 40 tests for tool use and protocol compliance
- Integration tests: End-to-end workflow testing
- TUI tests: Interactive interface validation
- AI tests: Multimodal backend testing
- OS Tools tests: 13 tests for cross-platform command database
- Neural/Evolution tests: ML primitive validation
Documentation & Examples
Example Scripts
- examples/00_hello.ae: Basic syntax introduction
- examples/05_ai.ae: AI integration examples
- examples/06_agent.ae: Agent deployment
- examples/09_tui_basic.ae: TUI usage guide
- examples/10_multimodal.ae: Multimodal AI workflows
- examples/11_agent_swarm.ae: Advanced swarm coordination
- examples/12_syntax_kb.ae: Syntax Knowledge Base and AgenticBinary protocol
- examples/13_agent_coordination.ae: Real-world multi-agent task distribution
Learning Resources
Documentation Guides
- Quick Reference: One-page guide to all syntax and patterns
- Type System Guide: Deep dive into let vs = and type inference
- MCP Servers Guide: Complete reference for infrastructure integration
- AI Protocols Report: A2A and NANDA implementation details
- Syntax KB Guide: AgenticBinary protocol and knowledge base reference
- Syntax KB Quick Ref: Quick reference for Syntax KB builtins
- Competitive Analysis: How AetherShell compares to alternatives
- Why AetherShell?: Philosophy and unique features
Test Examples
- Type system: See tests/typecheck.rs for comprehensive examples
- Bash compatibility: Check tests/transpile_bash.rs for transpilation rules
- AI integration: Explore tests/multimodal_ai.rs for backend implementation
- TUI features: Review tests/tui_*.rs for interface testing
- OS Tools: Examine tests/os_tools.rs for cross-platform tool usage
Security
AetherShell implements comprehensive security controls to protect your credentials, data, and system:
Secure API Key Management
OS Credential Store Integration
API keys are stored securely in your operating system's native credential manager:
- Windows: Windows Credential Manager
- macOS: Keychain
- Linux: Secret Service API (libsecret)
# Store your API key securely
ae ai keys store openai sk-...

# View stored keys (masked for security)
# Output: sk-...key...1234

# List all stored providers

# Migrate from environment variables
Memory Protection
API keys are protected in memory using:
- Secret<String> wrapping prevents accidental exposure
- Automatic zeroization on drop clears memory
- No key exposure in debug output, logs, or error messages
- Temporary auth headers are automatically zeroized after use
Best Practices:
# ✅ DO: Use the secure credential store
ae ai keys store openai sk-...

# ✅ DO: Remove the key from your environment after migration
unset OPENAI_API_KEY

# ❌ DON'T: Keep keys in shell history or environment
export OPENAI_API_KEY="sk-..."   # Insecure!
Additional Security Features
- Path Traversal Prevention: Symlink validation and path sanitization
- SSRF Protection: Blocks access to internal IPs (AWS metadata, private networks)
- Resource Limits: File size limits (100MB default), memory quotas
- TLS Hardening: TLS 1.2+ enforcement with secure cipher suites
- Input Validation: Comprehensive sanitization of user input and AI prompts
- Command Whitelisting: Configurable allowlist for agent tool use
Security Documentation
- Security Audit: Comprehensive red team assessment
- Security Fixes: Implemented mitigations and status
- Memory Sanitization: API key protection details
Security Status: 40% risk reduction achieved (6.8/10 → 4.1/10)
Roadmap
Recently Completed ✅ (January 2026)
- ✅ Neural Network Primitives: In-shell neural network creation, forward pass, mutation, crossover
- ✅ Consensus Networks: Multi-agent distributed decision making with message passing
- ✅ Evolutionary Algorithms: Population-based optimization with configurable strategies
- ✅ Coevolution: Multi-population coevolution for protocol learning
- ✅ NEAT Support: Topology-evolving neuroevolution
- ✅ AI Model Management: OpenRouter-style API server with multi-provider support
- ✅ Local Model Storage: XDG-compliant storage with format conversion
- ✅ Model Downloads: Hugging Face integration and CLI management tools
- ✅ Streaming AI responses: Real-time token streaming via SSE in API server
- ✅ Reinforcement Learning: Q-Learning, SARSA, Policy Gradient, Actor-Critic, DQN
- ✅ Distributed agents: Network-connected agent swarms with latency/geo/cost optimization
- ✅ IDE integration: VS Code extension and LSP language server
Near-term (Q1 2026)
- Plugin system: Extensible architecture for custom backends
- Advanced media: Video streaming and real-time audio processing
- Mobile TUI: Touch-friendly interface adaptations
- WASM support: Browser-based shell via WebAssembly
Long-term (2026+)
- Module system: Package management and imports
- Advanced AI strategies: Multi-modal reasoning and planning
- Cloud deployment: Hosted agent swarms
Contributing
We welcome contributions! Here's how to get started:
- Fork the repository
- Check the test suite: cargo test --tests
- Add your feature with corresponding tests
- Ensure TUI compatibility if UI changes are involved
- Submit a pull request with a clear description
Development Setup
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Ready to experience the future of shell interaction? Start with ae --tui and prepare to be amazed!