orb8-cli 0.0.1

Command-line interface for orb8

orb8

eBPF-powered observability toolkit for Kubernetes with first-class GPU telemetry


orb8 (orbit) is a high-performance observability toolkit built with Rust and eBPF, designed specifically for Kubernetes clusters running AI/ML workloads. It provides deep, low-level visibility into container networking, system calls, resource utilization, and GPU performance with minimal overhead.

The name "orb8" represents orbiting around your cluster infrastructure, continuously observing and monitoring from all angles.

Why orb8?

Existing Kubernetes observability tools tend to focus either on high-level metrics or on security-specific use cases. orb8 fills the gap by providing:

  • AI Cluster Optimized: Built for large-scale GPU/TPU/Trainium workloads (GPU telemetry planned for v0.8.0)
  • eBPF Performance: Targets sub-1% CPU overhead with zero-copy packet inspection
  • Kubernetes Native: Auto-discovery of pods, namespaces, and nodes via Kubernetes API
  • Minimal Overhead: Designed for production environments running massive compute clusters

Features

Current (In Development)

  • Network Flow Tracking: Real-time TCP/UDP/DNS flow monitoring per container (v0.4.0)
  • System Call Monitoring: Security anomaly detection via syscall pattern analysis (v0.5.0)
  • CPU Scheduling Analysis: Identify scheduling latency and CPU throttling (v0.6.0)
  • Memory Profiling: Track allocation patterns and predict OOM events (v0.6.0)

Planned

  • GPU Telemetry (v0.8.0 - Research Phase; an illustrative utilization-reading sketch follows this list):
    • GPU utilization tracking per pod (via DCGM)
    • GPU memory usage monitoring
    • Experimental: CUDA kernel execution tracing (feasibility TBD)
    • Multi-GPU workload balancing insights
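
The roadmap targets DCGM for per-pod GPU metrics. Purely as an illustration of the kind of per-device data involved, here is a hedged sketch using the nvml-wrapper crate; the crate choice is an assumption for illustration, not orb8's planned DCGM integration.

// Illustrative only: samples GPU utilization and memory via NVML using the
// nvml-wrapper crate. orb8's planned path is DCGM, so treat this as a sketch
// of the data involved, not the project's design.
use nvml_wrapper::Nvml;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let nvml = Nvml::init()?;
    for idx in 0..nvml.device_count()? {
        let device = nvml.device_by_index(idx)?;
        let util = device.utilization_rates()?; // percent busy in the last sample window
        let mem = device.memory_info()?;        // bytes used / total
        println!(
            "gpu{idx}: util={}% mem={}/{} MiB",
            util.gpu,
            mem.used / (1024 * 1024),
            mem.total / (1024 * 1024)
        );
    }
    Ok(())
}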

Kubernetes Integration

  • Auto-discovery of cluster resources
  • CRD-based configuration for selective tracing (see the hypothetical sketch below)
  • Prometheus metrics exporter
  • Real-time CLI dashboard
  • Namespace and pod-level filtering
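
The CRD-based configuration above is still being designed. As a hypothetical sketch of what a trace-selection resource could look like with kube-rs's CustomResource derive (the orb8.io group, TraceConfig kind, and field names are assumptions, not the actual schema):

// Hypothetical sketch of a CRD for selective tracing, using the kube-rs
// CustomResource derive. Group, kind, and fields are illustrative only.
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(CustomResource, Clone, Debug, Deserialize, Serialize, JsonSchema)]
#[kube(group = "orb8.io", version = "v1alpha1", kind = "TraceConfig", namespaced)]
pub struct TraceConfigSpec {
    /// Namespaces whose pods should be traced.
    pub namespaces: Vec<String>,
    /// Probes to enable for the selected pods, e.g. "network" or "syscall".
    pub probes: Vec<String>,
}

Applying such a resource would let the controller enable only the listed probes for the selected namespaces, rather than tracing everything cluster-wide.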

Installation

Prerequisites

For Building:

  • Rust 1.75+ (stable)
  • Rust nightly toolchain with rust-src component
  • bpf-linker (install via cargo install bpf-linker)

For Running eBPF Programs:

  • Linux kernel 5.8+ with BTF support
  • Kubernetes cluster 1.25+ (for production deployment)

Platform-Specific:

  • macOS: Lima + QEMU (for VM-based eBPF testing), 20GB free disk space
  • Linux: Native support, no VM needed
  • Windows: Use WSL2, follow Linux instructions

Future Features:

  • CUDA 11.0+ (for GPU telemetry, v0.8.0+)

From Source

git clone https://github.com/Ignoramuss/orb8.git
cd orb8
cargo build --release

Deploy to Kubernetes

kubectl apply -f deploy/orb8-daemonset.yaml

Quick Start

Note: orb8 is currently in Phase 0 (Foundation). eBPF probes will be functional in v0.2.0+.

Network Monitoring (Coming in v0.4.0)

# Monitor network flows for all pods in a namespace
orb8 trace network --namespace default

# Track DNS queries across the cluster
orb8 trace dns --all-namespaces

System Call Analysis (Coming in v0.5.0)

# Monitor syscalls for security anomalies
orb8 trace syscall --pod suspicious-pod-456

GPU Telemetry (Planned for v0.8.0)

# Monitor GPU utilization for AI workloads
orb8 trace gpu --namespace ml-training

Testing

Testing the eBPF Agent (Phase 1.2+)

Phase 1.2 implements the "Hello World" eBPF probe with full probe loading; a minimal loading sketch follows the list below.

What works:

  • eBPF probe compiles to bytecode
  • Agent loads probe into kernel
  • Probe attaches to network interfaces (loopback)
  • eBPF logs are captured in userspace
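
In outline, this loading path follows aya's documented API. A minimal sketch, assuming an XDP program named network_probe and an illustrative bytecode path (the agent source is authoritative):

// Minimal loading sketch with aya (0.12-era names). The bytecode path and
// program name are assumptions for illustration.
use aya::{include_bytes_aligned, programs::{Xdp, XdpFlags}, Bpf};
use aya_log::BpfLogger;

fn main() -> anyhow::Result<()> {
    env_logger::init();
    // Load the eBPF bytecode produced by the probe crate's build step.
    let mut bpf = Bpf::load(include_bytes_aligned!(
        "../../target/bpfel-unknown-none/release/network-probe"
    ))?;
    // Forward aya-log output from the probe into the agent's logger.
    BpfLogger::init(&mut bpf)?;
    // Attach the XDP program; generic (skb) mode also works on loopback.
    let program: &mut Xdp = bpf.program_mut("network_probe").unwrap().try_into()?;
    program.load()?;
    program.attach("lo", XdpFlags::SKB_MODE)?;
    // Keep running so events continue to flow; stop with Ctrl+C.
    std::thread::park();
    Ok(())
}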

Testing on macOS (via Lima VM):

# Build the agent (compiles eBPF probes automatically)
make build-agent

# Run the agent (requires sudo, use Ctrl+C to stop)
make run-agent

Testing on Linux (native):

# Build the agent
cargo build -p orb8-agent

# Run the agent (requires root for eBPF)
sudo ./target/debug/orb8-agent

Verifying it works:

  1. Start the agent with make run-agent
  2. In another terminal (inside VM if on macOS): ping 127.0.0.1
  3. You should see logs like the following (the probe-side code that emits this line is sketched after these steps):
    [INFO  network_probe] Hello from eBPF! packet_len=98
    
  4. Press Ctrl+C to stop the agent
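
The log line above originates in the probe itself. A hedged sketch of the probe side, written with aya-ebpf and aya-log-ebpf (the function name and crate layout are assumptions; see the orb8-probes crate for the real code):

// Sketch of an XDP probe that emits the "Hello from eBPF!" log line.
#![no_std]
#![no_main]

use aya_ebpf::{bindings::xdp_action, macros::xdp, programs::XdpContext};
use aya_log_ebpf::info;

#[xdp]
pub fn network_probe(ctx: XdpContext) -> u32 {
    // Log the packet length seen on the interface, then let the packet pass.
    let packet_len = ctx.data_end() - ctx.data();
    info!(&ctx, "Hello from eBPF! packet_len={}", packet_len);
    xdp_action::XDP_PASS
}

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}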

Linux Testing (Recommended: make magic-local)

# Verify your environment
make verify-setup

# Build, test, install (native, no VM)
make magic-local

Why magic-local? It is the direct, explicit target for native builds; on Linux, make magic simply redirects to magic-local anyway.

macOS Testing

For Phase 1.1 (build infrastructure only):

# Verify your environment
make verify-setup

# Quick testing without VM (recommended for Phase 1.1)
make magic-local

For Phase 1.2+ (when testing actual eBPF execution):

# Full testing with VM (can load eBPF into kernel)
make magic

What's the difference?

  • make magic-local: Builds on macOS, compiles eBPF to bytecode (fast, no VM)
  • make magic: Uses Lima VM, can actually load eBPF programs (required for Phase 1.2+)

Architecture

orb8 consists of three main components:

  1. eBPF Probe Manager: Dynamically loads and manages eBPF programs for network and syscall tracing (GPU planned)
  2. Kubernetes Controller: Watches cluster resources and orchestrates probe deployment (see the watcher sketch below)
  3. Metrics Pipeline: Aggregates eBPF events and exports to Prometheus/CLI
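
As an illustration of the controller's watching role, a minimal kube-rs watch loop could look like the sketch below; it assumes a recent kube release and is not the project's actual controller logic.

// Illustrative pod watch loop with kube-rs. A real controller would use these
// events to decide where probes should be attached or torn down.
use futures::TryStreamExt;
use k8s_openapi::api::core::v1::Pod;
use kube::{
    runtime::{watcher, WatchStreamExt},
    Api, Client,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let pods: Api<Pod> = Api::all(client);

    // Stream pod add/update events cluster-wide.
    let stream = watcher(pods, watcher::Config::default()).applied_objects();
    let mut stream = std::pin::pin!(stream);
    while let Some(pod) = stream.try_next().await? {
        println!("observed pod {:?}", pod.metadata.name);
    }
    Ok(())
}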

See docs/ARCHITECTURE.md for detailed design documentation.

Comparison with Existing Tools

Note: orb8 is in active development. The table shows planned capabilities (see Roadmap for timeline).

Feature              orb8            Pixie     Tetragon   kubectl-trace
GPU Telemetry        Yes (planned)   No        No         No
eBPF-based           Yes             Yes       Yes        Yes
Network Tracing      Yes (planned)   Yes       Yes        Partial
Syscall Monitoring   Yes (planned)   Partial   Yes        Yes
K8s Native           Yes             Yes       Yes        Yes
AI Workload Focus    Yes             No        No         No
Overhead             <1% (target)    ~2-5%     <1%        Varies

Roadmap

See ROADMAP.md for the full development plan.

Current Status: Pre-Alpha (Early Development)

In Progress:

  • Project initialization and scaffolding (v0.1.0)

Planned:

  • Core eBPF probe infrastructure (v0.2.0)
  • Kubernetes API integration (v0.3.0)
  • Network flow tracing (v0.4.0)
  • Prometheus exporter (v0.6.0)
  • CLI dashboard (v0.7.0)
  • GPU telemetry (v0.8.0)

Platform Support

orb8 requires Linux for eBPF functionality.

macOS

Development uses Lima/QEMU to provide a Linux VM with full eBPF support. Your code is automatically mounted from macOS into the VM.

Linux

Native development with direct kernel access. No VM required.

Windows

Use WSL2 and follow Linux instructions.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Quick Development Workflow

Step 1: Verify your environment

make verify-setup

Step 2: Build, test, and install

Linux:

make magic-local    # Native build, test, install
cargo build         # Or build manually
cargo test          # Run tests

macOS (Phase 1.1 - build infrastructure):

make magic-local    # Fast local testing
# eBPF compiles to bytecode but doesn't load (no kernel)

macOS (Phase 1.2+ - actual eBPF execution):

make magic          # Full VM-based testing
make shell          # Enter VM
orb8 --help

Expected output for Phase 1.1: Both platforms will show:

warning: target filter `bins` specified, but no targets matched
Finished `release` profile [optimized]

This is expected - probe binaries come in Phase 1.2.

Manual Development Setup

Linux:

# Verify environment
make verify-setup

# Build and test
cargo build
cargo test

# For Phase 1.1 specifically
cargo build -p orb8-probes          # eBPF build infrastructure
cargo clippy -p orb8-probes -- -D warnings

macOS:

# Quick setup (no VM)
make verify-setup
cargo build -p orb8-probes

# Full setup (with VM for eBPF execution)
make dev            # Creates VM (5-10 min first time)
make shell          # Enter VM
cargo build
cargo test

See docs/DEVELOPMENT.md for detailed setup instructions and troubleshooting.

License

Apache License 2.0 - see LICENSE for details.

Acknowledgments

Built with:

  • aya - Rust eBPF library
  • kube-rs - Kubernetes API client
  • ratatui - Terminal UI framework

Contact


Note: This project is in early development. GPU telemetry features require specific hardware and driver configurations. See docs/GPU_SETUP.md for details (coming soon).