EdgeFirst Studio Client
EdgeFirst Studio Client is the official command-line application and library for EdgeFirst Studio - the MLOps platform for 3D visual and 4D spatial perception AI. Available for Rust, Python, Android (Kotlin), and iOS/macOS (Swift). Automate dataset management, annotation workflows, model training, validation, and deployment for off-road vehicles, robotics, construction equipment, and industrial applications.
Overview
EdgeFirst Client provides seamless programmatic access to EdgeFirst Studio's comprehensive MLOps capabilities. Whether you're integrating Studio into your CI/CD pipeline, building custom training workflows, or automating data processing systems, EdgeFirst Client delivers the production-grade reliability you need.
Trusted by EdgeFirst Studio: This client library powers EdgeFirst Studio's internal training and validation services, providing a battle-tested foundation for production workloads.
Key Capabilities
- 📦 MCAP Publishing: Upload sensor recordings for automated ground-truth generation (AGTG)
- 🏷️ Dataset Management: Download datasets and annotations in multiple formats
- 🎯 Training & Validation: Monitor sessions, publish metrics, manage model artifacts
- 🚀 Model Artifacts: Upload and download trained models (ONNX, TensorFlow Lite, H5, etc.)
- 📊 Multiple Formats: Darknet/YOLO, EdgeFirst Dataset Format (Arrow), user-defined formats
- 🔌 Seamless Integration: Direct REST API access to all EdgeFirst Studio features
Features
Dataset Management
- Create snapshots from MCAP files, directories, or EdgeFirst Dataset format (Zip/Arrow)
- Upload MCAP recordings for AGTG (Automated Ground-Truth Generation) workflow
- Restore snapshots with automatic annotation (`--autolabel`) and depth map generation (`--autodepth`)
- Download datasets with support for images, LiDAR PCD, depth maps, and radar data
- Download annotations in JSON or Arrow format (EdgeFirst Dataset Format)
- Dataset groups and filtering for flexible data organization
Training Workflows
- List and manage experiments (training session groups)
- Monitor training sessions with real-time status tracking
- Publish training metrics to EdgeFirst Studio during model training
- Upload custom training artifacts for experiment tracking
- Download model artifacts and training logs
- Access model and dataset parameters for reproducibility
Validation Workflows
- List and manage validation sessions across projects
- Publish validation metrics to EdgeFirst Studio
- Upload validation files and results for analysis
- Download validation artifacts including performance reports
- Track validation task progress with status monitoring
Model Artifact Management
- Publish (upload) model artifacts from training sessions
- Download trained models in various formats (ONNX, TensorFlow Lite, H5, PyTorch, etc.)
- Used internally by EdgeFirst Studio trainers and validators
- Artifact versioning and experiment tracking
Multiple Dataset Formats
- Darknet/YOLO: Industry-standard annotation formats for object detection
- EdgeFirst Dataset Format: Arrow-based format for efficient data handling and 3D perception
- User-defined formats: API flexibility for custom dataset structures
EdgeFirst Studio Integration
- One-click deployment from EdgeFirst Studio UI
- Automatic optimization for edge devices
- Performance monitoring and analytics
- A/B testing and gradual rollouts
- Direct API access to all Studio features
Additional Features
- Task management: List and monitor background processing tasks
- Project operations: Browse and search projects and datasets
- Annotation sets: Support for multiple annotation versions per dataset
- Progress tracking: Real-time progress updates for uploads and downloads
- 3D perception support: LiDAR, RADAR, Point Cloud, depth maps
Installation
Via Cargo (Rust)
Via Pip (Python)
Mobile SDKs (Android & iOS/macOS)
Download the SDK packages from GitHub Releases:
- Android: `edgefirst-client-android-{version}.zip` - Kotlin bindings with JNI libraries
- iOS/macOS: `edgefirst-client-swift-{version}.zip` - Swift bindings with XCFramework
See platform-specific documentation for integration instructions:
From Source
System Requirements
- MSRV (Minimum Supported Rust Version): Rust 1.90+ (Rust 2024 Edition)
- Python: 3.8+ (for Python bindings)
- Network: Access to EdgeFirst Studio (*.edgefirst.studio)
Quick Start
CLI Authentication
# Login (stores token locally for 7 days)
# View your organization info
# Use environment variables (recommended for CI/CD)
Common CLI Workflows
Download Datasets and Annotations
# List projects and datasets
# Download dataset with images
# Download annotations in Arrow format (EdgeFirst Dataset Format)
# Upload samples to dataset
For complete upload format specifications, see EdgeFirst Dataset Format.
Monitor Training and Download Models
# List training experiments
# Monitor training sessions
# Get training session details with artifacts
# Download trained model
Work with Snapshots
Snapshots preserve complete copies of sensor data, datasets, or directories for versioning and backup. Restore them with optional automatic annotation (AGTG) and depth map generation.
# List all snapshots
# Create snapshot from MCAP file
# Create snapshot from directory
# Download snapshot
# Restore snapshot to new dataset
# Restore with automatic annotation (AGTG)
# Restore with AGTG and depth map generation
# Delete snapshot
For detailed snapshot documentation, see the EdgeFirst Studio Snapshots Guide.
EdgeFirst Dataset Format
EdgeFirst Client provides tools for working with the EdgeFirst Dataset Format - an Arrow-based format optimized for 3D perception AI workflows.
What the CLI Provides
The create-snapshot command intelligently handles multiple input types:
- Folder of images: Automatically generates a `dataset.arrow` manifest and `dataset.zip`, then uploads
- Arrow manifest file: Auto-discovers the matching `dataset.zip` or `dataset/` folder for images
- Complete dataset directory: Validates structure and uploads as-is
- Server-side dataset: Creates snapshot from existing dataset in EdgeFirst Studio
Supported Input Structures
1. Simple folder of images (CLI handles conversion automatically):
my_images/
├── image001.jpg
├── image002.jpg
└── image003.png
2. Sequence-based dataset (video frames with temporal ordering):
my_dataset.arrow # Annotation manifest
my_dataset/ # Sensor container (or my_dataset.zip)
└── sequence_name/
├── sequence_name_001.camera.jpeg
├── sequence_name_002.camera.jpeg
└── sequence_name_003.camera.jpeg
3. Mixed dataset (sequences + standalone images):
my_dataset.arrow
my_dataset/
├── video_sequence/
│ └── video_sequence_*.camera.jpeg
├── standalone_image1.jpg
└── standalone_image2.png
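To illustrate the mixed layout above, the following sketch separates sequence subdirectories from standalone images in a dataset root. This is illustrative helper code, not part of the client API:

```python
import os
import tempfile

def split_dataset(root):
    """Separate sequence subdirectories from standalone images in a dataset root."""
    image_exts = (".jpg", ".jpeg", ".png")
    sequences, standalone = [], []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            sequences.append(entry)          # e.g. video_sequence/
        elif entry.lower().endswith(image_exts):
            standalone.append(entry)         # e.g. standalone_image1.jpg
    return sequences, standalone

# Build the mixed layout from the example above, then split it.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "video_sequence"))
    open(os.path.join(root, "video_sequence", "video_sequence_001.camera.jpeg"), "w").close()
    open(os.path.join(root, "standalone_image1.jpg"), "w").close()
    open(os.path.join(root, "standalone_image2.png"), "w").close()
    print(split_dataset(root))  # (['video_sequence'], ['standalone_image1.jpg', 'standalone_image2.png'])
```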
CLI Examples
# Upload a folder of images (auto-generates Arrow manifest and ZIP)
# Upload using existing Arrow manifest (auto-discovers dataset.zip or dataset/)
# Upload complete dataset directory
# Create snapshot from server-side dataset (with default annotation set)
# Create snapshot from server-side dataset with specific annotation set
# Monitor server-side snapshot creation progress
# Generate Arrow manifest from images (without uploading)
# Generate with sequence detection for video frames
# Validate dataset structure before upload
Sequence Detection (--detect-sequences)
The --detect-sequences flag enables automatic detection of video frame sequences based on filename patterns. When enabled, the CLI parses filenames to identify temporal ordering.
How it works:
- Pattern matching: Looks for the `{name}_{frame}.{ext}` pattern (e.g., `video_001.jpg`, `camera_042.png`)
- Extracts frame number: The trailing numeric part after the last underscore becomes the frame index
- Groups by name: Files with the same prefix are grouped into sequences
Detection behavior:
| Input | `--detect-sequences` OFF | `--detect-sequences` ON |
|---|---|---|
| `image.jpg` | name=`image`, frame=null | name=`image`, frame=null |
| `seq_001.jpg` | name=`seq_001`, frame=null | name=`seq`, frame=1 |
| `camera_042.camera.jpeg` | name=`camera_042`, frame=null | name=`camera`, frame=42 |
| `video/video_100.jpg` | name=`video_100`, frame=null | name=`video`, frame=100 |
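The parsing rule in the table can be sketched in a few lines. This is an illustrative re-implementation of the documented behavior, not the client's actual code; compound sensor suffixes such as `.camera.jpeg` are treated as part of the extension:

```python
import os
import re

def parse_name(path, detect_sequences=False):
    """Split a file path into (sequence name, frame index) per --detect-sequences."""
    base = os.path.basename(path)
    base = base.split(".", 1)[0]        # drop extension(s), e.g. ".camera.jpeg"
    if detect_sequences:
        # Trailing numeric part after the last underscore becomes the frame index.
        m = re.fullmatch(r"(.+)_(\d+)", base)
        if m:
            return m.group(1), int(m.group(2))
    return base, None

print(parse_name("image.jpg", True))               # ('image', None)
print(parse_name("seq_001.jpg", True))             # ('seq', 1)
print(parse_name("camera_042.camera.jpeg", True))  # ('camera', 42)
print(parse_name("video/video_100.jpg", False))    # ('video_100', None)
```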
Supported structures:
- Nested: `sequence_name/sequence_name_001.jpg` (frames in subdirectories)
- Flattened: `sequence_name_001.jpg` (frames at root level)
⚠️ False positive considerations:
Files with names like model_v2.jpg or sample_2024.png may be incorrectly detected as sequences when --detect-sequences is enabled. If your dataset contains non-sequence files with _number suffixes, consider:
- Renaming files to avoid the `_N` pattern (e.g., `model-v2.jpg`)
- Omitting `--detect-sequences` and manually organizing sequences into subdirectories
Supported File Types
Images: .jpg, .jpeg, .png, .camera.jpeg, .camera.png
Point Clouds: .lidar.pcd (LiDAR), .radar.pcd (Radar)
Depth Maps: .depth.png (16-bit PNG)
Radar Cubes: .radar.png (16-bit PNG with embedded dimension metadata)
See DATASET_FORMAT.md for technical details on radar cube encoding.
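The suffix conventions above can be applied with a small classifier. This is an illustrative sketch based solely on the suffixes listed here, not part of the client API (note that the compound suffixes must be checked before the plain `.png`/`.pcd` extensions):

```python
def classify_file(name):
    """Map a dataset filename to its sensor modality using the documented suffixes."""
    name = name.lower()
    if name.endswith(".lidar.pcd"):
        return "lidar"
    if name.endswith(".radar.pcd"):
        return "radar"
    if name.endswith(".depth.png"):
        return "depth"          # 16-bit PNG depth map
    if name.endswith(".radar.png"):
        return "radar_cube"     # 16-bit PNG with embedded dimension metadata
    if name.endswith((".jpg", ".jpeg", ".png")):
        return "image"          # includes .camera.jpeg / .camera.png
    return "unknown"

print(classify_file("seq_001.camera.jpeg"))  # image
print(classify_file("seq_001.lidar.pcd"))    # lidar
print(classify_file("seq_001.depth.png"))    # depth
```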
Annotation Support
The create-snapshot command uploads datasets with or without annotations:
- With annotations: Provide an Arrow file containing annotations (see DATASET_FORMAT.md for schema)
- Without annotations: The CLI generates an Arrow manifest with null annotation fields
When uploading unannotated datasets, EdgeFirst Studio can populate annotations via:
- Manual annotation in the Studio web interface
- AGTG (Automated Ground-Truth Generation) via `restore-snapshot --autolabel` (MCAP snapshots only)
Note: The CLI does not currently parse annotations from other formats (e.g., COCO, YOLO). To upload pre-annotated datasets from these formats, first convert them to EdgeFirst Dataset Format using the annotation schema in DATASET_FORMAT.md.
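As a starting point for such a conversion, a Darknet/YOLO label line (`class cx cy w h`, all coordinates normalized to [0, 1]) can be parsed into a neutral dictionary before mapping it onto the EdgeFirst annotation schema. The target schema itself is defined in DATASET_FORMAT.md and is not reproduced here; this parser is a generic sketch:

```python
def parse_yolo_line(line):
    """Parse one Darknet/YOLO label line into a normalized center-format box dict."""
    class_id, cx, cy, w, h = line.split()
    return {
        "class_id": int(class_id),
        # Box center and size, normalized to image dimensions.
        "cx": float(cx), "cy": float(cy),
        "w": float(w), "h": float(h),
    }

box = parse_yolo_line("0 0.5 0.5 0.25 0.1")
print(box["class_id"], box["w"])  # 0 0.25
```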
Rust API
```rust
// Illustrative sketch: import paths and signatures may differ;
// see docs.rs/edgefirst-client for the authoritative API.
use edgefirst_client::{generate_arrow_from_folder, validate_dataset_structure};
use std::path::PathBuf;

// Generate Arrow manifest from images
let images_dir = PathBuf::from("my_images");
let output = PathBuf::from("my_images.arrow");
let count = generate_arrow_from_folder(&images_dir, &output)?;
println!("Generated manifest for {count} images");

// Validate dataset structure before upload
let issues = validate_dataset_structure(&images_dir)?;
for issue in &issues {
    println!("{issue}");
}
```
Python API
```python
# Illustrative sketch: method names and parameters may differ;
# see the Python API documentation for the authoritative API.
from edgefirst_client import Client

client = Client()
client.login("username", "password")

# Create snapshot from local folder (auto-generates manifest)
snapshot = client.create_snapshot("my_images/")
task = client.wait_for_task(snapshot.task_id)

# Create snapshot from server-side dataset
snapshot = client.create_snapshot(dataset_id=1234)

# Create snapshot with explicit annotation set
snapshot = client.create_snapshot(dataset_id=1234, annotation_set_id=5678)
```
For complete format specification, see EdgeFirst Dataset Format Documentation or DATASET_FORMAT.md.
Rust Library
```rust
// Illustrative sketch: exact types and method names may differ;
// see docs.rs/edgefirst-client for the authoritative API.
use edgefirst_client::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new("https://example.edgefirst.studio")?;
    client.login("username", "password").await?;

    // List projects (method names illustrative)
    for project in client.projects().await? {
        println!("{}", project.name);
    }
    Ok(())
}
```
Python Library
```python
# Illustrative sketch: method names and parameters may differ;
# see the Python API documentation for the authoritative API.
from edgefirst_client import Client

# Create client and authenticate
client = Client()
token = client.login("username", "password")

# List projects and datasets
projects = client.projects()
datasets = client.datasets(projects[0].id)

# Publish validation metrics (used by validators)
# Note: Replace with your actual validation session ID
session_id = 1234
result = client.publish_validation_metrics(session_id, {"mAP": 0.82})
```
Architecture
EdgeFirst Client is a REST API client built with:
- TLS 1.2+ enforcement for secure communication with EdgeFirst Studio
- Session token authentication with automatic renewal
- Progress tracking for long-running uploads/downloads
- Async operations powered by Tokio runtime (Rust)
- Memory-efficient streaming for large dataset transfers
Documentation
- EdgeFirst Studio Docs: doc.edgefirst.ai
- Rust API Documentation: docs.rs/edgefirst-client
- Python API Documentation: Available on PyPI
- Android SDK Documentation: See ANDROID.md
- iOS/macOS SDK Documentation: See APPLE.md
- CLI Man Page: See CLI.md
- Dataset Format Specification: EdgeFirst Dataset Format
- AGTG Workflow Tutorial: Automated Ground-Truth Generation
Support
Community Resources
- 📚 Documentation - Comprehensive guides and tutorials
- 💬 GitHub Discussions - Ask questions and share ideas
- 🐛 Issue Tracker - Report bugs and request features
EdgeFirst Ecosystem
This client is the official API gateway for EdgeFirst Studio - the complete MLOps platform for 3D visual and 4D spatial perception AI:
🚀 EdgeFirst Studio Features:
- Dataset Management: Organize, annotate, and version your perception datasets
- Automated Ground-Truth Generation (AGTG): Upload MCAP recordings and get automatic annotations
- Model Training: Train custom perception models with your datasets
- Validation & Testing: Comprehensive model validation and performance analysis
- Deployment: Deploy models to edge devices with optimized inference
- Monitoring: Real-time performance monitoring and analytics
- Collaboration: Team workspaces and project management
💰 Free Tier Available:
- 100,000 images
- 10 hours of training per month
- Full access to all features
- No credit card required
Hardware Platforms
EdgeFirst Client works seamlessly with EdgeFirst Modules:
- Operates reliably in harsh conditions with an IP67-rated enclosure and a -40°C to +65°C operating range
- Integrated on-device dataset collection, playback, and publishing
- Deploy models onto EdgeFirst Modules with full AI acceleration up to 40 TOPS
- Reference designs and custom hardware development services
Professional Services
Au-Zone Technologies offers comprehensive support for production deployments:
- Training & Workshops - Accelerate your team's expertise with EdgeFirst Studio
- Custom Development - Extend capabilities for your specific use cases
- Integration Services - Seamlessly connect with your existing systems and workflows
- Enterprise Support - SLAs, priority fixes, and dedicated support channels
📧 Contact: support@au-zone.com 🌐 Learn more: au-zone.com
Contributing
Contributions are welcome! Please:
- Read the Contributing Guidelines
- Check existing issues or create a new one
- Fork the repository and create a feature branch
- Submit a pull request with clear descriptions
Using AI Coding Agents? See AGENTS.md for project conventions, build commands, and pre-commit requirements.
Code Quality
This project uses SonarCloud for automated code quality analysis. Contributors can download findings and use GitHub Copilot to help fix issues:
See CONTRIBUTING.md for details.
Security
For security vulnerabilities, please use our responsible disclosure process:
- GitHub Security Advisories: Report a vulnerability
- Email: support@au-zone.com with subject "[SECURITY] EdgeFirst Client"
See SECURITY.md for complete security policy and best practices.
License
Licensed under the Apache License 2.0 - see LICENSE for details.
Copyright 2025 Au-Zone Technologies
See NOTICE for third-party software attributions included in binary releases.
🚀 Ready to streamline your perception AI workflows?
Try EdgeFirst Studio Free - No credit card required • 100,000 images • 10 hours training/month