Client

Struct Client 

Source
pub struct Client { /* private fields */ }

Main client for interacting with the EdgeFirst Studio Server.

The EdgeFirst Client handles the connection to the EdgeFirst Studio Server and manages authentication, RPC calls, and data operations. It provides methods for managing projects, datasets, experiments, training sessions, and various utility functions for data processing.

The client supports multiple authentication methods and can work with both SaaS and self-hosted EdgeFirst Studio instances.

§Features

  • Authentication: Token-based authentication with automatic persistence
  • Dataset Management: Upload, download, and manipulate datasets
  • Project Operations: Create and manage projects and experiments
  • Training & Validation: Submit and monitor ML training jobs
  • Data Integration: Convert between EdgeFirst datasets and popular formats
  • Progress Tracking: Real-time progress updates for long-running operations

§Examples

use edgefirst_client::{Client, DatasetID};
use std::str::FromStr;

// Create a new client and authenticate
let client = Client::new()?;
let client = client
    .with_login("your-email@example.com", "password")
    .await?;

// Or use an existing token
let base_client = Client::new()?;
let client = base_client.with_token("your-token-here")?;

// Get organization and projects
let org = client.organization().await?;
let projects = client.projects(None).await?;

// Work with datasets
let dataset_id = DatasetID::from_str("ds-abc123")?;
let dataset = client.dataset(dataset_id).await?;

Client implements Clone but cannot derive Debug because it holds a dyn TokenStorage trait object.

Implementations§

Source§

impl Client

Source

pub fn new() -> Result<Self, Error>

Create a new unauthenticated client with the default SaaS server.

By default, the client uses FileTokenStorage for token persistence. Use with_storage, with_memory_storage, or with_no_storage to configure storage behavior.

To connect to a different server, use with_server or with_token (tokens include the server instance).

This client is created without a token and will need to authenticate before using methods that require authentication.

§Examples
use edgefirst_client::Client;

// Create client with default file storage
let client = Client::new()?;

// Create client without token persistence
let client = Client::new()?.with_memory_storage();
Source

pub fn with_server(&self, server: &str) -> Result<Self, Error>

Returns a new client connected to the specified server instance.

The server parameter is an instance name that maps to a URL:

  • "" or "saas" → https://edgefirst.studio
  • "test" → https://test.edgefirst.studio
  • "stage" → https://stage.edgefirst.studio
  • "dev" → https://dev.edgefirst.studio
  • "{name}" → https://{name}.edgefirst.studio

If a token is already set in the client, it will be dropped as tokens are specific to the server instance.

§Examples
use edgefirst_client::Client;

let client = Client::new()?.with_server("test")?;
assert_eq!(client.url(), "https://test.edgefirst.studio");
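The instance-name mapping above is simple enough to sketch as a pure function. This is an illustration of the documented mapping, not the crate's actual implementation:

```rust
// Sketch of the documented instance-name → URL mapping
// (illustrative only; not the crate's implementation).
fn server_url(instance: &str) -> String {
    match instance {
        "" | "saas" => "https://edgefirst.studio".to_string(),
        name => format!("https://{name}.edgefirst.studio"),
    }
}

fn main() {
    assert_eq!(server_url(""), "https://edgefirst.studio");
    assert_eq!(server_url("saas"), "https://edgefirst.studio");
    assert_eq!(server_url("test"), "https://test.edgefirst.studio");
    assert_eq!(server_url("dev"), "https://dev.edgefirst.studio");
}
```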
Source

pub fn with_storage(self, storage: Arc<dyn TokenStorage>) -> Self

Returns a new client with the specified token storage backend.

Use this to configure custom token storage, such as platform-specific secure storage (iOS Keychain, Android EncryptedSharedPreferences).

§Examples
use edgefirst_client::{Client, FileTokenStorage};
use std::{path::PathBuf, sync::Arc};

// Use a custom file path for token storage
let storage = FileTokenStorage::with_path(PathBuf::from("/custom/path/token"));
let client = Client::new()?.with_storage(Arc::new(storage));
Source

pub fn with_memory_storage(self) -> Self

Returns a new client with in-memory token storage (no persistence).

Tokens are stored in memory only and lost when the application exits. This is useful for testing or when you want to manage token persistence externally.

§Examples
use edgefirst_client::Client;

let client = Client::new()?.with_memory_storage();
Source

pub fn with_no_storage(self) -> Self

Returns a new client with no token storage.

Tokens are not persisted. Use this when you want to manage tokens entirely manually.

§Examples
use edgefirst_client::Client;

let client = Client::new()?.with_no_storage();
Source

pub async fn with_login( &self, username: &str, password: &str, ) -> Result<Self, Error>

Returns a new client authenticated with the provided username and password.

The token is automatically persisted to storage (if configured).

§Examples
use edgefirst_client::Client;

let client = Client::new()?
    .with_server("test")?
    .with_login("user@example.com", "password")
    .await?;
Source

pub fn with_token_path(&self, token_path: Option<&Path>) -> Result<Self, Error>

Returns a new client which will load and save the token to the specified path.

Deprecated: Use with_storage with FileTokenStorage instead for more flexible token management.

This method is maintained for backwards compatibility with existing code. It disables the default storage and uses file-based storage at the specified path.

Source

pub fn with_token(&self, token: &str) -> Result<Self, Error>

Returns a new client authenticated with the provided token.

The token is automatically persisted to storage (if configured). The server URL is extracted from the token payload.

§Examples
use edgefirst_client::Client;

let client = Client::new()?.with_token("your-jwt-token")?;
Source

pub async fn save_token(&self) -> Result<(), Error>

Persist the current token to storage.

This is automatically called when using with_login or with_token, so you typically don’t need to call this directly.

If using the legacy token_path configuration, saves to the file path. If using the new storage abstraction, saves to the configured storage.

Source

pub async fn version(&self) -> Result<String, Error>

Return the version of the EdgeFirst Studio server for the current client connection.

Source

pub async fn logout(&self) -> Result<(), Error>

Clear the token used to authenticate the client with the server.

Clears the token from memory and from storage (if configured). If using the legacy token_path configuration, removes the token file.

Source

pub async fn token(&self) -> String

Return the token used to authenticate the client with the server. When logging into the server using a username and password, the token is returned by the server and stored in the client for future interactions.

Source

pub async fn verify_token(&self) -> Result<(), Error>

Verify the token used to authenticate the client with the server. This method is used to ensure that the token is still valid and has not expired. If the token is invalid, the server will return an error and the client will need to login again.

Source

pub async fn renew_token(&self) -> Result<(), Error>

Renew the token used to authenticate the client with the server.

Refreshes the token before it expires. If the token has already expired, the server will return an error and you will need to login again.

The new token is automatically persisted to storage (if configured).
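Because an expired token cannot be renewed, a caller typically checks the remaining validity and renews ahead of expiry. A minimal sketch of that decision, assuming a hypothetical helper and margin (neither is part of the crate's API):

```rust
// Sketch: decide whether to call renew_token before the token expires.
// The helper name and margin are illustrative, not part of the crate.
fn should_renew(expires_in_secs: i64, margin_secs: i64) -> bool {
    // Renew once less than `margin_secs` of validity remains, but only
    // while the token is still valid (an expired token cannot be renewed).
    expires_in_secs > 0 && expires_in_secs < margin_secs
}

fn main() {
    assert!(should_renew(120, 300)); // expiring soon: renew now
    assert!(!should_renew(3600, 300)); // plenty of time left
    assert!(!should_renew(-10, 300)); // already expired: must login again
}
```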

Source

pub fn url(&self) -> &str

Returns the URL of the EdgeFirst Studio server for the current client.

Source

pub async fn username(&self) -> Result<String, Error>

Returns the username associated with the current token.

Source

pub async fn token_expiration(&self) -> Result<DateTime<Utc>, Error>

Returns the expiration time for the current token.

Source

pub async fn organization(&self) -> Result<Organization, Error>

Returns the organization information for the current user.

Source

pub async fn projects(&self, name: Option<&str>) -> Result<Vec<Project>, Error>

Returns a list of projects available to the user. The projects are returned as a vector of Project objects. If a name filter is provided, only projects matching the filter are returned.

Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically.

Projects are the top-level organizational unit in EdgeFirst Studio. Projects contain datasets, trainers, and trainer sessions. Projects are used to group related datasets and trainers together.

Source

pub async fn project(&self, project_id: ProjectID) -> Result<Project, Error>

Return the project with the specified project ID. If the project does not exist, an error is returned.

Source

pub async fn datasets( &self, project_id: ProjectID, name: Option<&str>, ) -> Result<Vec<Dataset>, Error>

Returns a list of datasets available to the user. The datasets are returned as a vector of Dataset objects. If a name filter is provided, only datasets matching the filter are returned.

Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically. This ensures “Deer” returns before “Deer Roundtrip”.
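The documented ordering (exact match, then case-insensitive exact match, then shorter names, then alphabetical) can be sketched as a sort key. This is an illustration of the rule, not the crate's actual implementation:

```rust
// Sketch of the documented result ordering: exact matches first, then
// case-insensitive exact matches, then shorter (more specific) names,
// then alphabetical. Illustrative only; not the crate's implementation.
fn sort_by_match_quality(names: &mut Vec<String>, query: &str) {
    names.sort_by_key(|n| {
        (
            n != query,                     // exact matches first
            !n.eq_ignore_ascii_case(query), // then case-insensitive exact
            n.len(),                        // then shorter names
            n.clone(),                      // then alphabetical
        )
    });
}

fn main() {
    let mut names = vec![
        "Deer Roundtrip".to_string(),
        "deer".to_string(),
        "Deer".to_string(),
    ];
    sort_by_match_quality(&mut names, "Deer");
    // "Deer" sorts before "Deer Roundtrip", as documented.
    assert_eq!(names, ["Deer", "deer", "Deer Roundtrip"]);
}
```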

Source

pub async fn dataset(&self, dataset_id: DatasetID) -> Result<Dataset, Error>

Return the dataset with the specified dataset ID. If the dataset does not exist, an error is returned.

Source

pub async fn labels(&self, dataset_id: DatasetID) -> Result<Vec<Label>, Error>

Lists the labels for the specified dataset.

Source

pub async fn add_label( &self, dataset_id: DatasetID, name: &str, ) -> Result<(), Error>

Add a new label to the dataset with the specified name.

Source

pub async fn remove_label(&self, label_id: u64) -> Result<(), Error>

Removes the label with the specified ID from the dataset. Label IDs are globally unique so the dataset_id is not required.

Source

pub async fn create_dataset( &self, project_id: &str, name: &str, description: Option<&str>, ) -> Result<DatasetID, Error>

Creates a new dataset in the specified project.

§Arguments
  • project_id - The ID of the project to create the dataset in
  • name - The name of the new dataset
  • description - Optional description for the dataset
§Returns

Returns the dataset ID of the newly created dataset.

Source

pub async fn delete_dataset(&self, dataset_id: DatasetID) -> Result<(), Error>

Deletes a dataset by marking it as deleted.

§Arguments
  • dataset_id - The ID of the dataset to delete
§Returns

Returns Ok(()) if the dataset was successfully marked as deleted.

Source

pub async fn update_label(&self, label: &Label) -> Result<(), Error>

Updates the label with the specified ID to have the new name or index. Label IDs cannot be changed. Label IDs are globally unique so the dataset_id is not required.

Source

pub async fn download_dataset( &self, dataset_id: DatasetID, groups: &[String], file_types: &[FileType], output: PathBuf, flatten: bool, progress: Option<Sender<Progress>>, ) -> Result<(), Error>

Downloads dataset samples to the local filesystem.

§Arguments
  • dataset_id - The unique identifier of the dataset
  • groups - Dataset groups to include (e.g., “train”, “val”)
  • file_types - File types to download (e.g., Image, LidarPcd)
  • output - Local directory to save downloaded files
  • flatten - If true, download all files to output root without sequence subdirectories. When flattening, filenames are prefixed with {sequence_name}_{frame}_ (or {sequence_name}_ if frame is unavailable) unless the filename already starts with {sequence_name}_, to avoid conflicts between sequences.
  • progress - Optional channel for progress updates
§Returns

Returns Ok(()) on success or an error if download fails.

§Example
let client = Client::new()?.with_token_path(None)?;
let dataset_id: DatasetID = "ds-123".try_into()?;

// Download with sequence subdirectories (default)
client
    .download_dataset(
        dataset_id,
        &[],
        &[FileType::Image],
        "./data".into(),
        false,
        None,
    )
    .await?;

// Download flattened (all files in one directory)
client
    .download_dataset(
        dataset_id,
        &[],
        &[FileType::Image],
        "./data".into(),
        true,
        None,
    )
    .await?;
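The flatten-mode filename rule described under the flatten argument can be sketched as a pure function. The helper is illustrative, not part of the crate's API:

```rust
// Sketch of the documented flatten-mode rule: prefix filenames with
// "{sequence_name}_{frame}_" (or "{sequence_name}_" when no frame is
// available) unless the name already starts with "{sequence_name}_".
// Illustrative only; not the crate's implementation.
fn flattened_name(sequence: &str, frame: Option<u32>, filename: &str) -> String {
    if filename.starts_with(&format!("{sequence}_")) {
        return filename.to_string(); // already prefixed: avoid double prefix
    }
    match frame {
        Some(f) => format!("{sequence}_{f}_{filename}"),
        None => format!("{sequence}_{filename}"),
    }
}

fn main() {
    assert_eq!(flattened_name("seq01", Some(42), "image.jpg"), "seq01_42_image.jpg");
    assert_eq!(flattened_name("seq01", None, "image.jpg"), "seq01_image.jpg");
    // Already prefixed with the sequence name: left untouched.
    assert_eq!(flattened_name("seq01", Some(42), "seq01_image.jpg"), "seq01_image.jpg");
}
```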
Source

pub async fn annotation_sets( &self, dataset_id: DatasetID, ) -> Result<Vec<AnnotationSet>, Error>

List available annotation sets for the specified dataset.

Source

pub async fn create_annotation_set( &self, dataset_id: DatasetID, name: &str, description: Option<&str>, ) -> Result<AnnotationSetID, Error>

Create a new annotation set for the specified dataset.

§Arguments
  • dataset_id - The ID of the dataset to create the annotation set in
  • name - The name of the new annotation set
  • description - Optional description for the annotation set
§Returns

Returns the annotation set ID of the newly created annotation set.

Source

pub async fn delete_annotation_set( &self, annotation_set_id: AnnotationSetID, ) -> Result<(), Error>

Deletes an annotation set by marking it as deleted.

§Arguments
  • annotation_set_id - The ID of the annotation set to delete
§Returns

Returns Ok(()) if the annotation set was successfully marked as deleted.

Source

pub async fn annotation_set( &self, annotation_set_id: AnnotationSetID, ) -> Result<AnnotationSet, Error>

Retrieve the annotation set with the specified ID.

Source

pub async fn annotations( &self, annotation_set_id: AnnotationSetID, groups: &[String], annotation_types: &[AnnotationType], progress: Option<Sender<Progress>>, ) -> Result<Vec<Annotation>, Error>

Get the annotations for the specified annotation set with the requested annotation types. The annotation types are used to filter the annotations returned. The groups parameter is used to filter for dataset groups (train, val, test). Images which do not have any annotations are also included in the result as long as they are in the requested groups (when specified).

The result is a vector of Annotation objects which contain the full dataset along with the annotations for the specified types.

To get the annotations as a DataFrame, use the annotations_dataframe method instead.

Source

pub async fn samples_count( &self, dataset_id: DatasetID, annotation_set_id: Option<AnnotationSetID>, annotation_types: &[AnnotationType], groups: &[String], types: &[FileType], ) -> Result<SamplesCountResult, Error>

Source

pub async fn samples( &self, dataset_id: DatasetID, annotation_set_id: Option<AnnotationSetID>, annotation_types: &[AnnotationType], groups: &[String], types: &[FileType], progress: Option<Sender<Progress>>, ) -> Result<Vec<Sample>, Error>

Source

pub async fn populate_samples( &self, dataset_id: DatasetID, annotation_set_id: Option<AnnotationSetID>, samples: Vec<Sample>, progress: Option<Sender<Progress>>, ) -> Result<Vec<SamplesPopulateResult>, Error>

Populates (imports) samples into a dataset using the samples.populate2 API.

This method creates new samples in the specified dataset, optionally with annotations and sensor data files. For each sample, the files field is checked for local file paths. If a filename is a valid path to an existing file, the file will be automatically uploaded to S3 using presigned URLs returned by the server. The filename in the request is replaced with the basename (path removed) before sending to the server.

§Important Notes
  • annotation_set_id is REQUIRED when importing samples with annotations. Without it, the server will accept the request but will not save the annotation data. Use Client::annotation_sets to query available annotation sets for a dataset, or create a new one via the Studio UI.
  • Box2d coordinates must be normalized (0.0-1.0 range) for bounding boxes. Divide pixel coordinates by image width/height before creating Box2d annotations.
  • Files are uploaded automatically when the filename is a valid local path. The method will replace the full path with just the basename before sending to the server.
  • Image dimensions are extracted automatically for image files using the imagesize crate. The width/height are sent to the server, but note that the server currently doesn’t return these fields when fetching samples back.
  • UUIDs are generated automatically if not provided. If you need deterministic UUIDs, set sample.uuid explicitly before calling. Note that the server doesn’t currently return UUIDs in sample queries.
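The Box2d normalization described above is plain arithmetic: divide pixel coordinates by the image dimensions. A minimal sketch, with a hypothetical helper name (only the arithmetic comes from the docs):

```rust
// Sketch: convert a pixel-space box (x, y, w, h) into the normalized
// 0.0-1.0 coordinates Box2d expects. The helper is illustrative and
// not part of the crate's API.
fn normalize_box(x: f64, y: f64, w: f64, h: f64, img_w: f64, img_h: f64) -> (f64, f64, f64, f64) {
    (x / img_w, y / img_h, w / img_w, h / img_h)
}

fn main() {
    // A 480x270 pixel box at (960, 540) in a 1920x1080 image.
    let (x, y, w, h) = normalize_box(960.0, 540.0, 480.0, 270.0, 1920.0, 1080.0);
    assert_eq!((x, y, w, h), (0.5, 0.5, 0.25, 0.25));
}
```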
§Arguments
  • dataset_id - The ID of the dataset to populate
  • annotation_set_id - Required if samples contain annotations, otherwise they will be ignored. Query with Client::annotation_sets.
  • samples - Vector of samples to import with metadata and file references. For files, use the full local path - it will be uploaded automatically. UUIDs and image dimensions will be auto-generated/extracted if not provided.
§Returns

Returns the API result with sample UUIDs and upload status.

§Example
use edgefirst_client::{Annotation, Box2d, Client, DatasetID, Sample, SampleFile};

// Query available annotation sets for the dataset
let annotation_sets = client.annotation_sets(dataset_id).await?;
let annotation_set_id = annotation_sets
    .first()
    .ok_or_else(|| {
        edgefirst_client::Error::InvalidParameters("No annotation sets found".to_string())
    })?
    .id();

// Create sample with annotation (UUID will be auto-generated)
let mut sample = Sample::new();
sample.width = Some(1920);
sample.height = Some(1080);
sample.group = Some("train".to_string());

// Add file - use full path to local file, it will be uploaded automatically
sample.files = vec![SampleFile::with_filename(
    "image".to_string(),
    "/path/to/image.jpg".to_string(),
)];

// Add bounding box annotation with NORMALIZED coordinates (0.0-1.0)
let mut annotation = Annotation::new();
annotation.set_label(Some("person".to_string()));
// Normalize pixel coordinates by dividing by image dimensions
let bbox = Box2d::new(0.5, 0.5, 0.25, 0.25); // (x, y, w, h) normalized
annotation.set_box2d(Some(bbox));
sample.annotations = vec![annotation];

// Populate with annotation_set_id (REQUIRED for annotations)
let result = client
    .populate_samples(dataset_id, Some(annotation_set_id), vec![sample], None)
    .await?;
Source

pub async fn download(&self, url: &str) -> Result<Vec<u8>, Error>

Source

pub async fn annotations_dataframe( &self, annotation_set_id: AnnotationSetID, groups: &[String], types: &[AnnotationType], progress: Option<Sender<Progress>>, ) -> Result<DataFrame, Error>

👎Deprecated since 0.8.0: Use samples_dataframe() for complete 2025.10 schema support

Get annotations as a DataFrame (2025.01 schema) for the specified annotation set with the requested annotation types. The annotation types are used to filter the annotations returned. Images which do not have any annotations are included in the result.

DEPRECATED: Use Client::samples_dataframe() instead for full 2025.10 schema support, including optional metadata columns.

The result is a DataFrame following the EdgeFirst Dataset Format definition with 9 columns (original schema). It does not include the optional columns added in 2025.10.

§Migration
// OLD (deprecated):
let df = client
    .annotations_dataframe(annotation_set_id, &groups, &types, None)
    .await?;

// NEW (recommended):
let df = client
    .samples_dataframe(dataset_id, Some(annotation_set_id), &groups, &types, None)
    .await?;

To get the annotations as a vector of Annotation objects, use the annotations method instead.

Source

pub async fn samples_dataframe( &self, dataset_id: DatasetID, annotation_set_id: Option<AnnotationSetID>, groups: &[String], types: &[AnnotationType], progress: Option<Sender<Progress>>, ) -> Result<DataFrame, Error>

Get samples as a DataFrame with complete 2025.10 schema.

This is the recommended method for obtaining dataset annotations in DataFrame format. It includes all sample metadata (size, location, pose, degradation) as optional columns.

§Arguments
  • dataset_id - Dataset identifier
  • annotation_set_id - Optional annotation set filter
  • groups - Dataset groups to include (train, val, test)
  • types - Annotation types to filter (bbox, box3d, mask)
  • progress - Optional progress callback
§Example
use edgefirst_client::Client;

let df = client
    .samples_dataframe(
        dataset_id,
        Some(annotation_set_id),
        &["train".to_string()],
        &[],
        None,
    )
    .await?;
println!("DataFrame shape: {:?}", df.shape());
Source

pub async fn snapshots( &self, name: Option<&str>, ) -> Result<Vec<Snapshot>, Error>

List available snapshots. If a name is provided, only snapshots containing that name are returned.

Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter descriptions (more specific), then alphabetically.

Source

pub async fn snapshot(&self, snapshot_id: SnapshotID) -> Result<Snapshot, Error>

Get the snapshot with the specified id.

Source

pub async fn create_snapshot( &self, path: &str, progress: Option<Sender<Progress>>, ) -> Result<Snapshot, Error>

Create a new snapshot from an MCAP file or EdgeFirst Dataset directory.

Snapshots are frozen datasets in EdgeFirst Dataset Format (Zip/Arrow pairs) that serve two primary purposes:

  1. MCAP uploads: Upload MCAP files containing sensor data (images, point clouds, IMU, GPS) to EdgeFirst Studio. Snapshots can then be restored with AGTG (Automatic Ground Truth Generation) and optional auto-depth processing.

  2. Dataset exchange: Export datasets for backup, sharing, or migration between EdgeFirst Studio instances using the create → download → upload → restore workflow.

Large files are automatically chunked into 100MB parts and uploaded concurrently using S3 multipart upload with presigned URLs. Each chunk is streamed without loading into memory, maintaining constant memory usage.

Concurrency tuning: Set MAX_TASKS to control concurrent uploads (default: half of CPU cores, min 2, max 8). Lower values work better for large files to avoid timeout issues. Higher values (16-32) are better for many small files.
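The chunking and concurrency defaults described above amount to simple arithmetic. A sketch under the stated assumptions (100 MB parts taken as 100 MiB here; the helper names are illustrative, not the crate's API):

```rust
// Sketch of the documented upload parameters: 100 MB chunks and a
// MAX_TASKS default of half the CPU cores, clamped to 2..=8.
// Illustrative only; not the crate's implementation.
const CHUNK_SIZE: u64 = 100 * 1024 * 1024; // 100 MB parts

fn chunk_count(file_size: u64) -> u64 {
    file_size.div_ceil(CHUNK_SIZE)
}

fn default_max_tasks(cpu_cores: usize) -> usize {
    (cpu_cores / 2).clamp(2, 8)
}

fn main() {
    assert_eq!(chunk_count(250 * 1024 * 1024), 3); // 250 MB → 3 parts
    assert_eq!(default_max_tasks(4), 2);  // half of 4 cores
    assert_eq!(default_max_tasks(2), 2);  // clamped to the minimum
    assert_eq!(default_max_tasks(32), 8); // clamped to the maximum
}
```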

§Arguments
  • path - Local file path to MCAP file or directory containing EdgeFirst Dataset Format files (Zip/Arrow pairs)
  • progress - Optional channel to receive upload progress updates
§Returns

Returns a Snapshot object with ID, description, status, path, and creation timestamp on success.

§Errors

Returns an error if:

  • Path doesn’t exist or contains invalid UTF-8
  • File format is invalid (not MCAP or EdgeFirst Dataset Format)
  • Upload fails or network error occurs
  • Server rejects the snapshot
§Example
let client = Client::new()?.with_token_path(None)?;

// Upload MCAP file with progress tracking
let (tx, mut rx) = mpsc::channel(1);
tokio::spawn(async move {
    while let Some(Progress { current, total }) = rx.recv().await {
        println!(
            "Upload: {}/{} bytes ({:.1}%)",
            current,
            total,
            (current as f64 / total as f64) * 100.0
        );
    }
});
let snapshot = client.create_snapshot("data.mcap", Some(tx)).await?;
println!("Created snapshot: {:?}", snapshot.id());

// Upload dataset directory (no progress)
let snapshot = client.create_snapshot("./dataset_export/", None).await?;
Source

pub async fn delete_snapshot( &self, snapshot_id: SnapshotID, ) -> Result<(), Error>

Delete a snapshot from EdgeFirst Studio.

Permanently removes a snapshot and its associated data. This operation cannot be undone.

§Arguments
  • snapshot_id - The snapshot ID to delete
§Errors

Returns an error if:

  • Snapshot doesn’t exist
  • User lacks permission to delete the snapshot
  • Server error occurs
§Example
let client = Client::new()?.with_token_path(None)?;
let snapshot_id = SnapshotID::from(123);
client.delete_snapshot(snapshot_id).await?;
Source

pub async fn create_snapshot_from_dataset( &self, dataset_id: DatasetID, description: &str, annotation_set_id: Option<AnnotationSetID>, ) -> Result<SnapshotFromDatasetResult, Error>

Create a snapshot from an existing dataset on the server.

Triggers server-side snapshot generation which exports the dataset’s images and annotations into a downloadable EdgeFirst Dataset Format snapshot.

This is the inverse of restore_snapshot - while restore creates a dataset from a snapshot, this method creates a snapshot from a dataset.

§Arguments
  • dataset_id - The dataset ID to create snapshot from
  • description - Description for the created snapshot
  • annotation_set_id - Optional annotation set to include (None includes all annotation sets)
§Returns

Returns a SnapshotFromDatasetResult containing the snapshot ID and task ID for monitoring progress.

§Errors

Returns an error if:

  • Dataset doesn’t exist
  • User lacks permission to access the dataset
  • Server rejects the request
§Example
let client = Client::new()?.with_token_path(None)?;
let dataset_id = DatasetID::from(123);

// Create snapshot from dataset (all annotation sets)
let result = client
    .create_snapshot_from_dataset(dataset_id, "My Dataset Backup", None)
    .await?;
println!("Created snapshot: {:?}", result.id);

// Monitor progress via task ID
if let Some(task_id) = result.task_id {
    println!("Task: {}", task_id);
}
Source

pub async fn download_snapshot( &self, snapshot_id: SnapshotID, output: PathBuf, progress: Option<Sender<Progress>>, ) -> Result<(), Error>

Download a snapshot from EdgeFirst Studio to local storage.

Downloads all files in a snapshot (single MCAP file or directory of EdgeFirst Dataset Format files) to the specified output path. Files are downloaded concurrently with progress tracking.

Concurrency tuning: Set MAX_TASKS to control concurrent downloads (default: half of CPU cores, min 2, max 8).

§Arguments
  • snapshot_id - The snapshot ID to download
  • output - Local directory path to save downloaded files
  • progress - Optional channel to receive download progress updates
§Errors

Returns an error if:

  • Snapshot doesn’t exist
  • Output directory cannot be created
  • Download fails or network error occurs
§Example
let client = Client::new()?.with_token_path(None)?;
let snapshot_id = SnapshotID::from(123);

// Download with progress tracking
let (tx, mut rx) = mpsc::channel(1);
tokio::spawn(async move {
    while let Some(Progress { current, total }) = rx.recv().await {
        println!("Download: {}/{} bytes", current, total);
    }
});
client
    .download_snapshot(snapshot_id, PathBuf::from("./output"), Some(tx))
    .await?;
Source

pub async fn restore_snapshot( &self, project_id: ProjectID, snapshot_id: SnapshotID, topics: &[String], autolabel: &[String], autodepth: bool, dataset_name: Option<&str>, dataset_description: Option<&str>, ) -> Result<SnapshotRestoreResult, Error>

Restore a snapshot to a dataset in EdgeFirst Studio with optional AGTG.

Restores a snapshot (MCAP file or EdgeFirst Dataset) into a dataset in the specified project. For MCAP files, supports:

  • AGTG (Automatic Ground Truth Generation): Automatically annotate detected objects with 2D masks/boxes and 3D boxes (if radar/LiDAR present)
  • Auto-depth: Generate depthmaps (Maivin/Raivin cameras only)
  • Topic filtering: Select specific MCAP topics to restore

For EdgeFirst Dataset snapshots, this simply imports the pre-existing dataset structure.

§Arguments
  • project_id - Target project ID
  • snapshot_id - Snapshot ID to restore
  • topics - MCAP topics to include (empty = all topics)
  • autolabel - Object labels for AGTG (empty = no auto-annotation)
  • autodepth - Generate depthmaps (Maivin/Raivin only)
  • dataset_name - Optional custom dataset name
  • dataset_description - Optional dataset description
§Returns

Returns a SnapshotRestoreResult with the new dataset ID and status.

§Errors

Returns an error if:

  • Snapshot or project doesn’t exist
  • Snapshot format is invalid
  • Server rejects restoration parameters
§Example
let client = Client::new()?.with_token_path(None)?;
let project_id = ProjectID::from(1);
let snapshot_id = SnapshotID::from(123);

// Restore MCAP with AGTG for "person" and "car" detection
let result = client
    .restore_snapshot(
        project_id,
        snapshot_id,
        &[],                                        // All topics
        &["person".to_string(), "car".to_string()], // AGTG labels
        true,                                       // Auto-depth
        Some("Highway Dataset"),
        Some("Collected on I-95"),
    )
    .await?;
println!("Restored to dataset: {:?}", result.dataset_id);
Source

pub async fn experiments( &self, project_id: ProjectID, name: Option<&str>, ) -> Result<Vec<Experiment>, Error>

Returns a list of experiments available to the user. The experiments are returned as a vector of Experiment objects. If name is provided then only experiments containing this string are returned.

Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically.

Experiments provide a way of organizing training and validation sessions together and are akin to an Experiment in MLFlow terminology. Each experiment can have multiple trainer sessions associated with it; these are akin to runs in MLFlow terminology.

Source

pub async fn experiment( &self, experiment_id: ExperimentID, ) -> Result<Experiment, Error>

Return the experiment with the specified experiment ID. If the experiment does not exist, an error is returned.

Source

pub async fn training_sessions( &self, experiment_id: ExperimentID, name: Option<&str>, ) -> Result<Vec<TrainingSession>, Error>

Returns a list of trainer sessions available to the user. The trainer sessions are returned as a vector of TrainingSession objects. If name is provided then only trainer sessions containing this string are returned.

Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically.

Trainer sessions are akin to runs in MLFlow terminology. These represent an actual training session which will produce metrics and model artifacts.

Source

pub async fn training_session( &self, session_id: TrainingSessionID, ) -> Result<TrainingSession, Error>

Return the trainer session with the specified trainer session ID. If the trainer session does not exist, an error is returned.

Source

pub async fn validation_sessions( &self, project_id: ProjectID, ) -> Result<Vec<ValidationSession>, Error>

List validation sessions for the given project.

Source

pub async fn validation_session( &self, session_id: ValidationSessionID, ) -> Result<ValidationSession, Error>

Retrieve a specific validation session.

Source

pub async fn artifacts( &self, training_session_id: TrainingSessionID, ) -> Result<Vec<Artifact>, Error>

List the artifacts for the specified trainer session. The artifacts are returned as a vector of Artifact objects.

Source

pub async fn download_artifact( &self, training_session_id: TrainingSessionID, modelname: &str, filename: Option<PathBuf>, progress: Option<Sender<Progress>>, ) -> Result<(), Error>

Download the model artifact for the specified trainer session to the specified file path. If no path is provided, the artifact is downloaded to the current directory with the same filename. A progress channel can be provided to monitor the progress of the download.

Source

pub async fn download_checkpoint( &self, training_session_id: TrainingSessionID, checkpoint: &str, filename: Option<PathBuf>, progress: Option<Sender<Progress>>, ) -> Result<(), Error>

Download the model checkpoint associated with the specified trainer session to the specified file path. If no path is provided, the checkpoint is downloaded to the current directory with the same filename. A progress channel can be provided to monitor the progress of the download.

There is no API for listing checkpoints; trainers are expected to know the possible checkpoints and their names within the checkpoint folder on the server.

Source

pub async fn tasks( &self, name: Option<&str>, workflow: Option<&str>, status: Option<&str>, manager: Option<&str>, ) -> Result<Vec<Task>, Error>

Return a list of tasks for the current user.

§Arguments
  • name - Optional filter for task name (client-side substring match)
  • workflow - Optional filter for workflow/task type. If provided, filters server-side by exact match. Valid values include: “trainer”, “validation”, “snapshot-create”, “snapshot-restore”, “copyds”, “upload”, “auto-ann”, “auto-seg”, “aigt”, “import”, “export”, “convertor”, “twostage”
  • status - Optional filter for task status (e.g., “running”, “complete”, “error”)
  • manager - Optional filter for task manager type (e.g., “aws”, “user”, “kubernetes”)
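Combining the filters, a sketch of listing only the current user's running trainer tasks (the Task Debug formatting shown is illustrative, not guaranteed):

```rust
// List only running trainer tasks for the current user.
let tasks = client
    .tasks(None, Some("trainer"), Some("running"), None)
    .await?;
for task in &tasks {
    println!("{:?}", task);
}
```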
Source

pub async fn task_info(&self, task_id: TaskID) -> Result<TaskInfo, Error>

Retrieve the task information and status.

Source

pub async fn task_status( &self, task_id: TaskID, status: &str, ) -> Result<Task, Error>

Updates the task's status.

Source

pub async fn set_stages( &self, task_id: TaskID, stages: &[(&str, &str)], ) -> Result<(), Error>

Defines the stages for the task. The stages are given as a mapping from stage names to their descriptions. Once stages are defined, their status can be updated using the update_stage method.

Source

pub async fn update_stage( &self, task_id: TaskID, stage: &str, status: &str, message: &str, percentage: u8, ) -> Result<(), Error>

Updates the progress of the task for the given stage with the provided status, message, and completion percentage.
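Taken together, set_stages and update_stage sketch a task's reporting lifecycle; the stage names, statuses, and messages below are hypothetical examples, not a fixed vocabulary:

```rust
// Define the stages up front, then report progress per stage.
client
    .set_stages(
        task_id,
        &[
            ("download", "Download dataset"),
            ("train", "Train model"),
        ],
    )
    .await?;

client.update_stage(task_id, "download", "running", "fetching files", 50).await?;
client.update_stage(task_id, "download", "complete", "done", 100).await?;
client.update_stage(task_id, "train", "running", "epoch 1/10", 10).await?;
```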

Source

pub async fn fetch(&self, query: &str) -> Result<Vec<u8>, Error>

Performs a raw fetch from the Studio server; this is used for downloading files.

Source

pub async fn post_multipart( &self, method: &str, form: Form, ) -> Result<String, Error>

Sends a multipart POST request to the server. This is used by the upload and download APIs, which do not use JSON-RPC but instead transfer files using multipart/form-data.

Source

pub async fn rpc<Params, RpcResult>( &self, method: String, params: Option<Params>, ) -> Result<RpcResult, Error>
where Params: Serialize, RpcResult: DeserializeOwned,

Send a JSON-RPC request to the server. The method is the name of the method to call on the server and params are the parameters to pass to it; both are serialized into a JSON-RPC request and sent to the server. The response is deserialized into the specified type and returned to the caller.

NOTE: This API would generally not be called directly and instead users should use the higher-level methods provided by the client.
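For completeness, a hedged sketch of a direct call, assuming a hypothetical "version" method and response shape; real method names and response fields are server-specific and not guaranteed here:

```rust
use serde::Deserialize;
use serde_json::json;

// Hypothetical response type for a hypothetical "version" method.
#[derive(Deserialize)]
struct VersionInfo {
    version: String,
}

let info: VersionInfo = client
    .rpc("version".to_string(), Some(json!({})))
    .await?;
println!("server version: {}", info.version);
```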

Trait Implementations§

Source§

impl Clone for Client

Source§

fn clone(&self) -> Client

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Client

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

Auto Trait Implementations§

§

impl Freeze for Client

§

impl !RefUnwindSafe for Client

§

impl Send for Client

§

impl Sync for Client

§

impl Unpin for Client

§

impl !UnwindSafe for Client

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> DynClone for T
where T: Clone,

Source§

fn __clone_box(&self, _: Private) -> *mut ()

Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T> Instrument for T

Source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> Key for T
where T: Clone,

Source§

fn align() -> usize

The alignment necessary for the key. Must return a power of two.
Source§

fn size(&self) -> usize

The size of the key in bytes.
Source§

unsafe fn init(&self, ptr: *mut u8)

Initialize the key in the given memory location. Read more
Source§

unsafe fn get<'a>(ptr: *const u8) -> &'a T

Get a reference to the key from the given memory location. Read more
Source§

unsafe fn drop_in_place(ptr: *mut u8)

Drop the key in place. Read more
Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointable value with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T> PolicyExt for T
where T: ?Sized,

Source§

fn and<P, B, E>(self, other: P) -> And<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Create a new Policy that returns Action::Follow only if self and other return Action::Follow. Read more
Source§

fn or<P, B, E>(self, other: P) -> Or<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Create a new Policy that returns Action::Follow if either self or other returns Action::Follow. Read more
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

Source§

fn vzip(self) -> V

Source§

impl<T> WithSubscriber for T

Source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more