pub struct Client { /* private fields */ }
Main client for interacting with EdgeFirst Studio Server.
The EdgeFirst Client handles the connection to the EdgeFirst Studio Server and manages authentication, RPC calls, and data operations. It provides methods for managing projects, datasets, experiments, training sessions, and various utility functions for data processing.
The client supports multiple authentication methods and can work with both SaaS and self-hosted EdgeFirst Studio instances.
§Features
- Authentication: Token-based authentication with automatic persistence
- Dataset Management: Upload, download, and manipulate datasets
- Project Operations: Create and manage projects and experiments
- Training & Validation: Submit and monitor ML training jobs
- Data Integration: Convert between EdgeFirst datasets and popular formats
- Progress Tracking: Real-time progress updates for long-running operations
§Examples
use edgefirst_client::{Client, DatasetID};
use std::str::FromStr;
// Create a new client and authenticate
let client = Client::new()?;
let client = client
.with_login("your-email@example.com", "password")
.await?;
// Or use an existing token
let base_client = Client::new()?;
let client = base_client.with_token("your-token-here")?;
// Get organization and projects
let org = client.organization().await?;
let projects = client.projects(None).await?;
// Work with datasets
let dataset_id = DatasetID::from_str("ds-abc123")?;
let dataset = client.dataset(dataset_id).await?;
Client is Clone but cannot derive Debug due to dyn TokenStorage.
Implementations§
impl Client
pub fn new() -> Result<Self, Error>
Create a new unauthenticated client with the default saas server.
By default, the client uses FileTokenStorage for token persistence.
Use with_storage, with_memory_storage, or with_no_storage to configure storage behavior.
To connect to a different server, use with_server or with_token (tokens include the server instance).
This client is created without a token and will need to authenticate before using methods that require authentication.
§Examples
use edgefirst_client::Client;
// Create client with default file storage
let client = Client::new()?;
// Create client without token persistence
let client = Client::new()?.with_memory_storage();
pub fn with_server(&self, server: &str) -> Result<Self, Error>
Returns a new client connected to the specified server instance.
The server parameter is an instance name that maps to a URL:
""or"saas"→https://edgefirst.studio(default production server)"test"→https://test.edgefirst.studio"stage"→https://stage.edgefirst.studio"dev"→https://dev.edgefirst.studio"{name}"→https://{name}.edgefirst.studio
§Server Selection Priority
When using the CLI or Python API, server selection follows this priority:
- Token’s server (highest priority) - JWT tokens encode the server they were issued for. If you have a valid token, its server is used.
- with_server() / --server - Used when logging in or when no token is available. If a token exists with a different server, a warning is emitted and the token’s server takes priority.
- Default "saas" - If no token and no server specified, the production server (https://edgefirst.studio) is used.
§Important Notes
- If a token is already set in the client, calling this method will drop the token as tokens are specific to the server instance.
- Use parse_token_server to check a token’s server before calling this method.
- For login operations, call with_server() first, then authenticate.
§Examples
use edgefirst_client::Client;
let client = Client::new()?.with_server("test")?;
assert_eq!(client.url(), "https://test.edgefirst.studio");
pub fn with_storage(self, storage: Arc<dyn TokenStorage>) -> Self
Returns a new client with the specified token storage backend.
Use this to configure custom token storage, such as platform-specific secure storage (iOS Keychain, Android EncryptedSharedPreferences).
§Examples
use edgefirst_client::{Client, FileTokenStorage};
use std::{path::PathBuf, sync::Arc};
// Use a custom file path for token storage
let storage = FileTokenStorage::with_path(PathBuf::from("/custom/path/token"));
let client = Client::new()?.with_storage(Arc::new(storage));
pub fn with_memory_storage(self) -> Self
Returns a new client with in-memory token storage (no persistence).
Tokens are stored in memory only and lost when the application exits. This is useful for testing or when you want to manage token persistence externally.
§Examples
use edgefirst_client::Client;
let client = Client::new()?.with_memory_storage();
pub fn with_no_storage(self) -> Self
Returns a new client with no token storage.
Tokens are not persisted. Use this when you want to manage tokens entirely manually.
§Examples
use edgefirst_client::Client;
let client = Client::new()?.with_no_storage();
pub async fn with_login(
    &self,
    username: &str,
    password: &str,
) -> Result<Self, Error>
Returns a new client authenticated with the provided username and password.
The token is automatically persisted to storage (if configured).
§Examples
use edgefirst_client::Client;
let client = Client::new()?
.with_server("test")?
.with_login("user@example.com", "password")
    .await?;
pub fn with_token_path(&self, token_path: Option<&Path>) -> Result<Self, Error>
Returns a new client which will load and save the token to the specified path.
Deprecated: Use with_storage with FileTokenStorage instead for more flexible token management.
This method is maintained for backwards compatibility with existing code. It disables the default storage and uses file-based storage at the specified path.
pub fn with_token(&self, token: &str) -> Result<Self, Error>
pub async fn save_token(&self) -> Result<(), Error>
Persist the current token to storage.
This is automatically called when using with_login or with_token, so you typically don’t need to call this directly.
If using the legacy token_path configuration, saves to the file path.
If using the new storage abstraction, saves to the configured storage.
pub async fn version(&self) -> Result<String, Error>
Return the version of the EdgeFirst Studio server for the current client connection.
pub async fn logout(&self) -> Result<(), Error>
Clear the token used to authenticate the client with the server.
Clears the token from memory and from storage (if configured).
If using the legacy token_path configuration, removes the token file.
pub async fn token(&self) -> String
Return the token used to authenticate the client with the server. When logging into the server using a username and password, the token is returned by the server and stored in the client for future interactions.
pub async fn verify_token(&self) -> Result<(), Error>
Verify the token used to authenticate the client with the server. This method is used to ensure that the token is still valid and has not expired. If the token is invalid, the server will return an error and the client will need to login again.
pub async fn renew_token(&self) -> Result<(), Error>
Renew the token used to authenticate the client with the server.
Refreshes the token before it expires. If the token has already expired, the server will return an error and you will need to login again.
The new token is automatically persisted to storage (if configured).
pub fn url(&self) -> &str
Returns the URL of the EdgeFirst Studio server for the current client.
pub fn server(&self) -> &str
Returns the server name for the current client.
This extracts the server name from the client’s URL:
- https://edgefirst.studio → "saas"
- https://test.edgefirst.studio → "test"
- https://{name}.edgefirst.studio → "{name}"
§Examples
use edgefirst_client::Client;
let client = Client::new()?.with_server("test")?;
assert_eq!(client.server(), "test");
let client = Client::new()?; // default
assert_eq!(client.server(), "saas");Sourcepub async fn username(&self) -> Result<String, Error>
pub async fn username(&self) -> Result<String, Error>
Returns the username associated with the current token.
pub async fn token_expiration(&self) -> Result<DateTime<Utc>, Error>
Returns the expiration time for the current token.
pub async fn organization(&self) -> Result<Organization, Error>
Returns the organization information for the current user.
pub async fn projects(&self, name: Option<&str>) -> Result<Vec<Project>, Error>
Returns a list of projects available to the user. The projects are returned as a vector of Project objects. If a name filter is provided, only projects matching the filter are returned.
Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically.
Projects are the top-level organizational unit in EdgeFirst Studio. Projects contain datasets, trainers, and trainer sessions. Projects are used to group related datasets and trainers together.
pub async fn project(&self, project_id: ProjectID) -> Result<Project, Error>
Return the project with the specified project ID. If the project does not exist, an error is returned.
pub async fn datasets(
    &self,
    project_id: ProjectID,
    name: Option<&str>,
) -> Result<Vec<Dataset>, Error>
Returns a list of datasets available to the user. The datasets are returned as a vector of Dataset objects. If a name filter is provided, only datasets matching the filter are returned.
Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically. This ensures “Deer” returns before “Deer Roundtrip”.
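The documented ordering can be sketched as a comparator. The helper names here are hypothetical; the library's internal sorting code may differ.

```rust
/// Rank a candidate name against the query: lower is better.
/// Follows the documented tiers: exact, then case-insensitive exact,
/// then everything else.
fn match_rank(name: &str, query: &str) -> u8 {
    if name == query {
        0 // exact match
    } else if name.eq_ignore_ascii_case(query) {
        1 // case-insensitive exact match
    } else {
        2 // partial or unrelated match
    }
}

/// Sort names by match quality, then by length (shorter is more
/// specific), then alphabetically.
fn sort_by_match_quality(names: &mut [&str], query: &str) {
    names.sort_by(|a, b| {
        match_rank(a, query)
            .cmp(&match_rank(b, query))
            .then(a.len().cmp(&b.len()))
            .then(a.cmp(b))
    });
}
```

With query "Deer", this orders ["Deer Roundtrip", "Deer"] as ["Deer", "Deer Roundtrip"], matching the guarantee above.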
pub async fn dataset(&self, dataset_id: DatasetID) -> Result<Dataset, Error>
Return the dataset with the specified dataset ID. If the dataset does not exist, an error is returned.
pub async fn labels(&self, dataset_id: DatasetID) -> Result<Vec<Label>, Error>
Lists the labels for the specified dataset.
pub async fn add_label(
    &self,
    dataset_id: DatasetID,
    name: &str,
) -> Result<(), Error>
Add a new label to the dataset with the specified name.
pub async fn remove_label(&self, label_id: u64) -> Result<(), Error>
Removes the label with the specified ID from the dataset. Label IDs are globally unique so the dataset_id is not required.
pub async fn create_dataset(
    &self,
    project_id: &str,
    name: &str,
    description: Option<&str>,
) -> Result<DatasetID, Error>
pub async fn update_label(&self, label: &Label) -> Result<(), Error>
Updates the label with the specified ID to have the new name or index. Label IDs cannot be changed. Label IDs are globally unique so the dataset_id is not required.
pub async fn groups(&self, dataset_id: DatasetID) -> Result<Vec<Group>, Error>
Lists the groups for the specified dataset.
Groups are used to organize samples into logical subsets such as “train”, “val”, “test”, etc. Each sample can belong to at most one group at a time.
§Arguments
- dataset_id - The ID of the dataset to list groups for
§Returns
Returns a vector of Group objects for the dataset. Returns an
empty vector if no groups have been created yet.
§Errors
Returns an error if the dataset does not exist or cannot be accessed.
§Example
use edgefirst_client::{Client, DatasetID};
let client = Client::new()?.with_token_path(None)?;
let dataset_id: DatasetID = "ds-123".try_into()?;
let groups = client.groups(dataset_id).await?;
for group in groups {
println!("{}: {}", group.id, group.name);
}
pub async fn get_or_create_group(
    &self,
    dataset_id: DatasetID,
    name: &str,
) -> Result<u64, Error>
Gets an existing group by name or creates a new one.
This is a convenience method that first checks if a group with the specified name exists, and creates it if not. This is useful when you need to ensure a group exists before assigning samples to it.
§Arguments
- dataset_id - The ID of the dataset
- name - The name of the group (e.g., “train”, “val”, “test”)
§Returns
Returns the group ID (either existing or newly created).
§Errors
Returns an error if:
- The dataset does not exist or cannot be accessed
- The group creation fails
§Concurrency
This method handles concurrent creation attempts gracefully. If another process creates the group between the existence check and creation, this method will return the existing group’s ID.
§Example
use edgefirst_client::{Client, DatasetID};
let client = Client::new()?.with_token_path(None)?;
let dataset_id: DatasetID = "ds-123".try_into()?;
// Get or create a "train" group
let train_group_id = client
.get_or_create_group(dataset_id.clone(), "train")
.await?;
println!("Train group ID: {}", train_group_id);
// Calling again returns the same ID
let same_id = client.get_or_create_group(dataset_id, "train").await?;
assert_eq!(train_group_id, same_id);
pub async fn set_sample_group_id(
    &self,
    sample_id: SampleID,
    group_id: u64,
) -> Result<(), Error>
Sets the group for a sample.
Assigns a sample to a specific group. Each sample can belong to at most one group at a time. Setting a new group replaces any existing group assignment.
§Arguments
- sample_id - The ID of the sample (image) to update
- group_id - The ID of the group to assign. Use get_or_create_group to obtain a group ID from a name.
§Returns
Returns Ok(()) on success.
§Errors
Returns an error if:
- The sample does not exist
- The group does not exist
- Insufficient permissions to modify the sample
§Example
use edgefirst_client::{Client, DatasetID, SampleID};
let client = Client::new()?.with_token_path(None)?;
let dataset_id: DatasetID = "ds-123".try_into()?;
let sample_id: SampleID = 12345.into();
// Get or create the "val" group
let val_group_id = client.get_or_create_group(dataset_id, "val").await?;
// Assign the sample to the "val" group
client.set_sample_group_id(sample_id, val_group_id).await?;
pub async fn download_dataset(
    &self,
    dataset_id: DatasetID,
    groups: &[String],
    file_types: &[FileType],
    output: PathBuf,
    flatten: bool,
    progress: Option<Sender<Progress>>,
) -> Result<(), Error>
Downloads dataset samples to the local filesystem.
§Arguments
- dataset_id - The unique identifier of the dataset
- groups - Dataset groups to include (e.g., “train”, “val”)
- file_types - File types to download (e.g., Image, LidarPcd)
- output - Local directory to save downloaded files
- flatten - If true, download all files to the output root without sequence subdirectories. When flattening, filenames are prefixed with {sequence_name}_{frame}_ (or {sequence_name}_ if frame is unavailable) unless the filename already starts with {sequence_name}_, to avoid conflicts between sequences.
- progress - Optional channel for progress updates
§Returns
Returns Ok(()) on success or an error if download fails.
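The flatten naming rule can be sketched as follows. flattened_name is a hypothetical helper for illustration, not part of the public API.

```rust
/// Illustrative sketch of the documented flatten naming rule: prefix
/// with "{sequence}_{frame}_" (or "{sequence}_" when frame is
/// unavailable) unless the filename already carries the sequence prefix.
fn flattened_name(sequence: &str, frame: Option<u32>, filename: &str) -> String {
    let prefix = format!("{}_", sequence);
    if filename.starts_with(&prefix) {
        // Already prefixed with the sequence name; keep as-is.
        filename.to_string()
    } else {
        match frame {
            Some(f) => format!("{}_{}_{}", sequence, f, filename),
            None => format!("{}_{}", sequence, filename),
        }
    }
}
```

So "img.jpg" in sequence "seq01" at frame 42 becomes "seq01_42_img.jpg", while "seq01_img.jpg" is left unchanged.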
§Example
use edgefirst_client::{Client, DatasetID, FileType};
let client = Client::new()?.with_token_path(None)?;
let dataset_id: DatasetID = "ds-123".try_into()?;
// Download with sequence subdirectories (default)
client
.download_dataset(
dataset_id,
&[],
&[FileType::Image],
"./data".into(),
false,
None,
)
.await?;
// Download flattened (all files in one directory)
client
.download_dataset(
dataset_id,
&[],
&[FileType::Image],
"./data".into(),
true,
None,
)
    .await?;
pub async fn annotation_sets(
    &self,
    dataset_id: DatasetID,
) -> Result<Vec<AnnotationSet>, Error>
List available annotation sets for the specified dataset.
pub async fn create_annotation_set(
    &self,
    dataset_id: DatasetID,
    name: &str,
    description: Option<&str>,
) -> Result<AnnotationSetID, Error>
Create a new annotation set for the specified dataset.
§Arguments
- dataset_id - The ID of the dataset to create the annotation set in
- name - The name of the new annotation set
- description - Optional description for the annotation set
§Returns
Returns the annotation set ID of the newly created annotation set.
pub async fn delete_annotation_set(
    &self,
    annotation_set_id: AnnotationSetID,
) -> Result<(), Error>
pub async fn annotation_set(
    &self,
    annotation_set_id: AnnotationSetID,
) -> Result<AnnotationSet, Error>
Retrieve the annotation set with the specified ID.
pub async fn annotations(
    &self,
    annotation_set_id: AnnotationSetID,
    groups: &[String],
    annotation_types: &[AnnotationType],
    progress: Option<Sender<Progress>>,
) -> Result<Vec<Annotation>, Error>
Get the annotations for the specified annotation set with the requested annotation types. The annotation types are used to filter the annotations returned. The groups parameter is used to filter for dataset groups (train, val, test). Images which do not have any annotations are also included in the result as long as they are in the requested groups (when specified).
The result is a vector of Annotation objects which contain the full dataset along with the annotations for the specified types.
To get the annotations as a DataFrame, use the annotations_dataframe method instead.
pub async fn delete_annotations_bulk(
    &self,
    annotation_set_id: AnnotationSetID,
    annotation_types: &[String],
    sample_ids: &[SampleID],
) -> Result<(), Error>
Delete annotations in bulk from specified samples.
This method calls the annotation.bulk.del API to efficiently remove
annotations from multiple samples at once. Useful for clearing
annotations before re-importing updated data.
§Arguments
- annotation_set_id - The annotation set containing the annotations
- annotation_types - Types to delete: “box” for bounding boxes, “seg” for masks
- sample_ids - Sample IDs (image IDs) to delete annotations from
§Example
use edgefirst_client::{AnnotationSetID, SampleID};
let annotation_set_id = AnnotationSetID::from(123);
let sample_ids = vec![SampleID::from(1), SampleID::from(2)];
client
.delete_annotations_bulk(
annotation_set_id,
&["box".to_string(), "seg".to_string()],
&sample_ids,
)
    .await?;
pub async fn add_annotations_bulk(
    &self,
    annotation_set_id: AnnotationSetID,
    annotations: Vec<ServerAnnotation>,
) -> Result<Vec<Value>, Error>
Add annotations in bulk.
This method calls the annotation.add_bulk API to efficiently add
multiple annotations at once. The annotations must be in server format
with image_id references.
§Arguments
- annotation_set_id - The annotation set to add annotations to
- annotations - Vector of server-format annotations to add
§Returns
Vector of created annotation records from the server.
pub async fn samples_count(
    &self,
    dataset_id: DatasetID,
    annotation_set_id: Option<AnnotationSetID>,
    annotation_types: &[AnnotationType],
    groups: &[String],
    types: &[FileType],
) -> Result<SamplesCountResult, Error>
pub async fn samples(
    &self,
    dataset_id: DatasetID,
    annotation_set_id: Option<AnnotationSetID>,
    annotation_types: &[AnnotationType],
    groups: &[String],
    types: &[FileType],
    progress: Option<Sender<Progress>>,
) -> Result<Vec<Sample>, Error>
pub async fn sample_names(
    &self,
    dataset_id: DatasetID,
    groups: &[String],
    progress: Option<Sender<Progress>>,
) -> Result<HashSet<String>, Error>
Get all sample names in a dataset.
This is an efficient method for checking which samples already exist, useful for resuming interrupted imports. It only retrieves sample names without loading full annotation data.
§Arguments
- dataset_id - The dataset to query
- groups - Optional group filter (empty = all groups)
- progress - Optional progress channel
§Returns
A HashSet of sample names (image_name field) that exist in the dataset.
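A resume workflow built on the returned name set might look like the following sketch. pending_names is a hypothetical helper and the local sample representation is simplified for illustration.

```rust
use std::collections::HashSet;

/// Illustrative only: given the name set returned by sample_names and
/// the local sample names queued for import, keep only those that do
/// not yet exist on the server.
fn pending_names(local: &[String], existing: &HashSet<String>) -> Vec<String> {
    local
        .iter()
        .filter(|name| !existing.contains(*name))
        .cloned()
        .collect()
}
```

After an interrupted import, call sample_names to get the existing set, filter the local queue with a helper like this, and pass only the remainder to populate_samples.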
pub async fn populate_samples(
    &self,
    dataset_id: DatasetID,
    annotation_set_id: Option<AnnotationSetID>,
    samples: Vec<Sample>,
    progress: Option<Sender<Progress>>,
) -> Result<Vec<SamplesPopulateResult>, Error>
Populates (imports) samples into a dataset using the samples.populate2
API.
This method creates new samples in the specified dataset, optionally
with annotations and sensor data files. For each sample, the files
field is checked for local file paths. If a filename is a valid path
to an existing file, the file will be automatically uploaded to S3
using presigned URLs returned by the server. The filename in the
request is replaced with the basename (path removed) before sending
to the server.
§Important Notes
- annotation_set_id is REQUIRED when importing samples with annotations. Without it, the server will accept the request but will not save the annotation data. Use Client::annotation_sets to query available annotation sets for a dataset, or create a new one via the Studio UI.
- Box2d coordinates must be normalized (0.0-1.0 range) for bounding boxes. Divide pixel coordinates by image width/height before creating Box2d annotations.
- Files are uploaded automatically when the filename is a valid local path. The method will replace the full path with just the basename before sending to the server.
- Image dimensions are extracted automatically for image files using the imagesize crate. The width/height are sent to the server, but note that the server currently doesn’t return these fields when fetching samples back.
- UUIDs are generated automatically if not provided. If you need deterministic UUIDs, set sample.uuid explicitly before calling. Note that the server doesn’t currently return UUIDs in sample queries.
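The normalization requirement can be sketched as below. normalize_bbox is a hypothetical helper, and the (x, y, w, h) ordering follows the Box2d::new call shown in the example further down.

```rust
/// Illustrative only: convert a pixel-space box to the normalized
/// 0.0-1.0 range expected by Box2d, by dividing x/w by the image width
/// and y/h by the image height.
fn normalize_bbox(
    (x_px, y_px, w_px, h_px): (f64, f64, f64, f64),
    img_w: f64,
    img_h: f64,
) -> (f64, f64, f64, f64) {
    (x_px / img_w, y_px / img_h, w_px / img_w, h_px / img_h)
}
```

For instance, a 480x270 box centered at pixel (960, 540) in a 1920x1080 image normalizes to (0.5, 0.5, 0.25, 0.25), the values used in the example below.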
§Arguments
- dataset_id - The ID of the dataset to populate
- annotation_set_id - Required if samples contain annotations, otherwise they will be ignored. Query with Client::annotation_sets.
- samples - Vector of samples to import with metadata and file references. For files, use the full local path - it will be uploaded automatically. UUIDs and image dimensions will be auto-generated/extracted if not provided.
§Returns
Returns the API result with sample UUIDs and upload status.
§Example
use edgefirst_client::{Annotation, Box2d, Client, DatasetID, Sample, SampleFile};
// Query available annotation sets for the dataset
let annotation_sets = client.annotation_sets(dataset_id).await?;
let annotation_set_id = annotation_sets
.first()
.ok_or_else(|| {
edgefirst_client::Error::InvalidParameters("No annotation sets found".to_string())
})?
.id();
// Create sample with annotation (UUID will be auto-generated)
let mut sample = Sample::new();
sample.width = Some(1920);
sample.height = Some(1080);
sample.group = Some("train".to_string());
// Add file - use full path to local file, it will be uploaded automatically
sample.files = vec![SampleFile::with_filename(
"image".to_string(),
"/path/to/image.jpg".to_string(),
)];
// Add bounding box annotation with NORMALIZED coordinates (0.0-1.0)
let mut annotation = Annotation::new();
annotation.set_label(Some("person".to_string()));
// Normalize pixel coordinates by dividing by image dimensions
let bbox = Box2d::new(0.5, 0.5, 0.25, 0.25); // (x, y, w, h) normalized
annotation.set_box2d(Some(bbox));
sample.annotations = vec![annotation];
// Populate with annotation_set_id (REQUIRED for annotations)
let result = client
.populate_samples(dataset_id, Some(annotation_set_id), vec![sample], None)
    .await?;
pub async fn populate_samples_with_concurrency(
    &self,
    dataset_id: DatasetID,
    annotation_set_id: Option<AnnotationSetID>,
    samples: Vec<Sample>,
    progress: Option<Sender<Progress>>,
    concurrency: Option<usize>,
) -> Result<Vec<SamplesPopulateResult>, Error>
Populate samples with custom upload concurrency.
Same as populate_samples but allows
specifying the maximum number of concurrent file uploads. Use this
for bulk imports where higher concurrency can significantly reduce
upload time.
pub async fn download(&self, url: &str) -> Result<Vec<u8>, Error>
pub async fn annotations_dataframe(
    &self,
    annotation_set_id: AnnotationSetID,
    groups: &[String],
    types: &[AnnotationType],
    progress: Option<Sender<Progress>>,
) -> Result<DataFrame, Error>
👎 Deprecated since 0.8.0: Use samples_dataframe() for complete 2025.10 schema support
Get the AnnotationGroup for the specified annotation set with the requested annotation types. The annotation type is used to filter the annotations returned. Images which do not have any annotations are included in the result.
Get annotations as a DataFrame (2025.01 schema).
DEPRECATED: Use Client::samples_dataframe() instead for full
2025.10 schema support including optional metadata columns.
The result is a DataFrame following the EdgeFirst Dataset Format definition with 9 columns (original schema). Does not include new optional columns added in 2025.10.
§Migration
// OLD (deprecated):
let df = client
.annotations_dataframe(annotation_set_id, &groups, &types, None)
.await?;
// NEW (recommended):
let df = client
.samples_dataframe(dataset_id, Some(annotation_set_id), &groups, &types, None)
    .await?;
To get the annotations as a vector of Annotation objects, use the annotations method instead.
pub async fn samples_dataframe(
    &self,
    dataset_id: DatasetID,
    annotation_set_id: Option<AnnotationSetID>,
    groups: &[String],
    types: &[AnnotationType],
    progress: Option<Sender<Progress>>,
) -> Result<DataFrame, Error>
Get samples as a DataFrame with complete 2025.10 schema.
This is the recommended method for obtaining dataset annotations in DataFrame format. It includes all sample metadata (size, location, pose, degradation) as optional columns.
§Arguments
- dataset_id - Dataset identifier
- annotation_set_id - Optional annotation set filter
- groups - Dataset groups to include (train, val, test)
- types - Annotation types to filter (bbox, box3d, mask)
- progress - Optional progress callback
§Example
use edgefirst_client::Client;
let df = client
.samples_dataframe(
dataset_id,
Some(annotation_set_id),
&["train".to_string()],
&[],
None,
)
.await?;
println!("DataFrame shape: {:?}", df.shape());Sourcepub async fn snapshots(
&self,
name: Option<&str>,
) -> Result<Vec<Snapshot>, Error>
pub async fn snapshots( &self, name: Option<&str>, ) -> Result<Vec<Snapshot>, Error>
List available snapshots. If a name is provided, only snapshots containing that name are returned.
Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter descriptions (more specific), then alphabetically.
pub async fn snapshot(&self, snapshot_id: SnapshotID) -> Result<Snapshot, Error>
Get the snapshot with the specified id.
pub async fn create_snapshot(
    &self,
    path: &str,
    progress: Option<Sender<Progress>>,
) -> Result<Snapshot, Error>
Create a new snapshot from an MCAP file or EdgeFirst Dataset directory.
Snapshots are frozen datasets in EdgeFirst Dataset Format (Zip/Arrow pairs) that serve two primary purposes:
- MCAP uploads: Upload MCAP files containing sensor data (images, point clouds, IMU, GPS) to EdgeFirst Studio. Snapshots can then be restored with AGTG (Automatic Ground Truth Generation) and optional auto-depth processing.
- Dataset exchange: Export datasets for backup, sharing, or migration between EdgeFirst Studio instances using the create → download → upload → restore workflow.
Large files are automatically chunked into 100MB parts and uploaded concurrently using S3 multipart upload with presigned URLs. Each chunk is streamed without loading into memory, maintaining constant memory usage.
Concurrency tuning: Set MAX_TASKS to control concurrent
uploads (default: half of CPU cores, min 2, max 8). Lower values work
better for large files to avoid timeout issues. Higher values (16-32)
are better for many small files.
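The chunking arithmetic described above can be sketched as follows. The 100 MB part size comes from the text; the constant name and the binary-megabyte interpretation are assumptions, not part of the public API.

```rust
/// Assumed part size from the documentation (interpreted here as MiB).
const CHUNK_SIZE: u64 = 100 * 1024 * 1024;

/// Number of multipart-upload parts for a file of the given size.
/// Ceiling division: a trailing partial chunk still counts as a part.
fn part_count(file_size: u64) -> u64 {
    file_size.div_ceil(CHUNK_SIZE)
}
```

For example, a 250 MB MCAP file would be uploaded as three parts: two full 100 MB chunks and one 50 MB remainder.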
§Arguments
- path - Local file path to MCAP file or directory containing EdgeFirst Dataset Format files (Zip/Arrow pairs)
- progress - Optional channel to receive upload progress updates
§Returns
Returns a Snapshot object with ID, description, status, path, and
creation timestamp on success.
§Errors
Returns an error if:
- Path doesn’t exist or contains invalid UTF-8
- File format is invalid (not MCAP or EdgeFirst Dataset Format)
- Upload fails or network error occurs
- Server rejects the snapshot
§Example
use edgefirst_client::{Client, Progress};
use tokio::sync::mpsc;
let client = Client::new()?.with_token_path(None)?;
// Upload MCAP file with progress tracking
let (tx, mut rx) = mpsc::channel(1);
tokio::spawn(async move {
while let Some(Progress { current, total }) = rx.recv().await {
println!(
"Upload: {}/{} bytes ({:.1}%)",
current,
total,
(current as f64 / total as f64) * 100.0
);
}
});
let snapshot = client.create_snapshot("data.mcap", Some(tx)).await?;
println!("Created snapshot: {:?}", snapshot.id());
// Upload dataset directory (no progress)
let snapshot = client.create_snapshot("./dataset_export/", None).await?;
§See Also
- restore_snapshot - Restore snapshot to dataset
- download_snapshot - Download snapshot data
- delete_snapshot - Delete snapshot
- AGTG Documentation
- Snapshots Guide
pub async fn delete_snapshot(
    &self,
    snapshot_id: SnapshotID,
) -> Result<(), Error>
Delete a snapshot from EdgeFirst Studio.
Permanently removes a snapshot and its associated data. This operation cannot be undone.
§Arguments
- snapshot_id - The snapshot ID to delete
§Errors
Returns an error if:
- Snapshot doesn’t exist
- User lacks permission to delete the snapshot
- Server error occurs
§Example
let client = Client::new()?.with_token_path(None)?;
let snapshot_id = SnapshotID::from(123);
client.delete_snapshot(snapshot_id).await?;
§See Also
- create_snapshot - Upload snapshot
- snapshots - List all snapshots
pub async fn create_snapshot_from_dataset(
    &self,
    dataset_id: DatasetID,
    description: &str,
    annotation_set_id: Option<AnnotationSetID>,
) -> Result<SnapshotFromDatasetResult, Error>
Create a snapshot from an existing dataset on the server.
Triggers server-side snapshot generation which exports the dataset’s images and annotations into a downloadable EdgeFirst Dataset Format snapshot.
This is the inverse of restore_snapshot -
while restore creates a dataset from a snapshot, this method creates a
snapshot from a dataset.
§Arguments
- dataset_id - The dataset ID to create the snapshot from
- description - Description for the created snapshot
- annotation_set_id - Optional annotation set to include (None exports all annotation sets)
§Returns
Returns a SnapshotFromDatasetResult containing the snapshot ID and task ID
for monitoring progress.
§Errors
Returns an error if:
- Dataset doesn’t exist
- User lacks permission to access the dataset
- Server rejects the request
§Example
let client = Client::new()?.with_token_path(None)?;
let dataset_id = DatasetID::from(123);
// Create snapshot from dataset (all annotation sets)
let result = client
.create_snapshot_from_dataset(dataset_id, "My Dataset Backup", None)
.await?;
println!("Created snapshot: {:?}", result.id);
// Monitor progress via task ID
if let Some(task_id) = result.task_id {
println!("Task: {}", task_id);
}
§See Also
- create_snapshot - Upload local files as snapshot
- restore_snapshot - Restore snapshot to dataset
- download_snapshot - Download snapshot
pub async fn download_snapshot(
    &self,
    snapshot_id: SnapshotID,
    output: PathBuf,
    progress: Option<Sender<Progress>>,
) -> Result<(), Error>
Download a snapshot from EdgeFirst Studio to local storage.
Downloads all files in a snapshot (single MCAP file or directory of EdgeFirst Dataset Format files) to the specified output path. Files are downloaded concurrently with progress tracking.
Concurrency tuning: Set MAX_TASKS to control concurrent
downloads (default: half of CPU cores, min 2, max 8).
§Arguments
- snapshot_id - The snapshot ID to download
- output - Local directory path to save downloaded files
- progress - Optional channel to receive download progress updates
§Errors
Returns an error if:
- Snapshot doesn’t exist
- Output directory cannot be created
- Download fails or network error occurs
§Example
let client = Client::new()?.with_token_path(None)?;
let snapshot_id = SnapshotID::from(123);
// Download with progress tracking
let (tx, mut rx) = mpsc::channel(1);
tokio::spawn(async move {
while let Some(Progress { current, total }) = rx.recv().await {
println!("Download: {}/{} bytes", current, total);
}
});
client
.download_snapshot(snapshot_id, PathBuf::from("./output"), Some(tx))
    .await?;
§See Also
- create_snapshot - Upload snapshot
- restore_snapshot - Restore snapshot to dataset
- delete_snapshot - Delete snapshot
pub async fn restore_snapshot(
    &self,
    project_id: ProjectID,
    snapshot_id: SnapshotID,
    topics: &[String],
    autolabel: &[String],
    autodepth: bool,
    dataset_name: Option<&str>,
    dataset_description: Option<&str>,
) -> Result<SnapshotRestoreResult, Error>
Restore a snapshot to a dataset in EdgeFirst Studio with optional AGTG.
Restores a snapshot (MCAP file or EdgeFirst Dataset) into a dataset in the specified project. For MCAP files, supports:
- AGTG (Automatic Ground Truth Generation): Automatically annotate detected objects with 2D masks/boxes and 3D boxes (if radar/LiDAR present)
- Auto-depth: Generate depthmaps (Maivin/Raivin cameras only)
- Topic filtering: Select specific MCAP topics to restore
For EdgeFirst Dataset snapshots, this simply imports the pre-existing dataset structure.
§Arguments
- project_id - Target project ID
- snapshot_id - Snapshot ID to restore
- topics - MCAP topics to include (empty = all topics)
- autolabel - Object labels for AGTG (empty = no auto-annotation)
- autodepth - Generate depthmaps (Maivin/Raivin only)
- dataset_name - Optional custom dataset name
- dataset_description - Optional dataset description
§Returns
Returns a SnapshotRestoreResult with the new dataset ID and status.
§Errors
Returns an error if:
- Snapshot or project doesn’t exist
- Snapshot format is invalid
- Server rejects restoration parameters
§Example
let client = Client::new()?.with_token_path(None)?;
let project_id = ProjectID::from(1);
let snapshot_id = SnapshotID::from(123);
// Restore MCAP with AGTG for "person" and "car" detection
let result = client
.restore_snapshot(
project_id,
snapshot_id,
&[], // All topics
&["person".to_string(), "car".to_string()], // AGTG labels
true, // Auto-depth
Some("Highway Dataset"),
Some("Collected on I-95"),
)
.await?;
println!("Restored to dataset: {:?}", result.dataset_id);
§See Also
- create_snapshot - Upload snapshot
- download_snapshot - Download snapshot
- AGTG Documentation
pub async fn experiments(
    &self,
    project_id: ProjectID,
    name: Option<&str>,
) -> Result<Vec<Experiment>, Error>
Returns a list of experiments available to the user. The experiments are returned as a vector of Experiment objects. If name is provided then only experiments containing this string are returned.
Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically.
Experiments provide a way of organizing training and validation sessions
together and are akin to an experiment in MLflow terminology. Each
experiment can have multiple trainer sessions associated with it; these
are akin to runs in MLflow terminology.
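A minimal sketch in the style of the other examples; the project ID and the "detection" filter string are placeholders, and printing with {:?} assumes Experiment implements Debug:

```rust
let client = Client::new()?.with_token_path(None)?;
let project_id = ProjectID::from(1);
// List all experiments in the project, then narrow by a name substring
let all = client.experiments(project_id, None).await?;
println!("{} experiments total", all.len());
let matching = client.experiments(project_id, Some("detection")).await?;
for experiment in matching {
    println!("{:?}", experiment);
}
```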
pub async fn experiment(
    &self,
    experiment_id: ExperimentID,
) -> Result<Experiment, Error>
Return the experiment with the specified experiment ID. If the experiment does not exist, an error is returned.
pub async fn training_sessions(
    &self,
    experiment_id: ExperimentID,
    name: Option<&str>,
) -> Result<Vec<TrainingSession>, Error>
Returns a list of trainer sessions available to the user. The trainer sessions are returned as a vector of TrainingSession objects. If name is provided then only trainer sessions containing this string are returned.
Results are sorted by match quality: exact matches first, then case-insensitive exact matches, then shorter names (more specific), then alphabetically.
Trainer sessions are akin to runs in MLFlow terminology. These represent an actual training session which will produce metrics and model artifacts.
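A brief sketch of listing the runs under an experiment; the experiment ID is a placeholder:

```rust
let client = Client::new()?.with_token_path(None)?;
let experiment_id = ExperimentID::from(42);
// All training sessions (runs) for the experiment;
// pass Some("name") instead of None to filter by a name substring
let sessions = client.training_sessions(experiment_id, None).await?;
println!("{} sessions", sessions.len());
```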
pub async fn training_session(
    &self,
    session_id: TrainingSessionID,
) -> Result<TrainingSession, Error>
Return the trainer session with the specified trainer session ID. If the trainer session does not exist, an error is returned.
pub async fn validation_sessions(
    &self,
    project_id: ProjectID,
) -> Result<Vec<ValidationSession>, Error>
List validation sessions for the given project.
pub async fn validation_session(
    &self,
    session_id: ValidationSessionID,
) -> Result<ValidationSession, Error>
Retrieve a specific validation session.
pub async fn artifacts(
    &self,
    training_session_id: TrainingSessionID,
) -> Result<Vec<Artifact>, Error>
List the artifacts for the specified trainer session. The artifacts are returned as a vector of Artifact objects.
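A minimal sketch; the session ID is a placeholder, and printing with {:?} assumes Artifact implements Debug:

```rust
let client = Client::new()?.with_token_path(None)?;
let session_id = TrainingSessionID::from(7);
// Enumerate the artifacts produced by the training session
for artifact in client.artifacts(session_id).await? {
    println!("{:?}", artifact);
}
```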
pub async fn download_artifact(
    &self,
    training_session_id: TrainingSessionID,
    modelname: &str,
    filename: Option<PathBuf>,
    progress: Option<Sender<Progress>>,
) -> Result<(), Error>
Download the model artifact for the specified trainer session to the given file path. If no path is provided, the artifact is downloaded to the current directory with the same filename. An optional progress channel can be provided to monitor the download.
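A sketch following the progress-channel pattern from the snapshot examples above; the session ID and the "model.tflite" artifact name are placeholders:

```rust
let client = Client::new()?.with_token_path(None)?;
let session_id = TrainingSessionID::from(7);
let (tx, mut rx) = mpsc::channel(1);
tokio::spawn(async move {
    while let Some(Progress { current, total }) = rx.recv().await {
        println!("Download: {}/{} bytes", current, total);
    }
});
// None => save in the current directory under the artifact's own filename
client
    .download_artifact(session_id, "model.tflite", None, Some(tx))
    .await?;
```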
pub async fn download_checkpoint(
    &self,
    training_session_id: TrainingSessionID,
    checkpoint: &str,
    filename: Option<PathBuf>,
    progress: Option<Sender<Progress>>,
) -> Result<(), Error>
Download the model checkpoint associated with the specified trainer session to the given file path. If no path is provided, the checkpoint is downloaded to the current directory with the same filename. An optional progress channel can be provided to monitor the download.
There is no API for listing checkpoints; trainers are expected to know the possible checkpoint names within the checkpoint folder on the server.
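A sketch under the same assumptions; the session ID, the checkpoint name "epoch-10", and the output path are purely illustrative, since checkpoints cannot be listed via the API:

```rust
let client = Client::new()?.with_token_path(None)?;
let session_id = TrainingSessionID::from(7);
client
    .download_checkpoint(
        session_id,
        "epoch-10", // must be known in advance; no listing API
        Some(PathBuf::from("./checkpoints/epoch-10.ckpt")),
        None, // no progress reporting
    )
    .await?;
```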
pub async fn tasks(
    &self,
    name: Option<&str>,
    workflow: Option<&str>,
    status: Option<&str>,
    manager: Option<&str>,
) -> Result<Vec<Task>, Error>
Return a list of tasks for the current user.
§Arguments
- name - Optional filter for task name (client-side substring match)
- workflow - Optional filter for workflow/task type. If provided, filters server-side by exact match. Valid values include: "trainer", "validation", "snapshot-create", "snapshot-restore", "copyds", "upload", "auto-ann", "auto-seg", "aigt", "import", "export", "convertor", "twostage"
- status - Optional filter for task status (e.g., "running", "complete", "error")
- manager - Optional filter for task manager type (e.g., "aws", "user", "kubernetes")
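For example, to fetch only running trainer tasks (printing with {:?} assumes Task implements Debug):

```rust
let client = Client::new()?.with_token_path(None)?;
// Server-side exact match on the workflow, plus a status filter
let running = client
    .tasks(None, Some("trainer"), Some("running"), None)
    .await?;
for task in running {
    println!("{:?}", task);
}
```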
pub async fn task_info(&self, task_id: TaskID) -> Result<TaskInfo, Error>
Retrieve the task information and status.
pub async fn task_status(
    &self,
    task_id: TaskID,
    status: &str,
) -> Result<Task, Error>
Updates the task's status and returns the updated Task.
pub async fn set_stages(
    &self,
    task_id: TaskID,
    stages: &[(&str, &str)],
) -> Result<(), Error>
Defines the stages for the task. The stages are given as (stage name, description) pairs. Once stages are defined, their status can be updated using the update_stage method.
pub async fn update_stage(
    &self,
    task_id: TaskID,
    stage: &str,
    status: &str,
    message: &str,
    percentage: u8,
) -> Result<(), Error>
Updates the task's progress for the given stage with the provided status, message, and completion percentage.
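A sketch of set_stages and update_stage used together; the task ID, stage names, messages, and status strings are placeholders (the tasks filter above suggests statuses such as "running" and "complete"):

```rust
let client = Client::new()?.with_token_path(None)?;
let task_id = TaskID::from(99);
// Declare the stages once, up front
client
    .set_stages(
        task_id,
        &[("download", "Fetch dataset"), ("train", "Train model")],
    )
    .await?;
// Then report per-stage progress as the work proceeds
client
    .update_stage(task_id, "download", "running", "Fetching files", 50)
    .await?;
client
    .update_stage(task_id, "download", "complete", "Download finished", 100)
    .await?;
```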
pub async fn fetch(&self, query: &str) -> Result<Vec<u8>, Error>
Performs a raw fetch from the Studio server; this is used for downloading files.
pub async fn post_multipart(
    &self,
    method: &str,
    form: Form,
) -> Result<String, Error>
Sends a multipart post request to the server. This is used by the upload and download APIs which do not use JSON-RPC but instead transfer files using multipart/form-data.
pub async fn rpc<Params, RpcResult>(
    &self,
    method: String,
    params: Option<Params>,
) -> Result<RpcResult, Error>
where
    Params: Serialize,
    RpcResult: DeserializeOwned,
Send a JSON-RPC request to the server. The method is the name of the method to call on the server. The params are the parameters to pass to the method. The method and params are serialized into a JSON-RPC request and sent to the server. The response is deserialized into the specified type and returned to the caller.
NOTE: This API would generally not be called directly and instead users should use the higher-level methods provided by the client.
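A sketch of a raw call; the "organization.get" method name and the serde_json::Value result type are illustrative assumptions, not a documented endpoint:

```rust
use serde_json::Value;

let client = Client::new()?.with_token_path(None)?;
// None::<()> satisfies the Params: Serialize bound when no parameters are needed
let result: Value = client
    .rpc("organization.get".to_string(), None::<()>)
    .await?;
println!("{result}");
```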
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more