Struct MLCTrainingGraph

Source
#[repr(C)]
pub struct MLCTrainingGraph { /* private fields */ }
👎Deprecated
Available on crate features MLCGraph and MLCTrainingGraph only.

A training graph created from one or more MLCGraph objects plus additional layers added directly to the training graph.

See also Apple’s documentation

Implementations§

Source§

impl MLCTrainingGraph

Source

pub unsafe fn optimizer(&self) -> Option<Retained<MLCOptimizer>>

👎Deprecated
Available on crate feature MLCOptimizer only.

The optimizer to be used with the training graph

Source

pub unsafe fn deviceMemorySize(&self) -> NSUInteger

👎Deprecated

Returns the total size in bytes of device memory used by all intermediate tensors for the forward and gradient passes and the optimizer update, across all layers in the training graph. We recommend executing an iteration before checking the device memory size, since the buffers are allocated only when the corresponding pass, such as the gradient pass or optimizer update, is executed.

Returns: An NSUInteger value

Source

pub unsafe fn graphWithGraphObjects_lossLayer_optimizer( graph_objects: &NSArray<MLCGraph>, loss_layer: Option<&MLCLayer>, optimizer: Option<&MLCOptimizer>, ) -> Retained<Self>

👎Deprecated
Available on crate features MLCLayer and MLCOptimizer only.

Create a training graph

Parameter graphObjects: The layers from these graph objects will be added to the training graph

Parameter lossLayer: The loss layer to use. The loss layer can also be added to the training graph using nodeWithLayer:sources:lossLabels

Parameter optimizer: The optimizer to use

Returns: A new training graph object
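The constructor above can be sketched in use as follows. This is a hedged sketch, assuming the objc2-ml-compute crate with the MLCGraph and MLCTrainingGraph features enabled; the base graph is a placeholder built elsewhere.

```rust
// Sketch only: build a training graph from one base MLCGraph, deferring
// the loss layer and optimizer (they can be attached later via
// nodeWithLayer:sources:lossLabels and compileOptimizer).
use objc2::rc::Retained;
use objc2_foundation::NSArray;
use objc2_ml_compute::{MLCGraph, MLCTrainingGraph};

fn make_training_graph(base: &MLCGraph) -> Retained<MLCTrainingGraph> {
    // SAFETY: all MLCompute bindings are `unsafe fn`; the caller must
    // uphold the framework's threading and lifetime requirements.
    unsafe {
        let graphs = NSArray::from_slice(&[base]);
        MLCTrainingGraph::graphWithGraphObjects_lossLayer_optimizer(&graphs, None, None)
    }
}
```

Passing `None` for both optional arguments mirrors the documented alternative of supplying the loss layer and optimizer later.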

Source

pub unsafe fn addInputs_lossLabels( &self, inputs: &NSDictionary<NSString, MLCTensor>, loss_labels: Option<&NSDictionary<NSString, MLCTensor>>, ) -> bool

👎Deprecated
Available on crate feature MLCTensor only.

Add the list of inputs to the training graph

Parameter inputs: The inputs

Parameter lossLabels: The loss label inputs

Returns: A boolean indicating success or failure

Source

pub unsafe fn addInputs_lossLabels_lossLabelWeights( &self, inputs: &NSDictionary<NSString, MLCTensor>, loss_labels: Option<&NSDictionary<NSString, MLCTensor>>, loss_label_weights: Option<&NSDictionary<NSString, MLCTensor>>, ) -> bool

👎Deprecated
Available on crate feature MLCTensor only.

Add the list of inputs to the training graph

Each input, loss label or label weights tensor is identified by an NSString. When the training graph is executed, this NSString is used to identify which data object should be used as the input data for each tensor whose device memory needs to be updated before the graph is executed.

Parameter inputs: The inputs

Parameter lossLabels: The loss label inputs

Parameter lossLabelWeights: The loss label weights

Returns: A boolean indicating success or failure

Source

pub unsafe fn addOutputs( &self, outputs: &NSDictionary<NSString, MLCTensor>, ) -> bool

👎Deprecated
Available on crate feature MLCTensor only.

Add the list of outputs to the training graph

Parameter outputs: The outputs

Returns: A boolean indicating success or failure

Source

pub unsafe fn stopGradientForTensors( &self, tensors: &NSArray<MLCTensor>, ) -> bool

👎Deprecated
Available on crate feature MLCTensor only.

Add the list of tensors whose contributions are not to be taken into account when computing gradients during the gradient pass

Parameter tensors: The list of tensors

Returns: A boolean indicating success or failure

Source

pub unsafe fn compileWithOptions_device( &self, options: MLCGraphCompilationOptions, device: &MLCDevice, ) -> bool

👎Deprecated
Available on crate features MLCDevice and MLCTypes only.

Compile the training graph for a device.

Parameter options: The compiler options to use when compiling the training graph

Parameter device: The MLCDevice object

Returns: A boolean indicating success or failure
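A minimal compilation sketch, assuming the option type is the crate's usual NS_OPTIONS-style newtype over NSUInteger (a raw value of 0 corresponding to MLCGraphCompilationOptionsNone); the device value is built elsewhere.

```rust
// Sketch only: compile a training graph for a device with default options.
use objc2_ml_compute::{MLCDevice, MLCGraphCompilationOptions, MLCTrainingGraph};

unsafe fn compile_for(graph: &MLCTrainingGraph, device: &MLCDevice) -> bool {
    // A raw value of 0 requests the default compilation behavior
    // (assumption: the options type exposes its NSUInteger field).
    graph.compileWithOptions_device(MLCGraphCompilationOptions(0), device)
}
```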

Source

pub unsafe fn compileWithOptions_device_inputTensors_inputTensorsData( &self, options: MLCGraphCompilationOptions, device: &MLCDevice, input_tensors: Option<&NSDictionary<NSString, MLCTensor>>, input_tensors_data: Option<&NSDictionary<NSString, MLCTensorData>>, ) -> bool

Available on crate features MLCDevice and MLCTensor and MLCTensorData and MLCTypes only.

Compile the training graph for a device.

Specifying the list of constant tensors when we compile the graph allows MLCompute to perform additional optimizations at compile time.

Parameter options: The compiler options to use when compiling the training graph

Parameter device: The MLCDevice object

Parameter inputTensors: The list of input tensors that are constants

Parameter inputTensorsData: The tensor data to be used with these constant input tensors

Returns: A boolean indicating success or failure

Source

pub unsafe fn compileOptimizer(&self, optimizer: &MLCOptimizer) -> bool

👎Deprecated
Available on crate feature MLCOptimizer only.

Compile the optimizer to be used with a training graph.

Typically the optimizer to be used with a training graph is specified when the training graph is created using graphWithGraphObjects:lossLayer:optimizer. The optimizer will be compiled in when compileWithOptions:device is called if an optimizer is specified with the training graph. In the case where the optimizer to be used is not known when the graph is created or compiled, this method can be used to associate and compile a training graph with an optimizer.

Parameter optimizer: The MLCOptimizer object

Returns: A boolean indicating success or failure

Source

pub unsafe fn linkWithGraphs(&self, graphs: &NSArray<MLCTrainingGraph>) -> bool

👎Deprecated

Link multiple training graphs

This is used to link subsequent training graphs with the first training sub-graph. This method should be used when tensors are shared by one or more layers in multiple sub-graphs.

Parameter graphs: The list of training graphs to link

Returns: A boolean indicating success or failure

Source

pub unsafe fn gradientTensorForInput( &self, input: &MLCTensor, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate feature MLCTensor only.

Get the gradient tensor for an input tensor

Parameter input: The input tensor

Returns: The gradient tensor

Source

pub unsafe fn sourceGradientTensorsForLayer( &self, layer: &MLCLayer, ) -> Retained<NSArray<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Get the source gradient tensors for a layer in the training graph

Parameter layer: A layer in the training graph

Returns: A list of tensors

Source

pub unsafe fn resultGradientTensorsForLayer( &self, layer: &MLCLayer, ) -> Retained<NSArray<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Get the result gradient tensors for a layer in the training graph

Parameter layer: A layer in the training graph

Returns: A list of tensors

Source

pub unsafe fn gradientDataForParameter_layer( &self, parameter: &MLCTensor, layer: &MLCLayer, ) -> Option<Retained<NSData>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Get the gradient data for a trainable parameter associated with a layer

This can be used to get the gradient data for weights or biases parameters associated with a convolution, fully connected or convolution transpose layer

Parameter parameter: The updatable parameter associated with the layer

Parameter layer: A layer in the training graph. Must be one of the following:

  • MLCConvolutionLayer
  • MLCFullyConnectedLayer
  • MLCBatchNormalizationLayer
  • MLCInstanceNormalizationLayer
  • MLCGroupNormalizationLayer
  • MLCLayerNormalizationLayer
  • MLCEmbeddingLayer
  • MLCMultiheadAttentionLayer

Returns: The gradient data. Will return nil if the layer is marked as not trainable or if the training graph is not executed with separate calls to the forward and gradient passes.

Source

pub unsafe fn allocateUserGradientForTensor( &self, tensor: &MLCTensor, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate feature MLCTensor only.

Allocate an entry for a user specified gradient for a tensor

Parameter tensor: A result tensor produced by a layer in the training graph that is input to some user specified code and will need to provide a user gradient during the gradient pass.

Returns: A gradient tensor

Source

pub unsafe fn executeWithInputsData_lossLabelsData_lossLabelWeightsData_batchSize_options_completionHandler( &self, inputs_data: &NSDictionary<NSString, MLCTensorData>, loss_labels_data: Option<&NSDictionary<NSString, MLCTensorData>>, loss_label_weights_data: Option<&NSDictionary<NSString, MLCTensorData>>, batch_size: NSUInteger, options: MLCExecutionOptions, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the training graph (forward, gradient and optimizer update) with given source and label data

Execute the training graph with given source and label data. If an optimizer is specified, the optimizer update is applied. If MLCExecutionOptionsSynchronous is specified in ‘options’, this method returns after the graph has been executed. Otherwise, this method returns after the graph has been queued for execution. The completion handler is called after the graph has finished execution.

Parameter inputsData: The data objects to use for inputs

Parameter lossLabelsData: The data objects to use for loss labels

Parameter lossLabelWeightsData: The data objects to use for loss label weights

Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.

Parameter options: The execution options

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure
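A single training step using the method above can be sketched like this. This is a hedged sketch: the dictionaries are assumed to use the same NSString keys registered earlier via addInputs:lossLabels:, the Synchronous option constant name is assumed from the Objective-C MLCExecutionOptionsSynchronous, and completion-handler construction (via the block2 crate) is left to the caller.

```rust
// Sketch only: execute one synchronous training step
// (forward + gradient + optimizer update).
use objc2_foundation::{NSDictionary, NSString, NSUInteger};
use objc2_ml_compute::{
    MLCExecutionOptions, MLCGraphCompletionHandler, MLCTensorData, MLCTrainingGraph,
};

unsafe fn train_step(
    graph: &MLCTrainingGraph,
    inputs: &NSDictionary<NSString, MLCTensorData>,
    labels: &NSDictionary<NSString, MLCTensorData>,
    batch_size: NSUInteger,
    on_done: MLCGraphCompletionHandler,
) -> bool {
    graph.executeWithInputsData_lossLabelsData_lossLabelWeightsData_batchSize_options_completionHandler(
        inputs,
        Some(labels),
        None, // no per-label weights
        batch_size,
        MLCExecutionOptions::Synchronous, // block until the step finishes
        on_done,
    )
}
```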

Source

pub unsafe fn executeWithInputsData_lossLabelsData_lossLabelWeightsData_outputsData_batchSize_options_completionHandler( &self, inputs_data: &NSDictionary<NSString, MLCTensorData>, loss_labels_data: Option<&NSDictionary<NSString, MLCTensorData>>, loss_label_weights_data: Option<&NSDictionary<NSString, MLCTensorData>>, outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>, batch_size: NSUInteger, options: MLCExecutionOptions, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the training graph (forward, gradient and optimizer update) with given source and label data

Parameter inputsData: The data objects to use for inputs

Parameter lossLabelsData: The data objects to use for loss labels

Parameter lossLabelWeightsData: The data objects to use for loss label weights

Parameter outputsData: The data objects to use for outputs

Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.

Parameter options: The execution options

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure

Source

pub unsafe fn executeForwardWithBatchSize_options_completionHandler( &self, batch_size: NSUInteger, options: MLCExecutionOptions, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTypes and block2 only.

Execute the forward pass of the training graph

Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.

Parameter options: The execution options

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure

Source

pub unsafe fn executeForwardWithBatchSize_options_outputsData_completionHandler( &self, batch_size: NSUInteger, options: MLCExecutionOptions, outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the forward pass for the training graph

Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.

Parameter options: The execution options

Parameter outputsData: The data objects to use for outputs

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure

Source

pub unsafe fn executeGradientWithBatchSize_options_completionHandler( &self, batch_size: NSUInteger, options: MLCExecutionOptions, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTypes and block2 only.

Execute the gradient pass of the training graph

Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.

Parameter options: The execution options

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure

Source

pub unsafe fn executeGradientWithBatchSize_options_outputsData_completionHandler( &self, batch_size: NSUInteger, options: MLCExecutionOptions, outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the gradient pass of the training graph

Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.

Parameter options: The execution options

Parameter outputsData: The data objects to use for outputs

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure

Source

pub unsafe fn executeOptimizerUpdateWithOptions_completionHandler( &self, options: MLCExecutionOptions, completion_handler: MLCGraphCompletionHandler, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTypes and block2 only.

Execute the optimizer update pass of the training graph

Parameter options: The execution options

Parameter completionHandler: The completion handler

Returns: A boolean indicating success or failure
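When the passes are run separately (for example, to read gradient data or bind user gradients between them), the flow looks roughly like this sketch; handler construction is elided and the option constant name is assumed as above.

```rust
// Sketch only: run the forward, gradient, and optimizer-update passes
// individually instead of via the combined execute method.
use objc2_foundation::NSUInteger;
use objc2_ml_compute::{MLCExecutionOptions, MLCGraphCompletionHandler, MLCTrainingGraph};

unsafe fn manual_step(
    graph: &MLCTrainingGraph,
    batch_size: NSUInteger,
    handler: MLCGraphCompletionHandler,
) -> bool {
    let opts = MLCExecutionOptions::Synchronous; // run each pass to completion
    // Short-circuit: stop if any pass reports failure.
    graph.executeForwardWithBatchSize_options_completionHandler(batch_size, opts, handler)
        && graph.executeGradientWithBatchSize_options_completionHandler(batch_size, opts, handler)
        && graph.executeOptimizerUpdateWithOptions_completionHandler(opts, handler)
}
```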

Source

pub unsafe fn synchronizeUpdates(&self)

👎Deprecated

Synchronize updates (weights/biases from convolution, fully connected and LSTM layers, tensor parameters) from device memory to host memory.

Source

pub unsafe fn setTrainingTensorParameters( &self, parameters: &NSArray<MLCTensorParameter>, ) -> bool

👎Deprecated
Available on crate feature MLCTensorParameter only.

Set the input tensor parameters that also will be updated by the optimizer

These represent the list of input tensors to be updated when we execute the optimizer update. Weights, bias, or beta and gamma tensors are not included in this list; MLCompute automatically adds them to the parameter list based on whether the layer is marked as updatable or not.

Parameter parameters: The list of input tensors to be updated by the optimizer

Returns: A boolean indicating success or failure

Source

pub unsafe fn bindOptimizerData_deviceData_withTensor( &self, data: &NSArray<MLCTensorData>, device_data: Option<&NSArray<MLCTensorOptimizerDeviceData>>, tensor: &MLCTensor, ) -> bool

👎Deprecated
Available on crate features MLCTensor and MLCTensorData and MLCTensorOptimizerDeviceData only.

Associates the given optimizer data and device data buffers with the tensor. Returns true if the data is successfully associated with the tensor and copied to the device.

The caller must guarantee the lifetime of the underlying memory of data for the entirety of the tensor’s lifetime. The deviceData buffers are allocated by MLCompute. This method must be called before executeOptimizerUpdateWithOptions or executeWithInputsData is called for the training graph. We recommend using this method instead of [MLCTensor bindOptimizerData], especially if the optimizer update is being called multiple times for each batch.

Parameter data: The optimizer data to be associated with the tensor

Parameter deviceData: The optimizer device data to be associated with the tensor

Parameter tensor: The tensor

Returns: A Boolean value indicating whether the data is successfully associated with the tensor.

Source§

impl MLCTrainingGraph

Methods declared on superclass MLCGraph.

Source

pub unsafe fn graph() -> Retained<Self>

👎Deprecated

Creates a new graph.

Returns: A new graph.

Source§

impl MLCTrainingGraph

Methods declared on superclass NSObject.

Source

pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>

Source

pub unsafe fn new() -> Retained<Self>

Methods from Deref<Target = MLCGraph>§

Source

pub unsafe fn device(&self) -> Option<Retained<MLCDevice>>

👎Deprecated
Available on crate feature MLCDevice only.

The device to be used when compiling and executing a graph

Source

pub unsafe fn layers(&self) -> Retained<NSArray<MLCLayer>>

👎Deprecated
Available on crate feature MLCLayer only.

Layers in the graph

Source

pub unsafe fn summarizedDOTDescription(&self) -> Retained<NSString>

👎Deprecated

A DOT representation of the graph.

For more info on the DOT language, refer to https://en.wikipedia.org/wiki/DOT_(graph_description_language). Edges drawn with dashed lines are those that have stop gradients, while those with solid lines don’t.

Source

pub unsafe fn nodeWithLayer_source( &self, layer: &MLCLayer, source: &MLCTensor, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Add a layer to the graph

Parameter layer: The layer

Parameter source: The source tensor

Returns: A result tensor

Source

pub unsafe fn nodeWithLayer_sources( &self, layer: &MLCLayer, sources: &NSArray<MLCTensor>, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Add a layer to the graph

Parameter layer: The layer

Parameter sources: A list of source tensors

For variable length sequences of LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.

Returns: A result tensor
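Chaining nodes with this method is how a graph is assembled; a hedged sketch, with `layer` and `sources` standing in for real MLCLayer and MLCTensor values built elsewhere:

```rust
// Sketch only: add a layer node to a graph. The returned tensor can be
// fed as a source into subsequent nodeWithLayer:sources: calls.
use objc2::rc::Retained;
use objc2_foundation::NSArray;
use objc2_ml_compute::{MLCGraph, MLCLayer, MLCTensor};

unsafe fn add_node(
    graph: &MLCGraph,
    layer: &MLCLayer,
    sources: &NSArray<MLCTensor>,
) -> Option<Retained<MLCTensor>> {
    // Returns None if the layer could not be added to the graph.
    graph.nodeWithLayer_sources(layer, sources)
}
```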

Source

pub unsafe fn nodeWithLayer_sources_disableUpdate( &self, layer: &MLCLayer, sources: &NSArray<MLCTensor>, disable_update: bool, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Add a layer to the graph

Parameter layer: The layer

Parameter sources: A list of source tensors

Parameter disableUpdate: A flag to indicate if optimizer update should be disabled for this layer

For variable length sequences of LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.

Returns: A result tensor

Source

pub unsafe fn nodeWithLayer_sources_lossLabels( &self, layer: &MLCLayer, sources: &NSArray<MLCTensor>, loss_labels: &NSArray<MLCTensor>, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Add a loss layer to the graph

Parameter layer: The loss layer

Parameter lossLabels: The loss labels tensor

For variable length sequences of LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.

Returns: A result tensor

Source

pub unsafe fn splitWithSource_splitCount_dimension( &self, source: &MLCTensor, split_count: NSUInteger, dimension: NSUInteger, ) -> Option<Retained<NSArray<MLCTensor>>>

👎Deprecated
Available on crate feature MLCTensor only.

Add a split layer to the graph

Parameter source: The source tensor

Parameter splitCount: The number of splits

Parameter dimension: The dimension to split the source tensor

Returns: A result tensor

Source

pub unsafe fn splitWithSource_splitSectionLengths_dimension( &self, source: &MLCTensor, split_section_lengths: &NSArray<NSNumber>, dimension: NSUInteger, ) -> Option<Retained<NSArray<MLCTensor>>>

👎Deprecated
Available on crate feature MLCTensor only.

Add a split layer to the graph

Parameter source: The source tensor

Parameter splitSectionLengths: The lengths of each split section

Parameter dimension: The dimension to split the source tensor

Returns: A result tensor

Source

pub unsafe fn concatenateWithSources_dimension( &self, sources: &NSArray<MLCTensor>, dimension: NSUInteger, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate feature MLCTensor only.

Add a concat layer to the graph

Parameter sources: The source tensors to concatenate

Parameter dimension: The concatenation dimension

Returns: A result tensor

Source

pub unsafe fn reshapeWithShape_source( &self, shape: &NSArray<NSNumber>, source: &MLCTensor, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate feature MLCTensor only.

Add a reshape layer to the graph

Parameter shape: An array representing the shape of result tensor

Parameter source: The source tensor

Returns: A result tensor

Source

pub unsafe fn transposeWithDimensions_source( &self, dimensions: &NSArray<NSNumber>, source: &MLCTensor, ) -> Option<Retained<MLCTensor>>

👎Deprecated
Available on crate feature MLCTensor only.

Add a transpose layer to the graph

Parameter dimensions: An NSArray<NSNumber *> representing the desired ordering of dimensions. The dimensions array specifies the input axis source for each output axis, such that the K’th element in the dimensions array specifies the input axis source for the K’th axis in the output. The batch dimension, which is typically axis 0, cannot be transposed.

Returns: A result tensor

Source

pub unsafe fn selectWithSources_condition( &self, sources: &NSArray<MLCTensor>, condition: &MLCTensor, ) -> Option<Retained<MLCTensor>>

Available on crate feature MLCTensor only.

Add a select layer to the graph

Parameter sources: The source tensors

Parameter condition: The condition mask

Returns: A result tensor

Source

pub unsafe fn scatterWithDimension_source_indices_copyFrom_reductionType( &self, dimension: NSUInteger, source: &MLCTensor, indices: &MLCTensor, copy_from: &MLCTensor, reduction_type: MLCReductionType, ) -> Option<Retained<MLCTensor>>

Available on crate features MLCTensor and MLCTypes only.

Add a scatter layer to the graph

Parameter dimension: The dimension along which to index

Parameter source: The updates to scatter into the result tensor at the index positions specified in indices

Parameter indices: The index of elements to scatter

Parameter copyFrom: The source tensor whose data is to be first copied to the result tensor

Parameter reductionType: The reduction type applied for all values in source tensor that are scattered to a specific location in the result tensor. Must be: MLCReductionTypeNone or MLCReductionTypeSum.

Returns: A result tensor

Source

pub unsafe fn gatherWithDimension_source_indices( &self, dimension: NSUInteger, source: &MLCTensor, indices: &MLCTensor, ) -> Option<Retained<MLCTensor>>

Available on crate feature MLCTensor only.

Add a gather layer to the graph

Parameter dimension: The dimension along which to index

Parameter source: The source tensor

Parameter indices: The index of elements to gather

Returns: A result tensor

Source

pub unsafe fn bindAndWriteData_forInputs_toDevice_batchSize_synchronous( &self, inputs_data: &NSDictionary<NSString, MLCTensorData>, input_tensors: &NSDictionary<NSString, MLCTensor>, device: &MLCDevice, batch_size: NSUInteger, synchronous: bool, ) -> bool

👎Deprecated
Available on crate features MLCDevice and MLCTensor and MLCTensorData only.

Associates data with input tensors. If the device is a GPU, this also copies the data to device memory. Returns true if the data is successfully associated with the input tensors.

This function should be used if you execute the forward, gradient and optimizer updates independently. Before the forward pass is executed, the inputs should be written to device memory. Similarly, before the gradient pass is executed, the inputs (typically the initial gradient tensor) should be written to device memory. The caller must guarantee the lifetime of the underlying memory of each value of inputsData for the entirety of each corresponding input tensor’s lifetime.

Parameter inputsData: The input data to use to write to device memory

Parameter inputTensors: The list of tensors to perform writes on

Parameter device: The device

Parameter batchSize: The batch size. This should be set to the actual batch size that may be used when we execute the graph and can be a value less than or equal to the batch size specified in the tensor. If set to 0, we use the batch size specified in the tensor.

Parameter synchronous: Whether to execute the copy to the device synchronously. For performance, asynchronous execution is recommended.

Returns: A Boolean value indicating whether the data is successfully associated with the tensor.

Source

pub unsafe fn bindAndWriteData_forInputs_toDevice_synchronous( &self, inputs_data: &NSDictionary<NSString, MLCTensorData>, input_tensors: &NSDictionary<NSString, MLCTensor>, device: &MLCDevice, synchronous: bool, ) -> bool

👎Deprecated
Available on crate features MLCDevice and MLCTensor and MLCTensorData only.

Associates data with input tensors. If the device is a GPU, this also copies the data to device memory. Returns true if the data is successfully associated with the input tensors.

This function should be used if you execute the forward, gradient and optimizer updates independently. Before the forward pass is executed, the inputs should be written to device memory. Similarly, before the gradient pass is executed, the inputs (typically the initial gradient tensor) should be written to device memory. The caller must guarantee the lifetime of the underlying memory of each value of inputsData for the entirety of each corresponding input tensor’s lifetime.

Parameter inputsData: The input data to use to write to device memory

Parameter inputTensors: The list of tensors to perform writes on

Parameter device: The device

Parameter synchronous: Whether to execute the copy to the device synchronously. For performance, asynchronous execution is recommended.

Returns: A Boolean value indicating whether the data is successfully associated with the tensor.

Source

pub unsafe fn sourceTensorsForLayer( &self, layer: &MLCLayer, ) -> Retained<NSArray<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Get the source tensors for a layer in the training graph

Parameter layer: A layer in the training graph

Returns: A list of tensors

Source

pub unsafe fn resultTensorsForLayer( &self, layer: &MLCLayer, ) -> Retained<NSArray<MLCTensor>>

👎Deprecated
Available on crate features MLCLayer and MLCTensor only.

Get the result tensors for a layer in the training graph

Parameter layer: A layer in the training graph

Returns: A list of tensors

Methods from Deref<Target = NSObject>§

Source

pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !

Handle messages the object doesn’t recognize.

See Apple’s documentation for details.

Methods from Deref<Target = AnyObject>§

Source

pub fn class(&self) -> &'static AnyClass

Dynamically find the class of this object.

§Panics

May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).

§Example

Check that an instance of NSObject has the precise class NSObject.

use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Source

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where T: Encode,

👎Deprecated: this is difficult to use correctly, use Ivar::load instead.

Use Ivar::load instead.

§Safety

The object must have an instance variable with the given name, and it must be of type T.

See Ivar::load_ptr for details surrounding this.

Source

pub fn downcast_ref<T>(&self) -> Option<&T>
where T: DowncastTarget,

Attempt to downcast the object to a class of type T.

This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.

§Mutable classes

Some classes have immutable and mutable variants, such as NSString and NSMutableString.

When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.

So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.

See Apple’s documentation on mutability and on isKindOfClass: for more details.

§Generic classes

Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.

You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.

§Panics

This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.

§Examples

Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.

Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);

for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§

Source§

impl AsRef<AnyObject> for MLCTrainingGraph

Source§

fn as_ref(&self) -> &AnyObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<MLCGraph> for MLCTrainingGraph

Source§

fn as_ref(&self) -> &MLCGraph

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<MLCTrainingGraph> for MLCTrainingGraph

Source§

fn as_ref(&self) -> &Self

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<NSObject> for MLCTrainingGraph

Source§

fn as_ref(&self) -> &NSObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl Borrow<AnyObject> for MLCTrainingGraph

Source§

fn borrow(&self) -> &AnyObject

Immutably borrows from an owned value. Read more
Source§

impl Borrow<MLCGraph> for MLCTrainingGraph

Source§

fn borrow(&self) -> &MLCGraph

Immutably borrows from an owned value. Read more
Source§

impl Borrow<NSObject> for MLCTrainingGraph

Source§

fn borrow(&self) -> &NSObject

Immutably borrows from an owned value. Read more
impl ClassType for MLCTrainingGraph

const NAME: &'static str = "MLCTrainingGraph"

The name of the Objective-C class that this type represents.

type Super = MLCGraph

The superclass of this class.

type ThreadKind = <<MLCTrainingGraph as ClassType>::Super as ClassType>::ThreadKind

Whether the type can be used from any thread, or only from the main thread.

fn class() -> &'static AnyClass

Get a reference to the Objective-C class that this type represents.

fn as_super(&self) -> &Self::Super

Get an immutable reference to the superclass.
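The ClassType implementation ties the Rust type to the Objective-C class "MLCTrainingGraph" at the type level, so the class name is available as a compile-time constant. A minimal plain-Rust sketch of that idea, using a hypothetical ClassName trait as a stand-in for objc2's ClassType (this is not the real objc2 machinery):

```rust
// Hypothetical simplification of objc2's `ClassType`: each Rust type
// carries the name of the Objective-C class it represents as an
// associated constant.
trait ClassName {
    const NAME: &'static str;
}

struct MLCGraph;
struct MLCTrainingGraph;

impl ClassName for MLCGraph {
    const NAME: &'static str = "MLCGraph";
}

impl ClassName for MLCTrainingGraph {
    const NAME: &'static str = "MLCTrainingGraph";
}

fn main() {
    // The name is known at compile time, which is what lets the real
    // `class()` method look up the Objective-C class at runtime.
    assert_eq!(MLCTrainingGraph::NAME, "MLCTrainingGraph");
    println!("{}", MLCTrainingGraph::NAME);
}
```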
impl Debug for MLCTrainingGraph

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Deref for MLCTrainingGraph

type Target = MLCGraph

The resulting type after dereferencing.

fn deref(&self) -> &Self::Target

Dereferences the value.
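Because Deref targets MLCGraph, superclass methods can be called directly on an MLCTrainingGraph through auto-deref; this is how objc2 emulates the Objective-C class hierarchy in Rust. A minimal plain-Rust sketch of the pattern, using hypothetical Graph and TrainingGraph stand-ins rather than the real objc2 types:

```rust
use std::ops::Deref;

// Hypothetical stand-in for the superclass (MLCGraph).
struct Graph;

impl Graph {
    fn summary(&self) -> &'static str {
        "graph"
    }
}

// Hypothetical stand-in for the subclass (MLCTrainingGraph):
// it embeds its "superclass" and derefs to it.
struct TrainingGraph {
    superclass: Graph,
}

impl Deref for TrainingGraph {
    type Target = Graph;

    fn deref(&self) -> &Graph {
        &self.superclass
    }
}

fn main() {
    let tg = TrainingGraph { superclass: Graph };
    // Superclass methods are reachable through auto-deref, just as
    // MLCGraph methods are callable on an MLCTrainingGraph.
    assert_eq!(tg.summary(), "graph");
    println!("ok");
}
```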
impl Hash for MLCTrainingGraph

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher.

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher.
impl Message for MLCTrainingGraph

fn retain(&self) -> Retained<Self>
where Self: Sized,

Increments the reference count of the receiver.
impl NSObjectProtocol for MLCTrainingGraph

fn isEqual(&self, other: Option<&AnyObject>) -> bool
where Self: Sized + Message,

Check whether the object is equal to an arbitrary other object.

fn hash(&self) -> usize
where Self: Sized + Message,

An integer that can be used as a table address in a hash table structure.

fn isKindOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of the class, or one of its subclasses.

fn is_kind_of<T>(&self) -> bool
where T: ClassType, Self: Sized + Message,

👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref

Check if the object is an instance of the class type, or one of its subclasses.

fn isMemberOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of a specific class, without checking subclasses.

fn respondsToSelector(&self, aSelector: Sel) -> bool
where Self: Sized + Message,

Check whether the object implements or inherits a method with the given selector.

fn conformsToProtocol(&self, aProtocol: &AnyProtocol) -> bool
where Self: Sized + Message,

Check whether the object conforms to a given protocol.

fn description(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object.

fn debugDescription(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object to use when debugging.

fn isProxy(&self) -> bool
where Self: Sized + Message,

Check whether the receiver is a subclass of the NSProxy root class instead of the usual NSObject.

fn retainCount(&self) -> usize
where Self: Sized + Message,

The reference count of the object.
impl PartialEq for MLCTrainingGraph

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Self) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
impl RefEncode for MLCTrainingGraph

const ENCODING_REF: Encoding = <MLCGraph as ::objc2::RefEncode>::ENCODING_REF

The Objective-C type-encoding for a reference of this type.

impl DowncastTarget for MLCTrainingGraph

impl Eq for MLCTrainingGraph

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<'a, T> AnyThread for T
where T: ClassType<ThreadKind = dyn AnyThread + 'a> + ?Sized,

fn alloc() -> Allocated<Self>
where Self: Sized + ClassType,

Allocate a new instance of the class.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.
impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,

type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.
impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
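This blanket implementation means any infallible Into conversion can also be driven through try_from, with the error type fixed to Infallible. A small illustration using a hypothetical Meters newtype (not part of this crate):

```rust
// Hypothetical example type: a `From<f64>` impl gives `f64: Into<Meters>`,
// and the blanket `impl<T, U> TryFrom<U> for T where U: Into<T>` then
// provides an infallible `Meters::try_from`.
#[derive(Debug, PartialEq)]
struct Meters(f64);

impl From<f64> for Meters {
    fn from(v: f64) -> Self {
        Meters(v)
    }
}

fn main() {
    // Provided by the blanket impl; the error type is `Infallible`,
    // so this `Result` can never be `Err`.
    let m = Meters::try_from(2.5).expect("infallible");
    assert_eq!(m, Meters(2.5));
    println!("ok");
}
```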
impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> AutoreleaseSafe for T
where T: ?Sized,