#[repr(C)]
pub struct MLCTrainingGraph { /* private fields */ }

Available on crate features MLCGraph and MLCTrainingGraph only.

A training graph created from one or more MLCGraph objects plus additional layers added directly to the training graph.

See also Apple's documentation.

Implementations

impl MLCTrainingGraph
pub unsafe fn optimizer(&self) -> Option<Retained<MLCOptimizer>>

👎 Deprecated. Available on crate feature MLCOptimizer only.

The optimizer to be used with the training graph.
pub unsafe fn deviceMemorySize(&self) -> NSUInteger

👎 Deprecated.

Returns the total size in bytes of device memory used for all intermediate tensors in the forward pass, gradient pass, and optimizer update for all layers in the training graph. We recommend executing an iteration before checking the device memory size, since the buffers are allocated only when the corresponding pass (such as the gradient pass or optimizer update) is executed.

Returns: An NSUInteger value.
pub unsafe fn graphWithGraphObjects_lossLayer_optimizer(
    graph_objects: &NSArray<MLCGraph>,
    loss_layer: Option<&MLCLayer>,
    optimizer: Option<&MLCOptimizer>,
) -> Retained<Self>

👎 Deprecated. Available on crate features MLCLayer and MLCOptimizer only.

Create a training graph.

Parameter graphObjects: The layers from these graph objects will be added to the training graph.
Parameter lossLayer: The loss layer to use. The loss layer can also be added to the training graph using nodeWithLayer:sources:lossLabels.
Parameter optimizer: The optimizer to use.
Returns: A new training graph object.
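Taken together with stopGradientForTensors and compileWithOptions:device below, this constructor forms the usual setup sequence. The following is an illustrative sketch only, not taken from the crate's examples: it assumes macOS with the MLCompute framework, the relevant crate features enabled, and hypothetical caller-supplied values (`inference_graph`, `loss_layer`, `optimizer`, `frozen_tensors`, `options`, `device`); the import paths are also assumptions about the objc2 crate family.

```rust
// Hedged sketch of the typical setup sequence, using only methods
// documented on this page. Everything here requires macOS/iOS with the
// MLCompute framework; the helper name and arguments are hypothetical.
use objc2::rc::Retained;
use objc2_foundation::NSArray;
use objc2_ml_compute::{
    MLCDevice, MLCGraph, MLCGraphCompilationOptions, MLCLayer, MLCOptimizer,
    MLCTensor, MLCTrainingGraph,
};

unsafe fn setup_training_graph(
    inference_graph: &MLCGraph,
    loss_layer: &MLCLayer,
    optimizer: &MLCOptimizer,
    frozen_tensors: &NSArray<MLCTensor>,
    options: MLCGraphCompilationOptions,
    device: &MLCDevice,
) -> Option<Retained<MLCTrainingGraph>> {
    // 1. Create the training graph from the inference graph, loss layer,
    //    and optimizer. The constructor takes one or more MLCGraph objects.
    let graphs = NSArray::from_slice(&[inference_graph]);
    let training = MLCTrainingGraph::graphWithGraphObjects_lossLayer_optimizer(
        &graphs,
        Some(loss_layer),
        Some(optimizer),
    );

    // 2. Optionally freeze parts of the graph: these tensors' contributions
    //    are ignored when gradients are computed during the gradient pass.
    if !training.stopGradientForTensors(frozen_tensors) {
        return None;
    }

    // 3. Compile for the target device before executing the graph.
    if !training.compileWithOptions_device(options, device) {
        return None;
    }
    Some(training)
}
```

This sketch cannot run outside macOS with the MLCompute framework available, so it is shown without expected output.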
pub unsafe fn addInputs_lossLabels(
    &self,
    inputs: &NSDictionary<NSString, MLCTensor>,
    loss_labels: Option<&NSDictionary<NSString, MLCTensor>>,
) -> bool

👎 Deprecated. Available on crate feature MLCTensor only.

Add the list of inputs to the training graph.

Parameter inputs: The inputs.
Parameter lossLabels: The loss label inputs.
Returns: A boolean indicating success or failure.
pub unsafe fn addInputs_lossLabels_lossLabelWeights(
    &self,
    inputs: &NSDictionary<NSString, MLCTensor>,
    loss_labels: Option<&NSDictionary<NSString, MLCTensor>>,
    loss_label_weights: Option<&NSDictionary<NSString, MLCTensor>>,
) -> bool

👎 Deprecated. Available on crate feature MLCTensor only.

Add the list of inputs to the training graph.

Each input, loss label, or label weights tensor is identified by an NSString. When the training graph is executed, this NSString is used to identify which data object should be used as input data for each tensor whose device memory needs to be updated before the graph is executed.

Parameter inputs: The inputs.
Parameter lossLabels: The loss label inputs.
Parameter lossLabelWeights: The loss label weights.
Returns: A boolean indicating success or failure.
pub unsafe fn addOutputs(
    &self,
    outputs: &NSDictionary<NSString, MLCTensor>,
) -> bool

👎 Deprecated. Available on crate feature MLCTensor only.

Add the list of outputs to the training graph.

Parameter outputs: The outputs.
Returns: A boolean indicating success or failure.
pub unsafe fn stopGradientForTensors(
    &self,
    tensors: &NSArray<MLCTensor>,
) -> bool

👎 Deprecated. Available on crate feature MLCTensor only.

Add the list of tensors whose contributions are not to be taken into account when computing gradients during the gradient pass.

Parameter tensors: The list of tensors.
Returns: A boolean indicating success or failure.
pub unsafe fn compileWithOptions_device(
    &self,
    options: MLCGraphCompilationOptions,
    device: &MLCDevice,
) -> bool

👎 Deprecated. Available on crate features MLCDevice and MLCTypes only.

Compile the training graph for a device.

Parameter options: The compiler options to use when compiling the training graph.
Parameter device: The MLCDevice object.
Returns: A boolean indicating success or failure.
pub unsafe fn compileWithOptions_device_inputTensors_inputTensorsData(
    &self,
    options: MLCGraphCompilationOptions,
    device: &MLCDevice,
    input_tensors: Option<&NSDictionary<NSString, MLCTensor>>,
    input_tensors_data: Option<&NSDictionary<NSString, MLCTensorData>>,
) -> bool

Available on crate features MLCDevice and MLCTensor and MLCTensorData and MLCTypes only.

Compile the training graph for a device.

Specifying the list of constant tensors when compiling the graph allows MLCompute to perform additional optimizations at compile time.

Parameter options: The compiler options to use when compiling the training graph.
Parameter device: The MLCDevice object.
Parameter inputTensors: The list of input tensors that are constants.
Parameter inputTensorsData: The tensor data to be used with these constant input tensors.
Returns: A boolean indicating success or failure.
pub unsafe fn compileOptimizer(&self, optimizer: &MLCOptimizer) -> bool

👎 Deprecated. Available on crate feature MLCOptimizer only.

Compile the optimizer to be used with a training graph.

Typically, the optimizer to be used with a training graph is specified when the training graph is created using graphWithGraphObjects:lossLayer:optimizer. If an optimizer is specified with the training graph, it is compiled in when compileWithOptions:device is called. In the case where the optimizer to be used is not known when the graph is created or compiled, this method can be used to associate and compile a training graph with an optimizer.

Parameter optimizer: The MLCOptimizer object.
Returns: A boolean indicating success or failure.
pub unsafe fn linkWithGraphs(&self, graphs: &NSArray<MLCTrainingGraph>) -> bool

👎 Deprecated.

Link multiple training graphs.

This is used to link subsequent training graphs with the first training sub-graph. This method should be used when tensors are shared by one or more layers in multiple sub-graphs.

Parameter graphs: The list of training graphs to link.
Returns: A boolean indicating success or failure.
pub unsafe fn gradientTensorForInput(
    &self,
    input: &MLCTensor,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate feature MLCTensor only.

Get the gradient tensor for an input tensor.

Parameter input: The input tensor.
Returns: The gradient tensor.
pub unsafe fn sourceGradientTensorsForLayer(
    &self,
    layer: &MLCLayer,
) -> Retained<NSArray<MLCTensor>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Get the source gradient tensors for a layer in the training graph.

Parameter layer: A layer in the training graph.
Returns: A list of tensors.
pub unsafe fn resultGradientTensorsForLayer(
    &self,
    layer: &MLCLayer,
) -> Retained<NSArray<MLCTensor>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Get the result gradient tensors for a layer in the training graph.

Parameter layer: A layer in the training graph.
Returns: A list of tensors.
pub unsafe fn gradientDataForParameter_layer(
    &self,
    parameter: &MLCTensor,
    layer: &MLCLayer,
) -> Option<Retained<NSData>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Get the gradient data for a trainable parameter associated with a layer.

This can be used to get the gradient data for the weights or biases parameters associated with a convolution, fully connected, or convolution transpose layer.

Parameter parameter: The updatable parameter associated with the layer.
Parameter layer: A layer in the training graph. Must be one of the following:
- MLCConvolutionLayer
- MLCFullyConnectedLayer
- MLCBatchNormalizationLayer
- MLCInstanceNormalizationLayer
- MLCGroupNormalizationLayer
- MLCLayerNormalizationLayer
- MLCEmbeddingLayer
- MLCMultiheadAttentionLayer
Returns: The gradient data. Will return nil if the layer is marked as not trainable or if the training graph is not executed with separate calls to the forward and gradient passes.
pub unsafe fn allocateUserGradientForTensor(
    &self,
    tensor: &MLCTensor,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate feature MLCTensor only.

Allocate an entry for a user-specified gradient for a tensor.

Parameter tensor: A result tensor produced by a layer in the training graph that is input to some user-specified code and will need to provide a user gradient during the gradient pass.
Returns: A gradient tensor.
pub unsafe fn executeWithInputsData_lossLabelsData_lossLabelWeightsData_batchSize_options_completionHandler(
    &self,
    inputs_data: &NSDictionary<NSString, MLCTensorData>,
    loss_labels_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    loss_label_weights_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the training graph (forward, gradient, and optimizer update) with the given source and label data.

If an optimizer is specified, the optimizer update is applied. If MLCExecutionOptionsSynchronous is specified in options, this method returns after the graph has been executed. Otherwise, it returns after the graph has been queued for execution. The completion handler is called after the graph has finished execution.

Parameter inputsData: The data objects to use for inputs.
Parameter lossLabelsData: The data objects to use for loss labels.
Parameter lossLabelWeightsData: The data objects to use for loss label weights.
Parameter batchSize: The batch size to use. For a graph where the batch size changes between layers, this value must be 0.
Parameter options: The execution options.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
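As a hedged sketch of one training iteration using the method documented above: it assumes macOS with the MLCompute framework and the MLCTensor, MLCTensorData, MLCTypes, and block2 crate features; the dictionaries, batch size, execution options, and completion handler are hypothetical values built by the caller, and the import paths are assumptions about the objc2 crate family.

```rust
// Illustrative sketch only: one combined training step. How the completion
// handler is constructed (via the block2 crate) is deliberately elided; the
// caller supplies it along with the (hypothetical) input and label data.
use objc2_foundation::{NSDictionary, NSString, NSUInteger};
use objc2_ml_compute::{
    MLCExecutionOptions, MLCGraphCompletionHandler, MLCTensorData, MLCTrainingGraph,
};

unsafe fn run_training_step(
    training: &MLCTrainingGraph,
    inputs: &NSDictionary<NSString, MLCTensorData>,
    labels: &NSDictionary<NSString, MLCTensorData>,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    handler: MLCGraphCompletionHandler,
) -> bool {
    // Forward, gradient, and optimizer update in one call. With a
    // synchronous execution option this returns only after the graph has
    // executed; otherwise it returns once the work is queued.
    training
        .executeWithInputsData_lossLabelsData_lossLabelWeightsData_batchSize_options_completionHandler(
            inputs,
            Some(labels),
            None, // no per-label loss weights in this sketch
            batch_size,
            options,
            handler,
        )
}
```

This sketch cannot run outside macOS with the MLCompute framework available, so it is shown without expected output.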
pub unsafe fn executeWithInputsData_lossLabelsData_lossLabelWeightsData_outputsData_batchSize_options_completionHandler(
    &self,
    inputs_data: &NSDictionary<NSString, MLCTensorData>,
    loss_labels_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    loss_label_weights_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the training graph (forward, gradient, and optimizer update) with the given source and label data.

Parameter inputsData: The data objects to use for inputs.
Parameter lossLabelsData: The data objects to use for loss labels.
Parameter lossLabelWeightsData: The data objects to use for loss label weights.
Parameter outputsData: The data objects to use for outputs.
Parameter batchSize: The batch size to use. For a graph where the batch size changes between layers, this value must be 0.
Parameter options: The execution options.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
pub unsafe fn executeForwardWithBatchSize_options_completionHandler(
    &self,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTypes and block2 only.

Execute the forward pass of the training graph.

Parameter batchSize: The batch size to use. For a graph where the batch size changes between layers, this value must be 0.
Parameter options: The execution options.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
pub unsafe fn executeForwardWithBatchSize_options_outputsData_completionHandler(
    &self,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the forward pass of the training graph.

Parameter batchSize: The batch size to use. For a graph where the batch size changes between layers, this value must be 0.
Parameter options: The execution options.
Parameter outputsData: The data objects to use for outputs.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
pub unsafe fn executeGradientWithBatchSize_options_completionHandler(
    &self,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTypes and block2 only.

Execute the gradient pass of the training graph.

Parameter batchSize: The batch size to use. For a graph where the batch size changes between layers, this value must be 0.
Parameter options: The execution options.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
pub unsafe fn executeGradientWithBatchSize_options_outputsData_completionHandler(
    &self,
    batch_size: NSUInteger,
    options: MLCExecutionOptions,
    outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTensorData and MLCTypes and block2 only.

Execute the gradient pass of the training graph.

Parameter batchSize: The batch size to use. For a graph where the batch size changes between layers, this value must be 0.
Parameter options: The execution options.
Parameter outputsData: The data objects to use for outputs.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
pub unsafe fn executeOptimizerUpdateWithOptions_completionHandler(
    &self,
    options: MLCExecutionOptions,
    completion_handler: MLCGraphCompletionHandler,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTypes and block2 only.

Execute the optimizer update pass of the training graph.

Parameter options: The execution options.
Parameter completionHandler: The completion handler.
Returns: A boolean indicating success or failure.
pub unsafe fn synchronizeUpdates(&self)

👎 Deprecated.

Synchronize updates (weights/biases from convolution, fully connected, and LSTM layers, and tensor parameters) from device memory to host memory.
pub unsafe fn setTrainingTensorParameters(
    &self,
    parameters: &NSArray<MLCTensorParameter>,
) -> bool

👎 Deprecated. Available on crate feature MLCTensorParameter only.

Set the input tensor parameters that will also be updated by the optimizer.

These represent the list of input tensors to be updated when the optimizer update is executed. Weights, bias, beta, and gamma tensors are not included in this list; MLCompute automatically adds them to the parameter list based on whether the layer is marked as updatable.

Parameter parameters: The list of input tensors to be updated by the optimizer.
Returns: A boolean indicating success or failure.
pub unsafe fn bindOptimizerData_deviceData_withTensor(
    &self,
    data: &NSArray<MLCTensorData>,
    device_data: Option<&NSArray<MLCTensorOptimizerDeviceData>>,
    tensor: &MLCTensor,
) -> bool

👎 Deprecated. Available on crate features MLCTensor and MLCTensorData and MLCTensorOptimizerDeviceData only.

Associates the given optimizer data and device data buffers with the tensor. Returns true if the data is successfully associated with the tensor and copied to the device.

The caller must guarantee the lifetime of the underlying memory of data for the entirety of the tensor's lifetime. The deviceData buffers are allocated by MLCompute. This method must be called before executeOptimizerUpdateWithOptions or executeWithInputsData is called for the training graph. We recommend using this method instead of [MLCTensor bindOptimizerData], especially if the optimizer update is called multiple times for each batch.

Parameter data: The optimizer data to be associated with the tensor.
Parameter deviceData: The optimizer device data to be associated with the tensor.
Parameter tensor: The tensor.
Returns: A Boolean value indicating whether the data is successfully associated with the tensor.
impl MLCTrainingGraph

Methods declared on superclass MLCGraph.

Methods from Deref<Target = MLCGraph>
pub unsafe fn device(&self) -> Option<Retained<MLCDevice>>

👎 Deprecated. Available on crate feature MLCDevice only.

The device to be used when compiling and executing a graph.
pub unsafe fn layers(&self) -> Retained<NSArray<MLCLayer>>

👎 Deprecated. Available on crate feature MLCLayer only.

Layers in the graph.
pub unsafe fn summarizedDOTDescription(&self) -> Retained<NSString>

👎 Deprecated.

A DOT representation of the graph.

For more on the DOT language, refer to https://en.wikipedia.org/wiki/DOT_(graph_description_language). Edges drawn with dashed lines have stop gradients, while those with solid lines do not.
pub unsafe fn nodeWithLayer_source(
    &self,
    layer: &MLCLayer,
    source: &MLCTensor,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Add a layer to the graph.

Parameter layer: The layer.
Parameter source: The source tensor.
Returns: A result tensor.
pub unsafe fn nodeWithLayer_sources(
    &self,
    layer: &MLCLayer,
    sources: &NSArray<MLCTensor>,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Add a layer to the graph.

Parameter layer: The layer.
Parameter sources: A list of source tensors.
For variable-length sequences in LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.
Returns: A result tensor.
pub unsafe fn nodeWithLayer_sources_disableUpdate(
    &self,
    layer: &MLCLayer,
    sources: &NSArray<MLCTensor>,
    disable_update: bool,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Add a layer to the graph.

Parameter layer: The layer.
Parameter sources: A list of source tensors.
Parameter disableUpdate: A flag to indicate whether the optimizer update should be disabled for this layer.
For variable-length sequences in LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.
Returns: A result tensor.
pub unsafe fn nodeWithLayer_sources_lossLabels(
    &self,
    layer: &MLCLayer,
    sources: &NSArray<MLCTensor>,
    loss_labels: &NSArray<MLCTensor>,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.

Add a loss layer to the graph.

Parameter layer: The loss layer.
Parameter lossLabels: The loss labels tensor.
For variable-length sequences in LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.
Returns: A result tensor.
pub unsafe fn splitWithSource_splitCount_dimension(
    &self,
    source: &MLCTensor,
    split_count: NSUInteger,
    dimension: NSUInteger,
) -> Option<Retained<NSArray<MLCTensor>>>

👎 Deprecated. Available on crate feature MLCTensor only.

Add a split layer to the graph.

Parameter source: The source tensor.
Parameter splitCount: The number of splits.
Parameter dimension: The dimension along which to split the source tensor.
Returns: A result tensor.
pub unsafe fn splitWithSource_splitSectionLengths_dimension(
    &self,
    source: &MLCTensor,
    split_section_lengths: &NSArray<NSNumber>,
    dimension: NSUInteger,
) -> Option<Retained<NSArray<MLCTensor>>>

👎 Deprecated. Available on crate feature MLCTensor only.

Add a split layer to the graph.

Parameter source: The source tensor.
Parameter splitSectionLengths: The lengths of each split section.
Parameter dimension: The dimension along which to split the source tensor.
Returns: A result tensor.
pub unsafe fn concatenateWithSources_dimension(
    &self,
    sources: &NSArray<MLCTensor>,
    dimension: NSUInteger,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate feature MLCTensor only.

Add a concat layer to the graph.

Parameter sources: The source tensors to concatenate.
Parameter dimension: The concatenation dimension.
Returns: A result tensor.
pub unsafe fn reshapeWithShape_source(
    &self,
    shape: &NSArray<NSNumber>,
    source: &MLCTensor,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate feature MLCTensor only.

Add a reshape layer to the graph.

Parameter shape: An array representing the shape of the result tensor.
Parameter source: The source tensor.
Returns: A result tensor.
pub unsafe fn transposeWithDimensions_source(
    &self,
    dimensions: &NSArray<NSNumber>,
    source: &MLCTensor,
) -> Option<Retained<MLCTensor>>

👎 Deprecated. Available on crate feature MLCTensor only.

Add a transpose layer to the graph.

Parameter dimensions: An NSArray<NSNumber *> representing the desired ordering of dimensions. The dimensions array specifies the input axis source for each output axis, such that the K'th element in the dimensions array specifies the input axis source for the K'th axis in the output. The batch dimension, which is typically axis 0, cannot be transposed.
Returns: A result tensor.
pub unsafe fn selectWithSources_condition(
    &self,
    sources: &NSArray<MLCTensor>,
    condition: &MLCTensor,
) -> Option<Retained<MLCTensor>>

Available on crate feature MLCTensor only.

Add a select layer to the graph.

Parameter sources: The source tensors.
Parameter condition: The condition mask.
Returns: A result tensor.
pub unsafe fn scatterWithDimension_source_indices_copyFrom_reductionType(
    &self,
    dimension: NSUInteger,
    source: &MLCTensor,
    indices: &MLCTensor,
    copy_from: &MLCTensor,
    reduction_type: MLCReductionType,
) -> Option<Retained<MLCTensor>>

Available on crate features MLCTensor and MLCTypes only.

Add a scatter layer to the graph.

Parameter dimension: The dimension along which to index.
Parameter source: The updates to scatter to the result tensor at the index positions specified in indices.
Parameter indices: The index of elements to scatter.
Parameter copyFrom: The source tensor whose data is first copied to the result tensor.
Parameter reductionType: The reduction type applied for all values in the source tensor that are scattered to a specific location in the result tensor. Must be MLCReductionTypeNone or MLCReductionTypeSum.
Returns: A result tensor.
pub unsafe fn gatherWithDimension_source_indices(
    &self,
    dimension: NSUInteger,
    source: &MLCTensor,
    indices: &MLCTensor,
) -> Option<Retained<MLCTensor>>

Available on crate feature MLCTensor only.

Add a gather layer to the graph.

Parameter dimension: The dimension along which to index.
Parameter source: The source tensor.
Parameter indices: The index of elements to gather.
Returns: A result tensor.
pub unsafe fn bindAndWriteData_forInputs_toDevice_batchSize_synchronous(
    &self,
    inputs_data: &NSDictionary<NSString, MLCTensorData>,
    input_tensors: &NSDictionary<NSString, MLCTensor>,
    device: &MLCDevice,
    batch_size: NSUInteger,
    synchronous: bool,
) -> bool

👎 Deprecated. Available on crate features MLCDevice and MLCTensor and MLCTensorData only.

Associates data with input tensors. If the device is a GPU, also copies the data to device memory. Returns true if the data is successfully associated with the input tensors.

This function should be used if you execute the forward, gradient, and optimizer updates independently. Before the forward pass is executed, the inputs should be written to device memory. Similarly, before the gradient pass is executed, the inputs (typically the initial gradient tensor) should be written to device memory. The caller must guarantee the lifetime of the underlying memory of each value of inputsData for the entirety of each corresponding input tensor's lifetime.

Parameter inputsData: The input data to use to write to device memory.
Parameter inputTensors: The list of tensors to perform writes on.
Parameter device: The device.
Parameter batchSize: The batch size. This should be set to the actual batch size that may be used when the graph is executed, and can be a value less than or equal to the batch size specified in the tensor. If set to 0, the batch size specified in the tensor is used.
Parameter synchronous: Whether to execute the copy to the device synchronously. For performance, asynchronous execution is recommended.
Returns: A Boolean value indicating whether the data is successfully associated with the tensor.
Sourcepub unsafe fn bindAndWriteData_forInputs_toDevice_synchronous(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
input_tensors: &NSDictionary<NSString, MLCTensor>,
device: &MLCDevice,
synchronous: bool,
) -> bool
👎Deprecated. Available on crate features MLCDevice, MLCTensor and MLCTensorData only.
Associates data with input tensors. If the device is a GPU, this also copies the data to device memory. Returns true if the data is successfully associated with the input tensors.
This function should be used if you execute the forward, gradient, and optimizer-update passes independently.
Before the forward pass is executed, the inputs should be written to device memory. Similarly, before the gradient pass is executed, the inputs (typically the initial gradient tensor) should be written to device memory. The caller must guarantee the lifetime of the underlying memory of each value of inputsData for the entirety of the corresponding input tensor’s lifetime.
Parameter inputsData: The input data to write to device memory
Parameter inputTensors: The tensors to perform writes on
Parameter device: The device to write to
Parameter synchronous: Whether to execute the copy to the device synchronously. For performance, asynchronous execution is recommended.
Returns: A Boolean value indicating whether the data was successfully associated with the tensors.
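As a rough illustration of the association step that these methods perform (every entry in inputsData must correspond to a bound input tensor of matching size), here is a dependency-free Rust sketch; `validate_bindings`, the string keys, and the element-count representation are assumptions made for the example, not MLCompute API:

```rust
use std::collections::HashMap;

/// Hypothetical check mirroring the documented precondition: each piece
/// of input data must correspond to a named input tensor, and the data
/// must have the right number of elements for that tensor.
fn validate_bindings(
    inputs_data: &HashMap<String, Vec<f32>>,
    input_tensors: &HashMap<String, usize>, // tensor name -> element count
) -> bool {
    inputs_data.iter().all(|(name, data)| {
        // The tensor must exist and its element count must match the data.
        input_tensors.get(name).map_or(false, |&len| len == data.len())
    })
}

fn main() {
    let mut tensors = HashMap::new();
    tensors.insert("image".to_string(), 4);

    let mut data = HashMap::new();
    data.insert("image".to_string(), vec![0.0_f32; 4]);
    assert!(validate_bindings(&data, &tensors)); // names and sizes match

    data.insert("label".to_string(), vec![1.0]);
    assert!(!validate_bindings(&data, &tensors)); // no "label" tensor bound
}
```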
Methods from Deref<Target = NSObject>§
Sourcepub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
Sourcepub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Sourcepub unsafe fn get_ivar<T>(&self, name: &str) -> &T where
T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it must be of type T.
See Ivar::load_ptr for details surrounding this.
Sourcepub fn downcast_ref<T>(&self) -> Option<&T> where
T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString and NSMutableString.
When an Objective-C API signature says it gives you an immutable class, it generally expects you not to mutate it, even though it may technically be mutable “under the hood”.
So using this method to convert an NSString to an NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.
See Apple’s documentation on mutability and on isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the object must have an instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}
Trait Implementations§
Source§impl AsRef<AnyObject> for MLCTrainingGraph
Source§impl AsRef<MLCGraph> for MLCTrainingGraph
Source§impl AsRef<MLCTrainingGraph> for MLCTrainingGraph
Source§impl AsRef<NSObject> for MLCTrainingGraph
Source§impl Borrow<AnyObject> for MLCTrainingGraph
Source§impl Borrow<MLCGraph> for MLCTrainingGraph
Source§impl Borrow<NSObject> for MLCTrainingGraph
Source§impl ClassType for MLCTrainingGraph
Source§const NAME: &'static str = "MLCTrainingGraph"
Source§type ThreadKind = <<MLCTrainingGraph as ClassType>::Super as ClassType>::ThreadKind
Source§impl Debug for MLCTrainingGraph
Source§impl Deref for MLCTrainingGraph
Source§impl Hash for MLCTrainingGraph
Source§impl Message for MLCTrainingGraph
Source§impl NSObjectProtocol for MLCTrainingGraph
Source§fn isEqual(&self, other: Option<&AnyObject>) -> bool
Source§fn hash(&self) -> usize
Source§fn isKindOfClass(&self, cls: &AnyClass) -> bool
Source§fn is_kind_of<T>(&self) -> bool
isKindOfClass directly, or cast your objects with AnyObject::downcast_ref