#[repr(C)]
pub struct MLCInferenceGraph { /* private fields */ }
Available on crate features MLCGraph and MLCInferenceGraph only.
An inference graph created from one or more MLCGraph objects plus additional layers added directly to the inference graph.
See also Apple’s documentation
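A minimal end-to-end sketch of the typical flow, assuming the MLCGraph objects, tensors, device, and compilation options are built elsewhere with the objc2-foundation and MLCompute bindings; the crate path objc2_ml_compute is an assumption and may need adjusting for the bindings crate you use.
use objc2::rc::Retained;
use objc2_foundation::{NSArray, NSDictionary, NSString};
use objc2_ml_compute::{
    MLCDevice, MLCGraph, MLCGraphCompilationOptions, MLCInferenceGraph, MLCTensor,
};

// Wrap existing MLCGraph objects in an inference graph, register the tensors
// that will be fed and fetched, and compile the result for a device.
unsafe fn build_inference_graph(
    graph_objects: &NSArray<MLCGraph>,
    inputs: &NSDictionary<NSString, MLCTensor>,
    outputs: &NSDictionary<NSString, MLCTensor>,
    compile_options: MLCGraphCompilationOptions,
    device: &MLCDevice,
) -> Retained<MLCInferenceGraph> {
    let graph = MLCInferenceGraph::graphWithGraphObjects(graph_objects);
    assert!(graph.addInputs(inputs));
    assert!(graph.addOutputs(outputs));
    assert!(graph.compileWithOptions_device(compile_options, device));
    graph
}
Execution is then driven with one of the executeWithInputsData… methods documented below.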
Implementations§
impl MLCInferenceGraph
pub unsafe fn deviceMemorySize(&self) -> NSUInteger
👎 Deprecated
Returns the total size in bytes of device memory used by all intermediate tensors in the inference graph.
Returns: An NSUInteger value
pub unsafe fn new() -> Retained<Self>
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn graphWithGraphObjects(
graph_objects: &NSArray<MLCGraph>,
) -> Retained<Self>
👎 Deprecated
Create an inference graph.
Parameter graphObjects: The layers from these graph objects will be added to the inference graph
Returns: A new inference graph object
pub unsafe fn addInputs(
&self,
inputs: &NSDictionary<NSString, MLCTensor>,
) -> bool
👎 Deprecated. Available on crate feature MLCTensor only.
Add the list of inputs to the inference graph.
Parameter inputs: The inputs
Returns: A boolean indicating success or failure
pub unsafe fn addInputs_lossLabels_lossLabelWeights(
&self,
inputs: &NSDictionary<NSString, MLCTensor>,
loss_labels: Option<&NSDictionary<NSString, MLCTensor>>,
loss_label_weights: Option<&NSDictionary<NSString, MLCTensor>>,
) -> bool
👎 Deprecated. Available on crate feature MLCTensor only.
Add the list of inputs to the inference graph.
Each input, loss label, or label weights tensor is identified by an NSString. When the inference graph is executed, this NSString is used to identify which data object should be used as input data for each tensor whose device memory needs to be updated before the graph is executed.
Parameter inputs: The inputs
Parameter lossLabels: The loss label inputs
Parameter lossLabelWeights: The loss label weights
Returns: A boolean indicating success or failure
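A minimal sketch of the key correspondence described above, assuming the NSDictionary values are built elsewhere; the NSString keys used when registering the tensors must match the keys of the MLCTensorData dictionaries passed to the execute… methods later. Crate paths are assumptions.
use objc2_foundation::{NSDictionary, NSString};
use objc2_ml_compute::{MLCInferenceGraph, MLCTensor};

unsafe fn register_inputs(
    graph: &MLCInferenceGraph,
    inputs: &NSDictionary<NSString, MLCTensor>,      // e.g. keyed by "image"
    loss_labels: &NSDictionary<NSString, MLCTensor>, // e.g. keyed by "label"
) -> bool {
    // Loss label weights are optional; this sketch passes None for them.
    graph.addInputs_lossLabels_lossLabelWeights(inputs, Some(loss_labels), None)
}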
pub unsafe fn addOutputs(
&self,
outputs: &NSDictionary<NSString, MLCTensor>,
) -> bool
👎 Deprecated. Available on crate feature MLCTensor only.
Add the list of outputs to the inference graph.
Parameter outputs: The outputs
Returns: A boolean indicating success or failure
pub unsafe fn compileWithOptions_device(
&self,
options: MLCGraphCompilationOptions,
device: &MLCDevice,
) -> bool
👎 Deprecated. Available on crate features MLCDevice and MLCTypes only.
Compile the inference graph for a device.
Parameter options: The compiler options to use when compiling the inference graph
Parameter device: The MLCDevice object
Returns: A boolean indicating success or failure
pub unsafe fn compileWithOptions_device_inputTensors_inputTensorsData(
&self,
options: MLCGraphCompilationOptions,
device: &MLCDevice,
input_tensors: Option<&NSDictionary<NSString, MLCTensor>>,
input_tensors_data: Option<&NSDictionary<NSString, MLCTensorData>>,
) -> bool
Available on crate features MLCDevice, MLCTensor, MLCTensorData and MLCTypes only.
Compile the inference graph for a device.
Specifying the list of constant tensors when we compile the graph allows MLCompute to perform additional optimizations at compile time.
Parameter options: The compiler options to use when compiling the inference graph
Parameter device: The MLCDevice object
Parameter inputTensors: The list of input tensors that are constants
Parameter inputTensorsData: The tensor data to be used with these constant input tensors
Returns: A boolean indicating success or failure
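A hedged sketch of compiling with constant inputs so the compile-time optimizations described above can apply; all objects are assumed to be built elsewhere, and the crate path is an assumption.
use objc2_foundation::{NSDictionary, NSString};
use objc2_ml_compute::{
    MLCDevice, MLCGraphCompilationOptions, MLCInferenceGraph, MLCTensor, MLCTensorData,
};

unsafe fn compile_with_constants(
    graph: &MLCInferenceGraph,
    options: MLCGraphCompilationOptions,
    device: &MLCDevice,
    constant_tensors: &NSDictionary<NSString, MLCTensor>,  // inputs known to be constant
    constant_data: &NSDictionary<NSString, MLCTensorData>, // their backing data, same keys
) -> bool {
    graph.compileWithOptions_device_inputTensors_inputTensorsData(
        options,
        device,
        Some(constant_tensors),
        Some(constant_data),
    )
}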
pub unsafe fn linkWithGraphs(&self, graphs: &NSArray<MLCInferenceGraph>) -> bool
👎 Deprecated
Link multiple inference graphs.
This is used to link subsequent inference graphs with the first inference sub-graph. This method should be used when tensors are shared by one or more layers in multiple sub-graphs.
Parameter graphs: The list of inference graphs to link
Returns: A boolean indicating success or failure
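A minimal sketch of linking, assuming the sub-graphs are already built: the first sub-graph receives the later ones that share tensors with it. Crate paths are assumptions.
use objc2_foundation::NSArray;
use objc2_ml_compute::MLCInferenceGraph;

unsafe fn link_subgraphs(
    first: &MLCInferenceGraph,
    subsequent: &NSArray<MLCInferenceGraph>,
) -> bool {
    // Called on the first sub-graph, passing the sub-graphs to link with it.
    first.linkWithGraphs(subsequent)
}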
pub unsafe fn executeWithInputsData_batchSize_options_completionHandler(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
batch_size: NSUInteger,
options: MLCExecutionOptions,
completion_handler: MLCGraphCompletionHandler,
) -> bool
👎 Deprecated. Available on crate features MLCTensor, MLCTensorData, MLCTypes and block2 only.
Execute the inference graph with given input data.
Execute the inference graph given input data. If MLCExecutionOptionsSynchronous is specified in ‘options’, this method returns after the graph has been executed. Otherwise, this method returns after the graph has been queued for execution. The completion handler is called after the graph has finished execution.
Parameter inputsData: The data objects to use for inputs
Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.
Parameter options: The execution options
Parameter completionHandler: The completion handler
Returns: A boolean indicating success or failure
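A hedged sketch of driving execution, assuming the input data dictionary, options value, and block2 completion handler are created elsewhere (building the block is not shown); the crate path is an assumption.
use objc2_foundation::{NSDictionary, NSString};
use objc2_ml_compute::{
    MLCExecutionOptions, MLCGraphCompletionHandler, MLCInferenceGraph, MLCTensorData,
};

unsafe fn run(
    graph: &MLCInferenceGraph,                           // compiled inference graph
    inputs_data: &NSDictionary<NSString, MLCTensorData>, // keyed like addInputs
    batch_size: usize,                                   // NSUInteger is usize in objc2
    options: MLCExecutionOptions,                        // set the synchronous option to block
    handler: MLCGraphCompletionHandler,                  // invoked when execution finishes
) -> bool {
    // Returns after queueing (asynchronous) or after the run (synchronous option set);
    // in both cases the completion handler fires once the graph has finished.
    graph.executeWithInputsData_batchSize_options_completionHandler(
        inputs_data,
        batch_size,
        options,
        handler,
    )
}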
pub unsafe fn executeWithInputsData_outputsData_batchSize_options_completionHandler(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>,
batch_size: NSUInteger,
options: MLCExecutionOptions,
completion_handler: MLCGraphCompletionHandler,
) -> bool
👎 Deprecated. Available on crate features MLCTensor, MLCTensorData, MLCTypes and block2 only.
Execute the inference graph with given input data.
Execute the inference graph given input data. If MLCExecutionOptionsSynchronous is specified in ‘options’, this method returns after the graph has been executed. Otherwise, this method returns after the graph has been queued for execution. The completion handler is called after the graph has finished execution.
Parameter inputsData: The data objects to use for inputs
Parameter outputsData: The data objects to use for outputs
Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.
Parameter options: The execution options
Parameter completionHandler: The completion handler
Returns: A boolean indicating success or failure
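This variant additionally accepts MLCTensorData objects keyed by the output tensor names, presumably so results land in buffers the caller supplies; a minimal sketch under that assumption, with all objects built elsewhere and crate paths assumed.
use objc2_foundation::{NSDictionary, NSString};
use objc2_ml_compute::{
    MLCExecutionOptions, MLCGraphCompletionHandler, MLCInferenceGraph, MLCTensorData,
};

unsafe fn run_into_buffers(
    graph: &MLCInferenceGraph,
    inputs_data: &NSDictionary<NSString, MLCTensorData>,
    outputs_data: &NSDictionary<NSString, MLCTensorData>, // keyed like addOutputs
    batch_size: usize,
    options: MLCExecutionOptions,
    handler: MLCGraphCompletionHandler,
) -> bool {
    graph.executeWithInputsData_outputsData_batchSize_options_completionHandler(
        inputs_data,
        Some(outputs_data),
        batch_size,
        options,
        handler,
    )
}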
pub unsafe fn executeWithInputsData_lossLabelsData_lossLabelWeightsData_batchSize_options_completionHandler(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
loss_labels_data: Option<&NSDictionary<NSString, MLCTensorData>>,
loss_label_weights_data: Option<&NSDictionary<NSString, MLCTensorData>>,
batch_size: NSUInteger,
options: MLCExecutionOptions,
completion_handler: MLCGraphCompletionHandler,
) -> bool
👎 Deprecated. Available on crate features MLCTensor, MLCTensorData, MLCTypes and block2 only.
Execute the inference graph with given input data.
Execute the inference graph given input data. If MLCExecutionOptionsSynchronous is specified in ‘options’, this method returns after the graph has been executed. Otherwise, this method returns after the graph has been queued for execution. The completion handler is called after the graph has finished execution.
Parameter inputsData: The data objects to use for inputs
Parameter lossLabelsData: The data objects to use for loss labels
Parameter lossLabelWeightsData: The data objects to use for loss label weights
Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.
Parameter options: The execution options
Parameter completionHandler: The completion handler
Returns: A boolean indicating success or failure
pub unsafe fn executeWithInputsData_lossLabelsData_lossLabelWeightsData_outputsData_batchSize_options_completionHandler(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
loss_labels_data: Option<&NSDictionary<NSString, MLCTensorData>>,
loss_label_weights_data: Option<&NSDictionary<NSString, MLCTensorData>>,
outputs_data: Option<&NSDictionary<NSString, MLCTensorData>>,
batch_size: NSUInteger,
options: MLCExecutionOptions,
completion_handler: MLCGraphCompletionHandler,
) -> bool
👎 Deprecated. Available on crate features MLCTensor, MLCTensorData, MLCTypes and block2 only.
Execute the inference graph with given input data.
Execute the inference graph given input data. If MLCExecutionOptionsSynchronous is specified in ‘options’, this method returns after the graph has been executed. Otherwise, this method returns after the graph has been queued for execution. The completion handler is called after the graph has finished execution.
Parameter inputsData: The data objects to use for inputs
Parameter lossLabelsData: The data objects to use for loss labels
Parameter lossLabelWeightsData: The data objects to use for loss label weights
Parameter outputsData: The data objects to use for outputs
Parameter batchSize: The batch size to use. For a graph where batch size changes between layers this value must be 0.
Parameter options: The execution options
Parameter completionHandler: The completion handler
Returns: A boolean indicating success or failure
Methods from Deref<Target = MLCGraph>§
pub unsafe fn device(&self) -> Option<Retained<MLCDevice>>
👎 Deprecated. Available on crate feature MLCDevice only.
The device to be used when compiling and executing a graph.
pub unsafe fn layers(&self) -> Retained<NSArray<MLCLayer>>
👎 Deprecated. Available on crate feature MLCLayer only.
Layers in the graph.
pub unsafe fn summarizedDOTDescription(&self) -> Retained<NSString>
👎 Deprecated
A DOT representation of the graph.
For more info on the DOT language, refer to https://en.wikipedia.org/wiki/DOT_(graph_description_language). Edges drawn with dashed lines are those that have stop gradients, while those with solid lines don’t.
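A small sketch, assuming a populated graph built elsewhere, that dumps the DOT description for inspection with Graphviz; the crate path is an assumption.
use objc2_ml_compute::MLCGraph;

unsafe fn dump_dot(graph: &MLCGraph) {
    // NSString implements Display, so the DOT text can be printed or written to a .dot file.
    let dot = graph.summarizedDOTDescription();
    println!("{dot}");
}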
pub unsafe fn nodeWithLayer_source(
&self,
layer: &MLCLayer,
source: &MLCTensor,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.
Add a layer to the graph.
Parameter layer: The layer
Parameter source: The source tensor
Returns: A result tensor
pub unsafe fn nodeWithLayer_sources(
&self,
layer: &MLCLayer,
sources: &NSArray<MLCTensor>,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.
Add a layer to the graph.
Parameter layer: The layer
Parameter sources: A list of source tensors
For variable length sequences of LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.
Returns: A result tensor
pub unsafe fn nodeWithLayer_sources_disableUpdate(
&self,
layer: &MLCLayer,
sources: &NSArray<MLCTensor>,
disable_update: bool,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.
Add a layer to the graph.
Parameter layer: The layer
Parameter sources: A list of source tensors
Parameter disableUpdate: A flag to indicate if the optimizer update should be disabled for this layer
For variable length sequences of LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.
Returns: A result tensor
pub unsafe fn nodeWithLayer_sources_lossLabels(
&self,
layer: &MLCLayer,
sources: &NSArray<MLCTensor>,
loss_labels: &NSArray<MLCTensor>,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate features MLCLayer and MLCTensor only.
Add a loss layer to the graph.
Parameter layer: The loss layer
Parameter lossLabels: The loss labels tensor
For variable length sequences of LSTM/RNN layers, create an MLCTensor of sortedSequenceLengths and pass it as the last index (i.e. index 2 or 4) of sources. This tensor must be of type MLCDataTypeInt32.
Returns: A result tensor
pub unsafe fn splitWithSource_splitCount_dimension(
&self,
source: &MLCTensor,
split_count: NSUInteger,
dimension: NSUInteger,
) -> Option<Retained<NSArray<MLCTensor>>>
👎 Deprecated. Available on crate feature MLCTensor only.
Add a split layer to the graph.
Parameter source: The source tensor
Parameter splitCount: The number of splits
Parameter dimension: The dimension to split the source tensor
Returns: A result tensor
pub unsafe fn splitWithSource_splitSectionLengths_dimension(
&self,
source: &MLCTensor,
split_section_lengths: &NSArray<NSNumber>,
dimension: NSUInteger,
) -> Option<Retained<NSArray<MLCTensor>>>
👎 Deprecated. Available on crate feature MLCTensor only.
Add a split layer to the graph.
Parameter source: The source tensor
Parameter splitSectionLengths: The lengths of each split section
Parameter dimension: The dimension to split the source tensor
Returns: A result tensor
pub unsafe fn concatenateWithSources_dimension(
&self,
sources: &NSArray<MLCTensor>,
dimension: NSUInteger,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate feature MLCTensor only.
Add a concat layer to the graph.
Parameter sources: The source tensors to concatenate
Parameter dimension: The concatenation dimension
Returns: A result tensor
pub unsafe fn reshapeWithShape_source(
&self,
shape: &NSArray<NSNumber>,
source: &MLCTensor,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate feature MLCTensor only.
Add a reshape layer to the graph.
Parameter shape: An array representing the shape of the result tensor
Parameter source: The source tensor
Returns: A result tensor
pub unsafe fn transposeWithDimensions_source(
&self,
dimensions: &NSArray<NSNumber>,
source: &MLCTensor,
) -> Option<Retained<MLCTensor>>
👎 Deprecated. Available on crate feature MLCTensor only.
Add a transpose layer to the graph.
Parameter dimensions: An NSArray<NSNumber *> representing the desired ordering of dimensions
The dimensions array specifies the input axis source for each output axis, such that the K’th element in the dimensions array specifies the input axis source for the K’th axis in the output. The batch dimension, which is typically axis 0, cannot be transposed.
Returns: A result tensor
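A hedged sketch of the dimension mapping, assuming a 4-D NCHW-style source tensor built elsewhere: entry K names the input axis that feeds output axis K, so [0, 1, 3, 2] swaps the last two axes while leaving the batch axis in place. Crate paths are assumptions.
use objc2::rc::Retained;
use objc2_foundation::{NSArray, NSNumber};
use objc2_ml_compute::{MLCGraph, MLCTensor};

unsafe fn swap_last_two_axes(
    graph: &MLCGraph,
    source: &MLCTensor, // shape [N, C, H, W]
) -> Option<Retained<MLCTensor>> {
    // Output axis 2 reads input axis 3 and vice versa; axes 0 and 1 are untouched.
    let dims = NSArray::from_retained_slice(&[
        NSNumber::new_usize(0),
        NSNumber::new_usize(1),
        NSNumber::new_usize(3),
        NSNumber::new_usize(2),
    ]);
    graph.transposeWithDimensions_source(&dims, source)
}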
pub unsafe fn selectWithSources_condition(
&self,
sources: &NSArray<MLCTensor>,
condition: &MLCTensor,
) -> Option<Retained<MLCTensor>>
Available on crate feature MLCTensor only.
Add a select layer to the graph.
Parameter sources: The source tensors
Parameter condition: The condition mask
Returns: A result tensor
pub unsafe fn scatterWithDimension_source_indices_copyFrom_reductionType(
&self,
dimension: NSUInteger,
source: &MLCTensor,
indices: &MLCTensor,
copy_from: &MLCTensor,
reduction_type: MLCReductionType,
) -> Option<Retained<MLCTensor>>
Available on crate features MLCTensor and MLCTypes only.
Add a scatter layer to the graph.
Parameter dimension: The dimension along which to index
Parameter source: The updates to scatter into the result tensor at the index positions specified in indices
Parameter indices: The index of elements to scatter
Parameter copyFrom: The source tensor whose data is to be first copied to the result tensor
Parameter reductionType: The reduction type applied for all values in the source tensor that are scattered to a specific location in the result tensor. Must be MLCReductionTypeNone or MLCReductionTypeSum.
Returns: A result tensor
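A hedged sketch of the scatter semantics described above, with tensors assumed to be built elsewhere; the Rust names of the MLCReductionType constants and the crate path are assumptions based on the Objective-C names.
use objc2::rc::Retained;
use objc2_ml_compute::{MLCGraph, MLCReductionType, MLCTensor};

// Conceptually, with dimension = 0 and Sum reduction:
//   result = copy_from                      (result is seeded first)
//   result[indices[i][j]][j] += source[i][j]
unsafe fn scatter_rows(
    graph: &MLCGraph,
    source: &MLCTensor,    // the updates
    indices: &MLCTensor,   // where each update lands along dimension 0
    copy_from: &MLCTensor, // initial contents of the result
) -> Option<Retained<MLCTensor>> {
    graph.scatterWithDimension_source_indices_copyFrom_reductionType(
        0,
        source,
        indices,
        copy_from,
        MLCReductionType::Sum, // constant name assumed; MLCReductionTypeSum in Objective-C
    )
}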
pub unsafe fn gatherWithDimension_source_indices(
&self,
dimension: NSUInteger,
source: &MLCTensor,
indices: &MLCTensor,
) -> Option<Retained<MLCTensor>>
Available on crate feature MLCTensor only.
Add a gather layer to the graph.
Parameter dimension: The dimension along which to index
Parameter source: The source tensor
Parameter indices: The index of elements to gather
Returns: A result tensor
pub unsafe fn bindAndWriteData_forInputs_toDevice_batchSize_synchronous(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
input_tensors: &NSDictionary<NSString, MLCTensor>,
device: &MLCDevice,
batch_size: NSUInteger,
synchronous: bool,
) -> bool
👎 Deprecated. Available on crate features MLCDevice, MLCTensor and MLCTensorData only.
Associates data with input tensors. If the device is a GPU, also copies the data to device memory. Returns true if the data is successfully associated with the input tensors.
This function should be used if you execute the forward, gradient and optimizer updates independently. Before the forward pass is executed, the inputs should be written to device memory. Similarly, before the gradient pass is executed, the inputs (typically the initial gradient tensor) should be written to device memory. The caller must guarantee the lifetime of the underlying memory of each value of inputsData for the entirety of each corresponding input tensor’s lifetime.
Parameter inputsData: The input data to use to write to device memory
Parameter inputTensors: The list of tensors to perform writes on
Parameter device: The device
Parameter batchSize: The batch size. This should be set to the actual batch size that may be used when we execute the graph and can be a value less than or equal to the batch size specified in the tensor. If set to 0, we use the batch size specified in the tensor.
Parameter synchronous: Whether to execute the copy to the device synchronously. For performance, asynchronous execution is recommended.
Returns: A Boolean value indicating whether the data is successfully associated with the tensor.
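A minimal sketch of staging inputs before running a forward pass independently, assuming the dictionaries and device are built elsewhere; crate paths are assumptions.
use objc2_foundation::{NSDictionary, NSString};
use objc2_ml_compute::{MLCDevice, MLCGraph, MLCTensor, MLCTensorData};

unsafe fn stage_inputs(
    graph: &MLCGraph,
    inputs_data: &NSDictionary<NSString, MLCTensorData>,
    input_tensors: &NSDictionary<NSString, MLCTensor>,
    device: &MLCDevice,
) -> bool {
    // batch_size 0 means "use the batch size recorded in the tensor";
    // synchronous = false is the recommended, faster path, but the memory
    // behind inputs_data must outlive the corresponding input tensors.
    graph.bindAndWriteData_forInputs_toDevice_batchSize_synchronous(
        inputs_data,
        input_tensors,
        device,
        0,
        false,
    )
}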
pub unsafe fn bindAndWriteData_forInputs_toDevice_synchronous(
&self,
inputs_data: &NSDictionary<NSString, MLCTensorData>,
input_tensors: &NSDictionary<NSString, MLCTensor>,
device: &MLCDevice,
synchronous: bool,
) -> bool
👎 Deprecated. Available on crate features MLCDevice, MLCTensor and MLCTensorData only.
Associates data with input tensors. If the device is a GPU, also copies the data to device memory. Returns true if the data is successfully associated with the input tensors.
This function should be used if you execute the forward, gradient and optimizer updates independently. Before the forward pass is executed, the inputs should be written to device memory. Similarly, before the gradient pass is executed, the inputs (typically the initial gradient tensor) should be written to device memory. The caller must guarantee the lifetime of the underlying memory of each value of inputsData for the entirety of each corresponding input tensor’s lifetime.
Parameter inputsData: The input data to use to write to device memory
Parameter inputTensors: The list of tensors to perform writes on
Parameter device: The device
Parameter synchronous: Whether to execute the copy to the device synchronously. For performance, asynchronous execution is recommended.
Returns: A Boolean value indicating whether the data is successfully associated with the tensor.
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎 Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.
See Apple’s documentation on mutability and on isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}
Trait Implementations§
impl AsRef<AnyObject> for MLCInferenceGraph
impl AsRef<MLCGraph> for MLCInferenceGraph
impl AsRef<MLCInferenceGraph> for MLCInferenceGraph
impl AsRef<NSObject> for MLCInferenceGraph
impl Borrow<AnyObject> for MLCInferenceGraph
impl Borrow<MLCGraph> for MLCInferenceGraph
impl Borrow<NSObject> for MLCInferenceGraph
impl ClassType for MLCInferenceGraph
const NAME: &'static str = "MLCInferenceGraph"
type ThreadKind = <<MLCInferenceGraph as ClassType>::Super as ClassType>::ThreadKind
impl Debug for MLCInferenceGraph
impl Deref for MLCInferenceGraph
impl Hash for MLCInferenceGraph
impl Message for MLCInferenceGraph
impl NSObjectProtocol for MLCInferenceGraph
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎 Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref