Crate objc2_metal_performance_shaders

Bindings to the MetalPerformanceShaders framework

See Apple’s docs and the general docs on framework crates for more information.

Structs

MPSAccelerationStructureDeprecatedMPSAccelerationStructure and MPSRayIntersector and MPSCore and MPSKernel
A data structure built over geometry used to accelerate ray tracing
MPSAccelerationStructureGroupDeprecatedMPSAccelerationStructureGroup and MPSRayIntersector
A group of acceleration structures which may be used together in an instance acceleration structure.
MPSAccelerationStructureStatusDeprecatedMPSAccelerationStructure and MPSRayIntersector
Possible values of the acceleration structure status property
MPSAccelerationStructureUsageDeprecatedMPSAccelerationStructure and MPSRayIntersector
Options describing how an acceleration structure will be used
MPSAliasingStrategyMPSCoreTypes and MPSCore
Apple’s documentation
MPSAlphaTypeMPSImageTypes and MPSImage
Apple’s documentation
MPSBinaryImageKernelMPSImageKernel and MPSImage and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSBoundingBoxIntersectionTestTypeDeprecatedMPSRayIntersector
Options for the MPSRayIntersector bounding box intersection test type property
MPSCNNAddMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNAddGradientMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNArithmeticMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNArithmeticGradientMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNArithmeticGradientStateMPSCNNMath and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Dependencies: This depends on Metal.framework.
MPSCNNBatchNormalizationMPSCNNBatchNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNBatchNormalizationFlagsMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSCNNBatchNormalizationGradientMPSCNNBatchNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNBatchNormalizationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing batch normalization gradient for training
MPSCNNBatchNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing batch normalization for inference or training
MPSCNNBatchNormalizationStateMPSCNNBatchNormalization and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
MPSCNNBatchNormalizationState encapsulates the data necessary to execute batch normalization.
MPSCNNBatchNormalizationStatisticsMPSCNNBatchNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNBatchNormalizationStatisticsGradientMPSCNNBatchNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNBinaryConvolutionMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNBinaryConvolutionFlagsMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSCNNBinaryConvolutionNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNFilterNode representing a MPSCNNBinaryConvolution kernel
MPSCNNBinaryConvolutionTypeMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSCNNBinaryFullyConnectedMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNBinaryFullyConnectedNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNFilterNode representing a MPSCNNBinaryFullyConnected kernel
MPSCNNBinaryKernelMPSCNNKernel and MPSNeuralNetwork and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNConvolutionMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNConvolutionDescriptorMPSCNNConvolution and MPSNeuralNetwork
Dependencies: This depends on Metal.framework
MPSCNNConvolutionFlagsMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSCNNConvolutionGradientMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNConvolutionGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNConvolutionGradientOptionMPSCNNConvolution and MPSNeuralNetwork
Apple’s documentation
MPSCNNConvolutionGradientStateMPSCNNConvolution and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
The MPSCNNConvolutionGradientState is returned by the resultStateForSourceImage:sourceStates method on an MPSCNNConvolution object. Note that resultStateForSourceImage:sourceStates:destinationImage creates the object on an autoreleasepool. It will be consumed by MPSCNNConvolutionGradient. It is also used by the MPSCNNConvolutionTranspose encode call that returns an MPSImage on the left hand side, to correctly size the destination. Note that state objects are not usable across batches, i.e. when a batch is done you should discard the state object and create a new one for the next batch.
MPSCNNConvolutionGradientStateNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNConvolutionNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNFilterNode representing a MPSCNNConvolution kernel
MPSCNNConvolutionTransposeMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNConvolutionTransposeGradientMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNConvolutionTransposeGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNConvolutionTransposeGradientStateMPSCNNConvolution and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
The MPSCNNConvolutionTransposeGradientState is returned by the resultStateForSourceImage:sourceStates method on an MPSCNNConvolutionTranspose object. Note that resultStateForSourceImage:sourceStates:destinationImage creates the object on an autoreleasepool. It will be consumed by MPSCNNConvolutionTransposeGradient. It contains a reference to the MPSCNNConvolutionGradientState object that connects an MPSCNNConvolution and its corresponding MPSCNNConvolutionTranspose in the forward pass of an autoencoder. In an autoencoder forward pass, MPSCNNConvolutionGradientState is produced by the MPSCNNConvolution object and is used by the corresponding MPSCNNConvolutionTranspose of the forward pass that “undoes” the corresponding MPSCNNConvolution. It is used to correctly size the destination image that is returned on the left hand side by the MPSCNNConvolutionTranspose encode call, as well as to automatically set kernelOffsetX/Y on MPSCNNConvolutionTranspose using the offset and other properties of the corresponding MPSCNNConvolution object. During training, the same MPSCNNConvolutionGradientState object will be consumed by the MPSCNNConvolutionGradient object, and the MPSCNNConvolutionTransposeGradientState produced by MPSCNNConvolutionTranspose’s resultStateForSourceImage:sourceStates:destinationImage will be consumed by the MPSCNNConvolutionTransposeGradient object.
MPSCNNConvolutionTransposeGradientStateNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNConvolutionTransposeNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNFilterNode representing a MPSCNNConvolutionTranspose kernel
MPSCNNConvolutionWeightsAndBiasesStateMPSCNNConvolution and MPSNeuralNetwork and MPSCore and MPSState
The MPSCNNConvolutionWeightsAndBiasesState is returned by the exportWeightsAndBiasesWithCommandBuffer: method on an MPSCNNConvolution object. It is mainly used for the GPU-side weights/biases update process. During training, the application can keep a copy of the weights, velocity, and momentum MTLBuffers in its data source, update the weights (in place or out of place) with gradients obtained from MPSCNNConvolutionGradientState, and call [MPSCNNConvolution reloadWeightsAndBiasesWithCommandBuffer] with the resulting updated MTLBuffer. If the application does not want to keep a copy of the weights/biases, it can call [MPSCNNConvolution exportWeightsAndBiasesWithCommandBuffer:] to get the current weights from the convolution itself, perform the update, and call reloadWeightsAndBiasesWithCommandBuffer.
MPSCNNConvolutionWeightsLayoutMPSCNNConvolution and MPSNeuralNetwork
Apple’s documentation
MPSCNNCrossChannelNormalizationMPSCNNNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNCrossChannelNormalizationGradientMPSCNNNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNCrossChannelNormalizationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNCrossChannelNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing MPSCNNCrossChannelNormalization
MPSCNNDepthWiseConvolutionDescriptorMPSCNNConvolution and MPSNeuralNetwork
MPSCNNDepthWiseConvolutionDescriptor can be used to create MPSCNNConvolution object that does depthwise convolution
MPSCNNDilatedPoolingMaxMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNDilatedPoolingMaxGradientMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNDilatedPoolingMaxGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNDilatedPoolingMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
A node for a MPSCNNDilatedPooling kernel
MPSCNNDivideMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNDropoutMPSCNNDropout and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNDropoutGradientMPSCNNDropout and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNDropoutGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNDropoutGradientStateMPSCNNDropout and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Dependencies: This depends on Metal.framework.
MPSCNNDropoutNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNFullyConnectedMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNFullyConnectedGradientMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNFullyConnectedGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNFullyConnectedNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNFilterNode representing a MPSCNNFullyConnected kernel
MPSCNNGradientKernelMPSCNNKernel and MPSNeuralNetwork and MPSCore and MPSKernel
Gradient kernels are the backwards pass of a MPSCNNKernel, used during training to calculate gradient back propagation. These take as arguments the gradient result from the next filter and the source image for the forward version of the filter. There is also a MPSNNGradientState passed from MPSCNNKernel to MPSCNNGradientKernel that contains information about the MPSCNNKernel parameters at the time it was encoded, and possibly also additional MTLResources to enable it to do its job.
MPSCNNGroupNormalizationMPSCNNGroupNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNGroupNormalizationGradientMPSCNNGroupNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNGroupNormalizationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNGroupNormalizationGradientStateMPSCNNGroupNormalization and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Dependencies: This depends on Metal.framework
MPSCNNGroupNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNInstanceNormalizationMPSCNNInstanceNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNInstanceNormalizationGradientMPSCNNInstanceNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNInstanceNormalizationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNInstanceNormalizationGradientStateMPSCNNInstanceNormalization and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Dependencies: This depends on Metal.framework
MPSCNNInstanceNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNKernelMPSCNNKernel and MPSNeuralNetwork and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNLocalContrastNormalizationMPSCNNNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNLocalContrastNormalizationGradientMPSCNNNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNLocalContrastNormalizationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNLocalContrastNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing MPSCNNLocalContrastNormalization
MPSCNNLogSoftMaxMPSCNNSoftMax and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNLogSoftMaxGradientMPSCNNSoftMax and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNLogSoftMaxGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNLogSoftMaxGradient kernel
MPSCNNLogSoftMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNLogSoftMax kernel
MPSCNNLossMPSCNNLoss and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNLossDataDescriptorMPSCNNLoss and MPSNeuralNetwork
Dependencies: This depends on Metal.framework.
MPSCNNLossDescriptorMPSCNNLoss and MPSNeuralNetwork
Dependencies: This depends on Metal.framework.
MPSCNNLossLabelsMPSCNNLoss and MPSNeuralNetwork and MPSCore and MPSState
Dependencies: This depends on Metal.framework.
MPSCNNLossNodeMPSNNGraphNodes and MPSNeuralNetwork
This node calculates loss information during training, typically immediately after the inference portion of network evaluation is performed. The result image of the loss operations is typically the first gradient image to be consumed by the gradient passes that work their way back up the graph. In addition, the node will update the loss image in the MPSNNLabels with the desired estimate of correctness.
MPSCNNLossTypeMPSCNNTypes and MPSNeuralNetwork
Apple’s documentation
MPSCNNMultiaryKernelMPSCNNKernel and MPSNeuralNetwork and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNMultiplyMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNMultiplyGradientMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNNeuronMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronAbsoluteMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronAbsoluteNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronAbsolute kernel
MPSCNNNeuronELUMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronELUNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronELU kernel
MPSCNNNeuronExponentialMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNNeuronExponentialNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronExponential kernel
MPSCNNNeuronGeLUNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronGeLU kernel
MPSCNNNeuronGradientMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronGradient
MPSCNNNeuronHardSigmoidMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronHardSigmoidNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronHardSigmoid kernel
MPSCNNNeuronLinearMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronLinearNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronLinear kernel
MPSCNNNeuronLogarithmMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNNeuronLogarithmNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronLogarithm kernel
MPSCNNNeuronNodeMPSNNGraphNodes and MPSNeuralNetwork
virtual base class for MPSCNNNeuron nodes
MPSCNNNeuronPReLUMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronPReLUNodeMPSNNGraphNodes and MPSNeuralNetwork
A ReLU node with parameter a provided independently for each feature channel
MPSCNNNeuronPowerMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNNeuronPowerNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronPower kernel
MPSCNNNeuronReLUMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronReLUNMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronReLUNNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronReLUN kernel
MPSCNNNeuronReLUNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronReLU kernel
MPSCNNNeuronSigmoidMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronSigmoidNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronSigmoid kernel
MPSCNNNeuronSoftPlusMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronSoftPlusNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronSoftPlus kernel
MPSCNNNeuronSoftSignMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronSoftSignNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronSoftSign kernel
MPSCNNNeuronTanHMPSCNNNeuron and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNNeuronTanHNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNNeuronTanH kernel
MPSCNNNeuronTypeMPSCNNNeuronType and MPSNeuralNetwork
Apple’s documentation
MPSCNNNormalizationGammaAndBetaStateMPSCNNNormalizationWeights and MPSNeuralNetwork and MPSCore and MPSState
A state which contains gamma and beta terms used to apply a scale and bias in either an MPSCNNInstanceNormalization or MPSCNNBatchNormalization operation.
MPSCNNNormalizationMeanAndVarianceStateMPSCNNBatchNormalization and MPSNeuralNetwork and MPSCore and MPSState
A state which contains mean and variance terms used to apply a normalization in a MPSCNNBatchNormalization operation.
MPSCNNNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
virtual base class for CNN normalization nodes
MPSCNNPoolingMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingAverageMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingAverageGradientMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingAverageGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNPoolingAverageNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNPoolingAverage kernel
MPSCNNPoolingGradientMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNPoolingL2NormMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingL2NormGradientMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingL2NormGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNPoolingL2NormNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNPoolingL2Norm kernel
MPSCNNPoolingMaxMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingMaxGradientMPSCNNPooling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNPoolingMaxGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNPoolingMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
A node representing a MPSCNNPoolingMax kernel
MPSCNNPoolingNodeMPSNNGraphNodes and MPSNeuralNetwork
A node for a MPSCNNPooling kernel
MPSCNNReductionTypeMPSCNNTypes and MPSNeuralNetwork
Apple’s documentation
MPSCNNSoftMaxMPSCNNSoftMax and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNSoftMaxGradientMPSCNNSoftMax and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNSoftMaxGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNSoftMaxGradient kernel
MPSCNNSoftMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNSoftMax kernel
MPSCNNSpatialNormalizationMPSCNNNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNSpatialNormalizationGradientMPSCNNNormalization and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNSpatialNormalizationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSCNNSpatialNormalizationNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing MPSCNNSpatialNormalization
MPSCNNSubPixelConvolutionDescriptorMPSCNNConvolution and MPSNeuralNetwork
MPSCNNSubPixelConvolutionDescriptor can be used to create an MPSCNNConvolution object that performs the sub-pixel upsampling and reshaping operation described in http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf
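As an illustration of the reshaping step referenced above, here is a CPU-only sketch of the sub-pixel (depth-to-space) rearrangement on a plain buffer. The function name and the [channels, height, width] layout are hypothetical; this only shows the math the descriptor configures, not how the MPS kernel itself is driven.

```rust
// Sub-pixel reshuffle: an input of shape (C * r * r, H, W) becomes (C, H * r, W * r).
fn pixel_shuffle(src: &[f32], c: usize, h: usize, w: usize, r: usize) -> Vec<f32> {
    let mut dst = vec![0.0f32; c * h * r * w * r];
    for oc in 0..c {
        for oy in 0..h * r {
            for ox in 0..w * r {
                let (iy, ry) = (oy / r, oy % r);
                let (ix, rx) = (ox / r, ox % r);
                let ic = oc * r * r + ry * r + rx; // source feature channel
                let src_idx = (ic * h + iy) * w + ix;
                let dst_idx = (oc * h * r + oy) * (w * r) + ox;
                dst[dst_idx] = src[src_idx];
            }
        }
    }
    dst
}
```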
MPSCNNSubtractMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNSubtractGradientMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNUpsamplingMPSCNNUpsampling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNUpsamplingBilinearMPSCNNUpsampling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNUpsamplingBilinearGradientMPSCNNUpsampling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNUpsamplingBilinearGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNUpsamplingBilinear kernel
MPSCNNUpsamplingBilinearNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNUpsamplingBilinear kernel
MPSCNNUpsamplingGradientMPSCNNUpsampling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSCNNUpsamplingNearestMPSCNNUpsampling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNUpsamplingNearestGradientMPSCNNUpsampling and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSCNNUpsamplingNearestGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNUpsamplingNearest kernel
MPSCNNUpsamplingNearestNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSCNNUpsamplingNearest kernel
MPSCNNWeightsQuantizationTypeMPSCNNConvolution and MPSNeuralNetwork
Apple’s documentation
MPSCNNYOLOLossMPSCNNLoss and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSCNNYOLOLossDescriptorMPSCNNLoss and MPSNeuralNetwork
Dependencies: This depends on Metal.framework.
MPSCNNYOLOLossNodeMPSNNGraphNodes and MPSNeuralNetwork
This node calculates loss information during training, typically immediately after the inference portion of network evaluation is performed. The result image of the loss operations is typically the first gradient image to be consumed by the gradient passes that work their way back up the graph. In addition, the node will update the loss image in the MPSNNLabels with the desired estimate of correctness.
MPSCommandBufferMPSCommandBuffer and MPSCore
Dependencies: This depends on Metal.framework
MPSCustomKernelArgumentCountMPSKernelTypes and MPSCore
Apple’s documentation
MPSCustomKernelIndexMPSKernelTypes and MPSCore
Apple’s documentation
MPSDataLayoutMPSImage and MPSCore
Apple’s documentation
MPSDataTypeMPSCoreTypes and MPSCore
Apple’s documentation
MPSDeviceCapsValuesMPSKernelTypes and MPSCore
Apple’s documentation
MPSDeviceOptions
Apple’s documentation
MPSDimensionSliceMPSCoreTypes and MPSCore
Describes a sub-region of an array dimension
MPSFloatDataTypeBitMPSCoreTypes and MPSCore
Apple’s documentation
MPSFloatDataTypeShiftMPSCoreTypes and MPSCore
Apple’s documentation
MPSGRUDescriptorMPSRNNLayer and MPSNeuralNetwork
Dependencies: This depends on Metal.framework
MPSImageMPSImage and MPSCore
Dependencies: This depends on Metal.framework
MPSImageAddMPSImageMath and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Dependencies: This depends on Metal.framework.
MPSImageAreaMaxMPSImageMorphology and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageAreaMax kernel finds the maximum pixel value in a rectangular region centered around each pixel in the source image. If there are multiple channels in the source image, each channel is processed independently. The edgeMode property is assumed to always be MPSImageEdgeModeClamp for this filter.
MPSImageAreaMinMPSImageMorphology and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageAreaMin finds the minimum pixel value in a rectangular region centered around each pixel in the source image. If there are multiple channels in the source image, each channel is processed independently. It has the same methods as MPSImageAreaMax. The edgeMode property is assumed to always be MPSImageEdgeModeClamp for this filter.
MPSImageArithmeticMPSImageMath and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Dependencies: This depends on Metal.framework.
MPSImageBilinearScaleMPSImageResampling and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Resize an image and / or change its aspect ratio
MPSImageBoxMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageBox convolves an image with a given filter of odd width and height. The kernel elements all have equal weight, achieving a blur effect. (Each result is the unweighted average of the surrounding pixels.) This allows for much faster algorithms, especially for larger blur radii. The box height and width must be odd numbers. The box blur is a separable filter. The implementation is aware of this and will act accordingly to give best performance for multi-dimensional blurs.
MPSImageCannyMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageCanny implements the Canny edge detection algorithm. When the color model of the source and destination textures match, the filter is applied to each channel separately. If the destination is monochrome but the source is multichannel, the source will be converted to grayscale using the linear gray color transform vector (v): Luminance = v[0] * pixel.x + v[1] * pixel.y + v[2] * pixel.z
MPSImageConversionMPSImageConversion and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageConversion filter performs a conversion from source to destination
MPSImageConvolutionMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageConvolution convolves an image with given filter of odd width and height. The center of the kernel aligns with the MPSImageConvolution.offset. That is, the position of the top left corner of the area covered by the kernel is given by MPSImageConvolution.offset - {kernel_width>>1, kernel_height>>1, 0}
MPSImageCoordinateMPSCoreTypes and MPSCore
An unsigned coordinate with x, y and channel components
MPSImageCopyToMatrixMPSImageCopy and MPSImage and MPSCore and MPSKernel
The MPSImageCopyToMatrix copies image data to a MPSMatrix. The image data is stored in a row of a matrix. The dataLayout specifies the order in which the feature channels in the MPSImage get stored in the matrix. If MPSImage stores a batch of images, the images are copied into multiple rows, one row per image.
MPSImageDescriptorMPSImage and MPSCore
Dependencies: This depends on Metal.framework
MPSImageDilateMPSImageMorphology and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageDilate finds the maximum pixel value in a rectangular region centered around each pixel in the source image. It is like the MPSImageAreaMax, except that the intensity at each position is calculated relative to a different value before determining which is the maximum pixel value, allowing for shaped, non-rectangular morphological probes.
MPSImageDivideMPSImageMath and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Dependencies: This depends on Metal.framework.
MPSImageEDLinesMPSImageEDLines and MPSImage and MPSCore and MPSKernel
The MPSImageEDLines class implements the EDLines line segment detection algorithm using edge drawing (ED), described here: https://ieeexplore.ieee.org/document/6116138
MPSImageEdgeModeMPSCoreTypes and MPSCore
Apple’s documentation
MPSImageErodeMPSImageMorphology and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageErode filter finds the minimum pixel value in a rectangular region centered around each pixel in the source image. It is like the MPSImageAreaMin, except that the intensity at each position is calculated relative to a different value before determining which is the minimum pixel value, allowing for shaped, non-rectangular morphological probes.
MPSImageEuclideanDistanceTransformMPSImageDistanceTransform and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Perform a Euclidean Distance Transform
MPSImageFeatureChannelFormatMPSCoreTypes and MPSCore
Apple’s documentation
MPSImageFindKeypointsMPSImageKeypoint and MPSImage and MPSCore and MPSKernel
The MPSImageFindKeypoints kernel is used to find a list of keypoints whose values are >= minimumPixelThresholdValue in MPSImageKeypointRangeInfo. The keypoints are generated for a specified region in the image. The pixel format of the source image must be MTLPixelFormatR8Unorm.
MPSImageGaussianBlurMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageGaussianBlur convolves an image with gaussian of given sigma in both x and y direction.
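A rough sketch of what driving this kernel through the bindings might look like. The method names below are assumptions based on objc2’s usual selector mapping (`initWithDevice:sigma:` becoming `initWithDevice_sigma`, and the `encodeToCommandBuffer:sourceTexture:destinationTexture:` method inherited from MPSUnaryImageKernel); the exact generated signatures, required traits for `alloc()`, and the Cargo features to enable should be checked against the crate’s generated docs.

```rust
// Hedged sketch: encode an MPSImageGaussianBlur between two Metal textures.
// Names below follow the objc2 selector mapping and are not verified here.
use objc2::runtime::ProtocolObject;
use objc2_metal::{MTLCommandBuffer, MTLDevice, MTLTexture};
use objc2_metal_performance_shaders::MPSImageGaussianBlur;

fn blur(
    device: &ProtocolObject<dyn MTLDevice>,
    command_buffer: &ProtocolObject<dyn MTLCommandBuffer>,
    source: &ProtocolObject<dyn MTLTexture>,
    destination: &ProtocolObject<dyn MTLTexture>,
) {
    unsafe {
        // sigma = 2.0 is an arbitrary example value.
        let blur = MPSImageGaussianBlur::initWithDevice_sigma(
            MPSImageGaussianBlur::alloc(), // alloc/init pattern per objc2 convention
            device,
            2.0,
        );
        // Inherited from MPSUnaryImageKernel.
        blur.encodeToCommandBuffer_sourceTexture_destinationTexture(
            command_buffer,
            source,
            destination,
        );
    }
}
```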
MPSImageGaussianPyramidMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
A Gaussian image pyramid is constructed as follows: mipmap level zero is the source of the operation and is left untouched, and subsequent mipmap levels are constructed from it recursively.
MPSImageGuidedFilterMPSImageGuidedFilter and MPSImage and MPSCore and MPSKernel
Perform a Guided Filter to produce a coefficients image. The filter is broken into two stages.
MPSImageHistogramMPSImageHistogram and MPSImage and MPSCore and MPSKernel
The MPSImageHistogram computes the histogram of an image.
MPSImageHistogramEqualizationMPSImageHistogram and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageHistogramEqualization equalizes the histogram of an image. The process is divided into three steps.
MPSImageHistogramSpecificationMPSImageHistogram and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageHistogramSpecification performs a histogram specification operation on an image. It is a generalized version of the histogram equalization operation. The histogram specification filter converts the image so that its histogram matches the desired histogram.
MPSImageIntegralMPSImageIntegral and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageIntegral calculates the sum of pixels over a specified region in the image. The value at each position is the sum of all pixels in a source image rectangle (sumRect).
MPSImageIntegralOfSquaresMPSImageIntegral and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageIntegralOfSquares calculates the sum of squared pixels over a specified region in the image. The value at each position is the sum of all squared pixels in a source image rectangle (sumRect).
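To clarify what the two integral kernels above compute, here is a minimal CPU sketch of a summed-area table. It only illustrates the math, not the MPS API; layout and names are illustrative.

```rust
// Each output element is the sum of all source pixels above and to the left of it
// (inclusive). The sum over any axis-aligned rectangle can then be recovered from
// four lookups into this table.
fn integral_image(src: &[f32], width: usize, height: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; width * height];
    for y in 0..height {
        let mut row_sum = 0.0f32;
        for x in 0..width {
            row_sum += src[y * width + x];
            let above = if y > 0 { out[(y - 1) * width + x] } else { 0.0 };
            out[y * width + x] = row_sum + above;
        }
    }
    out
}
```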
MPSImageKeypointRangeInfoMPSImageKeypoint and MPSImage
Specifies information to find the keypoints in an image.
MPSImageLanczosScaleMPSImageResampling and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Resize an image and / or change its aspect ratio
MPSImageLaplacianMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageLaplacian is an optimized variant of the MPSImageConvolution filter provided primarily for ease of use. This filter uses an optimized convolution filter with a 3 x 3 kernel with the following weights: [ 0 1 0 ; 1 -4 1 ; 0 1 0 ]
MPSImageLaplacianPyramidMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Laplacian pyramid levels are constructed as difference between the current source level and 2x interpolated version of the half-resolution source level immediately above it.
MPSImageLaplacianPyramidAddMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Apple’s documentation
MPSImageLaplacianPyramidSubtractMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Apple’s documentation
MPSImageMedianMPSImageMedian and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageMedian applies a median filter to an image. A median filter finds the median color value for each channel within a kernelDiameter x kernelDiameter window surrounding the pixel of interest. It is a common means of noise reduction and is also used as a smoothing filter with edge-preserving qualities.
MPSImageMultiplyMPSImageMath and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Dependencies: This depends on Metal.framework.
MPSImageNormalizedHistogramMPSImageHistogram and MPSImage and MPSCore and MPSKernel
The MPSImageNormalizedHistogram computes the normalized histogram of an image. The minimum and maximum pixel values for a given region of an image are first computed. The max(computed minimum pixel value, MPSImageHistogramInfo.minPixelValue) and the min(computed maximum pixel value, MPSImageHistogramInfo.maxPixelValue) are used to compute the normalized histogram.
MPSImagePyramidMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImagePyramid is a base class for creating different kinds of pyramid images
MPSImageReadWriteParamsMPSImage and MPSCore
These parameters are passed in to allow the user to read/write to a particular set of featureChannels in an MPSImage
MPSImageReduceColumnMaxMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceColumnMax performs a reduction operation returning the maximum value for each column of an image
MPSImageReduceColumnMeanMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceColumnMean performs a reduction operation returning the mean value for each column of an image
MPSImageReduceColumnMinMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceColumnMin performs a reduction operation returning the minimum value for each column of an image
MPSImageReduceColumnSumMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceColumnSum performs a reduction operation returning the sum for each column of an image
MPSImageReduceRowMaxMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceRowMax performs a reduction operation returning the maximum value for each row of an image
MPSImageReduceRowMeanMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceRowMean performs a reduction operation returning the mean value for each row of an image
MPSImageReduceRowMinMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceRowMin performs a reduction operation returning the minimum value for each row of an image
MPSImageReduceRowSumMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduceRowSum performs a reduction operation returning the sum for each row of an image
MPSImageReduceUnaryMPSImageReduce and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageReduce performs a reduction operation. The supported reduction operations are the row and column max, mean, min, and sum reductions listed above.
MPSImageRegionMPSCoreTypes and MPSCore
A rectangular subregion of a MPSImage
MPSImageScaleMPSImageResampling and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Resize an image and / or change its aspect ratio
MPSImageSobelMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageSobel implements the Sobel filter. When the color model (e.g. RGB, two-channel, grayscale, etc.) of source and destination textures match, the filter is applied to each channel separately. If the destination is monochrome (single channel) but the source is multichannel, the pixel values are converted to grayscale before applying the Sobel operator using the linear gray color transform vector (v).
MPSImageStatisticsMeanMPSImageStatistics and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageStatisticsMean computes the mean for a given region of an image.
MPSImageStatisticsMeanAndVarianceMPSImageStatistics and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageStatisticsMeanAndVariance computes the mean and variance for a given region of an image. The mean and variance values are written to fixed pixel locations in the destination image.
MPSImageStatisticsMinAndMaxMPSImageStatistics and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageStatisticsMinAndMax computes the minimum and maximum pixel values for a given region of an image. The min and max values are written to fixed pixel locations in the destination image.
MPSImageSubtractMPSImageMath and MPSImage and MPSCore and MPSImageKernel and MPSKernel
Dependencies: This depends on Metal.framework.
MPSImageTentMPSImageConvolution and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The box filter, while fast, may yield square-ish looking blur effects. However, multiple passes of the box filter tend to smooth out with each additional pass. For example, two 3-wide box blurs produce the same effective convolution as a 5-wide tent blur.
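The box/tent relationship mentioned above can be checked directly by convolving the 3-wide box kernel with itself; this is plain math, independent of the MPS API.

```rust
// Convolving a 3-wide box kernel with itself yields a 5-wide tent kernel,
// which is why two box-blur passes look like a single tent blur.
fn convolve(a: &[f32], b: &[f32]) -> Vec<f32> {
    let mut out = vec![0.0f32; a.len() + b.len() - 1];
    for (i, &x) in a.iter().enumerate() {
        for (j, &y) in b.iter().enumerate() {
            out[i + j] += x * y;
        }
    }
    out
}

fn main() {
    let box3 = [1.0f32 / 3.0; 3];
    // Prints the tent weights [1, 2, 3, 2, 1] / 9.
    println!("{:?}", convolve(&box3, &box3));
}
```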
MPSImageThresholdBinaryMPSImageThreshold and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageThresholdBinary filter applies a fixed-level threshold to each pixel in the image. The threshold functions convert a single channel image to a binary image. If the input image is not a single channel image, convert the input image to a single channel luminance image using the linearGrayColorTransform and then apply the threshold. The ThresholdBinary function is: destinationPixelValue = sourcePixelValue > thresholdValue ? maximumValue : 0
MPSImageThresholdBinaryInverseMPSImageThreshold and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageThresholdBinaryInverse filter applies a fixed-level threshold to each pixel in the image. The threshold functions convert a single channel image to a binary image. If the input image is not a single channel image, convert the input image to a single channel luminance image using the linearGrayColorTransform and then apply the threshold. The ThresholdBinaryInverse function is: destinationPixelValue = sourcePixelValue > thresholdValue ? 0 : maximumValue
MPSImageThresholdToZeroMPSImageThreshold and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageThresholdToZero filter applies a fixed-level threshold to each pixel in the image. The threshold functions convert a single channel image to a binary image. If the input image is not a single channel image, convert the input image to a single channel luminance image using the linearGrayColorTransform and then apply the threshold. The ThresholdToZero function is: destinationPixelValue = sourcePixelValue > thresholdValue ? sourcePixelValue : 0
MPSImageThresholdToZeroInverseMPSImageThreshold and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageThresholdToZeroInverse filter applies a fixed-level threshold to each pixel in the image. The threshold functions convert a single channel image to a binary image. If the input image is not a single channel image, convert the input image to a single channel luminance image using the linearGrayColorTransform and then apply the threshold. The ThresholdToZeroInverse function is: destinationPixelValue = sourcePixelValue > thresholdValue ? 0 : sourcePixelValue
MPSImageThresholdTruncateMPSImageThreshold and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageThresholdTruncate filter applies a fixed-level threshold to each pixel in the image. The threshold functions convert a single channel image to a binary image. If the input image is not a single channel image, convert the input image to a single channel luminance image using the linearGrayColorTransform and then apply the threshold. The ThresholdTruncate function is: destinationPixelValue = sourcePixelValue > thresholdValue ? thresholdValue : sourcePixelValue
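The five threshold filters listed above differ only in what they write for pixels above and below thresholdValue. A CPU illustration of the documented per-pixel formulas follows; parameter names here are illustrative, corresponding to the kernels’ thresholdValue and maximumValue properties.

```rust
// Per-pixel forms of the five threshold filters, as documented above.
fn threshold_binary(p: f32, threshold: f32, maximum: f32) -> f32 {
    if p > threshold { maximum } else { 0.0 }
}
fn threshold_binary_inverse(p: f32, threshold: f32, maximum: f32) -> f32 {
    if p > threshold { 0.0 } else { maximum }
}
fn threshold_to_zero(p: f32, threshold: f32) -> f32 {
    if p > threshold { p } else { 0.0 }
}
fn threshold_to_zero_inverse(p: f32, threshold: f32) -> f32 {
    if p > threshold { 0.0 } else { p }
}
fn threshold_truncate(p: f32, threshold: f32) -> f32 {
    if p > threshold { threshold } else { p }
}
```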
MPSImageTransposeMPSImageTranspose and MPSImage and MPSCore and MPSImageKernel and MPSKernel
The MPSImageTranspose transposes an image
MPSImageTypeMPSKernelTypes and MPSCore
Apple’s documentation
MPSInstanceAccelerationStructureDeprecatedMPSInstanceAccelerationStructure and MPSRayIntersector and MPSAccelerationStructure and MPSCore and MPSKernel
An acceleration structure built over instances of other acceleration structures
MPSIntegerDivisionParamsMPSKernelTypes and MPSCore
Apple’s documentation
MPSIntersectionDataTypeMPSRayIntersector
Intersection data type options
MPSIntersectionDistanceMPSRayIntersectorTypes and MPSRayIntersector
Returned intersection result which contains the distance from the ray origin to the intersection point
MPSIntersectionDistancePrimitiveIndexMPSRayIntersectorTypes and MPSRayIntersector
Intersection result which contains the distance from the ray origin to the intersection point and the index of the intersected primitive
MPSIntersectionDistancePrimitiveIndexBufferIndexMPSRayIntersectorTypes and MPSRayIntersector
Intersection result which contains the distance from the ray origin to the intersection point, the index of the intersected primitive, and the polygon buffer index of the intersected primitive.
MPSIntersectionDistancePrimitiveIndexBufferIndexInstanceIndexMPSRayIntersectorTypes and MPSRayIntersector
Intersection result which contains the distance from the ray origin to the intersection point, the index of the intersected primitive, the polygon buffer index of the intersected primitive, and the index of the intersected instance.
MPSIntersectionDistancePrimitiveIndexInstanceIndexMPSRayIntersectorTypes and MPSRayIntersector
Intersection result which contains the distance from the ray origin to the intersection point, the index of the intersected primitive, and the index of the intersected instance.
MPSIntersectionTypeMPSRayIntersector
Options for the MPSRayIntersector intersection type property
MPSKernelMPSKernel and MPSCore
Dependencies: This depends on Metal.framework
MPSKernelOptionsMPSCoreTypes and MPSCore
Apple’s documentation
MPSKeyedUnarchiverMPSKeyedUnarchiver and MPSCore
A NSKeyedUnarchiver that supports the MPSDeviceProvider protocol for MPSKernel decoding
MPSLSTMDescriptorMPSRNNLayer and MPSNeuralNetwork
Dependencies: This depends on Metal.framework
MPSMatrixMPSMatrix and MPSCore
Dependencies: This depends on Metal.framework
MPSMatrixBatchNormalizationMPSMatrixBatchNormalization and MPSNeuralNetwork and MPSCore and MPSKernel and MPSMatrix and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixBatchNormalizationGradientMPSMatrixBatchNormalization and MPSNeuralNetwork and MPSCore and MPSKernel and MPSMatrix and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixBinaryKernelMPSMatrixTypes and MPSMatrix and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSMatrixCopyMPSMatrixCombination and MPSMatrix and MPSCore and MPSKernel
Apple’s documentation
MPSMatrixCopyDescriptorMPSMatrixCombination and MPSMatrix
A list of copy operations
MPSMatrixCopyOffsetsMPSMatrixCombination and MPSMatrix
A description of each copy operation
MPSMatrixCopyToImageMPSImageCopy and MPSImage and MPSCore and MPSKernel
The MPSMatrixCopyToImage copies matrix data to a MPSImage. The operation is the reverse of MPSImageCopyToMatrix.
MPSMatrixDecompositionCholeskyMPSMatrixDecomposition and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixDecompositionLUMPSMatrixDecomposition and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixDecompositionStatusMPSMatrixDecomposition and MPSMatrix
Apple’s documentation
MPSMatrixDescriptorMPSMatrix and MPSCore
Dependencies: This depends on Metal.framework
MPSMatrixFindTopKMPSMatrixFindTopK and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixFullyConnectedMPSMatrixFullyConnected and MPSNeuralNetwork and MPSCore and MPSKernel and MPSMatrix and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixFullyConnectedGradientMPSMatrixFullyConnected and MPSNeuralNetwork and MPSCore and MPSKernel and MPSMatrix and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixLogSoftMaxMPSMatrixSoftMax and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixLogSoftMaxGradientMPSMatrixSoftMax and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixMultiplicationMPSMatrixMultiplication and MPSMatrix and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSMatrixNeuronMPSMatrixNeuron and MPSNeuralNetwork and MPSCore and MPSKernel and MPSMatrix and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixNeuronGradientMPSMatrixNeuron and MPSNeuralNetwork and MPSCore and MPSKernel and MPSMatrix and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixOffsetMPSKernelTypes and MPSCore
Specifies a row and column offset into an MPSMatrix.
MPSMatrixRandomMPSMatrixRandom and MPSMatrix and MPSCore and MPSKernel
Kernels that implement random number generation.
MPSMatrixRandomDistributionMPSMatrixRandom and MPSMatrix
Apple’s documentation
MPSMatrixRandomDistributionDescriptorMPSMatrixRandom and MPSMatrix
Dependencies: This depends on Metal.framework
MPSMatrixRandomMTGP32MPSMatrixRandom and MPSMatrix and MPSCore and MPSKernel
Generates random numbers using a Mersenne Twister algorithm suitable for GPU execution. It uses a period of 2**11214. For further details see: Mutsuo Saito. A Variant of Mersenne Twister Suitable for Graphic Processors. arXiv:1005.4973
MPSMatrixRandomPhiloxMPSMatrixRandom and MPSMatrix and MPSCore and MPSKernel
Generates random numbers using a counter based algorithm. For further details see: John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw. Parallel Random Numbers: As Easy as 1, 2, 3.
MPSMatrixSoftMaxMPSMatrixSoftMax and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixSoftMaxGradientMPSMatrixSoftMax and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixSolveCholeskyMPSMatrixSolve and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixSolveLUMPSMatrixSolve and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixSolveTriangularMPSMatrixSolve and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSMatrixSumMPSMatrixSum and MPSNeuralNetwork and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSMatrixUnaryKernelMPSMatrixTypes and MPSMatrix and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSMatrixVectorMultiplicationMPSMatrixMultiplication and MPSMatrix and MPSCore and MPSKernel and MPSMatrixTypes
Dependencies: This depends on Metal.framework.
MPSNDArrayMPSNDArray and MPSCore
A MPSNDArray object is a MTLBuffer based storage container for multi-dimensional data.
MPSNDArrayAffineInt4DequantizeMPSNDArrayQuantizedMatrixMultiplication and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayAffineQuantizationDescriptorMPSNDArrayQuantization and MPSNDArray
Dependencies: This depends on Metal.framework.
MPSNDArrayBinaryKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Apple’s documentation
MPSNDArrayBinaryPrimaryGradientKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayBinarySecondaryGradientKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayDescriptorMPSNDArray and MPSCore
Dependencies: This depends on Metal.framework
MPSNDArrayGatherMPSNDArrayGather and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayGatherGradientMPSNDArrayGather and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayGatherGradientStateMPSNDArrayGather and MPSNDArray and MPSCore and MPSNDArrayGradientState and MPSState
Records the properties of the kernel at the time an -encode call was made.
MPSNDArrayGradientStateMPSNDArrayGradientState and MPSNDArray and MPSCore and MPSState
Records the properties of the kernel at the time an -encode call was made. The contents are opaque.
MPSNDArrayIdentityMPSNDArrayIdentity and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayLUTDequantizeMPSNDArrayQuantizedMatrixMultiplication and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayLUTQuantizationDescriptorMPSNDArrayQuantization and MPSNDArray
Dependencies: This depends on Metal.framework.
MPSNDArrayMatrixMultiplicationMPSNDArrayMatrixMultiplication and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayMultiaryBaseMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Apple’s documentation
MPSNDArrayMultiaryGradientKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Apple’s documentation
MPSNDArrayMultiaryKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Apple’s documentation
MPSNDArrayOffsetsMPSNDArrayTypes and MPSNDArray
Apple’s documentation
MPSNDArrayQuantizationDescriptorMPSNDArrayQuantization and MPSNDArray
Dependencies: This depends on Metal.framework.
MPSNDArrayQuantizationSchemeMPSNDArrayQuantization and MPSNDArray
Apple’s documentation
MPSNDArrayQuantizedMatrixMultiplicationMPSNDArrayQuantizedMatrixMultiplication and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel and MPSNDArrayMatrixMultiplication
Dependencies: This depends on Metal.framework.
MPSNDArraySizesMPSNDArrayTypes and MPSNDArray
Apple’s documentation
MPSNDArrayStridedSliceMPSNDArrayStridedSlice and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayStridedSliceGradientMPSNDArrayStridedSlice and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNDArrayUnaryGradientKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Apple’s documentation
MPSNDArrayUnaryKernelMPSNDArrayKernel and MPSNDArray and MPSCore and MPSKernel
Apple’s documentation
MPSNDArrayVectorLUTDequantizeMPSNDArrayQuantizedMatrixMultiplication and MPSNDArray and MPSCore and MPSKernel and MPSNDArrayKernel
Dependencies: This depends on Metal.framework.
MPSNNAdditionGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
returns gradient for either primary or secondary source image from the inference pass. Use the isSecondarySourceFilter property to indicate whether this filter is computing the gradient for the primary or secondary source image from the inference pass.
MPSNNAdditionNodeMPSNNGraphNodes and MPSNeuralNetwork
returns elementwise sum of left + right
MPSNNArithmeticGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNArithmeticGradientStateNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNBilinearScaleNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNScale object that uses bilinear interpolation for resampling
MPSNNBinaryArithmeticNodeMPSNNGraphNodes and MPSNeuralNetwork
virtual base class for basic arithmetic nodes
MPSNNBinaryGradientStateMPSNNGradientState and MPSNeuralNetwork and MPSCore and MPSState
Records the properties of the kernel at the time an -encode call was made. The contents are opaque.
MPSNNBinaryGradientStateNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNCompareMPSCNNMath and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSNNComparisonNodeMPSNNGraphNodes and MPSNeuralNetwork
returns elementwise comparison of left and right
MPSNNComparisonTypeMPSCNNMath and MPSNeuralNetwork
Apple’s documentation
MPSNNConcatenationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNSlice filter that operates as the conjugate computation for concatenation operators during training
MPSNNConcatenationNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing the concatenation (in the feature channel dimension) of the results from one or more kernels
MPSNNConvolutionAccumulatorPrecisionOptionMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSNNCropAndResizeBilinearMPSNNResize and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNDefaultPaddingMPSNeuralNetworkTypes and MPSNeuralNetwork
This class provides some pre-rolled padding policies for common tasks
MPSNNDivisionNodeMPSNNGraphNodes and MPSNeuralNetwork
returns elementwise quotient of left / right
MPSNNFilterNodeMPSNNGraphNodes and MPSNeuralNetwork
A placeholder node denoting a neural network filter stage
MPSNNForwardLossMPSCNNLoss and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSNNForwardLossNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSNNForwardLoss kernel
MPSNNGradientFilterNodeMPSNNGraphNodes and MPSNeuralNetwork
For each MPSNNFilterNode, there is a corresponding MPSNNGradientFilterNode used for training that back-propagates image gradients to refine the various parameters in each node. Generally, it takes as input a gradient corresponding to the result image from the MPSNNFilterNode and returns a gradient image corresponding to the source image of the MPSNNFilterNode. In addition, there is generally a MPSNNState produced by the MPSNNFilterNode that is consumed by the MPSNNGradientNode, and the MPSNNGradientNode generally needs to look at the MPSNNFilterNode source image.
MPSNNGradientStateMPSNNGradientState and MPSNeuralNetwork and MPSCore and MPSState
Records the state of the kernel at the time an -encode call was made. The contents are opaque.
MPSNNGradientStateNodeMPSNNGraphNodes and MPSNeuralNetwork
During training, each MPSNNFilterNode has a corresponding MPSNNGradientFilterNode for the gradient computation for trainable parameter update. The two communicate through a MPSNNGradientStateNode or subclass which carries information about the inference pass settings to the gradient pass. You can avoid managing these – there will be many! – by using -[MPSNNFilterNode gradientFilterWithSources:] to make the MPSNNGradientFilterNodes. That method will append the necessary extra information like MPSNNGradientState nodes and inference filter source image nodes to the object as needed.
MPSNNGramMatrixCalculationMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNGramMatrixCalculationGradientMPSCNNConvolution and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNGramMatrixCalculationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSNNGramMatrixCalculationGradient kernel
MPSNNGramMatrixCalculationNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSNNGramMatrixCalculation kernel
MPSNNGraphMPSNNGraph and MPSNeuralNetwork and MPSCore and MPSKernel
Optimized representation of a graph of MPSNNImageNodes and MPSNNFilterNodes
MPSNNGridSampleMPSNNGridSample and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSNNImageNodeMPSNNGraphNodes and MPSNeuralNetwork
A placeholder node denoting the position of a MPSImage in a graph
MPSNNInitialGradientMPSCNNLoss and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNInitialGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
A node for a MPSNNInitialGradient kernel
MPSNNLabelsNodeMPSNNGraphNodes and MPSNeuralNetwork
The labels and weights for each MPSImage are passed in separately to the graph in a MPSNNLabels object. If the batch interface is used, there will be a MPSStateBatch of these of the same size as the MPSImageBatch that holds the images. The MPSNNLabelsNode is a placeholder in the graph for these nodes. The MPSNNLabels node is taken as an input to the Loss node.
MPSNNLanczosScaleNodeMPSNNGraphNodes and MPSNeuralNetwork
A MPSNNScale object that uses the Lanczos resampling filter
MPSNNLocalCorrelationMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNLocalCorrelation filter computes the correlation between two images locally, with a varying offset in the x-y plane between the two source images (controlled by the window and stride properties); the result is summed over the feature channels. The results are stored in the different feature channels of the destination image, ordered such that the offset in the x direction is the faster running index.
MPSNNLossGradientMPSCNNLoss and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework.
MPSNNLossGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Node representing a MPSNNLossGradient kernel
MPSNNMultiaryGradientStateMPSNNGradientState and MPSNeuralNetwork and MPSCore and MPSState
Apple’s documentation
MPSNNMultiaryGradientStateNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNMultiplicationGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Returns the gradient for either the primary or secondary source image from the inference pass. Use the isSecondarySourceFilter property to indicate whether this filter is computing the gradient for the primary or secondary source image from the inference pass.
MPSNNMultiplicationNodeMPSNNGraphNodes and MPSNeuralNetwork
Returns the elementwise product of left * right
MPSNNNeuronDescriptorMPSCNNNeuron and MPSNeuralNetwork
Dependencies: This depends on Metal.framework
MPSNNOptimizerMPSNNOptimizers and MPSNeuralNetwork and MPSCore and MPSKernel
The MPSNNOptimizer base class; it is not meant to be used directly, use one of its subclasses instead. Optimizers are generally used to update trainable neural network parameters. Users are usually expected to call these MPSKernels from the update methods on their Convolution or BatchNormalization data sources.
MPSNNOptimizerAdamMPSNNOptimizers and MPSNeuralNetwork and MPSCore and MPSKernel
The MPSNNOptimizerAdam performs an Adam Update
MPSNNOptimizerDescriptorMPSNNOptimizers and MPSNeuralNetwork
The MPSNNOptimizerDescriptor base class. Optimizers are generally used to update trainable neural network parameters. Users are usually expected to call these MPSKernels from the update methods on their Convolution or BatchNormalization data sources.
MPSNNOptimizerRMSPropMPSNNOptimizers and MPSNeuralNetwork and MPSCore and MPSKernel
The MPSNNOptimizerRMSProp performs an RMSProp update. RMSProp is also known as root mean square propagation.
MPSNNOptimizerStochasticGradientDescentMPSNNOptimizers and MPSNeuralNetwork and MPSCore and MPSKernel
The MPSNNOptimizerStochasticGradientDescent performs a gradient descent update with optional momentum.
MPSNNPadMPSNNReshape and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSNNPadGradientMPSNNReshape and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNPadGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNPadNodeMPSNNGraphNodes and MPSNeuralNetwork
A node for a MPSNNPad kernel
MPSNNPaddingMethodMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSNNReduceBinaryMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceBinary kernel performs a reduction operation on a pair of source images; see Apple’s documentation for the supported reduction operations.
MPSNNReduceColumnMaxMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceColumnMax performs a reduction operation returning the maximum value for each column of an image
MPSNNReduceColumnMeanMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceColumnMean performs a reduction operation returning the mean value for each column of an image
MPSNNReduceColumnMinMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceColumnMin performs a reduction operation returning the minimum value for each column of an image
MPSNNReduceColumnSumMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceColumnSum performs a reduction operation returning the sum for each column of an image
MPSNNReduceFeatureChannelsAndWeightsMeanMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSNNReduceFeatureChannelsAndWeightsSumMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSNNReduceFeatureChannelsArgumentMaxMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceFeatureChannelsArgumentMax returns the argument index that is the location of the maximum value for the feature channels of an image
MPSNNReduceFeatureChannelsArgumentMinMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceFeatureChannelsArgumentMin returns the argument index that is the location of the minimum value for feature channels of an image
MPSNNReduceFeatureChannelsMaxMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceFeatureChannelsMax performs a reduction operation returning the maximum value for feature channels of an image
MPSNNReduceFeatureChannelsMeanMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceFeatureChannelsMean performs a reduction operation returning the mean value over the feature channels of an image
MPSNNReduceFeatureChannelsMinMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceFeatureChannelsMin performs a reduction operation returning the minimum value for the feature channels of an image
MPSNNReduceFeatureChannelsSumMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceFeatureChannelsSum performs a reduction operation returning the sum over the feature channels of an image
MPSNNReduceRowMaxMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceRowMax performs a reduction operation returning the maximum value for each row of an image
MPSNNReduceRowMeanMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceRowMean performs a reduction operation returning the mean value for each row of an image
MPSNNReduceRowMinMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceRowMin performs a reduction operation returning the minimum value for each row of an image
MPSNNReduceRowSumMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceRowSum performs a reduction operation returning the sum for each row of an image
MPSNNReduceUnaryMPSNNReduce and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
The MPSNNReduceUnary kernel performs a reduction operation on a single source image; see Apple’s documentation for the supported reduction operations.
MPSNNReductionColumnMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionColumnMeanNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionColumnMinNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionColumnSumNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionFeatureChannelsArgumentMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionFeatureChannelsArgumentMinNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionFeatureChannelsMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionFeatureChannelsMeanNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionFeatureChannelsMinNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionFeatureChannelsSumNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionRowMaxNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionRowMeanNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionRowMinNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionRowSumNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionSpatialMeanGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReductionSpatialMeanNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNRegularizationTypeMPSNNOptimizers and MPSNeuralNetwork
Apple’s documentation
MPSNNReshapeMPSNNReshape and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSNNReshapeGradientMPSNNReshape and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNReshapeGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNNReshapeNodeMPSNNGraphNodes and MPSNeuralNetwork
A node for a MPSNNReshape kernel
MPSNNResizeBilinearMPSNNResize and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSNNScaleNodeMPSNNGraphNodes and MPSNeuralNetwork
Abstract node representing an image resampling operation
MPSNNSliceMPSNNSlice and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Apple’s documentation
MPSNNStateNodeMPSNNGraphNodes and MPSNeuralNetwork
A placeholder node denoting the position in the graph of a MPSState object
MPSNNSubtractionGradientNodeMPSNNGraphNodes and MPSNeuralNetwork
Returns the gradient for either the primary or secondary source image from the inference pass. Use the isSecondarySourceFilter property to indicate whether this filter is computing the gradient for the primary or secondary source image from the inference pass.
MPSNNSubtractionNodeMPSNNGraphNodes and MPSNeuralNetwork
Returns the elementwise difference of left - right
MPSNNTrainingStyleMPSNeuralNetworkTypes and MPSNeuralNetwork
Apple’s documentation
MPSNNUnaryReductionNodeMPSNNGraphNodes and MPSNeuralNetwork
A node for a unary MPSNNReduce kernel.
MPSOffsetMPSCoreTypes and MPSCore
A signed coordinate with x, y and z components
MPSOriginMPSCoreTypes and MPSCore
A position in an image
MPSPackedFloat3MPSRayIntersectorTypes
Apple’s documentation
MPSPolygonAccelerationStructureDeprecatedMPSPolygonAccelerationStructure and MPSRayIntersector and MPSAccelerationStructure and MPSCore and MPSKernel
An acceleration structure built over polygonal shapes
MPSPolygonBufferDeprecatedMPSPolygonBuffer and MPSRayIntersector
A vertex buffer and optional index and mask buffer for a set of polygons
MPSPolygonTypeDeprecatedMPSPolygonAccelerationStructure and MPSRayIntersector
Apple’s documentation
MPSPredicateMPSCommandBuffer and MPSCore
Dependencies: This depends on Metal.framework
MPSPurgeableStateMPSImage and MPSCore
Apple’s documentation
MPSQuadrilateralAccelerationStructureDeprecatedMPSQuadrilateralAccelerationStructure and MPSRayIntersector and MPSAccelerationStructure and MPSCore and MPSKernel and MPSPolygonAccelerationStructure
An acceleration structure built over quadrilaterals
MPSRNNBidirectionalCombineModeMPSRNNLayer and MPSNeuralNetwork
Apple’s documentation
MPSRNNDescriptorMPSRNNLayer and MPSNeuralNetwork
Dependencies: This depends on Metal.framework
MPSRNNImageInferenceLayerMPSRNNLayer and MPSNeuralNetwork and MPSCNNKernel and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSRNNMatrixIdMPSRNNLayer and MPSNeuralNetwork
Apple’s documentation
MPSRNNMatrixInferenceLayerMPSRNNLayer and MPSNeuralNetwork and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSRNNMatrixTrainingLayerMPSRNNLayer and MPSNeuralNetwork and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSRNNMatrixTrainingStateMPSRNNLayer and MPSNeuralNetwork and MPSCore and MPSState
Dependencies: This depends on Metal.framework
MPSRNNRecurrentImageStateMPSRNNLayer and MPSNeuralNetwork and MPSCore and MPSState
Dependencies: This depends on Metal.framework
MPSRNNRecurrentMatrixStateMPSRNNLayer and MPSNeuralNetwork and MPSCore and MPSState
Dependencies: This depends on Metal.framework
MPSRNNSequenceDirectionMPSRNNLayer and MPSNeuralNetwork
Apple’s documentation
MPSRNNSingleGateDescriptorMPSRNNLayer and MPSNeuralNetwork
Dependencies: This depends on Metal.framework
MPSRayDataTypeMPSRayIntersector
Options for the MPSRayIntersector ray data type property
MPSRayIntersectorDeprecatedMPSCore and MPSKernel and MPSRayIntersector
Performs intersection tests between rays and the geometry in an MPSAccelerationStructure
MPSRayMaskOperatorDeprecatedMPSRayIntersector
Options for the MPSRayIntersector ray mask operator property
MPSRayMaskOptionsDeprecatedMPSRayIntersector
Options for the MPSRayIntersector ray mask options property
MPSRayOriginMaskDirectionMaxDistanceMPSRayIntersectorTypes and MPSRayIntersector
Represents a 3D ray with an origin, a direction, and a mask to filter out intersections
MPSRayOriginMinDistanceDirectionMaxDistanceMPSRayIntersectorTypes and MPSRayIntersector
Represents a 3D ray with an origin, a direction, and an intersection distance range from the origin
MPSRayPackedOriginDirectionMPSRayIntersectorTypes and MPSRayIntersector
Represents a 3D ray with an origin and a direction
MPSRegionMPSCoreTypes and MPSCore
A region of an image
MPSSVGFMPSSVGF and MPSRayIntersector and MPSCore and MPSKernel
Reduces noise in images rendered with Monte Carlo ray tracing methods
MPSSVGFDefaultTextureAllocatorMPSSVGF and MPSRayIntersector
A default implementation of the MPSSVGFTextureAllocator protocol. Maintains a cache of textures which is checked first when a texture is requested. If there is no suitable texture in the cache, allocates a texture directly from the Metal device.
MPSSVGFDenoiserMPSSVGF and MPSRayIntersector
A convenience object which uses an MPSSVGF object to manage the denoising process
MPSScaleTransformMPSCoreTypes and MPSCore
Transform matrix for explicit control over resampling in MPSImageScale (see the sketch at the end of this list).
MPSSizeMPSCoreTypes and MPSCore
A size of a region in an image
MPSStateMPSState and MPSCore
Dependencies: This depends on Metal Framework
MPSStateResourceListMPSState and MPSCore
Apple’s documentation
MPSStateResourceTypeMPSState and MPSCore
Apple’s documentation
MPSStateTextureInfoMPSState and MPSCore
Apple’s documentation
MPSTemporalAAMPSTemporalAA and MPSRayIntersector and MPSCore and MPSKernel
Reduces aliasing in an image by accumulating samples over multiple frames
MPSTemporalWeightingMPSSVGF and MPSRayIntersector
Controls how samples are weighted over time
MPSTemporaryImageMPSImage and MPSCore
Dependencies: MPSImage
MPSTemporaryMatrixMPSMatrix and MPSCore
A MPSMatrix allocated in GPU private memory.
MPSTemporaryNDArrayMPSNDArray and MPSCore
A MPSNDArray that uses command buffer specific memory to store the array data
MPSTemporaryVectorMPSMatrix and MPSCore
A MPSVector allocated in GPU private memory.
MPSTransformTypeMPSInstanceAccelerationStructure and MPSRayIntersector
Instance transformation type options
MPSTriangleAccelerationStructureDeprecatedMPSTriangleAccelerationStructure and MPSRayIntersector and MPSAccelerationStructure and MPSCore and MPSKernel and MPSPolygonAccelerationStructure
An acceleration structure built over triangles
MPSTriangleIntersectionTestTypeDeprecatedMPSRayIntersector
Options for the MPSRayIntersector triangle intersection test type property
MPSUnaryImageKernelMPSImageKernel and MPSImage and MPSCore and MPSKernel
Dependencies: This depends on Metal.framework
MPSVectorMPSMatrix and MPSCore
Dependencies: This depends on Metal.framework
MPSVectorDescriptorMPSMatrix and MPSCore
Dependencies: This depends on Metal.framework
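The plain coordinate and transform types listed above (MPSOffset, MPSOrigin, MPSSize, MPSRegion, MPSScaleTransform) are re-exported C structs. The following is a minimal sketch of constructing a region and a scale transform; it assumes the generated Rust structs expose the same public fields and f64 field types as the C definitions in the MetalPerformanceShaders headers, which has not been checked against the individual item pages.

```rust
use objc2_metal_performance_shaders::{MPSOrigin, MPSRegion, MPSScaleTransform, MPSSize};

fn main() {
    // A 64x64x1 region of an image starting at the top-left corner
    // (field names assumed to mirror the C struct definitions).
    let region = MPSRegion {
        origin: MPSOrigin { x: 0.0, y: 0.0, z: 0.0 },
        size: MPSSize { width: 64.0, height: 64.0, depth: 1.0 },
    };

    // A transform that scales by 2x in each direction with no translation,
    // as used by the MPSImageScale family of kernels.
    let transform = MPSScaleTransform {
        scaleX: 2.0,
        scaleY: 2.0,
        translateX: 0.0,
        translateY: 0.0,
    };

    println!(
        "region {}x{} at ({}, {}); scale {}x{}",
        region.size.width,
        region.size.height,
        region.origin.x,
        region.origin.y,
        transform.scaleX,
        transform.scaleY,
    );
}
```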

Constants§

MPSBatchSizeIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSDeviceCapsIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSFunctionConstantIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSFunctionConstantIndexReservedMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSNDArrayConstantIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSNDArrayConstantMultiDestDstAddressingIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSNDArrayConstantMultiDestIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSNDArrayConstantMultiDestIndex0MPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSNDArrayConstantMultiDestIndex1MPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSNDArrayConstantMultiDestSrcAddressingIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSTextureLinkingConstantIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSUserAvailableFunctionConstantStartIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation
MPSUserConstantIndexMPSFunctionConstantIndices and MPSCore
Apple’s documentation

Statics§

MPSFunctionConstantNoneMPSKernelTypes and MPSCore
Apple’s documentation
MPSRectNoClipMPSCoreTypes and MPSCore
This is a special constant to indicate no clipping is to be done. The entire image will be used. This is the default clipping rectangle or the input extent for MPSKernels.

Traits§

MPSCNNBatchNormalizationDataSourceMPSCNNBatchNormalization and MPSNeuralNetwork
The MPSCNNBatchNormalizationDataSource protocol declares the methods that an instance of MPSCNNBatchNormalizationState uses to initialize the scale factors, bias terms, and batch statistics.
MPSCNNConvolutionDataSourceMPSCNNConvolution and MPSNeuralNetwork
Provides convolution filter weights and bias terms
MPSCNNGroupNormalizationDataSourceMPSCNNGroupNormalization and MPSNeuralNetwork
The MPSCNNGroupNormalizationDataSource protocol declares the methods that an instance of MPSCNNGroupNormalization uses to initialize the scale factors (gamma) and bias terms (beta).
MPSCNNInstanceNormalizationDataSourceMPSCNNInstanceNormalization and MPSNeuralNetwork
The MPSCNNInstanceNormalizationDataSource protocol declares the methods that an instance of MPSCNNInstanceNormalization uses to initialize the scale factors (gamma) and bias terms (beta).
MPSDeviceProviderMPSCoreTypes and MPSCore
A way of extending a NSCoder to enable the setting of MTLDevice for unarchived objects
MPSHandleMPSNNGraphNodes and MPSNeuralNetwork
MPS resource identification
MPSHeapProviderMPSCommandBuffer and MPSCore
Apple’s documentation
MPSImageAllocatorMPSImage and MPSCore
A class that allocates new MPSImage or MPSTemporaryImage
MPSImageSizeEncodingStateMPSNeuralNetworkTypes and MPSNeuralNetwork
MPSStates conforming to this protocol contain information about an image size elsewhere in the graph
MPSImageTransformProviderMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSNDArrayAllocatorMPSNDArray and MPSCore
Apple’s documentation
MPSNNGramMatrixCallbackMPSNNGraphNodes and MPSNeuralNetwork
Defines a callback protocol for MPSNNGramMatrixCalculationNode to set the ‘alpha’ scaling value dynamically just before encoding the underlying MPSNNGramMatrixCalculation kernel.
MPSNNLossCallbackMPSNNGraphNodes and MPSNeuralNetwork
Defines a callback protocol for MPSNNForwardLossNode and MPSNNLossGradientNode to set the scalar weight value just before encoding the underlying kernels.
MPSNNPaddingMPSNeuralNetworkTypes and MPSNeuralNetwork
A method to describe how MPSCNNKernels should pad images when data outside the image is needed
MPSNNTrainableNodeMPSNNGraphNodes and MPSNeuralNetwork
Apple’s documentation
MPSSVGFTextureAllocatorMPSSVGF and MPSRayIntersector
Protocol dictating how texture allocator objects should operate so that they can be used by an MPSSVGFDenoiser object to allocate and reuse intermediate and output textures during the denoising process.

Functions§

MPSGetImageTypeMPSKernelTypes and MPSCore and MPSImage
MPSGetPreferredDevice
Identify the preferred device for MPS computation
MPSHintTemporaryMemoryHighWaterMark
Hint to MPS how much memory your application expects to need for the command buffer
MPSImageBatchIncrementReadCountDeprecatedMPSImage and MPSCore
MPSImageBatchIterateDeprecatedMPSImage and MPSCore and block2
MPSImageBatchResourceSizeDeprecatedMPSImage and MPSCore
MPSImageBatchSynchronizeDeprecatedMPSImage and MPSCore
MPSSetHeapCacheDuration
Set the timeout after which unused cached MTLHeaps are released
MPSStateBatchIncrementReadCountDeprecatedMPSState and MPSCore
MPSStateBatchResourceSizeDeprecatedMPSState and MPSCore
MPSStateBatchSynchronizeDeprecatedMPSState and MPSCore
MPSSupportsMTLDevice
Reports whether the Metal Performance Shaders framework is supported on a given MTLDevice (see the call sketch below)
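As a rough illustration of calling these free functions from Rust, the sketch below checks MPS support for the system default Metal device, using the objc2-metal and objc2-metal-performance-shaders crates with the relevant cargo features enabled. The exact generated signatures (nullability, unsafety, and a plain bool return) are assumptions based on the C declaration BOOL MPSSupportsMTLDevice(id<MTLDevice>) and common objc2 conventions; check the item pages before relying on them.

```rust
use objc2_metal::MTLCreateSystemDefaultDevice;
use objc2_metal_performance_shaders::MPSSupportsMTLDevice;

fn main() {
    // Assumed to return an Option because the underlying C function is nullable;
    // the unsafe blocks are kept in case the bindings mark these functions unsafe.
    let device = unsafe { MTLCreateSystemDefaultDevice() }.expect("no Metal device available");

    // Assumed signature: takes an optional MTLDevice reference and returns bool.
    let supported = unsafe { MPSSupportsMTLDevice(Some(&*device)) };
    println!("MPS supported on this device: {supported}");
}
```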

Type Aliases§

MPSAccelerationStructureCompletionHandlerMPSAccelerationStructure and MPSRayIntersector and MPSCore and MPSKernel and block2
A block of code invoked when an operation on an MPSAccelerationStructure is completed
MPSCNNArithmeticGradientStateBatchMPSCNNMath and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Apple’s documentation
MPSCNNConvolutionGradientStateBatchMPSCNNConvolution and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Apple’s documentation
MPSCNNConvolutionTransposeGradientStateBatchMPSCNNConvolution and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Apple’s documentation
MPSCNNDropoutGradientStateBatchMPSCNNDropout and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Apple’s documentation
MPSCNNGroupNormalizationGradientStateBatchMPSCNNGroupNormalization and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Apple’s documentation
MPSCNNInstanceNormalizationGradientStateBatchMPSCNNInstanceNormalization and MPSNeuralNetwork and MPSCore and MPSNNGradientState and MPSState
Apple’s documentation
MPSCNNLossLabelsBatchMPSCNNLoss and MPSNeuralNetwork and MPSCore and MPSState
Apple’s documentation
MPSCopyAllocatorMPSImageKernel and MPSImage and MPSCore and MPSKernel and block2
Apple’s documentation
MPSDeviceCapsMPSKernelTypes and MPSCore
Apple’s documentation
MPSFunctionConstantMPSKernelTypes and MPSCore
Apple’s documentation
MPSFunctionConstantInMetalMPSKernelTypes and MPSCore
Apple’s documentation
MPSGradientNodeBlockMPSNNGraphNodes and MPSNeuralNetwork and block2
Block callback for customizing gradient nodes as they are constructed
MPSImageBatchMPSImage and MPSCore
Apple’s documentation
MPSNNBinaryGradientStateBatchMPSNNGradientState and MPSNeuralNetwork and MPSCore and MPSState
Apple’s documentation
MPSNNGradientStateBatchMPSNNGradientState and MPSNeuralNetwork and MPSCore and MPSState
Apple’s documentation
MPSNNGraphCompletionHandlerMPSNNGraph and MPSNeuralNetwork and MPSCore and MPSImage and block2
A notification when computeAsyncWithSourceImages:completionHandler: has finished
MPSNNMultiaryGradientStateBatchMPSNNGradientState and MPSNeuralNetwork and MPSCore and MPSState
Apple’s documentation
MPSShapeMPSCoreTypes and MPSCore
An array of NSNumbers where the dimension lengths provided by the user go from the slowest-moving to the fastest-moving dimension. This is the same order as MLMultiArray in Core ML and most Python frameworks (see the sketch at the end of this list).
MPSStateBatchMPSState and MPSCore
Apple’s documentation
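Since MPSShape is an NSArray of NSNumbers ordered from the slowest- to the fastest-moving dimension, a shape value can be built with objc2-foundation. A minimal sketch follows; the NSNumber::new_usize and NSArray::from_retained_slice constructors are assumptions based on the objc2-foundation API, and the [batch, height, width, channels] layout is purely illustrative.

```rust
use objc2_foundation::{NSArray, NSNumber};

fn main() {
    // Dimension lengths ordered from slowest- to fastest-moving,
    // e.g. a hypothetical [batch, height, width, channels] = [1, 64, 64, 32].
    let dims = [1usize, 64, 64, 32];

    // Wrap each length in an NSNumber and collect them into an NSArray,
    // which is what the MPSShape type alias expands to.
    let numbers: Vec<_> = dims.iter().map(|&d| NSNumber::new_usize(d)).collect();
    let shape = NSArray::from_retained_slice(&numbers);

    println!("shape rank = {}", shape.len());
}
```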