Function ANeuralNetworksModel_relaxComputationFloat32toFloat16 

pub unsafe extern "C" fn ANeuralNetworksModel_relaxComputationFloat32toFloat16(
    model: *mut ANeuralNetworksModel,
    allow: bool,
) -> c_int

Specifies whether {@link ANEURALNETWORKS_TENSOR_FLOAT32} is allowed to be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating-point format. By default, {@link ANEURALNETWORKS_TENSOR_FLOAT32} must be calculated using at least the range and precision of the IEEE 754 32-bit floating-point format.

The relaxComputationFloat32toFloat16 setting of the main model of a compilation overrides the values of the referenced models.

@param model The model to be modified.
@param allow 'true' indicates {@link ANEURALNETWORKS_TENSOR_FLOAT32} may be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating-point format. 'false' indicates {@link ANEURALNETWORKS_TENSOR_FLOAT32} must be calculated using at least the range and precision of the IEEE 754 32-bit floating-point format.

Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.

Available since API level 28.

See {@link ANeuralNetworksModel} for information on multithreaded usage.
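
A minimal usage sketch, not taken from this crate's documentation: it assumes the sibling bindings ANeuralNetworksModel_create, ANeuralNetworksModel_finish, and ANeuralNetworksModel_free (standard NNAPI entry points) are exported by the same crate, shown here under the placeholder path nnapi_sys. The relaxation flag is set after the model is created and before ANeuralNetworksModel_finish is called, as required above.

use std::os::raw::c_int;
use std::ptr;

// Placeholder crate path; import from wherever these bindings are actually re-exported.
use nnapi_sys::{
    ANeuralNetworksModel, ANeuralNetworksModel_create, ANeuralNetworksModel_finish,
    ANeuralNetworksModel_free, ANeuralNetworksModel_relaxComputationFloat32toFloat16,
};

/// Builds a model whose FLOAT32 tensors may be computed with FP16 range/precision.
unsafe fn build_relaxed_model() -> Result<*mut ANeuralNetworksModel, c_int> {
    let mut model: *mut ANeuralNetworksModel = ptr::null_mut();
    let status = ANeuralNetworksModel_create(&mut model);
    if status != 0 {
        // 0 is ANEURALNETWORKS_NO_ERROR
        return Err(status);
    }

    // Must be called before ANeuralNetworksModel_finish; afterwards it returns an error.
    let status = ANeuralNetworksModel_relaxComputationFloat32toFloat16(model, true);
    if status != 0 {
        ANeuralNetworksModel_free(model);
        return Err(status);
    }

    // ... add operands and operations here ...

    let status = ANeuralNetworksModel_finish(model);
    if status != 0 {
        ANeuralNetworksModel_free(model);
        return Err(status);
    }
    Ok(model)
}

Passing allow = false restores the default behavior, in which FLOAT32 computation must use at least IEEE 754 32-bit range and precision.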