Rust FFI bindings for the Android Neural Networks API (NNAPI).
Structs
- ANeuralNetworksOperandType describes the type of an operand.
- ANeuralNetworksSymmPerChannelQuantParams holds the parameters for an ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL operand.
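The scheme that struct parameterizes can be illustrated in plain Rust. The helpers below are hypothetical (they are not part of these bindings); they assume the NNAPI convention for symmetric 8-bit quantization: one scale per channel, a zero point fixed at 0, and real_value = quantized_value * scale with quantized values in [-127, 127].

```rust
// Hypothetical helpers illustrating symmetric per-channel quantization
// (not part of this crate's FFI surface).
fn quantize_channel(values: &[f32], scale: f32) -> Vec<i8> {
    values
        .iter()
        // Divide by the channel's scale, round to nearest, saturate to i8's
        // symmetric range [-127, 127].
        .map(|&v| (v / scale).round().clamp(-127.0, 127.0) as i8)
        .collect()
}

fn dequantize_channel(values: &[i8], scale: f32) -> Vec<f32> {
    // real_value = quantized_value * scale (zero point is always 0).
    values.iter().map(|&q| q as f32 * scale).collect()
}

fn main() {
    let scale = 0.1;
    let q = quantize_channel(&[0.5, -0.3, 12.7, 100.0], scale);
    assert_eq!(q, vec![5, -3, 127, 127]); // 100.0 saturates at 127
    let d = dequantize_channel(&q, scale);
    assert!((d[0] - 0.5).abs() < 1e-6);
    println!("ok");
}
```

In the real API, the per-channel scales would be supplied through ANeuralNetworksModel_setOperandSymmPerChannelQuantParams rather than computed by hand.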
Enums
- Operand types.
- Operation types.
- Result codes.
Constants
- Dedicated accelerator for Machine Learning workloads.
- The device runs NNAPI models on single or multi-core CPU.
- The device can run NNAPI models and also accelerate graphics APIs such as OpenGL ES and Vulkan.
- The device does not fall into any of the other categories.
- The device type cannot be provided.
- No fused activation function.
- Fused ReLU activation function.
- Fused ReLU1 activation function.
- Fused ReLU6 activation function.
- SAME padding. Padding on both ends is the “same”: padding_to_beginning = total_padding / 2, padding_to_end = (total_padding + 1) / 2. For an even amount of total padding, both ends receive exactly the same padding; for an odd amount, the end receives one more unit of padding than the beginning.
- VALID padding. No padding. When the input size is not evenly divisible by the filter size, the input at the end that could not fill the whole filter tile will simply be ignored.
- Prefer returning a single answer as fast as possible, even if this causes more power consumption.
- Prefer executing in a way that minimizes battery drain. This is desirable for compilations that will be executed often.
- Prefer maximizing the throughput of successive frames, for example when processing successive frames coming from the camera.
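The fused activation functions listed above amount to simple clamps. A sketch in plain Rust (hypothetical helper, not part of these bindings; the discriminants 0–3 assume the NNAPI FuseCode ordering NONE, RELU, RELU1, RELU6):

```rust
// Sketch of the fused activation functions: NONE passes the value through,
// RELU clamps below at 0, RELU1 clamps to [-1, 1], RELU6 clamps to [0, 6].
fn fused_activation(x: f32, kind: u32) -> f32 {
    match kind {
        0 => x,                  // ANEURALNETWORKS_FUSED_NONE
        1 => x.max(0.0),         // ANEURALNETWORKS_FUSED_RELU
        2 => x.clamp(-1.0, 1.0), // ANEURALNETWORKS_FUSED_RELU1
        3 => x.clamp(0.0, 6.0),  // ANEURALNETWORKS_FUSED_RELU6
        _ => x,
    }
}

fn main() {
    assert_eq!(fused_activation(-2.0, 1), 0.0); // RELU zeroes negatives
    assert_eq!(fused_activation(3.5, 2), 1.0);  // RELU1 caps at 1
    assert_eq!(fused_activation(9.0, 3), 6.0);  // RELU6 caps at 6
    println!("ok");
}
```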
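The SAME and VALID padding rules above can be sketched for a single dimension. These helpers are hypothetical (not part of these bindings); input, filter, and stride are the sizes for the dimension in question, and input is assumed to be at least filter for the VALID case.

```rust
// VALID: no padding; trailing input that cannot fill a whole filter tile
// is simply dropped.
fn valid_output_size(input: u32, filter: u32, stride: u32) -> u32 {
    (input - filter) / stride + 1
}

// SAME: output size is ceil(input / stride); compute the total padding
// needed to make that work, then split it so any odd unit goes to the end.
fn same_padding(input: u32, filter: u32, stride: u32) -> (u32, u32) {
    let output = (input + stride - 1) / stride; // ceil(input / stride)
    let total = ((output - 1) * stride + filter).saturating_sub(input);
    (total / 2, (total + 1) / 2)
}

fn main() {
    assert_eq!(valid_output_size(10, 3, 1), 8);
    assert_eq!(same_padding(10, 3, 1), (1, 1)); // total = 2, split evenly
    assert_eq!(same_padding(10, 4, 1), (1, 2)); // total = 3, end gets the extra
    println!("ok");
}
```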
Functions
- Create an ANeuralNetworksBurst to apply the given compilation. This only creates the burst object. Computation is only performed once ANeuralNetworksExecution_burstCompute is invoked with a valid ANeuralNetworksExecution and ANeuralNetworksBurst.
- Destroy the burst object.
- Create an ANeuralNetworksCompilation to compile the given model.
- Create an ANeuralNetworksCompilation to compile the given model for a specified set of devices. If more than one device is specified, the compilation will distribute the workload automatically across the devices. The model must be fully supported by the specified set of devices. This means that ANeuralNetworksModel_getSupportedOperationsForDevices() must have returned true for every operation for that model/devices pair.
- Indicate that we have finished modifying a compilation. Required before calling ANeuralNetworksBurst_create or ANeuralNetworksExecution_create.
- Destroy a compilation.
- Set the compilation caching signature and the cache directory.
- Set the execution preference.
- Set the execution priority.
- Set the maximum expected duration for compiling the model.
- Get the supported NNAPI version of the specified device.
- Get the name of the specified device.
- Get the type of a given device.
- Get the version of the driver implementation of the specified device.
- Wait until the device is in a live state.
- Create an ANeuralNetworksEvent from a sync_fence file descriptor.
- Destroy the event.
- Get sync_fence file descriptor from the event.
- Wait until the execution completes.
- Schedule synchronous evaluation of the execution on a burst object.
- Schedule synchronous evaluation of the execution.
- Create an ANeuralNetworksExecution to apply the given compilation. This only creates the object. Computation is only performed once ANeuralNetworksExecution_burstCompute, ANeuralNetworksExecution_compute, ANeuralNetworksExecution_startCompute or ANeuralNetworksExecution_startComputeWithDependencies is invoked.
- Destroy an execution.
- Get the time spent in the specified ANeuralNetworksExecution, in nanoseconds.
- Get the dimensional information of the specified output operand of the model of the ANeuralNetworksExecution. The target output operand cannot be a scalar.
- Get the dimensional information of the specified output operand of the model of the ANeuralNetworksExecution.
- Associate a user buffer with an input of the model of the ANeuralNetworksExecution. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed. Evaluation of the execution will not change the content of the buffer.
- Associate a region of a memory object with an input of the model of the ANeuralNetworksExecution. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed. Evaluation of the execution will not change the content of the region.
- Set the maximum duration of WHILE loops in the specified execution.
- Specify whether the duration of the ANeuralNetworksExecution is to be measured. Evaluation of the execution must not have been scheduled.
- Associate a user buffer with an output of the model of the ANeuralNetworksExecution. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed.
- Associate a region of a memory object with an output of the model of the ANeuralNetworksExecution. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed.
- Set the maximum expected duration of the specified execution.
- Schedule asynchronous evaluation of the execution.
- Schedule asynchronous evaluation of the execution with dependencies.
- Specify that a memory object will play the role of an input to an execution created from a particular compilation.
- Specify that a memory object will play the role of an output to an execution created from a particular compilation.
- Create an ANeuralNetworksMemoryDesc with no properties.
- Indicate that we have finished modifying a memory descriptor. Required before calling ANeuralNetworksMemory_createFromDesc.
- Destroy a memory descriptor.
- Set the dimensional information of the memory descriptor.
- Copy data from one memory object to another.
- Create a shared memory object from an AHardwareBuffer handle.
- Create a memory object from a memory descriptor.
- Create a shared memory object from a file descriptor.
- Delete a memory object.
- Add an operand to a model.
- Add an operation to a model.
- Create an empty ANeuralNetworksModel.
- Indicate that we have finished modifying a model. Required before calling ANeuralNetworksCompilation_create and ANeuralNetworksCompilation_createForDevices.
- Destroy a model.
- Get the supported operations for a specified set of devices. If multiple devices are selected, the supported operation list is the union of the operations supported by all selected devices.
- Specify which operands will be the model’s inputs and outputs. Every model must have at least one input and one output.
- Specify whether ANEURALNETWORKS_TENSOR_FLOAT32 is allowed to be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating-point format. By default, ANEURALNETWORKS_TENSOR_FLOAT32 must be calculated using at least the range and precision of the IEEE 754 32-bit floating-point format.
- Set an operand’s per-channel quantization parameters.
- Set an operand to a constant value.
- Set an operand to a value stored in a memory object.
- Set an operand to a value that is a reference to another NNAPI model.
- Get the default timeout value for WHILE loops.
- Get the representation of the specified device.
- Get the number of available devices.
- Get the maximum timeout value for WHILE loops.
Type Aliases
- Device types.
- Different duration measurements.
- Fused activation function types.
- Implicit padding algorithms.
- Execution preferences.
- Relative execution priority.
- For ANeuralNetworksModel_setOperandValue, values with a length smaller than or equal to this will be immediately copied into the model. The size is in bytes.
- For ANeuralNetworksCompilation_setCaching, specify the size of the cache token required from the application. The size is in bytes.