Crate deepviewrt_sys

Structs§

_IO_FILE
_IO_codecvt
_IO_marker
_IO_wide_data
nn_context
nn_engine
nn_quant_param
nn_tensor

Constants§

NNError_NN_ERROR_GRAPH_VERIFY_FAILED
Failed to verify the graph generated from the model.
NNError_NN_ERROR_INTERNAL
Internal error without a specific error code, catch-all error.
NNError_NN_ERROR_INVALID_AXIS
The requested axis for an operation was invalid or unsupported.
NNError_NN_ERROR_INVALID_ENGINE
The requested engine is invalid.
NNError_NN_ERROR_INVALID_HANDLE
The provided handle is invalid. This error is typically used by NNEngine when interfacing with another API such as OpenCL or OpenVX which require native handles for their internal API.
NNError_NN_ERROR_INVALID_LAYER
When working with a model, a reference was made to a layer which does not exist.
NNError_NN_ERROR_INVALID_ORDER
The requested ordering was invalid.
NNError_NN_ERROR_INVALID_PARAMETER
A required parameter was missing, NULL, or otherwise invalid.
NNError_NN_ERROR_INVALID_QUANT
The quantization parameters are invalid.
NNError_NN_ERROR_INVALID_SHAPE
The tensor’s shape is invalid for the given operation. It differs from the shape mismatch in that the shape is invalid on its own and not relative to another related tensor. An example would be a shape with more than one -1 dimension.
NNError_NN_ERROR_KERNEL_MISSING
The internal kernel or subroutine required to complete an operation using the engine plugin was missing. An example would be OpenCL or OpenVX operation where the kernel implementation cannot be located.
NNError_NN_ERROR_MISSING_RESOURCE
A required resource was missing or the reference to it was invalid.
NNError_NN_ERROR_MODEL_GRAPH_FAILED
Failed to generate graph representation of model.
NNError_NN_ERROR_MODEL_INVALID
The model is invalid or corrupted.
NNError_NN_ERROR_MODEL_MISSING
An operation referenced a model but the model was not provided.
NNError_NN_ERROR_NOT_IMPLEMENTED
Signals that an API has not been implemented. It can be caught by the core DeepViewRT library when interfacing with engine plugins to gracefully fall back to the native implementation.
NNError_NN_ERROR_OUT_OF_MEMORY
Out-of-memory error, returned if a call to malloc returns NULL or a similar error is reported by an underlying engine plugin.
NNError_NN_ERROR_OUT_OF_RESOURCES
Out-of-resources errors are similar to out-of-memory errors, though they are sometimes treated separately by underlying engine plugins.
NNError_NN_ERROR_SHAPE_MISMATCH
Returned when attempting to run an operation whose input/output tensors have invalid or unsupported shape combinations. Some operations require the shapes to be identical while others, such as arithmetic broadcasting operations, support various shape combinations; if the provided pair is invalid, the shape mismatch error is returned.
NNError_NN_ERROR_STRING_TOO_LARGE
The string was too large.
NNError_NN_ERROR_SYSTEM_ERROR
A system error occurred when interfacing with an operating system function. On some systems errno might be updated with the underlying error code.
NNError_NN_ERROR_TENSOR_NO_DATA
The tensor has no data or the data is not currently accessible. An example of the latter would be attempting to call @ref nn_tensor_maprw while the tensor was already mapped read-only or write-only.
NNError_NN_ERROR_TENSOR_TYPE_UNSUPPORTED
The operation does not support the tensor’s type.
NNError_NN_ERROR_TOO_MANY_INPUTS
For operations which can operate on an array of inputs, the provided list of inputs was too large.
NNError_NN_ERROR_TYPE_MISMATCH
When attempting to run an operation where the input/output tensors are of different types and the operation does not support automatic type conversions.
NNError_NN_SUCCESS
Successful operation, no error.
NNQuantizationType_NNQuantizationType_Affine_PerChannel
Affine quantization with separate parameters applied to each channel. Also known as per-axis where the axis is always the channel “C” axis in a NCHW, NHWC, and so-on shaped tensor.
NNQuantizationType_NNQuantizationType_Affine_PerTensor
Affine quantization with parameters applied globally across the tensor.
NNQuantizationType_NNQuantizationType_DFP
Quantized using Dynamic Fixed Point.
NNQuantizationType_NNQuantizationType_None
No quantization for tensor.
NNTensorType_NNTensorType_F16
Half precision (16-bit) floating point tensor data.
NNTensorType_NNTensorType_F32
Single precision (32-bit) floating point tensor data.
NNTensorType_NNTensorType_F64
Double precision (64-bit) floating point tensor data.
NNTensorType_NNTensorType_I8
Signed 8-bit integer tensor data internally @ref int8_t
NNTensorType_NNTensorType_I16
Signed 16-bit integer tensor data internally @ref int16_t
NNTensorType_NNTensorType_I32
Signed 32-bit integer tensor data internally @ref int32_t
NNTensorType_NNTensorType_I64
Signed 64-bit integer tensor data internally @ref int64_t
NNTensorType_NNTensorType_RAW
Raw byte-stream tensor, useful for encoded tensors such as PNG images. The size of this tensor would be in bytes.
NNTensorType_NNTensorType_STR
String tensor data, a single dimension would hold one null-terminated string of variable length. A standard C char* array.
NNTensorType_NNTensorType_U8
Unsigned 8-bit integer tensor data internally @ref uint8_t
NNTensorType_NNTensorType_U16
Unsigned 16-bit integer tensor data internally @ref uint16_t
NNTensorType_NNTensorType_U32
Unsigned 32-bit integer tensor data internally @ref uint32_t
NNTensorType_NNTensorType_U64
Unsigned 64-bit integer tensor data internally @ref uint64_t
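
The tensor-type constants above map to fixed element widths, so client code can recover the per-element byte size from a runtime NNTensorType value (the same information nn_tensor_element_size reports for a live tensor). A minimal sketch of that mapping, assuming the constants are generated by bindgen as plain integer values of the NNTensorType alias (the naming above suggests the default bindgen style):

```rust
use deepviewrt_sys::*;

/// Per-element byte size for the fixed-width NNTensorType constants above.
/// RAW and STR tensors carry variable-length data, so they report None here.
fn element_size(ty: NNTensorType) -> Option<usize> {
    match ty {
        NNTensorType_NNTensorType_I8 | NNTensorType_NNTensorType_U8 => Some(1),
        NNTensorType_NNTensorType_I16
        | NNTensorType_NNTensorType_U16
        | NNTensorType_NNTensorType_F16 => Some(2),
        NNTensorType_NNTensorType_I32
        | NNTensorType_NNTensorType_U32
        | NNTensorType_NNTensorType_F32 => Some(4),
        NNTensorType_NNTensorType_I64
        | NNTensorType_NNTensorType_U64
        | NNTensorType_NNTensorType_F64 => Some(8),
        _ => None,
    }
}

fn main() {
    assert_eq!(element_size(NNTensorType_NNTensorType_F32), Some(4));
}
```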

Functions§

nn_context_cache
@public @memberof NNContext @since 2.2
nn_context_engine
Returns the engine used by the given context object.
nn_context_init
Initializes an NNContext and allocates the required memory. If any of the pointers are NULL, malloc will be called automatically to create the memory using the provided sizes. If memory_size or cache_size is 0, the corresponding memory will not be initialized.
nn_context_init_ex
Initializes an NNContext into the provided memory, which MUST be at least NN_CONTEXT_SIZEOF bytes. If any of the pointers are NULL, malloc will be called automatically to create the memory using the provided sizes. If memory_size or cache_size is 0, the corresponding memory will not be initialized.
nn_context_mempool
@public @memberof NNContext @since 2.2
nn_context_model
Returns the currently loaded model blob for the context.
nn_context_model_load
Loads the model provided by the input into the context.
nn_context_model_unload
Frees the memory used by the model within the given context object.
nn_context_release
Release the memory being used by the given context object.
nn_context_run
Runs the model within the given context object.
nn_context_sizeof
Returns the actual size of the context structure. This size will be smaller than @ref NN_CONTEXT_SIZEOF which contains additional padding for future extension. Since @ref nn_context_sizeof() is called dynamically at runtime it can return the true and unpadded size.
nn_context_step
Runs the layer at the given index from the model within the given context object. If the index is invalid, NN_ERROR_INVALID_LAYER is returned; this can be used to detect when the end of the model has been reached.
nn_context_tensor
Returns the tensor with the given name within the model provided by the given context object.
nn_context_tensor_index
Returns the tensor at the given index within the model provided by the given context object.
nn_context_user_ops
@public @memberof NNContext @since 2.4
nn_context_user_ops_register
@public @memberof NNContext @since 2.4
nn_engine_init
Initializes the NNEngine structure using the provided memory or allocating a new buffer if none was provided.
nn_engine_load
Loads the plugin into the provided engine object. The plugin should point to an engine plugin library, either as an absolute or relative path, or be found in the standard OS search path for shared libraries.
nn_engine_name
Returns the name of the engine object.
nn_engine_native_handle
Returns handle of the NNEngine object.
nn_engine_release
Releases the memory that was being used by the engine.
nn_engine_sizeof
The actual size of the NNEngine structure. This will differ from the size defined by @ref NN_ENGINE_SIZEOF as the latter is padded for future API extensions while this function returns the actual size currently required.
nn_engine_unload
Unloads the plugin from the given engine object.
nn_engine_version
Returns the version of the engine object.
nn_free
Exposes the free() function
nn_init
Initializes the library with optional parameters. This function MUST be called before any others (though nn_version and nn_strerror are safe) and MUST not be called again unless care is taken to protect this call.
nn_malloc
Exposes the malloc() function
nn_model_cache_minimum_size
Returns the minimum cache size of a given model object.
nn_model_cache_optimum_size
Returns the optimum cache size of a given model object.
nn_model_inputs
Returns the list of model input indices and optionally the number of inputs.
nn_model_label
Returns the label of the given index within the given model object. If the model contains no labels or the index is out of range then NULL will be returned.
nn_model_label_count
Returns the number of labels within a given model object.
nn_model_label_icon
Returns an optional icon resource for the provided label index.
nn_model_layer_axis
Returns the natural data axis for the tensor or -1 if one is not set.
nn_model_layer_count
Returns the number of layers within a given model object.
nn_model_layer_datatype
Returns the datatype of a layer at the given index within the given model object.
nn_model_layer_datatype_id
Returns the datatype of a layer at the given index within the given model object.
nn_model_layer_inputs
Returns the number of inputs to a layer at the given index within the given model object.
nn_model_layer_lookup
Returns the index of a given layer with the name provided in the given model object.
nn_model_layer_name
Returns the name of a layer at a given index within the given model object.
nn_model_layer_parameter
Returns an NNModelParameter from the model at the layer index defined by layer using the parameter key. If the layer does not contain this parameter NULL is returned.
nn_model_layer_parameter_data_f32
Returns float data for the parameter at the given layer index. This is a convenience wrapper around acquiring the parameter followed by acquiring the data.
nn_model_layer_parameter_data_i16
Returns int16 data for the parameter at the given layer index. This is a convenience wrapper around acquiring the parameter followed by acquiring the data.
nn_model_layer_parameter_data_raw
Returns raw data for the parameter at the given layer index. This is a convenience wrapper around acquiring the parameter followed by acquiring the data.
nn_model_layer_parameter_data_str
Returns string data for the parameter at the given layer index and string array element. This is a convenience wrapper around acquiring the parameter followed by acquiring the data.
nn_model_layer_parameter_data_str_len
Returns the number of string elements in the data_str array for the specified layer and parameter key. This is a convenience wrapper around acquiring the parameter followed by acquiring the data.
nn_model_layer_parameter_shape
Returns the shape of the model parameter for the layer at the given index.
nn_model_layer_scales
Returns the array of quantization scales, and optionally the number of scales in the array. The length will either be 0, 1, or equal to the number of channels in an NHWC/NCHW tensor.
nn_model_layer_shape
Returns the shape of a layer at the given index within the given model object.
nn_model_layer_type
Returns the type of a layer at the given index within the given model object.
nn_model_layer_type_id
Returns the type ID of the layer.
nn_model_layer_zeros
Returns the array of quantization zero-points, and optionally the number of zero-points in the array. The length will either be 0, 1, or equal to the number of channels in an NHWC/NCHW tensor.
nn_model_memory_size
Returns the memory size of the given model object.
nn_model_name
Returns the name of the given model object. Names are optional and if the model does not contain a name then NULL will be returned.
nn_model_outputs
Returns the list of model output indices and optionally the number of outputs.
nn_model_parameter_data_f32
Returns parameter float data, length of the array is optionally stored into the length parameter if non-NULL.
nn_model_parameter_data_i8
Returns parameter int8_t data, length of the array is optionally stored into the length parameter if non-NULL.
nn_model_parameter_data_i16
Returns parameter int16_t data, length of the array is optionally stored into the length parameter if non-NULL.
nn_model_parameter_data_i32
Returns parameter int32_t data, length of the array is optionally stored into the length parameter if non-NULL.
nn_model_parameter_data_raw
Returns parameter raw data pointer, length of the array is optionally stored into the length parameter if non-NULL.
nn_model_parameter_data_str
Returns parameter string data at the desired index. This data handler is different from the others which return the array, as strings are themselves arrays and need special handling. Refer to @ref nn_model_parameter_data_str_len() to query the size of the data_str array, which is the number of strings in this parameter.
nn_model_parameter_data_str_len
Returns the number of strings in the parameter’s data_str attribute.
nn_model_parameter_shape
Returns the shape of the parameter data or NULL if no shape was defined. If n_dims is non-NULL the number of dimensions will be stored there. The shape attribute is not required for parameters but can be used either on its own or as part of defining layout of data attributes.
nn_model_resource
Retrieves a reference to the resource with the given name.
nn_model_resource_at
Retrieves a reference to the resource at the provided index.
nn_model_resource_count
The number of resources defined in the model.
nn_model_resource_data
Returns the raw binary data for the resource, the size of the data will be saved in @p data_size if non-NULL.
nn_model_resource_meta
Returns the meta string for the resource.
nn_model_resource_mime
Returns the mime type string for the resource.
nn_model_resource_name
The unique name of the resource as can be used to retrieve the resource using @ref nn_model_resource().
nn_model_serial
Currently returns 0
nn_model_uuid
Currently returns NULL (UPDATE WHEN FUNCTION IS UPDATED)
nn_model_validate
Attempts to validate the model; this is automatically called by nn_model_load and nn_model_mmap. The function returns 0 on success, otherwise it returns an error code which can be turned into a string by calling @ref nn_model_validate_error() with the return value from @ref nn_model_validate().
nn_model_validate_error
Returns the string associated with a given error returned from @ref nn_model_validate().
nn_strerror
Returns the string associated with a given error.
nn_tensor_alloc
Allocates the internal memory for the tensor.
nn_tensor_assign
Assigns the tensor parameters and, optionally, a data pointer. The default implementation uses the data buffer as the internal storage for tensor data, and it MUST outlive the tensor. Engine plugins may choose how to use the data; in the OpenCL example, if data is provided it is copied into the OpenCL buffer and never used again, while if NULL is provided the OpenCL engine creates the memory and leaves it unassigned.
nn_tensor_aux_free
Returns the auxiliary object’s free function, or NULL if none is attached.
nn_tensor_aux_free_by_name
Frees the auxiliary object associated with the given name parameter.
nn_tensor_aux_object
Returns the auxiliary object for the tensor, or NULL if none is attached.
nn_tensor_aux_object_by_name
Acquire the auxiliary object associated with the given name parameter.
nn_tensor_axis
Returns the natural data axis of the tensor.
nn_tensor_compare
Element-wise comparison of two tensors within a given tolerance, returning total number of errors relative to the left tensor. If the two tensors are incompatible the volume of the left tensor is returned (all elements invalid).
nn_tensor_concat
nn_tensor_concat concatenates all of the given input tensors into the given output tensor.
nn_tensor_copy
Copies the contents of source tensor into destination tensor.
nn_tensor_copy_buffer
Loads a tensor with data from a user buffer. The user has to maintain the buffer and ensure compatibility with the NHWC tensor. The function will return an error if there is a size mismatch, i.e. (bufsize != nn_tensor_size(tensor)), or if the tensor is invalid.
nn_tensor_dequantize
De-quantizes the source tensor into the destination tensor.
nn_tensor_dequantize_buffer
De-quantizes the source tensor into the destination buffer.
nn_tensor_dims
Returns the number of dimensions of the given tensor object.
nn_tensor_element_size
Returns the element size of a given tensor object.
nn_tensor_engine
Returns the engine owning this tensor, could be NULL.
nn_tensor_fill
Fills the tensor with the provided constant. The constant is captured as double precision (64-bit floating point), which has 53 bits of precision for whole numbers. This means the constant CANNOT represent all 64-bit integers but it CAN represent all 32-bit and smaller integers. If full 64-bit integer support is required @ref nn_tensor_map can be used, though it is less efficient with some engines because of the additional memory transfer required.
nn_tensor_init
Initializes the tensor using provided memory. The memory MUST be at least the size returned by @ref nn_tensor_sizeof(). This size does not include the actual tensor data which is allocated separately, either by requesting the implementation to allocate the buffer or attaching to externally allocated memory.
nn_tensor_io_time
Returns the I/O time information stored in the tensor. The time is returned in nanoseconds as the duration of the last map/unmap pair. When tensors are mapped to the CPU (no accelerator engine is loaded), the times are expected to be zero as no mapping is actually required and the internal pointer is simply returned. When an accelerator engine is used, such as OpenVX, the io_time measures the time the map/unmap or copy operations took to complete.
nn_tensor_load_file
Loads an image from file into the provided tensor.
nn_tensor_load_file_ex
Loads an image from file into the provided tensor.
nn_tensor_load_image
Loads an image from the provided buffer and decodes it accordingly; the function uses the image's headers to find an appropriate decoder. The function will handle any required casting to the target tensor's format.
nn_tensor_load_image_ex
Loads an image from the provided buffer and decodes it accordingly; the function uses the image's headers to find an appropriate decoder. The function will handle any required casting to the target tensor's format and will apply image standardization (compatible with TensorFlow's tf.image.per_image_standardization) if the proc parameter is set to NN_IMAGE_PROC_WHITENING.
nn_tensor_mapped
Returns the tensor’s mapping count, 0 means the tensor is unmapped.
nn_tensor_mapro
Maps the tensor’s memory and returns the client accessible pointer. This is the read-only version which causes the engine to download buffers to the CPU memory space if required but will not flush back to the device on unmap.
nn_tensor_maprw
Maps the tensor’s memory and returns the client accessible pointer. This is the read-write version which causes the engine to download buffers to the CPU memory space if required and will also flush back to the device on unmap.
nn_tensor_mapwo
Maps the tensor’s memory and returns the client accessible pointer. This is the write-only version which will not cause a download of the buffers to the CPU memory space on map but will upload to the device on unmap.
nn_tensor_native_handle
Returns the native handle of the tensor object. This is an internal API for accessing internal structures.
nn_tensor_offset
Returns the offset of a given tensor. This function can be used to calculate the index across numerous dimensions.
nn_tensor_offsetv
Returns the offset of a given tensor using variable length dimensions. This works the same as @ref nn_tensor_offset() but uses variable arguments. The user must provide @p n_dims number of parameters after the @p n_dims parameter.
nn_tensor_pad
nn_tensor_pad implements a padded Tensor to Tensor copy. This can be used to achieve the various convolution padding strategies (SAME, FULL). For example SAME conv2d would use the following padded_copy before running the conv2d layer.
nn_tensor_padding
nn_tensor_padding calculates the paddings for the given tensor, padtype, window, stride, and dilation given n_dims being queried from the tensor’s nn_tensor_dims().
nn_tensor_panel_size
Retrieves the panel size of the tensor when it has been panel-shuffled for improved tiling performance. The panel size is the vectorization length.
nn_tensor_printf
Writes the tensor information to the FILE stream provided. The format is "[D0 D1 D2 D3]" where D0..D3 are the dimensions provided. If the data parameter is true the format will be followed by ": ..." where ... is the string representation of the tensor's data.
nn_tensor_quant_params
Internal API used by the RTM loader to associate quantization parameters to the tensor.
nn_tensor_quantization_type
Returns the quantization type for the tensor.
nn_tensor_quantize
Quantizes the source tensor into the destination tensor.
nn_tensor_quantize_buffer
Quantizes the source buffer into the destination tensor.
nn_tensor_randomize
Randomizes the data within the tensor.
nn_tensor_release
Releases the memory used by the tensor object.
nn_tensor_requantize
Requantizes the source tensor into the destination tensor.
nn_tensor_reshape
Reshapes the given tensor to the provided new shape.
nn_tensor_scales
Returns the scales array for the tensor and optionally the number of scales.
nn_tensor_set_aux_object
Configures an auxiliary object for the tensor. This is a private API used for attaching auxiliary buffers.
nn_tensor_set_aux_object_by_name
Extended version of the auxiliary object API which allows additional objects to be attached to the tensor using name-based indexing.
nn_tensor_set_axis
Configures the channel axis of the tensor. This refers to the “C” in orderings such as NHWC and NCHW.
nn_tensor_set_native_handle
Sets the tensor objects native handle to the one provided.
nn_tensor_set_panel_size
Sets the panel size of the tensor. This is primarily an internal API used to store the vectorization length when shuffling tensors into an optimized tile format.
nn_tensor_set_scales
Sets the quantization scales for the tensor. If n_scales>1 it should match the channel dimension (axis) of the tensor.
nn_tensor_set_type
Sets the type of a given tensor object.
nn_tensor_set_zeros
Sets the quantization zero-points for the tensor. If n_zeros>1 it should match the channel dimension (axis) of the tensor.
nn_tensor_shape
Returns the shape of the given tensor object.
nn_tensor_shape_copy
Copies the source shape array to the destination array.
nn_tensor_shape_equal
Tensor shape comparison.
nn_tensor_shuffle
Shuffles (transposes) the tensor, moving the current dimensions into the ordering defined by the order parameter.
nn_tensor_size
Calculates the total byte size of the tensor (volume * element_size).
nn_tensor_sizeof
Returns the size of the tensor object for preparing memory allocations.
nn_tensor_slice
nn_tensor_slice copies a slice of the tensor into output. For a version which supports strides see @ref nn_tensor_strided_slice.
nn_tensor_strided_slice
nn_tensor_strides
Returns the strides of the given tensor object.
nn_tensor_sync
Synchronizes the tensor and all preceding events in the chain.
nn_tensor_time
Returns the time information stored in the tensor. The time is returned in nanoseconds as the duration of the last operation that wrote into this tensor. Calling this causes an nn_tensor_sync on the target tensor.
nn_tensor_type
Returns the type of a given tensor object.
nn_tensor_unmap
Releases the tensor mapping; if the reference count reaches 0 the tensor is fully unmapped, forcing a flush to the device if required.
nn_tensor_view
Maps the tensor using the memory from the parent tensor.
nn_tensor_volume
Calculates the total tensor volume (product of dimensions).
nn_tensor_zeros
Returns the zero-points for the tensor and optionally the number of zero-points.
nn_version
DeepViewRT library version as “MAJOR.MINOR.PATCH”.
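
Taken together, nn_context_run, nn_context_tensor, nn_tensor_mapro, nn_tensor_size, and nn_tensor_unmap describe the usual inference loop: run the loaded model, look up an output tensor by name, map it read-only so the data is available to the CPU, copy it out, and unmap. The sketch below illustrates that flow from Rust. The pointer signatures listed in the comments are assumptions inferred from the summaries above, not verified against the generated bindings, and the error codes returned for null pointers are stand-ins chosen for the example.

```rust
use std::ffi::CString;
use deepviewrt_sys::*;

// Assumed signatures, inferred from the summaries above (verify against the
// generated bindings before use):
//   nn_context_run(*mut NNContext) -> NNError
//   nn_context_tensor(*mut NNContext, *const c_char) -> *mut NNTensor
//   nn_tensor_mapro(*mut NNTensor) -> *const c_void
//   nn_tensor_size(*mut NNTensor) -> size_t
//   nn_tensor_unmap(*mut NNTensor)
unsafe fn run_and_read_output(ctx: *mut NNContext, output: &str) -> Result<Vec<u8>, NNError> {
    // Run the model previously loaded with nn_context_model_load.
    let err = nn_context_run(ctx);
    if err != NNError_NN_SUCCESS {
        return Err(err);
    }

    // Look up the output tensor by name within the context's model.
    let name = CString::new(output).expect("tensor name contains NUL");
    let tensor = nn_context_tensor(ctx, name.as_ptr());
    if tensor.is_null() {
        // Stand-in error code chosen for this example.
        return Err(NNError_NN_ERROR_MISSING_RESOURCE);
    }

    // Map read-only: the engine downloads the buffer to CPU memory if needed
    // and will not flush it back to the device on unmap.
    let data = nn_tensor_mapro(tensor) as *const u8;
    if data.is_null() {
        return Err(NNError_NN_ERROR_TENSOR_NO_DATA);
    }
    let len = nn_tensor_size(tensor) as usize;
    let bytes = std::slice::from_raw_parts(data, len).to_vec();

    // Drop the mapping reference (see nn_tensor_unmap above).
    nn_tensor_unmap(tensor);
    Ok(bytes)
}
```

Because the map here is read-only, the final unmap does not flush anything back to the device; a read-write flow would use nn_tensor_maprw instead.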

Type Aliases§

FILE
NNContext
@struct NNContext
NNEngine
@struct NNEngine
NNError
Enumeration of all errors provided by DeepViewRT. Most functions return an NNError, with NN_SUCCESS being zero. A common usage pattern for client code is to check for err using if (err) ... as any error condition will return non-zero; a minimal Rust sketch of this pattern follows this list.
NNModel
@struct NNModel
NNModelParameter
@struct NNModelParameter
NNModelResource
@struct NNModelResource
NNOptions
DeepViewRT library initialization options.
NNQuantParam
@struct NNQuantParam
NNQuantizationType
Enumeration of all quantization types provided by DeepViewRT.
NNTensor
@struct NNTensor
NNTensorType
@enum NNTensorType Enumeration of the data types supported by NNTensors in DeepViewRT.
_IO_lock_t
__off64_t
__off_t
nn_aux_object_free
Callback function to free an auxiliary object, called from nn_tensor_release.
nn_user_ops
Callback function for custom user ops.
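
As noted for NNError above, most functions return zero on success and a non-zero NNError otherwise, and nn_strerror turns that code into a printable message. A hedged sketch of wrapping this convention into a Rust Result, assuming nn_strerror takes the error code and returns a C string (and nn_version returns a C string) as their summaries suggest:

```rust
use std::ffi::CStr;
use deepviewrt_sys::*;

/// Convert a raw NNError into a Rust Result, following the "zero is success"
/// convention documented for NNError.
fn nn_result(err: NNError) -> Result<(), String> {
    if err == NNError_NN_SUCCESS {
        return Ok(());
    }
    // Assumed signature: nn_strerror(NNError) -> *const c_char.
    let msg = unsafe { CStr::from_ptr(nn_strerror(err)) };
    Err(msg.to_string_lossy().into_owned())
}

fn main() {
    // Assumed signature: nn_version() -> *const c_char ("MAJOR.MINOR.PATCH").
    let version = unsafe { CStr::from_ptr(nn_version()) };
    println!("DeepViewRT {}", version.to_string_lossy());
    // Typical usage once a context has been set up elsewhere:
    //     nn_result(unsafe { nn_context_run(ctx) })?;
}
```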