pub struct Interpreter<'a> { /* private fields */ }
A TensorFlow Lite interpreter that performs inference from a given model.
- Note: Interpreter instances are not thread-safe.
§Implementations
impl<'a> Interpreter<'a>
pub fn new(
    model: &'a Model<'a>,
    options: Option<Options>,
) -> Result<Interpreter<'a>>
Creates a new Interpreter.
§Arguments
model: The Model to be interpreted.
options: Custom configuration Options for the interpreter; default configurations are used if None.
§Examples
use tflitec::model::Model;
use tflitec::interpreter::Interpreter;
let model = Model::new("tests/add.bin")?;
let interpreter = Interpreter::new(&model, None)?;
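A sketch of passing custom Options instead of None; it assumes Options implements Default and has a public thread_count field, which may differ across crate versions.
use tflitec::model::Model;
use tflitec::interpreter::{Interpreter, Options};

let model = Model::new("tests/add.bin")?;
// Assumed configuration surface: a Default impl and a public thread_count field on Options.
let mut options = Options::default();
options.thread_count = 2;
let interpreter = Interpreter::new(&model, Some(options))?;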
§Errors
Returns error if TensorFlow Lite C fails internally.
pub fn input_tensor_count(&self) -> usize
Returns the total number of input Tensors associated with the model.
pub fn output_tensor_count(&self) -> usize
Returns the total number of output Tensors associated with the model.
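For example, building on the constructor example above, the counts can be inspected directly:
let interpreter = Interpreter::new(&model, None)?;
// The exact counts depend on the model; tests/add.bin is assumed to have at least one of each.
println!("inputs: {}", interpreter.input_tensor_count());
println!("outputs: {}", interpreter.output_tensor_count());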
pub fn invoke(&self) -> Result<()>
Invokes the interpreter to perform inference from the loaded graph.
§Errors
Returns error if TensorFlow Lite C fails to invoke.
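A sketch of the usual call order around invoke(); the input length of four f32 values is an assumption about the model in tests/add.bin and must match the actual input buffer size.
interpreter.allocate_tensors()?;                 // size and allocate all tensors first
let input: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0];  // length assumed to match input tensor 0
interpreter.copy(&input[..], 0)?;                // fill input tensor 0
interpreter.invoke()?;                           // run inference
let output = interpreter.output(0)?;             // read the result after invoke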
pub fn input(&self, index: usize) -> Result<Tensor<'_>>
Returns the input Tensor at the given index.
§Arguments
index: The index for the input Tensor.
§Errors
Returns error if Interpreter::allocate_tensors() was not called before calling this, or if the given index is not a valid input tensor index in [0, Interpreter::input_tensor_count()).
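For example, accessing input 0 only after allocation:
interpreter.allocate_tensors()?;           // must be called before input()
let input_tensor = interpreter.input(0)?;  // valid as long as 0 < input_tensor_count()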
pub fn output(&self, index: usize) -> Result<Tensor<'_>>
Returns the output Tensor at the given index.
§Arguments
index: The index for the output Tensor.
§Errors
Returns error if the given index is not a valid output tensor index in [0, Interpreter::output_tensor_count()). It may also return an error unless the output tensor has been both sized and allocated; in general, best practice is to call this after calling Interpreter::invoke().
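A short sketch of reading a result; it assumes Tensor exposes a data::<T>() accessor (check the Tensor docs for the exact reading API).
interpreter.invoke()?;                       // sizes and fills the output tensors
let output_tensor = interpreter.output(0)?;
// Assumed accessor: data::<f32>() returning a slice view over the output buffer.
let values: &[f32] = output_tensor.data::<f32>();
println!("first output value: {}", values[0]);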
pub fn resize_input(&self, index: usize, shape: Shape) -> Result<()>
Resizes the input Tensor at the given index to the specified Shape.
- Note: After resizing an input tensor, the client must explicitly call Interpreter::allocate_tensors() before attempting to access the resized tensor data or invoking the interpreter to perform inference.
§Arguments
index: The index for the input Tensor.
shape: The new Shape for the input Tensor at the given index.
§Errors
Returns error if the given index is not a valid input tensor index in [0, Interpreter::input_tensor_count()) or TensorFlow Lite C fails internally.
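A sketch of resizing input 0 to a 1×4 shape; the Shape::new(Vec<usize>) constructor and the tflitec::tensor module path are assumptions to verify against the Shape docs.
use tflitec::tensor::Shape;

// Assumed constructor: Shape::new takes the new dimensions as a Vec.
interpreter.resize_input(0, Shape::new(vec![1, 4]))?;
interpreter.allocate_tensors()?; // required before accessing the resized tensor or invoking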
pub fn allocate_tensors(&self) -> Result<()>
Allocates memory for all input Tensors and dependent tensors based on their Shapes.
- Note: This is a relatively expensive operation and should only be called after creating the interpreter and resizing any input tensors.
§Error
Returns error if TensorFlow Lite C fails to allocate memory for the input tensors.
pub fn copy<T>(&self, data: &[T], index: usize) -> Result<()>
Copies the given data to the input Tensor at the given index.
§Arguments
data: The data to be copied to the input Tensor’s data buffer.
index: The index for the input Tensor.
§Errors
Returns error if the byte count of the data does not match the buffer size of the input tensor, the given index is not a valid input tensor index in [0, Interpreter::input_tensor_count()), or TensorFlow Lite C fails internally.
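For example, copying a buffer whose byte count matches input 0; the element count of four f32 values is an assumption about the model’s input size.
// 4 * size_of::<f32>() bytes are assumed to equal the size of input tensor 0's buffer;
// a mismatch makes copy() return an error.
let data: Vec<f32> = vec![0.5f32; 4];
interpreter.copy(&data[..], 0)?;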