Struct ExecutionAccelerators

pub struct ExecutionAccelerators {
    pub gpu_execution_accelerator: Vec<Accelerator>,
    pub cpu_execution_accelerator: Vec<Accelerator>,
}

Specifies the preferred execution accelerators to be used to execute the model. Currently recognized only by the ONNX Runtime and TensorFlow backends.

For the ONNX Runtime backend, the model is deployed with the execution accelerators in priority order. Priority is determined by the order in which the accelerators are set: the provider at the front of the list has the highest priority. Overall, the priority is:

1. gpu_execution_accelerator (if the instance is on GPU)
2. CUDA Execution Provider (if the instance is on GPU)
3. cpu_execution_accelerator
4. Default CPU Execution Provider
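
As a minimal sketch of how this message might be constructed in Rust: the example below assumes the generated Accelerator message exposes name and parameters fields (mirroring Triton's model_config.proto); those field names are an assumption, not something this page documents.

use std::collections::HashMap;

// Sketch: prefer TensorRT when the instance runs on GPU. Within each
// Vec, the provider at the front has the highest priority.
let accel = ExecutionAccelerators {
    gpu_execution_accelerator: vec![Accelerator {
        name: "tensorrt".to_string(), // assumed field name
        parameters: HashMap::new(),   // ONNX Runtime: no parameters required
    }],
    cpu_execution_accelerator: Vec::new(), // fall back to the default CPU provider
};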

Fields

gpu_execution_accelerator: Vec<Accelerator>

The preferred execution provider to be used if the model instance is deployed on GPU.

For the ONNX Runtime backend, the only possible name is "tensorrt", and no parameters are required.

For the TensorFlow backend, possible values are "tensorrt", "auto_mixed_precision", and "gpu_io".

For "tensorrt", the following parameters can be specified:

"precision_mode": The precision used for optimization. Allowed values are "FP32" and "FP16". The default is "FP32".

"max_cached_engines": The maximum number of cached TensorRT engines in dynamic TensorRT ops. The default is 100.

"minimum_segment_size": The smallest model subgraph that will be considered for optimization by TensorRT. The default is 3.

"max_workspace_size_bytes": The maximum GPU memory the model may use temporarily during execution. The default is 1GB.

For "auto_mixed_precision", no parameters are required. If set, the model will try to use FP16 for better performance. This optimization cannot be combined with "tensorrt".

For "gpu_io", no parameters are required. If set, the model is executed using the TensorFlow Callable API to place input and output tensors in GPU memory when possible, which can reduce data transfer overhead if the model is used in an ensemble. However, the Callable object is created at model creation time and requests all outputs on every execution, which may hurt performance when a request does not need all outputs. This optimization takes effect only if the model instance is created with KIND_GPU.
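
For the TensorFlow backend's "tensorrt" accelerator, the parameters above are plain string key/value pairs. A hedged sketch, under the same assumption that Accelerator carries a parameters map of strings:

use std::collections::HashMap;

// Sketch: TensorRT on GPU with the documented optimization parameters
// spelled out explicitly (all values are strings).
let mut params = HashMap::new();
params.insert("precision_mode".to_string(), "FP16".to_string());
params.insert("max_cached_engines".to_string(), "100".to_string());
params.insert("minimum_segment_size".to_string(), "3".to_string());
params.insert("max_workspace_size_bytes".to_string(), (1u64 << 30).to_string()); // 1GB

let gpu_accelerator = Accelerator {
    name: "tensorrt".to_string(),
    parameters: params,
};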

cpu_execution_accelerator: Vec<Accelerator>

The preferred execution provider to be used if the model instance is deployed on CPU.

For the ONNX Runtime backend, the only possible name is "openvino", and no parameters are required.
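
A corresponding CPU-side sketch, under the same field-name assumptions as the examples above:

// Sketch: OpenVINO for CPU-deployed instances; no parameters required.
let accel = ExecutionAccelerators {
    gpu_execution_accelerator: Vec::new(),
    cpu_execution_accelerator: vec![Accelerator {
        name: "openvino".to_string(),
        parameters: Default::default(),
    }],
};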

Trait Implementations

impl Clone for ExecutionAccelerators

fn clone(&self) -> ExecutionAccelerators

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for ExecutionAccelerators

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for ExecutionAccelerators

fn default() -> Self

Returns the "default value" for a type.

impl Message for ExecutionAccelerators

fn encoded_len(&self) -> usize

Returns the encoded length of the message without a length delimiter.

fn clear(&mut self)

Clears the message, resetting all fields to their default.

fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError>
where B: BufMut, Self: Sized,

Encodes the message to a buffer.

fn encode_to_vec(&self) -> Vec<u8>
where Self: Sized,

Encodes the message to a newly allocated buffer.

fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError>
where B: BufMut, Self: Sized,

Encodes the message with a length-delimiter to a buffer.

fn encode_length_delimited_to_vec(&self) -> Vec<u8>
where Self: Sized,

Encodes the message with a length-delimiter to a newly allocated buffer.

fn decode<B>(buf: B) -> Result<Self, DecodeError>
where B: Buf, Self: Default,

Decodes an instance of the message from a buffer.

fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError>
where B: Buf, Self: Default,

Decodes a length-delimited instance of the message from the buffer.

fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError>
where B: Buf, Self: Sized,

Decodes an instance of the message from a buffer, and merges it into self.

fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError>
where B: Buf, Self: Sized,

Decodes a length-delimited instance of the message from buffer, and merges it into self.

impl PartialEq for ExecutionAccelerators

fn eq(&self, other: &ExecutionAccelerators) -> bool

Tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for ExecutionAccelerators
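
Because ExecutionAccelerators implements prost's Message trait, it can be serialized and parsed like any other protobuf message. A minimal round-trip sketch using the Message methods listed above, with the derived PartialEq for the final comparison:

use prost::Message;

// Encode to a freshly allocated buffer, then decode it back.
// decode requires Self: Default, which this struct provides.
let original = ExecutionAccelerators::default();
let bytes: Vec<u8> = original.encode_to_vec();
let decoded = ExecutionAccelerators::decode(bytes.as_slice())
    .expect("a freshly encoded message should decode cleanly");
assert_eq!(original, decoded);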

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬 This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>

Wraps the input message T in a tonic::Request.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.