Enum onnxruntime::download::vision::image_classification::ImageClassification
Image classification model
This collection of models takes images as input, then classifies the major objects in the images into 1000 object categories such as keyboard, mouse, pencil, and many animals.
Source: https://github.com/onnx/models#image-classification-
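Each variant below names a downloadable model and documents its source page. As a self-contained sketch of that variant-to-source mapping — using a trimmed stand-in enum and a hypothetical `source_url` helper, neither of which is the crate's real API — the relationship is a simple `match` over variants:

```rust
// Illustrative stand-in for a few variants of the crate's enum; the real
// `ImageClassification` has many more variants and no `source_url` method.
#[derive(Clone, Copy, Debug)]
enum ImageClassification {
    MobileNet,
    SqueezeNet,
    ShuffleNet,
}

impl ImageClassification {
    // Hypothetical helper mapping a variant to its onnx/models source page,
    // using the URLs listed in this documentation.
    fn source_url(self) -> &'static str {
        match self {
            ImageClassification::MobileNet => {
                "https://github.com/onnx/models/tree/master/vision/classification/mobilenet"
            }
            ImageClassification::SqueezeNet => {
                "https://github.com/onnx/models/tree/master/vision/classification/squeezenet"
            }
            ImageClassification::ShuffleNet => {
                "https://github.com/onnx/models/tree/master/vision/classification/shufflenet"
            }
        }
    }
}

fn main() {
    let model = ImageClassification::SqueezeNet;
    println!("{:?} -> {}", model, model.source_url());
}
```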
Variants
MobileNet
Image classification aimed for mobile targets.
MobileNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on ImageNet dataset which contains images from 1000 classes. MobileNet models are also very efficient in terms of speed and size and hence are ideal for embedded and mobile applications.
Source: https://github.com/onnx/models/tree/master/vision/classification/mobilenet
Variant downloaded: ONNX Version 1.2.1 with Opset Version 7.
ResNet(ResNet)
Image classification, trained on ImageNet with 1000 classes.
ResNet models provide very high accuracies with affordable model sizes. They are ideal for cases when high accuracy of classification is required.
Source: https://github.com/onnx/models/tree/master/vision/classification/resnet
SqueezeNet
A small CNN with AlexNet-level accuracy on ImageNet with 50x fewer parameters.
SqueezeNet is a small CNN that achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. It requires less communication across servers during distributed training, needs less bandwidth to export a new model from the cloud to an autonomous car, and is more feasible to deploy on FPGAs and other hardware with limited memory.
Source: https://github.com/onnx/models/tree/master/vision/classification/squeezenet
Variant downloaded: SqueezeNet v1.1, ONNX Version 1.2.1 with Opset Version 7.
Vgg(Vgg)
Image classification, trained on ImageNet with 1000 classes.
VGG models provide very high accuracies but at the cost of increased model sizes. They are ideal for cases when high accuracy of classification is essential and there are limited constraints on model sizes.
Source: https://github.com/onnx/models/tree/master/vision/classification/vgg
AlexNet
Convolutional neural network for classification, which competed in the ImageNet Large Scale Visual Recognition Challenge in 2012.
Source: https://github.com/onnx/models/tree/master/vision/classification/alexnet
Variant downloaded: ONNX Version 1.4 with Opset Version 9.
GoogleNet
Convolutional neural network for classification, which competed in the ImageNet Large Scale Visual Recognition Challenge in 2014.
Source: https://github.com/onnx/models/tree/master/vision/classification/inception_and_googlenet/googlenet
Variant downloaded: ONNX Version 1.4 with Opset Version 9.
CaffeNet
A variant of AlexNet: a convolutional neural network for classification, which competed in the ImageNet Large Scale Visual Recognition Challenge in 2012.
Source: https://github.com/onnx/models/tree/master/vision/classification/caffenet
Variant downloaded: ONNX Version 1.4 with Opset Version 9.
RcnnIlsvrc13
Convolutional neural network for detection.
This model was made by transplanting the R-CNN SVM classifiers into a fc-rcnn classification layer.
Source: https://github.com/onnx/models/tree/master/vision/classification/rcnn_ilsvrc13
Variant downloaded: ONNX Version 1.4 with Opset Version 9.
DenseNet121
Convolutional neural network for classification.
Source: https://github.com/onnx/models/tree/master/vision/classification/densenet-121
Variant downloaded: ONNX Version 1.4 with Opset Version 9.
Inception(InceptionVersion)
Google's Inception
ShuffleNet(ShuffleNetVersion)
Computationally efficient CNN architecture designed specifically for mobile devices with very limited computing power.
Source: https://github.com/onnx/models/tree/master/vision/classification/shufflenet
ZFNet512
Deep convolutional networks for classification.
This model's 4th layer has 512 maps instead of the 1024 maps mentioned in the paper.
Source: https://github.com/onnx/models/tree/master/vision/classification/zfnet-512
EfficientNetLite4
Image classification model that achieves state-of-the-art accuracy.
It is designed to run on mobile CPU, GPU, and EdgeTPU devices, allowing for applications on mobile and IoT, where computational resources are limited.
Source: https://github.com/onnx/models/tree/master/vision/classification/efficientnet-lite4
Variant downloaded: ONNX Version 1.7.0 with Opset Version 11.
Trait Implementations
impl Clone for ImageClassification
    fn clone(&self) -> ImageClassification
    fn clone_from(&mut self, source: &Self)
impl Debug for ImageClassification
impl From<ImageClassification> for AvailableOnnxModel
    fn from(model: ImageClassification) -> Self
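The `From<ImageClassification> for AvailableOnnxModel` impl above is what lets a variant be passed anywhere an `AvailableOnnxModel` is expected; the blanket `Into` listed under Blanket Implementations then comes for free. A minimal self-contained sketch of that pattern, using stand-in types rather than the crate's own definitions:

```rust
// Stand-ins for the crate's types, to illustrate the From -> Into pattern;
// the real AvailableOnnxModel enum is shaped differently.
#[derive(Debug, PartialEq)]
enum ImageClassification {
    SqueezeNet,
}

#[derive(Debug, PartialEq)]
enum AvailableOnnxModel {
    ImageClassification(ImageClassification),
}

impl From<ImageClassification> for AvailableOnnxModel {
    fn from(model: ImageClassification) -> Self {
        AvailableOnnxModel::ImageClassification(model)
    }
}

fn main() {
    // Because `From` is implemented, the blanket
    // `impl<T, U> Into<U> for T where U: From<T>` lets callers write `.into()`:
    let model: AvailableOnnxModel = ImageClassification::SqueezeNet.into();
    println!("{:?}", model);
}
```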
Auto Trait Implementations
impl RefUnwindSafe for ImageClassification
impl Send for ImageClassification
impl Sync for ImageClassification
impl Unpin for ImageClassification
impl UnwindSafe for ImageClassification
Blanket Implementations
impl<T> Any for T where
    T: 'static + ?Sized,
impl<T> Borrow<T> for T where
    T: ?Sized,
impl<T> BorrowMut<T> for T where
    T: ?Sized,
    fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>
impl<T, U> Into<U> for T where
    U: From<T>,
impl<T> ToOwned for T where
    T: Clone,
    type Owned = T
        The resulting type after obtaining ownership.
    fn to_owned(&self) -> T
    fn clone_into(&self, target: &mut T)
impl<T, U> TryFrom<U> for T where
    U: Into<T>,
    type Error = Infallible
        The type returned in the event of a conversion error.
    fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
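Because this blanket `TryFrom` is derived from an infallible `Into`, its `Error` type is `Infallible`: the conversion can never fail. The same standard-library blanket applies to any `From` pair, for example `i64: From<i32>`:

```rust
use std::convert::TryFrom;

fn main() {
    // i64 implements From<i32>, so the blanket impl provides
    // TryFrom<i32> for i64 with Error = Infallible; this unwrap cannot panic.
    let v = i64::try_from(5i32).unwrap();
    println!("{}", v);
}
```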
impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,