shrink_caffe_model_def

Function shrink_caffe_model_def 

Source
pub fn shrink_caffe_model_def(src: &str, dst: &str) -> Result<()>
Expand description

Convert all weights of a Caffe network to half-precision floating point.

§Parameters

  • src: Path to the original model from the Caffe framework containing single-precision floating-point weights (usually has a .caffemodel extension).
  • dst: Path to destination model with updated weights.
  • layersTypes: Set of layer types whose parameters will be converted. By default, only the weights of Convolutional and Fully-Connected layers are converted.

Note: The shrunken model retains no original float32 weights, so it can no longer be used with the original Caffe framework. However, the data layout is taken from NVidia’s Caffe fork: https://github.com/NVIDIA/caffe, so the resulting model may be used there.

§Note

This alternative version of the shrink_caffe_model function uses the following default values for its arguments:

  • layers_types: std::vector()
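§Example

A minimal usage sketch, assuming the opencv crate with the dnn module is available; the model paths below are hypothetical placeholders.

use opencv::dnn;
use opencv::Result;

fn main() -> Result<()> {
    // Convert the float32 Caffe weights to half precision (fp16).
    // With the default layer types, only Convolutional and Fully-Connected
    // layers' weights are converted.
    dnn::shrink_caffe_model_def(
        "models/bvlc_googlenet.caffemodel",      // hypothetical source model
        "models/bvlc_googlenet_fp16.caffemodel", // destination for fp16 weights
    )?;
    Ok(())
}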