/// Mobile-optimized quantization with reduced memory usage.
pub fn quantize_mobile_optimized(
    input: &[f32],
    scale: f32,
    zero_point: i32,
    output: &mut [i8],
    use_reduced_precision: bool,
) -> Result<()>
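A possible body for this signature is sketched below, assuming the standard affine scheme `q = round(x / scale) + zero_point` clamped to the `i8` range. The interpretation of `use_reduced_precision` is an assumption here: it is taken to mean truncation instead of rounding, saving one operation per element at a small accuracy cost. The `Result<String>` error type is also a stand-in for whatever error type the surrounding crate defines.

```rust
/// Sketch of an affine quantizer matching the signature above (assumptions:
/// `use_reduced_precision` = truncate instead of round; `String` error type
/// stands in for the crate's real error type).
fn quantize_mobile_optimized(
    input: &[f32],
    scale: f32,
    zero_point: i32,
    output: &mut [i8],
    use_reduced_precision: bool,
) -> Result<(), String> {
    if input.len() != output.len() {
        return Err("input and output length mismatch".into());
    }
    if scale <= 0.0 {
        return Err("scale must be positive".into());
    }
    // Precompute the reciprocal: a multiply per element is cheaper than a
    // divide on typical mobile CPUs.
    let inv_scale = 1.0 / scale;
    for (x, q) in input.iter().zip(output.iter_mut()) {
        let v = if use_reduced_precision {
            // Truncation toward zero: one fewer op per element than rounding.
            (x * inv_scale) as i32 + zero_point
        } else {
            (x * inv_scale).round() as i32 + zero_point
        };
        // Saturate to the i8 range rather than wrapping.
        *q = v.clamp(i8::MIN as i32, i8::MAX as i32) as i8;
    }
    Ok(())
}

fn main() {
    let input = [0.0f32, 0.5, -0.5, 10.0];
    let mut output = [0i8; 4];
    quantize_mobile_optimized(&input, 0.1, 0, &mut output, false).unwrap();
    println!("{:?}", output);
}
```

Note that the in-place `&mut [i8]` output buffer avoids allocating per call, which fits the "reduced memory usage" goal: the caller can reuse one buffer across layers.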