pub struct AVCapturePhotoSettings { /* private fields */ }
Available on crate feature AVCapturePhotoOutput only.
A mutable settings object encapsulating all the desired properties of a photo capture.
To take a picture, a client instantiates and configures an AVCapturePhotoSettings object, then calls AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate:, passing the settings and a delegate to be informed when events relating to the photo capture occur. Since AVCapturePhotoSettings has no reference to the AVCapturePhotoOutput instance with which it will be used, minimal validation occurs while you configure an AVCapturePhotoSettings instance. The bulk of the validation is executed when you call AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate:.
See also Apple’s documentation
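A minimal sketch of that flow in Rust, assuming output is an AVCapturePhotoOutput already attached to a running capture session and delegate is an object you have defined (for example with objc2's define_class! macro) that implements the AVCapturePhotoCaptureDelegate protocol; the method name capturePhotoWithSettings_delegate follows this crate's usual selector-to-Rust naming and is not documented on this page.
use objc2::rc::Retained;
use objc2::runtime::ProtocolObject;
use objc2_av_foundation::{
    AVCapturePhotoCaptureDelegate, AVCapturePhotoOutput, AVCapturePhotoSettings,
};

fn capture_photo(
    output: &AVCapturePhotoOutput,
    delegate: &ProtocolObject<dyn AVCapturePhotoCaptureDelegate>,
) {
    unsafe {
        // Default settings: JPEG codec, JPEG file type, balanced quality.
        let settings: Retained<AVCapturePhotoSettings> = AVCapturePhotoSettings::photoSettings();
        // Most validation happens here rather than while configuring the settings.
        output.capturePhotoWithSettings_delegate(&settings, delegate);
    }
}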
Implementations§
impl AVCapturePhotoSettings
pub unsafe fn photoSettings() -> Retained<Self>
Creates a default instance of AVCapturePhotoSettings.
Returns: An instance of AVCapturePhotoSettings.
A default AVCapturePhotoSettings object has a format of AVVideoCodecTypeJPEG, a fileType of AVFileTypeJPEG, and photoQualityPrioritization set to AVCapturePhotoQualityPrioritizationBalanced.
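A small sketch (the bindings above are unsafe, so the calls are wrapped accordingly):
use objc2_av_foundation::AVCapturePhotoSettings;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Every instance receives a process-unique identifier; use it to correlate
// delegate callbacks with the request that produced them.
let id = unsafe { settings.uniqueID() };
println!("created default photo settings with uniqueID {id}");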
pub unsafe fn photoSettingsWithFormat(
    format: Option<&NSDictionary<NSString, AnyObject>>,
) -> Retained<Self>
Creates an instance of AVCapturePhotoSettings with a user-specified output format.
Parameter format: A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property.
Returns: An instance of AVCapturePhotoSettings.
If you wish an uncompressed format, your dictionary must contain kCVPixelBufferPixelFormatTypeKey, and the format specified must be present in AVCapturePhotoOutput’s -availablePhotoPixelFormatTypes array. kCVPixelBufferPixelFormatTypeKey is the only supported key when expressing uncompressed output. If you wish a compressed format, your dictionary must contain AVVideoCodecKey and the codec specified must be present in AVCapturePhotoOutput’s -availablePhotoCodecTypes array. If you are specifying a compressed format, the AVVideoCompressionPropertiesKey is also supported, with a payload dictionary containing a single AVVideoQualityKey. Passing a nil format dictionary is analogous to calling +photoSettings.
§Safety
format generic should be of the correct type.
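As a sketch, passing None behaves like photoSettings above; a real format dictionary would instead carry an AVVideoCodecKey entry (compressed) or a kCVPixelBufferPixelFormatTypeKey entry (uncompressed) whose values are validated against the photo output's available types at capture time.
use objc2_av_foundation::AVCapturePhotoSettings;

// Passing `None` is analogous to calling `photoSettings()`. To request a
// specific format, build an NSDictionary<NSString, AnyObject> that satisfies
// the rules described above and pass `Some(&format)` instead.
let settings = unsafe { AVCapturePhotoSettings::photoSettingsWithFormat(None) };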
pub unsafe fn photoSettingsWithRawPixelFormatType(
    raw_pixel_format_type: u32,
) -> Retained<Self>
Creates an instance of AVCapturePhotoSettings specifying RAW only output.
Parameter rawPixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).
Returns: An instance of AVCapturePhotoSettings.
rawPixelFormatType must be one of the OSTypes contained in AVCapturePhotoOutput’s -availableRawPhotoPixelFormatTypes array. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.
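A hedged sketch that picks the first RAW pixel format advertised by the output rather than hard-coding an OSType; it assumes the availableRawPhotoPixelFormatTypes binding on AVCapturePhotoOutput mirrors the Objective-C property of the same name and that output is already configured in your session.
use objc2::rc::Retained;
use objc2_av_foundation::{AVCapturePhotoOutput, AVCapturePhotoSettings};

fn raw_photo_settings(output: &AVCapturePhotoOutput) -> Option<Retained<AVCapturePhotoSettings>> {
    unsafe {
        // Only OSTypes listed by the output are valid for this initializer.
        let raw_type = output.availableRawPhotoPixelFormatTypes().firstObject()?.as_u32();
        Some(AVCapturePhotoSettings::photoSettingsWithRawPixelFormatType(raw_type))
    }
}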
pub unsafe fn photoSettingsWithRawPixelFormatType_processedFormat(
    raw_pixel_format_type: u32,
    processed_format: Option<&NSDictionary<NSString, AnyObject>>,
) -> Retained<Self>
Creates an instance of AVCapturePhotoSettings specifying RAW + a processed format (such as JPEG).
Parameter rawPixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).
Parameter processedFormat: A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property.
Returns: An instance of AVCapturePhotoSettings.
rawPixelFormatType must be one of the OSTypes contained in AVCapturePhotoOutput’s -availableRawPhotoPixelFormatTypes array. If you wish an uncompressed processedFormat, your dictionary must contain kCVPixelBufferPixelFormatTypeKey, and the processedFormat specified must be present in AVCapturePhotoOutput’s -availablePhotoPixelFormatTypes array. kCVPixelBufferPixelFormatTypeKey is the only supported key when expressing uncompressed processedFormat. If you wish a compressed format, your dictionary must contain AVVideoCodecKey and the codec specified must be present in AVCapturePhotoOutput’s -availablePhotoCodecTypes array. If you are specifying a compressed format, the AVVideoCompressionPropertiesKey is also supported, with a payload dictionary containing a single AVVideoQualityKey. Passing a nil processedFormat dictionary is analogous to calling +photoSettingsWithRawPixelFormatType:. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.
§Safety
processed_format generic should be of the correct type.
pub unsafe fn photoSettingsWithRawPixelFormatType_rawFileType_processedFormat_processedFileType(
    raw_pixel_format_type: u32,
    raw_file_type: Option<&AVFileType>,
    processed_format: Option<&NSDictionary<NSString, AnyObject>>,
    processed_file_type: Option<&AVFileType>,
) -> Retained<Self>
Available on crate feature AVMediaFormat only.
Creates an instance of AVCapturePhotoSettings specifying RAW + a processed format (such as JPEG) and a file container to which it will be written.
Parameter rawPixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h). Pass 0 if you do not desire a RAW photo callback.
Parameter rawFileType: The file container for which the RAW image should be formatted to be written. Pass nil if you have no preferred file container. A default container will be chosen for you.
Parameter processedFormat: A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property. Pass nil if you do not desire a processed photo callback.
Parameter processedFileType: The file container for which the processed image should be formatted to be written. Pass nil if you have no preferred file container. A default container will be chosen for you.
Returns: An instance of AVCapturePhotoSettings.
rawPixelFormatType must be one of the OSTypes contained in AVCapturePhotoOutput’s -availableRawPhotoPixelFormatTypes array. Set rawPixelFormatType to 0 if you do not desire a RAW photo callback. If you are specifying a rawFileType, it must be present in AVCapturePhotoOutput’s -availableRawPhotoFileTypes array. If you wish an uncompressed processedFormat, your dictionary must contain kCVPixelBufferPixelFormatTypeKey, and the processedFormat specified must be present in AVCapturePhotoOutput’s -availablePhotoPixelFormatTypes array. kCVPixelBufferPixelFormatTypeKey is the only supported key when expressing uncompressed processedFormat. If you wish a compressed format, your dictionary must contain AVVideoCodecKey and the codec specified must be present in AVCapturePhotoOutput’s -availablePhotoCodecTypes array. If you are specifying a compressed format, the AVVideoCompressionPropertiesKey is also supported, with a payload dictionary containing a single AVVideoQualityKey. If you are specifying a processedFileType (such as AVFileTypeJPEG or AVFileTypeHEIC), it must be present in AVCapturePhotoOutput’s -availablePhotoFileTypes array. Pass a nil processedFormat dictionary if you only desire a RAW photo capture. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.
§Safety
processed_format generic should be of the correct type.
pub unsafe fn photoSettingsFromPhotoSettings(
    photo_settings: &AVCapturePhotoSettings,
) -> Retained<Self>
Creates an instance of AVCapturePhotoSettings with a new uniqueID from an existing instance of AVCapturePhotoSettings.
Parameter photoSettings: An existing AVCapturePhotoSettings instance.
Returns: A new instance of AVCapturePhotoSettings with a new uniqueID.
Use this factory method to create a clone of an existing photo settings instance, but with a new uniqueID that can safely be passed to AVCapturePhotoOutput -capturePhotoWithSettings:delegate:.
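For example, cloning a configured instance so the same options can be submitted for a second capture request:
use objc2_av_foundation::AVCapturePhotoSettings;

let original = unsafe { AVCapturePhotoSettings::photoSettings() };
let clone = unsafe { AVCapturePhotoSettings::photoSettingsFromPhotoSettings(&original) };
// Same configuration, but a distinct identity for the next capture request.
assert_ne!(unsafe { original.uniqueID() }, unsafe { clone.uniqueID() });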
pub unsafe fn uniqueID(&self) -> i64
A 64-bit number that uniquely identifies this instance.
When you create an instance of AVCapturePhotoSettings, a uniqueID is generated automatically. This uniqueID is guaranteed to be unique for the lifetime of your process.
pub unsafe fn format(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property.
The format dictionary you passed to one of the creation methods. May be nil if you’ve specified RAW-only capture.
pub unsafe fn rawFileFormat(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
A dictionary of AVVideoSettings keys specifying the RAW file format to be used for the RAW photo.
One can specify desired format properties of the RAW file that will be created. Currently only the key AVVideoAppleProRAWBitDepthKey is allowed, and the value to which it can be set should be from 8 to 16. The AVVideoCodecKey must be present in the receiver’s -availableRawPhotoCodecTypes array as well as in -supportedRawPhotoCodecTypesForRawPhotoPixelFormatType:fileType:. AVVideoQualityKey (NSNumber in range [0.0,1.0]) can optionally be set: a value less than 1.0 uses lossy compression, with lower values being more lossy and resulting in smaller file sizes but lower image quality, while a value of 1.0 uses lossless compression, resulting in the largest file size but also the best quality.
pub unsafe fn setRawFileFormat(
    &self,
    raw_file_format: Option<&NSDictionary<NSString, AnyObject>>,
)
Setter for rawFileFormat.
This is copied when set.
§Safety
raw_file_format generic should be of the correct type.
pub unsafe fn processedFileType(&self) -> Option<Retained<AVFileType>>
Available on crate feature AVMediaFormat only.
The file container for which the processed photo is formatted to be stored.
The formatting of data within a photo buffer is often dependent on the file format intended for storage. For instance, a JPEG encoded photo buffer intended for storage in a JPEG (JPEG File Interchange Format) file differs from JPEG to be stored in HEIF. The HEIF-containerized JPEG buffer is tiled for readback efficiency and partitioned into the box structure dictated by the HEIF file format. Some codecs are only supported by AVCapturePhotoOutput if containerized. For instance, the AVVideoCodecTypeHEVC is only supported with AVFileTypeHEIF and AVFileTypeHEIC formatting. To discover which photo pixel format types and video codecs are supported for a given file type, you may query AVCapturePhotoOutput’s -supportedPhotoPixelFormatTypesForFileType:, or -supportedPhotoCodecTypesForFileType: respectively.
pub unsafe fn rawPhotoPixelFormatType(&self) -> u32
A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).
The rawPixelFormatType you specified in one of the creation methods. Returns 0 if you did not specify RAW capture. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.
pub unsafe fn rawFileType(&self) -> Option<Retained<AVFileType>>
Available on crate feature AVMediaFormat only.
The file container for which the RAW photo is formatted to be stored.
The formatting of data within a RAW photo buffer may be dependent on the file format intended for storage. To discover which RAW photo pixel format types are supported for a given file type, you may query AVCapturePhotoOutput’s -supportedRawPhotoPixelFormatTypesForFileType:.
pub unsafe fn flashMode(&self) -> AVCaptureFlashMode
Available on crate feature AVCaptureDevice only.
Specifies whether the flash should be on, off, or chosen automatically by AVCapturePhotoOutput.
flashMode takes the place of the deprecated AVCaptureDevice -flashMode API. Setting AVCaptureDevice.flashMode has no effect on AVCapturePhotoOutput, which only pays attention to the flashMode specified in your AVCapturePhotoSettings. The default value is AVCaptureFlashModeOff. Flash modes are defined in AVCaptureDevice.h. If you specify a flashMode of AVCaptureFlashModeOn, it wins over autoStillImageStabilizationEnabled=YES. When the device becomes very hot, the flash becomes temporarily unavailable until the device cools down (see AVCaptureDevice’s -flashAvailable). While the flash is unavailable, AVCapturePhotoOutput’s -supportedFlashModes property still reports AVCaptureFlashModeOn and AVCaptureFlashModeAuto as being available, thus allowing you to specify a flashMode of AVCaptureFlashModeOn. You should always check the AVCaptureResolvedPhotoSettings provided to you in the AVCapturePhotoCaptureDelegate callbacks, as the resolved flashEnabled property will tell you definitively if the flash is being used.
pub unsafe fn setFlashMode(&self, flash_mode: AVCaptureFlashMode)
Available on crate feature AVCaptureDevice only.
Setter for flashMode.
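A sketch (with the AVCaptureDevice crate feature enabled); the variant name AVCaptureFlashMode::Auto is assumed to follow this crate's usual prefix-stripped mapping of AVCaptureFlashModeAuto.
use objc2_av_foundation::{AVCaptureFlashMode, AVCapturePhotoSettings};

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// The flash decision is resolved per request; inspect the resolved settings in
// your delegate callbacks to learn whether the flash actually fired.
unsafe { settings.setFlashMode(AVCaptureFlashMode::Auto) };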
pub unsafe fn isAutoRedEyeReductionEnabled(&self) -> bool
Specifies whether red-eye reduction should be applied automatically on flash captures.
Default is YES on platforms that support automatic red-eye reduction unless you are capturing a bracket using AVCapturePhotoBracketSettings or a RAW photo without a processed photo. For RAW photos with a processed photo the red-eye reduction will be applied to the processed photo only (RAW photos by definition are not processed). When set to YES, red-eye reduction is applied as needed for flash captures if the photo output’s autoRedEyeReductionSupported property returns YES.
pub unsafe fn setAutoRedEyeReductionEnabled(
    &self,
    auto_red_eye_reduction_enabled: bool,
)
Setter for isAutoRedEyeReductionEnabled.
pub unsafe fn photoQualityPrioritization(
    &self,
) -> AVCapturePhotoQualityPrioritization
Indicates how photo quality should be prioritized against speed of photo delivery.
Default value is AVCapturePhotoQualityPrioritizationBalanced. The AVCapturePhotoOutput is capable of applying a variety of techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc), depending on the source device’s activeFormat. Some of these techniques can take significant processing time before the photo is returned to your delegate callback. The photoQualityPrioritization property allows you to specify your preferred quality vs speed of delivery. By default, speed and quality are considered to be of equal importance. When you specify AVCapturePhotoQualityPrioritizationSpeed, you indicate that speed should be prioritized at the expense of quality. Likewise, when you choose AVCapturePhotoQualityPrioritizationQuality, you signal your willingness to prioritize the very best quality at the expense of speed, and your readiness to wait (perhaps significantly) longer for the photo to be returned to your delegate.
pub unsafe fn setPhotoQualityPrioritization(
    &self,
    photo_quality_prioritization: AVCapturePhotoQualityPrioritization,
)
Setter for photoQualityPrioritization.
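A sketch; the variant name AVCapturePhotoQualityPrioritization::Quality is assumed to be the prefix-stripped mapping of AVCapturePhotoQualityPrioritizationQuality.
use objc2_av_foundation::{AVCapturePhotoQualityPrioritization, AVCapturePhotoSettings};

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Trade capture latency for the best image quality the output can produce.
unsafe { settings.setPhotoQualityPrioritization(AVCapturePhotoQualityPrioritization::Quality) };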
pub unsafe fn isAutoStillImageStabilizationEnabled(&self) -> bool
👎Deprecated
Specifies whether still image stabilization should be used automatically.
Default is YES unless you are capturing a Bayer RAW photo (Bayer RAW photos may not be processed by definition) or a bracket using AVCapturePhotoBracketSettings. When set to YES, still image stabilization is applied automatically in low light to counteract hand shake. If the device has optical image stabilization, autoStillImageStabilizationEnabled makes use of lens stabilization as well.
As of iOS 13 hardware, the AVCapturePhotoOutput is capable of applying a variety of multi-image fusion techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc), all of which have been previously lumped under the stillImageStabilization moniker. This property should no longer be used as it no longer provides meaningful information about the techniques used to improve quality in a photo capture. Instead, you should use -photoQualityPrioritization to indicate your preferred quality vs speed.
pub unsafe fn setAutoStillImageStabilizationEnabled(
    &self,
    auto_still_image_stabilization_enabled: bool,
)
👎Deprecated
Setter for isAutoStillImageStabilizationEnabled.
pub unsafe fn isAutoVirtualDeviceFusionEnabled(&self) -> bool
Specifies whether virtual device image fusion should be used automatically.
Default is YES unless you are capturing a RAW photo (RAW photos may not be processed by definition) or a bracket using AVCapturePhotoBracketSettings. When set to YES, and -[AVCapturePhotoOutput isVirtualDeviceFusionSupported] is also YES, constituent camera images of a virtual device may be fused to improve still image quality, depending on the current zoom factor, light levels, and focus position. You may determine whether virtual device fusion is enabled for a particular capture request by inspecting the virtualDeviceFusionEnabled property of the AVCaptureResolvedPhotoSettings. Note that when using the deprecated AVCaptureStillImageOutput interface with a virtual device, autoVirtualDeviceFusionEnabled fusion is always enabled if supported, and may not be turned off.
pub unsafe fn setAutoVirtualDeviceFusionEnabled(
    &self,
    auto_virtual_device_fusion_enabled: bool,
)
Setter for isAutoVirtualDeviceFusionEnabled.
pub unsafe fn isAutoDualCameraFusionEnabled(&self) -> bool
👎Deprecated
Specifies whether DualCamera image fusion should be used automatically.
Default is YES unless you are capturing a RAW photo (RAW photos may not be processed by definition) or a bracket using AVCapturePhotoBracketSettings. When set to YES, and -[AVCapturePhotoOutput isDualCameraFusionSupported] is also YES, wide-angle and telephoto images may be fused to improve still image quality, depending on the current zoom factor, light levels, and focus position. You may determine whether DualCamera fusion is enabled for a particular capture request by inspecting the dualCameraFusionEnabled property of the AVCaptureResolvedPhotoSettings. Note that when using the deprecated AVCaptureStillImageOutput interface with the DualCamera, auto DualCamera fusion is always enabled and may not be turned off. As of iOS 13, this property is deprecated in favor of autoVirtualDeviceFusionEnabled.
pub unsafe fn setAutoDualCameraFusionEnabled(
    &self,
    auto_dual_camera_fusion_enabled: bool,
)
👎Deprecated
Setter for isAutoDualCameraFusionEnabled.
pub unsafe fn virtualDeviceConstituentPhotoDeliveryEnabledDevices(
    &self,
) -> Retained<NSArray<AVCaptureDevice>>
Available on crate feature AVCaptureDevice only.
Specifies the constituent devices for which the virtual device should deliver photos.
Default is empty array. To opt in for constituent device photo delivery, you may set this property to any subset of 2 or more of the devices in virtualDevice.constituentDevices. Your captureOutput:didFinishProcessingPhoto:error: callback will be called n times – one for each of the devices you include in the array. You may only set this property to a non-nil array if you’ve set your AVCapturePhotoOutput’s virtualDeviceConstituentPhotoDeliveryEnabled property to YES, and your delegate responds to the captureOutput:didFinishProcessingPhoto:error: selector.
pub unsafe fn setVirtualDeviceConstituentPhotoDeliveryEnabledDevices(
    &self,
    virtual_device_constituent_photo_delivery_enabled_devices: &NSArray<AVCaptureDevice>,
)
Available on crate feature AVCaptureDevice only.
Setter for virtualDeviceConstituentPhotoDeliveryEnabledDevices.
This is copied when set.
pub unsafe fn isDualCameraDualPhotoDeliveryEnabled(&self) -> bool
👎Deprecated
Specifies whether the DualCamera should return both the telephoto and wide image.
Default is NO. When set to YES, your captureOutput:didFinishProcessingPhoto:error: callback will receive twice the number of callbacks, as both the telephoto image(s) and wide-angle image(s) are delivered. You may only set this property to YES if you’ve set your AVCapturePhotoOutput’s dualCameraDualPhotoDeliveryEnabled property to YES, and your delegate responds to the captureOutput:didFinishProcessingPhoto:error: selector. As of iOS 13, this property is deprecated in favor of virtualDeviceConstituentPhotoDeliveryEnabledDevices.
pub unsafe fn setDualCameraDualPhotoDeliveryEnabled(
    &self,
    dual_camera_dual_photo_delivery_enabled: bool,
)
👎Deprecated
Setter for isDualCameraDualPhotoDeliveryEnabled.
pub unsafe fn isHighResolutionPhotoEnabled(&self) -> bool
👎Deprecated: Use maxPhotoDimensions instead.
Specifies whether photos should be captured at the highest resolution supported by the source AVCaptureDevice’s activeFormat.
Default is NO. By default, AVCapturePhotoOutput emits images with the same dimensions as its source AVCaptureDevice’s activeFormat.formatDescription. However, if you set this property to YES, the AVCapturePhotoOutput emits images at its source AVCaptureDevice’s activeFormat.highResolutionStillImageDimensions. Note that if you enable video stabilization (see AVCaptureConnection’s preferredVideoStabilizationMode) for any output, the high resolution photos emitted by AVCapturePhotoOutput may be smaller by 10 or more percent. You may inspect your AVCaptureResolvedPhotoSettings in the delegate callbacks to discover the exact dimensions of the capture photo(s).
Starting in iOS 14.5, if you disable geometric distortion correction, the high resolution photo emitted by AVCapturePhotoOutput may be smaller depending on the format.
pub unsafe fn setHighResolutionPhotoEnabled(
    &self,
    high_resolution_photo_enabled: bool,
)
👎Deprecated: Use maxPhotoDimensions instead.
Setter for isHighResolutionPhotoEnabled.
pub unsafe fn maxPhotoDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
Indicates the maximum resolution photo that will be captured.
By setting this property you are requesting an image that may be up to as large as the specified dimensions, but no larger. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the currently configured format and be equal to or smaller than the value of AVCapturePhotoOutput.maxPhotoDimensions. This property defaults to the smallest dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions.
pub unsafe fn setMaxPhotoDimensions(
    &self,
    max_photo_dimensions: CMVideoDimensions,
)
Available on crate feature objc2-core-media only.
Setter for maxPhotoDimensions.
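A sketch, assuming CMVideoDimensions from objc2-core-media is the plain width/height struct from Core Media; the 4032x3024 value is only illustrative and in real code should come from AVCaptureDeviceFormat.supportedMaxPhotoDimensions.
use objc2_av_foundation::AVCapturePhotoSettings;
use objc2_core_media::CMVideoDimensions;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Must match one of the active format's supportedMaxPhotoDimensions and be no
// larger than the photo output's maxPhotoDimensions.
let dims = CMVideoDimensions { width: 4032, height: 3024 };
unsafe { settings.setMaxPhotoDimensions(dims) };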
pub unsafe fn isDepthDataDeliveryEnabled(&self) -> bool
Specifies whether AVDepthData should be captured along with the photo.
Default is NO. Set to YES if you wish to receive depth data with your photo. Throws an exception if -[AVCapturePhotoOutput depthDataDeliveryEnabled] is not set to YES or your delegate does not respond to the captureOutput:didFinishProcessingPhoto:error: selector. Note that setting this property to YES may add significant processing time to the delivery of your didFinishProcessingPhoto: callback.
For best rendering results in Apple’s Photos.app, portrait photos should be captured with both embedded depth data and a portrait effects matte (see portraitEffectsMatteDeliveryEnabled). When supported, it is recommended to opt in for both of these auxiliary images in your photo captures involving depth.
pub unsafe fn setDepthDataDeliveryEnabled(
    &self,
    depth_data_delivery_enabled: bool,
)
Setter for isDepthDataDeliveryEnabled.
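A sketch combining the depth opt-in with the portrait effects matte opt-in recommended above; both require the corresponding delivery flags to already be enabled on the AVCapturePhotoOutput, otherwise an exception is thrown at capture time.
use objc2_av_foundation::AVCapturePhotoSettings;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
unsafe {
    // Requires AVCapturePhotoOutput.depthDataDeliveryEnabled == YES.
    settings.setDepthDataDeliveryEnabled(true);
    // Requires AVCapturePhotoOutput.portraitEffectsMatteDeliveryEnabled == YES
    // and depth delivery enabled on these settings.
    settings.setPortraitEffectsMatteDeliveryEnabled(true);
}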
pub unsafe fn embedsDepthDataInPhoto(&self) -> bool
Specifies whether depth data included with this photo should be written to the photo’s file structure.
Default is YES. When depthDataDeliveryEnabled is set to YES, this property specifies whether the included depth data should be written to the resulting photo’s internal file structure. Depth data is currently only supported in HEIF and JPEG. This property is ignored if depthDataDeliveryEnabled is set to NO.
pub unsafe fn setEmbedsDepthDataInPhoto(&self, embeds_depth_data_in_photo: bool)
Setter for embedsDepthDataInPhoto.
pub unsafe fn isDepthDataFiltered(&self) -> bool
Specifies whether the depth data delivered with the photo should be filtered to fill invalid values.
Default is YES. This property is ignored unless depthDataDeliveryEnabled is set to YES. Depth data maps may contain invalid pixel values due to a variety of factors including occlusions and low light. When depthDataFiltered is set to YES, the photo output interpolates missing data, filling in all holes.
pub unsafe fn setDepthDataFiltered(&self, depth_data_filtered: bool)
Setter for isDepthDataFiltered.
pub unsafe fn isCameraCalibrationDataDeliveryEnabled(&self) -> bool
Specifies whether AVCameraCalibrationData should be captured and delivered along with this photo.
Default is NO. Set to YES if you wish to receive camera calibration data with your photo. Camera calibration data is delivered as a property of an AVCapturePhoto, so if you are using the CMSampleBuffer delegate callbacks rather than -captureOutput:didFinishProcessingPhoto:error:, an exception is thrown. Also, you may only set this property to YES if your AVCapturePhotoOutput’s cameraCalibrationDataDeliverySupported property is YES and 2 or more devices are selected for virtual device constituent photo delivery. When requesting virtual device constituent photo delivery plus camera calibration data, the photos for each constituent device each contain camera calibration data. Note that AVCameraCalibrationData can be delivered as a property of an AVCapturePhoto or an AVDepthData, thus your delegate must respond to the captureOutput:didFinishProcessingPhoto:error: selector.
pub unsafe fn setCameraCalibrationDataDeliveryEnabled(
    &self,
    camera_calibration_data_delivery_enabled: bool,
)
Setter for isCameraCalibrationDataDeliveryEnabled.
pub unsafe fn isPortraitEffectsMatteDeliveryEnabled(&self) -> bool
Specifies whether an AVPortraitEffectsMatte should be captured along with the photo.
Default is NO. Set to YES if you wish to receive a portrait effects matte with your photo. Throws an exception if -[AVCapturePhotoOutput portraitEffectsMatteDeliveryEnabled] is not set to YES or your delegate does not respond to the captureOutput:didFinishProcessingPhoto:error: selector. Portrait effects matte generation requires depth to be present, so if you wish to enable portrait effects matte delivery, you must set depthDataDeliveryEnabled to YES. Setting this property to YES does not guarantee that a portrait effects matte will be present in the resulting AVCapturePhoto. As the property name implies, the matte is primarily used to improve the rendering quality of portrait effects on the image. If the photo’s content lacks a clear foreground subject, no portrait effects matte is generated, and the -[AVCapturePhoto portraitEffectsMatte] property returns nil. Note that setting this property to YES may add significant processing time to the delivery of your didFinishProcessingPhoto: callback.
For best rendering results in Apple’s Photos.app, portrait photos should be captured with both embedded depth data (see depthDataDeliveryEnabled) and a portrait effects matte. When supported, it is recommended to opt in for both of these auxiliary images in your photo captures involving depth.
pub unsafe fn setPortraitEffectsMatteDeliveryEnabled(
    &self,
    portrait_effects_matte_delivery_enabled: bool,
)
Setter for isPortraitEffectsMatteDeliveryEnabled.
pub unsafe fn embedsPortraitEffectsMatteInPhoto(&self) -> bool
Specifies whether the portrait effects matte captured with this photo should be written to the photo’s file structure.
Default is YES. When portraitEffectsMatteDeliveryEnabled is set to YES, this property specifies whether the included portrait effects matte should be written to the resulting photo’s internal file structure. Portrait effects mattes are currently only supported in HEIF and JPEG. This property is ignored if portraitEffectsMatteDeliveryEnabled is set to NO.
pub unsafe fn setEmbedsPortraitEffectsMatteInPhoto(
    &self,
    embeds_portrait_effects_matte_in_photo: bool,
)
Setter for embedsPortraitEffectsMatteInPhoto.
pub unsafe fn enabledSemanticSegmentationMatteTypes(
    &self,
) -> Retained<NSArray<AVSemanticSegmentationMatteType>>
Available on crate feature AVSemanticSegmentationMatte only.
Specifies which types of AVSemanticSegmentationMatte should be captured along with the photo.
Default is empty array. You may set this property to an array of AVSemanticSegmentationMatteTypes you’d like to capture. Throws an exception if -[AVCapturePhotoOutput enabledSemanticSegmentationMatteTypes] does not contain any of the AVSemanticSegmentationMatteTypes specified. In other words, when setting up a capture session, you opt in for the superset of segmentation matte types you might like to receive, and then on a shot-by-shot basis, you may opt in to all or a subset of the previously specified types by setting this property. An exception is also thrown during -[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] if your delegate does not respond to the captureOutput:didFinishProcessingPhoto:error: selector. Setting this property to a non-empty array does not guarantee that the specified mattes will be present in the resulting AVCapturePhoto. If the photo’s content lacks any persons, for instance, no hair, skin, or teeth mattes are generated, and the -[AVCapturePhoto semanticSegmentationMatteForType:] method returns nil. Note that setting this property to a non-empty array may add significant processing time to the delivery of your didFinishProcessingPhoto: callback.
pub unsafe fn setEnabledSemanticSegmentationMatteTypes(
    &self,
    enabled_semantic_segmentation_matte_types: &NSArray<AVSemanticSegmentationMatteType>,
)
Available on crate feature AVSemanticSegmentationMatte only.
Setter for enabledSemanticSegmentationMatteTypes.
This is copied when set.
pub unsafe fn embedsSemanticSegmentationMattesInPhoto(&self) -> bool
Specifies whether enabledSemanticSegmentationMatteTypes captured with this photo should be written to the photo’s file structure.
Default is YES. This property specifies whether the captured semantic segmentation mattes should be written to the resulting photo’s internal file structure. Semantic segmentation mattes are currently only supported in HEIF and JPEG. This property is ignored if enabledSemanticSegmentationMatteTypes is set to an empty array.
pub unsafe fn setEmbedsSemanticSegmentationMattesInPhoto(
    &self,
    embeds_semantic_segmentation_mattes_in_photo: bool,
)
Setter for embedsSemanticSegmentationMattesInPhoto.
pub unsafe fn metadata(&self) -> Retained<NSDictionary<NSString, AnyObject>>
A dictionary of metadata key/value pairs you’d like to have written to each photo in the capture request.
Valid metadata keys are found in <ImageIO/CGImageProperties.h>. AVCapturePhotoOutput inserts a base set of metadata into each photo it captures, such as kCGImagePropertyOrientation, kCGImagePropertyExifDictionary, and kCGImagePropertyMakerAppleDictionary. You may specify metadata keys and values that should be written to each photo in the capture request. If you’ve specified metadata that also appears in AVCapturePhotoOutput’s base set, your value replaces the base value. An NSInvalidArgumentException is thrown if you specify keys other than those found in <ImageIO/CGImageProperties.h>.
pub unsafe fn setMetadata(&self, metadata: &NSDictionary<NSString, AnyObject>)
Setter for metadata.
pub unsafe fn livePhotoMovieFileURL(&self) -> Option<Retained<NSURL>>
Specifies that a Live Photo movie be captured to complement the still photo.
A Live Photo movie is a short movie (with audio, if you’ve added an audio input to your session) containing the moments right before and after the still photo. A QuickTime movie file will be written to disk at the URL specified if it is a valid file URL accessible to your app’s sandbox. You may only set this property if AVCapturePhotoOutput’s livePhotoCaptureSupported property is YES. When you specify a Live Photo, your AVCapturePhotoCaptureDelegate object must implement -captureOutput:didFinishProcessingLivePhotoToMovieFileAtURL:duration:photoDisplayTime:resolvedSettings:error:.
pub unsafe fn setLivePhotoMovieFileURL(
    &self,
    live_photo_movie_file_url: Option<&NSURL>,
)
Setter for livePhotoMovieFileURL.
This is copied when set.
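A sketch; NSURL::fileURLWithPath is assumed to be the objc2-foundation binding for +[NSURL fileURLWithPath:], and the temporary path is purely illustrative. The photo output's livePhotoCaptureSupported must be YES before this property may be set.
use objc2_av_foundation::AVCapturePhotoSettings;
use objc2_foundation::{NSString, NSURL};

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
unsafe {
    // The URL must be a file URL that your app's sandbox can write to.
    let movie_url = NSURL::fileURLWithPath(&NSString::from_str("/tmp/live_photo.mov"));
    settings.setLivePhotoMovieFileURL(Some(&movie_url));
}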
pub unsafe fn livePhotoVideoCodecType(&self) -> Retained<AVVideoCodecType>
Available on crate feature AVVideoSettings only.
Specifies the video codec type to use when compressing video for the Live Photo movie complement.
Prior to iOS 11, all Live Photo movie video tracks are compressed using H.264. Beginning in iOS 11, you can select the Live Photo movie video compression format by specifying one of the strings present in AVCapturePhotoOutput’s availableLivePhotoVideoCodecTypes array.
pub unsafe fn setLivePhotoVideoCodecType(
    &self,
    live_photo_video_codec_type: &AVVideoCodecType,
)
Available on crate feature AVVideoSettings only.
Setter for livePhotoVideoCodecType.
This is copied when set.
pub unsafe fn livePhotoMovieMetadata(&self) -> Retained<NSArray<AVMetadataItem>>
Available on crate feature AVMetadataItem only.
Movie-level metadata to be written to the Live Photo movie.
An array of AVMetadataItems to be inserted into the top level of the Live Photo movie. The receiver makes immutable copies of the AVMetadataItems in the array. Live Photo movies always contain an AVMetadataQuickTimeMetadataKeyContentIdentifier, which allows them to be paired with a similar identifier in the MakerNote of the photo complement. AVCapturePhotoSettings generates a unique content identifier for you. If you provide a metadata array containing an AVMetadataItem with keyspace = AVMetadataKeySpaceQuickTimeMetadata and key = AVMetadataQuickTimeMetadataKeyContentIdentifier, an NSInvalidArgumentException is thrown.
pub unsafe fn setLivePhotoMovieMetadata(
    &self,
    live_photo_movie_metadata: Option<&NSArray<AVMetadataItem>>,
)
Available on crate feature AVMetadataItem only.
Setter for livePhotoMovieMetadata.
This is copied when set.
pub unsafe fn availablePreviewPhotoPixelFormatTypes(
    &self,
) -> Retained<NSArray<NSNumber>>
An array of available kCVPixelBufferPixelFormatTypeKeys that may be used when specifying a previewPhotoFormat.
The array is sorted such that the preview format requiring the fewest conversions is presented first.
pub unsafe fn previewPhotoFormat(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
A dictionary of Core Video pixel buffer attributes specifying the preview photo format to be delivered along with the RAW or processed photo.
A dictionary of pixel buffer attributes specifying a smaller version of the RAW or processed photo for preview purposes. The kCVPixelBufferPixelFormatTypeKey is required and must be present in the receiver’s -availablePreviewPhotoPixelFormatTypes array. Optional keys are { kCVPixelBufferWidthKey | kCVPixelBufferHeightKey }. If you wish to specify dimensions, you must add both width and height. Width and height are only honored up to the display dimensions. If you specify a width and height whose aspect ratio differs from the RAW or processed photo, the larger of the two dimensions is honored and aspect ratio of the RAW or processed photo is always preserved.
pub unsafe fn setPreviewPhotoFormat(
    &self,
    preview_photo_format: Option<&NSDictionary<NSString, AnyObject>>,
)
Setter for previewPhotoFormat.
This is copied when set.
§Safety
preview_photo_format generic should be of the correct type.
pub unsafe fn availableEmbeddedThumbnailPhotoCodecTypes(
    &self,
) -> Retained<NSArray<AVVideoCodecType>>
Available on crate feature AVVideoSettings only.
An array of available AVVideoCodecKeys that may be used when specifying an embeddedThumbnailPhotoFormat.
The array is sorted such that the thumbnail codec type that is most backward compatible is listed first.
pub unsafe fn embeddedThumbnailPhotoFormat(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
A dictionary of AVVideoSettings keys specifying the thumbnail format to be written to the processed or RAW photo.
A dictionary of AVVideoSettings keys specifying a thumbnail (usually smaller) version of the processed photo to be embedded in that image before calling the AVCapturePhotoCaptureDelegate. This image is sometimes referred to as a “thumbnail image”. The AVVideoCodecKey is required and must be present in the receiver’s -availableEmbeddedThumbnailPhotoCodecTypes array. Optional keys are { AVVideoWidthKey | AVVideoHeightKey }. If you wish to specify dimensions, you must specify both width and height. If you specify a width and height whose aspect ratio differs from the processed photo, the larger of the two dimensions is honored and aspect ratio of the RAW or processed photo is always preserved. For RAW captures, use -rawEmbeddedThumbnailPhotoFormat to specify the thumbnail format you’d like to capture in the RAW image. For apps linked on or after iOS 12, the raw thumbnail format must be specified using the -rawEmbeddedThumbnailPhotoFormat API rather than -embeddedThumbnailPhotoFormat. Beginning in iOS 12, HEIC files may contain thumbnails up to the full resolution of the main image.
pub unsafe fn setEmbeddedThumbnailPhotoFormat(
    &self,
    embedded_thumbnail_photo_format: Option<&NSDictionary<NSString, AnyObject>>,
)
Setter for embeddedThumbnailPhotoFormat.
This is copied when set.
§Safety
embedded_thumbnail_photo_format generic should be of the correct type.
pub unsafe fn availableRawEmbeddedThumbnailPhotoCodecTypes(
    &self,
) -> Retained<NSArray<AVVideoCodecType>>
Available on crate feature AVVideoSettings only.
An array of available AVVideoCodecKeys that may be used when specifying a rawEmbeddedThumbnailPhotoFormat.
The array is sorted such that the thumbnail codec type that is most backward compatible is listed first.
pub unsafe fn rawEmbeddedThumbnailPhotoFormat(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
A dictionary of AVVideoSettings keys specifying the thumbnail format to be written to the RAW photo in a RAW photo request.
A dictionary of AVVideoSettings keys specifying a thumbnail (usually smaller) version of the RAW photo to be embedded in that image’s DNG before calling back the AVCapturePhotoCaptureDelegate. The AVVideoCodecKey is required and must be present in the receiver’s -availableRawEmbeddedThumbnailPhotoCodecTypes array. Optional keys are { AVVideoWidthKey | AVVideoHeightKey }. If you wish to specify dimensions, you must specify both width and height. If you specify a width and height whose aspect ratio differs from the RAW or processed photo, the larger of the two dimensions is honored and aspect ratio of the RAW or processed photo is always preserved. For apps linked on or after iOS 12, the raw thumbnail format must be specified using the -rawEmbeddedThumbnailPhotoFormat API rather than -embeddedThumbnailPhotoFormat. Beginning in iOS 12, DNG files may contain thumbnails up to the full resolution of the RAW image.
pub unsafe fn setRawEmbeddedThumbnailPhotoFormat(
    &self,
    raw_embedded_thumbnail_photo_format: Option<&NSDictionary<NSString, AnyObject>>,
)
Setter for rawEmbeddedThumbnailPhotoFormat.
This is copied when set.
§Safety
raw_embedded_thumbnail_photo_format generic should be of the correct type.
pub unsafe fn isAutoContentAwareDistortionCorrectionEnabled(&self) -> bool
Specifies whether the photo output should use content aware distortion correction on this photo request (at its discretion).
Default is NO. Set to YES if you wish content aware distortion correction to be performed on your AVCapturePhotos, when the photo output deems it necessary. Photos may or may not benefit from distortion correction. For instance, photos lacking faces may be left as is. Setting this property to YES does introduce a small additional amount of latency to the photo processing. You may check your AVCaptureResolvedPhotoSettings to see whether content aware distortion correction will be enabled for a given photo request. Throws an exception if -[AVCapturePhotoOutput contentAwareDistortionCorrectionEnabled] is not set to YES.
pub unsafe fn setAutoContentAwareDistortionCorrectionEnabled(
    &self,
    auto_content_aware_distortion_correction_enabled: bool,
)
Setter for isAutoContentAwareDistortionCorrectionEnabled.
pub unsafe fn isConstantColorEnabled(&self) -> bool
Specifies whether the photo will be captured with constant color.
Default is NO. Set to YES if you wish to capture a constant color photo. Throws an exception if -[AVCapturePhotoOutput constantColorEnabled] is not set to YES.
pub unsafe fn setConstantColorEnabled(&self, constant_color_enabled: bool)
Setter for isConstantColorEnabled.
pub unsafe fn isConstantColorFallbackPhotoDeliveryEnabled(&self) -> bool
Specifies whether a fallback photo is delivered when taking a constant color capture.
Default is NO. Set to YES if you wish to receive a fallback photo that can be used in case the main constant color photo’s confidence level is too low for your use case.
pub unsafe fn setConstantColorFallbackPhotoDeliveryEnabled(
    &self,
    constant_color_fallback_photo_delivery_enabled: bool,
)
Setter for isConstantColorFallbackPhotoDeliveryEnabled.
pub unsafe fn isShutterSoundSuppressionEnabled(&self) -> bool
Specifies whether the built-in shutter sound should be suppressed when capturing a photo with these settings.
Default is NO. Set to YES if you wish to suppress AVCapturePhotoOutput’s built-in shutter sound for this request. AVCapturePhotoOutput throws an NSInvalidArgumentException in -capturePhotoWithSettings:delegate: if its shutterSoundSuppressionSupported property returns NO.
pub unsafe fn setShutterSoundSuppressionEnabled(
    &self,
    shutter_sound_suppression_enabled: bool,
)
Setter for isShutterSoundSuppressionEnabled.
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations§
impl AsRef<AVCapturePhotoSettings> for AVCapturePhotoBracketSettings
fn as_ref(&self) -> &AVCapturePhotoSettings
impl AsRef<AnyObject> for AVCapturePhotoSettings
impl AsRef<NSObject> for AVCapturePhotoSettings
impl Borrow<AVCapturePhotoSettings> for AVCapturePhotoBracketSettings
fn borrow(&self) -> &AVCapturePhotoSettings
impl Borrow<AnyObject> for AVCapturePhotoSettings
impl Borrow<NSObject> for AVCapturePhotoSettings
impl ClassType for AVCapturePhotoSettings
const NAME: &'static str = "AVCapturePhotoSettings"
type ThreadKind = <<AVCapturePhotoSettings as ClassType>::Super as ClassType>::ThreadKind
impl CopyingHelper for AVCapturePhotoSettings
type Result = AVCapturePhotoSettings
The immutable counterpart of the type, or Self if the type has no immutable counterpart.
impl Debug for AVCapturePhotoSettings
impl Deref for AVCapturePhotoSettings
impl Hash for AVCapturePhotoSettings
impl Message for AVCapturePhotoSettings
impl NSCopying for AVCapturePhotoSettings
impl NSObjectProtocol for AVCapturePhotoSettings
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref