pub struct AVCapturePhotoOutput { /* private fields */ }
Available on crate features AVCaptureOutputBase and AVCapturePhotoOutput only.
AVCapturePhotoOutput is a concrete subclass of AVCaptureOutput that supersedes AVCaptureStillImageOutput as the preferred interface for capturing photos. In addition to capturing all flavors of still image supported by AVCaptureStillImageOutput, it supports Live Photo capture, preview-sized image delivery, wide color, RAW, RAW+JPG and RAW+DNG formats.
Taking a photo is a multi-step process. Clients wishing to build a responsive UI need to know about the progress of a photo capture request as it advances from capture to processing to finished delivery. AVCapturePhotoOutput informs clients of photo capture progress through a delegate protocol. To take a picture, a client instantiates and configures an AVCapturePhotoSettings object, then calls AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate:, passing a delegate to be informed when events relating to the photo capture occur (e.g., the photo is about to be captured, the photo has been captured but not processed yet, the Live Photo movie is ready, etc.).
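A minimal Rust sketch of that flow with these bindings, assuming `photo_output` is already attached to a running AVCaptureSession, `delegate` is an object you have defined elsewhere (e.g. with objc2’s define_class! macro), and the crate exposes +[AVCapturePhotoSettings photoSettings] as `AVCapturePhotoSettings::photoSettings`:

```rust
use objc2::runtime::ProtocolObject;
use objc2_av_foundation::{
    AVCapturePhotoCaptureDelegate, AVCapturePhotoOutput, AVCapturePhotoSettings,
};

unsafe fn take_photo(
    photo_output: &AVCapturePhotoOutput,
    delegate: &ProtocolObject<dyn AVCapturePhotoCaptureDelegate>,
) {
    // A fresh settings object per capture; settings may not be re-used.
    let settings = AVCapturePhotoSettings::photoSettings();
    photo_output.capturePhotoWithSettings_delegate(&settings, delegate);
}
```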
Some AVCapturePhotoSettings properties can be set to “Auto”, such as flashMode. When set to AVCaptureFlashModeAuto, the photo output decides at capture time whether the current scene and lighting conditions require use of the flash. Thus the client doesn’t know with certainty which features will be enabled when making the capture request. With the first and each subsequent delegate callback, the client is provided an AVCaptureResolvedPhotoSettings instance that indicates the settings that were applied to the capture. All “Auto” features have now been resolved to on or off. The AVCaptureResolvedPhotoSettings object passed in the client’s delegate callbacks has a uniqueID identical to the AVCapturePhotoSettings request. This uniqueID allows clients to pair unresolved and resolved settings objects. See AVCapturePhotoCaptureDelegate below for a detailed discussion of the delegate callbacks.
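For illustration, a sketch of that pairing, assuming the bindings expose the `uniqueID` getter on both the request and resolved settings types:

```rust
// Remember the request's uniqueID so resolved settings handed to your delegate
// can be matched back to the originating AVCapturePhotoSettings.
let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
let request_id = unsafe { settings.uniqueID() };
// ... later, in a delegate callback that receives `resolved`:
// let is_ours = unsafe { resolved.uniqueID() } == request_id;
```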
Enabling certain photo features (Live Photo capture and high resolution capture) requires a reconfiguration of the capture render pipeline. Clients wishing to opt in for these features should call -setLivePhotoCaptureEnabled: and/or -setHighResolutionCaptureEnabled: before calling -startRunning on the AVCaptureSession. Changing any of these properties while the session is running requires a disruptive reconfiguration of the capture render pipeline. Live Photo captures in progress will be ended immediately; unfulfilled photo requests will be aborted; video preview will temporarily freeze. If you wish to capture Live Photos containing sound, you must add an audio AVCaptureDeviceInput to your AVCaptureSession.
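A configuration sketch under those constraints (session and photo output assumed to be wired together already; `startRunning` is the assumed binding of -[AVCaptureSession startRunning]):

```rust
use objc2_av_foundation::{AVCapturePhotoOutput, AVCaptureSession};

unsafe fn configure_and_start(session: &AVCaptureSession, photo_output: &AVCapturePhotoOutput) {
    // Opt in to pipeline-disruptive features before the session starts running.
    if photo_output.isLivePhotoCaptureSupported() {
        photo_output.setLivePhotoCaptureEnabled(true);
    }
    session.startRunning();
}
```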
Simultaneous Live Photo capture and MovieFileOutput capture is not supported. If an AVCaptureMovieFileOutput is added to your session, AVCapturePhotoOutput’s livePhotoCaptureSupported property returns NO. Note that simultaneous Live Photo capture and AVCaptureVideoDataOutput is supported.
AVCaptureStillImageOutput and AVCapturePhotoOutput may not both be added to a capture session. You must use one or the other. If you add both to a session, a NSInvalidArgumentException is thrown.
AVCapturePhotoOutput implicitly supports wide color photo capture, following the activeColorSpace of the source AVCaptureDevice. If the source device’s activeColorSpace is AVCaptureColorSpace_P3_D65, photos are encoded with wide color information, unless you’ve specified an output format of ‘420v’, which does not support wide color.
See also Apple’s documentation
Implementations
impl AVCapturePhotoOutput
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn capturePhotoWithSettings_delegate( &self, settings: &AVCapturePhotoSettings, delegate: &ProtocolObject<dyn AVCapturePhotoCaptureDelegate>, )
Method for initiating a photo capture request with progress monitoring through the supplied delegate.
Parameter settings: An AVCapturePhotoSettings object you have configured. May not be nil.
Parameter delegate: An object conforming to the AVCapturePhotoCaptureDelegate protocol. This object’s delegate methods are called back as the photo advances from capture to processing to finished delivery. May not be nil.
This method initiates a photo capture. The receiver copies your provided settings to prevent unintentional mutation. It is illegal to re-use settings. The receiver throws an NSInvalidArgumentException if your settings.uniqueID matches that of any previously used settings. This method is used to initiate all flavors of photo capture: single photo, RAW capture with or without a processed image (such as a JPEG), bracketed capture, and Live Photo.
Clients need not wait for a capture photo request to complete before issuing another request. This is true for single photo captures as well as Live Photos, where movie complements of adjacent photo captures are allowed to overlap.
This method validates your settings and enforces the following rules in order to ensure deterministic behavior. If any of these rules are violated, an NSInvalidArgumentException is thrown. A hedged Rust sketch of a minimal RAW request follows the full list of rules below.
RAW rules: See +isBayerRAWPixelFormat: and +isAppleProRAWPixelFormat: on the difference between Bayer RAW and Apple ProRAW pixel formats.
Common RAW rules:
- If rawPhotoPixelFormatType is non-zero, it must be present in the receiver’s -availableRawPhotoPixelFormatTypes array.
- If rawPhotoPixelFormatType is non-zero, your delegate must respond to -captureOutput:didFinishProcessingRawPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error:.
- If rawPhotoPixelFormatType is non-zero, highResolutionPhotoEnabled may be YES or NO, but the setting only applies to the processed image, if you’ve specified one.
- If rawPhotoPixelFormatType is non-zero, constantColorEnabled must be set to NO.
- If rawFileType is specified, it must be present in -availableRawPhotoFileTypes and must support the rawPhotoPixelFormatType specified using -supportedRawPhotoPixelFormatTypesForFileType:.
Bayer RAW rules (+isBayerRAWPixelFormat: returns YES for rawPhotoPixelFormatType):
- photoQualityPrioritization must be set to AVCapturePhotoQualityPrioritizationSpeed (the deprecated autoStillImageStabilizationEnabled must be set to NO).
- The videoZoomFactor of the source device and the videoScaleAndCropFactor of the photo output’s video connection must both be 1.0. Ensure no zoom is applied before requesting a RAW capture, and don’t change the zoom during RAW capture.
Apple ProRAW rules (+isAppleProRAWPixelFormat: returns YES for rawPhotoPixelFormatType):
- livePhotoMovieFileURL must be nil in your AVCapturePhotoSettings
- autoContentAwareDistortionCorrectionEnabled will automatically be disabled in AVCapturePhotoSettings
- autoRedEyeReductionEnabled will automatically be disabled in AVCapturePhotoSettings
- portraitEffectsMatteDeliveryEnabled will automatically be disabled in AVCapturePhotoSettings
- enabledSemanticSegmentationMatteTypes will automatically be cleared in AVCapturePhotoSettings
Processed Format rules:
- If format is non-nil, a kCVPixelBufferPixelFormatTypeKey or AVVideoCodecKey must be present. You cannot specify both.
- If format has a kCVPixelBufferPixelFormatTypeKey, its value must be present in the receiver’s -availablePhotoPixelFormatTypes array.
- If format has an AVVideoCodecKey, its value must be present in the receiver’s -availablePhotoCodecTypes array.
- If format is non-nil, your delegate must respond to -captureOutput:didFinishProcessingPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error:.
- If processedFileType is specified, it must be present in -availablePhotoFileTypes and must support the format’s specified kCVPixelBufferPixelFormatTypeKey (using -supportedPhotoPixelFormatTypesForFileType:) or AVVideoCodecKey (using -supportedPhotoCodecTypesForFileType:).
- The photoQualityPrioritization you specify may not exceed the photo output’s maxPhotoQualityPrioritization. You must set your AVCapturePhotoOutput’s maxPhotoQualityPrioritization up front.
Flash rules:
- The specified flashMode must be present in the receiver’s -supportedFlashModes array.
Live Photo rules:
- The receiver’s livePhotoCaptureEnabled must be YES if settings.livePhotoMovieFileURL is non-nil.
- If settings.livePhotoMovieFileURL is non-nil, the receiver’s livePhotoCaptureSuspended property must be set to NO.
- If settings.livePhotoMovieFileURL is non-nil, it must be a file URL that’s accessible to your app’s sandbox.
- If settings.livePhotoMovieFileURL is non-nil, your delegate must respond to -captureOutput:didFinishProcessingLivePhotoToMovieFileAtURL:duration:photoDisplayTime:resolvedSettings:error:.
Bracketed capture rules:
- bracketedSettings.count must be <= the receiver’s maxBracketedCapturePhotoCount property.
- For manual exposure brackets, ISO value must be within the source device activeFormat’s minISO and maxISO values.
- For manual exposure brackets, exposureDuration value must be within the source device activeFormat’s minExposureDuration and maxExposureDuration values.
- For auto exposure brackets, exposureTargetBias value must be within the source device’s minExposureTargetBias and maxExposureTargetBias values.
Deferred Photo Delivery rules:
- If the receiver’s autoDeferredPhotoDeliveryEnabled is YES, your delegate must respond to -captureOutput:didFinishCapturingDeferredPhotoProxy:error:.
- The maxPhotoDimensions setting for 24MP (5712, 4284), when supported, is only serviced as 24MP via deferred photo delivery.
Color space rules:
- Photo capture is not supported when AVCaptureDevice has selected AVCaptureColorSpace_AppleLog or AVCaptureColorSpace_AppleLog2 as color space.
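Putting the common RAW rules together, a hedged sketch of a minimal RAW request; `photoSettingsWithRawPixelFormatType` is assumed to be the crate’s binding of +photoSettingsWithRawPixelFormatType:, and `delegate` must implement the required RAW callback:

```rust
use objc2::runtime::ProtocolObject;
use objc2_av_foundation::{
    AVCapturePhotoCaptureDelegate, AVCapturePhotoOutput, AVCapturePhotoSettings,
};

unsafe fn take_raw_photo(
    photo_output: &AVCapturePhotoOutput,
    delegate: &ProtocolObject<dyn AVCapturePhotoCaptureDelegate>,
) {
    let raw_types = photo_output.availableRawPhotoPixelFormatTypes();
    if raw_types.count() > 0 {
        // Rule: the format must come from -availableRawPhotoPixelFormatTypes.
        let fmt = raw_types.objectAtIndex(0).unsignedIntValue();
        let settings = AVCapturePhotoSettings::photoSettingsWithRawPixelFormatType(fmt);
        photo_output.capturePhotoWithSettings_delegate(&settings, delegate);
    }
}
```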
pub unsafe fn preparedPhotoSettingsArray( &self, ) -> Retained<NSArray<AVCapturePhotoSettings>>
An array of AVCapturePhotoSettings instances for which the receiver is prepared to capture.
See also setPreparedPhotoSettingsArray:completionHandler:. Some types of photo capture, such as bracketed captures and RAW captures, require the receiver to allocate additional buffers or prepare other resources. To prevent photo capture requests from executing slowly due to lazy resource allocation, you may call -setPreparedPhotoSettingsArray:completionHandler: with an array of settings objects representative of the types of capture you will be performing (e.g., settings for a bracketed capture, RAW capture, and/or still image stabilization capture). By default, the receiver prepares sufficient resources to capture photos with default settings, +[AVCapturePhotoSettings photoSettings].
pub unsafe fn setPreparedPhotoSettingsArray_completionHandler( &self, prepared_photo_settings_array: &NSArray<AVCapturePhotoSettings>, completion_handler: Option<&DynBlock<dyn Fn(Bool, *mut NSError)>>, )
Available on crate feature block2 only.
Method allowing the receiver to prepare resources in advance for future -capturePhotoWithSettings:delegate: requests.
Parameter preparedPhotoSettingsArray: An array of AVCapturePhotoSettings instances indicating the types of capture for which the receiver should prepare resources.
Parameter completionHandler: A completion block to be fired on a serial dispatch queue once the receiver has finished preparing. You may pass nil to indicate you do not wish to be called back when preparation is complete.
Some types of photo capture, such as bracketed captures and RAW captures, require the receiver to allocate additional buffers or prepare other resources. To prevent photo capture requests from executing slowly due to lazy resource allocation, you may call this method with an array of settings objects representative of the types of capture you will be performing (e.g., settings for a bracketed capture, RAW capture, and/or still image stabilization capture). You may call this method even before calling -[AVCaptureSession startRunning] in order to hint the receiver up front which features you’ll be utilizing. Each time you call this method with an array of settings, the receiver evaluates what additional resources it needs to allocate, as well as existing resources that can be reclaimed, and calls back your completionHandler when it has finished preparing (and possibly reclaiming) needed resources. By default, the receiver prepares sufficient resources to capture photos with default settings, +[AVCapturePhotoSettings photoSettings]. If you wish to reclaim all possible resources, you may call this method with an empty array.
Preparation for photo capture is always optional. You may call -capturePhotoWithSettings:delegate: without first calling -setPreparedPhotoSettingsArray:completionHandler:, but be advised that some of your photo captures may execute slowly as additional resources are allocated just-in-time.
If you call this method while your AVCaptureSession is not running, your completionHandler does not fire immediately. It only fires once you’ve called -[AVCaptureSession startRunning], and the needed resources have actually been prepared. If you call -setPreparedPhotoSettingsArray:completionHandler: with an array of settings, and then call it a second time, your first prepare call’s completionHandler fires immediately with prepared == NO.
Prepared settings persist across session starts/stops and committed configuration changes. This property participates in -[AVCaptureSession beginConfiguration] / -[AVCaptureSession commitConfiguration] deferred work behavior. That is, if you call -[AVCaptureSession beginConfiguration], change your session’s input/output topology, and call this method, preparation is deferred until you call -[AVCaptureSession commitConfiguration], enabling you to atomically commit a new configuration as well as prepare to take photos in that new configuration.
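A minimal preparation sketch; `NSArray::from_retained_slice` is assumed from objc2-foundation, and passing None skips the completion callback (with the block2 feature you could pass Some(&block) instead to learn when preparation finishes):

```rust
use objc2_av_foundation::{AVCapturePhotoOutput, AVCapturePhotoSettings};
use objc2_foundation::NSArray;

unsafe fn prepare(photo_output: &AVCapturePhotoOutput) {
    // Settings representative of the kinds of capture you intend to perform.
    let settings = AVCapturePhotoSettings::photoSettings();
    let array = NSArray::from_retained_slice(&[settings]);
    photo_output.setPreparedPhotoSettingsArray_completionHandler(&array, None);
}
```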
pub unsafe fn availablePhotoPixelFormatTypes( &self, ) -> Retained<NSArray<NSNumber>>
An array of kCVPixelBufferPixelFormatTypeKey values that are currently supported by the receiver.
If you wish to capture a photo in an uncompressed format, such as 420f, 420v, or BGRA, you must ensure that the format you want is present in the receiver’s availablePhotoPixelFormatTypes array. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no pixel format types are available. This property is key-value observable.
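For example, a sketch that checks for uncompressed ‘BGRA’ by comparing four-character codes directly, avoiding any CoreVideo constants:

```rust
// kCVPixelFormatType_32BGRA is the fourcc 'BGRA'.
let bgra = u32::from_be_bytes(*b"BGRA");
let has_bgra = unsafe {
    let types = photo_output.availablePhotoPixelFormatTypes();
    (0..types.count()).any(|i| types.objectAtIndex(i).unsignedIntValue() == bgra)
};
```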
pub unsafe fn availablePhotoCodecTypes( &self, ) -> Retained<NSArray<AVVideoCodecType>>
Available on crate feature AVVideoSettings only.
An array of AVVideoCodecKey values that are currently supported by the receiver.
If you wish to capture a photo in a compressed format, such as JPEG, you must ensure that the format you want is present in the receiver’s availablePhotoCodecTypes array. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no codec types are available. This property is key-value observable.
pub unsafe fn availableRawPhotoCodecTypes( &self, ) -> Retained<NSArray<AVVideoCodecType>>
Available on crate feature AVVideoSettings only.
An array of available AVVideoCodecType values that may be used for the raw photo.
Not all codecs can be used for all rawPixelFormatType values; this property lists every codec that may be available. To check whether a codec is available for a specific rawPixelFormatType and rawFileType, use supportedRawPhotoCodecTypesForRawPhotoPixelFormatType:fileType:.
pub unsafe fn isAppleProRAWSupported(&self) -> bool
Indicates whether the current configuration supports Apple ProRAW pixel formats.
The AVCapturePhotoSettings appleProRAWEnabled property may only be set to YES if this property returns YES. This property is key-value observable.
pub unsafe fn isAppleProRAWEnabled(&self) -> bool
Indicates whether the photo output is configured for delivery of Apple ProRAW pixel formats as well as Bayer RAW formats.
Setting this property to YES will enable support for taking photos in Apple ProRAW pixel formats. These formats will be added to -availableRawPhotoPixelFormatTypes after any existing Bayer RAW formats. Compared to photos taken with a Bayer RAW format, these photos will be demosaiced and partially processed. They are still scene-referred, and allow capturing RAW photos in modes where there is no traditional sensor/Bayer RAW available. Examples are any modes that rely on fusion of multiple captures. Use +isBayerRAWPixelFormat: to determine if a pixel format in -availableRawPhotoPixelFormatTypes is a Bayer RAW format, and +isAppleProRAWPixelFormat: to determine if it is an Apple ProRAW format. When writing an Apple ProRAW buffer to a DNG file, the resulting file is known as “Linear DNG”. Apple ProRAW formats are not supported on all platforms and devices. This property may only be set to YES if appleProRAWSupported returns YES. This property is key-value observable.
Enabling this property requires a lengthy reconfiguration of the capture render pipeline, so you should set this property to YES before calling -[AVCaptureSession startRunning].
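For instance, a sketch of the opt-in using the methods documented here:

```rust
// Enable Apple ProRAW before -startRunning, only where supported.
unsafe {
    if photo_output.isAppleProRAWSupported() {
        photo_output.setAppleProRAWEnabled(true);
    }
}
```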
pub unsafe fn setAppleProRAWEnabled(&self, apple_pro_raw_enabled: bool)
Setter for isAppleProRAWEnabled.
pub unsafe fn isBayerRAWPixelFormat(pixel_format: u32) -> bool
Returns YES if the given pixel format is a Bayer RAW format.
May be used to distinguish Bayer RAW from Apple ProRAW pixel formats in -availableRawPhotoPixelFormatTypes once appleProRAWEnabled has been set to YES.
pub unsafe fn isAppleProRAWPixelFormat(pixel_format: u32) -> bool
Returns YES if the given pixel format is an Apple ProRAW format.
May be used to distinguish Bayer RAW from Apple ProRAW pixel formats in -availableRawPhotoPixelFormatTypes once appleProRAWEnabled has been set to YES.
See appleProRAWEnabled for more information on Apple ProRAW.
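A sketch combining the two class methods to partition the available RAW formats:

```rust
use objc2_av_foundation::AVCapturePhotoOutput;

unsafe {
    let raw = photo_output.availableRawPhotoPixelFormatTypes();
    for i in 0..raw.count() {
        let fmt = raw.objectAtIndex(i).unsignedIntValue();
        if AVCapturePhotoOutput::isAppleProRAWPixelFormat(fmt) {
            // Demosaiced, partially processed RAW ("Linear DNG" when written to DNG).
        } else if AVCapturePhotoOutput::isBayerRAWPixelFormat(fmt) {
            // Traditional sensor/Bayer RAW.
        }
    }
}
```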
pub unsafe fn availableRawPhotoPixelFormatTypes( &self, ) -> Retained<NSArray<NSNumber>>
An array of RAW CVPixelBufferPixelFormatTypeKey values that are currently supported by the receiver.
If you wish to capture a RAW photo, you must ensure that the RAW format you want is present in the receiver’s availableRawPhotoPixelFormatTypes array. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no RAW formats are available. See AVCapturePhotoOutput.appleProRAWEnabled on how to enable support for partially processed RAW formats. This property is key-value observable. RAW capture is not supported on all platforms.
pub unsafe fn availablePhotoFileTypes(&self) -> Retained<NSArray<AVFileType>>
Available on crate feature AVMediaFormat only.
An array of AVFileType values that are currently supported by the receiver.
If you wish to capture a photo that is formatted for a particular file container, such as HEIF or DICOM, you must ensure that the fileType you desire is present in the receiver’s availablePhotoFileTypes array. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no file types are available. This property is key-value observable.
pub unsafe fn availableRawPhotoFileTypes(&self) -> Retained<NSArray<AVFileType>>
Available on crate feature AVMediaFormat only.
An array of AVFileType values that are currently supported by the receiver for RAW capture.
If you wish to capture a RAW photo that is formatted for a particular file container, such as DNG, you must ensure that the fileType you desire is present in the receiver’s availableRawPhotoFileTypes array. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no file types are available. This property is key-value observable.
pub unsafe fn supportedPhotoPixelFormatTypesForFileType( &self, file_type: &AVFileType, ) -> Retained<NSArray<NSNumber>>
Available on crate feature AVMediaFormat only.
An array of pixel format type values that are currently supported by the receiver for a particular file container.
Parameter fileType: The AVFileType container type intended for storage of a photo.
Returns: An array of CVPixelBufferPixelFormatTypeKey values supported by the receiver for the file type in question.
If you wish to capture a photo for storage in a particular file container, such as TIFF, you must ensure that the photo pixel format type you request is valid for that file type. If no pixel format types are supported for a given fileType, an empty array is returned. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no pixel format types are supported.
pub unsafe fn supportedPhotoCodecTypesForFileType( &self, file_type: &AVFileType, ) -> Retained<NSArray<AVVideoCodecType>>
Available on crate features AVMediaFormat and AVVideoSettings only.
An array of AVVideoCodecKey values that are currently supported by the receiver for a particular file container.
Parameter fileType: The AVFileType container type intended for storage of a photo.
Returns: An array of AVVideoCodecKey values supported by the receiver for the file type in question.
If you wish to capture a photo for storage in a particular file container, such as HEIF, you must ensure that the photo codec type you request is valid for that file type. If no codec types are supported for a given fileType, an empty array is returned. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no codec types are supported.
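For example, a hedged check that HEVC is a valid codec for the HEIC container before requesting that combination (the AVFileTypeHEIC and AVVideoCodecTypeHEVC statics are assumed to be exposed by this crate):

```rust
use objc2_av_foundation::{AVFileTypeHEIC, AVVideoCodecTypeHEVC};

let hevc_in_heic = unsafe {
    photo_output
        .supportedPhotoCodecTypesForFileType(AVFileTypeHEIC)
        .containsObject(AVVideoCodecTypeHEVC)
};
```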
pub unsafe fn supportedRawPhotoCodecTypesForRawPhotoPixelFormatType_fileType( &self, pixel_format_type: u32, file_type: &AVFileType, ) -> Retained<NSArray<AVVideoCodecType>>
Available on crate features AVMediaFormat and AVVideoSettings only.
An array of AVVideoCodecType values that are currently supported by the receiver for a particular file container and raw pixel format.
Parameter pixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).
Parameter fileType: The AVFileType container type intended for storage of a photo; valid values can be retrieved from -availableRawPhotoFileTypes.
Returns: An array of AVVideoCodecType values supported by the receiver for the file type and raw pixel format in question.
If you wish to capture a raw photo for storage using a Bayer RAW or Apple ProRAW pixel format and to be stored in a file container, such as DNG, you must ensure that the codec type you request is valid for that file and pixel format type. If no RAW codec types are supported for a given file type and/or pixel format type, an empty array is returned. If you have not yet added your receiver to an AVCaptureSession with a video source, an empty array is returned.
pub unsafe fn supportedRawPhotoPixelFormatTypesForFileType( &self, file_type: &AVFileType, ) -> Retained<NSArray<NSNumber>>
Available on crate feature AVMediaFormat only.
An array of CVPixelBufferPixelFormatType values that are currently supported by the receiver for a particular file container.
Parameter fileType: The AVFileType container type intended for storage of a photo.
Returns: An array of CVPixelBufferPixelFormatType values supported by the receiver for the file type in question.
If you wish to capture a photo for storage in a particular file container, such as DNG, you must ensure that the RAW pixel format type you request is valid for that file type. If no RAW pixel format types are supported for a given fileType, an empty array is returned. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no pixel format types are supported.
pub unsafe fn maxPhotoQualityPrioritization( &self, ) -> AVCapturePhotoQualityPrioritization
Indicates the highest quality the receiver should be prepared to output on a capture-by-capture basis.
Default value is AVCapturePhotoQualityPrioritizationBalanced when attached to an AVCaptureSession, and AVCapturePhotoQualityPrioritizationSpeed when attached to an AVCaptureMultiCamSession. The AVCapturePhotoOutput is capable of applying a variety of techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc). Some techniques improve image quality at the expense of speed (shot-to-shot time). Before starting your session, you may set this property to indicate the highest quality prioritization you intend to request when calling -capturePhotoWithSettings:delegate:. When configuring an AVCapturePhotoSettings object, you may not exceed this quality prioritization level, but you may select a lower (speedier) prioritization level.
Changing the maxPhotoQualityPrioritization while the session is running can result in a lengthy rebuild of the session in which video preview is disrupted.
Setting the maxPhotoQualityPrioritization to .quality will turn on optical image stabilization if the -isHighPhotoQualitySupported of the source device’s -activeFormat is true.
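A sketch of setting the ceiling up front (the `Quality` associated constant is the assumed Rust spelling of AVCapturePhotoQualityPrioritizationQuality):

```rust
use objc2_av_foundation::AVCapturePhotoQualityPrioritization;

// Raise the ceiling once before the session runs; each capture may then choose
// any prioritization at or below it.
unsafe {
    photo_output
        .setMaxPhotoQualityPrioritization(AVCapturePhotoQualityPrioritization::Quality);
}
```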
pub unsafe fn setMaxPhotoQualityPrioritization( &self, max_photo_quality_prioritization: AVCapturePhotoQualityPrioritization, )
Setter for maxPhotoQualityPrioritization.
pub unsafe fn isFastCapturePrioritizationSupported(&self) -> bool
Specifies whether fast capture prioritization is supported.
Fast capture prioritization allows capture quality to be automatically reduced from the selected AVCapturePhotoQualityPrioritization to ensure the photo output can keep up when captures are requested in rapid succession. Fast capture prioritization is only supported for certain AVCaptureSession sessionPresets and AVCaptureDevice activeFormats and only when responsiveCaptureEnabled is YES. When switching cameras or formats this property may change. When this property changes from YES to NO, fastCapturePrioritizationEnabled also reverts to NO. If you’ve previously opted in for fast capture prioritization and then change configurations, you may need to set fastCapturePrioritizationEnabled = YES again.
pub unsafe fn setFastCapturePrioritizationSupported( &self, fast_capture_prioritization_supported: bool, )
Setter for isFastCapturePrioritizationSupported.
pub unsafe fn isFastCapturePrioritizationEnabled(&self) -> bool
Specifies whether fast capture prioritization is enabled.
This property defaults to NO. This property may only be set to YES if fastCapturePrioritizationSupported is YES, otherwise an NSInvalidArgumentException is thrown. By setting this property to YES, the photo output prepares itself to automatically reduce capture quality from the selected AVCapturePhotoQualityPrioritization when needed to keep up with rapid capture requests. In many cases the slightly reduced quality is preferable to missing the moment entirely. If you intend to use fast capture prioritization, you should set this property to YES before calling -[AVCaptureSession startRunning] or within -[AVCaptureSession beginConfiguration] and -[AVCaptureSession commitConfiguration] while running.
pub unsafe fn setFastCapturePrioritizationEnabled( &self, fast_capture_prioritization_enabled: bool, )
Setter for isFastCapturePrioritizationEnabled.
pub unsafe fn isAutoDeferredPhotoDeliverySupported(&self) -> bool
Indicates whether the deferred photo delivery feature is supported by the receiver.
This property may change as the session’s -sessionPreset or source device’s -activeFormat change. When deferred photo delivery is not supported, your capture requests always resolve their AVCaptureResolvedPhotoSettings.deferredPhotoProxyDimensions to { 0, 0 }. This property is key-value observable.
Automatic deferred photo delivery can produce a lightweight photo representation, called a “proxy”, at the time of capture that can later be processed to completion while improving camera responsiveness. When it’s appropriate for the receiver to deliver a photo proxy for deferred processing, the delegate callback -captureOutput:didFinishCapturingDeferredPhotoProxy:error: will be invoked instead of -captureOutput:didFinishProcessingPhoto:error:. See the documentation for AVCaptureDeferredPhotoProxy for more details.
pub unsafe fn isAutoDeferredPhotoDeliveryEnabled(&self) -> bool
Specifies whether automatic deferred photo delivery is enabled.
Setting this value to either YES or NO requires a lengthy reconfiguration of the capture pipeline, so you should set this property before calling -[AVCaptureSession startRunning]. Setting this property to YES throws an NSInvalidArgumentException if autoDeferredPhotoDeliverySupported is NO.
pub unsafe fn setAutoDeferredPhotoDeliveryEnabled( &self, auto_deferred_photo_delivery_enabled: bool, )
Setter for isAutoDeferredPhotoDeliveryEnabled.
pub unsafe fn isStillImageStabilizationSupported(&self) -> bool
👎Deprecated
Indicates whether the still image stabilization feature is supported by the receiver.
This property may change as the session’s -sessionPreset or source device’s -activeFormat change. When still image stabilization is not supported, your capture requests always resolve stillImageStabilizationEnabled to NO. This property is key-value observable.
As of iOS 13 hardware, the AVCapturePhotoOutput is capable of applying a variety of multi-image fusion techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc), all of which have been previously lumped under the stillImageStabilization moniker. This property should no longer be used as it no longer provides meaningful information about the techniques used to improve quality in a photo capture. Instead, you should use -maxPhotoQualityPrioritization to indicate the highest quality prioritization level you might request in a photo capture, understanding that the higher the quality, the longer the potential wait. You may also use AVCapturePhotoSettings’ photoQualityPrioritization property to specify a prioritization level for a particular photo capture, and then query the AVCaptureResolvedPhotoSettings photoProcessingTimeRange property to find out how long it might take to receive the resulting photo in your delegate callback.
pub unsafe fn isStillImageStabilizationScene(&self) -> bool
👎Deprecated
Indicates whether the current scene is dark enough to warrant use of still image stabilization.
This property reports whether the current scene being previewed by the camera is dark enough to benefit from still image stabilization. You can influence this property’s answers by setting the photoSettingsForSceneMonitoring property, indicating whether autoStillImageStabilization monitoring should be on or off. If you set autoStillImageStabilization to NO, isStillImageStabilizationScene always reports NO. If you set it to YES, this property returns YES or NO depending on the current scene’s lighting conditions. Note that some very dark scenes do not benefit from still image stabilization, but do benefit from flash. By default, this property always returns NO unless you set photoSettingsForSceneMonitoring to a non-nil value. This property may be key-value observed.
As of iOS 13 hardware, the AVCapturePhotoOutput is capable of applying a variety of multi-image fusion techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc), all of which have been previously lumped under the stillImageStabilization moniker. This property should no longer be used as it no longer provides meaningful information about the techniques used to improve quality in a photo capture. Instead, you should use -maxPhotoQualityPrioritization to indicate the highest quality prioritization level you might request in a photo capture, understanding that the higher the quality, the longer the potential wait. You may also use AVCapturePhotoSettings’ photoQualityPrioritization property to specify a prioritization level for a particular photo capture, and then query the AVCaptureResolvedPhotoSettings photoProcessingTimeRange property to find out how long it might take to receive the resulting photo in your delegate callback.
pub unsafe fn isVirtualDeviceFusionSupported(&self) -> bool
Indicates whether the virtual device image fusion feature is supported by the receiver.
This property may change as the session’s -sessionPreset or source device’s -activeFormat change. When using a virtual AVCaptureDevice, its constituent camera images can be fused together to improve image quality when this property answers YES. When virtual device fusion is not supported by the current configuration, your capture requests always resolve virtualDeviceFusionEnabled to NO. This property is key-value observable.
pub unsafe fn isDualCameraFusionSupported(&self) -> bool
👎Deprecated
Indicates whether the DualCamera image fusion feature is supported by the receiver.
This property may change as the session’s -sessionPreset or source device’s -activeFormat change. When using the AVCaptureDevice with deviceType AVCaptureDeviceTypeBuiltInDualCamera, the wide-angle and telephoto camera images can be fused together to improve image quality in some configurations. When DualCamera image fusion is not supported by the current configuration, your capture requests always resolve dualCameraFusionEnabled to NO. This property is key-value observable. As of iOS 13, this property is deprecated in favor of virtualDeviceFusionSupported.
pub unsafe fn isVirtualDeviceConstituentPhotoDeliverySupported(&self) -> bool
Specifies whether the photo output’s current configuration supports delivery of photos from constituent cameras of a virtual device.
Virtual device constituent photo delivery is only supported for certain AVCaptureSession sessionPresets and AVCaptureDevice activeFormats. When switching cameras or formats this property may change. When this property changes from YES to NO, virtualDeviceConstituentPhotoDeliveryEnabled also reverts to NO. If you’ve previously opted in for virtual device constituent photo delivery and then change configurations, you may need to set virtualDeviceConstituentPhotoDeliveryEnabled = YES again. This property is key-value observable.
pub unsafe fn isDualCameraDualPhotoDeliverySupported(&self) -> bool
👎Deprecated
Specifies whether the photo output’s current configuration supports delivery of both telephoto and wide images from the DualCamera.
DualCamera dual photo delivery is only supported for certain AVCaptureSession sessionPresets and AVCaptureDevice activeFormats. When switching cameras or formats this property may change. When this property changes from YES to NO, dualCameraDualPhotoDeliveryEnabled also reverts to NO. If you’ve previously opted in for DualCamera dual photo delivery and then change configurations, you may need to set dualCameraDualPhotoDeliveryEnabled = YES again. This property is key-value observable. As of iOS 13, this property is deprecated in favor of virtualDeviceConstituentPhotoDeliverySupported.
pub unsafe fn isVirtualDeviceConstituentPhotoDeliveryEnabled(&self) -> bool
Indicates whether the photo output is configured for delivery of photos from constituent cameras of a virtual device.
Default value is NO. This property may only be set to YES if virtualDeviceConstituentPhotoDeliverySupported is YES. Virtual device constituent photo delivery requires a lengthy reconfiguration of the capture render pipeline, so if you intend to do any constituent photo delivery captures, you should set this property to YES before calling -[AVCaptureSession startRunning]. See also -[AVCapturePhotoSettings virtualDeviceConstituentPhotoDeliveryEnabledDevices].
pub unsafe fn setVirtualDeviceConstituentPhotoDeliveryEnabled( &self, virtual_device_constituent_photo_delivery_enabled: bool, )
Setter for isVirtualDeviceConstituentPhotoDeliveryEnabled.
pub unsafe fn isDualCameraDualPhotoDeliveryEnabled(&self) -> bool
👎Deprecated
Indicates whether the photo output is configured for delivery of both the telephoto and wide images from the DualCamera.
Default value is NO. This property may only be set to YES if dualCameraDualPhotoDeliverySupported is YES. DualCamera dual photo delivery requires a lengthy reconfiguration of the capture render pipeline, so if you intend to do any dual photo delivery captures, you should set this property to YES before calling -[AVCaptureSession startRunning]. See also -[AVCapturePhotoSettings dualCameraDualPhotoDeliveryEnabled]. As of iOS 13, this property is deprecated in favor of virtualDeviceConstituentPhotoDeliveryEnabled.
pub unsafe fn setDualCameraDualPhotoDeliveryEnabled( &self, dual_camera_dual_photo_delivery_enabled: bool, )
👎Deprecated
Setter for isDualCameraDualPhotoDeliveryEnabled.
pub unsafe fn isCameraCalibrationDataDeliverySupported(&self) -> bool
Specifies whether the photo output’s current configuration supports delivery of AVCameraCalibrationData in the resultant AVCapturePhoto.
Camera calibration data delivery (intrinsics, extrinsics, lens distortion characteristics, etc.) is only supported if virtualDeviceConstituentPhotoDeliveryEnabled is YES and contentAwareDistortionCorrectionEnabled is NO and the source device’s geometricDistortionCorrectionEnabled property is set to NO. This property is key-value observable.
pub unsafe fn supportedFlashModes(&self) -> Retained<NSArray<NSNumber>>
An array of AVCaptureFlashMode constants for the current capture session configuration.
This property supersedes AVCaptureDevice’s isFlashModeSupported:. It returns an array of AVCaptureFlashMode constants. To test whether a particular flash mode is supported, use NSArray’s containsObject API: [photoOutput.supportedFlashModes containsObject:@(AVCaptureFlashModeAuto)]. This property is key-value observable.
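A Rust analogue of that containsObject check, assuming NSNumber::new_isize from objc2-foundation (the NSNumber wraps the enum’s raw NSInteger value):

```rust
use objc2_av_foundation::AVCaptureFlashMode;
use objc2_foundation::NSNumber;

let auto_flash_supported = unsafe {
    photo_output
        .supportedFlashModes()
        .containsObject(&NSNumber::new_isize(AVCaptureFlashMode::Auto.0))
};
```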
pub unsafe fn isAutoRedEyeReductionSupported(&self) -> bool
Indicates whether the receiver supports automatic red-eye reduction for flash captures.
Flash images may cause subjects’ eyes to appear red, golden, or white. Automatic red-eye reduction detects and corrects for reflected light in eyes, at the cost of additional processing time per image. This property may change as the session’s -sessionPreset or source device’s -activeFormat change. When red-eye reduction is not supported, your capture requests always resolve redEyeReductionEnabled to NO. This property is key-value observable.
pub unsafe fn isFlashScene(&self) -> bool
Indicates whether the current scene is dark enough to warrant use of the flash.
This property reports whether the current scene being previewed by the camera is dark enough to need the flash. If -supportedFlashModes only contains AVCaptureFlashModeOff, isFlashScene always reports NO. You can influence this property’s answers by setting the photoSettingsForSceneMonitoring property, indicating the flashMode you wish to monitor. If you set flashMode to AVCaptureFlashModeOff, isFlashScene always reports NO. If you set it to AVCaptureFlashModeAuto or AVCaptureFlashModeOn, isFlashScene answers YES or NO based on the current scene’s lighting conditions. By default, this property always returns NO unless you set photoSettingsForSceneMonitoring to a non-nil value. Note that there is some overlap in the light level ranges that benefit from still image stabilization and flash. If your photoSettingsForSceneMonitoring indicate that both still image stabilization and flash scenes should be monitored, still image stabilization takes precedence, and isFlashScene becomes YES at lower overall light levels. This property may be key-value observed.
pub unsafe fn photoSettingsForSceneMonitoring( &self, ) -> Option<Retained<AVCapturePhotoSettings>>
Settings that govern the behavior of isFlashScene and isStillImageStabilizationScene.
You can influence the return values of isFlashScene and isStillImageStabilizationScene by setting this property, indicating the flashMode and photoQualityPrioritization values that should be considered for scene monitoring. For instance, if you set flashMode to AVCaptureFlashModeOff, isFlashScene always reports NO. If you set it to AVCaptureFlashModeAuto or AVCaptureFlashModeOn, isFlashScene answers YES or NO based on the current scene’s lighting conditions. Note that there is some overlap in the light level ranges that benefit from still image stabilization and flash. If your photoSettingsForSceneMonitoring indicate that both still image stabilization and flash scenes should be monitored, still image stabilization takes precedence, and isFlashScene becomes YES at lower overall light levels. The default value for this property is nil. See isStillImageStabilizationScene and isFlashScene for further discussion.
pub unsafe fn setPhotoSettingsForSceneMonitoring( &self, photo_settings_for_scene_monitoring: Option<&AVCapturePhotoSettings>, )
Setter for photoSettingsForSceneMonitoring.
This is copied when set.
pub unsafe fn isHighResolutionCaptureEnabled(&self) -> bool
👎Deprecated: Use maxPhotoDimensions instead.
Indicates whether the photo render pipeline should be configured to deliver high resolution still images.
Some AVCaptureDeviceFormats support outputting higher resolution stills than their streaming resolution (See AVCaptureDeviceFormat.highResolutionStillImageDimensions). Under some conditions, AVCaptureSession needs to set up the photo render pipeline differently to support high resolution still image capture. If you intend to take high resolution still images at all, you should set this property to YES before calling -[AVCaptureSession startRunning]. Once you’ve opted in for high resolution capture, you are free to issue photo capture requests with or without highResolutionCaptureEnabled in the AVCapturePhotoSettings. If you have not set this property to YES and call capturePhotoWithSettings:delegate: with settings.highResolutionCaptureEnabled set to YES, an NSInvalidArgumentException will be thrown.
pub unsafe fn setHighResolutionCaptureEnabled( &self, high_resolution_capture_enabled: bool, )
👎Deprecated: Use maxPhotoDimensions instead.
Setter for isHighResolutionCaptureEnabled.
pub unsafe fn maxPhotoDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
Indicates the maximum resolution of the requested photo.
Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning]. Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
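A sketch, assuming 4032x3024 appears in the active format’s supportedMaxPhotoDimensions:

```rust
use objc2_core_media::CMVideoDimensions;

// Set before -startRunning to avoid a disruptive pipeline rebuild later.
unsafe {
    photo_output.setMaxPhotoDimensions(CMVideoDimensions { width: 4032, height: 3024 });
}
```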
pub unsafe fn setMaxPhotoDimensions( &self, max_photo_dimensions: CMVideoDimensions, )
Available on crate feature objc2-core-media only.
Setter for maxPhotoDimensions.
pub unsafe fn maxBracketedCapturePhotoCount(&self) -> NSUInteger
Specifies the maximum number of photos that may be taken in a single bracket.
AVCapturePhotoOutput can only satisfy a limited number of image requests in a single bracket without exhausting system resources. The maximum number of photos that may be taken in a single bracket depends on the size and format of the images being captured, and consequently may vary with AVCaptureSession -sessionPreset and AVCaptureDevice -activeFormat. Some formats do not support bracketed capture at all, and thus this property may return a value of 0. This read-only property is key-value observable. If you call -capturePhotoWithSettings:delegate: with a bracketedSettings whose count exceeds -maxBracketedCapturePhotoCount, an NSInvalidArgumentException is thrown.
pub unsafe fn isLensStabilizationDuringBracketedCaptureSupported(&self) -> bool
Indicates whether the receiver supports lens stabilization during bracketed captures.
The AVCapturePhotoBracketSettings lensStabilizationEnabled property may only be set if this property returns YES. Its value may change as the session’s -sessionPreset or input device’s -activeFormat changes. This read-only property is key-value observable.
pub unsafe fn isLivePhotoCaptureSupported(&self) -> bool
Indicates whether the receiver supports Live Photo capture.
Live Photo capture is only supported for certain AVCaptureSession sessionPresets and AVCaptureDevice activeFormats. When switching cameras or formats this property may change. When this property changes from YES to NO, livePhotoCaptureEnabled also reverts to NO. If you’ve previously opted in for Live Photo capture and then change configurations, you may need to set livePhotoCaptureEnabled = YES again.
pub unsafe fn isLivePhotoCaptureEnabled(&self) -> bool
Indicates whether the receiver is configured for Live Photo capture.
Default value is NO. This property may only be set to YES if livePhotoCaptureSupported is YES. Live Photo capture requires a lengthy reconfiguration of the capture render pipeline, so if you intend to do any Live Photo captures at all, you should set livePhotoCaptureEnabled to YES before calling -[AVCaptureSession startRunning].
pub unsafe fn setLivePhotoCaptureEnabled( &self, live_photo_capture_enabled: bool, )
Setter for isLivePhotoCaptureEnabled.
pub unsafe fn isLivePhotoCaptureSuspended(&self) -> bool
Indicates whether Live Photo capture is enabled, but currently suspended.
This property allows you to cut current Live Photo movie captures short (for instance, if you suddenly need to do something that you don’t want to show up in the Live Photo movie, such as take a non Live Photo capture that makes a shutter sound). By default, livePhotoCaptureSuspended is NO. When you set livePhotoCaptureSuspended = YES, any Live Photo movie captures in progress are trimmed to the current time. Likewise, when you toggle livePhotoCaptureSuspended from YES to NO, subsequent Live Photo movie captures will not contain any samples earlier than the time you un-suspended Live Photo capture. Setting this property to YES throws an NSInvalidArgumentException if livePhotoCaptureEnabled is NO. By default, this property resets to NO when the AVCaptureSession stops. This behavior can be prevented by setting preservesLivePhotoCaptureSuspendedOnSessionStop to YES before stopping the session.
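For example:

```rust
// Trim any in-flight Live Photo movie, do the work you don't want recorded,
// then resume; later movies contain no samples from the suspended interval.
unsafe {
    photo_output.setLivePhotoCaptureSuspended(true);
    // ... e.g. take a capture that plays a shutter sound ...
    photo_output.setLivePhotoCaptureSuspended(false);
}
```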
pub unsafe fn setLivePhotoCaptureSuspended( &self, live_photo_capture_suspended: bool, )
Setter for isLivePhotoCaptureSuspended.
pub unsafe fn preservesLivePhotoCaptureSuspendedOnSessionStop(&self) -> bool
By default, livePhotoCaptureSuspended resets to NO (resuming Live Photo capture) when the session stops. This property allows clients to opt out of that behavior and preserve the value of livePhotoCaptureSuspended.
Defaults to NO.
pub unsafe fn setPreservesLivePhotoCaptureSuspendedOnSessionStop( &self, preserves_live_photo_capture_suspended_on_session_stop: bool, )
Setter for preservesLivePhotoCaptureSuspendedOnSessionStop.
pub unsafe fn isLivePhotoAutoTrimmingEnabled(&self) -> bool
Indicates whether Live Photo movies are trimmed in real time to avoid excessive movement.
This property defaults to YES when livePhotoCaptureSupported is YES. Changing this property’s value while your session is running will cause a lengthy reconfiguration of the session. You should set livePhotoAutoTrimmingEnabled to YES or NO before calling -[AVCaptureSession startRunning]. When set to YES, Live Photo movies are analyzed in real time and trimmed if there’s excessive movement before or after the photo is taken. Nominally, Live Photos are approximately 3 seconds long. With trimming enabled, they may be shorter, depending on movement. This feature prevents common problems such as Live Photo movies containing shoe or pocket shots.
pub unsafe fn setLivePhotoAutoTrimmingEnabled( &self, live_photo_auto_trimming_enabled: bool, )
Setter for isLivePhotoAutoTrimmingEnabled.
pub unsafe fn availableLivePhotoVideoCodecTypes( &self, ) -> Retained<NSArray<AVVideoCodecType>>
Available on crate feature AVVideoSettings only.
An array of AVVideoCodecKey values that are currently supported by the receiver for use in the movie complement of a Live Photo.
Prior to iOS 11, all Live Photo movie video tracks are compressed using H.264. Beginning in iOS 11, you can select the Live Photo movie video compression format using one of the AVVideoCodecKey strings presented in this property. The system’s default (preferred) video codec is always presented first in the list. If you’ve not yet added your receiver to an AVCaptureSession with a video source, no codecs are available. This property is key-value observable.
pub unsafe fn JPEGPhotoDataRepresentationForJPEGSampleBuffer_previewPhotoSampleBuffer( jpeg_sample_buffer: &CMSampleBuffer, preview_photo_sample_buffer: Option<&CMSampleBuffer>, ) -> Option<Retained<NSData>>
👎Deprecated
Available on crate feature objc2-core-media only.
A class method that writes a JPEG sample buffer to an NSData in the JPEG file format.
Parameter JPEGSampleBuffer: A CMSampleBuffer containing JPEG compressed data.
Parameter previewPhotoSampleBuffer: An optional CMSampleBuffer containing pixel buffer image data to be written as a thumbnail image.
Returns: An NSData containing bits in the JPEG file format. May return nil if the re-packaging process fails.
AVCapturePhotoOutput’s deprecated -captureOutput:didFinishProcessingPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error: callback delivers JPEG photos to clients as CMSampleBuffers. To re-package these buffers in a data format suitable for writing to a JPEG file, you may call this class method, optionally inserting your own metadata into the JPEG CMSampleBuffer first, and optionally passing a preview image to be written to the JPEG file format as a thumbnail image.
pub unsafe fn DNGPhotoDataRepresentationForRawSampleBuffer_previewPhotoSampleBuffer( raw_sample_buffer: &CMSampleBuffer, preview_photo_sample_buffer: Option<&CMSampleBuffer>, ) -> Option<Retained<NSData>>
👎Deprecated
Available on crate feature objc2-core-media only.
A class method that writes a RAW sample buffer to an NSData containing bits in the DNG file format.
Parameter rawSampleBuffer: A CMSampleBuffer containing Bayer RAW data.
Parameter previewPhotoSampleBuffer: An optional CMSampleBuffer containing pixel buffer image data to be written as a thumbnail image.
Returns: An NSData containing bits in the DNG file format. May return nil if the re-packaging process fails.
AVCapturePhotoOutput’s deprecated -captureOutput:didFinishProcessingRawPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error: callback delivers RAW photos to clients as CMSampleBuffers. To re-package these buffers in a data format suitable for writing to a DNG file, you may call this class method, optionally inserting your own metadata into the RAW CMSampleBuffer first, and optionally passing a preview image to be written to the DNG file format as a thumbnail image. Only RAW images from Apple built-in cameras are supported.
pub unsafe fn isContentAwareDistortionCorrectionSupported(&self) -> bool
A BOOL value specifying whether content aware distortion correction is supported.
The rectilinear model used in optical design and by geometric distortion correction only preserves lines but not area, angles, or distance. Thus the wider the field of view of a lens, the greater the areal distortion at the edges of images. Content aware distortion correction, when enabled, intelligently corrects distortions by taking content into consideration, such as faces near the edges of the image. This property returns YES if the session’s current configuration allows photos to be captured with content aware distortion correction. When switching cameras or formats or enabling depth data delivery this property may change. When this property changes from YES to NO, contentAwareDistortionCorrectionEnabled also reverts to NO. This property is key-value observable.
pub unsafe fn isContentAwareDistortionCorrectionEnabled(&self) -> bool
A BOOL value specifying whether the photo render pipeline is set up to perform content aware distortion correction.
Default is NO. Set to YES if you wish content aware distortion correction to be performed on your AVCapturePhotos. This property may only be set to YES if contentAwareDistortionCorrectionSupported is YES. Note that warping the photos to preserve more natural looking content may result in a small change in field of view compared to what you see in the AVCaptureVideoPreviewLayer. The amount of field of view lost or gained is content specific and may vary from photo to photo. Enabling this property requires a lengthy reconfiguration of the capture render pipeline, so you should set this property to YES before calling -[AVCaptureSession startRunning].
pub unsafe fn setContentAwareDistortionCorrectionEnabled(
    &self,
    content_aware_distortion_correction_enabled: bool,
)
Setter for isContentAwareDistortionCorrectionEnabled.
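A minimal sketch of the opt-in flow, to be run before -[AVCaptureSession startRunning]; the helper name is ours:
use objc2_av_foundation::AVCapturePhotoOutput;
// Sketch: only enable when the current configuration supports it; support can
// revert to NO after camera/format changes, so re-run this check afterwards.
unsafe fn opt_in_content_aware_correction(output: &AVCapturePhotoOutput) {
    if output.isContentAwareDistortionCorrectionSupported() {
        output.setContentAwareDistortionCorrectionEnabled(true);
    }
}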
pub unsafe fn isZeroShutterLagSupported(&self) -> bool
A BOOL value specifying whether zero shutter lag is supported.
This property returns YES if the session’s current configuration allows zero shutter lag. When switching cameras or formats, setting depthDataDeliveryEnabled, or setting virtualDeviceConstituentPhotoDeliveryEnabled this property may change. When this property changes from YES to NO, zeroShutterLagEnabled also reverts to NO. This property is key-value observable.
pub unsafe fn isZeroShutterLagEnabled(&self) -> bool
A BOOL value specifying whether the output is set up to support zero shutter lag.
This property may only be set to YES if zeroShutterLagSupported is YES, otherwise an NSInvalidArgumentException is thrown. For apps linked on or after iOS 17, zero shutter lag is automatically enabled when supported. Enabling zero shutter lag reduces or eliminates shutter lag when using AVCapturePhotoQualityPrioritizationBalanced or AVCapturePhotoQualityPrioritizationQuality at the cost of additional memory usage by the photo output. The timestamp of the AVCapturePhoto may be slightly earlier than when -capturePhotoWithSettings:delegate: was called. To minimize camera shake from the user’s tapping gesture, it is recommended that -capturePhotoWithSettings:delegate: be called as early as possible when handling the touch down event. Zero shutter lag isn’t available when using manual exposure or bracketed capture. Changing this property requires a lengthy reconfiguration of the capture render pipeline, so you should set this property to YES before calling -[AVCaptureSession startRunning] or within -[AVCaptureSession beginConfiguration] and -[AVCaptureSession commitConfiguration] while running.
pub unsafe fn setZeroShutterLagEnabled(&self, zero_shutter_lag_enabled: bool)
Setter for isZeroShutterLagEnabled.
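A sketch of enabling zero shutter lag on a running session, assuming output is already attached to session and that the usual objc2-av-foundation bindings for beginConfiguration/commitConfiguration are available (the helper name is ours):
use objc2_av_foundation::{AVCapturePhotoOutput, AVCaptureSession};
unsafe fn opt_in_zero_shutter_lag(session: &AVCaptureSession, output: &AVCapturePhotoOutput) {
    if output.isZeroShutterLagSupported() && !output.isZeroShutterLagEnabled() {
        // Batch the change so the pipeline is reconfigured only once.
        session.beginConfiguration();
        output.setZeroShutterLagEnabled(true);
        session.commitConfiguration();
    }
}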
pub unsafe fn isResponsiveCaptureSupported(&self) -> bool
A BOOL value specifying whether responsive capture is supported.
Enabling responsive capture increases peak and sustained capture rates, and reduces shutter lag at the cost of additional memory usage by the photo output. This property returns YES if the session’s current configuration allows responsive capture. When switching cameras or formats, enabling depth data delivery, or enabling zero shutter lag this property may change. Responsive capture is only supported when zero shutter lag is enabled. When this property changes from YES to NO, responsiveCaptureEnabled also reverts to NO. This property is key-value observable.
pub unsafe fn isResponsiveCaptureEnabled(&self) -> bool
A BOOL value specifying whether the photo output is set up to support responsive capture.
This property may only be set to YES if responsiveCaptureSupported is YES, otherwise an NSInvalidArgumentException is thrown. When responsiveCaptureEnabled is YES the captureReadiness property should be used to determine whether new capture requests can be serviced in a reasonable time and whether the shutter control should be available to the user. Responsive capture adds buffering between the capture and photo processing stages which allows a new capture to start before processing has completed for the previous capture, so be prepared to handle -captureOutput:willBeginCaptureForResolvedSettings: being called before the -captureOutput:didFinishProcessingPhoto: for prior requests. Processed photos continue to be delivered in the order they were captured. To minimize camera shake from the user’s tapping gesture it is recommended that -capturePhotoWithSettings:delegate: be called as early as possible when handling the touch down event. Enabling responsive capture allows the fast capture prioritization feature to be used, which further increases capture rates and reduces preview and recording disruptions. See the fastCapturePrioritizationEnabled property. When requesting uncompressed output using kCVPixelBufferPixelFormatTypeKey in AVCapturePhotoSettings.format the AVCapturePhoto’s pixelBuffer is allocated from a pool with enough capacity for that request only, and overlap between capture and processing is disabled. The client must release the AVCapturePhoto and references to the pixelBuffer before capturing again and the pixelBuffer’s IOSurface must also no longer be in use. Changing this property requires a lengthy reconfiguration of the capture render pipeline, so you should set this property to YES before calling -[AVCaptureSession startRunning] or within -[AVCaptureSession beginConfiguration] and -[AVCaptureSession commitConfiguration] while running.
pub unsafe fn setResponsiveCaptureEnabled(
    &self,
    responsive_capture_enabled: bool,
)
Setter for isResponsiveCaptureEnabled.
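A sketch of the combined opt-in (responsive capture is only supported when zero shutter lag is enabled), to be applied before startRunning or inside a beginConfiguration/commitConfiguration pair; the helper name is ours:
use objc2_av_foundation::AVCapturePhotoOutput;
unsafe fn opt_in_responsive_capture(output: &AVCapturePhotoOutput) {
    // Zero shutter lag must be on first; responsive capture depends on it.
    if output.isZeroShutterLagSupported() {
        output.setZeroShutterLagEnabled(true);
    }
    if output.isResponsiveCaptureSupported() {
        output.setResponsiveCaptureEnabled(true);
    }
}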
pub unsafe fn captureReadiness(&self) -> AVCapturePhotoOutputCaptureReadiness
A value specifying whether the photo output is ready to respond to new capture requests in a timely manner.
This property can be key-value observed to enable and disable shutter button UI depending on whether the output is ready to capture, which is especially important when the responsiveCaptureEnabled property is YES. When interacting with AVCapturePhotoOutput on a background queue AVCapturePhotoOutputReadinessCoordinator should instead be used to observe readiness changes and perform UI updates. Capturing only when the output is ready limits the number of requests in flight to minimize shutter lag while maintaining the fastest shot to shot time. When the property returns a value other than Ready the output is not ready to capture and the shutter button should be disabled to prevent the user from initiating new requests. The output continues to accept requests when the captureReadiness property returns a value other than Ready, but the request may not be serviced for a longer period. The visual presentation of the shutter button can be customized based on the readiness value. When the user rapidly taps the shutter button the property may transition to NotReadyMomentarily for a brief period. Although the shutter button should be disabled during this period it is short lived enough that dimming or changing the appearance of the shutter is not recommended as it would be visually distracting to the user. Longer running capture types like flash or captures with AVCapturePhotoQualityPrioritizationQuality may prevent the output from capturing for an extended period, indicated by NotReadyWaitingForCapture or NotReadyWaitingForProcessing, which is appropriate to show by dimming or disabling the shutter button. For NotReadyWaitingForProcessing it is also appropriate to show a spinner or other indication that the shutter is busy.
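A sketch of mapping readiness values onto shutter-button state, assuming the generated readiness constants shown below; ShutterUi and its methods are hypothetical stand-ins for your UI layer:
use objc2_av_foundation::{AVCapturePhotoOutput, AVCapturePhotoOutputCaptureReadiness};
struct ShutterUi; // hypothetical UI hooks
impl ShutterUi {
    fn enable(&self) {}
    fn disable_quietly(&self) {} // no dimming: NotReadyMomentarily is short lived
    fn dim(&self) {}
    fn dim_with_spinner(&self) {}
}
unsafe fn update_shutter(output: &AVCapturePhotoOutput, ui: &ShutterUi) {
    match output.captureReadiness() {
        AVCapturePhotoOutputCaptureReadiness::Ready => ui.enable(),
        AVCapturePhotoOutputCaptureReadiness::NotReadyMomentarily => ui.disable_quietly(),
        AVCapturePhotoOutputCaptureReadiness::NotReadyWaitingForProcessing => ui.dim_with_spinner(),
        _ => ui.dim(), // NotReadyWaitingForCapture, SessionNotRunning, ...
    }
}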
pub unsafe fn isConstantColorSupported(&self) -> bool
A BOOL value specifying whether constant color capture is supported.
An object’s color in a photograph is affected by the light sources illuminating the scene, so the color of the same object photographed in warm light might look markedly different than in colder light. In some use cases, such ambient light induced color variation is undesirable, and the user may prefer an estimate of what these materials would look like under a standard light such as daylight (D65), regardless of the lighting conditions at the time the photograph was taken. Some devices are capable of producing such constant color photos.
Constant color captures require the flash to be fired and may require a pre-flash sequence to determine the correct focus and exposure, therefore it might take several seconds to acquire a constant color photo. Due to this flash requirement, a constant color capture can only be taken with AVCaptureFlashModeAuto or AVCaptureFlashModeOn as the flash mode; otherwise an exception is thrown.
Constant color can only be achieved when the flash has a discernible effect on the scene, so it may not perform well in bright conditions such as direct sunlight. Use the constantColorConfidenceMap property to examine the confidence level, and therefore the usefulness, of each region of a constant color photo.
Constant color should not be used in conjunction with locked or manual white balance.
This property returns YES if the session’s current configuration allows photos to be captured with constant color. When switching cameras or formats this property may change. When this property changes from YES to NO, constantColorEnabled also reverts to NO. If you’ve previously opted in for constant color and then change configurations, you may need to set constantColorEnabled = YES again. This property is key-value observable.
pub unsafe fn isConstantColorEnabled(&self) -> bool
A BOOL value specifying whether the photo render pipeline is set up to perform constant color captures.
Default is NO. Set to YES to enable support for taking constant color photos. This property may only be set to YES if constantColorSupported is YES. Enabling constant color requires a lengthy reconfiguration of the capture render pipeline, so if you intend to capture constant color photos, you should set this property to YES before calling -[AVCaptureSession startRunning] or within -[AVCaptureSession beginConfiguration] and -[AVCaptureSession commitConfiguration] while running.
pub unsafe fn setConstantColorEnabled(&self, constant_color_enabled: bool)
Setter for isConstantColorEnabled.
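A minimal sketch of the constant color opt-in; the capture itself must then use flash mode Auto or On in its AVCapturePhotoSettings. The helper name is ours:
use objc2_av_foundation::AVCapturePhotoOutput;
unsafe fn opt_in_constant_color(output: &AVCapturePhotoOutput) {
    // Re-check after camera/format changes: support may revert to NO,
    // which also resets constantColorEnabled.
    if output.isConstantColorSupported() {
        output.setConstantColorEnabled(true);
    }
}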
pub unsafe fn isShutterSoundSuppressionSupported(&self) -> bool
Specifies whether suppressing the shutter sound is supported.
On iOS, this property returns NO in jurisdictions where shutter sound production cannot be disabled. On all other platforms, it always returns NO.
pub unsafe fn isCameraSensorOrientationCompensationSupported(&self) -> bool
A read-only BOOL value indicating whether still image buffers may be rotated to match the sensor orientation of earlier generation hardware.
Value is YES for camera configurations which support compensation for the sensor orientation, which is applied to HEIC, JPEG, and uncompressed processed photos only; compensation is never applied to Bayer RAW or Apple ProRaw captures.
pub unsafe fn isCameraSensorOrientationCompensationEnabled(&self) -> bool
A BOOL value indicating that still image buffers will be rotated to match the sensor orientation of earlier generation hardware.
Default is YES when cameraSensorOrientationCompensationSupported is YES. Set to NO if your app does not require sensor orientation compensation.
pub unsafe fn setCameraSensorOrientationCompensationEnabled(
    &self,
    camera_sensor_orientation_compensation_enabled: bool,
)
Setter for isCameraSensorOrientationCompensationEnabled.
impl AVCapturePhotoOutput
AVCapturePhotoOutputDepthDataDeliverySupport.
pub unsafe fn isDepthDataDeliverySupported(&self) -> bool
A BOOL value specifying whether depth data delivery is supported.
Some cameras and configurations support the delivery of depth data (e.g. disparity maps) along with the photo. This property returns YES if the session’s current configuration allows photos to be captured with depth data, from which depth-related filters may be applied. When switching cameras or formats this property may change. When this property changes from YES to NO, depthDataDeliveryEnabled also reverts to NO. If you’ve previously opted in for depth data delivery and then change configurations, you may need to set depthDataDeliveryEnabled = YES again. This property is key-value observable.
pub unsafe fn isDepthDataDeliveryEnabled(&self) -> bool
A BOOL specifying whether the photo render pipeline is prepared for depth data delivery.
Default is NO. Set to YES if you wish depth data to be delivered with your AVCapturePhotos. This property may only be set to YES if depthDataDeliverySupported is YES. Enabling depth data delivery requires a lengthy reconfiguration of the capture render pipeline, so if you intend to capture depth data, you should set this property to YES before calling -[AVCaptureSession startRunning].
pub unsafe fn setDepthDataDeliveryEnabled(
    &self,
    depth_data_delivery_enabled: bool,
)
Setter for isDepthDataDeliveryEnabled.
pub unsafe fn isPortraitEffectsMatteDeliverySupported(&self) -> bool
A BOOL value specifying whether portrait effects matte delivery is supported.
Some cameras and configurations support the delivery of a matting image to augment depth data and aid in high quality portrait effect rendering (see AVPortraitEffectsMatte.h). This property returns YES if the session’s current configuration allows photos to be captured with a portrait effects matte. When switching cameras or formats this property may change. When this property changes from YES to NO, portraitEffectsMatteDeliveryEnabled also reverts to NO. If you’ve previously opted in for portrait effects matte delivery and then change configurations, you may need to set portraitEffectsMatteDeliveryEnabled = YES again. This property is key-value observable.
pub unsafe fn isPortraitEffectsMatteDeliveryEnabled(&self) -> bool
A BOOL specifying whether the photo render pipeline is prepared for portrait effects matte delivery.
Default is NO. Set to YES if you wish portrait effects mattes to be delivered with your AVCapturePhotos. This property may only be set to YES if portraitEffectsMatteDeliverySupported is YES. Portrait effects matte generation requires depth to be present, so when enabling portrait effects matte delivery, you must also set depthDataDeliveryEnabled to YES. Enabling portrait effects matte delivery requires a lengthy reconfiguration of the capture render pipeline, so if you intend to capture portrait effects mattes, you should set this property to YES before calling -[AVCaptureSession startRunning].
pub unsafe fn setPortraitEffectsMatteDeliveryEnabled(
    &self,
    portrait_effects_matte_delivery_enabled: bool,
)
Setter for isPortraitEffectsMatteDeliveryEnabled.
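A sketch combining the two opt-ins (portrait effects mattes require depth data delivery), to be run before startRunning; the helper name is ours:
use objc2_av_foundation::AVCapturePhotoOutput;
unsafe fn opt_in_portrait_effects_matte(output: &AVCapturePhotoOutput) {
    // Depth must be enabled first: matte generation requires depth data.
    if output.isDepthDataDeliverySupported() {
        output.setDepthDataDeliveryEnabled(true);
    }
    if output.isPortraitEffectsMatteDeliverySupported() {
        output.setPortraitEffectsMatteDeliveryEnabled(true);
    }
}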
pub unsafe fn availableSemanticSegmentationMatteTypes(
    &self,
) -> Retained<NSArray<AVSemanticSegmentationMatteType>>
Available on crate feature AVSemanticSegmentationMatte only.
An array of supported semantic segmentation matte types that may be captured and delivered along with your AVCapturePhotos.
Some cameras and configurations support the delivery of semantic segmentation matting images (e.g. segmentations of the hair, skin, or teeth in the photo). This property returns an array of AVSemanticSegmentationMatteTypes available given the session’s current configuration. When switching cameras or formats this property may change. When this property changes, enabledSemanticSegmentationMatteTypes reverts to an empty array. If you’ve previously opted in for delivery of one or more semantic segmentation mattes and then change configurations, you need to set up your enabledSemanticSegmentationMatteTypes again. This property is key-value observable.
pub unsafe fn enabledSemanticSegmentationMatteTypes(
    &self,
) -> Retained<NSArray<AVSemanticSegmentationMatteType>>
Available on crate feature AVSemanticSegmentationMatte only.
An array of semantic segmentation matte types which the photo render pipeline is prepared to deliver.
Default is an empty array. You may set this to the array of matte types you’d like to be delivered with your AVCapturePhotos. The array may only contain values present in availableSemanticSegmentationMatteTypes. Enabling semantic segmentation matte delivery requires a lengthy reconfiguration of the capture render pipeline, so if you intend to capture semantic segmentation mattes, you should set this property before calling -[AVCaptureSession startRunning].
pub unsafe fn setEnabledSemanticSegmentationMatteTypes(
    &self,
    enabled_semantic_segmentation_matte_types: &NSArray<AVSemanticSegmentationMatteType>,
)
Available on crate feature AVSemanticSegmentationMatte only.
Setter for enabledSemanticSegmentationMatteTypes.
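A sketch that opts in to every matte type the current configuration can produce; remember the enabled list resets whenever the available list changes. The helper name is ours:
use objc2_av_foundation::AVCapturePhotoOutput;
unsafe fn opt_in_all_mattes(output: &AVCapturePhotoOutput) {
    // Only values from availableSemanticSegmentationMatteTypes are accepted.
    let available = output.availableSemanticSegmentationMatteTypes();
    output.setEnabledSemanticSegmentationMatteTypes(&available);
}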
Methods from Deref<Target = AVCaptureOutput>
pub unsafe fn connections(&self) -> Retained<NSArray<AVCaptureConnection>>
Available on crate feature AVCaptureSession only.
The connections that describe the flow of media data to the receiver from AVCaptureInputs.
The value of this property is an NSArray of AVCaptureConnection objects, each describing the mapping between the receiver and the AVCaptureInputPorts of one or more AVCaptureInputs.
pub unsafe fn connectionWithMediaType(
    &self,
    media_type: &AVMediaType,
) -> Option<Retained<AVCaptureConnection>>
Available on crate features AVCaptureSession and AVMediaFormat only.
Returns the first connection in the connections array with an inputPort of the specified mediaType.
Parameter mediaType: An AVMediaType constant from AVMediaFormat.h, e.g. AVMediaTypeVideo.
This convenience method returns the first AVCaptureConnection in the receiver’s connections array that has an AVCaptureInputPort of the specified mediaType. If no connection with the specified mediaType is found, nil is returned.
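A sketch of looking up the video connection, assuming objc2-av-foundation exposes the AVMediaTypeVideo constant as a static usable where &AVMediaType is expected (check the generated binding's exact shape); the helper name is ours:
use objc2_av_foundation::{AVCaptureOutput, AVMediaTypeVideo};
unsafe fn log_video_connection(output: &AVCaptureOutput) {
    // Returns None if no connection carries video media.
    if let Some(connection) = output.connectionWithMediaType(AVMediaTypeVideo) {
        println!("video connection: {connection:?}");
    }
}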
pub unsafe fn transformedMetadataObjectForMetadataObject_connection(
    &self,
    metadata_object: &AVMetadataObject,
    connection: &AVCaptureConnection,
) -> Option<Retained<AVMetadataObject>>
Available on crate features AVCaptureSession and AVMetadataObject only.
Converts an AVMetadataObject’s visual properties to the receiver’s coordinates.
Parameter metadataObject: An AVMetadataObject originating from the same AVCaptureInput as the receiver.
Parameter connection: The receiver’s connection whose AVCaptureInput matches that of the metadata object to be converted.
Returns: An AVMetadataObject whose properties are in output coordinates.
AVMetadataObject bounds may be expressed as a rect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. Face metadata objects likewise express yaw and roll angles with respect to an unrotated picture. -transformedMetadataObjectForMetadataObject:connection: converts the visual properties in the coordinate space of the supplied AVMetadataObject to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. If the provided metadata object originates from an input source other than the receiver’s, nil will be returned.
If an AVCaptureVideoDataOutput instance’s connection’s videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and or flipping sample buffers as they pass through it. AVCaptureStillImageOutput, on the other hand, does not physically rotate its buffers. It attaches an appropriate kCGImagePropertyOrientation number to captured still image buffers (see ImageIO/CGImageProperties.h) indicating how the image should be displayed on playback. Likewise, AVCaptureMovieFileOutput does not physically apply orientation/mirroring to its sample buffers – it uses a QuickTime track matrix to indicate how the buffers should be rotated and/or flipped on playback.
transformedMetadataObjectForMetadataObject:connection: alters the visual properties of the provided metadata object to match the physical rotation / mirroring of the sample buffers provided by the receiver through the indicated connection. I.e., for video data output, adjusted metadata object coordinates are rotated/mirrored. For still image and movie file output, they are not.
pub unsafe fn metadataOutputRectOfInterestForRect(
    &self,
    rect_in_output_coordinates: CGRect,
) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle in the receiver’s coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver.
Parameter rectInOutputCoordinates: A CGRect in the receiver’s coordinates.
Returns: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
pub unsafe fn rectForMetadataOutputRectOfInterest(
    &self,
    rect_in_metadata_output_coordinates: CGRect,
) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver to a rectangle in the receiver’s coordinates.
Parameter rectInMetadataOutputCoordinates: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
Returns: A CGRect in the receiver’s coordinates.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
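A sketch that round-trips the full picture area through both conversions, assuming CGRect/CGPoint/CGSize from objc2-core-foundation with public fields; the helper name is ours:
use objc2_av_foundation::AVCaptureOutput;
use objc2_core_foundation::{CGPoint, CGRect, CGSize};
unsafe fn roundtrip_rect_of_interest(output: &AVCaptureOutput) {
    // {0,0}-{1,1} is the full picture area in metadata-output coordinates.
    let full = CGRect {
        origin: CGPoint { x: 0.0, y: 0.0 },
        size: CGSize { width: 1.0, height: 1.0 },
    };
    let in_output = output.rectForMetadataOutputRectOfInterest(full);
    let back = output.metadataOutputRectOfInterestForRect(in_output);
    println!("{full:?} -> {in_output:?} -> {back:?}");
}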
pub unsafe fn isDeferredStartSupported(&self) -> bool
A BOOL value that indicates whether the output supports deferred start.
You can only set the deferredStartEnabled property value to true if the output supports deferred start.
pub unsafe fn isDeferredStartEnabled(&self) -> bool
A BOOL value that indicates whether to defer starting this capture output.
When this value is true, the session does not prepare the output’s resources until some time after AVCaptureSession/startRunning returns. You can start the visual parts of your user interface (e.g. preview) prior to other parts (e.g. photo/movie capture, metadata output, etc.) to improve startup performance. Set this value to false for outputs that your app needs for startup, and true for the ones it does not need to start immediately. For example, an AVCaptureVideoDataOutput that you intend to use for displaying preview should set this value to false, so that the frames are available as soon as possible.
By default, for apps that are linked on or after iOS 26, this property value is true for AVCapturePhotoOutput and AVCaptureFileOutput subclasses if supported, and false otherwise. When set to true for AVCapturePhotoOutput, if you want to support multiple capture requests before running deferred start, set AVCapturePhotoOutput/responsiveCaptureEnabled to true on that output.
If deferredStartSupported is false, setting this property value to true results in the system throwing an NSInvalidArgumentException.
- Note: Set this value before calling AVCaptureSession/commitConfiguration, as it requires a lengthy reconfiguration of the capture render pipeline.
pub unsafe fn setDeferredStartEnabled(&self, deferred_start_enabled: bool)
Setter for isDeferredStartEnabled.
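A sketch of deferring a non-startup-critical output, to be set before -[AVCaptureSession commitConfiguration]; the helper name is ours:
use objc2_av_foundation::AVCaptureOutput;
unsafe fn defer_start_if_possible(output: &AVCaptureOutput) {
    // Setting this when unsupported throws NSInvalidArgumentException,
    // so gate on the support check.
    if output.isDeferredStartSupported() {
        output.setDeferredStartEnabled(true);
    }
}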
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations
impl AsRef<AVCaptureOutput> for AVCapturePhotoOutput
fn as_ref(&self) -> &AVCaptureOutput
impl AsRef<AnyObject> for AVCapturePhotoOutput
impl AsRef<NSObject> for AVCapturePhotoOutput
impl Borrow<AVCaptureOutput> for AVCapturePhotoOutput
fn borrow(&self) -> &AVCaptureOutput
impl Borrow<AnyObject> for AVCapturePhotoOutput
impl Borrow<NSObject> for AVCapturePhotoOutput
impl ClassType for AVCapturePhotoOutput
const NAME: &'static str = "AVCapturePhotoOutput"
type Super = AVCaptureOutput
type ThreadKind = <<AVCapturePhotoOutput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCapturePhotoOutput
impl Deref for AVCapturePhotoOutput
impl Hash for AVCapturePhotoOutput
impl Message for AVCapturePhotoOutput
impl NSObjectProtocol for AVCapturePhotoOutput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.