pub struct AVCaptureStillImageOutput { /* private fields */ }

Available on crate features AVCaptureOutputBase and AVCaptureStillImageOutput only.
AVCaptureStillImageOutput is a concrete subclass of AVCaptureOutput that can be used to capture high-quality still images with accompanying metadata.
Instances of AVCaptureStillImageOutput can be used to capture, on demand, high quality snapshots from a realtime capture source. Clients can request a still image for the current time using the captureStillImageAsynchronouslyFromConnection:completionHandler: method. Clients can also configure still image outputs to produce still images in specific image formats.
See also Apple’s documentation
Implementations

impl AVCaptureStillImageOutput
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn outputSettings(&self) -> Retained<NSDictionary<NSString, AnyObject>>
Deprecated: Use AVCapturePhotoOutput instead.
Specifies the options the receiver uses to encode still images before they are delivered.
See AVVideoSettings.h for more information on how to construct an output settings dictionary.
On iOS, the only currently supported keys are AVVideoCodecKey and kCVPixelBufferPixelFormatTypeKey. Use -availableImageDataCVPixelFormatTypes and -availableImageDataCodecTypes to determine what codec keys and pixel formats are supported. AVVideoQualityKey is supported on iOS 6.0 and later and may only be used when AVVideoCodecKey is set to AVVideoCodecTypeJPEG.
pub unsafe fn setOutputSettings(&self, output_settings: &NSDictionary<NSString, AnyObject>)
Deprecated: Use AVCapturePhotoOutput instead.
Setter for outputSettings.
This is copied when set.
Safety

The generic parameters of output_settings must be of the correct types.
pub unsafe fn availableImageDataCVPixelFormatTypes(&self) -> Retained<NSArray<NSNumber>>
Deprecated: Use AVCapturePhotoOutput instead.
Indicates the supported image pixel formats that can be specified in outputSettings.
The value of this property is an NSArray of NSNumbers that can be used as values for the kCVPixelBufferPixelFormatTypeKey in the receiver’s outputSettings property. The first format in the returned list is the most efficient output format.
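The note that the first format is the most efficient suggests a simple selection strategy. The sketch below is a hypothetical illustration in plain Rust: the u32 FourCC codes stand in for the NSNumber pixel format values the real property returns, and choose_pixel_format is not part of this crate:

```rust
/// Pick the first format in `available` (most efficient first) that the app
/// can also consume, falling back to the device's most efficient format.
/// Hypothetical helper; `available` stands in for the NSArray<NSNumber>
/// returned by availableImageDataCVPixelFormatTypes.
fn choose_pixel_format(available: &[u32], acceptable: &[u32]) -> Option<u32> {
    available
        .iter()
        .copied()
        .find(|f| acceptable.contains(f))
        .or_else(|| available.first().copied())
}

fn main() {
    // Hypothetical FourCC codes: '420f', '420v', 'BGRA'.
    let available = [0x3432_3066, 0x3432_3076, 0x4247_5241];
    let acceptable = [0x4247_5241]; // the app only handles BGRA
    assert_eq!(choose_pixel_format(&available, &acceptable), Some(0x4247_5241));
    // Nothing acceptable: fall back to the most efficient (first) format.
    assert_eq!(choose_pixel_format(&available, &[]), Some(0x3432_3066));
    println!("ok");
}
```

Because the list is ordered most-efficient-first, iterating `available` (rather than `acceptable`) preserves the device's preference ordering.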
pub unsafe fn availableImageDataCodecTypes(&self) -> Retained<NSArray<AVVideoCodecType>>
Deprecated: Use AVCapturePhotoOutput instead. Available on crate feature AVVideoSettings only.
Indicates the supported image codec formats that can be specified in outputSettings.
The value of this property is an NSArray of AVVideoCodecTypes that can be used as values for the AVVideoCodecKey in the receiver’s outputSettings property.
pub unsafe fn isStillImageStabilizationSupported(&self) -> bool
Indicates whether the receiver supports still image stabilization.
The receiver’s automaticallyEnablesStillImageStabilizationWhenAvailable property can only be set if this property returns YES. Its value may change as the session’s -sessionPreset or input device’s -activeFormat changes.
pub unsafe fn automaticallyEnablesStillImageStabilizationWhenAvailable(&self) -> bool
Indicates whether the receiver should automatically use still image stabilization when necessary.
On a receiver where -isStillImageStabilizationSupported returns YES, image stabilization may be applied to reduce blur commonly found in low light photos. When stabilization is enabled, still image captures incur additional latency. The default value is YES when supported, NO otherwise. Setting this property throws an NSInvalidArgumentException if -isStillImageStabilizationSupported returns NO.
pub unsafe fn setAutomaticallyEnablesStillImageStabilizationWhenAvailable(&self, automatically_enables_still_image_stabilization_when_available: bool)
pub unsafe fn isStillImageStabilizationActive(&self) -> bool
Indicates whether still image stabilization is in use for the current capture.
On a receiver where -isStillImageStabilizationSupported returns YES, and automaticallyEnablesStillImageStabilizationWhenAvailable is set to YES, this property may be key-value observed, or queried from inside your key-value observation callback for the “capturingStillImage” property, to find out if still image stabilization is being applied to the current capture.
pub unsafe fn isHighResolutionStillImageOutputEnabled(&self) -> bool
Indicates whether the receiver should emit still images at the highest resolution supported by its source AVCaptureDevice’s activeFormat.
By default, AVCaptureStillImageOutput emits images with the same dimensions as its source AVCaptureDevice’s activeFormat.formatDescription. However, if you set this property to YES, the receiver emits still images at its source AVCaptureDevice’s activeFormat.highResolutionStillImageDimensions. Note that if you enable video stabilization (see AVCaptureConnection’s preferredVideoStabilizationMode) for any output, the high resolution still images emitted by AVCaptureStillImageOutput may be smaller by 10 or more percent.
pub unsafe fn setHighResolutionStillImageOutputEnabled(&self, high_resolution_still_image_output_enabled: bool)
Setter for isHighResolutionStillImageOutputEnabled.
pub unsafe fn isCameraSensorOrientationCompensationSupported(&self) -> bool
A read-only BOOL value indicating whether still image buffers may be rotated to match the sensor orientation of earlier generation hardware.
Value is YES for camera configurations which support compensation for the sensor orientation, which is applied to HEIC, JPEG, and uncompressed processed photos only; compensation is never applied to Bayer RAW or Apple ProRaw captures.
pub unsafe fn isCameraSensorOrientationCompensationEnabled(&self) -> bool
A BOOL value indicating that still image buffers will be rotated to match the sensor orientation of earlier generation hardware.
Default is YES when cameraSensorOrientationCompensationSupported is YES. Set to NO if your app does not require sensor orientation compensation.
pub unsafe fn setCameraSensorOrientationCompensationEnabled(&self, camera_sensor_orientation_compensation_enabled: bool)
Setter for isCameraSensorOrientationCompensationEnabled.
pub unsafe fn isCapturingStillImage(&self) -> bool
A boolean value that becomes true when a still image is being captured.
The value of this property is a BOOL that becomes true when a still image is being captured, and false when no still image capture is underway. This property is key-value observable.
pub unsafe fn captureStillImageAsynchronouslyFromConnection_completionHandler(&self, connection: &AVCaptureConnection, handler: &DynBlock<dyn Fn(*mut CMSampleBuffer, *mut NSError)>)
Deprecated: Use AVCapturePhotoOutput instead. Available on crate features AVCaptureSession, block2, and objc2-core-media only.
Initiates an asynchronous still image capture, returning the result to a completion handler.
Parameter connection: The AVCaptureConnection object from which to capture the still image.
Parameter handler: A block that will be called when the still image capture is complete. The block will be passed a CMSampleBuffer object containing the image data or an NSError object if an image could not be captured.
This method will return immediately after it is invoked, later calling the provided completion handler block when image data is ready. If the request could not be completed, the error parameter will contain an NSError object describing the failure.
Attachments to the image data sample buffer may contain metadata appropriate to the image data format. For instance, a sample buffer containing JPEG data may carry a kCGImagePropertyExifDictionary as an attachment. See <ImageIO/CGImageProperties.h> for a list of keys and value types.
Clients should not assume that the completion handler will be called on a specific thread.
Calls to captureStillImageAsynchronouslyFromConnection:completionHandler: are not synchronized with AVCaptureDevice manual control completion handlers. Setting a device manual control, waiting for its completion, then calling captureStillImageAsynchronouslyFromConnection:completionHandler: DOES NOT ensure that the still image returned reflects your manual control change. It may be from an earlier time. You can compare your manual control completion handler sync time to the returned still image’s presentation time. You can retrieve the sample buffer’s pts using CMSampleBufferGetPresentationTimestamp(). If the still image has an earlier timestamp, your manual control command does not apply to it.
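The timestamp check described above can be sketched as plain logic. Timestamp here is a hypothetical stand-in for CMTime (a rational value/timescale pair); in a real app you would obtain the image's pts via CMSampleBufferGetPresentationTimeStamp and record the sync time in your manual-control completion handler:

```rust
/// Hypothetical stand-in for CMTime: a rational timestamp (value / timescale).
#[derive(Clone, Copy)]
struct Timestamp {
    value: i64,
    timescale: i32,
}

impl Timestamp {
    /// Compare two timestamps by cross-multiplying, avoiding floating point.
    fn is_before(self, other: Timestamp) -> bool {
        (self.value as i128) * (other.timescale as i128)
            < (other.value as i128) * (self.timescale as i128)
    }
}

/// True if the still image was captured at or after the moment the manual
/// control (e.g. a focus or exposure change) finished applying.
fn reflects_manual_control(image_pts: Timestamp, control_sync_time: Timestamp) -> bool {
    !image_pts.is_before(control_sync_time)
}

fn main() {
    // Control finished at t = 100/600 s; an image with pts 90/600 s is stale.
    let control = Timestamp { value: 100, timescale: 600 };
    let stale = Timestamp { value: 90, timescale: 600 };
    let fresh = Timestamp { value: 120, timescale: 600 };
    assert!(!reflects_manual_control(stale, control));
    assert!(reflects_manual_control(fresh, control));
    println!("ok");
}
```

Cross-multiplication mirrors how rational media times are compared without converting to floating point, which matters when the two timestamps use different timescales.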
pub unsafe fn jpegStillImageNSDataRepresentation(jpeg_sample_buffer: &CMSampleBuffer) -> Option<Retained<NSData>>
Deprecated: Use AVCapturePhotoOutput instead. Available on crate feature objc2-core-media only.
Converts the still image data and metadata attachments in a JPEG sample buffer to an NSData representation.
Parameter jpegSampleBuffer: The sample buffer carrying JPEG image data, optionally with Exif metadata sample buffer attachments. This method throws an NSInvalidArgumentException if jpegSampleBuffer is NULL or not in the JPEG format.
This method returns an NSData representation of a JPEG still image sample buffer, merging the image data and Exif metadata sample buffer attachments without recompressing the image. The returned NSData is suitable for writing to disk.
impl AVCaptureStillImageOutput

AVCaptureStillImageOutputBracketedCapture.
A category of methods for bracketed still image capture.
A “still image bracket” is a batch of images taken as quickly as possible in succession, optionally with different settings from picture to picture.
In a bracketed capture, AVCaptureDevice flashMode property is ignored (flash is forced off), as is AVCaptureStillImageOutput’s automaticallyEnablesStillImageStabilizationWhenAvailable property (stabilization is forced off).
pub unsafe fn maxBracketedCaptureStillImageCount(&self) -> NSUInteger
Deprecated: Use AVCapturePhotoOutput maxBracketedCapturePhotoCount instead.
Specifies the maximum number of still images that may be taken in a single bracket.
AVCaptureStillImageOutput can only satisfy a limited number of image requests in a single bracket without exhausting system resources. The maximum number of still images that may be taken in a single bracket depends on the size of the images being captured, and consequently may vary with AVCaptureSession -sessionPreset and AVCaptureDevice -activeFormat. Some formats do not support bracketed capture and return a maxBracketedCaptureStillImageCount of 0. This read-only property is key-value observable. If you exceed -maxBracketedCaptureStillImageCount, then -captureStillImageBracketAsynchronouslyFromConnection:withSettingsArray:completionHandler: fails and the completionHandler is called [settings count] times with a NULL sample buffer and AVErrorMaximumStillImageCaptureRequestsExceeded.
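A minimal pre-flight check along these lines might look as follows; BracketSettings and validate_bracket are hypothetical stand-ins, not part of this crate, and in real code the limit would come from -maxBracketedCaptureStillImageCount:

```rust
/// Hypothetical stand-in for one AVCaptureBracketedStillImageSettings entry.
struct BracketSettings {
    /// Exposure bias in EV for this picture of the bracket.
    #[allow(dead_code)]
    exposure_bias: f32,
}

/// Validate a bracket request against the receiver's reported maximum before
/// submitting it, mirroring the AVErrorMaximumStillImageCaptureRequestsExceeded
/// failure mode described above.
fn validate_bracket(settings: &[BracketSettings], max_count: usize) -> Result<(), String> {
    if max_count == 0 {
        return Err("current format does not support bracketed capture".into());
    }
    if settings.len() > max_count {
        return Err(format!(
            "bracket of {} images exceeds maximum of {}",
            settings.len(),
            max_count
        ));
    }
    Ok(())
}

fn main() {
    // A three-shot exposure bracket at -1, 0, and +1 EV.
    let bracket: Vec<BracketSettings> = (-1..=1)
        .map(|ev| BracketSettings { exposure_bias: ev as f32 })
        .collect();
    assert!(validate_bracket(&bracket, 4).is_ok());
    assert!(validate_bracket(&bracket, 2).is_err());
    assert!(validate_bracket(&bracket, 0).is_err());
    println!("ok");
}
```

Checking before submission avoids the documented failure mode in which the completion handler fires once per settings entry with a NULL sample buffer.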
pub unsafe fn isLensStabilizationDuringBracketedCaptureSupported(&self) -> bool
Deprecated: Use AVCapturePhotoOutput lensStabilizationDuringBracketedCaptureSupported instead.
Indicates whether the receiver supports lens stabilization during bracketed captures.
The receiver’s lensStabilizationDuringBracketedCaptureEnabled property can only be set if this property returns YES. Its value may change as the session’s -sessionPreset or input device’s -activeFormat changes. This read-only property is key-value observable.
pub unsafe fn isLensStabilizationDuringBracketedCaptureEnabled(&self) -> bool
Deprecated: Use AVCapturePhotoOutput with AVCapturePhotoBracketSettings instead.
Indicates whether the receiver should use lens stabilization during bracketed captures.
On a receiver where -isLensStabilizationDuringBracketedCaptureSupported returns YES, lens stabilization may be applied to the bracket to reduce blur commonly found in low light photos. When lens stabilization is enabled, bracketed still image captures incur additional latency. Lens stabilization is more effective with longer-exposure captures, and offers limited or no benefit for exposure durations shorter than 1/30 of a second. It is possible that during the bracket, the lens stabilization module may run out of correction range and therefore will not be active for every frame in the bracket. Each emitted CMSampleBuffer from the bracket will have a kCMSampleBufferAttachmentKey_StillImageLensStabilizationInfo attachment indicating what stabilization, if any, was applied to the buffer. The default value of -isLensStabilizationDuringBracketedCaptureEnabled is NO. This value will be set to NO when -isLensStabilizationDuringBracketedCaptureSupported changes to NO. Setting this property throws an NSInvalidArgumentException if -isLensStabilizationDuringBracketedCaptureSupported returns NO. This property is key-value observable.
pub unsafe fn setLensStabilizationDuringBracketedCaptureEnabled(&self, lens_stabilization_during_bracketed_capture_enabled: bool)
Deprecated: Use AVCapturePhotoOutput with AVCapturePhotoBracketSettings instead.
Setter for isLensStabilizationDuringBracketedCaptureEnabled.
pub unsafe fn prepareToCaptureStillImageBracketFromConnection_withSettingsArray_completionHandler(&self, connection: &AVCaptureConnection, settings: &NSArray<AVCaptureBracketedStillImageSettings>, handler: &DynBlock<dyn Fn(Bool, *mut NSError)>)
Deprecated: Use AVCapturePhotoOutput setPreparedPhotoSettingsArray:completionHandler: instead. Available on crate features AVCaptureSession and block2 only.
Allows the receiver to prepare resources in advance of capturing a still image bracket.
Parameter connection: The connection through which the still image bracket should be captured.
Parameter settings: An array of AVCaptureBracketedStillImageSettings objects. All must be of the same kind of AVCaptureBracketedStillImageSettings subclass, or an NSInvalidArgumentException is thrown.
Parameter handler: A user provided block that will be called asynchronously once resources have successfully been allocated for the specified bracketed capture operation. If sufficient resources could not be allocated, the “prepared” parameter contains NO, and the “error” parameter contains a non-nil error value. If [settings count] exceeds -maxBracketedCaptureStillImageCount, then AVErrorMaximumStillImageCaptureRequestsExceeded is returned. You should not assume that the completion handler will be called on a specific thread.
-maxBracketedCaptureStillImageCount tells you the maximum number of images that may be taken in a single bracket given the current AVCaptureDevice/AVCaptureSession/AVCaptureStillImageOutput configuration. But before taking a still image bracket, additional resources may need to be allocated. By calling -prepareToCaptureStillImageBracketFromConnection:withSettingsArray:completionHandler: first, you are able to deterministically know when the receiver is ready to capture the bracket with the specified settings array.
pub unsafe fn captureStillImageBracketAsynchronouslyFromConnection_withSettingsArray_completionHandler(&self, connection: &AVCaptureConnection, settings: &NSArray<AVCaptureBracketedStillImageSettings>, handler: &DynBlock<dyn Fn(*mut CMSampleBuffer, *mut AVCaptureBracketedStillImageSettings, *mut NSError)>)
Deprecated: Use AVCapturePhotoOutput capturePhotoWithSettings:delegate: instead. Available on crate features AVCaptureSession, block2, and objc2-core-media only.
Captures a still image bracket.
Parameter connection: The connection through which the still image bracket should be captured.
Parameter settings: An array of AVCaptureBracketedStillImageSettings objects. All must be of the same kind of AVCaptureBracketedStillImageSettings subclass, or an NSInvalidArgumentException is thrown.
Parameter handler: A user provided block that will be called asynchronously as each still image in the bracket is captured. If the capture request is successful, the “sampleBuffer” parameter contains a valid CMSampleBuffer, the “stillImageSettings” parameter contains the settings object corresponding to this still image, and the “error” parameter is nil. If the bracketed capture fails, the sample buffer is NULL and the error is non-nil. If [settings count] exceeds -maxBracketedCaptureStillImageCount, then AVErrorMaximumStillImageCaptureRequestsExceeded is returned. You should not assume that the completion handler will be called on a specific thread.
If you have not called -prepareToCaptureStillImageBracketFromConnection:withSettingsArray:completionHandler: for this still image bracket request, the bracket may not be taken immediately, as the receiver may internally need to prepare resources.
Methods from Deref<Target = AVCaptureOutput>
pub unsafe fn connections(&self) -> Retained<NSArray<AVCaptureConnection>>
Available on crate feature AVCaptureSession only.
The connections that describe the flow of media data to the receiver from AVCaptureInputs.
The value of this property is an NSArray of AVCaptureConnection objects, each describing the mapping between the receiver and the AVCaptureInputPorts of one or more AVCaptureInputs.
pub unsafe fn connectionWithMediaType(&self, media_type: &AVMediaType) -> Option<Retained<AVCaptureConnection>>
Available on crate features AVCaptureSession and AVMediaFormat only.
Returns the first connection in the connections array with an inputPort of the specified mediaType.
Parameter mediaType: An AVMediaType constant from AVMediaFormat.h, e.g. AVMediaTypeVideo.
This convenience method returns the first AVCaptureConnection in the receiver’s connections array that has an AVCaptureInputPort of the specified mediaType. If no connection with the specified mediaType is found, nil is returned.
pub unsafe fn transformedMetadataObjectForMetadataObject_connection(&self, metadata_object: &AVMetadataObject, connection: &AVCaptureConnection) -> Option<Retained<AVMetadataObject>>
Available on crate features AVCaptureSession and AVMetadataObject only.
Converts an AVMetadataObject’s visual properties to the receiver’s coordinates.
Parameter metadataObject: An AVMetadataObject originating from the same AVCaptureInput as the receiver.
Parameter connection: The receiver’s connection whose AVCaptureInput matches that of the metadata object to be converted.
Returns: An AVMetadataObject whose properties are in output coordinates.
AVMetadataObject bounds may be expressed as a rect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. Face metadata objects likewise express yaw and roll angles with respect to an unrotated picture. -transformedMetadataObjectForMetadataObject:connection: converts the visual properties in the coordinate space of the supplied AVMetadataObject to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. If the provided metadata object originates from an input source other than the preview layer’s, nil will be returned.
If an AVCaptureVideoDataOutput instance’s connection’s videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and/or flipping sample buffers as they pass through it. AVCaptureStillImageOutput, on the other hand, does not physically rotate its buffers. It attaches an appropriate kCGImagePropertyOrientation number to captured still image buffers (see ImageIO/CGImageProperties.h) indicating how the image should be displayed on playback. Likewise, AVCaptureMovieFileOutput does not physically apply orientation/mirroring to its sample buffers – it uses a QuickTime track matrix to indicate how the buffers should be rotated and/or flipped on playback.
transformedMetadataObjectForMetadataObject:connection: alters the visual properties of the provided metadata object to match the physical rotation / mirroring of the sample buffers provided by the receiver through the indicated connection. I.e., for video data output, adjusted metadata object coordinates are rotated/mirrored. For still image and movie file output, they are not.
pub unsafe fn metadataOutputRectOfInterestForRect(&self, rect_in_output_coordinates: CGRect) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle in the receiver’s coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver.
Parameter rectInOutputCoordinates: A CGRect in the receiver’s coordinates.
Returns: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
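For the trivial unrotated, unmirrored case, the {0,0}-to-{1,1} normalization described above reduces to a plain rescale. The sketch below is a hypothetical illustration only; Rect stands in for CGRect, and the code deliberately ignores the orientation, mirroring, and scaling handling that the real method performs:

```rust
/// Hypothetical rect type; the real API uses CGRect.
#[derive(Debug, PartialEq)]
struct Rect {
    x: f64,
    y: f64,
    w: f64,
    h: f64,
}

/// Convert a rect in output pixel coordinates to the normalized {0,0}–{1,1}
/// metadata-output space, assuming no rotation or mirroring.
fn to_metadata_rect(r: &Rect, output_w: f64, output_h: f64) -> Rect {
    Rect {
        x: r.x / output_w,
        y: r.y / output_h,
        w: r.w / output_w,
        h: r.h / output_h,
    }
}

fn main() {
    // A 960x540 region in the lower-right quadrant of a 1920x1080 output.
    let r = Rect { x: 960.0, y: 270.0, w: 960.0, h: 540.0 };
    let n = to_metadata_rect(&r, 1920.0, 1080.0);
    assert_eq!(n, Rect { x: 0.5, y: 0.25, w: 0.5, h: 0.5 });
    println!("{:?}", n);
}
```

The real conversion additionally compensates for the connection's orientation and mirroring, which is why the API method should always be preferred over hand-rolled math in production code.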
pub unsafe fn rectForMetadataOutputRectOfInterest(&self, rect_in_metadata_output_coordinates: CGRect) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver to a rectangle in the receiver’s coordinates.
Parameter rectInMetadataOutputCoordinates: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
Returns: A CGRect in the receiver’s coordinates.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the receiver into the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
pub unsafe fn isDeferredStartSupported(&self) -> bool
A BOOL value that indicates whether the output supports deferred start.
You can only set the deferredStartEnabled property value to true if the output supports deferred start.
pub unsafe fn isDeferredStartEnabled(&self) -> bool
A BOOL value that indicates whether to defer starting this capture output.
When this value is true, the session does not prepare the output’s resources until some time after AVCaptureSession/startRunning returns. You can start the visual parts of your user interface (e.g. preview) prior to other parts (e.g. photo/movie capture, metadata output, etc..) to improve startup performance. Set this value to false for outputs that your app needs for startup, and true for the ones it does not need to start immediately. For example, an AVCaptureVideoDataOutput that you intend to use for displaying preview should set this value to false, so that the frames are available as soon as possible.
By default, for apps that are linked on or after iOS 26, this property value is true for AVCapturePhotoOutput and AVCaptureFileOutput subclasses if supported, and false otherwise. When set to true for AVCapturePhotoOutput, if you want to support multiple capture requests before running deferred start, set AVCapturePhotoOutput/responsiveCaptureEnabled to true on that output.
If deferredStartSupported is false, setting this property value to true results in the system throwing an NSInvalidArgumentException.
- Note: Set this value before calling AVCaptureSession/commitConfiguration, as it requires a lengthy reconfiguration of the capture render pipeline.
pub unsafe fn setDeferredStartEnabled(&self, deferred_start_enabled: bool)
Setter for isDeferredStartEnabled.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations
impl AsRef<AVCaptureOutput> for AVCaptureStillImageOutput
fn as_ref(&self) -> &AVCaptureOutput
impl AsRef<AnyObject> for AVCaptureStillImageOutput
impl AsRef<NSObject> for AVCaptureStillImageOutput
impl Borrow<AVCaptureOutput> for AVCaptureStillImageOutput
fn borrow(&self) -> &AVCaptureOutput
impl Borrow<AnyObject> for AVCaptureStillImageOutput
impl Borrow<NSObject> for AVCaptureStillImageOutput
impl ClassType for AVCaptureStillImageOutput
const NAME: &'static str = "AVCaptureStillImageOutput"
type Super = AVCaptureOutput
type ThreadKind = <<AVCaptureStillImageOutput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureStillImageOutput
impl Deref for AVCaptureStillImageOutput
impl Hash for AVCaptureStillImageOutput
impl Message for AVCaptureStillImageOutput
impl NSObjectProtocol for AVCaptureStillImageOutput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref