pub struct AVCaptureVideoDataOutput { /* private fields */ }
Available on crate features AVCaptureOutputBase and AVCaptureVideoDataOutput only.
AVCaptureVideoDataOutput is a concrete subclass of AVCaptureOutput that can be used to process uncompressed or compressed frames from the video being captured.
Instances of AVCaptureVideoDataOutput produce video frames suitable for processing using other media APIs. Applications can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
See also Apple’s documentation
Implementations§
impl AVCaptureVideoDataOutput
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn setSampleBufferDelegate_queue(
    &self,
    sample_buffer_delegate: Option<&ProtocolObject<dyn AVCaptureVideoDataOutputSampleBufferDelegate>>,
    sample_buffer_callback_queue: Option<&DispatchQueue>,
)
Available on crate feature dispatch2 only.
Sets the receiver’s delegate, which will accept captured buffers, and the dispatch queue on which the delegate will be called.
Parameter sampleBufferDelegate: An object conforming to the AVCaptureVideoDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured.
Parameter sampleBufferCallbackQueue: A dispatch queue on which all sample buffer delegate methods will be called.
When a new video sample buffer is captured it will be vended to the sample buffer delegate using the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. All delegate methods will be called on the specified dispatch queue. If the queue is blocked when new frames are captured, those frames will be automatically dropped at a time determined by the value of the alwaysDiscardsLateVideoFrames property. This allows clients to process existing frames on the same queue without having to manage the potential memory usage increases that would otherwise occur when that processing is unable to keep up with the rate of incoming frames. If their frame processing is consistently unable to keep up with the rate of incoming frames, clients should consider using the minFrameDuration property, which will generally yield better performance characteristics and more consistent frame rates than frame dropping alone.
Clients that need to minimize the chances of frames being dropped should specify a queue on which a sufficiently small amount of processing is being done outside of receiving sample buffers. However, if such clients migrate extra processing to another queue, they are responsible for ensuring that memory usage does not grow without bound from frames that have not been processed.
A serial dispatch queue must be used to guarantee that video frames will be delivered in order. The sampleBufferCallbackQueue parameter may not be NULL, except when setting the sampleBufferDelegate to nil; otherwise -setSampleBufferDelegate:queue: throws an NSInvalidArgumentException.
§Safety
sample_buffer_callback_queue possibly has additional threading requirements.
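A minimal sketch of the wiring described above. It assumes you have already declared a delegate type with objc2 (e.g. via the define_class! macro) that conforms to AVCaptureVideoDataOutputSampleBufferDelegate, and it assumes dispatch2’s DispatchQueue::new constructor, where a None attribute yields a serial queue (check your dispatch2 version):

use dispatch2::DispatchQueue;
use objc2::runtime::ProtocolObject;
use objc2_av_foundation::{
    AVCaptureVideoDataOutput, AVCaptureVideoDataOutputSampleBufferDelegate,
};

fn attach_delegate(
    output: &AVCaptureVideoDataOutput,
    delegate: &ProtocolObject<dyn AVCaptureVideoDataOutputSampleBufferDelegate>,
) {
    // A serial queue guarantees in-order frame delivery (see above).
    // Assumption: `None` as the queue attribute creates a serial queue.
    let queue = DispatchQueue::new("com.example.video-frames", None);
    unsafe { output.setSampleBufferDelegate_queue(Some(delegate), Some(&queue)) };
}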
pub unsafe fn sampleBufferDelegate(
    &self,
) -> Option<Retained<ProtocolObject<dyn AVCaptureVideoDataOutputSampleBufferDelegate>>>
The receiver’s delegate.
The value of this property is an object conforming to the AVCaptureVideoDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured. The delegate is set using the setSampleBufferDelegate:queue: method.
pub unsafe fn sampleBufferCallbackQueue(
    &self,
) -> Option<Retained<DispatchQueue>>
Available on crate feature dispatch2 only.
The dispatch queue on which all sample buffer delegate methods will be called.
The value of this property is a dispatch_queue_t. The queue is set using the setSampleBufferDelegate:queue: method.
pub unsafe fn videoSettings(
    &self,
) -> Retained<NSDictionary<NSString, AnyObject>>
Specifies the settings used to decode or re-encode video before it is output by the receiver.
See AVVideoSettings.h for more information on how to construct a video settings dictionary. To receive samples in their device native format, set this property to an empty dictionary (i.e. [NSDictionary dictionary]). To receive samples in a default uncompressed format, set this property to nil. Note that after this property is set to nil, subsequent querying of this property will yield a non-nil dictionary reflecting the settings used by the AVCaptureSession’s current sessionPreset.
On iOS versions prior to iOS 16.0, the only supported key is kCVPixelBufferPixelFormatTypeKey. Use -availableVideoCVPixelFormatTypes for the list of supported pixel formats. For apps linked on or after iOS 16.0, kCVPixelBufferPixelFormatTypeKey, kCVPixelBufferWidthKey, and kCVPixelBufferHeightKey are supported. The width and height must match the videoOrientation specified on the output’s AVCaptureConnection or an NSInvalidArgumentException is thrown. The aspect ratio of width and height must match the aspect ratio of the source’s activeFormat (corrected for the connection’s videoOrientation) or an NSInvalidArgumentException is thrown. If width or height exceeds the source’s activeFormat’s width or height, an NSInvalidArgumentException is thrown. Changing width and height when deliversPreviewSizedOutputBuffers is set to YES is not supported and throws an NSInvalidArgumentException.
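A hedged sketch of the nil / empty-dictionary semantics described above, using only the setter and getter shown on this page:

use objc2_av_foundation::AVCaptureVideoDataOutput;
use objc2_foundation::NSDictionary;

fn configure_format(output: &AVCaptureVideoDataOutput) {
    unsafe {
        // Empty dictionary: samples arrive in the device-native format.
        output.setVideoSettings(Some(&NSDictionary::new()));

        // nil: fall back to a default uncompressed format. Reading the
        // property back afterwards yields a non-nil dictionary reflecting
        // the session's current sessionPreset.
        output.setVideoSettings(None);
        let effective = output.videoSettings();
        println!("effective settings has {} key(s)", effective.len());
    }
}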
pub unsafe fn setVideoSettings(
    &self,
    video_settings: Option<&NSDictionary<NSString, AnyObject>>,
)
Setter for videoSettings.
This is copied when set.
§Safety
video_settings generic should be of the correct type.
pub unsafe fn recommendedVideoSettingsForAssetWriterWithOutputFileType(
    &self,
    output_file_type: &AVFileType,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
Available on crate feature AVMediaFormat only.
Specifies the recommended settings for use with an AVAssetWriterInput.
Parameter outputFileType: Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Returns: A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
The value of this property is an NSDictionary containing values for compression settings keys defined in AVVideoSettings.h. This dictionary is suitable for use as the “outputSettings” parameter when creating an AVAssetWriterInput, for example:
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings sourceFormatHint:hint];
The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in depth discussion). For QuickTime movie and ISO file types, the recommended video settings will produce output comparable to that of AVCaptureMovieFileOutput.
Note that the dictionary of settings is dependent on the current configuration of the receiver’s AVCaptureSession and its inputs. The settings dictionary may change if the session’s configuration changes. As such, you should configure your session first, then query the recommended video settings. As of iOS 8.3, movies produced with these settings successfully import into the iOS camera roll and sync to and from like devices via iTunes.
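A minimal sketch from the Rust side, assuming the AVFileTypeQuickTimeMovie static from the crate’s AVMediaFormat feature. Per the note above, query only after the session is configured:

use objc2_av_foundation::{AVCaptureVideoDataOutput, AVFileTypeQuickTimeMovie};

fn writer_settings(output: &AVCaptureVideoDataOutput) {
    unsafe {
        // The recommendation depends on the session's current
        // configuration and inputs, so configure the session first.
        if let Some(settings) = output
            .recommendedVideoSettingsForAssetWriterWithOutputFileType(AVFileTypeQuickTimeMovie)
        {
            // Suitable as the outputSettings for an AVAssetWriterInput.
            println!("{} recommended key(s)", settings.len());
        }
    }
}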
pub unsafe fn availableVideoCodecTypesForAssetWriterWithOutputFileType(
    &self,
    output_file_type: &AVFileType,
) -> Retained<NSArray<AVVideoCodecType>>
Available on crate features AVMediaFormat and AVVideoSettings only.
Specifies the available video codecs for use with AVAssetWriter and a given file type.
Parameter outputFileType: Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Returns: An array of video codecs; see AVVideoSettings.h for a full list.
This method allows you to query the available video codecs that may be used when specifying an AVVideoCodecKey in -recommendedVideoSettingsForVideoCodecType:assetWriterOutputFileType:. When specifying an outputFileType of AVFileTypeQuickTimeMovie, video codecs are ordered identically to -[AVCaptureMovieFileOutput availableVideoCodecTypes].
pub unsafe fn recommendedVideoSettingsForVideoCodecType_assetWriterOutputFileType(
    &self,
    video_codec_type: &AVVideoCodecType,
    output_file_type: &AVFileType,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
Available on crate features AVMediaFormat and AVVideoSettings only.
Specifies the recommended settings for a particular video codec type, to be used with an AVAssetWriterInput.
Parameter videoCodecType: Specifies the desired AVVideoCodecKey to be used for compression (see AVVideoSettings.h).
Parameter outputFileType: Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Returns: A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
The value of this property is an NSDictionary containing values for compression settings keys defined in AVVideoSettings.h. This dictionary is suitable for use as the “outputSettings” parameter when creating an AVAssetWriterInput, for example:
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings sourceFormatHint:hint];
The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in depth discussion). For QuickTime movie and ISO file types, the recommended video settings will produce output comparable to that of AVCaptureMovieFileOutput.
The videoCodecType string provided must be present in the availableVideoCodecTypesForAssetWriterWithOutputFileType: array, or an NSInvalidArgumentException is thrown.
Note that the dictionary of settings is dependent on the current configuration of the receiver’s AVCaptureSession and its inputs. The settings dictionary may change if the session’s configuration changes. As such, you should configure your session first, then query the recommended video settings. As of iOS 8.3, movies produced with these settings successfully import into the iOS camera roll and sync to and from like devices via iTunes.
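A hedged sketch that guards the call as required above: the codec type must come from the “available” array or the call throws NSInvalidArgumentException. It assumes the AVVideoCodecTypeHEVC and AVFileTypeQuickTimeMovie statics and NSArray’s containsObject method:

use objc2_av_foundation::{
    AVCaptureVideoDataOutput, AVFileTypeQuickTimeMovie, AVVideoCodecTypeHEVC,
};

fn hevc_writer_settings(output: &AVCaptureVideoDataOutput) {
    unsafe {
        let available = output
            .availableVideoCodecTypesForAssetWriterWithOutputFileType(AVFileTypeQuickTimeMovie);
        // Guard against NSInvalidArgumentException: the codec type must
        // appear in the available array (assumption: containsObject is
        // exposed on NSArray in your objc2-foundation version).
        if available.containsObject(AVVideoCodecTypeHEVC) {
            let settings = output
                .recommendedVideoSettingsForVideoCodecType_assetWriterOutputFileType(
                    AVVideoCodecTypeHEVC,
                    AVFileTypeQuickTimeMovie,
                );
            println!("got HEVC settings: {}", settings.is_some());
        }
    }
}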
pub unsafe fn recommendedVideoSettingsForVideoCodecType_assetWriterOutputFileType_outputFileURL(
    &self,
    video_codec_type: &AVVideoCodecType,
    output_file_type: &AVFileType,
    output_file_url: Option<&NSURL>,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
Available on crate features AVMediaFormat and AVVideoSettings only.
Specifies the recommended settings for a particular video codec type with an output file URL, to be used with an AVAssetWriterInput.
Parameter videoCodecType: Specifies the desired AVVideoCodecKey to be used for compression (see AVVideoSettings.h).
Parameter outputFileType: Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Parameter outputFileURL: Specifies the output URL of the file to be written.
If you wish to capture onto an external storage device, get an external storage device of type AVExternalStorageDevice (as defined in AVExternalStorageDevice.h): [[AVExternalStorageDeviceDiscoverySession sharedSession] externalStorageDevices]
Then use [externalStorageDevice nextAvailableURLsWithPathExtensions:pathExtensions error:&error] to get the output file URL.
Returns: A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
The value of this property is an NSDictionary containing values for compression settings keys defined in AVVideoSettings.h. This dictionary is suitable for use as the “outputSettings” parameter when creating an AVAssetWriterInput, for example:
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings sourceFormatHint:hint];
The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in depth discussion). For QuickTime movie and ISO file types, the recommended video settings will produce output comparable to that of AVCaptureMovieFileOutput.
The videoCodecType string provided must be present in the availableVideoCodecTypesForAssetWriterWithOutputFileType: array, or an NSInvalidArgumentException is thrown.
Note that the dictionary of settings is dependent on the current configuration of the receiver’s AVCaptureSession and its inputs. The settings dictionary may change if the session’s configuration changes. As such, you should configure your session first, then query the recommended video settings. As of iOS 8.3, movies produced with these settings successfully import into the iOS camera roll and sync to and from like devices via iTunes.
pub unsafe fn recommendedMovieMetadataForVideoCodecType_assetWriterOutputFileType(
    &self,
    video_codec_type: &AVVideoCodecType,
    output_file_type: &AVFileType,
) -> Option<Retained<NSArray<AVMetadataItem>>>
Available on crate features AVMediaFormat and AVMetadataItem and AVVideoSettings only.
Recommends movie-level metadata for a particular video codec type and output file type, to be used with an asset writer input.
- Parameter videoCodecType: The desired AVVideoCodecKey to be used for compression (see <doc://com.apple.documentation/documentation/avfoundation/video-settings>).
- Parameter outputFileType: Specifies the UTI of the file type to be written (see <doc://com.apple.documentation/documentation/avfoundation/avfiletype>).
- Returns: A fully populated array of AVMetadataItem objects compatible with AVAssetWriter.
The value of this property is an array of AVMetadataItem objects representing the collection of top-level metadata to be written in each output file. This array is suitable to use as the AVAssetWriter/metadata property before you have called AVAssetWriter/startWriting. For more details see <doc://com.apple.documentation/documentation/avfoundation/avassetwriter/startwriting()>.
The videoCodecType string you provide must be present in the availableVideoCodecTypesForAssetWriterWithOutputFileType: array, or an NSInvalidArgumentException is thrown.
For clients writing files using a ProRes Raw codec type, white balance must be locked (call AVCaptureDevice/setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:completionHandler:) before querying this property, or an NSInvalidArgumentException is thrown.
- Note: The array of metadata is dependent on the current configuration of the receiver’s AVCaptureSession and its inputs. The array may change when the session’s configuration changes. As such, you should configure and start your session first, then query this method.
pub unsafe fn recommendedMediaTimeScaleForAssetWriter(&self) -> CMTimeScale
Available on crate feature objc2-core-media only.
Indicates the recommended media timescale for the video track.
- Returns: The recommended media timescale based on the active capture session’s inputs. It is never less than 600. It may or may not be a multiple of 600.
pub unsafe fn availableVideoCVPixelFormatTypes(
    &self,
) -> Retained<NSArray<NSNumber>>
Indicates the supported video pixel formats that can be specified in videoSettings.
The value of this property is an NSArray of NSNumbers that can be used as values for the kCVPixelBufferPixelFormatTypeKey in the receiver’s videoSettings property. The formats are listed in an unspecified order. This list may change if the activeFormat of the AVCaptureDevice connected to the receiver changes.
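A minimal sketch checking for a specific pixel format before requesting it in videoSettings. The FourCC constant for kCVPixelFormatType_32BGRA (‘BGRA’) is written out by hand here rather than pulled from a Core Video binding:

use objc2_av_foundation::AVCaptureVideoDataOutput;

// FourCC 'BGRA', i.e. kCVPixelFormatType_32BGRA.
const PIXEL_FORMAT_32BGRA: u32 = 0x4247_5241;

fn supports_bgra(output: &AVCaptureVideoDataOutput) -> bool {
    unsafe {
        output
            .availableVideoCVPixelFormatTypes()
            .iter()
            .any(|n| n.as_u32() == PIXEL_FORMAT_32BGRA)
    }
}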
pub unsafe fn availableVideoCodecTypes(
    &self,
) -> Retained<NSArray<AVVideoCodecType>>
Available on crate feature AVVideoSettings only.
Indicates the supported video codec formats that can be specified in videoSettings.
The value of this property is an NSArray of AVVideoCodecTypes that can be used as values for the AVVideoCodecKey in the receiver’s videoSettings property.
pub unsafe fn minFrameDuration(&self) -> CMTime
👎 Deprecated: Use AVCaptureConnection’s videoMinFrameDuration property instead.
Available on crate feature objc2-core-media only.
Specifies the minimum time interval between which the receiver should output consecutive video frames.
The value of this property is a CMTime specifying the minimum duration of each video frame output by the receiver, placing a lower bound on the amount of time that should separate consecutive frames. This is equivalent to the inverse of the maximum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited maximum frame rate. The default value is kCMTimeInvalid. As of iOS 5.0, minFrameDuration is deprecated. Use AVCaptureConnection’s videoMinFrameDuration property instead.
pub unsafe fn setMinFrameDuration(&self, min_frame_duration: CMTime)
👎 Deprecated: Use AVCaptureConnection’s videoMinFrameDuration property instead.
Available on crate feature objc2-core-media only.
Setter for minFrameDuration.
pub unsafe fn alwaysDiscardsLateVideoFrames(&self) -> bool
Specifies whether the receiver should always discard any video frame that is not processed before the next frame is captured.
When the value of this property is YES, the receiver will immediately discard frames that are captured while the dispatch queue handling existing frames is blocked in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. When the value of this property is NO, delegates will be allowed more time to process old frames before new frames are discarded, but application memory usage may increase significantly as a result. The default value is YES.
pub unsafe fn setAlwaysDiscardsLateVideoFrames(
    &self,
    always_discards_late_video_frames: bool,
)
Setter for alwaysDiscardsLateVideoFrames.
pub unsafe fn automaticallyConfiguresOutputBufferDimensions(&self) -> bool
Indicates whether the receiver automatically configures the size of output buffers.
Default value is YES. In most configurations, AVCaptureVideoDataOutput delivers full-resolution buffers, that is, buffers with the same dimensions as the source AVCaptureDevice’s activeFormat’s videoDimensions. When this property is set to YES, the receiver is free to configure the dimensions of the buffers delivered to -captureOutput:didOutputSampleBuffer:fromConnection:, such that they are a smaller preview size (roughly the size of the screen). For instance, when the AVCaptureSession’s sessionPreset is set to AVCaptureSessionPresetPhoto, it is assumed that video data output buffers are being delivered as a preview proxy. Likewise, if an AVCapturePhotoOutput is present in the session with livePhotoCaptureEnabled, it is assumed that video data output is being used for photo preview, and thus preview-sized buffers are a better choice than full-res buffers. You can query deliversPreviewSizedOutputBuffers to find out whether automatic configuration of output buffer dimensions is currently downscaling buffers to a preview size. You can also query the videoSettings property to find out the exact width and height being delivered. If you wish to manually set deliversPreviewSizedOutputBuffers, you must first set automaticallyConfiguresOutputBufferDimensions to NO.
pub unsafe fn setAutomaticallyConfiguresOutputBufferDimensions(
    &self,
    automatically_configures_output_buffer_dimensions: bool,
)
Setter for automaticallyConfiguresOutputBufferDimensions.
pub unsafe fn deliversPreviewSizedOutputBuffers(&self) -> bool
Indicates whether the receiver is currently configured to deliver preview sized buffers.
If you wish to manually set deliversPreviewSizedOutputBuffers, you must first set automaticallyConfiguresOutputBufferDimensions to NO. When deliversPreviewSizedOutputBuffers is set to YES, auto focus, exposure, and white balance changes are quicker. AVCaptureVideoDataOutput assumes that the buffers are being used for on-screen preview rather than recording.
When AVCaptureDevice.activeFormat supports ProRes Raw video, setting deliversPreviewSizedOutputBuffers delivers buffers in a 422 format that can be used for proxy video recording.
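A minimal sketch of the required ordering described above: manual control of deliversPreviewSizedOutputBuffers only works once automatic configuration is disabled.

use objc2_av_foundation::AVCaptureVideoDataOutput;

fn request_preview_sized_buffers(output: &AVCaptureVideoDataOutput) {
    unsafe {
        // Disable automatic configuration first; otherwise setting
        // deliversPreviewSizedOutputBuffers is not supported.
        output.setAutomaticallyConfiguresOutputBufferDimensions(false);
        output.setDeliversPreviewSizedOutputBuffers(true);
    }
}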
pub unsafe fn setDeliversPreviewSizedOutputBuffers(
    &self,
    delivers_preview_sized_output_buffers: bool,
)
Setter for deliversPreviewSizedOutputBuffers.
pub unsafe fn preparesCellularRadioForNetworkConnection(&self) -> bool
Indicates whether the receiver should prepare the cellular radio for imminent network activity.
Apps that scan video data output buffers for information that will result in network activity (such as detecting a QR code containing a URL) should set this property to true to allow the cellular radio to prepare for an imminent network request. Enabling this property requires a lengthy reconfiguration of the capture render pipeline, so you should set this property to true before calling AVCaptureSession/startRunning.
Using this API requires your app to adopt the entitlement com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning.
pub unsafe fn setPreparesCellularRadioForNetworkConnection(
    &self,
    prepares_cellular_radio_for_network_connection: bool,
)
Setter for preparesCellularRadioForNetworkConnection.
pub unsafe fn preservesDynamicHDRMetadata(&self) -> bool
Indicates whether the receiver should preserve dynamic HDR metadata as an attachment on the output sample buffer’s underlying pixel buffer.
Set this property to true if you wish to use AVCaptureVideoDataOutput with AVAssetWriter to record HDR movies. You must also set kVTCompressionPropertyKey_PreserveDynamicHDRMetadata to true in the compression settings you pass to your AVAssetWriterInput. These compression settings are represented under the AVVideoCompressionPropertiesKey sub-dictionary of your top-level AVVideoSettings (see <doc://com.apple.documentation/documentation/avfoundation/video-settings>). When you set this key to true, performance improves, as the encoder is able to skip HDR metadata calculation for every frame. The default value is false.
pub unsafe fn setPreservesDynamicHDRMetadata(
    &self,
    preserves_dynamic_hdr_metadata: bool,
)
Setter for preservesDynamicHDRMetadata.
Methods from Deref<Target = AVCaptureOutput>§
pub unsafe fn connections(&self) -> Retained<NSArray<AVCaptureConnection>>
Available on crate feature AVCaptureSession only.
The connections that describe the flow of media data to the receiver from AVCaptureInputs.
The value of this property is an NSArray of AVCaptureConnection objects, each describing the mapping between the receiver and the AVCaptureInputPorts of one or more AVCaptureInputs.
pub unsafe fn connectionWithMediaType(
    &self,
    media_type: &AVMediaType,
) -> Option<Retained<AVCaptureConnection>>
Available on crate features AVCaptureSession and AVMediaFormat only.
Returns the first connection in the connections array with an inputPort of the specified mediaType.
Parameter mediaType: An AVMediaType constant from AVMediaFormat.h, e.g. AVMediaTypeVideo.
This convenience method returns the first AVCaptureConnection in the receiver’s connections array that has an AVCaptureInputPort of the specified mediaType. If no connection with the specified mediaType is found, nil is returned.
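A minimal sketch, assuming the AVMediaTypeVideo static from the AVMediaFormat feature. Since these methods come from Deref<Target = AVCaptureOutput>, they are callable directly on AVCaptureVideoDataOutput:

use objc2_av_foundation::{AVCaptureVideoDataOutput, AVMediaTypeVideo};

fn video_connection(output: &AVCaptureVideoDataOutput) {
    unsafe {
        if let Some(connection) = output.connectionWithMediaType(AVMediaTypeVideo) {
            // Adjust per-connection properties (orientation, mirroring, ...) here.
            let _ = connection;
        }
    }
}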
pub unsafe fn transformedMetadataObjectForMetadataObject_connection(
    &self,
    metadata_object: &AVMetadataObject,
    connection: &AVCaptureConnection,
) -> Option<Retained<AVMetadataObject>>
Available on crate features AVCaptureSession and AVMetadataObject only.
Converts an AVMetadataObject’s visual properties to the receiver’s coordinates.
Parameter metadataObject: An AVMetadataObject originating from the same AVCaptureInput as the receiver.
Parameter connection: The receiver’s connection whose AVCaptureInput matches that of the metadata object to be converted.
Returns: An AVMetadataObject whose properties are in output coordinates.
AVMetadataObject bounds may be expressed as a rect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. Face metadata objects likewise express yaw and roll angles with respect to an unrotated picture. -transformedMetadataObjectForMetadataObject:connection: converts the visual properties in the coordinate space of the supplied AVMetadataObject to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. If the provided metadata object originates from an input source other than the preview layer’s, nil will be returned.
If an AVCaptureVideoDataOutput instance’s connection’s videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and/or flipping sample buffers as they pass through it. AVCaptureStillImageOutput, on the other hand, does not physically rotate its buffers. It attaches an appropriate kCGImagePropertyOrientation number to captured still image buffers (see ImageIO/CGImageProperties.h) indicating how the image should be displayed on playback. Likewise, AVCaptureMovieFileOutput does not physically apply orientation/mirroring to its sample buffers – it uses a QuickTime track matrix to indicate how the buffers should be rotated and/or flipped on playback.
transformedMetadataObjectForMetadataObject:connection: alters the visual properties of the provided metadata object to match the physical rotation / mirroring of the sample buffers provided by the receiver through the indicated connection. I.e., for video data output, adjusted metadata object coordinates are rotated/mirrored. For still image and movie file output, they are not.
pub unsafe fn metadataOutputRectOfInterestForRect(
    &self,
    rect_in_output_coordinates: CGRect,
) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle in the receiver’s coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver.
Parameter rectInOutputCoordinates: A CGRect in the receiver’s coordinates.
Returns: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
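A hedged sketch mapping a full output buffer to a normalized rect of interest. The 1920x1080 dimensions are hypothetical; read the real width and height from the receiver’s videoSettings:

use objc2_av_foundation::AVCaptureVideoDataOutput;
use objc2_core_foundation::{CGPoint, CGRect, CGSize};

fn full_frame_rect_of_interest(output: &AVCaptureVideoDataOutput) -> CGRect {
    // Hypothetical buffer dimensions; substitute your actual dimensions.
    let full = CGRect {
        origin: CGPoint { x: 0.0, y: 0.0 },
        size: CGSize { width: 1920.0, height: 1080.0 },
    };
    unsafe { output.metadataOutputRectOfInterestForRect(full) }
}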
pub unsafe fn rectForMetadataOutputRectOfInterest(
    &self,
    rect_in_metadata_output_coordinates: CGRect,
) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver to a rectangle in the receiver’s coordinates.
Parameter rectInMetadataOutputCoordinates: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
Returns: A CGRect in the receiver’s coordinates.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
pub unsafe fn isDeferredStartSupported(&self) -> bool
A BOOL value that indicates whether the output supports deferred start.
You can only set the deferredStartEnabled property value to true if the output supports deferred start.
pub unsafe fn isDeferredStartEnabled(&self) -> bool
A BOOL value that indicates whether to defer starting this capture output.
When this value is true, the session does not prepare the output’s resources until some time after AVCaptureSession/startRunning returns. You can start the visual parts of your user interface (e.g. preview) prior to other parts (e.g. photo/movie capture, metadata output, etc.) to improve startup performance. Set this value to false for outputs that your app needs for startup, and true for the ones it does not need to start immediately. For example, an AVCaptureVideoDataOutput that you intend to use for displaying preview should set this value to false, so that the frames are available as soon as possible.
By default, for apps that are linked on or after iOS 26, this property value is true for AVCapturePhotoOutput and AVCaptureFileOutput subclasses if supported, and false otherwise. When set to true for AVCapturePhotoOutput, if you want to support multiple capture requests before running deferred start, set AVCapturePhotoOutput/responsiveCaptureEnabled to true on that output.
If deferredStartSupported is false, setting this property value to true results in the system throwing an NSInvalidArgumentException.
- Note: Set this value before calling AVCaptureSession/commitConfiguration, as it requires a lengthy reconfiguration of the capture render pipeline.
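A minimal sketch of the guarded opt-in described above: enabling deferred start on an output that does not support it throws NSInvalidArgumentException, so check support first and set the value before commitConfiguration.

use objc2_av_foundation::AVCaptureVideoDataOutput;

fn defer_start_if_supported(output: &AVCaptureVideoDataOutput) {
    unsafe {
        // Only opt in where supported; do this before commitConfiguration.
        if output.isDeferredStartSupported() {
            output.setDeferredStartEnabled(true);
        }
    }
}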
pub unsafe fn setDeferredStartEnabled(&self, deferred_start_enabled: bool)
Setter for isDeferredStartEnabled.
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎 Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations§
impl AsRef<AVCaptureOutput> for AVCaptureVideoDataOutput
fn as_ref(&self) -> &AVCaptureOutput
impl AsRef<AnyObject> for AVCaptureVideoDataOutput
impl AsRef<NSObject> for AVCaptureVideoDataOutput
impl Borrow<AVCaptureOutput> for AVCaptureVideoDataOutput
fn borrow(&self) -> &AVCaptureOutput
impl Borrow<AnyObject> for AVCaptureVideoDataOutput
impl Borrow<NSObject> for AVCaptureVideoDataOutput
impl ClassType for AVCaptureVideoDataOutput
const NAME: &'static str = "AVCaptureVideoDataOutput"
type Super = AVCaptureOutput
type ThreadKind = <<AVCaptureVideoDataOutput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureVideoDataOutput
impl Deref for AVCaptureVideoDataOutput
impl Hash for AVCaptureVideoDataOutput
impl Message for AVCaptureVideoDataOutput
impl NSObjectProtocol for AVCaptureVideoDataOutput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎 Deprecated: Use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.