Struct AVCaptureMovieFileOutput

pub struct AVCaptureMovieFileOutput { /* private fields */ }
Available on crate features AVCaptureFileOutput and AVCaptureOutputBase only.

AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput that writes captured media to QuickTime movie files.

AVCaptureMovieFileOutput implements the complete file recording interface declared by AVCaptureFileOutput for writing media data to QuickTime movie files. In addition, instances of AVCaptureMovieFileOutput allow clients to configure options specific to the QuickTime file format: writing metadata collections to each file, specifying media encoding options for each track (macOS only), and specifying the interval at which movie fragments should be written.

See also Apple’s documentation

Implementations§


impl AVCaptureMovieFileOutput


pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>


pub unsafe fn new() -> Retained<Self>


pub unsafe fn movieFragmentInterval(&self) -> CMTime

Available on crate feature objc2-core-media only.

Specifies the frequency with which movie fragments should be written.

When movie fragments are used, a partially written QuickTime movie file whose writing is unexpectedly interrupted can be successfully opened and played up to multiples of the specified time interval. A value of kCMTimeInvalid indicates that movie fragments should not be used, and that only a movie atom describing all of the media in the file should be written. The default value of this property is ten seconds.

Changing the value of this property will not affect the movie fragment interval of the file currently being written, if there is one.

For best writing performance on external storage devices, set the movieFragmentInterval to 10 seconds or greater. If the size of a movie fragment is greater than or equal to 2 GB, an interval is added at the 2 GB mark.
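As a sketch of configuring this property (assumptions: an Apple target with the objc2-av-foundation and objc2-core-media crates and the relevant crate features enabled; the CMTime field and flag names are assumed to mirror Core Media's public C layout and its kCMTimeFlags_Valid constant):

```rust
use objc2_av_foundation::AVCaptureMovieFileOutput;
use objc2_core_media::{CMTime, CMTimeFlags};

unsafe fn configure_fragments(output: &AVCaptureMovieFileOutput) {
    // 10-second fragments (value / timescale = 10 / 1 s): the documented
    // default, and the recommended minimum for external storage devices.
    let ten_seconds = CMTime {
        value: 10,
        timescale: 1,
        flags: CMTimeFlags::Valid,
        epoch: 0,
    };
    output.setMovieFragmentInterval(ten_seconds);
    // To disable fragments entirely, pass kCMTimeInvalid instead; only a
    // movie atom describing all of the media is then written.
}
```

Note that changing the interval does not affect a file already being written; the new value takes effect for the next recording.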


pub unsafe fn setMovieFragmentInterval(&self, movie_fragment_interval: CMTime)

Available on crate feature objc2-core-media only.

pub unsafe fn metadata(&self) -> Option<Retained<NSArray<AVMetadataItem>>>

Available on crate feature AVMetadataItem only.

A collection of metadata to be written to the receiver’s output files.

The value of this property is an array of AVMetadataItem objects representing the collection of top-level metadata to be written in each output file.


pub unsafe fn setMetadata(&self, metadata: Option<&NSArray<AVMetadataItem>>)

Available on crate feature AVMetadataItem only.

Setter for metadata.

This is copied when set.


pub unsafe fn availableVideoCodecTypes( &self, ) -> Retained<NSArray<AVVideoCodecType>>

Available on crate feature AVVideoSettings only.

Indicates the supported video codec formats that can be specified in setOutputSettingsForConnection:.

The value of this property is an NSArray of AVVideoCodecTypes that can be used as values for the AVVideoCodecKey in the receiver’s setOutputSettingsForConnection: dictionary. The array of available video codecs may change depending on the current session preset. The first codec in the array is used by default when recording a file.


pub unsafe fn supportedOutputSettingsKeysForConnection( &self, connection: &AVCaptureConnection, ) -> Retained<NSArray<NSString>>

Available on crate feature AVCaptureSession only.

Indicates the supported keys that can be specified in setOutputSettings:forConnection:.

Parameter connection: The connection delivering the media to be encoded.

Returns an NSArray of NSStrings listing the allowable keys in the receiver’s setOutputSettings:forConnection: dictionary.


pub unsafe fn outputSettingsForConnection( &self, connection: &AVCaptureConnection, ) -> Retained<NSDictionary<NSString, AnyObject>>

Available on crate feature AVCaptureSession only.

Returns the options the receiver uses to encode media from the given connection as it is being recorded.

Parameter connection: The connection delivering the media to be encoded.

Returns: An NSDictionary of output settings.

See AVAudioSettings.h for audio connections or AVVideoSettings.h for video connections for more information on the structure of an output settings dictionary. If the returned value is an empty dictionary (i.e. +[NSDictionary dictionary]), the format of the media from the connection will not be changed before being written to the file. If -setOutputSettings:forConnection: was called with a nil dictionary, this method returns a non-nil dictionary reflecting the settings used by the AVCaptureSession’s current sessionPreset.


pub unsafe fn setOutputSettings_forConnection( &self, output_settings: Option<&NSDictionary<NSString, AnyObject>>, connection: &AVCaptureConnection, )

Available on crate feature AVCaptureSession only.

Sets the options the receiver uses to encode media from the given connection as it is being recorded.

Parameter outputSettings: An NSDictionary of output settings.

Parameter connection: The connection delivering the media to be encoded.

See AVAudioSettings.h for audio connections or AVVideoSettings.h for video connections for more information on how to construct an output settings dictionary. A value of an empty dictionary (i.e. +[NSDictionary dictionary]), means that the format of the media from the connection should not be changed before being written to the file. A value of nil means that the output format will be determined by the session preset. In this case, -outputSettingsForConnection: will return a non-nil dictionary reflecting the settings used by the AVCaptureSession’s current sessionPreset.

On iOS, your outputSettings dictionary may only contain keys listed in -supportedOutputSettingsKeysForConnection:. If you specify any other key, an NSInvalidArgumentException will be thrown. Further restrictions may be imposed on the AVVideoCodecTypeKey. Its value should be present in the -availableVideoCodecTypes array. If AVVideoCompressionPropertiesKey is specified, you must also specify a valid value for AVVideoCodecKey. On iOS versions prior to 12.0, the only settable key for video connections is AVVideoCodecTypeKey. On iOS 12.0 and later, video connections gain support for AVVideoCompressionPropertiesKey.

On iOS, -outputSettingsForConnection: always provides a fully populated dictionary. If you call -outputSettingsForConnection: with the intent of overriding a few of the values, you must take care to exclude keys that are not supported before calling -setOutputSettings:forConnection:. When providing an AVVideoCompressionPropertiesKey sub-dictionary, you may specify a sparse dictionary. AVCaptureMovieFileOutput will always fill in missing keys with default values for the current AVCaptureSession configuration.

§Safety

output_settings generic should be of the correct type.
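To stay within those restrictions, a client can interrogate the output before building a settings dictionary. A minimal sketch (assumptions: objc2-av-foundation with the AVCaptureSession, AVVideoSettings and AVCaptureFileOutput crate features enabled, and a connection obtained from the output's connections() array):

```rust
use objc2_av_foundation::{AVCaptureConnection, AVCaptureMovieFileOutput};

unsafe fn inspect_settings(
    output: &AVCaptureMovieFileOutput,
    connection: &AVCaptureConnection,
) {
    // Codecs usable under AVVideoCodecKey; the first entry is the
    // default used when recording.
    let codecs = output.availableVideoCodecTypes();
    if let Some(default_codec) = codecs.firstObject() {
        println!("default codec: {default_codec}");
    }

    // Keys permitted in the settings dictionary for this connection; on
    // iOS any other key raises NSInvalidArgumentException.
    let keys = output.supportedOutputSettingsKeysForConnection(connection);
    println!("{} settable keys", keys.count());

    // Passing None defers the output format to the session preset.
    output.setOutputSettings_forConnection(None, connection);
}
```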


pub unsafe fn recordsVideoOrientationAndMirroringChangesAsMetadataTrackForConnection( &self, connection: &AVCaptureConnection, ) -> bool

Available on crate feature AVCaptureSession only.

Returns YES if the movie file output will create a timed metadata track that records samples which reflect changes made to the given connection’s videoOrientation and videoMirrored properties during recording.

Parameter connection: A connection delivering video media to the movie file output. This method throws an NSInvalidArgumentException if the connection does not have a mediaType of AVMediaTypeVideo or if the connection does not terminate at the movie file output.

See setRecordsVideoOrientationAndMirroringChanges:asMetadataTrackForConnection: for details on the behavior controlled by this value. The default value returned is NO.


pub unsafe fn setRecordsVideoOrientationAndMirroringChanges_asMetadataTrackForConnection( &self, do_record_changes: bool, connection: &AVCaptureConnection, )

Available on crate feature AVCaptureSession only.

Controls whether or not the movie file output will create a timed metadata track that records samples which reflect changes made to the given connection’s videoOrientation and videoMirrored properties during recording.

Parameter doRecordChanges: If YES, the movie file output will create a timed metadata track that records samples which reflect changes made to the given connection’s videoOrientation and videoMirrored properties during recording.

Parameter connection: A connection delivering video media to the movie file output. This method throws an NSInvalidArgumentException if the connection does not have a mediaType of AVMediaTypeVideo or if the connection does not terminate at the movie file output.

When a recording is started the current state of a video capture connection’s videoOrientation and videoMirrored properties are used to build the display matrix for the created video track. The movie file format allows only one display matrix per track, which means that any changes made during a recording to the videoOrientation and videoMirrored properties are not captured. For example, a user starts a recording with their device in the portrait orientation, and then partway through the recording changes the device to a landscape orientation. The landscape orientation requires a different display matrix, but only the initial display matrix (the portrait display matrix) is recorded for the video track.

By invoking this method the client application directs the movie file output to create an additional track in the captured movie. This track is a timed metadata track that is associated with the video track, and contains one or more samples that contain a Video Orientation value (as defined by the EXIF and TIFF specifications, and enumerated by CGImagePropertyOrientation in <ImageIO/CGImageProperties.h>). The value represents the display matrix corresponding to the AVCaptureConnection’s videoOrientation and videoMirrored properties when applied to the input source. The initial sample written to the timed metadata track represents the video track’s display matrix. During recording, additional samples are written to the timed metadata track whenever the client application changes the video connection’s videoOrientation or videoMirrored properties. Using the above example, when the client application detects the user changing the device from portrait to landscape orientation, it updates the video connection’s videoOrientation property, thus causing the movie file output to add a new sample to the timed metadata track.

After capture, playback and editing applications can use the timed metadata track to enhance their user’s experience. For example, when playing back the captured movie, a playback engine can use the samples to adjust the display of the video samples to keep the video properly oriented. Another example is an editing application that uses the sample times to suggest cut points for breaking the captured movie into separate clips, where each clip is properly oriented.

The default behavior is to not create the timed metadata track.

The doRecordChanges value is only observed at the start of recording. Changes to the value will not have any effect until the next recording is started.
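Since the flag is only read when a recording starts, it should be set before calling startRecordingToOutputFileURL:recordingDelegate:. A sketch (assumptions: objc2-av-foundation with the AVCaptureSession and AVMediaFormat crate features, using the connectionWithMediaType convenience inherited from AVCaptureOutput):

```rust
use objc2_av_foundation::{AVCaptureMovieFileOutput, AVMediaTypeVideo};

unsafe fn record_orientation_changes(output: &AVCaptureMovieFileOutput) {
    // Must target a video connection that terminates at this output;
    // anything else raises NSInvalidArgumentException.
    if let Some(video) = output.connectionWithMediaType(AVMediaTypeVideo) {
        // Opt in before the next recording starts; the flag has no
        // effect on a recording already in progress.
        output.setRecordsVideoOrientationAndMirroringChanges_asMetadataTrackForConnection(
            true, &video,
        );
    }
}
```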


pub unsafe fn isPrimaryConstituentDeviceSwitchingBehaviorForRecordingEnabled( &self, ) -> bool

Enable or disable a constituent device selection behavior when recording.

This property enables a camera selection behavior to be applied when recording a movie. Once recording starts, the specified behavior and conditions take effect. Once recording stops, the camera selection changes back to the primaryConstituentDeviceSwitchingBehavior specified by the AVCaptureDevice. By default, this property is set to YES when connected to an AVCaptureDevice that supports constituent device switching.


pub unsafe fn setPrimaryConstituentDeviceSwitchingBehaviorForRecordingEnabled( &self, primary_constituent_device_switching_behavior_for_recording_enabled: bool, )


pub unsafe fn setPrimaryConstituentDeviceSwitchingBehaviorForRecording_restrictedSwitchingBehaviorConditions( &self, switching_behavior: AVCapturePrimaryConstituentDeviceSwitchingBehavior, restricted_switching_behavior_conditions: AVCapturePrimaryConstituentDeviceRestrictedSwitchingBehaviorConditions, )

Available on crate feature AVCaptureDevice only.

When primaryConstituentDeviceSwitchingBehaviorForRecordingEnabled is set to YES, this method controls the switching behavior and conditions, while a movie file is being recorded.

This controls the camera selection behavior used while recording a movie, when enabled through primaryConstituentDeviceSwitchingBehaviorForRecordingEnabled. Setting the switching behavior to anything other than AVCapturePrimaryConstituentDeviceSwitchingBehaviorUnsupported when connected to an AVCaptureDevice that does not support constituent device selection throws an NSInvalidArgumentException. Setting restrictedSwitchingBehaviorConditions to something other than AVCapturePrimaryConstituentDeviceRestrictedSwitchingBehaviorConditionNone while setting switchingBehavior to something other than AVCapturePrimaryConstituentDeviceSwitchingBehaviorRestricted also throws an NSInvalidArgumentException.


pub unsafe fn primaryConstituentDeviceSwitchingBehaviorForRecording( &self, ) -> AVCapturePrimaryConstituentDeviceSwitchingBehavior

Available on crate feature AVCaptureDevice only.

The primaryConstituentDeviceSwitchingBehavior as set by -[AVCaptureMovieFileOutput setPrimaryConstituentDeviceSwitchingBehaviorForRecording:restrictedSwitchingBehaviorConditions:].

By default, this property is set to AVCapturePrimaryConstituentDeviceSwitchingBehaviorRestricted. This property is key-value observable.


pub unsafe fn primaryConstituentDeviceRestrictedSwitchingBehaviorConditionsForRecording( &self, ) -> AVCapturePrimaryConstituentDeviceRestrictedSwitchingBehaviorConditions

Available on crate feature AVCaptureDevice only.

The primaryConstituentDeviceRestrictedSwitchingBehaviorConditions as set by -[AVCaptureMovieFileOutput setPrimaryConstituentDeviceSwitchingBehaviorForRecording:restrictedSwitchingBehaviorConditions:].

By default, this property is set to AVCapturePrimaryConstituentDeviceRestrictedSwitchingBehaviorCondition{VideoZoomChanged | FocusModeChanged | ExposureModeChanged}. This property is key-value observable.


pub unsafe fn isSpatialVideoCaptureSupported(&self) -> bool

Returns whether or not capturing spatial video to a file is supported. Note that in order to be supported, two conditions must be met. (1) The source AVCaptureDevice’s activeFormat.spatialVideoCaptureSupported property must return YES. (2) The video AVCaptureConnection’s activeVideoStabilizationMode property must return AVCaptureVideoStabilizationModeCinematic, AVCaptureVideoStabilizationModeCinematicExtended, or AVCaptureVideoStabilizationModeCinematicExtendedEnhanced.


pub unsafe fn isSpatialVideoCaptureEnabled(&self) -> bool

Enable or disable capturing spatial video to a file.

This property enables capturing spatial video to a file. By default, this property is set to NO. Check spatialVideoCaptureSupported before setting this property, as setting to YES will throw an exception if the feature is not supported.

On iOS, enabling spatial video will override the connected AVCaptureDevice’s videoZoomFactor, minAvailableVideoZoomFactor, and maxAvailableVideoZoomFactor to match the field of view of the narrower camera in the pair.

When spatialVideoCaptureEnabled is true, setting -[AVCaptureDeviceInput activeVideoMinFrameDuration] or -[AVCaptureDeviceInput activeVideoMaxFrameDuration] throws an NSInvalidArgumentException.

Enabling this property throws an NSInvalidArgumentException if -[AVCaptureDevice isVideoFrameDurationLocked] or -[AVCaptureDevice isFollowingExternalSyncDevice] is true.


pub unsafe fn setSpatialVideoCaptureEnabled( &self, spatial_video_capture_enabled: bool, )

Methods from Deref<Target = AVCaptureFileOutput>§


pub unsafe fn delegate( &self, ) -> Option<Retained<ProtocolObject<dyn AVCaptureFileOutputDelegate>>>

The receiver’s delegate.

The value of this property is an object conforming to the AVCaptureFileOutputDelegate protocol that will be able to monitor and control recording along exact sample boundaries.

§Safety

This is not retained internally, you must ensure the object is still alive.


pub unsafe fn setDelegate( &self, delegate: Option<&ProtocolObject<dyn AVCaptureFileOutputDelegate>>, )

Setter for delegate.

§Safety

This is unretained, you must ensure the object is kept alive while in use.


pub unsafe fn outputFileURL(&self) -> Option<Retained<NSURL>>

The file URL of the file to which the receiver is currently recording incoming buffers.

The value of this property is an NSURL object containing the file URL of the file currently being written by the receiver. Returns nil if the receiver is not recording to any file.


pub unsafe fn startRecordingToOutputFileURL_recordingDelegate( &self, output_file_url: &NSURL, delegate: &ProtocolObject<dyn AVCaptureFileOutputRecordingDelegate>, )

Tells the receiver to start recording to a new file, and specifies a delegate that will be notified when recording is finished.

Parameter outputFileURL: An NSURL object containing the URL of the output file. This method throws an NSInvalidArgumentException if the URL is not a valid file URL.

Parameter delegate: An object conforming to the AVCaptureFileOutputRecordingDelegate protocol. Clients must specify a delegate so that they can be notified when recording to the given URL is finished.

This method sets the file URL to which the receiver is currently writing output media. If a file at the given URL already exists when capturing starts, recording to the new file will fail.

Clients need not call stopRecording before calling this method while another recording is in progress. On macOS, if this method is invoked while an existing output file was already being recorded, no media samples will be discarded between the old file and the new file.

When recording is stopped either by calling stopRecording, by changing files using this method, or because of an error, the remaining data that needs to be included in the file will be written in the background. Therefore, clients must specify a delegate that will be notified when all data has been written to the file using the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method. The recording delegate can also optionally implement methods that inform it when data starts being written, when recording is paused and resumed, and when recording is about to be finished.

On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method.

Note: AVCaptureAudioFileOutput does not support -startRecordingToOutputFileURL:recordingDelegate:. Use -startRecordingToOutputFileURL:outputFileType:recordingDelegate: instead.
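The start/stop flow above can be sketched as follows (assumptions: a `delegate` conforming to AVCaptureFileOutputRecordingDelegate already exists, e.g. defined with objc2's class-definition machinery; the path is purely illustrative):

```rust
use objc2::runtime::ProtocolObject;
use objc2_av_foundation::{AVCaptureFileOutputRecordingDelegate, AVCaptureMovieFileOutput};
use objc2_foundation::{NSString, NSURL};

unsafe fn record_clip(
    output: &AVCaptureMovieFileOutput,
    delegate: &ProtocolObject<dyn AVCaptureFileOutputRecordingDelegate>,
) {
    // Must be a file URL, and no file may already exist at this path.
    let path = NSString::from_str("/tmp/clip.mov"); // illustrative path
    let url = NSURL::fileURLWithPath(&path);
    output.startRecordingToOutputFileURL_recordingDelegate(&url, delegate);

    // ... later: stop, then wait for the delegate's
    // captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:
    // callback before touching the file, since trailing data is written
    // in the background.
    output.stopRecording();
}
```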


pub unsafe fn stopRecording(&self)

Tells the receiver to stop recording to the current file.

Clients can call this method when they want to stop recording new samples to the current file, and do not want to continue recording to another file. Clients that want to switch from one file to another should not call this method. Instead they should simply call startRecordingToOutputFileURL:recordingDelegate: with the new file URL.

When recording is stopped either by calling this method, by changing files using startRecordingToOutputFileURL:recordingDelegate:, or because of an error, the remaining data that needs to be included in the file will be written in the background. Therefore, before using the file, clients must wait until the delegate that was specified in startRecordingToOutputFileURL:recordingDelegate: is notified when all data has been written to the file using the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the last samples written to the current file are guaranteed to be those that were output immediately before those in the sample buffer passed to that method.


pub unsafe fn isRecording(&self) -> bool

Indicates whether the receiver is currently recording.

The value of this property is YES when the receiver currently has a file to which it is writing new samples, NO otherwise.


pub unsafe fn isRecordingPaused(&self) -> bool

Indicates whether recording to the current output file is paused.

This property indicates whether recording to the file returned by outputFileURL has been paused using the pauseRecording method. When a recording is paused, captured samples are not written to the output file, but new samples can be written to the same file in the future by calling resumeRecording.


pub unsafe fn pauseRecording(&self)

Pauses recording to the current output file.

This method causes the receiver to stop writing captured samples to the current output file returned by outputFileURL, but leaves the file open so that samples can be written to it in the future, when resumeRecording is called. This allows clients to record multiple media segments that are not contiguous in time to a single file.

On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the last samples written to the current file are guaranteed to be those that were output immediately before those in the sample buffer passed to that method.

A recording can be stopped as normal, even when it’s paused.

A format or device change will result in the recording being stopped, even when it’s paused.


pub unsafe fn resumeRecording(&self)

Resumes recording to the current output file after it was previously paused using pauseRecording.

This method causes the receiver to resume writing captured samples to the current output file returned by outputFileURL, after recording was previously paused using pauseRecording. This allows clients to record multiple media segments that are not contiguous in time to a single file.

On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the current file are guaranteed to be those contained in the sample buffer passed to that method.


pub unsafe fn recordedDuration(&self) -> CMTime

Available on crate feature objc2-core-media only.

Indicates the duration of the media recorded to the current output file.

If recording is in progress, this property returns the total time recorded so far.


pub unsafe fn recordedFileSize(&self) -> i64

Indicates the size, in bytes, of the data recorded to the current output file.

If a recording is in progress, this property returns the size in bytes of the data recorded so far.


pub unsafe fn maxRecordedDuration(&self) -> CMTime

Available on crate feature objc2-core-media only.

Specifies the maximum duration of the media that should be recorded by the receiver.

This property specifies a hard limit on the duration of recorded files. Recording is stopped when the limit is reached and the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: delegate method is invoked with an appropriate error. The default value of this property is kCMTimeInvalid, which indicates no limit.


pub unsafe fn setMaxRecordedDuration(&self, max_recorded_duration: CMTime)

Available on crate feature objc2-core-media only.

Setter for maxRecordedDuration.


pub unsafe fn maxRecordedFileSize(&self) -> i64

Specifies the maximum size, in bytes, of the data that should be recorded by the receiver.

This property specifies a hard limit on the data size of recorded files. Recording is stopped when the limit is reached and the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: delegate method is invoked with an appropriate error. The default value of this property is 0, which indicates no limit.


pub unsafe fn setMaxRecordedFileSize(&self, max_recorded_file_size: i64)

Setter for maxRecordedFileSize.


pub unsafe fn minFreeDiskSpaceLimit(&self) -> i64

Specifies the minimum amount of free space, in bytes, required for recording to continue on a given volume.

This property specifies a hard lower limit on the amount of free space that must remain on a target volume for recording to continue. Recording is stopped when the limit is reached and the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: delegate method is invoked with an appropriate error.


pub unsafe fn setMinFreeDiskSpaceLimit(&self, min_free_disk_space_limit: i64)
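The three limits above can be combined; when any one of them is hit, recording stops and the delegate's finish callback is invoked with an appropriate error. A sketch (assumptions: objc2-av-foundation plus objc2-core-media, with CMTime's field and flag names assumed to mirror Core Media's public C layout):

```rust
use objc2_av_foundation::AVCaptureMovieFileOutput;
use objc2_core_media::{CMTime, CMTimeFlags};

unsafe fn apply_limits(output: &AVCaptureMovieFileOutput) {
    // Hard stop after 30 minutes (1800 / 1 seconds); the default,
    // kCMTimeInvalid, means no duration limit.
    let half_hour = CMTime {
        value: 1800,
        timescale: 1,
        flags: CMTimeFlags::Valid,
        epoch: 0,
    };
    output.setMaxRecordedDuration(half_hour);

    // Hard stop after 1 GiB of recorded data (0 means no limit).
    output.setMaxRecordedFileSize(1 << 30);

    // Stop if the target volume drops below 100 MiB of free space.
    output.setMinFreeDiskSpaceLimit(100 * 1024 * 1024);
}
```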

Methods from Deref<Target = AVCaptureOutput>§


pub unsafe fn connections(&self) -> Retained<NSArray<AVCaptureConnection>>

Available on crate feature AVCaptureSession only.

The connections that describe the flow of media data to the receiver from AVCaptureInputs.

The value of this property is an NSArray of AVCaptureConnection objects, each describing the mapping between the receiver and the AVCaptureInputPorts of one or more AVCaptureInputs.


pub unsafe fn connectionWithMediaType( &self, media_type: &AVMediaType, ) -> Option<Retained<AVCaptureConnection>>

Available on crate features AVCaptureSession and AVMediaFormat only.

Returns the first connection in the connections array with an inputPort of the specified mediaType.

Parameter mediaType: An AVMediaType constant from AVMediaFormat.h, e.g. AVMediaTypeVideo.

This convenience method returns the first AVCaptureConnection in the receiver’s connections array that has an AVCaptureInputPort of the specified mediaType. If no connection with the specified mediaType is found, nil is returned.


pub unsafe fn transformedMetadataObjectForMetadataObject_connection( &self, metadata_object: &AVMetadataObject, connection: &AVCaptureConnection, ) -> Option<Retained<AVMetadataObject>>

Available on crate features AVCaptureSession and AVMetadataObject only.

Converts an AVMetadataObject’s visual properties to the receiver’s coordinates.

Parameter metadataObject: An AVMetadataObject originating from the same AVCaptureInput as the receiver.

Parameter connection: The receiver’s connection whose AVCaptureInput matches that of the metadata object to be converted.

Returns: An AVMetadataObject whose properties are in output coordinates.

AVMetadataObject bounds may be expressed as a rect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. Face metadata objects likewise express yaw and roll angles with respect to an unrotated picture. -transformedMetadataObjectForMetadataObject:connection: converts the visual properties in the coordinate space of the supplied AVMetadataObject to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. If the provided metadata object originates from an input source other than the preview layer’s, nil will be returned.

If an AVCaptureVideoDataOutput instance’s connection’s videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and/or flipping sample buffers as they pass through it. AVCaptureStillImageOutput, on the other hand, does not physically rotate its buffers. It attaches an appropriate kCGImagePropertyOrientation number to captured still image buffers (see ImageIO/CGImageProperties.h) indicating how the image should be displayed on playback. Likewise, AVCaptureMovieFileOutput does not physically apply orientation/mirroring to its sample buffers – it uses a QuickTime track matrix to indicate how the buffers should be rotated and/or flipped on playback.

transformedMetadataObjectForMetadataObject:connection: alters the visual properties of the provided metadata object to match the physical rotation / mirroring of the sample buffers provided by the receiver through the indicated connection. I.e., for video data output, adjusted metadata object coordinates are rotated/mirrored. For still image and movie file output, they are not.


pub unsafe fn metadataOutputRectOfInterestForRect( &self, rect_in_output_coordinates: CGRect, ) -> CGRect

Available on crate feature objc2-core-foundation only.

Converts a rectangle in the receiver’s coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver.

Parameter rectInOutputCoordinates: A CGRect in the receiver’s coordinates.

Returns: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.

AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.


pub unsafe fn rectForMetadataOutputRectOfInterest( &self, rect_in_metadata_output_coordinates: CGRect, ) -> CGRect

Available on crate feature objc2-core-foundation only.

Converts a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver to a rectangle in the receiver’s coordinates.

Parameter rectInMetadataOutputCoordinates: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.

Returns: A CGRect in the receiver’s coordinates.

AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.


pub unsafe fn isDeferredStartSupported(&self) -> bool

A BOOL value that indicates whether the output supports deferred start.

You can only set the deferredStartEnabled property value to true if the output supports deferred start.


pub unsafe fn isDeferredStartEnabled(&self) -> bool

A BOOL value that indicates whether to defer starting this capture output.

When this value is true, the session does not prepare the output’s resources until some time after AVCaptureSession/startRunning returns. You can start the visual parts of your user interface (e.g. preview) prior to other parts (e.g. photo/movie capture, metadata output, etc.) to improve startup performance. Set this value to false for outputs that your app needs for startup, and true for the ones it does not need to start immediately. For example, an AVCaptureVideoDataOutput that you intend to use for displaying preview should set this value to false, so that the frames are available as soon as possible.

By default, for apps that are linked on or after iOS 26, this property value is true for AVCapturePhotoOutput and AVCaptureFileOutput subclasses if supported, and false otherwise. When set to true for AVCapturePhotoOutput, if you want to support multiple capture requests before running deferred start, set AVCapturePhotoOutput/responsiveCaptureEnabled to true on that output.

If deferredStartSupported is false, setting this property value to true results in the system throwing an NSInvalidArgumentException.

  • Note: Set this value before calling AVCaptureSession/commitConfiguration as it requires a lengthy reconfiguration of the capture render pipeline.
Source

pub unsafe fn setDeferredStartEnabled(&self, deferred_start_enabled: bool)
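Putting the methods above together, a usage sketch might look like the following. This assumes the objc2-av-foundation bindings documented on this page, requires an Apple platform at runtime, and is not verified against a device; the session-configuration calls in the trailing comment are elided.

```rust
// Sketch: enable deferred start on a movie file output so the session can
// bring preview up first. `unsafe` because these are raw Objective-C bindings.
unsafe {
    let output = AVCaptureMovieFileOutput::new();
    // Deferred start must be supported before it can be enabled; enabling it
    // on an unsupported output throws NSInvalidArgumentException.
    if output.isDeferredStartSupported() {
        // Per the note above, set this before committing the configuration.
        output.setDeferredStartEnabled(true);
    }
    // ... add `output` to the session, then call commitConfiguration ...
}
```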

Methods from Deref<Target = NSObject>§

Source

pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !

Handle messages the object doesn’t recognize.

See Apple’s documentation for details.

Methods from Deref<Target = AnyObject>§

Source

pub fn class(&self) -> &'static AnyClass

Dynamically find the class of this object.

§Panics

May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).

§Example

Check that an instance of NSObject has the precise class NSObject.

use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Source

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where T: Encode,

👎Deprecated: this is difficult to use correctly, use Ivar::load instead.

Use Ivar::load instead.

§Safety

The object must have an instance variable with the given name, and it must be of type T.

See Ivar::load_ptr for details surrounding this.

Source

pub fn downcast_ref<T>(&self) -> Option<&T>
where T: DowncastTarget,

Attempt to downcast the object to a class of type T.

This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.

§Mutable classes

Some classes have immutable and mutable variants, such as NSString and NSMutableString.

When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.

So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.

See Apple’s documentation on mutability and on isKindOfClass: for more details.

§Generic classes

Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.

You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.

§Panics

This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.

§Examples

Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.

Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);

for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§

Source§

impl AsRef<AVCaptureFileOutput> for AVCaptureMovieFileOutput

Source§

fn as_ref(&self) -> &AVCaptureFileOutput

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AVCaptureMovieFileOutput> for AVCaptureMovieFileOutput

Source§

fn as_ref(&self) -> &Self

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AVCaptureOutput> for AVCaptureMovieFileOutput

Source§

fn as_ref(&self) -> &AVCaptureOutput

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AnyObject> for AVCaptureMovieFileOutput

Source§

fn as_ref(&self) -> &AnyObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<NSObject> for AVCaptureMovieFileOutput

Source§

fn as_ref(&self) -> &NSObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl Borrow<AVCaptureFileOutput> for AVCaptureMovieFileOutput

Source§

fn borrow(&self) -> &AVCaptureFileOutput

Immutably borrows from an owned value. Read more
Source§

impl Borrow<AVCaptureOutput> for AVCaptureMovieFileOutput

Source§

fn borrow(&self) -> &AVCaptureOutput

Immutably borrows from an owned value. Read more
Source§

impl Borrow<AnyObject> for AVCaptureMovieFileOutput

Source§

fn borrow(&self) -> &AnyObject

Immutably borrows from an owned value. Read more
Source§

impl Borrow<NSObject> for AVCaptureMovieFileOutput

Source§

fn borrow(&self) -> &NSObject

Immutably borrows from an owned value. Read more
Source§

impl ClassType for AVCaptureMovieFileOutput

Source§

const NAME: &'static str = "AVCaptureMovieFileOutput"

The name of the Objective-C class that this type represents. Read more
Source§

type Super = AVCaptureFileOutput

The superclass of this class. Read more
Source§

type ThreadKind = <<AVCaptureMovieFileOutput as ClassType>::Super as ClassType>::ThreadKind

Whether the type can be used from any thread, or from only the main thread. Read more
Source§

fn class() -> &'static AnyClass

Get a reference to the Objective-C class that this type represents. Read more
Source§

fn as_super(&self) -> &Self::Super

Get an immutable reference to the superclass.
Source§

impl Debug for AVCaptureMovieFileOutput

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Deref for AVCaptureMovieFileOutput

Source§

type Target = AVCaptureFileOutput

The resulting type after dereferencing.
Source§

fn deref(&self) -> &Self::Target

Dereferences the value.
Source§

impl Hash for AVCaptureMovieFileOutput

Source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl Message for AVCaptureMovieFileOutput

Source§

fn retain(&self) -> Retained<Self>
where Self: Sized,

Increment the reference count of the receiver. Read more
Source§

impl NSObjectProtocol for AVCaptureMovieFileOutput

Source§

fn isEqual(&self, other: Option<&AnyObject>) -> bool
where Self: Sized + Message,

Check whether the object is equal to an arbitrary other object. Read more
Source§

fn hash(&self) -> usize
where Self: Sized + Message,

An integer that can be used as a table address in a hash table structure. Read more
Source§

fn isKindOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of the class, or one of its subclasses. Read more
Source§

fn is_kind_of<T>(&self) -> bool
where T: ClassType, Self: Sized + Message,

👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref
Check if the object is an instance of the class type, or one of its subclasses. Read more
Source§

fn isMemberOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of a specific class, without checking subclasses. Read more
Source§

fn respondsToSelector(&self, aSelector: Sel) -> bool
where Self: Sized + Message,

Check whether the object implements or inherits a method with the given selector. Read more
Source§

fn conformsToProtocol(&self, aProtocol: &AnyProtocol) -> bool
where Self: Sized + Message,

Check whether the object conforms to a given protocol. Read more
Source§

fn description(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object. Read more
Source§

fn debugDescription(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object to use when debugging. Read more
Source§

fn isProxy(&self) -> bool
where Self: Sized + Message,

Check whether the receiver is a subclass of the NSProxy root class instead of the usual NSObject. Read more
Source§

fn retainCount(&self) -> usize
where Self: Sized + Message,

The reference count of the object. Read more
Source§

impl PartialEq for AVCaptureMovieFileOutput

Source§

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl RefEncode for AVCaptureMovieFileOutput

Source§

const ENCODING_REF: Encoding = <AVCaptureFileOutput as ::objc2::RefEncode>::ENCODING_REF

The Objective-C type-encoding for a reference of this type. Read more
Source§

impl DowncastTarget for AVCaptureMovieFileOutput

Source§

impl Eq for AVCaptureMovieFileOutput

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<'a, T> AnyThread for T
where T: ClassType<ThreadKind = dyn AnyThread + 'a> + ?Sized,

Source§

fn alloc() -> Allocated<Self>
where Self: Sized + ClassType,

Allocate a new instance of the class. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,

Source§

type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<T> AutoreleaseSafe for T
where T: ?Sized,