pub struct AVCaptureAudioFileOutput { /* private fields */ }
Available on crate features AVCaptureFileOutput and AVCaptureOutputBase only.
AVCaptureAudioFileOutput is a concrete subclass of AVCaptureFileOutput that writes captured audio to any audio file type supported by CoreAudio.
AVCaptureAudioFileOutput implements the complete file recording interface declared by AVCaptureFileOutput for writing media data to audio files. In addition, instances of AVCaptureAudioFileOutput allow clients to configure options specific to the audio file formats, including allowing them to write metadata collections to each file and specify audio encoding options.
See also Apple’s documentation
Implementations§
impl AVCaptureAudioFileOutput
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn availableOutputFileTypes() -> Retained<NSArray<AVFileType>>
Available on crate feature AVMediaFormat only.
Provides the file types AVCaptureAudioFileOutput can write.
Returns: An NSArray of UTIs identifying the file types the AVCaptureAudioFileOutput class can write.
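As a sketch, the supported types could be listed like this (assumes the objc2-av-foundation crate with the AVCaptureFileOutput and AVMediaFormat features enabled, running on macOS; not verified here):

```rust
// Hypothetical sketch: print every audio file type this output class can write.
use objc2_av_foundation::AVCaptureAudioFileOutput;

fn main() {
    // SAFETY: class method with no preconditions beyond framework availability.
    let file_types = unsafe { AVCaptureAudioFileOutput::availableOutputFileTypes() };
    for file_type in file_types.iter() {
        // AVFileType is an NSString-typed UTI, e.g. "com.apple.m4a-audio".
        println!("{file_type}");
    }
}
```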
pub unsafe fn startRecordingToOutputFileURL_outputFileType_recordingDelegate(
    &self,
    output_file_url: &NSURL,
    file_type: &AVFileType,
    delegate: &ProtocolObject<dyn AVCaptureFileOutputRecordingDelegate>,
)
Available on crate feature AVMediaFormat only.
Tells the receiver to start recording to a new file of the specified format, and specifies a delegate that will be notified when recording is finished.
Parameter outputFileURL: An NSURL object containing the URL of the output file. This method throws an NSInvalidArgumentException if the URL is not a valid file URL.
Parameter fileType: A UTI indicating the format of the file to be written.
Parameter delegate: An object conforming to the AVCaptureFileOutputRecordingDelegate protocol. Clients must specify a delegate so that they can be notified when recording to the given URL is finished.
The method sets the file URL to which the receiver is currently writing output media. If a file at the given URL already exists when capturing starts, recording to the new file will fail.
The fileType argument is a UTI corresponding to the audio file format that should be written. UTIs for common audio file types are declared in AVMediaFormat.h.
Clients need not call stopRecording before calling this method while another recording is in progress. If this method is invoked while an existing output file is being recorded, no media samples will be discarded between the old file and the new file.
When recording is stopped either by calling stopRecording, by changing files using this method, or because of an error, the remaining data that needs to be included in the file will be written in the background. Therefore, clients must specify a delegate that will be notified when all data has been written to the file using the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method. The recording delegate can also optionally implement methods that inform it when data starts being written, when recording is paused and resumed, and when recording is about to be finished.
On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method.
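A hedged sketch of kicking off a recording follows. The helper name and file path are hypothetical; building a real delegate means implementing AVCaptureFileOutputRecordingDelegate, which is out of scope here:

```rust
use objc2::runtime::ProtocolObject;
use objc2_av_foundation::{
    AVCaptureAudioFileOutput, AVCaptureFileOutputRecordingDelegate, AVFileTypeAppleM4A,
};
use objc2_foundation::{NSString, NSURL};

// Hypothetical helper: start writing an .m4a file at `path`. Assumes
// `output` is already attached to a running AVCaptureSession.
unsafe fn start_m4a_recording(
    output: &AVCaptureAudioFileOutput,
    path: &str,
    delegate: &ProtocolObject<dyn AVCaptureFileOutputRecordingDelegate>,
) {
    // Must be a file URL, or the method throws NSInvalidArgumentException.
    let url = NSURL::fileURLWithPath(&NSString::from_str(path));
    output.startRecordingToOutputFileURL_outputFileType_recordingDelegate(
        &url,
        AVFileTypeAppleM4A, // UTI constant from AVMediaFormat
        delegate,
    );
}
```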
pub unsafe fn metadata(&self) -> Retained<NSArray<AVMetadataItem>>
Available on crate feature AVMetadataItem only.
A collection of metadata to be written to the receiver’s output files.
The value of this property is an array of AVMetadataItem objects representing the collection of top-level metadata to be written in each output file. Only ID3 v2.2, v2.3, or v2.4 style metadata items are supported.
pub unsafe fn setMetadata(&self, metadata: &NSArray<AVMetadataItem>)
Available on crate feature AVMetadataItem only.
pub unsafe fn audioSettings(&self) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
Specifies the options the receiver uses to re-encode audio as it is being recorded.
The output settings dictionary can contain values for keys from AVAudioSettings.h. A value of nil indicates that the format of the audio should not be changed before being written to the file.
pub unsafe fn setAudioSettings(
    &self,
    audio_settings: Option<&NSDictionary<NSString, AnyObject>>,
)
Setter for audioSettings.
This is copied when set.
§Safety
The generic parameters of audio_settings must be of the correct type.
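The simplest use is the nil (None) case described above; a sketch, with a hypothetical helper name:

```rust
use objc2_av_foundation::AVCaptureAudioFileOutput;

// Hypothetical helper: passing None asks the output to write the captured
// audio format as-is, with no re-encoding before it reaches the file.
unsafe fn write_without_reencoding(output: &AVCaptureAudioFileOutput) {
    output.setAudioSettings(None);
}
```

Supplying Some(dictionary) instead would re-encode using keys from AVAudioSettings.h.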
Methods from Deref<Target = AVCaptureFileOutput>§
pub unsafe fn delegate(
    &self,
) -> Option<Retained<ProtocolObject<dyn AVCaptureFileOutputDelegate>>>
The receiver’s delegate.
The value of this property is an object conforming to the AVCaptureFileOutputDelegate protocol that will be able to monitor and control recording along exact sample boundaries.
§Safety
This is not retained internally; you must ensure the object is still alive.
pub unsafe fn setDelegate(
    &self,
    delegate: Option<&ProtocolObject<dyn AVCaptureFileOutputDelegate>>,
)
pub unsafe fn outputFileURL(&self) -> Option<Retained<NSURL>>
The file URL of the file to which the receiver is currently recording incoming buffers.
The value of this property is an NSURL object containing the file URL of the file currently being written by the receiver. Returns nil if the receiver is not recording to any file.
pub unsafe fn startRecordingToOutputFileURL_recordingDelegate(
    &self,
    output_file_url: &NSURL,
    delegate: &ProtocolObject<dyn AVCaptureFileOutputRecordingDelegate>,
)
Tells the receiver to start recording to a new file, and specifies a delegate that will be notified when recording is finished.
Parameter outputFileURL: An NSURL object containing the URL of the output file. This method throws an NSInvalidArgumentException if the URL is not a valid file URL.
Parameter delegate: An object conforming to the AVCaptureFileOutputRecordingDelegate protocol. Clients must specify a delegate so that they can be notified when recording to the given URL is finished.
The method sets the file URL to which the receiver is currently writing output media. If a file at the given URL already exists when capturing starts, recording to the new file will fail.
Clients need not call stopRecording before calling this method while another recording is in progress. On macOS, if this method is invoked while an existing output file is being recorded, no media samples will be discarded between the old file and the new file.
When recording is stopped either by calling stopRecording, by changing files using this method, or because of an error, the remaining data that needs to be included in the file will be written in the background. Therefore, clients must specify a delegate that will be notified when all data has been written to the file using the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method. The recording delegate can also optionally implement methods that inform it when data starts being written, when recording is paused and resumed, and when recording is about to be finished.
On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method.
Note: AVCaptureAudioFileOutput does not support -startRecordingToOutputFileURL:recordingDelegate:. Use -startRecordingToOutputFileURL:outputFileType:recordingDelegate: instead.
pub unsafe fn stopRecording(&self)
Tells the receiver to stop recording to the current file.
Clients can call this method when they want to stop recording new samples to the current file, and do not want to continue recording to another file. Clients that want to switch from one file to another should not call this method. Instead they should simply call startRecordingToOutputFileURL:recordingDelegate: with the new file URL.
When recording is stopped either by calling this method, by changing files using startRecordingToOutputFileURL:recordingDelegate:, or because of an error, the remaining data that needs to be included in the file will be written in the background. Therefore, before using the file, clients must wait until the delegate that was specified in startRecordingToOutputFileURL:recordingDelegate: is notified that all data has been written to the file, via the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.
On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the last samples written to the current file are guaranteed to be those that were output immediately before those in the sample buffer passed to that method.
pub unsafe fn isRecording(&self) -> bool
Indicates whether the receiver is currently recording.
The value of this property is YES when the receiver currently has a file to which it is writing new samples, NO otherwise.
pub unsafe fn isRecordingPaused(&self) -> bool
Indicates whether recording to the current output file is paused.
This property indicates whether recording to the file returned by outputFileURL has been paused using the pauseRecording method. When a recording is paused, captured samples are not written to the output file, but new samples can be written to the same file in the future by calling resumeRecording.
pub unsafe fn pauseRecording(&self)
Pauses recording to the current output file.
This method causes the receiver to stop writing captured samples to the current output file returned by outputFileURL, but leaves the file open so that samples can be written to it in the future, when resumeRecording is called. This allows clients to record multiple media segments that are not contiguous in time to a single file.
On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the last samples written to the current file are guaranteed to be those that were output immediately before those in the sample buffer passed to that method.
A recording can be stopped as normal, even when it’s paused.
A format or device change will result in the recording being stopped, even when it’s paused.
pub unsafe fn resumeRecording(&self)
Resumes recording to the current output file after it was previously paused using pauseRecording.
This method causes the receiver to resume writing captured samples to the current output file returned by outputFileURL, after recording was previously paused using pauseRecording. This allows clients to record multiple media segments that are not contiguous in time to a single file.
On macOS, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the current file are guaranteed to be those contained in the sample buffer passed to that method.
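The pause/resume pair above can be sketched as follows (the helper name is hypothetical, and `output` is assumed to be mid-recording):

```rust
use objc2_av_foundation::AVCaptureFileOutput;

// Hypothetical sketch: record two non-contiguous segments into the same
// file. The "..." gap stands for application logic deciding when to cut.
unsafe fn skip_a_segment(output: &AVCaptureFileOutput) {
    output.pauseRecording();  // new samples are no longer written
    // ... some time passes ...
    output.resumeRecording(); // samples append to the same open file
}
```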
pub unsafe fn recordedDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
Indicates the duration of the media recorded to the current output file.
If recording is in progress, this property returns the total time recorded so far.
pub unsafe fn recordedFileSize(&self) -> i64
Indicates the size, in bytes, of the data recorded to the current output file.
If a recording is in progress, this property returns the size in bytes of the data recorded so far.
pub unsafe fn maxRecordedDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
Specifies the maximum duration of the media that should be recorded by the receiver.
This property specifies a hard limit on the duration of recorded files. Recording is stopped when the limit is reached and the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: delegate method is invoked with an appropriate error. The default value of this property is kCMTimeInvalid, which indicates no limit.
pub unsafe fn setMaxRecordedDuration(&self, max_recorded_duration: CMTime)
Available on crate feature objc2-core-media only.
Setter for maxRecordedDuration.
pub unsafe fn maxRecordedFileSize(&self) -> i64
Specifies the maximum size, in bytes, of the data that should be recorded by the receiver.
This property specifies a hard limit on the data size of recorded files. Recording is stopped when the limit is reached and the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: delegate method is invoked with an appropriate error. The default value of this property is 0, which indicates no limit.
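A minimal sketch of using this limit (the helper name and the 10 MiB figure are illustrative):

```rust
use objc2_av_foundation::AVCaptureFileOutput;

// Hypothetical sketch: cap recordings at 10 MiB. Hitting the cap stops
// recording, and the delegate's didFinishRecordingToOutputFileAtURL:
// callback fires with an appropriate error.
unsafe fn cap_at_10_mib(output: &AVCaptureFileOutput) {
    output.setMaxRecordedFileSize(10 * 1024 * 1024);
}
```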
pub unsafe fn setMaxRecordedFileSize(&self, max_recorded_file_size: i64)
Setter for maxRecordedFileSize.
pub unsafe fn minFreeDiskSpaceLimit(&self) -> i64
Specifies the minimum amount of free space, in bytes, required for recording to continue on a given volume.
This property specifies a hard lower limit on the amount of free space that must remain on a target volume for recording to continue. Recording is stopped when the limit is reached and the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: delegate method is invoked with an appropriate error.
pub unsafe fn setMinFreeDiskSpaceLimit(&self, min_free_disk_space_limit: i64)
Setter for minFreeDiskSpaceLimit.
Methods from Deref<Target = AVCaptureOutput>§
pub unsafe fn connections(&self) -> Retained<NSArray<AVCaptureConnection>>
Available on crate feature AVCaptureSession only.
The connections that describe the flow of media data to the receiver from AVCaptureInputs.
The value of this property is an NSArray of AVCaptureConnection objects, each describing the mapping between the receiver and the AVCaptureInputPorts of one or more AVCaptureInputs.
pub unsafe fn connectionWithMediaType(
    &self,
    media_type: &AVMediaType,
) -> Option<Retained<AVCaptureConnection>>
Available on crate features AVCaptureSession and AVMediaFormat only.
Returns the first connection in the connections array with an inputPort of the specified mediaType.
Parameter mediaType: An AVMediaType constant from AVMediaFormat.h, e.g. AVMediaTypeVideo.
This convenience method returns the first AVCaptureConnection in the receiver’s connections array that has an AVCaptureInputPort of the specified mediaType. If no connection with the specified mediaType is found, nil is returned.
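A sketch of the lookup described above (hypothetical helper name; assumes the AVCaptureSession and AVMediaFormat features are enabled):

```rust
use objc2_av_foundation::{AVCaptureOutput, AVMediaTypeAudio};

// Hypothetical sketch: check whether an audio connection feeds this output.
// connectionWithMediaType returns None when no matching connection exists.
unsafe fn has_audio_connection(output: &AVCaptureOutput) -> bool {
    output.connectionWithMediaType(AVMediaTypeAudio).is_some()
}
```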
pub unsafe fn transformedMetadataObjectForMetadataObject_connection(
    &self,
    metadata_object: &AVMetadataObject,
    connection: &AVCaptureConnection,
) -> Option<Retained<AVMetadataObject>>
Available on crate features AVCaptureSession and AVMetadataObject only.
Converts an AVMetadataObject’s visual properties to the receiver’s coordinates.
Parameter metadataObject: An AVMetadataObject originating from the same AVCaptureInput as the receiver.
Parameter connection: The receiver’s connection whose AVCaptureInput matches that of the metadata object to be converted.
Returns: An AVMetadataObject whose properties are in output coordinates.
AVMetadataObject bounds may be expressed as a rect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. Face metadata objects likewise express yaw and roll angles with respect to an unrotated picture. -transformedMetadataObjectForMetadataObject:connection: converts the visual properties in the coordinate space of the supplied AVMetadataObject to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. If the provided metadata object originates from an input source other than the preview layer’s, nil will be returned.
If an AVCaptureVideoDataOutput instance’s connection’s videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and or flipping sample buffers as they pass through it. AVCaptureStillImageOutput, on the other hand, does not physically rotate its buffers. It attaches an appropriate kCGImagePropertyOrientation number to captured still image buffers (see ImageIO/CGImageProperties.h) indicating how the image should be displayed on playback. Likewise, AVCaptureMovieFileOutput does not physically apply orientation/mirroring to its sample buffers – it uses a QuickTime track matrix to indicate how the buffers should be rotated and/or flipped on playback.
transformedMetadataObjectForMetadataObject:connection: alters the visual properties of the provided metadata object to match the physical rotation / mirroring of the sample buffers provided by the receiver through the indicated connection. I.e., for video data output, adjusted metadata object coordinates are rotated/mirrored. For still image and movie file output, they are not.
pub unsafe fn metadataOutputRectOfInterestForRect(
    &self,
    rect_in_output_coordinates: CGRect,
) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle in the receiver’s coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver.
Parameter rectInOutputCoordinates: A CGRect in the receiver’s coordinates.
Returns: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
pub unsafe fn rectForMetadataOutputRectOfInterest(
    &self,
    rect_in_metadata_output_coordinates: CGRect,
) -> CGRect
Available on crate feature objc2-core-foundation only.
Converts a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput whose capture device is providing input to the receiver to a rectangle in the receiver’s coordinates.
Parameter rectInMetadataOutputCoordinates: A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
Returns: A CGRect in the receiver’s coordinates.
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in the coordinate space of an AVCaptureMetadataOutput whose AVCaptureDevice is providing input to the coordinate space of the receiver. The conversion takes orientation, mirroring, and scaling into consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring are applied to sample buffers passing through the output.
pub unsafe fn isDeferredStartSupported(&self) -> bool
A BOOL value that indicates whether the output supports deferred start.
You can only set the deferredStartEnabled property value to true if the output supports deferred start.
pub unsafe fn isDeferredStartEnabled(&self) -> bool
A BOOL value that indicates whether to defer starting this capture output.
When this value is true, the session does not prepare the output’s resources until some time after AVCaptureSession/startRunning returns. You can start the visual parts of your user interface (e.g. preview) prior to other parts (e.g. photo/movie capture, metadata output, etc..) to improve startup performance. Set this value to false for outputs that your app needs for startup, and true for the ones it does not need to start immediately. For example, an AVCaptureVideoDataOutput that you intend to use for displaying preview should set this value to false, so that the frames are available as soon as possible.
By default, for apps that are linked on or after iOS 26, this property value is true for AVCapturePhotoOutput and AVCaptureFileOutput subclasses if supported, and false otherwise. When set to true for AVCapturePhotoOutput, if you want to support multiple capture requests before running deferred start, set AVCapturePhotoOutput/responsiveCaptureEnabled to true on that output.
If deferredStartSupported is false, setting this property value to true results in the system throwing an NSInvalidArgumentException.
Note: Set this value before calling AVCaptureSession/commitConfiguration, as it requires a lengthy reconfiguration of the capture render pipeline.
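The support check described above can be sketched like this (hypothetical helper name):

```rust
use objc2_av_foundation::AVCaptureOutput;

// Hypothetical sketch: defer a non-critical output's startup only when
// supported; enabling it when deferredStartSupported is false throws
// NSInvalidArgumentException.
unsafe fn defer_if_possible(output: &AVCaptureOutput) {
    if output.isDeferredStartSupported() {
        output.setDeferredStartEnabled(true);
    }
}
```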
pub unsafe fn setDeferredStartEnabled(&self, deferred_start_enabled: bool)
Setter for isDeferredStartEnabled.
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations§
impl AsRef<AVCaptureFileOutput> for AVCaptureAudioFileOutput
fn as_ref(&self) -> &AVCaptureFileOutput
impl AsRef<AVCaptureOutput> for AVCaptureAudioFileOutput
fn as_ref(&self) -> &AVCaptureOutput
impl AsRef<AnyObject> for AVCaptureAudioFileOutput
impl AsRef<NSObject> for AVCaptureAudioFileOutput
impl Borrow<AVCaptureFileOutput> for AVCaptureAudioFileOutput
fn borrow(&self) -> &AVCaptureFileOutput
impl Borrow<AVCaptureOutput> for AVCaptureAudioFileOutput
fn borrow(&self) -> &AVCaptureOutput
impl Borrow<AnyObject> for AVCaptureAudioFileOutput
impl Borrow<NSObject> for AVCaptureAudioFileOutput
impl ClassType for AVCaptureAudioFileOutput
const NAME: &'static str = "AVCaptureAudioFileOutput"
type Super = AVCaptureFileOutput
type ThreadKind = <<AVCaptureAudioFileOutput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureAudioFileOutput
impl Deref for AVCaptureAudioFileOutput
impl Hash for AVCaptureAudioFileOutput
impl Message for AVCaptureAudioFileOutput
impl NSObjectProtocol for AVCaptureAudioFileOutput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.