pub struct AVCaptureDeviceInput { /* private fields */ }
Available on crate feature AVCaptureInput only.
AVCaptureDeviceInput is a concrete subclass of AVCaptureInput that provides an interface for capturing media from an AVCaptureDevice.
Instances of AVCaptureDeviceInput are input sources for AVCaptureSession that provide media data from devices connected to the system, represented by instances of AVCaptureDevice.
See also Apple’s documentation
Implementations
impl AVCaptureDeviceInput
pub unsafe fn deviceInputWithDevice_error(
    device: &AVCaptureDevice,
) -> Result<Retained<Self>, Retained<NSError>>
Available on crate feature AVCaptureDevice only.
Returns an AVCaptureDeviceInput instance that provides media data from the given device.
Parameter device: An AVCaptureDevice instance to be used for capture.
Parameter outError: On return, if the given device cannot be used for capture, points to an NSError describing the problem.
Returns: An AVCaptureDeviceInput instance that provides data from the given device, or nil, if the device could not be used for capture.
This method returns an instance of AVCaptureDeviceInput that can be used to capture data from an AVCaptureDevice in an AVCaptureSession. This method attempts to open the device for capture, taking exclusive control of it if necessary. If the device cannot be opened because it is no longer available or because it is in use, for example, this method returns nil, and the optional outError parameter points to an NSError describing the problem.
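As an illustrative sketch only (the helper function is hypothetical, the session-attachment calls assume the objc2-av-foundation binding names, and the code requires the relevant crate features plus an Apple platform at runtime), creating an input and attaching it to a session might look like:

```rust
use objc2::rc::Retained;
use objc2_av_foundation::{AVCaptureDevice, AVCaptureDeviceInput, AVCaptureSession};
use objc2_foundation::NSError;

// Hypothetical helper: wrap a device (obtained elsewhere, e.g. via the
// AVCaptureDevice discovery APIs) in an input and attach it to a session.
// `deviceInputWithDevice_error` fails if the device is unavailable or in use.
unsafe fn attach_device(
    session: &AVCaptureSession,
    device: &AVCaptureDevice,
) -> Result<Retained<AVCaptureDeviceInput>, Retained<NSError>> {
    let input = AVCaptureDeviceInput::deviceInputWithDevice_error(device)?;
    if session.canAddInput(&input) {
        session.addInput(&input);
    }
    Ok(input)
}
```

Note that in the Rust binding the Objective-C `outError` out-parameter surfaces as the `Err` arm of the returned `Result` rather than as a separate argument.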
pub unsafe fn initWithDevice_error(
    this: Allocated<Self>,
    device: &AVCaptureDevice,
) -> Result<Retained<Self>, Retained<NSError>>
Available on crate feature AVCaptureDevice only.
Creates an AVCaptureDeviceInput instance that provides media data from the given device.
Parameter device: An AVCaptureDevice instance to be used for capture.
Parameter outError: On return, if the given device cannot be used for capture, points to an NSError describing the problem.
Returns: An AVCaptureDeviceInput instance that provides data from the given device, or nil, if the device could not be used for capture.
This method creates an instance of AVCaptureDeviceInput that can be used to capture data from an AVCaptureDevice in an AVCaptureSession. This method attempts to open the device for capture, taking exclusive control of it if necessary. If the device cannot be opened because it is no longer available or because it is in use, for example, this method returns nil, and the optional outError parameter points to an NSError describing the problem.
pub unsafe fn device(&self) -> Retained<AVCaptureDevice>
Available on crate feature AVCaptureDevice only.
The device from which the receiver provides data.
The value of this property is the AVCaptureDevice instance that was used to create the receiver.
pub unsafe fn unifiedAutoExposureDefaultsEnabled(&self) -> bool
Specifies whether the source device should use the same default auto exposure behaviors for -[AVCaptureSession setSessionPreset:] and -[AVCaptureDevice setActiveFormat:].
AVCaptureDevice’s activeFormat property may be set in two different ways: 1) you set it directly using one of the formats in the device’s -formats array, or 2) the AVCaptureSession sets it on your behalf when you set the AVCaptureSession’s sessionPreset property. Depending on the device and format, the default auto exposure behavior may be configured differently when you use one method or the other, resulting in non-uniform auto exposure behavior. Auto exposure defaults include min frame rate, max frame rate, and max exposure duration. If you wish to ensure that consistent default behaviors are applied to the device regardless of which API you use to configure the activeFormat, set the device input’s unifiedAutoExposureDefaultsEnabled property to YES. The default value for this property is NO.
Note that if you manually set the device’s min frame rate, max frame rate, or max exposure duration, your custom values will override the device defaults regardless of whether you’ve set this property to YES.
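A minimal sketch of opting in (the helper function is hypothetical; the setter name follows this page's bindings):

```rust
use objc2_av_foundation::AVCaptureDeviceInput;

// Hypothetical helper: opt in to unified auto exposure defaults so that
// preset-based and format-based configuration behave consistently.
unsafe fn use_unified_ae_defaults(input: &AVCaptureDeviceInput) {
    input.setUnifiedAutoExposureDefaultsEnabled(true);
}
```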
pub unsafe fn setUnifiedAutoExposureDefaultsEnabled(
    &self,
    unified_auto_exposure_defaults_enabled: bool,
)
Setter for unifiedAutoExposureDefaultsEnabled.
pub unsafe fn portsWithMediaType_sourceDeviceType_sourceDevicePosition(
    &self,
    media_type: Option<&AVMediaType>,
    source_device_type: Option<&AVCaptureDeviceType>,
    source_device_position: AVCaptureDevicePosition,
) -> Retained<NSArray<AVCaptureInputPort>>
Available on crate features AVCaptureDevice and AVMediaFormat only.
An accessor method used to retrieve a virtual device’s constituent device ports for use in an AVCaptureMultiCamSession.
Parameter mediaType: The AVMediaType of the port for which you’re searching, or nil if all media types should be considered.
Parameter sourceDeviceType: The AVCaptureDeviceType of the port for which you’re searching, or nil if source device type is irrelevant.
Parameter sourceDevicePosition: The AVCaptureDevicePosition of the port for which you’re searching. AVCaptureDevicePositionUnspecified is germane to audio devices, indicating omnidirectional audio. For other types of capture devices (e.g. cameras), AVCaptureDevicePositionUnspecified means all positions should be considered in the search.
Returns: An array of AVCaptureInputPorts satisfying the search criteria, or an empty array if none could be found.
When using AVCaptureMultiCamSession, multiple devices may be run simultaneously. You may also run simultaneous streams from a virtual device such as the Dual Camera. By inspecting a virtual device’s constituentDevices property, you can find its underlying physical devices and, using this method, search for ports originating from one of those constituent devices. Note that the AVCaptureInput.ports array does not include constituent device ports for virtual devices. You must use this accessor method to discover the ports for which you’re specifically looking. These constituent device ports may be used to make connections to outputs for use with an AVCaptureMultiCamSession. Using the Dual Camera as an example, the AVCaptureInput.ports property exposes only those ports supported by the virtual device (it switches automatically between wide and telephoto cameras according to the zoom factor). You may use this method to find the video ports for the constituentDevices.
AVCaptureInputPort *wideVideoPort = [dualCameraInput portsWithMediaType:AVMediaTypeVideo sourceDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera sourceDevicePosition:AVCaptureDevicePositionBack].firstObject;
AVCaptureInputPort *teleVideoPort = [dualCameraInput portsWithMediaType:AVMediaTypeVideo sourceDeviceType:AVCaptureDeviceTypeBuiltInTelephotoCamera sourceDevicePosition:AVCaptureDevicePositionBack].firstObject;
These ports may be used to create connections, say, to two AVCaptureVideoDataOutput instances, allowing for synchronized full frame rate delivery of both wide and telephoto streams.
As of iOS 13, constituent device ports may not be connected to AVCapturePhotoOutput instances. Clients who wish to capture multiple photos from a virtual device should use AVCapturePhotoOutput’s virtualDeviceConstituentPhotoDeliveryEnabled feature.
When used in conjunction with an audio device, this method allows you to discover microphones in different AVCaptureDevicePositions. When you intend to work with an AVCaptureMultiCamSession, you may use these ports to make connections and capture both front facing and back facing audio simultaneously to two different outputs. When used with an AVCaptureMultiCamSession, the audio device port whose sourceDevicePosition is AVCaptureDevicePositionUnspecified produces omnidirectional sound.
pub unsafe fn videoMinFrameDurationOverride(&self) -> CMTime
Available on crate feature objc2-core-media only.
A property that acts as a modifier to the AVCaptureDevice’s activeVideoMinFrameDuration property. Default value is kCMTimeInvalid.
An AVCaptureDevice’s activeVideoMinFrameDuration property is the reciprocal of its active maximum frame rate. To limit the max frame rate of the capture device, clients may set the device’s activeVideoMinFrameDuration to a value supported by the receiver’s activeFormat (see AVCaptureDeviceFormat’s videoSupportedFrameRateRanges property). Changes you make to the device’s activeVideoMinFrameDuration property take effect immediately without disrupting preview. Therefore, the AVCaptureSession must always allocate sufficient resources to allow the device to run at its activeFormat’s max allowable frame rate. If you wish to use a particular device format but only ever run it at lower frame rates (for instance, only run a 1080p240 fps format at a max frame rate of 60), you can set the AVCaptureDeviceInput’s videoMinFrameDurationOverride property to the reciprocal of the max frame rate you intend to use before starting the session (or within a beginConfiguration / commitConfiguration block while running the session).
When a device input is added to a session, this property reverts to the default of kCMTimeInvalid (no override).
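As a sketch of the 60 fps cap described above (assuming the objc2-core-media CMTime layout with public value/timescale/flags/epoch fields and a `Valid` flag constant — verify these names against the crate):

```rust
use objc2_av_foundation::AVCaptureDeviceInput;
use objc2_core_media::{CMTime, CMTimeFlags};

// Hypothetical helper: cap the device at 60 fps by overriding the minimum
// frame duration to 1/60 s, even if the active format supports 240 fps.
unsafe fn cap_at_60_fps(input: &AVCaptureDeviceInput) {
    let one_sixtieth = CMTime {
        value: 1,
        timescale: 60,
        flags: CMTimeFlags::Valid,
        epoch: 0,
    };
    input.setVideoMinFrameDurationOverride(one_sixtieth);
}
```

Call this before starting the session, or inside a beginConfiguration / commitConfiguration block while running.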
pub unsafe fn setVideoMinFrameDurationOverride(
    &self,
    video_min_frame_duration_override: CMTime,
)
Available on crate feature objc2-core-media only.
Setter for videoMinFrameDurationOverride.
pub unsafe fn isLockedVideoFrameDurationSupported(&self) -> bool
Indicates whether the device input supports locked frame durations.
See AVCaptureDeviceInput/activeLockedVideoFrameDuration for more information on video frame duration locking.
pub unsafe fn activeLockedVideoFrameDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
The receiver’s locked frame duration (the reciprocal of its frame rate). Setting this property guarantees the intra-frame duration delivered by the device input is precisely the frame duration you request.
Set this property to run the receiver’s associated AVCaptureDevice at precisely your provided frame rate (expressed as a duration). Query AVCaptureDevice/minSupportedLockedVideoFrameDuration to find the minimum value supported by this AVCaptureDeviceInput. In order to disable locked video frame duration, set this property to kCMTimeInvalid. This property resets itself to kCMTimeInvalid when the receiver’s attached AVCaptureDevice/activeFormat changes. When you set this property, its value is also reflected in the receiver’s AVCaptureDevice/activeVideoMinFrameDuration and AVCaptureDevice/activeVideoMaxFrameDuration.
- Note: Locked frame duration availability may change depending on the device configuration. For example, locked frame duration is unsupported when AVCaptureDevice/autoVideoFrameRateEnabled or AVCaptureMovieFileOutput/spatialVideoCaptureEnabled is set to true.
- Note: Only one AVCaptureDeviceInput added to an AVCaptureMultiCamSession can follow an external sync device or run at a locked frame duration.
- Note: Setting this property may cause a lengthy reconfiguration of the receiver, similar to setting AVCaptureDevice/activeFormat or AVCaptureSession/sessionPreset.
- Important: If you set this property to a valid value while the receiver’s AVCaptureDevice/minSupportedLockedVideoFrameDuration is kCMTimeInvalid, it throws an NSInvalidArgumentException.
- Important: If you set this property while the receiver’s lockedVideoFrameDurationSupported property returns false, it throws an NSInvalidArgumentException.
pub unsafe fn setActiveLockedVideoFrameDuration(
    &self,
    active_locked_video_frame_duration: CMTime,
)
Available on crate feature objc2-core-media only.
Setter for activeLockedVideoFrameDuration.
pub unsafe fn isExternalSyncSupported(&self) -> bool
Indicates whether the device input supports being configured to follow an external sync device.
See AVCaptureDeviceInput/followExternalSyncDevice:videoFrameDuration:delegate: for more information on external sync.
pub unsafe fn followExternalSyncDevice_videoFrameDuration_delegate(
    &self,
    external_sync_device: &AVExternalSyncDevice,
    frame_duration: CMTime,
    delegate: Option<&ProtocolObject<dyn AVExternalSyncDeviceDelegate>>,
)
Available on crate features AVExternalSyncDevice and objc2-core-media only.
Configures the device input to follow an external sync device at the given frame duration.
- Parameter externalSyncDevice: The AVExternalSyncDevice hardware to follow.
- Parameter videoFrameDuration: The frame duration to which the AVExternalSyncDevice is calibrated.
- Parameter delegate: The delegate to notify when the connection status changes, or an error occurs.
Call this method to direct your AVCaptureDeviceInput to follow the external sync pulse from a sync device at the given frame duration.
Your provided videoFrameDuration value must match the sync pulse duration of the external sync device. If it does not, the request times out, the external sync device’s status returns to AVExternalSyncDeviceStatusReady, and your session stops running, posting an AVCaptureSessionRuntimeErrorNotification with AVErrorFollowExternalSyncDeviceTimedOut.
The ability to follow an external sync device may change depending on the device configuration. For example, followExternalSyncDevice:videoFrameDuration:delegate: cannot be used when AVCaptureDevice/autoVideoFrameRateEnabled is true.
To stop following an external pulse, call unfollowExternalSyncDevice. External sync device following is also disabled when your device’s AVCaptureDeviceFormat changes.
Your provided delegate’s AVExternalSyncDeviceDelegate/externalSyncDeviceStatusDidChange: method is called with a status of AVExternalSyncDeviceStatusReady if the external pulse signal is not close enough to the provided videoFrameDuration for successful calibration.
Once your AVExternalSyncDevice/status changes to AVExternalSyncDeviceStatusActiveSync, your input’s AVCaptureInput/activeExternalSyncVideoFrameDuration property reports the up-to-date frame duration. AVCaptureInput/activeExternalSyncVideoFrameDuration is also reflected in the AVCaptureDevice/activeVideoMinFrameDuration and AVCaptureDevice/activeVideoMaxFrameDuration of your input’s associated device.
- Note: Calling this method may cause a lengthy reconfiguration of the receiver, similar to setting a new active format or AVCaptureSession/sessionPreset.
- Important: Calling this method throws an NSInvalidArgumentException if AVCaptureDeviceInput/externalSyncSupported returns false.
- Important: The provided external sync device’s status must be AVExternalSyncDeviceStatusReady when you call this method, otherwise an NSInvalidArgumentException is thrown.
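A hedged sketch of the follow call (the helper function is hypothetical; the binding names follow this page, and the caller is responsible for ensuring the sync device's status is Ready and that `frame_duration` matches its pulse):

```rust
use objc2_av_foundation::{AVCaptureDeviceInput, AVExternalSyncDevice};
use objc2_core_media::CMTime;

// Hypothetical helper: follow an external sync device without a delegate.
// Checking `isExternalSyncSupported` first avoids an NSInvalidArgumentException.
unsafe fn follow_sync(
    input: &AVCaptureDeviceInput,
    sync_device: &AVExternalSyncDevice,
    frame_duration: CMTime,
) {
    if input.isExternalSyncSupported() {
        input.followExternalSyncDevice_videoFrameDuration_delegate(
            sync_device,
            frame_duration,
            None, // or Some(&delegate) to observe status changes
        );
    }
}
```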
pub unsafe fn activeExternalSyncVideoFrameDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
The receiver’s external sync frame duration (the reciprocal of its frame rate) when being driven by an external sync device.
Set up your input to follow an external sync device by calling followExternalSyncDevice:videoFrameDuration:delegate:.
- Note: The value of this readonly property is kCMTimeInvalid unless the AVExternalSyncDevice is actively driving the AVCaptureDeviceInput. This is reflected by the AVExternalSyncDevice/status being either AVExternalSyncDeviceStatusActiveSync or AVExternalSyncDeviceStatusFreeRunSync.
pub unsafe fn externalSyncDevice(
    &self,
) -> Option<Retained<AVExternalSyncDevice>>
Available on crate feature AVExternalSyncDevice only.
The external sync device currently being followed by this input.
This readonly property returns the AVExternalSyncDevice instance you provided in followExternalSyncDevice:videoFrameDuration:delegate:. This property returns nil when an external sync device is disconnected or fails to calibrate.
pub unsafe fn unfollowExternalSyncDevice(&self)
Discontinues external sync.
This method stops your input from syncing to the external sync device you specified in followExternalSyncDevice:videoFrameDuration:delegate:.
pub unsafe fn isMultichannelAudioModeSupported(
    &self,
    multichannel_audio_mode: AVCaptureMultichannelAudioMode,
) -> bool
Returns whether the receiver supports the given multichannel audio mode.
Parameter multichannelAudioMode: An AVCaptureMultichannelAudioMode to be checked.
Returns: YES if the receiver supports the given multichannel audio mode, NO otherwise.
The receiver’s multichannelAudioMode property can only be set to a certain mode if this method returns YES for that mode.
Multichannel audio modes are not supported when used in conjunction with AVCaptureMultiCamSession.
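The check-then-set requirement above can be sketched as follows (the helper function is hypothetical; the getter/setter names follow this page):

```rust
use objc2_av_foundation::{AVCaptureDeviceInput, AVCaptureMultichannelAudioMode};

// Hypothetical helper: only apply a multichannel audio mode after verifying
// support, since the setter rejects unsupported modes.
unsafe fn select_audio_mode(
    input: &AVCaptureDeviceInput,
    mode: AVCaptureMultichannelAudioMode,
) {
    if input.isMultichannelAudioModeSupported(mode) {
        input.setMultichannelAudioMode(mode);
    }
}
```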
pub unsafe fn multichannelAudioMode(&self) -> AVCaptureMultichannelAudioMode
Indicates the multichannel audio mode to apply when recording audio.
This property only takes effect when audio is being routed through the built-in microphone, and is ignored if an external microphone is in use.
The default value is AVCaptureMultichannelAudioModeNone, in which case the default single channel audio recording is used.
pub unsafe fn setMultichannelAudioMode(
    &self,
    multichannel_audio_mode: AVCaptureMultichannelAudioMode,
)
Setter for multichannelAudioMode.
pub unsafe fn isWindNoiseRemovalSupported(&self) -> bool
Returns whether or not the device supports wind noise removal during audio capture.
Returns: YES if the device supports wind noise removal, NO otherwise.
pub unsafe fn isWindNoiseRemovalEnabled(&self) -> bool
Specifies whether or not wind noise is removed during audio capture.
Wind noise removal is available when the AVCaptureDeviceInput multichannelAudioMode property is set to any value other than AVCaptureMultichannelAudioModeNone.
pub unsafe fn setWindNoiseRemovalEnabled(
    &self,
    wind_noise_removal_enabled: bool,
)
Setter for isWindNoiseRemovalEnabled.
pub unsafe fn isCinematicVideoCaptureSupported(&self) -> bool
A BOOL value specifying whether Cinematic Video capture is supported.
With Cinematic Video capture, you get a simulated depth-of-field effect that keeps your subjects (people, pets, and more) in sharp focus while applying a pleasing blur to the background (or foreground). Depending on the focus mode (see AVCaptureCinematicVideoFocusMode for detail), the camera either uses machine learning to automatically detect and focus on subjects in the scene, or it fixes focus on a subject until it exits the scene. Cinematic Videos can be played back and edited using the Cinematic framework.
You can adjust the video’s simulated aperture before starting a recording using the simulatedAperture property. With Cinematic Video specific focus methods on AVCaptureDevice, you can dynamically control focus transitions.
Movie files captured with Cinematic Video enabled can be played back and edited with the [Cinematic framework](https://developer.apple.com/documentation/cinematic/playing-and-editing-cinematic-mode-video?language=objc).
This property returns true if the session’s current configuration allows Cinematic Video capture. When switching cameras or formats, this property may change. When this property changes from true to false, cinematicVideoCaptureEnabled also reverts to false. If you’ve previously opted in for Cinematic Video capture and then change configuration, you may need to set cinematicVideoCaptureEnabled to true again. This property is key-value observable.
- Note: AVCaptureDepthDataOutput is not supported when cinematicVideoCaptureEnabled is set to true. Running an AVCaptureSession with both of these features throws an NSInvalidArgumentException.
pub unsafe fn isCinematicVideoCaptureEnabled(&self) -> bool
A BOOL value specifying whether the Cinematic Video effect is being applied to any movie file output, video data output, metadata output, or video preview layer added to the capture session.
Default is false. Set to true to enable support for Cinematic Video capture.
When you set this property to true, your input’s associated AVCaptureDevice/focusMode changes to AVCaptureFocusModeContinuousAutoFocus. While Cinematic Video capture is enabled, you are not permitted to change your device’s focus mode, and any attempt to do so results in an NSInvalidArgumentException. You may only set this property to true if cinematicVideoCaptureSupported is true.
- Note: Enabling Cinematic Video capture requires a lengthy reconfiguration of the capture render pipeline, so if you intend to capture Cinematic Video, you should set this property to true before calling AVCaptureSession/startRunning, or within AVCaptureSession/beginConfiguration and AVCaptureSession/commitConfiguration while running.
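The opt-in flow above can be sketched as follows (the helper function is hypothetical; the binding names follow this page, and the support check guards against the configuration-dependent availability described earlier):

```rust
use objc2_av_foundation::{AVCaptureDeviceInput, AVCaptureSession};

// Hypothetical helper: opt in to Cinematic Video capture inside a
// configuration block on a running session, checking support first.
unsafe fn enable_cinematic_video(
    session: &AVCaptureSession,
    input: &AVCaptureDeviceInput,
) {
    if input.isCinematicVideoCaptureSupported() {
        session.beginConfiguration();
        input.setCinematicVideoCaptureEnabled(true);
        session.commitConfiguration();
    }
}
```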
pub unsafe fn setCinematicVideoCaptureEnabled(
    &self,
    cinematic_video_capture_enabled: bool,
)
Setter for isCinematicVideoCaptureEnabled.
pub unsafe fn simulatedAperture(&self) -> c_float
Shallow depth of field simulated aperture.
When capturing a Cinematic Video, use this property to control the amount of blur in the simulated depth of field effect.
This property only takes effect when cinematicVideoCaptureEnabled is set to true.
- Important: Setting this property to a value less than the AVCaptureDevice/activeFormat/minSimulatedAperture or greater than the AVCaptureDevice/activeFormat/maxSimulatedAperture throws an NSRangeException. You may only set this property if AVCaptureDevice/activeFormat/minSimulatedAperture returns a non-zero value, otherwise an NSInvalidArgumentException is thrown. You must set this property before starting a Cinematic Video capture. If you attempt to set it while a recording is in progress, an NSInvalidArgumentException is thrown.
This property is initialized to the associated AVCaptureDevice/activeFormat/defaultSimulatedAperture.
This property is key-value observable.
pub unsafe fn setSimulatedAperture(&self, simulated_aperture: c_float)
Setter for simulatedAperture.
Methods from Deref<Target = AVCaptureInput>
pub unsafe fn ports(&self) -> Retained<NSArray<AVCaptureInputPort>>
The ports owned by the receiver.
The value of this property is an array of AVCaptureInputPort objects, each exposing an interface to a single stream of media data provided by an input.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎 Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations
impl AsRef<AVCaptureInput> for AVCaptureDeviceInput
    fn as_ref(&self) -> &AVCaptureInput
impl AsRef<AnyObject> for AVCaptureDeviceInput
impl AsRef<NSObject> for AVCaptureDeviceInput
impl Borrow<AVCaptureInput> for AVCaptureDeviceInput
    fn borrow(&self) -> &AVCaptureInput
impl Borrow<AnyObject> for AVCaptureDeviceInput
impl Borrow<NSObject> for AVCaptureDeviceInput
impl ClassType for AVCaptureDeviceInput
    const NAME: &'static str = "AVCaptureDeviceInput"
    type Super = AVCaptureInput
    type ThreadKind = <<AVCaptureDeviceInput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureDeviceInput
impl Deref for AVCaptureDeviceInput
impl Hash for AVCaptureDeviceInput
impl Message for AVCaptureDeviceInput
impl NSObjectProtocol for AVCaptureDeviceInput
    fn isEqual(&self, other: Option<&AnyObject>) -> bool
    fn hash(&self) -> usize
    fn isKindOfClass(&self, cls: &AnyClass) -> bool
    fn is_kind_of<T>(&self) -> bool
    👎 Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref