AVCaptureConnection

Struct AVCaptureConnection 

Source
pub struct AVCaptureConnection { /* private fields */ }
Available on crate feature AVCaptureSession only.
Expand description

AVCaptureConnection represents a connection between an AVCaptureInputPort or ports, and an AVCaptureOutput or AVCaptureVideoPreviewLayer present in an AVCaptureSession.

AVCaptureInputs have one or more AVCaptureInputPorts. AVCaptureOutputs can accept data from one or more sources (for example, an AVCaptureMovieFileOutput accepts both video and audio data). AVCaptureVideoPreviewLayers can accept data from one AVCaptureInputPort whose mediaType is AVMediaTypeVideo. When an input or output is added to a session, or a video preview layer is associated with a session, the session greedily forms connections between all the compatible AVCaptureInputs’ ports and AVCaptureOutputs or AVCaptureVideoPreviewLayers. By iterating through an output’s connections or a video preview layer’s sole connection, a client may enable or disable the flow of data from a given input to a given output or preview layer.
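A minimal sketch of that enable/disable pattern, assuming the objc2-av-foundation bindings on this page plus `AVCaptureOutput::connections`, `AVCaptureInputPort::mediaType`, and the `AVMediaTypeAudio` static; `movie_output` is a placeholder for an AVCaptureMovieFileOutput already added to a session:

```rust
use objc2_av_foundation::{AVCaptureMovieFileOutput, AVMediaTypeAudio};

/// Stop audio from reaching a movie file output while leaving video flowing.
unsafe fn disable_audio(movie_output: &AVCaptureMovieFileOutput) {
    for connection in movie_output.connections().iter() {
        // A connection carries audio if any of its input ports does.
        let is_audio = connection
            .inputPorts()
            .iter()
            .any(|port| &*port.mediaType() == AVMediaTypeAudio);
        if is_audio {
            // Disables data flow over this connection only.
            connection.setEnabled(false);
        }
    }
}
```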

Connections involving audio expose an array of AVCaptureAudioChannel objects, which can be used for monitoring levels.

Connections involving video expose video specific properties, such as videoMirrored and videoRotationAngle.

See also Apple’s documentation

Implementations§

Source§

impl AVCaptureConnection

Source

pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>

Source

pub unsafe fn new() -> Retained<Self>

Source

pub unsafe fn connectionWithInputPorts_output( ports: &NSArray<AVCaptureInputPort>, output: &AVCaptureOutput, ) -> Retained<Self>

Available on crate features AVCaptureInput and AVCaptureOutputBase only.

Returns an AVCaptureConnection instance describing a connection between the specified inputPorts and the specified output.

Parameter ports: An array of AVCaptureInputPort objects associated with AVCaptureInput objects.

Parameter output: An AVCaptureOutput object.

Returns: An AVCaptureConnection instance joining the specified inputPorts to the specified output.

This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using -addInput: or -addOutput:, connections are formed between all compatible inputs and outputs automatically. You do not need to manually create and add connections to the session unless you use the primitive -addInputWithNoConnections: or -addOutputWithNoConnections: methods.
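A sketch of the manual-wiring path the paragraph above describes, assuming the session methods `beginConfiguration`, `addInputWithNoConnections`, `addOutputWithNoConnections`, `canAddConnection`, `addConnection`, and `commitConfiguration`, plus `AVCaptureInput::ports`, are exposed by the bindings as their Objective-C counterparts suggest; `session`, `input`, and `output` are placeholders:

```rust
use objc2_av_foundation::{
    AVCaptureConnection, AVCaptureDeviceInput, AVCaptureMovieFileOutput, AVCaptureSession,
};

/// Wire an input to an output by hand instead of relying on auto-connection.
unsafe fn wire_manually(
    session: &AVCaptureSession,
    input: &AVCaptureDeviceInput,
    output: &AVCaptureMovieFileOutput,
) {
    session.beginConfiguration();
    // The *WithNoConnections variants suppress the session's greedy wiring.
    session.addInputWithNoConnections(input);
    session.addOutputWithNoConnections(output);
    // Join the input's ports to the output explicitly.
    let connection = AVCaptureConnection::connectionWithInputPorts_output(&input.ports(), output);
    if session.canAddConnection(&connection) {
        session.addConnection(&connection);
    }
    session.commitConfiguration();
}
```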

Source

pub unsafe fn connectionWithInputPort_videoPreviewLayer( port: &AVCaptureInputPort, layer: &AVCaptureVideoPreviewLayer, ) -> Retained<Self>

Available on crate feature AVCaptureInput and crate feature AVCaptureVideoPreviewLayer and crate feature objc2-quartz-core and non-watchOS only.

Returns an AVCaptureConnection instance describing a connection between the specified inputPort and the specified AVCaptureVideoPreviewLayer instance.

Parameter port: An AVCaptureInputPort object associated with an AVCaptureInput object.

Parameter layer: An AVCaptureVideoPreviewLayer object.

Returns: An AVCaptureConnection instance joining the specified inputPort to the specified video preview layer.

This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using AVCaptureVideoPreviewLayer’s -initWithSession: or -setSession:, a connection is formed between the first compatible input port and the video preview layer automatically. You do not need to manually create and add connections to the session unless you use AVCaptureVideoPreviewLayer’s primitive -initWithSessionWithNoConnection: or -setSessionWithNoConnection: methods.

Source

pub unsafe fn initWithInputPorts_output( this: Allocated<Self>, ports: &NSArray<AVCaptureInputPort>, output: &AVCaptureOutput, ) -> Retained<Self>

Available on crate features AVCaptureInput and AVCaptureOutputBase only.

Returns an AVCaptureConnection instance describing a connection between the specified inputPorts and the specified output.

Parameter ports: An array of AVCaptureInputPort objects associated with AVCaptureInput objects.

Parameter output: An AVCaptureOutput object.

Returns: An AVCaptureConnection instance joining the specified inputPorts to the specified output.

This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using -addInput: or -addOutput:, connections are formed between all compatible inputs and outputs automatically. You do not need to manually create and add connections to the session unless you use the primitive -addInputWithNoConnections: or -addOutputWithNoConnections: methods.

Source

pub unsafe fn initWithInputPort_videoPreviewLayer( this: Allocated<Self>, port: &AVCaptureInputPort, layer: &AVCaptureVideoPreviewLayer, ) -> Retained<Self>

Available on crate feature AVCaptureInput and crate feature AVCaptureVideoPreviewLayer and crate feature objc2-quartz-core and non-watchOS only.

Returns an AVCaptureConnection instance describing a connection between the specified inputPort and the specified AVCaptureVideoPreviewLayer instance.

Parameter port: An AVCaptureInputPort object associated with an AVCaptureInput object.

Parameter layer: An AVCaptureVideoPreviewLayer object.

Returns: An AVCaptureConnection instance joining the specified inputPort to the specified video preview layer.

This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using AVCaptureVideoPreviewLayer’s -initWithSession: or -setSession:, a connection is formed between the first compatible input port and the video preview layer automatically. You do not need to manually create and add connections to the session unless you use AVCaptureVideoPreviewLayer’s primitive -initWithSessionWithNoConnection: or -setSessionWithNoConnection: methods.

Source

pub unsafe fn inputPorts(&self) -> Retained<NSArray<AVCaptureInputPort>>

Available on crate feature AVCaptureInput only.

An array of AVCaptureInputPort instances providing data through this connection.

An AVCaptureConnection may involve one or more AVCaptureInputPorts producing data to the connection’s AVCaptureOutput. This property is read-only. An AVCaptureConnection’s inputPorts remain static for the life of the object.

Source

pub unsafe fn output(&self) -> Option<Retained<AVCaptureOutput>>

Available on crate feature AVCaptureOutputBase only.

The AVCaptureOutput instance consuming data from this connection’s inputPorts.

An AVCaptureConnection may involve one or more AVCaptureInputPorts producing data to the connection’s AVCaptureOutput. This property is read-only. An AVCaptureConnection’s output remains static for the life of the object. Note that a connection can either be to an output or a video preview layer, but never to both.

Source

pub unsafe fn videoPreviewLayer( &self, ) -> Option<Retained<AVCaptureVideoPreviewLayer>>

Available on crate feature AVCaptureVideoPreviewLayer and crate feature objc2-quartz-core and non-watchOS only.

The AVCaptureVideoPreviewLayer instance consuming data from this connection’s inputPort.

An AVCaptureConnection may involve one AVCaptureInputPort producing data to an AVCaptureVideoPreviewLayer object. This property is read-only. An AVCaptureConnection’s videoPreviewLayer remains static for the life of the object. Note that a connection can either be to an output or a video preview layer, but never to both.

Source

pub unsafe fn isEnabled(&self) -> bool

Indicates whether the connection’s output should consume data.

The value of this property is a BOOL that determines whether the receiver’s output should consume data from its connected inputPorts when a session is running. Clients can set this property to stop the flow of data to a given output during capture. The default value is YES.

Source

pub unsafe fn setEnabled(&self, enabled: bool)

Setter for isEnabled.

Source

pub unsafe fn isActive(&self) -> bool

Indicates whether the receiver’s output is currently capable of consuming data through this connection.

The value of this property is a BOOL that determines whether the receiver’s output can consume data provided through this connection. This property is read-only. Clients may key-value observe this property to know when a session’s configuration forces a connection to become inactive. The default value is YES.

Prior to iOS 11, the audio connection feeding an AVCaptureAudioDataOutput is made inactive when using AVCaptureSessionPresetPhoto or the equivalent photo format using -[AVCaptureDevice activeFormat]. On iOS 11 and later, the audio connection feeding AVCaptureAudioDataOutput is active for all presets and device formats.

Source

pub unsafe fn audioChannels(&self) -> Retained<NSArray<AVCaptureAudioChannel>>

An array of AVCaptureAudioChannel objects representing individual channels of audio data flowing through the connection.

This property is only applicable to AVCaptureConnection instances involving audio. In such connections, the audioChannels array contains one AVCaptureAudioChannel object for each channel of audio data flowing through this connection.
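A level-metering sketch, assuming AVCaptureAudioChannel exposes `averagePowerLevel` and `peakHoldLevel` (dBFS floats) as in the Objective-C API; in a real app this would run periodically (e.g. on a timer) while the session is running:

```rust
use objc2_av_foundation::AVCaptureConnection;

/// Print current audio levels for each channel flowing through `connection`.
unsafe fn log_levels(connection: &AVCaptureConnection) {
    for (i, channel) in connection.audioChannels().iter().enumerate() {
        println!(
            "channel {i}: avg {:.1} dBFS, peak {:.1} dBFS",
            channel.averagePowerLevel(),
            channel.peakHoldLevel(),
        );
    }
}
```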

Source

pub unsafe fn isVideoMirroringSupported(&self) -> bool

Indicates whether the connection supports setting the videoMirrored property.

This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMirrored property may only be set if -isVideoMirroringSupported returns YES.

Source

pub unsafe fn isVideoMirrored(&self) -> bool

Indicates whether the video flowing through the connection should be mirrored about its vertical axis.

This property is only applicable to AVCaptureConnection instances involving video. If -isVideoMirroringSupported returns YES, videoMirrored may be set to flip the video about its vertical axis and produce a mirror-image effect. This property may not be set unless -isVideoMirroringSupported returns YES, otherwise an NSInvalidArgumentException is thrown. It likewise may not be set while -automaticallyAdjustsVideoMirroring returns YES, otherwise an NSInvalidArgumentException is thrown.
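A sketch showing both guards the docs require — per the exception behavior described above, automatic adjustment must be turned off before setting videoMirrored manually:

```rust
use objc2_av_foundation::AVCaptureConnection;

/// Mirror video on `connection` (e.g. for a front-camera preview).
unsafe fn mirror(connection: &AVCaptureConnection) {
    if connection.isVideoMirroringSupported() {
        // Take manual control first; setting videoMirrored while the session
        // still adjusts mirroring automatically throws NSInvalidArgumentException.
        connection.setAutomaticallyAdjustsVideoMirroring(false);
        connection.setVideoMirrored(true);
    }
}
```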

Source

pub unsafe fn setVideoMirrored(&self, video_mirrored: bool)

Setter for isVideoMirrored.

Source

pub unsafe fn automaticallyAdjustsVideoMirroring(&self) -> bool

Specifies whether or not the value of videoMirrored can change based on the configuration of the session.

For some session configurations, video data flowing through the connection will be mirrored by default. When the value of this property is YES, the value of videoMirrored may change depending on the configuration of the session, for example after switching to a different AVCaptureDeviceInput. The default value is YES.

Source

pub unsafe fn setAutomaticallyAdjustsVideoMirroring( &self, automatically_adjusts_video_mirroring: bool, )

Source

pub unsafe fn isVideoRotationAngleSupported( &self, video_rotation_angle: CGFloat, ) -> bool

Available on crate feature objc2-core-foundation only.

Returns whether the connection supports the given rotation angle in degrees.

Parameter videoRotationAngle: A video rotation angle to be checked.

Returns: YES if the connection supports the given video rotation angle, NO otherwise.

The connection’s videoRotationAngle property can only be set to a certain angle if this method returns YES for that angle. Only rotation angles of 0, 90, 180 and 270 are supported.

Source

pub unsafe fn videoRotationAngle(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates whether the video flowing through the connection should be rotated with a given angle in degrees.

This property is only applicable to AVCaptureConnection instances involving video or depth. -setVideoRotationAngle: throws an NSInvalidArgumentException if set to an unsupported value (see -isVideoRotationAngleSupported:). Note that setting videoRotationAngle does not necessarily result in physical rotation of video buffers. For instance, a video connection to an AVCaptureMovieFileOutput handles orientation using a QuickTime track matrix. In the AVCapturePhotoOutput, orientation is handled using Exif tags. And the AVCaptureVideoPreviewLayer applies transforms to its contents to perform rotations. However, the AVCaptureVideoDataOutput and AVCaptureDepthDataOutput do output physically rotated video buffers. Setting a video rotation angle for an output that does physically rotate buffers requires a lengthy configuration of the capture render pipeline and should be done before calling -[AVCaptureSession startRunning].

Starting with the Spring 2024 iPad line, the default value of videoRotationAngle is 180 degrees for video data on Front Camera as compared to 0 degrees on previous devices. So clients using AVCaptureVideoDataOutput and AVCaptureDepthDataOutput should set videoRotationAngle to 0 to avoid the physical buffer rotation described above. And clients rotating video data by themselves must account for the default value of videoRotationAngle when applying angles (videoRotationAngleForHorizonLevelPreview, videoRotationAngleForHorizonLevelCapture) from AVCaptureDeviceRotationCoordinator. Note that this change in default value is currently limited to these iPads, however it is recommended that clients rotating video data themselves incorporate the default rotation value into their workflows for all devices.
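A guarded-rotation sketch following the rules above — only 0, 90, 180, and 270 degrees can be supported, and for outputs that physically rotate buffers the call belongs before startRunning:

```rust
use objc2_av_foundation::AVCaptureConnection;
use objc2_core_foundation::CGFloat;

/// Rotate video on `connection` to `degrees` if the angle is supported.
unsafe fn rotate(connection: &AVCaptureConnection, degrees: CGFloat) {
    // Unsupported angles would otherwise raise NSInvalidArgumentException.
    if connection.isVideoRotationAngleSupported(degrees) {
        connection.setVideoRotationAngle(degrees);
    }
}
```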

Source

pub unsafe fn setVideoRotationAngle(&self, video_rotation_angle: CGFloat)

Available on crate feature objc2-core-foundation only.

Setter for videoRotationAngle.

Source

pub unsafe fn isVideoOrientationSupported(&self) -> bool

👎Deprecated: Use -isVideoRotationAngleSupported: instead

Indicates whether the connection supports setting the videoOrientation property.

This property is deprecated. Use -isVideoRotationAngleSupported: instead.

Source

pub unsafe fn videoOrientation(&self) -> AVCaptureVideoOrientation

👎Deprecated: Use -videoRotationAngle instead

Indicates whether the video flowing through the connection should be rotated to a given orientation.

This property is deprecated. Use -videoRotationAngle instead. This property may only be set if -isVideoOrientationSupported returns YES, otherwise an NSInvalidArgumentException is thrown.

Source

pub unsafe fn setVideoOrientation( &self, video_orientation: AVCaptureVideoOrientation, )

👎Deprecated: Use -videoRotationAngle instead

Setter for videoOrientation.

Source

pub unsafe fn isVideoFieldModeSupported(&self) -> bool

Indicates whether the connection supports setting the videoFieldMode property.

This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoFieldMode property may only be set if -isVideoFieldModeSupported returns YES.

Source

pub unsafe fn videoFieldMode(&self) -> AVVideoFieldMode

Indicates how interlaced video flowing through the connection should be treated.

This property is only applicable to AVCaptureConnection instances involving video. If -isVideoFieldModeSupported returns YES, videoFieldMode may be set to affect interlaced video content flowing through the connection.

Source

pub unsafe fn setVideoFieldMode(&self, video_field_mode: AVVideoFieldMode)

Setter for videoFieldMode.

Source

pub unsafe fn isVideoMinFrameDurationSupported(&self) -> bool

👎Deprecated: Use AVCaptureDevice’s activeFormat.videoSupportedFrameRateRanges instead.

Indicates whether the connection supports setting the videoMinFrameDuration property.

This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMinFrameDuration property may only be set if -isVideoMinFrameDurationSupported returns YES.

This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.

Source

pub unsafe fn videoMinFrameDuration(&self) -> CMTime

👎Deprecated: Use AVCaptureDevice’s activeVideoMinFrameDuration instead.
Available on crate feature objc2-core-media only.

Indicates the minimum time interval at which the receiver should output consecutive video frames.

The value of this property is a CMTime specifying the minimum duration of each video frame output by the receiver, placing a lower bound on the amount of time that should separate consecutive frames. This is equivalent to the reciprocal of the maximum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited maximum frame rate. The default value is kCMTimeInvalid.

This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.

Source

pub unsafe fn setVideoMinFrameDuration(&self, video_min_frame_duration: CMTime)

👎Deprecated: Use AVCaptureDevice’s activeVideoMinFrameDuration instead.
Available on crate feature objc2-core-media only.
Source

pub unsafe fn isVideoMaxFrameDurationSupported(&self) -> bool

👎Deprecated: Use AVCaptureDevice’s activeFormat.videoSupportedFrameRateRanges instead.

Indicates whether the connection supports setting the videoMaxFrameDuration property.

This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMaxFrameDuration property may only be set if -isVideoMaxFrameDurationSupported returns YES.

This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.

Source

pub unsafe fn videoMaxFrameDuration(&self) -> CMTime

👎Deprecated: Use AVCaptureDevice’s activeVideoMaxFrameDuration instead.
Available on crate feature objc2-core-media only.

Indicates the maximum time interval at which the receiver should output consecutive video frames.

The value of this property is a CMTime specifying the maximum duration of each video frame output by the receiver, placing an upper bound on the amount of time that should separate consecutive frames. This is equivalent to the reciprocal of the minimum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited minimum frame rate. The default value is kCMTimeInvalid.

This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.

Source

pub unsafe fn setVideoMaxFrameDuration(&self, video_max_frame_duration: CMTime)

👎Deprecated: Use AVCaptureDevice’s activeVideoMaxFrameDuration instead.
Available on crate feature objc2-core-media only.
Source

pub unsafe fn videoMaxScaleAndCropFactor(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates the maximum video scale and crop factor supported by the receiver.

This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMaxScaleAndCropFactor property specifies the maximum CGFloat value that may be used when setting the videoScaleAndCropFactor property.

Source

pub unsafe fn videoScaleAndCropFactor(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates the current video scale and crop factor in use by the receiver.

This property only applies to AVCaptureStillImageOutput connections. In such connections, the videoScaleAndCropFactor property may be set to a value in the range of 1.0 to videoMaxScaleAndCropFactor. At a factor of 1.0, the image is its original size. At a factor greater than 1.0, the image is scaled by the factor and center-cropped to its original dimensions. This factor is applied in addition to any magnification from AVCaptureDevice’s videoZoomFactor property.

See: -[AVCaptureDevice videoZoomFactor]

Source

pub unsafe fn setVideoScaleAndCropFactor( &self, video_scale_and_crop_factor: CGFloat, )

Available on crate feature objc2-core-foundation only.
Source

pub unsafe fn preferredVideoStabilizationMode( &self, ) -> AVCaptureVideoStabilizationMode

Available on crate feature AVCaptureDevice only.

Indicates the stabilization mode to apply to video flowing through the receiver when it is supported.

This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats may be available for stabilization. By setting the preferredVideoStabilizationMode property to a value other than AVCaptureVideoStabilizationModeOff, video flowing through the receiver is stabilized when the mode is available. Enabling video stabilization introduces additional latency into the video capture pipeline and may consume more system memory depending on the stabilization mode and format. If the preferred stabilization mode isn’t available, the activeVideoStabilizationMode will be set to AVCaptureVideoStabilizationModeOff. Clients may key-value observe the activeVideoStabilizationMode property to know which stabilization mode is in use or when it is off. The default value is AVCaptureVideoStabilizationModeOff. When setting this property to AVCaptureVideoStabilizationModeAuto, an appropriate stabilization mode will be chosen based on the format and frame rate. For apps linked before iOS 6.0, the default value is AVCaptureVideoStabilizationModeStandard for a video connection attached to an AVCaptureMovieFileOutput instance. For apps linked on or after iOS 6.0, the default value is always AVCaptureVideoStabilizationModeOff. Setting a video stabilization mode using this property may change the value of enablesVideoStabilizationWhenAvailable.
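A sketch of the preferred/active split described above, assuming the enum variants are generated as associated constants (`Auto`, `Off`) on AVCaptureVideoStabilizationMode; a real app would key-value observe activeVideoStabilizationMode instead of polling it:

```rust
use objc2_av_foundation::{AVCaptureConnection, AVCaptureVideoStabilizationMode};

/// Request automatic stabilization, then check what actually took effect.
unsafe fn stabilize(connection: &AVCaptureConnection) {
    if connection.isVideoStabilizationSupported() {
        connection.setPreferredVideoStabilizationMode(AVCaptureVideoStabilizationMode::Auto);
    }
    // activeVideoStabilizationMode never reports Auto; Off means the current
    // format/frame rate could not be stabilized.
    if connection.activeVideoStabilizationMode() == AVCaptureVideoStabilizationMode::Off {
        eprintln!("stabilization unavailable for the active format");
    }
}
```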

Source

pub unsafe fn setPreferredVideoStabilizationMode( &self, preferred_video_stabilization_mode: AVCaptureVideoStabilizationMode, )

Available on crate feature AVCaptureDevice only.
Source

pub unsafe fn activeVideoStabilizationMode( &self, ) -> AVCaptureVideoStabilizationMode

Available on crate feature AVCaptureDevice only.

Indicates the stabilization mode currently being applied to video flowing through the receiver.

This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats may be stabilized. The activeVideoStabilizationMode property returns a value other than AVCaptureVideoStabilizationModeOff if video stabilization is currently in use. This property never returns AVCaptureVideoStabilizationModeAuto. This property is key-value observable.

Source

pub unsafe fn isVideoStabilizationSupported(&self) -> bool

Indicates whether the connection supports video stabilization.

This property is only applicable to AVCaptureConnection instances involving video. In such connections, the -enablesVideoStabilizationWhenAvailable property may only be set if -supportsVideoStabilization returns YES. This property returns YES if the connection’s input device has one or more formats that support video stabilization and the connection’s output supports video stabilization. See -[AVCaptureDeviceFormat isVideoStabilizationModeSupported:] to check which video stabilization modes are supported by the active device format.

Source

pub unsafe fn isVideoStabilizationEnabled(&self) -> bool

👎Deprecated: Use activeVideoStabilizationMode instead.

Indicates whether stabilization is currently being applied to video flowing through the receiver.

This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats and resolutions may be available for stabilization. The videoStabilizationEnabled property returns YES if video stabilization is currently in use. This property is key-value observable. This property is deprecated. Use activeVideoStabilizationMode instead.

Source

pub unsafe fn enablesVideoStabilizationWhenAvailable(&self) -> bool

👎Deprecated: Use preferredVideoStabilizationMode instead.

Indicates whether stabilization should be applied to video flowing through the receiver when the feature is available.

This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats and resolutions may be available for stabilization. By setting the enablesVideoStabilizationWhenAvailable property to YES, video flowing through the receiver is stabilized when available. Enabling video stabilization may introduce additional latency into the video capture pipeline. Clients may key-value observe the videoStabilizationEnabled property to know when stabilization is in use or not. The default value is NO. For apps linked before iOS 6.0, the default value is YES for a video connection attached to an AVCaptureMovieFileOutput instance. For apps linked on or after iOS 6.0, the default value is always NO. This property is deprecated. Use preferredVideoStabilizationMode instead.

Source

pub unsafe fn setEnablesVideoStabilizationWhenAvailable( &self, enables_video_stabilization_when_available: bool, )

👎Deprecated: Use preferredVideoStabilizationMode instead.
Source

pub unsafe fn isCameraIntrinsicMatrixDeliverySupported(&self) -> bool

Indicates whether the connection supports camera intrinsic matrix delivery.

This property is only applicable to AVCaptureConnection instances involving video. For such connections, the cameraIntrinsicMatrixDeliveryEnabled property may only be set to YES if -isCameraIntrinsicMatrixDeliverySupported returns YES. This property returns YES if both the connection’s input device format and the connection’s output support camera intrinsic matrix delivery. Only the AVCaptureVideoDataOutput’s connection supports this property. Note that if video stabilization is enabled (preferredVideoStabilizationMode is set to something other than AVCaptureVideoStabilizationModeOff), camera intrinsic matrix delivery is not supported. Starting in iOS 14.3, camera intrinsics are delivered with video buffers on which geometric distortion correction is applied.

Source

pub unsafe fn isCameraIntrinsicMatrixDeliveryEnabled(&self) -> bool

Indicates whether camera intrinsic matrix delivery should be enabled.

This property is only applicable to AVCaptureConnection instances involving video. Refer to property cameraIntrinsicMatrixDeliverySupported before setting this property. When this property is set to YES, the receiver’s output will add the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix sample buffer attachment to all vended sample buffers. This property must be set before the session starts running.
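A sketch tying the two intrinsics properties together, per the constraints above — opt in on a video-data-output connection before the session starts running, and note that it is unavailable while stabilization is on; `session` and `connection` are placeholders:

```rust
use objc2_av_foundation::{AVCaptureConnection, AVCaptureSession};

/// Enable camera intrinsic matrix delivery before starting the session.
unsafe fn enable_intrinsics(session: &AVCaptureSession, connection: &AVCaptureConnection) {
    // Returns NO if, e.g., video stabilization is active on this connection.
    if connection.isCameraIntrinsicMatrixDeliverySupported() {
        // Adds kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix to each buffer.
        connection.setCameraIntrinsicMatrixDeliveryEnabled(true);
    }
    session.startRunning();
}
```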

Source

pub unsafe fn setCameraIntrinsicMatrixDeliveryEnabled( &self, camera_intrinsic_matrix_delivery_enabled: bool, )

Methods from Deref<Target = NSObject>§

Source

pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !

Handle messages the object doesn’t recognize.

See Apple’s documentation for details.

Methods from Deref<Target = AnyObject>§

Source

pub fn class(&self) -> &'static AnyClass

Dynamically find the class of this object.

§Panics

May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).

§Example

Check that an instance of NSObject has the precise class NSObject.

use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Source

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where T: Encode,

👎Deprecated: this is difficult to use correctly, use Ivar::load instead.

Use Ivar::load instead.

§Safety

The object must have an instance variable with the given name, and it must be of type T.

See Ivar::load_ptr for details surrounding this.

Source

pub fn downcast_ref<T>(&self) -> Option<&T>
where T: DowncastTarget,

Attempt to downcast the object to a class of type T.

This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.

§Mutable classes

Some classes have immutable and mutable variants, such as NSString and NSMutableString.

When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.

So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.

See Apple’s documentation on mutability and on isKindOfClass: for more details.

§Generic classes

Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.

You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.

§Panics

This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.

§Examples

Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.

Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);

for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§

impl AsRef<AVCaptureConnection> for AVCaptureConnection

fn as_ref(&self) -> &Self

Converts this type into a shared reference of the (usually inferred) input type.

impl AsRef<AnyObject> for AVCaptureConnection

fn as_ref(&self) -> &AnyObject

Converts this type into a shared reference of the (usually inferred) input type.

impl AsRef<NSObject> for AVCaptureConnection

fn as_ref(&self) -> &NSObject

Converts this type into a shared reference of the (usually inferred) input type.

impl Borrow<AnyObject> for AVCaptureConnection

fn borrow(&self) -> &AnyObject

Immutably borrows from an owned value. Read more

impl Borrow<NSObject> for AVCaptureConnection

fn borrow(&self) -> &NSObject

Immutably borrows from an owned value. Read more

impl ClassType for AVCaptureConnection

const NAME: &'static str = "AVCaptureConnection"

The name of the Objective-C class that this type represents. Read more

type Super = NSObject

The superclass of this class. Read more

type ThreadKind = <<AVCaptureConnection as ClassType>::Super as ClassType>::ThreadKind

Whether the type can be used from any thread, or from only the main thread. Read more

fn class() -> &'static AnyClass

Get a reference to the Objective-C class that this type represents. Read more

fn as_super(&self) -> &Self::Super

Get an immutable reference to the superclass.
impl Debug for AVCaptureConnection

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

impl Deref for AVCaptureConnection

type Target = NSObject

The resulting type after dereferencing.

fn deref(&self) -> &Self::Target

Dereferences the value.
impl Hash for AVCaptureConnection

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more

1.3.0 · fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more

impl Message for AVCaptureConnection

fn retain(&self) -> Retained<Self>
where Self: Sized,

Increment the reference count of the receiver. Read more

impl NSObjectProtocol for AVCaptureConnection

fn isEqual(&self, other: Option<&AnyObject>) -> bool
where Self: Sized + Message,

Check whether the object is equal to an arbitrary other object. Read more

fn hash(&self) -> usize
where Self: Sized + Message,

An integer that can be used as a table address in a hash table structure. Read more

fn isKindOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of the class, or one of its subclasses. Read more
fn is_kind_of<T>(&self) -> bool
where T: ClassType, Self: Sized + Message,

👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref
Check if the object is an instance of the class type, or one of its subclasses. Read more

fn isMemberOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of a specific class, without checking subclasses. Read more

fn respondsToSelector(&self, aSelector: Sel) -> bool
where Self: Sized + Message,

Check whether the object implements or inherits a method with the given selector. Read more
fn conformsToProtocol(&self, aProtocol: &AnyProtocol) -> bool
where Self: Sized + Message,

Check whether the object conforms to a given protocol. Read more

fn description(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object. Read more

fn debugDescription(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object to use when debugging. Read more

fn isProxy(&self) -> bool
where Self: Sized + Message,

Check whether the receiver is a subclass of the NSProxy root class instead of the usual NSObject. Read more

fn retainCount(&self) -> usize
where Self: Sized + Message,

The reference count of the object. Read more
impl PartialEq for AVCaptureConnection

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.

1.0.0 · fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl RefEncode for AVCaptureConnection

const ENCODING_REF: Encoding = <NSObject as ::objc2::RefEncode>::ENCODING_REF

The Objective-C type-encoding for a reference of this type. Read more

impl DowncastTarget for AVCaptureConnection

impl Eq for AVCaptureConnection

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<'a, T> AnyThread for T
where T: ClassType<ThreadKind = dyn AnyThread + 'a> + ?Sized,

fn alloc() -> Allocated<Self>
where Self: Sized + ClassType,

Allocate a new instance of the class. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,

type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> AutoreleaseSafe for T
where T: ?Sized,