pub struct AVCaptureConnection { /* private fields */ }
Available on crate feature AVCaptureSession only.
AVCaptureConnection represents a connection between an AVCaptureInputPort or ports, and an AVCaptureOutput or AVCaptureVideoPreviewLayer present in an AVCaptureSession.
AVCaptureInputs have one or more AVCaptureInputPorts. AVCaptureOutputs can accept data from one or more sources (example - an AVCaptureMovieFileOutput accepts both video and audio data). AVCaptureVideoPreviewLayers can accept data from one AVCaptureInputPort whose mediaType is AVMediaTypeVideo. When an input or output is added to a session, or a video preview layer is associated with a session, the session greedily forms connections between all the compatible AVCaptureInputs’ ports and AVCaptureOutputs or AVCaptureVideoPreviewLayers. Iterating through an output’s connections or a video preview layer’s sole connection, a client may enable or disable the flow of data from a given input to a given output or preview layer.
Connections involving audio expose an array of AVCaptureAudioChannel objects, which can be used for monitoring levels.
Connections involving video expose video specific properties, such as videoMirrored and videoRotationAngle.
See also Apple’s documentation
Implementations
impl AVCaptureConnection
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn connectionWithInputPorts_output(
ports: &NSArray<AVCaptureInputPort>,
output: &AVCaptureOutput,
) -> Retained<Self>
Available on crate features AVCaptureInput and AVCaptureOutputBase only.
Returns an AVCaptureConnection instance describing a connection between the specified inputPorts and the specified output.
Parameter ports: An array of AVCaptureInputPort objects associated with AVCaptureInput objects.
Parameter output: An AVCaptureOutput object.
Returns: An AVCaptureConnection instance joining the specified inputPorts to the specified output port.
This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using -addInput: or -addOutput:, connections are formed between all compatible inputs and outputs automatically. You do not need to manually create and add connections to the session unless you use the primitive -addInputWithNoConnections: or -addOutputWithNoConnections: methods.
pub unsafe fn connectionWithInputPort_videoPreviewLayer(
port: &AVCaptureInputPort,
layer: &AVCaptureVideoPreviewLayer,
) -> Retained<Self>
Available on crate feature AVCaptureInput and crate feature AVCaptureVideoPreviewLayer and crate feature objc2-quartz-core and non-watchOS only.
Returns an AVCaptureConnection instance describing a connection between the specified inputPort and the specified AVCaptureVideoPreviewLayer instance.
Parameter port: An AVCaptureInputPort object associated with an AVCaptureInput object.
Parameter layer: An AVCaptureVideoPreviewLayer object.
Returns: An AVCaptureConnection instance joining the specified inputPort to the specified video preview layer.
This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using AVCaptureVideoPreviewLayer’s -initWithSession: or -setSession:, a connection is formed between the first compatible input port and the video preview layer automatically. You do not need to manually create and add connections to the session unless you use AVCaptureVideoPreviewLayer’s primitive -initWithSessionWithNoConnection: or -setSessionWithNoConnection: methods.
pub unsafe fn initWithInputPorts_output(
this: Allocated<Self>,
ports: &NSArray<AVCaptureInputPort>,
output: &AVCaptureOutput,
) -> Retained<Self>
Available on crate features AVCaptureInput and AVCaptureOutputBase only.
Returns an AVCaptureConnection instance describing a connection between the specified inputPorts and the specified output.
Parameter ports: An array of AVCaptureInputPort objects associated with AVCaptureInput objects.
Parameter output: An AVCaptureOutput object.
Returns: An AVCaptureConnection instance joining the specified inputPorts to the specified output port.
This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using -addInput: or -addOutput:, connections are formed between all compatible inputs and outputs automatically. You do not need to manually create and add connections to the session unless you use the primitive -addInputWithNoConnections: or -addOutputWithNoConnections: methods.
pub unsafe fn initWithInputPort_videoPreviewLayer(
this: Allocated<Self>,
port: &AVCaptureInputPort,
layer: &AVCaptureVideoPreviewLayer,
) -> Retained<Self>
Available on crate feature AVCaptureInput and crate feature AVCaptureVideoPreviewLayer and crate feature objc2-quartz-core and non-watchOS only.
Returns an AVCaptureConnection instance describing a connection between the specified inputPort and the specified AVCaptureVideoPreviewLayer instance.
Parameter port: An AVCaptureInputPort object associated with an AVCaptureInput object.
Parameter layer: An AVCaptureVideoPreviewLayer object.
Returns: An AVCaptureConnection instance joining the specified inputPort to the specified video preview layer.
This method returns an instance of AVCaptureConnection that may be subsequently added to an AVCaptureSession instance using AVCaptureSession’s -addConnection: method. When using AVCaptureVideoPreviewLayer’s -initWithSession: or -setSession:, a connection is formed between the first compatible input port and the video preview layer automatically. You do not need to manually create and add connections to the session unless you use AVCaptureVideoPreviewLayer’s primitive -initWithSessionWithNoConnection: or -setSessionWithNoConnection: methods.
pub unsafe fn inputPorts(&self) -> Retained<NSArray<AVCaptureInputPort>>
Available on crate feature AVCaptureInput only.
An array of AVCaptureInputPort instances providing data through this connection.
An AVCaptureConnection may involve one or more AVCaptureInputPorts producing data to the connection’s AVCaptureOutput. This property is read-only. An AVCaptureConnection’s inputPorts remain static for the life of the object.
pub unsafe fn output(&self) -> Option<Retained<AVCaptureOutput>>
Available on crate feature AVCaptureOutputBase only.
The AVCaptureOutput instance consuming data from this connection’s inputPorts.
An AVCaptureConnection may involve one or more AVCaptureInputPorts producing data to the connection’s AVCaptureOutput. This property is read-only. An AVCaptureConnection’s output remains static for the life of the object. Note that a connection can either be to an output or a video preview layer, but never to both.
pub unsafe fn videoPreviewLayer(
&self,
) -> Option<Retained<AVCaptureVideoPreviewLayer>>
Available on crate feature AVCaptureVideoPreviewLayer and crate feature objc2-quartz-core and non-watchOS only.
The AVCaptureVideoPreviewLayer instance consuming data from this connection’s inputPort.
An AVCaptureConnection may involve one AVCaptureInputPort producing data to an AVCaptureVideoPreviewLayer object. This property is read-only. An AVCaptureConnection’s videoPreviewLayer remains static for the life of the object. Note that a connection can either be to an output or a video preview layer, but never to both.
pub unsafe fn isEnabled(&self) -> bool
Indicates whether the connection’s output should consume data.
The value of this property is a BOOL that determines whether the receiver’s output should consume data from its connected inputPorts when a session is running. Clients can set this property to stop the flow of data to a given output during capture. The default value is YES.
pub unsafe fn setEnabled(&self, enabled: bool)
Setter for isEnabled.
pub unsafe fn isActive(&self) -> bool
Indicates whether the receiver’s output is currently capable of consuming data through this connection.
The value of this property is a BOOL that determines whether the receiver’s output can consume data provided through this connection. This property is read-only. Clients may key-value observe this property to know when a session’s configuration forces a connection to become inactive. The default value is YES.
Prior to iOS 11, the audio connection feeding an AVCaptureAudioDataOutput is made inactive when using AVCaptureSessionPresetPhoto or the equivalent photo format using -[AVCaptureDevice activeFormat]. On iOS 11 and later, the audio connection feeding AVCaptureAudioDataOutput is active for all presets and device formats.
pub unsafe fn audioChannels(&self) -> Retained<NSArray<AVCaptureAudioChannel>>
An array of AVCaptureAudioChannel objects representing individual channels of audio data flowing through the connection.
This property is only applicable to AVCaptureConnection instances involving audio. In such connections, the audioChannels array contains one AVCaptureAudioChannel object for each channel of audio data flowing through this connection.
pub unsafe fn isVideoMirroringSupported(&self) -> bool
Indicates whether the connection supports setting the videoMirrored property.
This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMirrored property may only be set if -isVideoMirroringSupported returns YES.
pub unsafe fn isVideoMirrored(&self) -> bool
Indicates whether the video flowing through the connection should be mirrored about its vertical axis.
This property is only applicable to AVCaptureConnection instances involving video. If -isVideoMirroringSupported returns YES, videoMirrored may be set to flip the video about its vertical axis and produce a mirror-image effect. This property may not be set unless -isVideoMirroringSupported returns YES, and may not be set if -automaticallyAdjustsVideoMirroring returns YES; in either case an NSInvalidArgumentException is thrown.
pub unsafe fn setVideoMirrored(&self, video_mirrored: bool)
Setter for isVideoMirrored.
pub unsafe fn automaticallyAdjustsVideoMirroring(&self) -> bool
Specifies whether the value of videoMirrored can change based on the configuration of the session.
For some session configurations, video data flowing through the connection is mirrored by default. When the value of this property is YES, the value of videoMirrored may change depending on the configuration of the session, for example after switching to a different AVCaptureDeviceInput. The default value is YES.
pub unsafe fn setAutomaticallyAdjustsVideoMirroring(
&self,
automatically_adjusts_video_mirroring: bool,
)
Setter for automaticallyAdjustsVideoMirroring.
pub unsafe fn isVideoRotationAngleSupported(
&self,
video_rotation_angle: CGFloat,
) -> bool
Available on crate feature objc2-core-foundation only.
Returns whether the connection supports the given rotation angle in degrees.
Parameter videoRotationAngle: A video rotation angle to be checked.
Returns: YES if the connection supports the given video rotation angle, NO otherwise.
The connection’s videoRotationAngle property can only be set to a certain angle if this method returns YES for that angle. Only rotation angles of 0, 90, 180 and 270 are supported.
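The documented constraint can be sketched in plain Rust (no AVFoundation calls are made; the helper name is a hypothetical illustration, and the authoritative check for any particular connection remains -isVideoRotationAngleSupported:):

```rust
// Hypothetical helper: models only the documented set of settable
// rotation angles (0, 90, 180, 270 degrees). A real check must still
// go through the connection's isVideoRotationAngleSupported method,
// which may reject some of these for a given connection.
fn is_documented_rotation_angle(angle: f64) -> bool {
    [0.0, 90.0, 180.0, 270.0].contains(&angle)
}
```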
pub unsafe fn videoRotationAngle(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the angle, in degrees, by which video flowing through the connection should be rotated.
This property is only applicable to AVCaptureConnection instances involving video or depth. -setVideoRotationAngle: throws an NSInvalidArgumentException if set to an unsupported value (see -isVideoRotationAngleSupported:). Note that setting videoRotationAngle does not necessarily result in physical rotation of video buffers. For instance, a video connection to an AVCaptureMovieFileOutput handles orientation using a QuickTime track matrix, the AVCapturePhotoOutput handles orientation using Exif tags, and the AVCaptureVideoPreviewLayer applies transforms to its contents to perform rotations. However, the AVCaptureVideoDataOutput and AVCaptureDepthDataOutput do output physically rotated video buffers. Setting a video rotation angle for an output that physically rotates buffers requires a lengthy configuration of the capture render pipeline and should be done before calling -[AVCaptureSession startRunning].
Starting with the Spring 2024 iPad line, the default value of videoRotationAngle is 180 degrees for video data on Front Camera as compared to 0 degrees on previous devices. So clients using AVCaptureVideoDataOutput and AVCaptureDepthDataOutput should set videoRotationAngle to 0 to avoid the physical buffer rotation described above. And clients rotating video data by themselves must account for the default value of videoRotationAngle when applying angles (videoRotationAngleForHorizonLevelPreview, videoRotationAngleForHorizonLevelCapture) from AVCaptureDeviceRotationCoordinator. Note that this change in default value is currently limited to these iPads, however it is recommended that clients rotating video data themselves incorporate the default rotation value into their workflows for all devices.
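One plausible way to account for the nonzero default when applying a coordinator-provided angle, assuming rotation angles compose additively modulo 360 degrees (the function name and the composition rule are illustrative assumptions, not AVFoundation API):

```rust
// Illustrative arithmetic only, not an AVFoundation call: given a
// target angle from AVCaptureDeviceRotationCoordinator and the
// connection's default videoRotationAngle (180 on the affected iPads,
// 0 on other devices), compute the extra rotation a client would
// apply itself, assuming angles compose additively modulo 360.
fn compensated_rotation(target_angle: f64, default_angle: f64) -> f64 {
    ((target_angle - default_angle) % 360.0 + 360.0) % 360.0
}
```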
pub unsafe fn setVideoRotationAngle(&self, video_rotation_angle: CGFloat)
Available on crate feature objc2-core-foundation only.
Setter for videoRotationAngle.
pub unsafe fn isVideoOrientationSupported(&self) -> bool
👎Deprecated: Use -isVideoRotationAngleSupported: instead
Indicates whether the connection supports setting the videoOrientation property.
This property is deprecated. Use -isVideoRotationAngleSupported: instead.
pub unsafe fn videoOrientation(&self) -> AVCaptureVideoOrientation
👎Deprecated: Use -videoRotationAngle instead
Indicates whether the video flowing through the connection should be rotated to a given orientation.
This property is deprecated. Use -videoRotationAngle instead. This property may only be set if -isVideoOrientationSupported returns YES, otherwise an NSInvalidArgumentException is thrown.
pub unsafe fn setVideoOrientation(
&self,
video_orientation: AVCaptureVideoOrientation,
)
👎Deprecated: Use -videoRotationAngle instead
Setter for videoOrientation.
pub unsafe fn isVideoFieldModeSupported(&self) -> bool
Indicates whether the connection supports setting the videoFieldMode property.
This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoFieldMode property may only be set if -isVideoFieldModeSupported returns YES.
pub unsafe fn videoFieldMode(&self) -> AVVideoFieldMode
Indicates how interlaced video flowing through the connection should be treated.
This property is only applicable to AVCaptureConnection instances involving video. If -isVideoFieldModeSupported returns YES, videoFieldMode may be set to affect interlaced video content flowing through the connection.
pub unsafe fn setVideoFieldMode(&self, video_field_mode: AVVideoFieldMode)
Setter for videoFieldMode.
pub unsafe fn isVideoMinFrameDurationSupported(&self) -> bool
👎Deprecated: Use AVCaptureDevice’s activeFormat.videoSupportedFrameRateRanges instead.
Indicates whether the connection supports setting the videoMinFrameDuration property.
This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMinFrameDuration property may only be set if -isVideoMinFrameDurationSupported returns YES.
This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.
pub unsafe fn videoMinFrameDuration(&self) -> CMTime
👎Deprecated: Use AVCaptureDevice’s activeVideoMinFrameDuration instead.
Available on crate feature objc2-core-media only.
Indicates the minimum time interval at which the receiver should output consecutive video frames.
The value of this property is a CMTime specifying the minimum duration of each video frame output by the receiver, placing a lower bound on the amount of time that should separate consecutive frames. This is equivalent to the reciprocal of the maximum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited maximum frame rate. The default value is kCMTimeInvalid.
This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.
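The reciprocal relationship between frame duration and frame rate can be illustrated in plain Rust, modeling a CMTime as a (value, timescale) rational (the helper names are illustrative, not part of this crate):

```rust
// A CMTime is a rational: seconds = value / timescale. The documented
// relationship is videoMinFrameDuration = 1 / (maximum frame rate),
// so a 30 fps cap corresponds to a duration of 1/30 s, e.g. value 1
// at timescale 30 (or any equivalent rational such as 100/3000).
fn min_frame_duration_for_max_fps(max_fps: i32) -> (i64, i32) {
    (1, max_fps) // (value, timescale)
}

fn duration_seconds(value: i64, timescale: i32) -> f64 {
    value as f64 / timescale as f64
}
```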
pub unsafe fn setVideoMinFrameDuration(&self, video_min_frame_duration: CMTime)
👎Deprecated: Use AVCaptureDevice’s activeVideoMinFrameDuration instead.
Available on crate feature objc2-core-media only.
Setter for videoMinFrameDuration.
pub unsafe fn isVideoMaxFrameDurationSupported(&self) -> bool
👎Deprecated: Use AVCaptureDevice’s activeFormat.videoSupportedFrameRateRanges instead.
Indicates whether the connection supports setting the videoMaxFrameDuration property.
This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMaxFrameDuration property may only be set if -isVideoMaxFrameDurationSupported returns YES.
This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.
pub unsafe fn videoMaxFrameDuration(&self) -> CMTime
👎Deprecated: Use AVCaptureDevice’s activeVideoMaxFrameDuration instead.
Available on crate feature objc2-core-media only.
Indicates the maximum time interval at which the receiver should output consecutive video frames.
The value of this property is a CMTime specifying the maximum duration of each video frame output by the receiver, placing an upper bound on the amount of time that should separate consecutive frames. This is equivalent to the reciprocal of the minimum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited minimum frame rate. The default value is kCMTimeInvalid.
This property is deprecated on iOS, where min and max frame rate adjustments are applied exclusively at the AVCaptureDevice using the activeVideoMinFrameDuration and activeVideoMaxFrameDuration properties. On macOS, frame rate adjustments are supported both at the AVCaptureDevice and at AVCaptureConnection, enabling connections to output different frame rates.
pub unsafe fn setVideoMaxFrameDuration(&self, video_max_frame_duration: CMTime)
👎Deprecated: Use AVCaptureDevice’s activeVideoMaxFrameDuration instead.
Available on crate feature objc2-core-media only.
Setter for videoMaxFrameDuration.
pub unsafe fn videoMaxScaleAndCropFactor(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the maximum video scale and crop factor supported by the receiver.
This property is only applicable to AVCaptureConnection instances involving video. In such connections, the videoMaxScaleAndCropFactor property specifies the maximum CGFloat value that may be used when setting the videoScaleAndCropFactor property.
pub unsafe fn videoScaleAndCropFactor(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the current video scale and crop factor in use by the receiver.
This property only applies to AVCaptureStillImageOutput connections. In such connections, the videoScaleAndCropFactor property may be set to a value in the range of 1.0 to videoMaxScaleAndCropFactor. At a factor of 1.0, the image is its original size. At a factor greater than 1.0, the image is scaled by the factor and center-cropped to its original dimensions. This factor is applied in addition to any magnification from AVCaptureDevice’s videoZoomFactor property.
See: -[AVCaptureDevice videoZoomFactor]
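The documented valid range can be kept with a simple clamp before a factor is handed to the connection (a plain-Rust sketch; `clamp_scale_and_crop` is a hypothetical helper, and the actual maximum must be read from videoMaxScaleAndCropFactor at runtime):

```rust
// Hypothetical helper: keeps a requested factor within the documented
// range [1.0, videoMaxScaleAndCropFactor] before it is applied to the
// connection. At 1.0 the image is its original size; above 1.0 it is
// scaled and center-cropped.
fn clamp_scale_and_crop(requested: f64, max_factor: f64) -> f64 {
    requested.clamp(1.0, max_factor)
}
```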
pub unsafe fn setVideoScaleAndCropFactor(
&self,
video_scale_and_crop_factor: CGFloat,
)
Available on crate feature objc2-core-foundation only.
Setter for videoScaleAndCropFactor.
pub unsafe fn preferredVideoStabilizationMode(
&self,
) -> AVCaptureVideoStabilizationMode
Available on crate feature AVCaptureDevice only.
Indicates the stabilization mode to apply to video flowing through the receiver when it is supported.
This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats may be available for stabilization. By setting the preferredVideoStabilizationMode property to a value other than AVCaptureVideoStabilizationModeOff, video flowing through the receiver is stabilized when the mode is available. Enabling video stabilization introduces additional latency into the video capture pipeline and may consume more system memory depending on the stabilization mode and format. If the preferred stabilization mode isn’t available, the activeVideoStabilizationMode will be set to AVCaptureVideoStabilizationModeOff. Clients may key-value observe the activeVideoStabilizationMode property to know which stabilization mode is in use or when it is off. The default value is AVCaptureVideoStabilizationModeOff. When setting this property to AVCaptureVideoStabilizationModeAuto, an appropriate stabilization mode will be chosen based on the format and frame rate. For apps linked before iOS 6.0, the default value is AVCaptureVideoStabilizationModeStandard for a video connection attached to an AVCaptureMovieFileOutput instance. For apps linked on or after iOS 6.0, the default value is always AVCaptureVideoStabilizationModeOff. Setting a video stabilization mode using this property may change the value of enablesVideoStabilizationWhenAvailable.
pub unsafe fn setPreferredVideoStabilizationMode(
&self,
preferred_video_stabilization_mode: AVCaptureVideoStabilizationMode,
)
Available on crate feature AVCaptureDevice only.
Setter for preferredVideoStabilizationMode.
pub unsafe fn activeVideoStabilizationMode(
&self,
) -> AVCaptureVideoStabilizationMode
Available on crate feature AVCaptureDevice only.
Indicates the stabilization mode currently being applied to video flowing through the receiver.
This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats may be stabilized. The activeVideoStabilizationMode property returns a value other than AVCaptureVideoStabilizationModeOff if video stabilization is currently in use. This property never returns AVCaptureVideoStabilizationModeAuto. This property is key-value observable.
pub unsafe fn isVideoStabilizationSupported(&self) -> bool
Indicates whether the connection supports video stabilization.
This property is only applicable to AVCaptureConnection instances involving video. In such connections, the enablesVideoStabilizationWhenAvailable property may only be set if -supportsVideoStabilization returns YES. This property returns YES if the connection’s input device has one or more formats that support video stabilization and the connection’s output supports video stabilization. See -[AVCaptureDeviceFormat isVideoStabilizationModeSupported:] to check which video stabilization modes are supported by the active device format.
pub unsafe fn isVideoStabilizationEnabled(&self) -> bool
👎Deprecated: Use activeVideoStabilizationMode instead.
Indicates whether stabilization is currently being applied to video flowing through the receiver.
This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats and resolutions may be available for stabilization. The videoStabilizationEnabled property returns YES if video stabilization is currently in use. This property is key-value observable. This property is deprecated. Use activeVideoStabilizationMode instead.
pub unsafe fn enablesVideoStabilizationWhenAvailable(&self) -> bool
👎Deprecated: Use preferredVideoStabilizationMode instead.
Indicates whether stabilization should be applied to video flowing through the receiver when the feature is available.
This property is only applicable to AVCaptureConnection instances involving video. On devices where the video stabilization feature is supported, only a subset of available source formats and resolutions may be available for stabilization. By setting the enablesVideoStabilizationWhenAvailable property to YES, video flowing through the receiver is stabilized when available. Enabling video stabilization may introduce additional latency into the video capture pipeline. Clients may key-value observe the videoStabilizationEnabled property to know when stabilization is in use or not. The default value is NO. For apps linked before iOS 6.0, the default value is YES for a video connection attached to an AVCaptureMovieFileOutput instance. For apps linked on or after iOS 6.0, the default value is always NO. This property is deprecated. Use preferredVideoStabilizationMode instead.
pub unsafe fn setEnablesVideoStabilizationWhenAvailable(
&self,
enables_video_stabilization_when_available: bool,
)
👎Deprecated: Use preferredVideoStabilizationMode instead.
Setter for enablesVideoStabilizationWhenAvailable.
pub unsafe fn isCameraIntrinsicMatrixDeliverySupported(&self) -> bool
Indicates whether the connection supports camera intrinsic matrix delivery.
This property is only applicable to AVCaptureConnection instances involving video. For such connections, the cameraIntrinsicMatrixDeliveryEnabled property may only be set to YES if -isCameraIntrinsicMatrixDeliverySupported returns YES. This property returns YES if both the connection’s input device format and the connection’s output support camera intrinsic matrix delivery. Only the AVCaptureVideoDataOutput’s connection supports this property. Note that if video stabilization is enabled (preferredVideoStabilizationMode is set to something other than AVCaptureVideoStabilizationModeOff), camera intrinsic matrix delivery is not supported. Starting in iOS 14.3, camera intrinsics are delivered with video buffers on which geometric distortion correction is applied.
pub unsafe fn isCameraIntrinsicMatrixDeliveryEnabled(&self) -> bool
Indicates whether camera intrinsic matrix delivery should be enabled.
This property is only applicable to AVCaptureConnection instances involving video. Refer to property cameraIntrinsicMatrixDeliverySupported before setting this property. When this property is set to YES, the receiver’s output will add the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix sample buffer attachment to all vended sample buffers. This property must be set before the session starts running.
pub unsafe fn setCameraIntrinsicMatrixDeliveryEnabled(
&self,
camera_intrinsic_matrix_delivery_enabled: bool,
)
Setter for isCameraIntrinsicMatrixDeliveryEnabled.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations
impl AsRef<AnyObject> for AVCaptureConnection
impl AsRef<NSObject> for AVCaptureConnection
impl Borrow<AnyObject> for AVCaptureConnection
impl Borrow<NSObject> for AVCaptureConnection
impl ClassType for AVCaptureConnection
const NAME: &'static str = "AVCaptureConnection"
type ThreadKind = <<AVCaptureConnection as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureConnection
impl Deref for AVCaptureConnection
impl Hash for AVCaptureConnection
impl Message for AVCaptureConnection
impl NSObjectProtocol for AVCaptureConnection
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.