#[repr(C)]
pub struct AVCaptureDeviceFormat { /* private fields */ }
Available on crate feature AVCaptureDevice only.
An AVCaptureDeviceFormat wraps a CMFormatDescription and other format-related information, such as min and max framerate.
An AVCaptureDevice exposes an array of formats, and its current activeFormat may be queried. The payload for the formats property is an array of AVCaptureDeviceFormat objects and the activeFormat property payload is an AVCaptureDeviceFormat. AVCaptureDeviceFormat is a thin wrapper around a CMFormatDescription, and can carry associated device format information that doesn’t go in a CMFormatDescription, such as min and max frame rate. An AVCaptureDeviceFormat object is immutable. Its values do not change for the life of the object.
See also Apple’s documentation
Implementations
impl AVCaptureDeviceFormat
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn mediaType(&self) -> Retained<AVMediaType>
Available on crate feature AVMediaFormat only.
An NSString describing the media type of an AVCaptureDevice active or supported format.
Supported mediaTypes are listed in AVMediaFormat.h. This is a read-only property. The caller assumes no ownership of the returned value and should not CFRelease it.
pub unsafe fn formatDescription(&self) -> Retained<CMFormatDescription>
Available on crate feature objc2-core-media only.
A CMFormatDescription describing an AVCaptureDevice active or supported format. This is a read-only property. The caller assumes no ownership of the returned value and should not CFRelease it.
pub unsafe fn videoSupportedFrameRateRanges(
    &self,
) -> Retained<NSArray<AVFrameRateRange>>
A property indicating the format’s supported frame rate ranges.
videoSupportedFrameRateRanges is an array of AVFrameRateRange objects, one for each of the format’s supported video frame rate ranges.
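A common task with these ranges is picking the highest frame rate a format can sustain. The selection logic can be sketched in plain Rust; the hypothetical `(min, max)` tuples stand in for the minFrameRate/maxFrameRate values carried by each AVFrameRateRange (the real values come from the objects in this array).

```rust
// Hypothetical stand-in for the (minFrameRate, maxFrameRate) pairs carried by
// each AVFrameRateRange in the array returned by videoSupportedFrameRateRanges.
fn highest_max_frame_rate(ranges: &[(f64, f64)]) -> Option<f64> {
    // Pick the largest maxFrameRate across all supported ranges.
    ranges
        .iter()
        .map(|&(_min, max)| max)
        .fold(None, |best, max| match best {
            Some(b) if b >= max => Some(b),
            _ => Some(max),
        })
}

fn main() {
    // e.g. a format advertising [1..30] and [1..60] ranges
    let ranges = [(1.0, 30.0), (1.0, 60.0)];
    assert_eq!(highest_max_frame_rate(&ranges), Some(60.0));
    assert_eq!(highest_max_frame_rate(&[]), None);
}
```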
pub unsafe fn videoFieldOfView(&self) -> c_float
A property indicating the format’s horizontal field of view.
videoFieldOfView is a float value indicating the receiver’s field of view in degrees. If field of view is unknown, a value of 0 is returned.
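Since 0 is a sentinel for "unknown" rather than a real field of view, callers may want to map it to an Option. A tiny hypothetical helper (not part of the crate) makes the sentinel explicit:

```rust
// videoFieldOfView reports 0.0 when the field of view is unknown; map that
// sentinel to Option so downstream code can't mistake it for a real angle.
fn field_of_view_degrees(raw: f32) -> Option<f32> {
    if raw == 0.0 { None } else { Some(raw) }
}

fn main() {
    assert_eq!(field_of_view_degrees(0.0), None);
    assert_eq!(field_of_view_degrees(68.0), Some(68.0));
}
```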
pub unsafe fn isVideoBinned(&self) -> bool
A property indicating whether the format is binned.
videoBinned is a BOOL indicating whether the format is a binned format. Binning is a pixel-combining process which can result in greater low light sensitivity at the cost of reduced resolution.
pub unsafe fn isVideoStabilizationModeSupported(
    &self,
    video_stabilization_mode: AVCaptureVideoStabilizationMode,
) -> bool
Returns whether the format supports the given video stabilization mode.
Parameter videoStabilizationMode: An AVCaptureVideoStabilizationMode to be checked.
isVideoStabilizationModeSupported: returns a boolean value indicating whether the format can be stabilized using the given mode with -[AVCaptureConnection setPreferredVideoStabilizationMode:].
pub unsafe fn isVideoStabilizationSupported(&self) -> bool
👎 Deprecated: Use isVideoStabilizationModeSupported: instead.
A property indicating whether the format supports video stabilization.
videoStabilizationSupported is a BOOL indicating whether the format can be stabilized using AVCaptureConnection -setEnablesVideoStabilizationWhenAvailable. This property is deprecated. Use isVideoStabilizationModeSupported: instead.
pub unsafe fn videoMaxZoomFactor(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the maximum zoom factor available for the AVCaptureDevice’s videoZoomFactor property.
If the device’s videoZoomFactor property is assigned a larger value, an NSRangeException will be thrown. A maximum zoom factor of 1 indicates no zoom is available.
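Because an out-of-range assignment throws an NSRangeException, it is worth clamping a requested zoom factor before setting it on the device. A plain-Rust sketch of that guard, with the bound taken from this format's videoMaxZoomFactor:

```rust
// Assigning videoZoomFactor a value above videoMaxZoomFactor throws an
// NSRangeException, so clamp the request first. Valid factors lie in
// [1.0, videoMaxZoomFactor]; a max of 1.0 means no zoom is available.
fn clamped_zoom(requested: f64, video_max_zoom_factor: f64) -> f64 {
    requested.clamp(1.0, video_max_zoom_factor)
}

fn main() {
    assert_eq!(clamped_zoom(10.0, 4.0), 4.0); // above max: clamped down
    assert_eq!(clamped_zoom(0.5, 4.0), 1.0);  // below 1.0: no zoom-out
    assert_eq!(clamped_zoom(2.0, 4.0), 2.0);  // in range: unchanged
}
```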
pub unsafe fn videoZoomFactorUpscaleThreshold(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the value of AVCaptureDevice’s videoZoomFactor property at which the image output begins to require upscaling.
In some cases the image sensor’s dimensions are larger than the dimensions reported by the video AVCaptureDeviceFormat. As long as the sensor crop is larger than the reported dimensions of the AVCaptureDeviceFormat, the image will be downscaled. Setting videoZoomFactor to the value of videoZoomFactorUpscaleThreshold will provide a center crop of the sensor image data without any scaling. If a greater zoom factor is used, then the sensor data will be upscaled to the device format’s dimensions.
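The relationship above reduces to a simple comparison against the threshold, sketched here with plain values (the real threshold comes from videoZoomFactorUpscaleThreshold):

```rust
// At or below the upscale threshold, the sensor crop still covers the
// format's output dimensions (zoom == threshold is an unscaled center crop);
// above it, the sensor data must be digitally upscaled.
fn requires_upscaling(zoom: f64, upscale_threshold: f64) -> bool {
    zoom > upscale_threshold
}

fn main() {
    assert!(!requires_upscaling(1.0, 2.0)); // downscaled
    assert!(!requires_upscaling(2.0, 2.0)); // exact center crop, no scaling
    assert!(requires_upscaling(2.5, 2.0));  // upscaled
}
```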
pub unsafe fn systemRecommendedVideoZoomRange(
    &self,
) -> Option<Retained<AVZoomRange>>
Indicates the system’s recommended zoom range for this device format.
This property can be used to create a slider in your app’s user interface to control the device’s zoom with a system-recommended video zoom range. When a recommendation is not available, this property returns nil. Clients can key value observe AVCaptureDevice’s minAvailableVideoZoomFactor and maxAvailableVideoZoomFactor properties to know when a device’s supported zoom is restricted within the recommended zoom range.
The value of this property is also used for the AVCaptureSystemZoomSlider’s range.
pub unsafe fn minExposureDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
A CMTime indicating the minimum supported exposure duration.
This read-only property indicates the minimum supported exposure duration.
pub unsafe fn maxExposureDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
A CMTime indicating the maximum supported exposure duration.
This read-only property indicates the maximum supported exposure duration.
pub unsafe fn systemRecommendedExposureBiasRange(
    &self,
) -> Option<Retained<AVExposureBiasRange>>
Indicates the system’s recommended exposure bias range for this device format.
This property can be used to create a slider in your app’s user interface to control the device’s exposure bias with a system-recommended exposure bias range. When a recommendation is not available, this property returns nil.
The value of this property is also used for the AVCaptureSystemExposureBiasSlider’s range.
pub unsafe fn minISO(&self) -> c_float
A float indicating the minimum supported exposure ISO value.
This read-only property indicates the minimum supported exposure ISO value.
pub unsafe fn maxISO(&self) -> c_float
A float indicating the maximum supported exposure ISO value.
This read-only property indicates the maximum supported exposure ISO value.
pub unsafe fn isGlobalToneMappingSupported(&self) -> bool
A property indicating whether the format supports global tone mapping.
globalToneMappingSupported is a BOOL indicating whether the format supports global tone mapping. See AVCaptureDevice’s globalToneMappingEnabled property.
pub unsafe fn isVideoHDRSupported(&self) -> bool
A property indicating whether the format supports high dynamic range streaming.
videoHDRSupported is a BOOL indicating whether the format supports high dynamic range streaming, also known as Extended Dynamic Range (EDR). When enabled, the device streams at twice the published frame rate, capturing an under-exposed frame and a correctly exposed frame for each frame time at the published rate. Portions of the under-exposed frame are combined with the correctly exposed frame to recover detail in darker areas of the scene.
EDR is a separate and distinct feature from 10-bit HDR video (first seen in 2020 iPhones). 10-bit formats with HLG BT2020 color space have greater dynamic range by virtue of their expanded bit depth and HLG transfer function, and when captured in movies, contain Dolby Vision metadata. They are, in effect, “always on” HDR. Thus the videoHDRSupported property is always NO for 10-bit formats only supporting HLG BT2020 color space, since HDR cannot be enabled or disabled. To enable videoHDR (EDR), set the AVCaptureDevice.videoHDREnabled property.
pub unsafe fn highResolutionStillImageDimensions(&self) -> CMVideoDimensions
👎 Deprecated: Use supportedMaxPhotoDimensions instead.
Available on crate feature objc2-core-media only.
CMVideoDimensions indicating the highest resolution still image that can be produced by this format.
By default, AVCapturePhotoOutput and AVCaptureStillImageOutput emit images with the same dimensions as their source AVCaptureDevice’s activeFormat.formatDescription property. Some device formats support high resolution photo output. That is, they can stream video to an AVCaptureVideoDataOutput or AVCaptureMovieFileOutput at one resolution while outputting photos to AVCapturePhotoOutput at a higher resolution. You may query this property to discover a video format’s supported high resolution still image dimensions. See -[AVCapturePhotoOutput highResolutionPhotoEnabled], -[AVCapturePhotoSettings highResolutionPhotoEnabled], and -[AVCaptureStillImageOutput highResolutionStillImageOutputEnabled].
AVCaptureDeviceFormats of type AVMediaTypeDepthData may also support the delivery of a higher resolution depth data map to an AVCapturePhotoOutput. Chief differences are:
- Depth data accompanying still images is not supported by AVCaptureStillImageOutput. You must use AVCapturePhotoOutput.
- By opting in for depth data ( -[AVCapturePhotoSettings setDepthDataDeliveryEnabled:YES] ), you implicitly opt in for high resolution depth data if it’s available. You may query the -[AVCaptureDevice activeDepthDataFormat]’s highResolutionStillImageDimensions to discover the depth data resolution that will be delivered with captured photos.
pub unsafe fn isHighPhotoQualitySupported(&self) -> bool
A boolean value specifying whether this format supports high photo quality when selecting an AVCapturePhotoQualityPrioritization of .balanced or .quality.
If an AVCaptureDeviceFormat’s highPhotoQualitySupported property is YES, the format produces higher image quality when selecting .balanced or .quality AVCapturePhotoQualityPrioritization compared to .speed. Such formats adhere to the following rules:
- Photo requests with a prioritization of .speed produce the fastest image result (suitable for burst captures).
- Photo requests with a prioritization of .balanced produce higher image quality without dropping frames if a video recording is underway.
- Photo requests with a prioritization of .quality produce high image quality and may cause frame drops if a video recording is underway. For maximum backward compatibility, photo requests on high photo quality formats set to .quality only cause video frame drops if your app is linked on or after iOS 15.
Formats that don’t support high photo quality produce the same image quality whether you select .speed, .balanced, or .quality. Note that high photo quality is only attainable when using the AVCapturePhotoOutput with these supported formats.
pub unsafe fn isHighestPhotoQualitySupported(&self) -> bool
A boolean value specifying whether this format supports the highest possible photo quality that can be delivered on the current platform.
Of the many formats supported by an AVCaptureDevice, only a few of them are designated as “photo” formats which can produce the highest possible quality, such as still image stabilization and Live Photos. If you intend to connect an AVCaptureDeviceInput to an AVCapturePhotoOutput and receive the best possible images, you should ensure that you are either using the AVCaptureSessionPresetPhoto as your preset, or if using the parallel AVCaptureDevice activeFormat API, select as your activeFormat one for which this property is YES.
pub unsafe fn autoFocusSystem(&self) -> AVCaptureAutoFocusSystem
A property indicating the autofocus system.
This read-only property indicates the autofocus system.
pub unsafe fn supportedColorSpaces(&self) -> Retained<NSArray<NSNumber>>
A property indicating the receiver’s supported color spaces.
This read-only property indicates the receiver’s supported color spaces as an array of AVCaptureColorSpace constants sorted from narrow to wide color.
pub unsafe fn videoMinZoomFactorForDepthDataDelivery(&self) -> CGFloat
👎 Deprecated
Available on crate feature objc2-core-foundation only.
A deprecated property. Please use supportedVideoZoomFactorsForDepthDataDelivery instead.
pub unsafe fn videoMaxZoomFactorForDepthDataDelivery(&self) -> CGFloat
👎 Deprecated
Available on crate feature objc2-core-foundation only.
A deprecated property. Please use supportedVideoZoomFactorsForDepthDataDelivery instead.
pub unsafe fn supportedVideoZoomFactorsForDepthDataDelivery(
    &self,
) -> Retained<NSArray<NSNumber>>
👎 Deprecated
A deprecated property. Please use supportedVideoZoomRangesForDepthDataDelivery instead.
pub unsafe fn supportedVideoZoomRangesForDepthDataDelivery(
    &self,
) -> Retained<NSArray<AVZoomRange>>
This property returns the zoom ranges within which depth data can be delivered.
Virtual devices support limited zoom ranges when delivering depth data to any output. If this device format has no -supportedDepthDataFormats, this property returns an empty array. The presence of one or more ranges where the min and max zoom factors are not equal means that “continuous zoom” with depth is supported. For example:
- ranges: @[ [2..2], [4..4] ]: only zoom factors 2 and 4 are allowed to be set when depthDataDelivery is enabled. Any other zoom factor results in an exception.
- ranges: @[ [2..5] ]: depthDataDelivery is supported with zoom factors [2..5]. Zoom factors outside of this range may be set, but will result in loss of depthDataDelivery. Whenever zoom is set back to a value within the range of [2..5], depthDataDelivery will resume.
When depth data delivery is enabled, the effective videoZoomFactorUpscaleThreshold will be 1.0, meaning that all zoom factors that are not native zoom factors (see AVCaptureDevice.virtualDeviceSwitchOverVideoZoomFactors and AVCaptureDevice.secondaryNativeResolutionZoomFactors) result in digital upscaling.
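The membership test implied by those examples can be sketched in plain Rust, with hypothetical `(min, max)` tuples standing in for the AVZoomRange objects this property returns:

```rust
// Depth data delivery stays active only while the current zoom factor falls
// inside one of the supported ranges; outside them it is suspended until the
// zoom returns to a supported range.
fn depth_delivery_active(zoom: f64, ranges: &[(f64, f64)]) -> bool {
    ranges.iter().any(|&(min, max)| zoom >= min && zoom <= max)
}

fn main() {
    // Discrete case: ranges [2..2] and [4..4]; only factors 2 and 4 qualify.
    let discrete = [(2.0, 2.0), (4.0, 4.0)];
    assert!(depth_delivery_active(2.0, &discrete));
    assert!(!depth_delivery_active(3.0, &discrete));
    // Continuous case: range [2..5] supports continuous zoom with depth.
    let continuous = [(2.0, 5.0)];
    assert!(depth_delivery_active(3.5, &continuous));
    assert!(!depth_delivery_active(6.0, &continuous));
}
```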
pub unsafe fn zoomFactorsOutsideOfVideoZoomRangesForDepthDeliverySupported(
    &self,
) -> bool
This property returns whether the format supports zoom factors outside of the supportedVideoZoomFactorRangesForDepthDataDelivery.
When a zoom factor outside of the supportedVideoZoomFactorRangesForDepthDataDelivery is set, depth data delivery will be suspended until a zoom factor within the supportedVideoZoomFactorRangesForDepthDataDelivery is set.
pub unsafe fn supportedDepthDataFormats(
    &self,
) -> Retained<NSArray<AVCaptureDeviceFormat>>
Indicates this format’s companion depth data formats.
If no depth data formats are supported by the receiver, an empty array is returned. On virtual devices, the supportedDepthDataFormats list items always match the aspect ratio of their paired video format. When the receiver is set as the device’s activeFormat, you may set the device’s activeDepthDataFormat to one of these supported depth data formats.
pub unsafe fn unsupportedCaptureOutputClasses(
    &self,
) -> Retained<NSArray<AnyClass>>
A property indicating AVCaptureOutput subclasses the receiver does not support.
As a rule, AVCaptureDeviceFormats of a given mediaType are available for use with all AVCaptureOutputs that accept that media type, but there are exceptions. For instance, on apps linked against iOS versions earlier than 12.0, the photo resolution video formats may not be used as sources for AVCaptureMovieFileOutput due to bandwidth limitations. On DualCamera devices, AVCaptureDepthDataOutput is not supported when outputting full resolution (i.e. 12 MP) video due to bandwidth limitations. In order to stream depth data plus video data from a photo format, ensure that your AVCaptureVideoDataOutput’s deliversPreviewSizedOutputBuffers property is set to YES. Likewise, to stream depth data while capturing video to a movie file using AVCaptureMovieFileOutput, call -[AVCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto]. When using the photo preset, video is captured at preview resolution rather than the full sensor resolution.
pub unsafe fn supportedMaxPhotoDimensions(&self) -> Retained<NSArray<NSValue>>
This property lists all of the supported maximum photo dimensions for this format. The array contains CMVideoDimensions structs encoded as NSValues.
Enumerate all supported resolution settings for which this format may be configured to capture photos. Use these values to set AVCapturePhotoOutput.maxPhotoDimensions and AVCapturePhotoSettings.maxPhotoDimensions.
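A typical use is selecting the largest supported size before configuring maxPhotoDimensions. The selection can be sketched in plain Rust, with `(width, height)` tuples standing in for the CMVideoDimensions structs unwrapped from the NSValues:

```rust
// Pick the supported photo size with the greatest pixel count, as one might
// before setting AVCapturePhotoOutput.maxPhotoDimensions. The (w, h) tuples
// stand in for CMVideoDimensions values decoded from the returned NSArray.
fn largest_dimensions(supported: &[(i32, i32)]) -> Option<(i32, i32)> {
    supported
        .iter()
        .copied()
        .max_by_key(|&(w, h)| (w as i64) * (h as i64))
}

fn main() {
    let dims = [(1920, 1080), (4032, 3024), (3840, 2160)];
    assert_eq!(largest_dimensions(&dims), Some((4032, 3024)));
    assert_eq!(largest_dimensions(&[]), None);
}
```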
pub unsafe fn secondaryNativeResolutionZoomFactors(
    &self,
) -> Retained<NSArray<NSNumber>>
Indicates zoom factors at which this device transitions to secondary native resolution modes.
Devices with this property have the means to switch their pixel sampling mode on the fly to produce high-fidelity, non-upsampled images at a fixed zoom factor beyond 1.0x.
pub unsafe fn isAutoVideoFrameRateSupported(&self) -> bool
Indicates whether the device format supports auto video frame rate.
See -[AVCaptureDevice autoVideoFrameRateEnabled] (above) for a detailed description of the feature.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatDepthDataAdditions.
pub unsafe fn isPortraitEffectsMatteStillImageDeliverySupported(&self) -> bool
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatMultiCamAdditions.
pub unsafe fn isMultiCamSupported(&self) -> bool
A property indicating whether this format is supported in an AVCaptureMultiCamSession.
When using an AVCaptureSession (single camera capture), any of the formats in the device’s -formats array may be set as the -activeFormat. However, when used with an AVCaptureMultiCamSession, the device’s -activeFormat may only be set to one of the formats for which multiCamSupported answers YES. This limited subset of capture formats is known to run sustainably in a multi camera capture scenario.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatSpatialVideoCapture.
pub unsafe fn isSpatialVideoCaptureSupported(&self) -> bool
Returns whether or not the format supports capturing spatial video to a file.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatGeometricDistortionCorrection.
pub unsafe fn geometricDistortionCorrectedVideoFieldOfView(&self) -> c_float
A property indicating the format’s horizontal field of view post geometric distortion correction.
If the receiver’s AVCaptureDevice does not support GDC, geometricDistortionCorrectedVideoFieldOfView matches the videoFieldOfView property.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatCenterStage.
pub unsafe fn isCenterStageSupported(&self) -> bool
Indicates whether the format supports the Center Stage feature.
This property returns YES if the format supports “Center Stage”, which automatically adjusts the camera to keep people optimally framed within the field of view. See +AVCaptureDevice.centerStageEnabled for a detailed discussion.
pub unsafe fn videoMinZoomFactorForCenterStage(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the minimum zoom factor available for the AVCaptureDevice’s videoZoomFactor property when centerStageActive is YES.
pub unsafe fn videoMaxZoomFactorForCenterStage(&self) -> CGFloat
Available on crate feature objc2-core-foundation only.
Indicates the maximum zoom factor available for the AVCaptureDevice’s videoZoomFactor property when centerStageActive is YES.
Devices support a limited zoom range when Center Stage is active. If this device format does not support Center Stage, this property returns videoMaxZoomFactor.
pub unsafe fn videoFrameRateRangeForCenterStage(
    &self,
) -> Option<Retained<AVFrameRateRange>>
Indicates the minimum / maximum frame rates available when centerStageActive is YES.
Devices may support a limited frame rate range when Center Stage is active. If this device format does not support Center Stage, this property returns nil.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatPortraitEffect.
pub unsafe fn isPortraitEffectSupported(&self) -> bool
Indicates whether the format supports the Portrait Effect feature.
This property returns YES if the format supports Portrait Effect, the application of a shallow depth of field effect to objects in the background. See +AVCaptureDevice.portraitEffectEnabled for a detailed discussion.
pub unsafe fn videoFrameRateRangeForPortraitEffect(
    &self,
) -> Option<Retained<AVFrameRateRange>>
Indicates the minimum / maximum frame rates available when portraitEffectActive is YES.
Devices may support a limited frame rate range when Portrait Effect is active. If this device format does not support Portrait Effect, this property returns nil.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatStudioLight.
pub unsafe fn isStudioLightSupported(&self) -> bool
Indicates whether the format supports the Studio Light feature.
This property returns YES if the format supports Studio Light (artificial re-lighting of the subject’s face). See +AVCaptureDevice.studioLightEnabled.
pub unsafe fn videoFrameRateRangeForStudioLight(
    &self,
) -> Option<Retained<AVFrameRateRange>>
Indicates the minimum / maximum frame rates available when studioLight is YES.
Devices may support a limited frame rate range when Studio Light is active. If this device format does not support Studio Light, this property returns nil.
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatReactionEffects.
pub unsafe fn reactionEffectsSupported(&self) -> bool
Indicates whether the format supports the Reaction Effects feature.
This property returns YES if the format supports Reaction Effects. See +AVCaptureDevice.reactionEffectsEnabled.
pub unsafe fn videoFrameRateRangeForReactionEffectsInProgress(
    &self,
) -> Option<Retained<AVFrameRateRange>>
Indicates the minimum / maximum frame rates available when a reaction effect is running.
Unlike the other video effects, enabling reaction effects does not limit the stream’s frame rate because most of the time no rendering is being performed. The frame rate will only ramp down when a reaction is actually being rendered on the stream (see AVCaptureDevice.reactionEffectsInProgress).
impl AVCaptureDeviceFormat
AVCaptureDeviceFormatBackgroundReplacement.
pub unsafe fn isBackgroundReplacementSupported(&self) -> bool
Indicates whether the format supports the Background Replacement feature.
This property returns YES if the format supports Background Replacement. See +AVCaptureDevice.backgroundReplacementEnabled.
pub unsafe fn videoFrameRateRangeForBackgroundReplacement(
    &self,
) -> Option<Retained<AVFrameRateRange>>
Indicates the minimum / maximum frame rates available when background replacement is active.
Devices may support a limited frame rate range when Background Replacement is active. If this device format does not support Background Replacement, this property returns nil.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎 Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.
See Apple’s documentation on mutability and on isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}
Trait Implementations
impl AsRef<AnyObject> for AVCaptureDeviceFormat
impl AsRef<NSObject> for AVCaptureDeviceFormat
impl Borrow<AnyObject> for AVCaptureDeviceFormat
impl Borrow<NSObject> for AVCaptureDeviceFormat
impl ClassType for AVCaptureDeviceFormat
const NAME: &'static str = "AVCaptureDeviceFormat"
type ThreadKind = <<AVCaptureDeviceFormat as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureDeviceFormat
impl Deref for AVCaptureDeviceFormat
impl Hash for AVCaptureDeviceFormat
impl Message for AVCaptureDeviceFormat
impl NSObjectProtocol for AVCaptureDeviceFormat
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎 Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.