Struct AVCaptureDeviceFormat

Source
#[repr(C)]
pub struct AVCaptureDeviceFormat { /* private fields */ }
Available on crate feature AVCaptureDevice only.

An AVCaptureDeviceFormat wraps a CMFormatDescription together with other format-related information, such as minimum and maximum frame rate.

An AVCaptureDevice exposes an array of formats, and its current activeFormat may be queried. The payload for the formats property is an array of AVCaptureDeviceFormat objects and the activeFormat property payload is an AVCaptureDeviceFormat. AVCaptureDeviceFormat is a thin wrapper around a CMFormatDescription, and can carry associated device format information that doesn’t go in a CMFormatDescription, such as min and max frame rate. An AVCaptureDeviceFormat object is immutable. Its values do not change for the life of the object.

See also Apple’s documentation
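
As an illustration, a minimal sketch that walks a device’s formats and reads a few properties. This assumes the AVCaptureDevice bindings from this crate (defaultDeviceWithMediaType, formats, and activeFormat) and the AVMediaTypeVideo static; exact signatures may vary between crate versions, and all of these bindings are unsafe.

use objc2_av_foundation::{AVCaptureDevice, AVMediaTypeVideo};

fn list_formats() {
    // SAFETY: read-only property access on a validly obtained device.
    unsafe {
        if let Some(device) = AVCaptureDevice::defaultDeviceWithMediaType(AVMediaTypeVideo) {
            for format in device.formats().iter() {
                // Each element is an immutable AVCaptureDeviceFormat.
                println!("field of view: {}", format.videoFieldOfView());
            }
            println!("active media type: {:?}", device.activeFormat().mediaType());
        }
    }
}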

Implementations§

Source§

impl AVCaptureDeviceFormat

Source

pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>

Source

pub unsafe fn new() -> Retained<Self>

Source

pub unsafe fn mediaType(&self) -> Retained<AVMediaType>

Available on crate feature AVMediaFormat only.

An NSString describing the media type of an AVCaptureDevice active or supported format.

Supported mediaTypes are listed in AVMediaFormat.h. This is a read-only property. The caller assumes no ownership of the returned value and should not CFRelease it.

Source

pub unsafe fn formatDescription(&self) -> Retained<CMFormatDescription>

Available on crate feature objc2-core-media only.

A CMFormatDescription describing an AVCaptureDevice active or supported format.

This is a read-only property. The caller assumes no ownership of the returned value and should not CFRelease it.

Source

pub unsafe fn videoSupportedFrameRateRanges( &self, ) -> Retained<NSArray<AVFrameRateRange>>

A property indicating the format’s supported frame rate ranges.

videoSupportedFrameRateRanges is an array of AVFrameRateRange objects, one for each of the format’s supported video frame rate ranges.
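
For example, to find the fastest rate a format can stream at, scan its ranges. A sketch; the maxFrameRate getter on AVFrameRateRange is assumed.

use objc2_av_foundation::AVCaptureDeviceFormat;

fn max_frame_rate(format: &AVCaptureDeviceFormat) -> f64 {
    // SAFETY: read-only property access.
    unsafe {
        format
            .videoSupportedFrameRateRanges()
            .iter()
            .map(|range| range.maxFrameRate())
            .fold(0.0, f64::max)
    }
}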

Source

pub unsafe fn videoFieldOfView(&self) -> c_float

A property indicating the format’s horizontal field of view.

videoFieldOfView is a float value indicating the receiver’s field of view in degrees. If field of view is unknown, a value of 0 is returned.

Source

pub unsafe fn isVideoBinned(&self) -> bool

A property indicating whether the format is binned.

videoBinned is a BOOL indicating whether the format is a binned format. Binning is a pixel-combining process which can result in greater low light sensitivity at the cost of reduced resolution.

Source

pub unsafe fn isVideoStabilizationModeSupported( &self, video_stabilization_mode: AVCaptureVideoStabilizationMode, ) -> bool

Returns whether the format supports the given video stabilization mode.

Parameter videoStabilizationMode: An AVCaptureVideoStabilizationMode to be checked.

isVideoStabilizationModeSupported: returns a boolean value indicating whether the format can be stabilized using the given mode with -[AVCaptureConnection setPreferredVideoStabilizationMode:].
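
A sketch of probing a mode before preferring it on a connection; the Cinematic variant name is an assumption about this crate’s AVCaptureVideoStabilizationMode bindings.

use objc2_av_foundation::{AVCaptureDeviceFormat, AVCaptureVideoStabilizationMode};

fn supports_cinematic(format: &AVCaptureDeviceFormat) -> bool {
    // SAFETY: read-only capability query; the variant name is assumed.
    unsafe { format.isVideoStabilizationModeSupported(AVCaptureVideoStabilizationMode::Cinematic) }
}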

Source

pub unsafe fn isVideoStabilizationSupported(&self) -> bool

👎Deprecated: Use isVideoStabilizationModeSupported: instead.

A property indicating whether the format supports video stabilization.

videoStabilizationSupported is a BOOL indicating whether the format can be stabilized using AVCaptureConnection -setEnablesVideoStabilizationWhenAvailable. This property is deprecated. Use isVideoStabilizationModeSupported: instead.

Source

pub unsafe fn videoMaxZoomFactor(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates the maximum zoom factor available for the AVCaptureDevice’s videoZoomFactor property.

If the device’s videoZoomFactor property is assigned a larger value, an NSRangeException will be thrown. A maximum zoom factor of 1 indicates no zoom is available.
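
A sketch of clamping a requested zoom factor so that assigning it can never raise NSRangeException; this uses only the getter above plus the standard library.

use objc2_av_foundation::AVCaptureDeviceFormat;
use objc2_core_foundation::CGFloat;

fn clamp_zoom(format: &AVCaptureDeviceFormat, requested: CGFloat) -> CGFloat {
    // SAFETY: read-only property access. Valid zoom factors start at 1.0.
    let max = unsafe { format.videoMaxZoomFactor() };
    requested.clamp(1.0, max)
}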

Source

pub unsafe fn videoZoomFactorUpscaleThreshold(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates the value of AVCaptureDevice’s videoZoomFactor property at which the image output begins to require upscaling.

In some cases the image sensor’s dimensions are larger than the dimensions reported by the video AVCaptureDeviceFormat. As long as the sensor crop is larger than the reported dimensions of the AVCaptureDeviceFormat, the image will be downscaled. Setting videoZoomFactor to the value of videoZoomFactorUpscaleThreshold will provide a center crop of the sensor image data without any scaling. If a greater zoom factor is used, the sensor data will be upscaled to the device format’s dimensions.

Source

pub unsafe fn systemRecommendedVideoZoomRange( &self, ) -> Option<Retained<AVZoomRange>>

Indicates the system’s recommended zoom range for this device format.

This property can be used to create a slider in your app’s user interface to control the device’s zoom with a system-recommended video zoom range. When a recommendation is not available, this property returns nil. Clients can key-value observe AVCaptureDevice’s minAvailableVideoZoomFactor and maxAvailableVideoZoomFactor properties to know when a device’s supported zoom is restricted within the recommended zoom range.

The value of this property is also used for the AVCaptureSystemZoomSlider’s range.
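
A sketch of deriving slider bounds, falling back to the format’s full zoom range when no recommendation exists; the minZoomFactor and maxZoomFactor getters on AVZoomRange are assumed.

use objc2_av_foundation::AVCaptureDeviceFormat;
use objc2_core_foundation::CGFloat;

fn slider_bounds(format: &AVCaptureDeviceFormat) -> (CGFloat, CGFloat) {
    // SAFETY: read-only property access.
    unsafe {
        match format.systemRecommendedVideoZoomRange() {
            Some(range) => (range.minZoomFactor(), range.maxZoomFactor()),
            None => (1.0, format.videoMaxZoomFactor()),
        }
    }
}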

Source

pub unsafe fn minExposureDuration(&self) -> CMTime

Available on crate feature objc2-core-media only.

A CMTime indicating the minimum supported exposure duration.

This read-only property indicates the minimum supported exposure duration.

Source

pub unsafe fn maxExposureDuration(&self) -> CMTime

Available on crate feature objc2-core-media only.

A CMTime indicating the maximum supported exposure duration.

This read-only property indicates the maximum supported exposure duration.
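
Together with minExposureDuration, this bounds any custom exposure duration you may request. A sketch converting both limits to seconds; CMTime is a plain C struct in objc2-core-media, so its public value and timescale fields can be read directly (a nonzero timescale is assumed).

use objc2_av_foundation::AVCaptureDeviceFormat;
use objc2_core_media::CMTime;

fn exposure_bounds_secs(format: &AVCaptureDeviceFormat) -> (f64, f64) {
    // seconds = value / timescale for a valid, numeric CMTime.
    let secs = |t: CMTime| t.value as f64 / t.timescale as f64;
    // SAFETY: read-only property access.
    unsafe {
        (
            secs(format.minExposureDuration()),
            secs(format.maxExposureDuration()),
        )
    }
}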

Source

pub unsafe fn systemRecommendedExposureBiasRange( &self, ) -> Option<Retained<AVExposureBiasRange>>

Indicates the system’s recommended exposure bias range for this device format.

This property can be used to create a slider in your app’s user interface to control the device’s exposure bias with a system-recommended exposure bias range. When a recommendation is not available, this property returns nil.

The value of this property is also used for the AVCaptureSystemExposureBiasSlider’s range.

Source

pub unsafe fn minISO(&self) -> c_float

A float indicating the minimum supported exposure ISO value.

This read-only property indicates the minimum supported exposure ISO value.

Source

pub unsafe fn maxISO(&self) -> c_float

A float indicating the maximum supported exposure ISO value.

This read-only property indicates the maximum supported exposure ISO value.

Source

pub unsafe fn isGlobalToneMappingSupported(&self) -> bool

A property indicating whether the format supports global tone mapping.

globalToneMappingSupported is a BOOL indicating whether the format supports global tone mapping. See AVCaptureDevice’s globalToneMappingEnabled property.

Source

pub unsafe fn isVideoHDRSupported(&self) -> bool

A property indicating whether the format supports high dynamic range streaming.

videoHDRSupported is a BOOL indicating whether the format supports high dynamic range streaming, also known as Extended Dynamic Range (EDR). When enabled, the device streams at twice the published frame rate, capturing an under-exposed frame and a correctly exposed frame for each frame time at the published rate. Portions of the under-exposed frame are combined with the correctly exposed frame to recover detail in darker areas of the scene.

EDR is a separate and distinct feature from 10-bit HDR video (first seen in 2020 iPhones). 10-bit formats with the HLG BT2020 color space have greater dynamic range by virtue of their expanded bit depth and HLG transfer function, and, when captured in movies, contain Dolby Vision metadata. They are, in effect, “always on” HDR. The videoHDRSupported property is therefore always NO for 10-bit formats that only support the HLG BT2020 color space, since HDR cannot be enabled or disabled for them. To enable videoHDR (EDR), set the AVCaptureDevice.videoHDREnabled property.

Source

pub unsafe fn highResolutionStillImageDimensions(&self) -> CMVideoDimensions

👎Deprecated: Use supportedMaxPhotoDimensions instead.
Available on crate feature objc2-core-media only.

CMVideoDimensions indicating the highest resolution still image that can be produced by this format.

By default, AVCapturePhotoOutput and AVCaptureStillImageOutput emit images with the same dimensions as their source AVCaptureDevice’s activeFormat.formatDescription property. Some device formats support high resolution photo output. That is, they can stream video to an AVCaptureVideoDataOutput or AVCaptureMovieFileOutput at one resolution while outputting photos to AVCapturePhotoOutput at a higher resolution. You may query this property to discover a video format’s supported high resolution still image dimensions. See -[AVCapturePhotoOutput highResolutionPhotoEnabled], -[AVCapturePhotoSettings highResolutionPhotoEnabled], and -[AVCaptureStillImageOutput highResolutionStillImageOutputEnabled].

AVCaptureDeviceFormats of type AVMediaTypeDepthData may also support the delivery of a higher resolution depth data map to an AVCapturePhotoOutput. Chief differences are:

  • Depth data accompanying still images is not supported by AVCaptureStillImageOutput. You must use AVCapturePhotoOutput.
  • By opting in for depth data ( -[AVCapturePhotoSettings setDepthDataDeliveryEnabled:YES] ), you implicitly opt in for high resolution depth data if it’s available. You may query the -[AVCaptureDevice activeDepthDataFormat]’s highResolutionStillImageDimensions to discover the depth data resolution that will be delivered with captured photos.
Source

pub unsafe fn isHighPhotoQualitySupported(&self) -> bool

A boolean value specifying whether this format supports high photo quality when selecting an AVCapturePhotoQualityPrioritization of .balanced or .quality.

If an AVCaptureDeviceFormat’s highPhotoQualitySupported property is YES, the format produces higher image quality when selecting .balanced or .quality AVCapturePhotoQualityPrioritization compared to .speed. Such formats adhere to the following rules:

  • Photo requests with a prioritization of .speed produce the fastest image result (suitable for burst captures).
  • Photo requests with a prioritization of .balanced produce higher image quality without dropping frames if a video recording is underway.
  • Photo requests with a prioritization of .quality produce high image quality and may cause frame drops if a video recording is underway. For maximum backward compatibility, photo requests on high photo quality formats set to .quality only cause video frame drops if your app is linked on or after iOS 15.

Formats that don’t support high photo quality produce the same image quality whether you select .speed, .balanced, or .quality. Note that high photo quality is only attainable when using the AVCapturePhotoOutput with these supported formats.
Source

pub unsafe fn isHighestPhotoQualitySupported(&self) -> bool

A boolean value specifying whether this format supports the highest possible photo quality that can be delivered on the current platform.

Of the many formats supported by an AVCaptureDevice, only a few of them are designated as “photo” formats which can produce the highest possible quality, such as still image stabilization and Live Photos. If you intend to connect an AVCaptureDeviceInput to an AVCapturePhotoOutput and receive the best possible images, you should ensure that you are either using the AVCaptureSessionPresetPhoto as your preset, or if using the parallel AVCaptureDevice activeFormat API, select as your activeFormat one for which this property is YES.

Source

pub unsafe fn autoFocusSystem(&self) -> AVCaptureAutoFocusSystem

A property indicating the autofocus system.

This read-only property indicates the autofocus system.

Source

pub unsafe fn supportedColorSpaces(&self) -> Retained<NSArray<NSNumber>>

A property indicating the receiver’s supported color spaces.

This read-only property indicates the receiver’s supported color spaces as an array of AVCaptureColorSpace constants sorted from narrow to wide color.
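
A sketch of testing for a wide-gamut option; the P3_D65 constant name is an assumption about this crate’s AVCaptureColorSpace bindings.

use objc2_av_foundation::{AVCaptureColorSpace, AVCaptureDeviceFormat};

fn supports_p3(format: &AVCaptureDeviceFormat) -> bool {
    // SAFETY: read-only property access; each NSNumber wraps an
    // AVCaptureColorSpace raw value.
    unsafe {
        format
            .supportedColorSpaces()
            .iter()
            .any(|n| n.integerValue() == AVCaptureColorSpace::P3_D65.0)
    }
}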

Source

pub unsafe fn videoMinZoomFactorForDepthDataDelivery(&self) -> CGFloat

👎Deprecated
Available on crate feature objc2-core-foundation only.

A deprecated property. Please use supportedVideoZoomFactorsForDepthDataDelivery instead.

Source

pub unsafe fn videoMaxZoomFactorForDepthDataDelivery(&self) -> CGFloat

👎Deprecated
Available on crate feature objc2-core-foundation only.

A deprecated property. Please use supportedVideoZoomFactorsForDepthDataDelivery instead.

Source

pub unsafe fn supportedVideoZoomFactorsForDepthDataDelivery( &self, ) -> Retained<NSArray<NSNumber>>

👎Deprecated

A deprecated property. Please use supportedVideoZoomRangesForDepthDataDelivery instead.

Source

pub unsafe fn supportedVideoZoomRangesForDepthDataDelivery( &self, ) -> Retained<NSArray<AVZoomRange>>

This property returns the zoom ranges within which depth data can be delivered.

Virtual devices support limited zoom ranges when delivering depth data to any output. If this device format has no -supportedDepthDataFormats, this property returns an empty array. The presence of one or more ranges where the min and max zoom factors are not equal means that “continuous zoom” with depth is supported. For example:

  • ranges [[2..2], [4..4]]: only zoom factors 2 and 4 may be set while depthDataDelivery is enabled; any other zoom factor results in an exception.
  • ranges [[2..5]]: depthDataDelivery is supported with zoom factors in [2..5]. Zoom factors outside of this range may be set, but will result in loss of depth data delivery. Whenever zoom is set back to a value within [2..5], depthDataDelivery will resume.

When depth data delivery is enabled, the effective videoZoomFactorUpscaleThreshold will be 1.0, meaning that all zoom factors that are not native zoom factors (see AVCaptureDevice.virtualDeviceSwitchOverVideoZoomFactors and AVCaptureDevice.secondaryNativeResolutionZoomFactors) result in digital upscaling.
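
A sketch of checking whether a given zoom factor keeps depth delivery alive; the containsZoomFactor method on AVZoomRange is assumed.

use objc2_av_foundation::AVCaptureDeviceFormat;
use objc2_core_foundation::CGFloat;

fn zoom_keeps_depth(format: &AVCaptureDeviceFormat, zoom: CGFloat) -> bool {
    // SAFETY: read-only capability query.
    unsafe {
        format
            .supportedVideoZoomRangesForDepthDataDelivery()
            .iter()
            .any(|range| range.containsZoomFactor(zoom))
    }
}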

Source

pub unsafe fn zoomFactorsOutsideOfVideoZoomRangesForDepthDeliverySupported( &self, ) -> bool

This property returns whether the format supports zoom factors outside of the supportedVideoZoomRangesForDepthDataDelivery.

When a zoom factor outside of the supportedVideoZoomRangesForDepthDataDelivery is set, depth data delivery will be suspended until a zoom factor within the supportedVideoZoomRangesForDepthDataDelivery is set.

Source

pub unsafe fn supportedDepthDataFormats( &self, ) -> Retained<NSArray<AVCaptureDeviceFormat>>

Indicates this format’s companion depth data formats.

If no depth data formats are supported by the receiver, an empty array is returned. On virtual devices, the supportedDepthDataFormats list items always match the aspect ratio of their paired video format. When the receiver is set as the device’s activeFormat, you may set the device’s activeDepthDataFormat to one of these supported depth data formats.
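
A sketch of adopting the first companion depth format; the setActiveDepthDataFormat setter on AVCaptureDevice is assumed, and the caller must already hold the device’s configuration lock.

use objc2_av_foundation::{AVCaptureDevice, AVCaptureDeviceFormat};

fn adopt_first_depth_format(device: &AVCaptureDevice, format: &AVCaptureDeviceFormat) {
    // SAFETY: `format` is the device's activeFormat and the configuration
    // lock is held by the caller.
    unsafe {
        if let Some(depth) = format.supportedDepthDataFormats().firstObject() {
            device.setActiveDepthDataFormat(Some(&depth));
        }
    }
}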

Source

pub unsafe fn unsupportedCaptureOutputClasses( &self, ) -> Retained<NSArray<AnyClass>>

A property indicating AVCaptureOutput subclasses the receiver does not support.

As a rule, AVCaptureDeviceFormats of a given mediaType are available for use with all AVCaptureOutputs that accept that media type, but there are exceptions. For instance, on apps linked against iOS versions earlier than 12.0, the photo resolution video formats may not be used as sources for AVCaptureMovieFileOutput due to bandwidth limitations. On DualCamera devices, AVCaptureDepthDataOutput is not supported when outputting full resolution (i.e. 12 MP) video due to bandwidth limitations. In order to stream depth data plus video data from a photo format, ensure that your AVCaptureVideoDataOutput’s deliversPreviewSizedOutputBuffers property is set to YES. Likewise, to stream depth data while capturing video to a movie file using AVCaptureMovieFileOutput, call -[AVCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto]. When using the photo preset, video is captured at preview resolution rather than the full sensor resolution.
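
A sketch of testing whether a particular output class is ruled out; class objects are unique, so identity comparison suffices (AVCaptureMovieFileOutput is assumed from this crate’s AVCaptureFileOutput feature).

use core::ptr;

use objc2::ClassType;
use objc2_av_foundation::{AVCaptureDeviceFormat, AVCaptureMovieFileOutput};

fn movie_output_unsupported(format: &AVCaptureDeviceFormat) -> bool {
    // SAFETY: read-only property access.
    unsafe {
        format
            .unsupportedCaptureOutputClasses()
            .iter()
            .any(|cls| ptr::eq(&*cls, AVCaptureMovieFileOutput::class()))
    }
}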

Source

pub unsafe fn supportedMaxPhotoDimensions(&self) -> Retained<NSArray<NSValue>>

This property lists all of the supported maximum photo dimensions for this format. The array contains CMVideoDimensions structs encoded as NSValues.

Enumerate all supported resolution settings for which this format may be configured to capture photos. Use these values to set AVCapturePhotoOutput.maxPhotoDimensions and AVCapturePhotoSettings.maxPhotoDimensions.
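
A sketch of decoding the array and picking the largest photo size; NSValue::get is unsafe because the caller must name the type the value actually encodes.

use objc2_av_foundation::AVCaptureDeviceFormat;
use objc2_core_media::CMVideoDimensions;

fn largest_photo_dimensions(format: &AVCaptureDeviceFormat) -> CMVideoDimensions {
    let mut best = CMVideoDimensions { width: 0, height: 0 };
    // SAFETY: the array elements are documented to encode CMVideoDimensions.
    unsafe {
        for value in format.supportedMaxPhotoDimensions().iter() {
            let dims = value.get::<CMVideoDimensions>();
            if i64::from(dims.width) * i64::from(dims.height)
                > i64::from(best.width) * i64::from(best.height)
            {
                best = dims;
            }
        }
    }
    best
}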

Source

pub unsafe fn secondaryNativeResolutionZoomFactors( &self, ) -> Retained<NSArray<NSNumber>>

Indicates zoom factors at which this device transitions to secondary native resolution modes.

Devices with this property have the means to switch their pixel sampling mode on the fly, producing high-fidelity, non-upsampled images at fixed zoom factors beyond 1.0x.

Source

pub unsafe fn isAutoVideoFrameRateSupported(&self) -> bool

Indicates whether the device format supports auto video frame rate.

See -[AVCaptureDevice autoVideoFrameRateEnabled] for a detailed description of the feature.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatDepthDataAdditions.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatMultiCamAdditions.

Source

pub unsafe fn isMultiCamSupported(&self) -> bool

A property indicating whether this format is supported in an AVCaptureMultiCamSession.

When using an AVCaptureSession (single camera capture), any of the formats in the device’s -formats array may be set as the -activeFormat. However, when used with an AVCaptureMultiCamSession, the device’s -activeFormat may only be set to one of the formats for which multiCamSupported answers YES. This limited subset of capture formats is known to run sustainably in a multi-camera capture scenario.
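
A sketch of narrowing a device’s formats to the multi-cam-safe subset; the formats getter on AVCaptureDevice is assumed.

use objc2::rc::Retained;
use objc2_av_foundation::{AVCaptureDevice, AVCaptureDeviceFormat};

fn multicam_formats(device: &AVCaptureDevice) -> Vec<Retained<AVCaptureDeviceFormat>> {
    // SAFETY: read-only property access.
    unsafe {
        device
            .formats()
            .iter()
            .filter(|f| f.isMultiCamSupported())
            .collect()
    }
}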

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatSpatialVideoCapture.

Source

pub unsafe fn isSpatialVideoCaptureSupported(&self) -> bool

Returns whether or not the format supports capturing spatial video to a file.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatGeometricDistortionCorrection.

Source

pub unsafe fn geometricDistortionCorrectedVideoFieldOfView(&self) -> c_float

A property indicating the format’s horizontal field of view post geometric distortion correction.

If the receiver’s AVCaptureDevice does not support GDC, geometricDistortionCorrectedVideoFieldOfView matches the videoFieldOfView property.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatCenterStage.

Source

pub unsafe fn isCenterStageSupported(&self) -> bool

Indicates whether the format supports the Center Stage feature.

This property returns YES if the format supports “Center Stage”, which automatically adjusts the camera to keep people optimally framed within the field of view. See +AVCaptureDevice.centerStageEnabled for a detailed discussion.

Source

pub unsafe fn videoMinZoomFactorForCenterStage(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates the minimum zoom factor available for the AVCaptureDevice’s videoZoomFactor property when centerStageActive is YES.
Source

pub unsafe fn videoMaxZoomFactorForCenterStage(&self) -> CGFloat

Available on crate feature objc2-core-foundation only.

Indicates the maximum zoom factor available for the AVCaptureDevice’s videoZoomFactor property when centerStageActive is YES.

Devices support a limited zoom range when Center Stage is active. If this device format does not support Center Stage, this property returns videoMaxZoomFactor.

Source

pub unsafe fn videoFrameRateRangeForCenterStage( &self, ) -> Option<Retained<AVFrameRateRange>>

Indicates the minimum / maximum frame rates available when centerStageActive is YES.

Devices may support a limited frame rate range when Center Stage is active. If this device format does not support Center Stage, this property returns nil.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatPortraitEffect.

Source

pub unsafe fn isPortraitEffectSupported(&self) -> bool

Indicates whether the format supports the Portrait Effect feature.

This property returns YES if the format supports Portrait Effect, the application of a shallow depth of field effect to objects in the background. See +AVCaptureDevice.portraitEffectEnabled for a detailed discussion.

Source

pub unsafe fn videoFrameRateRangeForPortraitEffect( &self, ) -> Option<Retained<AVFrameRateRange>>

Indicates the minimum / maximum frame rates available when portraitEffectActive is YES.

Devices may support a limited frame rate range when Portrait Effect is active. If this device format does not support Portrait Effect, this property returns nil.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatStudioLight.

Source

pub unsafe fn isStudioLightSupported(&self) -> bool

Indicates whether the format supports the Studio Light feature.

This property returns YES if the format supports Studio Light (artificial re-lighting of the subject’s face). See +AVCaptureDevice.studioLightEnabled.

Source

pub unsafe fn videoFrameRateRangeForStudioLight( &self, ) -> Option<Retained<AVFrameRateRange>>

Indicates the minimum / maximum frame rates available when studioLightActive is YES.

Devices may support a limited frame rate range when Studio Light is active. If this device format does not support Studio Light, this property returns nil.

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatReactionEffects.

Source

pub unsafe fn reactionEffectsSupported(&self) -> bool

Indicates whether the format supports the Reaction Effects feature.

This property returns YES if the format supports Reaction Effects. See +AVCaptureDevice.reactionEffectsEnabled.

Source

pub unsafe fn videoFrameRateRangeForReactionEffectsInProgress( &self, ) -> Option<Retained<AVFrameRateRange>>

Indicates the minimum / maximum frame rates available when a reaction effect is running.

Unlike the other video effects, enabling reaction effects does not limit the stream’s frame rate because most of the time no rendering is being performed. The frame rate will only ramp down when a reaction is actually being rendered on the stream (see AVCaptureDevice.reactionEffectsInProgress).

Source§

impl AVCaptureDeviceFormat

AVCaptureDeviceFormatBackgroundReplacement.

Source

pub unsafe fn isBackgroundReplacementSupported(&self) -> bool

Indicates whether the format supports the Background Replacement feature.

This property returns YES if the format supports Background Replacement. See +AVCaptureDevice.backgroundReplacementEnabled.

Source

pub unsafe fn videoFrameRateRangeForBackgroundReplacement( &self, ) -> Option<Retained<AVFrameRateRange>>

Indicates the minimum / maximum frame rates available when background replacement is active.

Devices may support a limited frame rate range when Background Replacement is active. If this device format does not support Background Replacement, this property returns nil.

Methods from Deref<Target = NSObject>§

Source

pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !

Handle messages the object doesn’t recognize.

See Apple’s documentation for details.

Methods from Deref<Target = AnyObject>§

Source

pub fn class(&self) -> &'static AnyClass

Dynamically find the class of this object.

§Panics

May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).

§Example

Check that an instance of NSObject has the precise class NSObject.

use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Source

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where T: Encode,

👎Deprecated: this is difficult to use correctly, use Ivar::load instead.

Use Ivar::load instead.

§Safety

The object must have an instance variable with the given name, and it must be of type T.

See Ivar::load_ptr for details surrounding this.

Source

pub fn downcast_ref<T>(&self) -> Option<&T>
where T: DowncastTarget,

Attempt to downcast the object to a class of type T.

This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.

§Mutable classes

Some classes have immutable and mutable variants, such as NSString and NSMutableString.

When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.

So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.

See Apple’s documentation on mutability and on isKindOfClass: for more details.

§Generic classes

Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.

You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.

§Panics

This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.

§Examples

Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.

Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);

for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§

Source§

impl AsRef<AVCaptureDeviceFormat> for AVCaptureDeviceFormat

Source§

fn as_ref(&self) -> &Self

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AnyObject> for AVCaptureDeviceFormat

Source§

fn as_ref(&self) -> &AnyObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<NSObject> for AVCaptureDeviceFormat

Source§

fn as_ref(&self) -> &NSObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl Borrow<AnyObject> for AVCaptureDeviceFormat

Source§

fn borrow(&self) -> &AnyObject

Immutably borrows from an owned value. Read more
Source§

impl Borrow<NSObject> for AVCaptureDeviceFormat

Source§

fn borrow(&self) -> &NSObject

Immutably borrows from an owned value. Read more
Source§

impl ClassType for AVCaptureDeviceFormat

Source§

const NAME: &'static str = "AVCaptureDeviceFormat"

The name of the Objective-C class that this type represents. Read more
Source§

type Super = NSObject

The superclass of this class. Read more
Source§

type ThreadKind = <<AVCaptureDeviceFormat as ClassType>::Super as ClassType>::ThreadKind

Whether the type can be used from any thread, or from only the main thread. Read more
Source§

fn class() -> &'static AnyClass

Get a reference to the Objective-C class that this type represents. Read more
Source§

fn as_super(&self) -> &Self::Super

Get an immutable reference to the superclass.
Source§

impl Debug for AVCaptureDeviceFormat

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Deref for AVCaptureDeviceFormat

Source§

type Target = NSObject

The resulting type after dereferencing.
Source§

fn deref(&self) -> &Self::Target

Dereferences the value.
Source§

impl Hash for AVCaptureDeviceFormat

Source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl Message for AVCaptureDeviceFormat

Source§

fn retain(&self) -> Retained<Self>
where Self: Sized,

Increment the reference count of the receiver. Read more
Source§

impl NSObjectProtocol for AVCaptureDeviceFormat

Source§

fn isEqual(&self, other: Option<&AnyObject>) -> bool
where Self: Sized + Message,

Check whether the object is equal to an arbitrary other object. Read more
Source§

fn hash(&self) -> usize
where Self: Sized + Message,

An integer that can be used as a table address in a hash table structure. Read more
Source§

fn isKindOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of the class, or one of its subclasses. Read more
Source§

fn is_kind_of<T>(&self) -> bool
where T: ClassType, Self: Sized + Message,

👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref
Check if the object is an instance of the class type, or one of its subclasses. Read more
Source§

fn isMemberOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of a specific class, without checking subclasses. Read more
Source§

fn respondsToSelector(&self, aSelector: Sel) -> bool
where Self: Sized + Message,

Check whether the object implements or inherits a method with the given selector. Read more
Source§

fn conformsToProtocol(&self, aProtocol: &AnyProtocol) -> bool
where Self: Sized + Message,

Check whether the object conforms to a given protocol. Read more
Source§

fn description(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object. Read more
Source§

fn debugDescription(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object to use when debugging. Read more
Source§

fn isProxy(&self) -> bool
where Self: Sized + Message,

Check whether the receiver is a subclass of the NSProxy root class instead of the usual NSObject. Read more
Source§

fn retainCount(&self) -> usize
where Self: Sized + Message,

The reference count of the object. Read more
Source§

impl PartialEq for AVCaptureDeviceFormat

Source§

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl RefEncode for AVCaptureDeviceFormat

Source§

const ENCODING_REF: Encoding = <NSObject as ::objc2::RefEncode>::ENCODING_REF

The Objective-C type-encoding for a reference of this type. Read more
Source§

impl DowncastTarget for AVCaptureDeviceFormat

Source§

impl Eq for AVCaptureDeviceFormat

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<'a, T> AnyThread for T
where T: ClassType<ThreadKind = dyn AnyThread + 'a> + ?Sized,

Source§

fn alloc() -> Allocated<Self>
where Self: Sized + ClassType,

Allocate a new instance of the class. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,

Source§

type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<T> AutoreleaseSafe for T
where T: ?Sized,