pub struct AVCapturePhoto { /* private fields */ }
Available on crate feature AVCapturePhotoOutput only.
An object representing a photo in memory, produced by the -captureOutput:didFinishProcessingPhoto:error: method of the AVCapturePhotoCaptureDelegate protocol.
Beginning in iOS 11, AVCapturePhotoOutput’s AVCapturePhotoCaptureDelegate supports a simplified callback for delivering image data, namely -captureOutput:didFinishProcessingPhoto:error:. This callback presents each image result for your capture request as an AVCapturePhoto object, an immutable wrapper from which various properties of the photo capture may be queried, such as the photo’s preview pixel buffer, metadata, depth data, camera calibration data, and image bracket specific properties. AVCapturePhoto can wrap file-containerized photo results, such as HEVC encoded image data containerized in the HEIC file format. CMSampleBufferRef, on the other hand, may only be used to express photo data that is not containerized in a file format. For this reason, the AVCapturePhotoCaptureDelegate protocol methods that return CMSampleBuffers have been deprecated in favor of -captureOutput:didFinishProcessingPhoto:error:. An AVCapturePhoto wraps a single image result. For instance, if you’ve requested a bracketed capture of 3 images, your callback is called 3 times, each time delivering an AVCapturePhoto.
See also Apple’s documentation
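As a rough sketch of how these accessors fit together, a delegate might inspect each delivered photo with a helper along these lines. This assumes the objc2-av-foundation bindings with the AVCapturePhotoOutput and objc2-core-video crate features enabled; the inspect_photo name is hypothetical, and the AVCapturePhotoCaptureDelegate wiring itself is not shown.

use objc2_av_foundation::AVCapturePhoto;

// Hypothetical helper, called once per photo delivered to your
// AVCapturePhotoCaptureDelegate implementation (delegate wiring not shown).
fn inspect_photo(photo: &AVCapturePhoto) {
    // The generated accessors are `unsafe` because they call into Objective-C.
    unsafe {
        println!("raw photo: {}", photo.isRawPhoto());
        println!("has pixel buffer: {}", photo.pixelBuffer().is_some());
        println!("has preview buffer: {}", photo.previewPixelBuffer().is_some());
        println!("metadata entries: {}", photo.metadata().len());
    }
}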
Implementations§
impl AVCapturePhoto
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn timestamp(&self) -> CMTime
Available on crate feature objc2-core-media only.
The time at which this image was captured, synchronized to the synchronizationClock of the AVCaptureSession.
The timestamp property indicates the time the image was captured, and is analogous to CMSampleBufferGetPresentationTimeStamp(). If an error was provided in the -captureOutput:didFinishProcessingPhoto:error: callback, timestamp returns kCMTimeInvalid.
pub unsafe fn isRawPhoto(&self) -> bool
This property returns YES if this photo is a RAW image.
Your AVCapturePhotoCaptureDelegate’s -captureOutput:didFinishProcessingPhoto:error: method may be called one or more times with image results, including RAW or non-RAW images. This property distinguishes RAW from non-RAW image results, for instance, if you’ve requested a RAW + JPEG capture.
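For a RAW + processed request, each result arrives as its own AVCapturePhoto, so a delegate can branch on this property to route the two kinds of result. A minimal sketch (the save_* helper names are hypothetical):

use objc2_av_foundation::AVCapturePhoto;

// Route RAW and processed results from a RAW + processed capture request.
fn route_photo(photo: &AVCapturePhoto) {
    if unsafe { photo.isRawPhoto() } {
        // DNG-containerized bytes, e.g. via fileDataRepresentation()
        save_raw(photo);
    } else {
        // Processed (HEIC/JPEG) result
        save_processed(photo);
    }
}

fn save_raw(_photo: &AVCapturePhoto) { /* ... */ }
fn save_processed(_photo: &AVCapturePhoto) { /* ... */ }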
pub unsafe fn pixelBuffer(&self) -> Option<Retained<CVPixelBuffer>>
Available on crate feature objc2-core-video only.
For uncompressed or RAW captures, this property offers access to the pixel data.
Uncompressed captures, such as ‘420f’ or ‘BGRA’, Bayer RAW captures, such as ‘bgg4’, or Apple ProRAW captures, such as ‘l64r’, present pixel data as a CVPixelBuffer. See AVCapturePhotoOutput’s -appleProRAWEnabled for a discussion on the differences between Bayer RAW and Apple ProRAW. This property is analogous to CMSampleBufferGetImageBuffer(). The pixel buffer contains only the minimal attachments required for correct display. Compressed captures, such as ‘jpeg’, return nil.
pub unsafe fn previewPixelBuffer(&self) -> Option<Retained<CVPixelBuffer>>
Available on crate feature objc2-core-video only.
This property offers access to the preview image pixel data if you’ve requested it.
If you requested a preview image by calling -[AVCapturePhotoSettings setPreviewPhotoFormat:] with a non-nil value, this property offers access to the resulting preview image pixel data, and is analogous to CMSampleBufferGetImageBuffer(). The pixel buffer contains only the minimal attachments required for correct display. Nil is returned if you did not request a preview image.
pub unsafe fn embeddedThumbnailPhotoFormat(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
The format of the embedded thumbnail contained in this AVCapturePhoto.
If you requested an embedded thumbnail image by calling -[AVCapturePhotoSettings setEmbeddedThumbnailPhotoFormat:] with a non-nil value, this property offers access to the resolved embedded thumbnail AVVideoSettings dictionary. Nil is returned if you did not request an embedded thumbnail image.
pub unsafe fn depthData(&self) -> Option<Retained<AVDepthData>>
Available on crate feature AVDepthData only.
An AVDepthData object wrapping a disparity/depth map associated with this photo.
If you requested depth data delivery by calling -[AVCapturePhotoSettings setDepthDataDeliveryEnabled:YES], this property offers access to the resulting AVDepthData object. Nil is returned if you did not request depth data delivery. Note that the depth data is only embedded in the photo’s internal file format container if you set -[AVCapturePhotoSettings setEmbedsDepthDataInPhoto:YES].
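A sketch of unwrapping the depth data when it was requested; depthDataMap is assumed to be the generated accessor for AVDepthData’s depthDataMap property (objc2-core-video feature).

use objc2_av_foundation::AVCapturePhoto;

fn depth_info(photo: &AVCapturePhoto) {
    unsafe {
        match photo.depthData() {
            Some(depth) => {
                // Assumed accessor for the AVDepthData.depthDataMap property.
                let _map = depth.depthDataMap();
                println!("photo carries depth data");
            }
            None => println!("depth data delivery was not requested"),
        }
    }
}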
pub unsafe fn portraitEffectsMatte(
    &self,
) -> Option<Retained<AVPortraitEffectsMatte>>
Available on crate feature AVPortraitEffectsMatte only.
An AVPortraitEffectsMatte object wrapping a matte associated with this photo.
If you requested portrait effects matte delivery by calling -[AVCapturePhotoSettings setPortraitEffectsMatteDeliveryEnabled:YES], this property offers access to the resulting AVPortraitEffectsMatte object. Nil is returned if you did not request portrait effects matte delivery. Note that the portrait effects matte is only embedded in the photo’s internal file format container if you set -[AVCapturePhotoSettings setEmbedsPortraitEffectsMatteInPhoto:YES].
pub unsafe fn semanticSegmentationMatteForType(
    &self,
    semantic_segmentation_matte_type: &AVSemanticSegmentationMatteType,
) -> Option<Retained<AVSemanticSegmentationMatte>>
Available on crate feature AVSemanticSegmentationMatte only.
An accessor for semantic segmentation mattes associated with this photo.
Parameter semanticSegmentationMatteType: The matte type of interest (hair, skin, etc).
Returns: An instance of AVSemanticSegmentationMatte, or nil if none could be found for the specified type.
If you requested one or more semantic segmentation mattes by calling -[AVCapturePhotoSettings setEnabledSemanticSegmentationMatteTypes:] with a non-empty array of types, this property offers access to the resulting AVSemanticSegmentationMatte objects. Nil is returned if you did not request semantic segmentation matte delivery, or if no mattes of the specified type are available. Note that semantic segmentation mattes are only embedded in the photo’s internal file format container if you call -[AVCapturePhotoSettings setEmbedsSemanticSegmentationMattesInPhoto:YES].
pub unsafe fn metadata(&self) -> Retained<NSDictionary<NSString, AnyObject>>
An ImageIO property style dictionary of metadata associated with this photo.
Valid metadata keys are found in <ImageIO/CGImageProperties.h>, such as kCGImagePropertyOrientation, kCGImagePropertyExifDictionary, kCGImagePropertyMakerAppleDictionary, etc.
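As an illustration, the Exif orientation can be read out of this dictionary roughly as follows. This is a sketch only: it relies on kCGImagePropertyOrientation being the string "Orientation", and on the objectForKey, downcast_ref, and NSNumber::as_u32 helpers as provided by objc2/objc2-foundation.

use objc2_av_foundation::AVCapturePhoto;
use objc2_foundation::{NSNumber, NSString};

// Read the Exif orientation (1..=8) from the photo's ImageIO-style metadata.
fn exif_orientation(photo: &AVCapturePhoto) -> Option<u32> {
    let metadata = unsafe { photo.metadata() };
    // kCGImagePropertyOrientation is the CFString "Orientation".
    let key = NSString::from_str("Orientation");
    let value = metadata.objectForKey(&key)?;
    // The value is an NSNumber wrapping a CGImagePropertyOrientation raw value.
    Some(value.downcast_ref::<NSNumber>()?.as_u32())
}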
pub unsafe fn cameraCalibrationData(
    &self,
) -> Option<Retained<AVCameraCalibrationData>>
Available on crate feature AVCameraCalibrationData only.
An AVCameraCalibrationData object representing the calibration information for the camera providing the photo.
Camera calibration data is only present if you set -[AVCapturePhotoSettings setCameraCalibrationDataDeliveryEnabled:YES]. When requesting virtual device constituent photo delivery plus cameraCalibrationDataDeliveryEnabled, camera calibration information is delivered with all resultant photos and is specific to the constituent device producing that photo.
pub unsafe fn resolvedSettings(
    &self,
) -> Retained<AVCaptureResolvedPhotoSettings>
The AVCaptureResolvedPhotoSettings associated with all photo results for a given -[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] request.
Even in the event of an error, the resolved settings are always non-nil.
pub unsafe fn photoCount(&self) -> NSInteger
This photo’s index (1-based) in the total expected photo count.
The resolvedSettings.expectedPhotoCount property indicates the total number of images that will be returned for a given capture request. This property indicates this photo’s index (1-based). When you receive a -captureOutput:didFinishProcessingPhoto:error: callback with a photo whose photoCount matches resolvedSettings.expectedPhotoCount, you know you’ve received the last one for the given capture request.
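A delegate can use this to detect the final delivery for a request, roughly as in the sketch below; expectedPhotoCount is assumed to be the generated accessor on AVCaptureResolvedPhotoSettings.

use objc2_av_foundation::AVCapturePhoto;

// Returns true if this is the last photo the delegate will receive
// for the originating capture request.
fn is_last_photo(photo: &AVCapturePhoto) -> bool {
    unsafe {
        let delivered = photo.photoCount(); // 1-based NSInteger
        // Assumed accessor for AVCaptureResolvedPhotoSettings.expectedPhotoCount.
        let expected = photo.resolvedSettings().expectedPhotoCount();
        delivered as usize == expected
    }
}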
pub unsafe fn sourceDeviceType(&self) -> Option<Retained<AVCaptureDeviceType>>
Available on crate feature AVCaptureDevice only.
The device type of the source camera providing the photo.
When taking a virtual device constituent photo capture, you may query this property to find out the source type of the photo. For instance, on a DualCamera, resulting photos will be of sourceDeviceType AVCaptureDeviceTypeBuiltInWideAngleCamera or AVCaptureDeviceTypeBuiltInTelephotoCamera. For all other types of capture, the source device type is equal to the -[AVCaptureDevice deviceType] of the AVCaptureDevice to which the AVCapturePhotoOutput is connected. Returns nil if the source of the photo is not an AVCaptureDevice.
pub unsafe fn constantColorConfidenceMap(
    &self,
) -> Option<Retained<CVPixelBuffer>>
Available on crate feature objc2-core-video only.
Returns a pixel buffer with the same aspect ratio as the constant color photo, where each pixel value (an unsigned 8-bit integer) indicates how fully the constant color effect has been achieved in the corresponding region of the constant color photo – 255 means full confidence, 0 means zero confidence.
NULL is returned for any photo that is not a constant color photo.
pub unsafe fn constantColorCenterWeightedMeanConfidenceLevel(&self) -> c_float
Returns a score summarizing the overall confidence level of a constant color photo – 1.0 means full confidence, 0.0 means zero confidence.
Default is 0.0.
In most use cases (document scanning, for example), the central region of the photo is considered more important than the periphery, so the confidence levels of the central pixels are weighted more heavily than those of pixels on the edges of the photo.
Use constantColorConfidenceMap for more use-case-specific analyses of the confidence level.
pub unsafe fn isConstantColorFallbackPhoto(&self) -> bool
Indicates whether this photo is a fallback photo for a constant color capture.
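Taken together, the constant color accessors support a simple acceptance check such as the sketch below; the 0.9 threshold is purely illustrative, not a recommended value.

use objc2_av_foundation::AVCapturePhoto;

// Keep a constant color result only if it is not the fallback photo and the
// overall confidence is reasonably high.
fn accept_constant_color(photo: &AVCapturePhoto) -> bool {
    unsafe {
        if photo.isConstantColorFallbackPhoto() {
            return false;
        }
        // Illustrative threshold; inspect constantColorConfidenceMap() for
        // per-region confidence when a single score is not enough.
        photo.constantColorCenterWeightedMeanConfidenceLevel() >= 0.9
    }
}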
impl AVCapturePhoto
AVCapturePhotoConversions.
pub unsafe fn fileDataRepresentation(&self) -> Option<Retained<NSData>>
Flattens the AVCapturePhoto to an NSData using the file container format (processedFileType or rawFileType) specified in the AVCapturePhotoSettings (e.g. JFIF, HEIF, DNG, DICOM).
Returns: An NSData containing bits in the file container’s format, or nil if the flattening process fails.
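A sketch of persisting the containerized bytes to disk with std::fs; to_vec is assumed to be the objc2-foundation convenience that copies the NSData contents into a Vec<u8>.

use std::{fs, io};

use objc2_av_foundation::AVCapturePhoto;

// Write the containerized photo (HEIC/JPEG/DNG, per the original settings)
// to the given path.
fn save_photo(photo: &AVCapturePhoto, path: &str) -> io::Result<()> {
    // Assumed: NSData::to_vec copies the bytes into a Vec<u8>.
    let bytes = unsafe { photo.fileDataRepresentation().map(|data| data.to_vec()) }
        .ok_or_else(|| io::Error::new(io::ErrorKind::Other, "flattening failed"))?;
    fs::write(path, bytes)
}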
pub unsafe fn fileDataRepresentationWithCustomizer(
    &self,
    customizer: &ProtocolObject<dyn AVCapturePhotoFileDataRepresentationCustomizer>,
) -> Option<Retained<NSData>>
Flattens the AVCapturePhoto to an NSData using the file container format (processedFileType or rawFileType) specified in the AVCapturePhotoSettings (e.g. JFIF, HEIF, DNG, DICOM), and allows you to strip or replace various pieces of metadata in the process.
Parameter customizer: An object conforming to the AVCapturePhotoFileDataRepresentationCustomizer protocol that will be called synchronously to provide customization of metadata written to the container format. An NSInvalidArgumentException is thrown if you pass nil.
Returns: An NSData containing bits in the file container’s format, or nil if the flattening process fails.
pub unsafe fn fileDataRepresentationWithReplacementMetadata_replacementEmbeddedThumbnailPhotoFormat_replacementEmbeddedThumbnailPixelBuffer_replacementDepthData(
    &self,
    replacement_metadata: Option<&NSDictionary<NSString, AnyObject>>,
    replacement_embedded_thumbnail_photo_format: Option<&NSDictionary<NSString, AnyObject>>,
    replacement_embedded_thumbnail_pixel_buffer: Option<&CVPixelBuffer>,
    replacement_depth_data: Option<&AVDepthData>,
) -> Option<Retained<NSData>>
👎Deprecated
Available on crate features AVDepthData and objc2-core-video only.
Flattens the AVCapturePhoto to an NSData using the file container format (processedFileType or rawFileType) specified in the AVCapturePhotoSettings (e.g. JFIF, HEIF, DNG, DICOM), and allows you to replace metadata, thumbnail, and depth data in the process.
Parameter replacementMetadata: A dictionary of keys and values from <ImageIO/CGImageProperties.h>. To preserve the existing metadata in the file, pass self.metadata. To strip the existing metadata, pass nil. To replace the metadata, pass a replacement dictionary.
Parameter replacementEmbeddedThumbnailPhotoFormat: A dictionary of keys and values from <AVFoundation/AVVideoSettings.h>. If you pass a non-nil dictionary, AVVideoCodecKey is required, with AVVideoWidthKey and AVVideoHeightKey being optional. To preserve the existing embedded thumbnail photo in the file, pass self.embeddedThumbnailPhotoFormat and pass nil as your replacementEmbeddedThumbnailPixelBuffer parameter. To strip the existing embedded thumbnail, pass nil for both replacementEmbeddedThumbnailPhotoFormat and replacementEmbeddedThumbnailPixelBuffer. To replace the existing embedded thumbnail photo, pass both a non-nil replacementEmbeddedThumbnailPixelBuffer and a replacementEmbeddedThumbnailPhotoFormat dictionary.
Parameter replacementEmbeddedThumbnailPixelBuffer: A pixel buffer containing a source image to be encoded to the file as the replacement thumbnail image. To preserve the existing embedded thumbnail photo in the file, pass self.embeddedThumbnailPhotoFormat as your replacementEmbeddedThumbnailPhotoFormat parameter and nil as your replacementEmbeddedThumbnailPixelBuffer parameter. To strip the existing embedded thumbnail, pass nil for both replacementEmbeddedThumbnailPhotoFormat and replacementEmbeddedThumbnailPixelBuffer. To replace the existing embedded thumbnail photo, pass both a non-nil replacementEmbeddedThumbnailPixelBuffer and a replacementEmbeddedThumbnailPhotoFormat dictionary.
Parameter replacementDepthData: Replacement depth data to be written to the flattened file container. To preserve the existing depth data in the file, pass self.depthData. To strip it, pass nil. To replace it, pass a new AVDepthData instance.
Returns: An NSData containing bits in the file container’s format, or nil if the flattening process fails.
§Safety
replacement_metadata generic should be of the correct type.
replacement_embedded_thumbnail_photo_format generic should be of the correct type.
pub unsafe fn CGImageRepresentation(&self) -> Option<Retained<CGImage>>
Available on crate feature objc2-core-graphics only.
Utility method that converts the AVCapturePhoto’s primary photo to a CGImage.
Returns: A CGImageRef, or nil if the conversion process fails.
Each time you access this method, AVCapturePhoto generates a new CGImageRef. When backed by a compressed container (such as HEIC), the CGImageRepresentation is decoded lazily as needed. When backed by an uncompressed format such as BGRA, it is copied into a separate backing buffer whose lifetime is not tied to that of the AVCapturePhoto. For a 12 megapixel image, a BGRA CGImage represents ~48 megabytes per call. If you only intend to use the CGImage for on-screen rendering, use the previewCGImageRepresentation instead. Note that the physical rotation of the CGImageRef matches that of the main image. Exif orientation has not been applied. If you wish to apply rotation when working with UIImage, you can do so by querying the photo’s metadata[kCGImagePropertyOrientation] value, and passing it as the orientation parameter to +[UIImage imageWithCGImage:scale:orientation:]. RAW images always return a CGImageRepresentation of nil. If you wish to make a CGImageRef from a RAW image, use CIRAWFilter in the CoreImage framework.
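Because RAW results always return nil here, a caller might fall back to the containerized bytes instead, as in this sketch:

use objc2_av_foundation::AVCapturePhoto;

fn render_or_export(photo: &AVCapturePhoto) {
    unsafe {
        if let Some(_image) = photo.CGImageRepresentation() {
            // Hand the CGImage to your rendering code; note that Exif
            // orientation has not been applied to it.
        } else {
            // RAW photos (and failed conversions) return None; fall back to the
            // DNG bytes, or process the RAW via CIRAWFilter in CoreImage.
            let _dng = photo.fileDataRepresentation();
        }
    }
}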
pub unsafe fn previewCGImageRepresentation(&self) -> Option<Retained<CGImage>>
Available on crate feature objc2-core-graphics only.
Utility method that converts the AVCapturePhoto’s preview photo to a CGImage.
Returns: A CGImageRef, or nil if the conversion process fails, or if you did not request a preview photo.
Each time you access this method, AVCapturePhoto generates a new CGImageRef. This CGImageRepresentation is a RGB rendering of the previewPixelBuffer property. If you did not request a preview photo by setting the -[AVCapturePhotoSettings previewPhotoFormat] property, this method returns nil. Note that the physical rotation of the CGImageRef matches that of the main image. Exif orientation has not been applied. If you wish to apply rotation when working with UIImage, you can do so by querying the photo’s metadata[kCGImagePropertyOrientation] value, and passing it as the orientation parameter to +[UIImage imageWithCGImage:scale:orientation:].
impl AVCapturePhoto
AVCapturePhotoBracketedCapture.
pub unsafe fn bracketSettings(
    &self,
) -> Option<Retained<AVCaptureBracketedStillImageSettings>>
Available on crate feature AVCaptureStillImageOutput only.
The AVCaptureBracketedStillImageSettings associated with this photo.
When specifying a bracketed capture using AVCapturePhotoBracketSettings, you specify an array of AVCaptureBracketedStillImageSettings – one per image in the bracket. This property indicates the AVCaptureBracketedStillImageSettings associated with this particular photo, or nil if this photo is not part of a bracketed capture.
pub unsafe fn sequenceCount(&self) -> NSInteger
1-based sequence count of the photo.
If this photo is part of a bracketed capture (invoked using AVCapturePhotoBracketSettings), this property indicates the current result’s count in the sequence, starting with 1 for the first result, or 0 if this photo is not part of a bracketed capture.
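For a bracketed request, the sequence count lets a delegate collect results in order, roughly as in the sketch below; the BracketCollector type and its Vec are purely illustrative.

use objc2::rc::Retained;
use objc2::Message;
use objc2_av_foundation::AVCapturePhoto;

// Illustrative accumulator for a bracketed capture: photos are stored in
// delivery order, tagged with their 1-based sequence count.
#[derive(Default)]
struct BracketCollector {
    photos: Vec<(isize, Retained<AVCapturePhoto>)>,
}

impl BracketCollector {
    fn add(&mut self, photo: &AVCapturePhoto) {
        let seq = unsafe { photo.sequenceCount() }; // 0 if not part of a bracket
        self.photos.push((seq, photo.retain()));
    }
}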
pub unsafe fn lensStabilizationStatus(&self) -> AVCaptureLensStabilizationStatus
The status of the lens stabilization module during capture of this photo.
In configurations where lens stabilization (OIS) is unsupported, AVCaptureLensStabilizationStatusUnsupported is returned. If lens stabilization is supported, but this photo is not part of a bracketed capture in which -[AVCapturePhotoBracketSettings setLensStabilizationEnabled:YES] was called, AVCaptureLensStabilizationStatusOff is returned. Otherwise a lens stabilization status is returned indicating how lens stabilization was applied during the capture.
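A sketch of interpreting the status; the variant names mirror the Objective-C enum cases and are assumptions about how the binding exposes them.

use objc2_av_foundation::{AVCaptureLensStabilizationStatus, AVCapturePhoto};

fn describe_lens_stabilization(photo: &AVCapturePhoto) -> &'static str {
    let status = unsafe { photo.lensStabilizationStatus() };
    // Assumed associated constants mirroring the Objective-C enum cases.
    if status == AVCaptureLensStabilizationStatus::Unsupported {
        "lens stabilization is not supported in this configuration"
    } else if status == AVCaptureLensStabilizationStatus::Off {
        "lens stabilization was not engaged for this photo"
    } else {
        "lens stabilization was applied; inspect the specific status value"
    }
}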
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations§
impl AsRef<AVCapturePhoto> for AVCaptureDeferredPhotoProxy
fn as_ref(&self) -> &AVCapturePhoto
impl AsRef<AVCapturePhoto> for AVCapturePhoto
impl AsRef<AnyObject> for AVCapturePhoto
impl AsRef<NSObject> for AVCapturePhoto
impl Borrow<AVCapturePhoto> for AVCaptureDeferredPhotoProxy
fn borrow(&self) -> &AVCapturePhoto
impl Borrow<AnyObject> for AVCapturePhoto
impl Borrow<NSObject> for AVCapturePhoto
impl ClassType for AVCapturePhoto
const NAME: &'static str = "AVCapturePhoto"
type ThreadKind = <<AVCapturePhoto as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCapturePhoto
impl Deref for AVCapturePhoto
impl Hash for AVCapturePhoto
impl Message for AVCapturePhoto
impl NSObjectProtocol for AVCapturePhoto
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref