#[repr(C)]
pub struct AVCaptureResolvedPhotoSettings { /* private fields */ }
Available on crate feature AVCapturePhotoOutput only.
An immutable object produced by callbacks in each and every AVCapturePhotoCaptureDelegate protocol method.
When you initiate a photo capture request using -capturePhotoWithSettings:delegate:, some of your settings are not yet certain. For instance, auto flash and auto still image stabilization allow the AVCapturePhotoOutput to decide just in time whether to employ flash or still image stabilization, depending on the current scene. Once the request is issued, AVCapturePhotoOutput begins the capture, resolves the uncertain settings, and in its first callback informs you of its choices through an AVCaptureResolvedPhotoSettings object. This same object is presented to all the callbacks fired for a particular photo capture request. Its uniqueID property matches that of the AVCapturePhotoSettings instance you used to initiate the photo request.
See also Apple’s documentation
Implementations
impl AVCaptureResolvedPhotoSettings
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn uniqueID(&self) -> i64
uniqueID matches that of the AVCapturePhotoSettings instance you passed to -capturePhotoWithSettings:delegate:.
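Because the same resolved-settings object is handed to every callback for a given request, uniqueID can be used to correlate callbacks with in-flight requests. A minimal plain-Rust sketch of that bookkeeping (the PendingRequest type and request_for helper are illustrative, not part of the crate):

```rust
use std::collections::HashMap;

/// Illustrative per-request state; not a type from the crate.
#[derive(Debug, PartialEq)]
struct PendingRequest {
    label: String,
}

/// Correlate a callback's resolved uniqueID with an in-flight request.
fn request_for<'a>(
    pending: &'a HashMap<i64, PendingRequest>,
    resolved_unique_id: i64,
) -> Option<&'a PendingRequest> {
    pending.get(&resolved_unique_id)
}

fn main() {
    let mut pending = HashMap::new();
    // When initiating a capture, remember the AVCapturePhotoSettings uniqueID.
    pending.insert(42, PendingRequest { label: "portrait shot".into() });
    // In a delegate callback, resolved.uniqueID() would be the lookup key.
    assert!(request_for(&pending, 42).is_some());
}
```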
pub unsafe fn photoDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the photo buffer that will be delivered to the -captureOutput:didFinishProcessingPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error: callback.
If you request a RAW capture with no processed companion image, photoDimensions resolve to { 0, 0 }.
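The dimension accessors in this section all use { 0, 0 } to signal that the corresponding buffer will not be produced. A small helper capturing that convention (the Dims struct is an illustrative mirror of CMVideoDimensions' i32 width/height fields, not crate API):

```rust
/// Illustrative mirror of CMVideoDimensions (i32 width/height).
#[derive(Clone, Copy)]
struct Dims {
    width: i32,
    height: i32,
}

/// Dimensions of { 0, 0 } mean the corresponding buffer will not be delivered.
fn will_be_delivered(d: Dims) -> bool {
    !(d.width == 0 && d.height == 0)
}

fn main() {
    // e.g. a RAW-only request: photoDimensions resolve to { 0, 0 }.
    assert!(!will_be_delivered(Dims { width: 0, height: 0 }));
    assert!(will_be_delivered(Dims { width: 4032, height: 3024 }));
}
```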
pub unsafe fn rawPhotoDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the RAW photo buffer that will be delivered to the -captureOutput:didFinishProcessingRawPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error: callback.
If you request a non-RAW capture, rawPhotoDimensions resolve to { 0, 0 }.
pub unsafe fn previewDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the preview photo buffer that will be delivered to the -captureOutput:didFinishProcessing{Photo | RawPhoto}… AVCapturePhotoCaptureDelegate callbacks.
If you don’t request a preview image, previewDimensions resolve to { 0, 0 }.
pub unsafe fn embeddedThumbnailDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the embedded thumbnail that will be written to the processed photo delivered to the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback.
If you don’t request an embedded thumbnail image, embeddedThumbnailDimensions resolve to { 0, 0 }.
pub unsafe fn rawEmbeddedThumbnailDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the embedded thumbnail that will be written to the RAW photo delivered to the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback.
If you don’t request a raw embedded thumbnail image, rawEmbeddedThumbnailDimensions resolve to { 0, 0 }.
pub unsafe fn portraitEffectsMatteDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the portrait effects matte that will be delivered to the AVCapturePhoto in the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback.
If you request a portrait effects matte by calling -[AVCapturePhotoSettings setPortraitEffectsMatteDeliveryEnabled:YES], portraitEffectsMatteDimensions resolve to the expected dimensions of the portrait effects matte, assuming one is generated (see -[AVCapturePhotoSettings portraitEffectsMatteDeliveryEnabled] for a discussion of why a portrait effects matte might not be delivered). If you don’t request a portrait effects matte, portraitEffectsMatteDimensions always resolve to { 0, 0 }.
pub unsafe fn dimensionsForSemanticSegmentationMatteOfType(
    &self,
    semantic_segmentation_matte_type: &AVSemanticSegmentationMatteType,
) -> CMVideoDimensions
Available on crate features AVSemanticSegmentationMatte and objc2-core-media only.
Queries the resolved dimensions of semantic segmentation mattes that will be delivered to the AVCapturePhoto in the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback.
If you request semantic segmentation mattes by calling -[AVCapturePhotoSettings setEnabledSemanticSegmentationMatteTypes:] with a non-empty array, the dimensions resolve to the expected dimensions for each of the mattes, assuming they are generated (see -[AVCapturePhotoSettings enabledSemanticSegmentationMatteTypes] for a discussion of why a particular matte might not be delivered). If you don’t request any semantic segmentation mattes, the result will always be { 0, 0 }.
pub unsafe fn livePhotoMovieDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the video track in the movie that will be delivered to the -captureOutput:didFinishProcessingLivePhotoToMovieFileAtURL:duration:photoDisplayTime:resolvedSettings:error: callback.
If you don’t request Live Photo capture, livePhotoMovieDimensions resolve to { 0, 0 }.
pub unsafe fn isFlashEnabled(&self) -> bool
Indicates whether the flash will fire when capturing the photo.
When you specify AVCaptureFlashModeAuto as your AVCapturePhotoSettings.flashMode, you don’t know if flash capture will be chosen until you inspect the AVCaptureResolvedPhotoSettings flashEnabled property. If the device becomes too hot, the flash becomes temporarily unavailable. You can key-value observe AVCaptureDevice’s flashAvailable property to know when this occurs. If the flash is unavailable due to thermal issues, and you specify a flashMode of AVCaptureFlashModeOn, flashEnabled still resolves to NO until the device has sufficiently cooled off.
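The resolution rule described above can be sketched as a small decision function in plain Rust (the FlashMode enum and resolve_flash_enabled function are illustrative stand-ins, not crate API):

```rust
/// Illustrative stand-in for AVCaptureFlashMode.
#[derive(Clone, Copy, PartialEq)]
enum FlashMode {
    Off,
    On,
    Auto,
}

/// How flashEnabled resolves, per the rules described above.
fn resolve_flash_enabled(mode: FlashMode, flash_available: bool, scene_needs_flash: bool) -> bool {
    match mode {
        FlashMode::Off => false,
        // Even with AVCaptureFlashModeOn, flashEnabled resolves to NO
        // while the flash is thermally unavailable.
        FlashMode::On => flash_available,
        // Auto lets the output decide based on the current scene.
        FlashMode::Auto => flash_available && scene_needs_flash,
    }
}

fn main() {
    // Flash requested but thermally unavailable: resolves to false.
    assert!(!resolve_flash_enabled(FlashMode::On, false, true));
    // Auto with an available flash and a dark scene: resolves to true.
    assert!(resolve_flash_enabled(FlashMode::Auto, true, true));
}
```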
pub unsafe fn isRedEyeReductionEnabled(&self) -> bool
Indicates whether red-eye reduction will be applied as necessary when capturing the photo if flashEnabled is YES.
pub unsafe fn deferredPhotoProxyDimensions(&self) -> CMVideoDimensions
Available on crate feature objc2-core-media only.
The resolved dimensions of the AVCaptureDeferredPhotoProxy when opting in to deferred photo delivery. See AVCaptureDeferredPhotoProxy.
If you don’t opt in to deferred photo delivery, deferredPhotoProxyDimensions resolve to { 0, 0 }. When an AVCaptureDeferredPhotoProxy is returned, the photoDimensions property of this object represents the dimensions of the final photo.
pub unsafe fn isStillImageStabilizationEnabled(&self) -> bool
👎Deprecated
Indicates whether still image stabilization will be employed when capturing the photo.
As of iOS 13, the AVCapturePhotoOutput is capable of applying a variety of multi-image fusion techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc.), all of which have previously been lumped under the stillImageStabilization moniker. This property should no longer be used, as it no longer provides meaningful information about the techniques used to improve quality in a photo capture. Instead, you should use -photoQualityPrioritization to indicate your preferred quality vs speed when configuring your AVCapturePhotoSettings. You may query -photoProcessingTimeRange to get an indication of how long the photo will take to process before delivery to your delegate.
pub unsafe fn isVirtualDeviceFusionEnabled(&self) -> bool
Indicates whether fusion of virtual device constituent camera images will be used when capturing the photo, such as the wide-angle and telephoto images on a DualCamera.
pub unsafe fn isDualCameraFusionEnabled(&self) -> bool
👎Deprecated
Indicates whether DualCamera wide-angle and telephoto image fusion will be employed when capturing the photo. As of iOS 13, this property is deprecated in favor of virtualDeviceFusionEnabled.
pub unsafe fn expectedPhotoCount(&self) -> NSUInteger
Indicates the number of times your -captureOutput:didFinishProcessingPhoto:error: callback will be called. For instance, if you’ve requested an auto exposure bracket of 3 with JPEG and RAW, the expectedPhotoCount is 6.
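The arithmetic in that example generalizes: each bracketed exposure produces one photo per requested format. A hypothetical helper (not crate API) mirroring the count:

```rust
/// Each bracketed exposure yields one photo per requested format,
/// so the callback count is bracket_count × number of formats.
fn expected_photo_count(bracket_count: usize, raw: bool, processed: bool) -> usize {
    let formats = usize::from(raw) + usize::from(processed);
    bracket_count * formats
}

fn main() {
    // A 3-shot auto exposure bracket with JPEG and RAW → 6 callbacks.
    assert_eq!(expected_photo_count(3, true, true), 6);
    // A single non-bracketed JPEG → 1 callback.
    assert_eq!(expected_photo_count(1, false, true), 1);
}
```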
pub unsafe fn photoProcessingTimeRange(&self) -> CMTimeRange
Available on crate feature objc2-core-media only.
Indicates the processing time range you can expect for this photo to be delivered to your delegate. The .start field of the CMTimeRange is zero-based. In other words, if photoProcessingTimeRange.start is equal to .5 seconds, then the minimum processing time for this photo is .5 seconds. The .start field plus the .duration field of the CMTimeRange indicate the max expected processing time for this photo. Consider implementing a UI affordance if the max processing time is uncomfortably long.
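Interpreting the range per the description above, in a plain-Rust sketch (the TimeRange struct is an illustrative mirror of CMTimeRange in seconds, not crate API):

```rust
/// Illustrative mirror of CMTimeRange, in seconds.
struct TimeRange {
    start: f64,
    duration: f64,
}

/// (min, max) expected processing time: .start is the minimum,
/// .start + .duration is the maximum.
fn processing_bounds(r: &TimeRange) -> (f64, f64) {
    (r.start, r.start + r.duration)
}

fn main() {
    let r = TimeRange { start: 0.5, duration: 2.0 };
    let (min, max) = processing_bounds(&r);
    assert_eq!(min, 0.5);
    assert_eq!(max, 2.5);
    // If max is uncomfortably long, consider showing a UI affordance.
}
```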
pub unsafe fn isContentAwareDistortionCorrectionEnabled(&self) -> bool
Indicates whether content aware distortion correction will be employed when capturing the photo.
pub unsafe fn isFastCapturePrioritizationEnabled(&self) -> bool
Indicates whether fast capture prioritization will be employed when capturing the photo.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations
impl ClassType for AVCaptureResolvedPhotoSettings
const NAME: &'static str = "AVCaptureResolvedPhotoSettings"
type ThreadKind = <<AVCaptureResolvedPhotoSettings as ClassType>::Super as ClassType>::ThreadKind
impl NSObjectProtocol for AVCaptureResolvedPhotoSettings
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref