Struct AVCapturePhotoSettings

Source
#[repr(C)]
pub struct AVCapturePhotoSettings { /* private fields */ }
Available on crate feature AVCapturePhotoOutput only.

A mutable settings object encapsulating all the desired properties of a photo capture.

To take a picture, a client instantiates and configures an AVCapturePhotoSettings object, then calls AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate:, passing the settings and a delegate to be informed when events relating to the photo capture occur. Since AVCapturePhotoSettings has no reference to the AVCapturePhotoOutput instance with which it will be used, minimal validation occurs while you configure an AVCapturePhotoSettings instance. The bulk of the validation is executed when you call AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate:.

See also Apple’s documentation
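
For illustration, a minimal sketch of this flow using the objc2 bindings. Here output (an AVCapturePhotoOutput attached to a running AVCaptureSession) and delegate (an object implementing AVCapturePhotoCaptureDelegate) are assumed to exist and appear only in a comment:

use objc2_av_foundation::AVCapturePhotoSettings;

// A default settings object: JPEG codec, JPEG file type, balanced quality.
let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Hand the settings and a delegate to the photo output to start the capture:
// unsafe { output.capturePhotoWithSettings_delegate(&settings, &delegate) };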

Implementations§

Source§

impl AVCapturePhotoSettings

Source

pub unsafe fn photoSettings() -> Retained<Self>

Creates a default instance of AVCapturePhotoSettings.

Returns: An instance of AVCapturePhotoSettings.

A default AVCapturePhotoSettings object has a format of AVVideoCodecTypeJPEG, a fileType of AVFileTypeJPEG, and photoQualityPrioritization set to AVCapturePhotoQualityPrioritizationBalanced.

Source

pub unsafe fn photoSettingsWithFormat( format: Option<&NSDictionary<NSString, AnyObject>>, ) -> Retained<Self>

Creates an instance of AVCapturePhotoSettings with a user-specified output format.

Parameter format: A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property.

Returns: An instance of AVCapturePhotoSettings.

If you wish an uncompressed format, your dictionary must contain kCVPixelBufferPixelFormatTypeKey, and the format specified must be present in AVCapturePhotoOutput’s -availablePhotoPixelFormatTypes array. kCVPixelBufferPixelFormatTypeKey is the only supported key when expressing uncompressed output. If you wish a compressed format, your dictionary must contain AVVideoCodecKey and the codec specified must be present in AVCapturePhotoOutput’s -availablePhotoCodecTypes array. If you are specifying a compressed format, the AVVideoCompressionPropertiesKey is also supported, with a payload dictionary containing a single AVVideoQualityKey. Passing a nil format dictionary is analogous to calling +photoSettings.
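
A hedged sketch of requesting HEVC-compressed output. It assumes objc2-foundation provides the NSDictionary::from_slices constructor (the exact helper varies between objc2-foundation versions); the AVVideoSettings statics require unsafe access:

use objc2::runtime::AnyObject;
use objc2_av_foundation::{AVCapturePhotoSettings, AVVideoCodecKey, AVVideoCodecTypeHEVC};
use objc2_foundation::NSDictionary;

// The codec must be present in the output's -availablePhotoCodecTypes array.
let codec: &AnyObject = unsafe { AVVideoCodecTypeHEVC }.as_ref();
let format = NSDictionary::from_slices(&[unsafe { AVVideoCodecKey }], &[codec]);
let settings = unsafe { AVCapturePhotoSettings::photoSettingsWithFormat(Some(&format)) };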

Source

pub unsafe fn photoSettingsWithRawPixelFormatType( raw_pixel_format_type: u32, ) -> Retained<Self>

Creates an instance of AVCapturePhotoSettings specifying RAW only output.

Parameter rawPixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).

Returns: An instance of AVCapturePhotoSettings.

rawPixelFormatType must be one of the OSTypes contained in AVCapturePhotoOutput’s -availableRawPhotoPixelFormatTypes array. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.
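
A sketch that picks the first advertised RAW pixel format rather than hard-coding an OSType; the output parameter is assumed to be an already-configured AVCapturePhotoOutput:

use objc2::rc::Retained;
use objc2_av_foundation::{AVCapturePhotoOutput, AVCapturePhotoSettings};

fn raw_only_settings(output: &AVCapturePhotoOutput) -> Option<Retained<AVCapturePhotoSettings>> {
    // Use a pixel format the output actually advertises.
    let raw_type = unsafe { output.availableRawPhotoPixelFormatTypes() }
        .firstObject()?
        .as_u32();
    Some(unsafe { AVCapturePhotoSettings::photoSettingsWithRawPixelFormatType(raw_type) })
}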

Source

pub unsafe fn photoSettingsWithRawPixelFormatType_processedFormat( raw_pixel_format_type: u32, processed_format: Option<&NSDictionary<NSString, AnyObject>>, ) -> Retained<Self>

Creates an instance of AVCapturePhotoSettings specifying RAW + a processed format (such as JPEG).

Parameter rawPixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).

Parameter processedFormat: A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property.

Returns: An instance of AVCapturePhotoSettings.

rawPixelFormatType must be one of the OSTypes contained in AVCapturePhotoOutput’s -availableRawPhotoPixelFormatTypes array. If you wish an uncompressed processedFormat, your dictionary must contain kCVPixelBufferPixelFormatTypeKey, and the processedFormat specified must be present in AVCapturePhotoOutput’s -availablePhotoPixelFormatTypes array. kCVPixelBufferPixelFormatTypeKey is the only supported key when expressing uncompressed processedFormat. If you wish a compressed format, your dictionary must contain AVVideoCodecKey and the codec specified must be present in AVCapturePhotoOutput’s -availablePhotoCodecTypes array. If you are specifying a compressed format, the AVVideoCompressionPropertiesKey is also supported, with a payload dictionary containing a single AVVideoQualityKey. Passing a nil processedFormat dictionary is analogous to calling +photoSettingsWithRawPixelFormatType:. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.

Source

pub unsafe fn photoSettingsWithRawPixelFormatType_rawFileType_processedFormat_processedFileType( raw_pixel_format_type: u32, raw_file_type: Option<&AVFileType>, processed_format: Option<&NSDictionary<NSString, AnyObject>>, processed_file_type: Option<&AVFileType>, ) -> Retained<Self>

Available on crate feature AVMediaFormat only.

Creates an instance of AVCapturePhotoSettings specifying RAW + a processed format (such as JPEG) and a file container to which it will be written.

Parameter rawPixelFormatType: A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h). Pass 0 if you do not desire a RAW photo callback.

Parameter rawFileType: The file container for which the RAW image should be formatted to be written. Pass nil if you have no preferred file container. A default container will be chosen for you.

Parameter processedFormat: A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property. Pass nil if you do not desire a processed photo callback.

Parameter processedFileType: The file container for which the processed image should be formatted to be written. Pass nil if you have no preferred file container. A default container will be chosen for you.

Returns: An instance of AVCapturePhotoSettings.

rawPixelFormatType must be one of the OSTypes contained in AVCapturePhotoOutput’s -availableRawPhotoPixelFormatTypes array. Set rawPixelFormatType to 0 if you do not desire a RAW photo callback. If you are specifying a rawFileType, it must be present in AVCapturePhotoOutput’s -availableRawPhotoFileTypes array. If you wish an uncompressed processedFormat, your dictionary must contain kCVPixelBufferPixelFormatTypeKey, and the processedFormat specified must be present in AVCapturePhotoOutput’s -availablePhotoPixelFormatTypes array. kCVPixelBufferPixelFormatTypeKey is the only supported key when expressing uncompressed processedFormat. If you wish a compressed format, your dictionary must contain AVVideoCodecKey and the codec specified must be present in AVCapturePhotoOutput’s -availablePhotoCodecTypes array. If you are specifying a compressed format, the AVVideoCompressionPropertiesKey is also supported, with a payload dictionary containing a single AVVideoQualityKey. If you are specifying a processedFileType, it must be present in AVCapturePhotoOutput’s -availablePhotoFileTypes array. Pass a nil processedFormat dictionary if you only desire a RAW photo capture. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.
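
A sketch of this full variant, pairing a DNG-containerized RAW with a HEIC-containerized HEVC photo. It assumes NSDictionary::from_slices, raw_type is expected to come from -availableRawPhotoPixelFormatTypes as shown earlier, and availability checks are elided:

use objc2::rc::Retained;
use objc2::runtime::AnyObject;
use objc2_av_foundation::{
    AVCapturePhotoSettings, AVFileTypeDNG, AVFileTypeHEIC, AVVideoCodecKey, AVVideoCodecTypeHEVC,
};
use objc2_foundation::NSDictionary;

fn raw_plus_processed(raw_type: u32) -> Retained<AVCapturePhotoSettings> {
    let codec: &AnyObject = unsafe { AVVideoCodecTypeHEVC }.as_ref();
    let processed = NSDictionary::from_slices(&[unsafe { AVVideoCodecKey }], &[codec]);
    unsafe {
        AVCapturePhotoSettings::photoSettingsWithRawPixelFormatType_rawFileType_processedFormat_processedFileType(
            raw_type,
            Some(AVFileTypeDNG),  // RAW container
            Some(&processed),     // processed photo format
            Some(AVFileTypeHEIC), // processed photo container
        )
    }
}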

Source

pub unsafe fn photoSettingsFromPhotoSettings( photo_settings: &AVCapturePhotoSettings, ) -> Retained<Self>

Creates an instance of AVCapturePhotoSettings with a new uniqueID from an existing instance of AVCapturePhotoSettings.

Parameter photoSettings: An existing AVCapturePhotoSettings instance.

Returns: A new instance of AVCapturePhotoSettings with a new uniqueID.

Use this factory method to create a clone of an existing photo settings instance, but with a new uniqueID that can safely be passed to AVCapturePhotoOutput -capturePhotoWithSettings:delegate:.
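
For example (a sketch; a settings instance may be submitted to the output only once, so clone it for each subsequent capture):

use objc2_av_foundation::AVCapturePhotoSettings;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// ... configure `settings` and capture with it ...
let next = unsafe { AVCapturePhotoSettings::photoSettingsFromPhotoSettings(&settings) };
// The clone keeps the configuration but receives a fresh uniqueID.
assert_ne!(unsafe { settings.uniqueID() }, unsafe { next.uniqueID() });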

Source

pub unsafe fn uniqueID(&self) -> i64

A 64-bit number that uniquely identifies this instance.

When you create an instance of AVCapturePhotoSettings, a uniqueID is generated automatically. This uniqueID is guaranteed to be unique for the lifetime of your process.

Source

pub unsafe fn format( &self, ) -> Option<Retained<NSDictionary<NSString, AnyObject>>>

A dictionary of Core Video pixel buffer attributes or AVVideoSettings, analogous to AVCaptureStillImageOutput’s outputSettings property.

The format dictionary you passed to one of the creation methods. May be nil if you’ve specified RAW-only capture.

Source

pub unsafe fn rawFileFormat( &self, ) -> Option<Retained<NSDictionary<NSString, AnyObject>>>

A dictionary of AVVideoSettings keys specifying the RAW file format to be used for the RAW photo.

Use this dictionary to specify the desired format properties of the RAW file that will be created. Currently, only the key AVVideoAppleProRAWBitDepthKey is allowed, and its value must be in the range 8 to 16. The AVVideoCodecKey must be present in the receiver’s -availableRawPhotoCodecTypes array as well as in -supportedRawPhotoCodecTypesForRawPhotoPixelFormatType:fileType:. AVVideoQualityKey (an NSNumber in the range [0.0, 1.0]) may optionally be set: values below 1.0 use lossy compression, with lower values being more lossy (smaller files, lower image quality), while a value of 1.0 uses lossless compression, yielding the largest file size but the best quality.

Source

pub unsafe fn setRawFileFormat( &self, raw_file_format: Option<&NSDictionary<NSString, AnyObject>>, )

Setter for rawFileFormat.

Source

pub unsafe fn processedFileType(&self) -> Option<Retained<AVFileType>>

Available on crate feature AVMediaFormat only.

The file container for which the processed photo is formatted to be stored.

The formatting of data within a photo buffer is often dependent on the file format intended for storage. For instance, a JPEG encoded photo buffer intended for storage in a JPEG (JPEG File Interchange Format) file differs from one intended for storage in HEIF. The HEIF-containerized JPEG buffer is tiled for readback efficiency and partitioned into the box structure dictated by the HEIF file format. Some codecs are only supported by AVCapturePhotoOutput if containerized. For instance, the AVVideoCodecTypeHEVC is only supported with AVFileTypeHEIF and AVFileTypeHEIC formatting. To discover which photo pixel format types and video codecs are supported for a given file type, you may query AVCapturePhotoOutput’s -supportedPhotoPixelFormatTypesForFileType:, or -supportedPhotoCodecTypesForFileType: respectively.

Source

pub unsafe fn rawPhotoPixelFormatType(&self) -> u32

A Bayer RAW or Apple ProRAW pixel format OSType (defined in CVPixelBuffer.h).

The rawPixelFormatType you specified in one of the creation methods. Returns 0 if you did not specify RAW capture. See AVCapturePhotoOutput’s -capturePhotoWithSettings:delegate: inline documentation for a discussion of restrictions on AVCapturePhotoSettings when requesting RAW capture.

Source

pub unsafe fn rawFileType(&self) -> Option<Retained<AVFileType>>

Available on crate feature AVMediaFormat only.

The file container for which the RAW photo is formatted to be stored.

The formatting of data within a RAW photo buffer may be dependent on the file format intended for storage. To discover which RAW photo pixel format types are supported for a given file type, you may query AVCapturePhotoOutput’s -supportedRawPhotoPixelFormatTypesForFileType:.

Source

pub unsafe fn flashMode(&self) -> AVCaptureFlashMode

Available on crate feature AVCaptureDevice only.

Specifies whether the flash should be on, off, or chosen automatically by AVCapturePhotoOutput.

flashMode takes the place of the deprecated AVCaptureDevice -flashMode API. Setting AVCaptureDevice.flashMode has no effect on AVCapturePhotoOutput, which only pays attention to the flashMode specified in your AVCapturePhotoSettings. The default value is AVCaptureFlashModeOff. Flash modes are defined in AVCaptureDevice.h. If you specify a flashMode of AVCaptureFlashModeOn, it wins over autoStillImageStabilizationEnabled=YES. When the device becomes very hot, the flash becomes temporarily unavailable until the device cools down (see AVCaptureDevice’s -flashAvailable). While the flash is unavailable, AVCapturePhotoOutput’s -supportedFlashModes property still reports AVCaptureFlashModeOn and AVCaptureFlashModeAuto as being available, thus allowing you to specify a flashMode of AVCaptureFlashModeOn. You should always check the AVCaptureResolvedPhotoSettings provided to you in the AVCapturePhotoCaptureDelegate callbacks, as the resolved flashEnabled property will tell you definitively if the flash is being used.
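
A brief sketch of per-photo flash selection:

use objc2_av_foundation::{AVCaptureFlashMode, AVCapturePhotoSettings};

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Let the photo output decide whether the scene needs flash.
unsafe { settings.setFlashMode(AVCaptureFlashMode::Auto) };
// Check the resolved flashEnabled property in the delegate callbacks
// to learn definitively whether the flash fired.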

Source

pub unsafe fn setFlashMode(&self, flash_mode: AVCaptureFlashMode)

Available on crate feature AVCaptureDevice only.

Setter for flashMode.

Source

pub unsafe fn isAutoRedEyeReductionEnabled(&self) -> bool

Specifies whether red-eye reduction should be applied automatically on flash captures.

Default is YES on platforms that support automatic red-eye reduction unless you are capturing a bracket using AVCapturePhotoBracketSettings or a RAW photo without a processed photo. For RAW photos with a processed photo the red-eye reduction will be applied to the processed photo only (RAW photos by definition are not processed). When set to YES, red-eye reduction is applied as needed for flash captures if the photo output’s autoRedEyeReductionSupported property returns YES.

Source

pub unsafe fn setAutoRedEyeReductionEnabled( &self, auto_red_eye_reduction_enabled: bool, )

Source

pub unsafe fn photoQualityPrioritization( &self, ) -> AVCapturePhotoQualityPrioritization

Indicates how photo quality should be prioritized against speed of photo delivery.

Default value is AVCapturePhotoQualityPrioritizationBalanced. The AVCapturePhotoOutput is capable of applying a variety of techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc), depending on the source device’s activeFormat. Some of these techniques can take significant processing time before the photo is returned to your delegate callback. The photoQualityPrioritization property allows you to specify your preferred quality vs speed of delivery. By default, speed and quality are considered to be of equal importance. When you specify AVCapturePhotoQualityPrioritizationSpeed, you indicate that speed should be prioritized at the expense of quality. Likewise, when you choose AVCapturePhotoQualityPrioritizationQuality, you signal your willingness to prioritize the very best quality at the expense of speed, and your readiness to wait (perhaps significantly) longer for the photo to be returned to your delegate.
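
For example, to favor quality over shot-to-shot speed (a sketch):

use objc2_av_foundation::{AVCapturePhotoQualityPrioritization, AVCapturePhotoSettings};

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Accept longer processing in exchange for the best achievable quality.
unsafe { settings.setPhotoQualityPrioritization(AVCapturePhotoQualityPrioritization::Quality) };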

Source

pub unsafe fn setPhotoQualityPrioritization( &self, photo_quality_prioritization: AVCapturePhotoQualityPrioritization, )

Source

pub unsafe fn isAutoStillImageStabilizationEnabled(&self) -> bool

👎Deprecated

Specifies whether still image stabilization should be used automatically.

Default is YES unless you are capturing a Bayer RAW photo (Bayer RAW photos may not be processed by definition) or a bracket using AVCapturePhotoBracketSettings. When set to YES, still image stabilization is applied automatically in low light to counteract hand shake. If the device has optical image stabilization, autoStillImageStabilizationEnabled makes use of lens stabilization as well.

As of iOS 13 hardware, the AVCapturePhotoOutput is capable of applying a variety of multi-image fusion techniques to improve photo quality (reduce noise, preserve detail in low light, freeze motion, etc), all of which have been previously lumped under the stillImageStabilization moniker. This property should no longer be used as it no longer provides meaningful information about the techniques used to improve quality in a photo capture. Instead, you should use -photoQualityPrioritization to indicate your preferred quality vs speed.

Source

pub unsafe fn setAutoStillImageStabilizationEnabled( &self, auto_still_image_stabilization_enabled: bool, )

👎Deprecated
Source

pub unsafe fn isAutoVirtualDeviceFusionEnabled(&self) -> bool

Specifies whether virtual device image fusion should be used automatically.

Default is YES unless you are capturing a RAW photo (RAW photos may not be processed by definition) or a bracket using AVCapturePhotoBracketSettings. When set to YES, and -[AVCapturePhotoOutput isVirtualDeviceFusionSupported] is also YES, constituent camera images of a virtual device may be fused to improve still image quality, depending on the current zoom factor, light levels, and focus position. You may determine whether virtual device fusion is enabled for a particular capture request by inspecting the virtualDeviceFusionEnabled property of the AVCaptureResolvedPhotoSettings. Note that when using the deprecated AVCaptureStillImageOutput interface with a virtual device, virtual device fusion is always enabled if supported, and may not be turned off.

Source

pub unsafe fn setAutoVirtualDeviceFusionEnabled( &self, auto_virtual_device_fusion_enabled: bool, )

Source

pub unsafe fn isAutoDualCameraFusionEnabled(&self) -> bool

👎Deprecated

Specifies whether DualCamera image fusion should be used automatically.

Default is YES unless you are capturing a RAW photo (RAW photos may not be processed by definition) or a bracket using AVCapturePhotoBracketSettings. When set to YES, and -[AVCapturePhotoOutput isDualCameraFusionSupported] is also YES, wide-angle and telephoto images may be fused to improve still image quality, depending on the current zoom factor, light levels, and focus position. You may determine whether DualCamera fusion is enabled for a particular capture request by inspecting the dualCameraFusionEnabled property of the AVCaptureResolvedPhotoSettings. Note that when using the deprecated AVCaptureStillImageOutput interface with the DualCamera, auto DualCamera fusion is always enabled and may not be turned off. As of iOS 13, this property is deprecated in favor of autoVirtualDeviceFusionEnabled.

Source

pub unsafe fn setAutoDualCameraFusionEnabled( &self, auto_dual_camera_fusion_enabled: bool, )

👎Deprecated
Source

pub unsafe fn virtualDeviceConstituentPhotoDeliveryEnabledDevices( &self, ) -> Retained<NSArray<AVCaptureDevice>>

Available on crate feature AVCaptureDevice only.

Specifies the constituent devices for which the virtual device should deliver photos.

Default is empty array. To opt in for constituent device photo delivery, you may set this property to any subset of 2 or more of the devices in virtualDevice.constituentDevices. Your captureOutput:didFinishProcessingPhoto:error: callback will be called n times – one for each of the devices you include in the array. You may only set this property to a non-nil array if you’ve set your AVCapturePhotoOutput’s virtualDeviceConstituentPhotoDeliveryEnabled property to YES, and your delegate responds to the captureOutput:didFinishProcessingPhoto:error: selector.

Source

pub unsafe fn setVirtualDeviceConstituentPhotoDeliveryEnabledDevices( &self, virtual_device_constituent_photo_delivery_enabled_devices: &NSArray<AVCaptureDevice>, )

Available on crate feature AVCaptureDevice only.
Source

pub unsafe fn isDualCameraDualPhotoDeliveryEnabled(&self) -> bool

👎Deprecated

Specifies whether the DualCamera should return both the telephoto and wide image.

Default is NO. When set to YES, your captureOutput:didFinishProcessingPhoto:error: callback will receive twice the number of callbacks, as both the telephoto image(s) and wide-angle image(s) are delivered. You may only set this property to YES if you’ve set your AVCapturePhotoOutput’s dualCameraDualPhotoDeliveryEnabled property to YES, and your delegate responds to the captureOutput:didFinishProcessingPhoto:error: selector. As of iOS 13, this property is deprecated in favor of virtualDeviceConstituentPhotoDeliveryEnabledDevices.

Source

pub unsafe fn setDualCameraDualPhotoDeliveryEnabled( &self, dual_camera_dual_photo_delivery_enabled: bool, )

👎Deprecated
Source

pub unsafe fn isHighResolutionPhotoEnabled(&self) -> bool

👎Deprecated: Use maxPhotoDimensions instead.

Specifies whether photos should be captured at the highest resolution supported by the source AVCaptureDevice’s activeFormat.

Default is NO. By default, AVCapturePhotoOutput emits images with the same dimensions as its source AVCaptureDevice’s activeFormat.formatDescription. However, if you set this property to YES, the AVCapturePhotoOutput emits images at its source AVCaptureDevice’s activeFormat.highResolutionStillImageDimensions. Note that if you enable video stabilization (see AVCaptureConnection’s preferredVideoStabilizationMode) for any output, the high resolution photos emitted by AVCapturePhotoOutput may be smaller by 10 or more percent. You may inspect your AVCaptureResolvedPhotoSettings in the delegate callbacks to discover the exact dimensions of the captured photo(s).

Starting in iOS 14.5, if you disable geometric distortion correction, the high resolution photo emitted by AVCapturePhotoOutput may be smaller, depending on the format.

Source

pub unsafe fn setHighResolutionPhotoEnabled( &self, high_resolution_photo_enabled: bool, )

👎Deprecated: Use maxPhotoDimensions instead.
Source

pub unsafe fn maxPhotoDimensions(&self) -> CMVideoDimensions

Available on crate feature objc2-core-media only.

Indicates the maximum resolution photo that will be captured.

By setting this property you are requesting an image that may be as large as the specified dimensions, but no larger. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the currently configured format and be equal to or smaller than the value of AVCapturePhotoOutput.maxPhotoDimensions. This property defaults to the smallest dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions.
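
A sketch (the 4032x3024 value is hypothetical and must match an entry in the active format’s supportedMaxPhotoDimensions; requires the objc2-core-media feature):

use objc2_av_foundation::AVCapturePhotoSettings;
use objc2_core_media::CMVideoDimensions;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Request up to 12 MP; must not exceed AVCapturePhotoOutput.maxPhotoDimensions.
unsafe { settings.setMaxPhotoDimensions(CMVideoDimensions { width: 4032, height: 3024 }) };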

Source

pub unsafe fn setMaxPhotoDimensions( &self, max_photo_dimensions: CMVideoDimensions, )

Available on crate feature objc2-core-media only.

Setter for maxPhotoDimensions.

Source

pub unsafe fn isDepthDataDeliveryEnabled(&self) -> bool

Specifies whether AVDepthData should be captured along with the photo.

Default is NO. Set to YES if you wish to receive depth data with your photo. Throws an exception if -[AVCapturePhotoOutput depthDataDeliveryEnabled] is not set to YES or your delegate does not respond to the captureOutput:didFinishProcessingPhoto:error: selector. Note that setting this property to YES may add significant processing time to the delivery of your didFinishProcessingPhoto: callback.

For best rendering results in Apple’s Photos.app, portrait photos should be captured with both embedded depth data and a portrait effects matte (see portraitEffectsMatteDeliveryEnabled). When supported, it is recommended to opt in for both of these auxiliary images in your photo captures involving depth.
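
A sketch of opting in to both auxiliary images (the photo output’s corresponding delivery-enabled properties must already be YES, and the delegate must respond to the required selector):

use objc2_av_foundation::AVCapturePhotoSettings;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Portrait effects matte delivery requires depth delivery to be enabled too.
unsafe { settings.setDepthDataDeliveryEnabled(true) };
unsafe { settings.setPortraitEffectsMatteDeliveryEnabled(true) };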

Source

pub unsafe fn setDepthDataDeliveryEnabled( &self, depth_data_delivery_enabled: bool, )

Source

pub unsafe fn embedsDepthDataInPhoto(&self) -> bool

Specifies whether depth data included with this photo should be written to the photo’s file structure.

Default is YES. When depthDataDeliveryEnabled is set to YES, this property specifies whether the included depth data should be written to the resulting photo’s internal file structure. Depth data is currently only supported in HEIF and JPEG. This property is ignored if depthDataDeliveryEnabled is set to NO.

Source

pub unsafe fn setEmbedsDepthDataInPhoto(&self, embeds_depth_data_in_photo: bool)

Source

pub unsafe fn isDepthDataFiltered(&self) -> bool

Specifies whether the depth data delivered with the photo should be filtered to fill invalid values.

Default is YES. This property is ignored unless depthDataDeliveryEnabled is set to YES. Depth data maps may contain invalid pixel values due to a variety of factors including occlusions and low light. When depthDataFiltered is set to YES, the photo output interpolates missing data, filling in all holes.

Source

pub unsafe fn setDepthDataFiltered(&self, depth_data_filtered: bool)

Setter for isDepthDataFiltered.

Source

pub unsafe fn isCameraCalibrationDataDeliveryEnabled(&self) -> bool

Specifies whether AVCameraCalibrationData should be captured and delivered along with this photo.

Default is NO. Set to YES if you wish to receive camera calibration data with your photo. Camera calibration data is delivered as a property of an AVCapturePhoto, so if you are using the CMSampleBuffer delegate callbacks rather than -captureOutput:didFinishProcessingPhoto:error:, an exception is thrown. Also, you may only set this property to YES if your AVCapturePhotoOutput’s cameraCalibrationDataDeliverySupported property is YES and 2 or more devices are selected for virtual device constituent photo delivery. When requesting virtual device constituent photo delivery plus camera calibration data, the photos for each constituent device each contain camera calibration data. Note that AVCameraCalibrationData can be delivered as a property of an AVCapturePhoto or an AVDepthData, thus your delegate must respond to the captureOutput:didFinishProcessingPhoto:error: selector.

Source

pub unsafe fn setCameraCalibrationDataDeliveryEnabled( &self, camera_calibration_data_delivery_enabled: bool, )

Source

pub unsafe fn isPortraitEffectsMatteDeliveryEnabled(&self) -> bool

Specifies whether an AVPortraitEffectsMatte should be captured along with the photo.

Default is NO. Set to YES if you wish to receive a portrait effects matte with your photo. Throws an exception if -[AVCapturePhotoOutput portraitEffectsMatteDeliveryEnabled] is not set to YES or your delegate does not respond to the captureOutput:didFinishProcessingPhoto:error: selector. Portrait effects matte generation requires depth to be present, so if you wish to enable portrait effects matte delivery, you must set depthDataDeliveryEnabled to YES. Setting this property to YES does not guarantee that a portrait effects matte will be present in the resulting AVCapturePhoto. As the property name implies, the matte is primarily used to improve the rendering quality of portrait effects on the image. If the photo’s content lacks a clear foreground subject, no portrait effects matte is generated, and the -[AVCapturePhoto portraitEffectsMatte] property returns nil. Note that setting this property to YES may add significant processing time to the delivery of your didFinishProcessingPhoto: callback.

For best rendering results in Apple’s Photos.app, portrait photos should be captured with both embedded depth data (see depthDataDeliveryEnabled) and a portrait effects matte. When supported, it is recommended to opt in for both of these auxiliary images in your photo captures involving depth.

Source

pub unsafe fn setPortraitEffectsMatteDeliveryEnabled( &self, portrait_effects_matte_delivery_enabled: bool, )

Source

pub unsafe fn embedsPortraitEffectsMatteInPhoto(&self) -> bool

Specifies whether the portrait effects matte captured with this photo should be written to the photo’s file structure.

Default is YES. When portraitEffectsMatteDeliveryEnabled is set to YES, this property specifies whether the included portrait effects matte should be written to the resulting photo’s internal file structure. Portrait effects mattes are currently only supported in HEIF and JPEG. This property is ignored if portraitEffectsMatteDeliveryEnabled is set to NO.

Source

pub unsafe fn setEmbedsPortraitEffectsMatteInPhoto( &self, embeds_portrait_effects_matte_in_photo: bool, )

Source

pub unsafe fn enabledSemanticSegmentationMatteTypes( &self, ) -> Retained<NSArray<AVSemanticSegmentationMatteType>>

Available on crate feature AVSemanticSegmentationMatte only.

Specifies which types of AVSemanticSegmentationMatte should be captured along with the photo.

Default is an empty array. You may set this property to an array of AVSemanticSegmentationMatteTypes you’d like to capture. Throws an exception if -[AVCapturePhotoOutput enabledSemanticSegmentationMatteTypes] does not contain any of the AVSemanticSegmentationMatteTypes specified. In other words, when setting up a capture session, you opt in for the superset of segmentation matte types you might like to receive, and then on a shot-by-shot basis, you may opt in to all or a subset of the previously specified types by setting this property. An exception is also thrown during -[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] if your delegate does not respond to the captureOutput:didFinishProcessingPhoto:error: selector. Requesting semantic segmentation mattes does not guarantee that the specified mattes will be present in the resulting AVCapturePhoto. If the photo’s content lacks any persons, for instance, no hair, skin, or teeth mattes are generated, and -[AVCapturePhoto semanticSegmentationMatteForType:] returns nil. Note that requesting semantic segmentation mattes may add significant processing time to the delivery of your didFinishProcessingPhoto: callback.
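
A sketch of a shot-by-shot opt-in (the same matte types must already be enabled on the photo output; the statics require unsafe access):

use objc2_av_foundation::{
    AVCapturePhotoSettings, AVSemanticSegmentationMatteTypeHair, AVSemanticSegmentationMatteTypeSkin,
};
use objc2_foundation::NSArray;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
let matte_types = unsafe { [AVSemanticSegmentationMatteTypeHair, AVSemanticSegmentationMatteTypeSkin] };
// Request only hair and skin mattes for this capture.
unsafe { settings.setEnabledSemanticSegmentationMatteTypes(&NSArray::from_slice(&matte_types)) };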

Source

pub unsafe fn setEnabledSemanticSegmentationMatteTypes( &self, enabled_semantic_segmentation_matte_types: &NSArray<AVSemanticSegmentationMatteType>, )

Available on crate feature AVSemanticSegmentationMatte only.
Source

pub unsafe fn embedsSemanticSegmentationMattesInPhoto(&self) -> bool

Specifies whether enabledSemanticSegmentationMatteTypes captured with this photo should be written to the photo’s file structure.

Default is YES. This property specifies whether the captured semantic segmentation mattes should be written to the resulting photo’s internal file structure. Semantic segmentation mattes are currently only supported in HEIF and JPEG. This property is ignored if enabledSemanticSegmentationMatteTypes is set to an empty array.

Source

pub unsafe fn setEmbedsSemanticSegmentationMattesInPhoto( &self, embeds_semantic_segmentation_mattes_in_photo: bool, )

Source

pub unsafe fn metadata(&self) -> Retained<NSDictionary<NSString, AnyObject>>

A dictionary of metadata key/value pairs you’d like to have written to each photo in the capture request.

Valid metadata keys are found in <ImageIO/CGImageProperties.h>. AVCapturePhotoOutput inserts a base set of metadata into each photo it captures, such as kCGImagePropertyOrientation, kCGImagePropertyExifDictionary, and kCGImagePropertyMakerAppleDictionary. You may specify metadata keys and values that should be written to each photo in the capture request. If you’ve specified metadata that also appears in AVCapturePhotoOutput’s base set, your value replaces the base value. An NSInvalidArgumentException is thrown if you specify keys other than those found in <ImageIO/CGImageProperties.h>.

Source

pub unsafe fn setMetadata(&self, metadata: &NSDictionary<NSString, AnyObject>)

Setter for metadata.

Source

pub unsafe fn livePhotoMovieFileURL(&self) -> Option<Retained<NSURL>>

Specifies that a Live Photo movie be captured to complement the still photo.

A Live Photo movie is a short movie (with audio, if you’ve added an audio input to your session) containing the moments right before and after the still photo. A QuickTime movie file will be written to disk at the URL specified if it is a valid file URL accessible to your app’s sandbox. You may only set this property if AVCapturePhotoOutput’s livePhotoCaptureSupported property is YES. When you specify a Live Photo, your AVCapturePhotoCaptureDelegate object must implement -captureOutput:didFinishProcessingLivePhotoToMovieFileAtURL:duration:photoDisplayTime:resolvedSettings:error:.
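
A sketch (the movie path is hypothetical; it must be a writable file URL inside your app’s sandbox):

use objc2_av_foundation::AVCapturePhotoSettings;
use objc2_foundation::{NSString, NSURL};

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
// Hypothetical destination for the Live Photo movie complement.
let url = unsafe { NSURL::fileURLWithPath(&NSString::from_str("/tmp/live.mov")) };
unsafe { settings.setLivePhotoMovieFileURL(Some(&url)) };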

Source

pub unsafe fn setLivePhotoMovieFileURL( &self, live_photo_movie_file_url: Option<&NSURL>, )

Source

pub unsafe fn livePhotoVideoCodecType(&self) -> Retained<AVVideoCodecType>

Available on crate feature AVVideoSettings only.

Specifies the video codec type to use when compressing video for the Live Photo movie complement.

Prior to iOS 11, all Live Photo movie video tracks are compressed using H.264. Beginning in iOS 11, you can select the Live Photo movie video compression format by specifying one of the strings present in AVCapturePhotoOutput’s availableLivePhotoVideoCodecTypes array.

Source

pub unsafe fn setLivePhotoVideoCodecType( &self, live_photo_video_codec_type: &AVVideoCodecType, )

Available on crate feature AVVideoSettings only.
Source

pub unsafe fn livePhotoMovieMetadata(&self) -> Retained<NSArray<AVMetadataItem>>

Available on crate feature AVMetadataItem only.

Movie-level metadata to be written to the Live Photo movie.

An array of AVMetadataItems to be inserted into the top level of the Live Photo movie. The receiver makes immutable copies of the AVMetadataItems in the array. Live Photo movies always contain an AVMetadataQuickTimeMetadataKeyContentIdentifier item, which allows them to be paired with a matching identifier in the MakerNote of the photo complement. AVCapturePhotoSettings generates a unique content identifier for you. If you provide a metadata array containing an AVMetadataItem with keyspace = AVMetadataKeySpaceQuickTimeMetadata and key = AVMetadataQuickTimeMetadataKeyContentIdentifier, an NSInvalidArgumentException is thrown.

Source

pub unsafe fn setLivePhotoMovieMetadata( &self, live_photo_movie_metadata: Option<&NSArray<AVMetadataItem>>, )

Available on crate feature AVMetadataItem only.
Source

pub unsafe fn availablePreviewPhotoPixelFormatTypes( &self, ) -> Retained<NSArray<NSNumber>>

An array of available kCVPixelBufferPixelFormatTypeKeys that may be used when specifying a previewPhotoFormat.

The array is sorted such that the preview format requiring the fewest conversions is presented first.

Source

pub unsafe fn previewPhotoFormat( &self, ) -> Option<Retained<NSDictionary<NSString, AnyObject>>>

A dictionary of Core Video pixel buffer attributes specifying the preview photo format to be delivered along with the RAW or processed photo.

A dictionary of pixel buffer attributes specifying a smaller version of the RAW or processed photo for preview purposes. The kCVPixelBufferPixelFormatTypeKey is required and must be present in the receiver’s -availablePreviewPhotoPixelFormatTypes array. Optional keys are { kCVPixelBufferWidthKey | kCVPixelBufferHeightKey }. If you wish to specify dimensions, you must add both width and height. Width and height are only honored up to the display dimensions. If you specify a width and height whose aspect ratio differs from the RAW or processed photo, the larger of the two dimensions is honored and aspect ratio of the RAW or processed photo is always preserved.

Source

pub unsafe fn setPreviewPhotoFormat( &self, preview_photo_format: Option<&NSDictionary<NSString, AnyObject>>, )

Setter for previewPhotoFormat.

Source

pub unsafe fn availableEmbeddedThumbnailPhotoCodecTypes( &self, ) -> Retained<NSArray<AVVideoCodecType>>

Available on crate feature AVVideoSettings only.

An array of available AVVideoCodecKeys that may be used when specifying an embeddedThumbnailPhotoFormat.

The array is sorted such that the thumbnail codec type that is most backward compatible is listed first.

Source

pub unsafe fn embeddedThumbnailPhotoFormat( &self, ) -> Option<Retained<NSDictionary<NSString, AnyObject>>>

A dictionary of AVVideoSettings keys specifying the thumbnail format to be written to the processed or RAW photo.

A dictionary of AVVideoSettings keys specifying a thumbnail (usually smaller) version of the processed photo to be embedded in that image before calling the AVCapturePhotoCaptureDelegate. This image is sometimes referred to as a “thumbnail image”. The AVVideoCodecKey is required and must be present in the receiver’s -availableEmbeddedThumbnailPhotoCodecTypes array. Optional keys are { AVVideoWidthKey | AVVideoHeightKey }. If you wish to specify dimensions, you must specify both width and height. If you specify a width and height whose aspect ratio differs from the processed photo, the larger of the two dimensions is honored and aspect ratio of the RAW or processed photo is always preserved. For RAW captures, use -rawEmbeddedThumbnailPhotoFormat to specify the thumbnail format you’d like to capture in the RAW image. For apps linked on or after iOS 12, the raw thumbnail format must be specified using the -rawEmbeddedThumbnailPhotoFormat API rather than -embeddedThumbnailPhotoFormat. Beginning in iOS 12, HEIC files may contain thumbnails up to the full resolution of the main image.
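
A sketch that embeds a JPEG thumbnail and lets the output choose its size (assuming NSDictionary::from_slices; the codec must appear in -availableEmbeddedThumbnailPhotoCodecTypes):

use objc2::runtime::AnyObject;
use objc2_av_foundation::{AVCapturePhotoSettings, AVVideoCodecKey, AVVideoCodecTypeJPEG};
use objc2_foundation::NSDictionary;

let settings = unsafe { AVCapturePhotoSettings::photoSettings() };
let codec: &AnyObject = unsafe { AVVideoCodecTypeJPEG }.as_ref();
// Omitting AVVideoWidthKey/AVVideoHeightKey lets the output pick dimensions.
let thumb = NSDictionary::from_slices(&[unsafe { AVVideoCodecKey }], &[codec]);
unsafe { settings.setEmbeddedThumbnailPhotoFormat(Some(&thumb)) };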

Source

pub unsafe fn setEmbeddedThumbnailPhotoFormat( &self, embedded_thumbnail_photo_format: Option<&NSDictionary<NSString, AnyObject>>, )

Source

pub unsafe fn availableRawEmbeddedThumbnailPhotoCodecTypes( &self, ) -> Retained<NSArray<AVVideoCodecType>>

Available on crate feature AVVideoSettings only.

An array of available AVVideoCodecKeys that may be used when specifying a rawEmbeddedThumbnailPhotoFormat.

The array is sorted such that the thumbnail codec type that is most backward compatible is listed first.

Source

pub unsafe fn rawEmbeddedThumbnailPhotoFormat( &self, ) -> Option<Retained<NSDictionary<NSString, AnyObject>>>

A dictionary of AVVideoSettings keys specifying the thumbnail format to be written to the RAW photo in a RAW photo request.

A dictionary of AVVideoSettings keys specifying a thumbnail (usually smaller) version of the RAW photo to be embedded in that image’s DNG before calling back the AVCapturePhotoCaptureDelegate. The AVVideoCodecKey is required and must be present in the receiver’s -availableRawEmbeddedThumbnailPhotoCodecTypes array. Optional keys are { AVVideoWidthKey | AVVideoHeightKey }. If you wish to specify dimensions, you must specify both width and height. If you specify a width and height whose aspect ratio differs from the RAW or processed photo, the larger of the two dimensions is honored and aspect ratio of the RAW or processed photo is always preserved. For apps linked on or after iOS 12, the raw thumbnail format must be specified using the -rawEmbeddedThumbnailPhotoFormat API rather than -embeddedThumbnailPhotoFormat. Beginning in iOS 12, DNG files may contain thumbnails up to the full resolution of the RAW image.

Source

pub unsafe fn setRawEmbeddedThumbnailPhotoFormat( &self, raw_embedded_thumbnail_photo_format: Option<&NSDictionary<NSString, AnyObject>>, )

Source

pub unsafe fn isAutoContentAwareDistortionCorrectionEnabled(&self) -> bool

Specifies whether the photo output should use content aware distortion correction on this photo request (at its discretion).

Default is NO. Set to YES if you wish content aware distortion correction to be performed on your AVCapturePhotos, when the photo output deems it necessary. Photos may or may not benefit from distortion correction. For instance, photos lacking faces may be left as is. Setting this property to YES does introduce a small additional amount of latency to the photo processing. You may check your AVCaptureResolvedPhotoSettings to see whether content aware distortion correction will be enabled for a given photo request. Throws an exception if -[AVCapturePhotoOutput contentAwareDistortionCorrectionEnabled] is not set to YES.

Source

pub unsafe fn setAutoContentAwareDistortionCorrectionEnabled( &self, auto_content_aware_distortion_correction_enabled: bool, )

Source

pub unsafe fn isConstantColorEnabled(&self) -> bool

Specifies whether the photo will be captured with constant color.

Default is NO. Set to YES if you wish to capture a constant color photo. Throws an exception if -[AVCapturePhotoOutput constantColorEnabled] is not set to YES.

Source

pub unsafe fn setConstantColorEnabled(&self, constant_color_enabled: bool)

Source

pub unsafe fn isConstantColorFallbackPhotoDeliveryEnabled(&self) -> bool

Specifies whether a fallback photo is delivered when taking a constant color capture.

Default is NO. Set to YES if you wish to receive a fallback photo that can be used in case the main constant color photo’s confidence level is too low for your use case.

Source

pub unsafe fn setConstantColorFallbackPhotoDeliveryEnabled( &self, constant_color_fallback_photo_delivery_enabled: bool, )

Source

pub unsafe fn isShutterSoundSuppressionEnabled(&self) -> bool

Specifies whether the built-in shutter sound should be suppressed when capturing a photo with these settings.

Default is NO. Set to YES if you wish to suppress AVCapturePhotoOutput’s built-in shutter sound for this request. AVCapturePhotoOutput throws an NSInvalidArgumentException in -capturePhotoWithSettings:delegate: if its shutterSoundSuppressionSupported property returns NO.

Source

pub unsafe fn setShutterSoundSuppressionEnabled( &self, shutter_sound_suppression_enabled: bool, )

Source§

impl AVCapturePhotoSettings

Methods declared on superclass NSObject.

Source

pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>

Source

pub unsafe fn new() -> Retained<Self>

Methods from Deref<Target = NSObject>§

Source

pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !

Handle messages the object doesn’t recognize.

See Apple’s documentation for details.

Methods from Deref<Target = AnyObject>§

Source

pub fn class(&self) -> &'static AnyClass

Dynamically find the class of this object.

§Example

Check that an instance of NSObject has the precise class NSObject.

use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Source

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where T: Encode,

👎Deprecated: this is difficult to use correctly, use Ivar::load instead.

Use Ivar::load instead.

§Safety

The object must have an instance variable with the given name, and it must be of type T.

See Ivar::load_ptr for details surrounding this.

Source

pub fn downcast_ref<T>(&self) -> Option<&T>
where T: DowncastTarget,

Attempt to downcast the object to a class of type T.

This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.

§Mutable classes

Some classes have immutable and mutable variants, such as NSString and NSMutableString.

When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.

So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.

See Apple’s documentation on mutability and on isKindOfClass: for more details.

§Generic classes

Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.

You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.

§Panics

This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.

§Examples

Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.

Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);

for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§

Source§

impl AsRef<AVCapturePhotoSettings> for AVCapturePhotoBracketSettings

Source§

fn as_ref(&self) -> &AVCapturePhotoSettings

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AVCapturePhotoSettings> for AVCapturePhotoSettings

Source§

fn as_ref(&self) -> &Self

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AnyObject> for AVCapturePhotoSettings

Source§

fn as_ref(&self) -> &AnyObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<NSObject> for AVCapturePhotoSettings

Source§

fn as_ref(&self) -> &NSObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl Borrow<AVCapturePhotoSettings> for AVCapturePhotoBracketSettings

Source§

fn borrow(&self) -> &AVCapturePhotoSettings

Immutably borrows from an owned value. Read more
Source§

impl Borrow<AnyObject> for AVCapturePhotoSettings

Source§

fn borrow(&self) -> &AnyObject

Immutably borrows from an owned value. Read more
Source§

impl Borrow<NSObject> for AVCapturePhotoSettings

Source§

fn borrow(&self) -> &NSObject

Immutably borrows from an owned value. Read more
Source§

impl ClassType for AVCapturePhotoSettings

Source§

const NAME: &'static str = "AVCapturePhotoSettings"

The name of the Objective-C class that this type represents. Read more
Source§

type Super = NSObject

The superclass of this class. Read more
Source§

type ThreadKind = <<AVCapturePhotoSettings as ClassType>::Super as ClassType>::ThreadKind

Whether the type can be used from any thread, or from only the main thread. Read more
Source§

fn class() -> &'static AnyClass

Get a reference to the Objective-C class that this type represents. Read more
Source§

fn as_super(&self) -> &Self::Super

Get an immutable reference to the superclass.
Source§

impl CopyingHelper for AVCapturePhotoSettings

Source§

type Result = AVCapturePhotoSettings

The immutable counterpart of the type, or Self if the type has no immutable counterpart. Read more
Source§

impl Debug for AVCapturePhotoSettings

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Deref for AVCapturePhotoSettings

Source§

type Target = NSObject

The resulting type after dereferencing.
Source§

fn deref(&self) -> &Self::Target

Dereferences the value.
Source§

impl Hash for AVCapturePhotoSettings

Source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl Message for AVCapturePhotoSettings

Source§

fn retain(&self) -> Retained<Self>
where Self: Sized,

Increment the reference count of the receiver. Read more
Source§

impl NSCopying for AVCapturePhotoSettings

Source§

fn copy(&self) -> Retained<Self::Result>
where Self: Sized + Message + CopyingHelper,

Returns a new instance that’s a copy of the receiver. Read more
Source§

unsafe fn copyWithZone(&self, zone: *mut NSZone) -> Retained<Self::Result>
where Self: Sized + Message + CopyingHelper,

Returns a new instance that’s a copy of the receiver. Read more
Source§

impl NSObjectProtocol for AVCapturePhotoSettings

Source§

fn isEqual(&self, other: Option<&AnyObject>) -> bool
where Self: Sized + Message,

Check whether the object is equal to an arbitrary other object. Read more
Source§

fn hash(&self) -> usize
where Self: Sized + Message,

An integer that can be used as a table address in a hash table structure. Read more
Source§

fn isKindOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of the class, or one of its subclasses. Read more
Source§

fn is_kind_of<T>(&self) -> bool
where T: ClassType, Self: Sized + Message,

👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref
Check if the object is an instance of the class type, or one of its subclasses. Read more
Source§

fn isMemberOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of a specific class, without checking subclasses. Read more
Source§

fn respondsToSelector(&self, aSelector: Sel) -> bool
where Self: Sized + Message,

Check whether the object implements or inherits a method with the given selector. Read more
Source§

fn conformsToProtocol(&self, aProtocol: &AnyProtocol) -> bool
where Self: Sized + Message,

Check whether the object conforms to a given protocol. Read more
Source§

fn description(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object. Read more
Source§

fn debugDescription(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object to use when debugging. Read more
Source§

fn isProxy(&self) -> bool
where Self: Sized + Message,

Check whether the receiver is a subclass of the NSProxy root class instead of the usual NSObject. Read more
Source§

fn retainCount(&self) -> usize
where Self: Sized + Message,

The reference count of the object. Read more
Source§

impl PartialEq for AVCapturePhotoSettings

Source§

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl RefEncode for AVCapturePhotoSettings

Source§

const ENCODING_REF: Encoding = <NSObject as ::objc2::RefEncode>::ENCODING_REF

The Objective-C type-encoding for a reference of this type. Read more
Source§

impl DowncastTarget for AVCapturePhotoSettings

Source§

impl Eq for AVCapturePhotoSettings

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<'a, T> AllocAnyThread for T
where T: ClassType<ThreadKind = dyn AllocAnyThread + 'a> + ?Sized,

Source§

fn alloc() -> Allocated<Self>
where Self: Sized + ClassType,

Allocate a new instance of the class. Read more
Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,

Source§

type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<T> AutoreleaseSafe for T
where T: ?Sized,