pub struct AVVideoComposition { /* private fields */ }
Available on crate feature AVVideoComposition only.
An AVVideoComposition object represents an immutable video composition.
A video composition describes, for any time in the aggregate time range of its instructions, the number and IDs of video tracks that are to be used in order to produce a composed video frame corresponding to that time. When AVFoundation’s built-in video compositor is used, the instructions an AVVideoComposition contains can specify a spatial transformation, an opacity value, and a cropping rectangle for each video source, and these can vary over time via simple linear ramping functions.
A client can implement their own custom video compositor by implementing the AVVideoCompositing protocol; a custom video compositor is provided with pixel buffers for each of its video sources during playback and other operations and can perform arbitrary graphical operations on them in order to produce visual output.
Subclasses of this type that are used from Swift must fulfill the requirements of a Sendable type.
See also Apple’s documentation
Implementations
impl AVVideoComposition
pub unsafe fn videoCompositionWithPropertiesOfAsset(
    asset: &AVAsset,
) -> Retained<AVVideoComposition>
Deprecated: Use videoCompositionWithPropertiesOfAsset:completionHandler: instead.
Available on crate feature AVAsset only.
Returns a new instance of AVVideoComposition with values and instructions suitable for presenting the video tracks of the specified asset according to its temporal and geometric properties and those of its tracks.
The returned AVVideoComposition will have instructions that respect the spatial properties and timeRanges of the specified asset’s video tracks. It will also have the following values for its properties:
- If the asset has exactly one video track, the original timing of the source video track will be used. If the asset has more than one video track, and the nominal frame rate of any of its video tracks is known, the reciprocal of the greatest known nominalFrameRate will be used as the value of frameDuration. Otherwise, a default frame rate of 30 fps is used.
- If the specified asset is an instance of AVComposition, the renderSize will be set to the naturalSize of the AVComposition; otherwise the renderSize will be set to a value that encompasses all of the asset’s video tracks.
- A renderScale of 1.0.
- A nil animationTool.
If the specified asset has no video tracks, this method will return an AVVideoComposition instance with an empty collection of instructions.
- Parameter asset: An instance of AVAsset. Ensure that the duration and tracks properties of the asset are already loaded before invoking this method.
- Returns: An instance of AVVideoComposition.
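The frame-duration selection rule described above can be sketched in plain Rust; the helper name and the use of `f64` seconds and `Option<f64>` frame rates in place of CMTime are assumptions for illustration:

```rust
// Sketch of the documented frameDuration selection: with several video
// tracks, use the reciprocal of the greatest known nominalFrameRate;
// if no rate is known, fall back to a 30 fps default.
fn frame_duration_seconds(nominal_frame_rates: &[Option<f64>]) -> f64 {
    let greatest_known = nominal_frame_rates
        .iter()
        .filter_map(|rate| *rate)
        .reduce(f64::max);
    match greatest_known {
        Some(rate) if rate > 0.0 => 1.0 / rate,
        _ => 1.0 / 30.0, // default frame rate of 30 fps
    }
}
```

The single-track case, where the source track's original timing is preserved, is not modeled here.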
pub unsafe fn videoCompositionWithPropertiesOfAsset_completionHandler(
    asset: &AVAsset,
    completion_handler: &DynBlock<dyn Fn(*mut AVVideoComposition, *mut NSError)>,
)
Available on crate features AVAsset and block2 only.
Vends a new instance of AVVideoComposition with values and instructions suitable for presenting the video tracks of the specified asset according to its temporal and geometric properties and those of its tracks.
The new AVVideoComposition will have instructions that respect the spatial properties and timeRanges of the specified asset’s video tracks. It will also have the following values for its properties:
- If the asset has exactly one video track, the original timing of the source video track will be used. If the asset has more than one video track, and the nominal frame rate of any of its video tracks is known, the reciprocal of the greatest known nominalFrameRate will be used as the value of frameDuration. Otherwise, a default frame rate of 30 fps is used.
- If the specified asset is an instance of AVComposition, the renderSize will be set to the naturalSize of the AVComposition; otherwise the renderSize will be set to a value that encompasses all of the asset’s video tracks.
- A renderScale of 1.0.
- A nil animationTool.
If the specified asset has no video tracks, this method will return an AVVideoComposition instance with an empty collection of instructions.
- Parameter asset: An instance of AVAsset.
- Parameter completionHandler: A block that is invoked when the new video composition has finished being created. If the videoComposition parameter is nil, the error parameter describes the failure that occurred.
Safety
completion_handler block must be sendable.
pub unsafe fn videoCompositionWithVideoComposition(
    video_composition: &AVVideoComposition,
) -> Retained<AVVideoComposition>
Pass-through initializer, for internal use in AVFoundation only.
pub unsafe fn customVideoCompositorClass(&self) -> Option<&'static AnyClass>
Available on crate feature AVVideoCompositing only.
Indicates a custom compositor class to use. The class must implement the AVVideoCompositing protocol. If nil, the default, internal video compositor is used.
pub unsafe fn frameDuration(&self) -> CMTime
Available on crate feature objc2-core-media only.
Indicates the interval at which the video composition, when enabled, should render composed video frames.
pub unsafe fn sourceTrackIDForFrameTiming(&self) -> CMPersistentTrackID
Available on crate feature objc2-core-media only.
If sourceTrackIDForFrameTiming is not kCMPersistentTrackID_Invalid, frame timing for the video composition is derived from the source asset’s track with the corresponding ID. This may be used to preserve a source asset’s variable frame timing. If an empty edit is encountered in the source asset’s track, the compositor composes frames as needed up to the frequency specified in the frameDuration property.
pub unsafe fn renderSize(&self) -> CGSize
Available on crate feature objc2-core-foundation only.
Indicates the size at which the video composition, when enabled, should render.
pub unsafe fn renderScale(&self) -> c_float
Indicates the scale at which the video composition should render. May only be other than 1.0 for a video composition set on an AVPlayerItem.
pub unsafe fn instructions(
    &self,
) -> Retained<NSArray<ProtocolObject<dyn AVVideoCompositionInstructionProtocol>>>
Available on crate feature AVVideoCompositing only.
Indicates instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction’s end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated).
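The timeRange requirements above can be sketched as a standalone check; representing CMTimeRange as an `(f64, f64)` pair of start and end seconds is an assumption for illustration:

```rust
// The documented requirements for the instructions array:
// - the first instruction starts at or before the earliest time processed,
// - each subsequent instruction starts exactly where the prior one ends,
// - the last instruction ends at or after the latest time processed.
fn instructions_cover(ranges: &[(f64, f64)], earliest: f64, latest: f64) -> bool {
    let Some(&(first_start, _)) = ranges.first() else {
        return false;
    };
    let Some(&(_, last_end)) = ranges.last() else {
        return false;
    };
    // Each instruction's start must equal the prior instruction's end.
    let contiguous = ranges.windows(2).all(|w| w[0].1 == w[1].0);
    first_start <= earliest && contiguous && last_end >= latest
}
```

This is the same contiguity condition the validation methods further down this page check for real instruction arrays.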
pub unsafe fn animationTool(
    &self,
) -> Option<Retained<AVVideoCompositionCoreAnimationTool>>
Indicates a special video composition tool for use of Core Animation; may be nil.
pub unsafe fn sourceSampleDataTrackIDs(&self) -> Retained<NSArray<NSNumber>>
List of all track IDs for tracks from which sample data should be presented to the compositor at any point in the overall composition. The sample data will be delivered to the custom compositor via AVAsynchronousVideoCompositionRequest.
pub unsafe fn outputBufferDescription(&self) -> Option<Retained<NSArray>>
The output buffers of the video composition can be specified with the outputBufferDescription. The value is an array of CMTagCollectionRef objects that describes the output buffers.
If the video composition will output tagged buffers, the details of those buffers should be specified with CMTags. Specifically, the StereoView (eyes) and ProjectionKind must be specified. The behavior is undefined if the output tagged buffers do not match the outputBufferDescription. The default is nil, which means monoscopic output. Note that an empty array is not valid. An exception will be thrown if the objects in the array are not of type CMTagCollectionRef. Note that tagged buffers are only supported for custom compositors.
pub unsafe fn spatialVideoConfigurations(
    &self,
) -> Retained<NSArray<AVSpatialVideoConfiguration>>
Available on crate feature AVSpatialVideoConfiguration only.
Indicates the spatial configurations that are available to associate with the output of the video composition.
A custom compositor can output spatial video by specifying one of these spatial configurations. A nil spatial configuration, or one whose values are all nil, indicates the output is not spatial. NOTE: If this property is not empty, then the client must attach one of the spatial configurations in this array to all of the pixel buffers, otherwise an exception will be thrown.
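The attachment rule above can be sketched as a plain-Rust check; modeling configurations and buffer attachments as strings (rather than AVSpatialVideoConfiguration objects and pixel buffers) is an assumption for illustration:

```rust
// When spatialVideoConfigurations is non-empty, every output pixel buffer
// must carry one of the listed configurations; an empty list imposes no
// requirement. Buffers are modeled as an optional attached tag.
fn buffers_satisfy_spatial_rule(
    configurations: &[&str],
    buffer_tags: &[Option<&str>],
) -> bool {
    if configurations.is_empty() {
        return true; // no spatial requirement
    }
    buffer_tags
        .iter()
        .all(|tag| matches!(tag, Some(t) if configurations.contains(t)))
}
```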
impl AVVideoComposition
Methods declared on superclass NSObject.
impl AVVideoComposition
AVVideoCompositionColorimetery.
Indicates the color space of the frames output from the video composition.
Collectively the properties colorPrimaries, colorYCbCrMatrix, and colorTransferFunction define the color space that the rendered frames will be tagged with. For custom video compositing these properties are also used to specify the required color space of the source frames.
For examples of common color spaces see AVVideoSettings.h.
How to preserve the color space of the source frames:
Decide which color space is to be preserved by examining the source asset’s video tracks. Copy the source track’s primaries, matrix and transfer function into the video composition’s colorPrimaries, colorYCbCrMatrix and colorTransferFunction respectively.
- When using custom video compositing: setting these properties will cause source frames to be converted into the specified color space and tagged as such. New frames allocated using -[AVVideoCompositionRenderContext newPixelBuffer] will also be tagged correctly.
- When using Core Image via videoCompositionWithAsset:options:applyingCIFiltersWithHandler:: setting these properties will cause source frames to be converted into the specified color space and tagged as such. The source frames provided as CIImages will have the appropriate CGColorSpace applied. The color space is preserved when the output CIImage is finally rendered internally.
- When using basic compositing (i.e. AVVideoCompositionLayerInstruction): setting these properties will ensure that the internal compositor renders (or passes through) frames in the specified color space and tags them as such.
pub unsafe fn colorPrimaries(&self) -> Option<Retained<NSString>>
Rendering will use these primaries and frames will be tagged as such. If the value of this property is nil then the source’s primaries will be propagated and used.
Default is nil. Valid values are those suitable for AVVideoColorPrimariesKey. Generally set as a triple along with colorYCbCrMatrix and colorTransferFunction.
pub unsafe fn colorYCbCrMatrix(&self) -> Option<Retained<NSString>>
Rendering will use this matrix and frames will be tagged as such. If the value of this property is nil then the source’s matrix will be propagated and used.
Default is nil. Valid values are those suitable for AVVideoYCbCrMatrixKey. Generally set as a triple along with colorPrimaries and colorTransferFunction.
pub unsafe fn colorTransferFunction(&self) -> Option<Retained<NSString>>
Rendering will use this transfer function and frames will be tagged as such. If the value of this property is nil then the source’s transfer function will be propagated and used.
Default is nil. Valid values are those suitable for AVVideoTransferFunctionKey. Generally set as a triple along with colorPrimaries and colorYCbCrMatrix.
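The nil-propagation rule shared by colorPrimaries, colorYCbCrMatrix and colorTransferFunction can be sketched as follows; the function name and the example string values are illustrative assumptions, not actual AVVideoSettings constants:

```rust
// A non-nil value set on the video composition is used for rendering and
// tagging; a nil (None) value means the source track's value is propagated.
fn effective_color_value<'a>(
    composition_value: Option<&'a str>,
    source_value: &'a str,
) -> &'a str {
    composition_value.unwrap_or(source_value)
}
```

The same resolution applies independently to each of the three properties in the triple.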
pub unsafe fn perFrameHDRDisplayMetadataPolicy(
    &self,
) -> Retained<AVVideoCompositionPerFrameHDRDisplayMetadataPolicy>
Configures the policy for per-frame HDR display metadata on the rendered frame.
Allows the system to identify situations where HDR metadata can be generated and attached to the rendered video frame. Default is AVVideoCompositionPerFrameHDRDisplayMetadataPolicyPropagate. Any HDR metadata attached to the composed frame will be propagated to the rendered video frames.
impl AVVideoComposition
AVVideoCompositionFiltering.
pub unsafe fn videoCompositionWithAsset_applyingCIFiltersWithHandler(
    asset: &AVAsset,
    applier: &DynBlock<dyn Fn(NonNull<AVAsynchronousCIImageFilteringRequest>)>,
) -> Retained<AVVideoComposition>
Deprecated: Use videoCompositionWithAsset:applyingCIFiltersWithHandler:completionHandler: instead.
Available on crate features AVAsset, AVVideoCompositing and block2 only.
Returns a new instance of AVVideoComposition with values and instructions that will apply the specified handler block to video frames represented as instances of CIImage.
The returned AVVideoComposition will cause the specified handler block to be called to filter each frame of the asset’s first enabled video track. The handler block should use the properties of the provided AVAsynchronousCIImageFilteringRequest and respond using finishWithImage:context: with a “filtered” new CIImage (or the provided source image for no effect). In the event of an error, respond to the request using finishWithError:. The error can be observed via AVPlayerItemFailedToPlayToEndTimeNotification; see AVPlayerItemFailedToPlayToEndTimeErrorKey in the notification payload.
NOTE: The returned AVVideoComposition’s properties are private and support only CIFilter-based operations. Mutations are not supported, either in the values of properties of the AVVideoComposition itself or in its private instructions. If rotations or other transformations are desired, they must be accomplished via the application of CIFilters during the execution of your specified handler.
The video composition will also have the following values for its properties:
- The original timing of the asset’s first enabled video track will be used.
- A renderSize that encompasses the asset’s first enabled video track respecting the track’s preferredTransform.
- A renderScale of 1.0.
The default CIContext has the following properties:
- iOS: Device RGB color space
- macOS: sRGB color space
Example usage:

playerItem.videoComposition = [AVVideoComposition videoCompositionWithAsset:srcAsset applyingCIFiltersWithHandler:
    ^(AVAsynchronousCIImageFilteringRequest *request)
    {
        NSError *err = nil;
        CIImage *filtered = myRenderer(request, &err);
        if (filtered)
            [request finishWithImage:filtered context:nil];
        else
            [request finishWithError:err];
    }];

- Parameter asset: An instance of AVAsset. For best performance, ensure that the duration and tracks properties of the asset are already loaded before invoking this method.
- Returns: An instance of AVVideoComposition.
Safety
applier block must be sendable.
pub unsafe fn videoCompositionWithAsset_applyingCIFiltersWithHandler_completionHandler(
    asset: &AVAsset,
    applier: &DynBlock<dyn Fn(NonNull<AVAsynchronousCIImageFilteringRequest>)>,
    completion_handler: &DynBlock<dyn Fn(*mut AVVideoComposition, *mut NSError)>,
)
Available on crate features AVAsset, AVVideoCompositing and block2 only.
Vends a new instance of AVVideoComposition with values and instructions that will apply the specified handler block to video frames represented as instances of CIImage.
The new AVVideoComposition will cause the specified handler block to be called to filter each frame of the asset’s first enabled video track. The handler block should use the properties of the provided AVAsynchronousCIImageFilteringRequest and respond using finishWithImage:context: with a “filtered” new CIImage (or the provided source image for no effect). In the event of an error, respond to the request using finishWithError:. The error can be observed via AVPlayerItemFailedToPlayToEndTimeNotification; see AVPlayerItemFailedToPlayToEndTimeErrorKey in the notification payload.
NOTE: The returned AVVideoComposition’s properties are private and support only CIFilter-based operations. Mutations are not supported, either in the values of properties of the AVVideoComposition itself or in its private instructions. If rotations or other transformations are desired, they must be accomplished via the application of CIFilters during the execution of your specified handler.
The video composition will also have the following values for its properties:
- The original timing of the asset’s first enabled video track will be used.
- A renderSize that encompasses the asset’s first enabled video track respecting the track’s preferredTransform.
- A renderScale of 1.0.
The default CIContext has the following properties:
- iOS: Device RGB color space
- macOS: sRGB color space
Example usage:

[AVVideoComposition videoCompositionWithAsset:srcAsset applyingCIFiltersWithHandler:
    ^(AVAsynchronousCIImageFilteringRequest *request)
    {
        NSError *err = nil;
        CIImage *filtered = myRenderer(request, &err);
        if (filtered)
            [request finishWithImage:filtered context:nil];
        else
            [request finishWithError:err];
    } completionHandler:
    ^(AVVideoComposition * _Nullable videoComposition, NSError * _Nullable error)
    {
        if (videoComposition != nil) {
            playerItem.videoComposition = videoComposition;
        } else {
            // handle error
        }
    }];

- Parameter asset: An instance of AVAsset.
- Parameter completionHandler: A block that is invoked when the new video composition has finished being created. If the videoComposition parameter is nil, the error parameter describes the failure that occurred.
Safety
applier block must be sendable. completion_handler block must be sendable.
impl AVVideoComposition
AVVideoCompositionValidation.
pub unsafe fn isValidForAsset_timeRange_validationDelegate(
    &self,
    asset: Option<&AVAsset>,
    time_range: CMTimeRange,
    validation_delegate: Option<&ProtocolObject<dyn AVVideoCompositionValidationHandling>>,
) -> bool
Deprecated: Use isValidForTracks:assetDuration:timeRange:validationDelegate: instead.
Available on crate features AVAsset and objc2-core-media only.
Indicates whether the timeRanges of the receiver’s instructions conform to the requirements described for them immediately above (in connection with the instructions property) and also whether all of the layer instructions have a value for trackID that corresponds either to a track of the specified asset or to the receiver’s animationTool.
In the course of validation, the receiver will invoke its validationDelegate with reference to any trouble spots in the video composition. An exception will be raised if the delegate modifies the receiver’s array of instructions or the array of layerInstructions of any AVVideoCompositionInstruction contained therein during validation.
- Parameter asset: Pass a reference to an AVAsset if you wish to validate the timeRanges of the instructions against the duration of the asset and the trackIDs of the layer instructions against the asset’s tracks. Pass nil to skip that validation. Clients should ensure that the keys @“tracks” and @“duration” are already loaded on the AVAsset before validation is attempted.
- Parameter timeRange: A CMTimeRange. Only those instructions with timeRanges that overlap with the specified timeRange will be validated. To validate all instructions that may be used for playback or other processing, regardless of timeRange, pass CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity).
- Parameter validationDelegate: Indicates an object implementing the AVVideoCompositionValidationHandling protocol to receive information about troublesome portions of a video composition during processing of -isValidForAsset:. May be nil.
pub unsafe fn determineValidityForAsset_timeRange_validationDelegate_completionHandler(
    &self,
    asset: Option<&AVAsset>,
    time_range: CMTimeRange,
    validation_delegate: Option<&ProtocolObject<dyn AVVideoCompositionValidationHandling>>,
    completion_handler: &DynBlock<dyn Fn(Bool, *mut NSError)>,
)
Deprecated.
Available on crate features AVAsset, block2 and objc2-core-media only.
Determines whether the timeRanges of the receiver’s instructions conform to the requirements described for them immediately above (in connection with the instructions property) and also whether all of the layer instructions have a value for trackID that corresponds either to a track of the specified asset or to the receiver’s animationTool.
In the course of validation, the receiver will invoke its validationDelegate with reference to any trouble spots in the video composition. An exception will be raised if the delegate modifies the receiver’s array of instructions or the array of layerInstructions of any AVVideoCompositionInstruction contained therein during validation.
- Parameter asset: Pass a reference to an AVAsset if you wish to validate the timeRanges of the instructions against the duration of the asset and the trackIDs of the layer instructions against the asset’s tracks. Pass nil to skip that validation.
- Parameter timeRange: A CMTimeRange. Only those instructions with timeRanges that overlap with the specified timeRange will be validated. To validate all instructions that may be used for playback or other processing, regardless of timeRange, pass CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity).
- Parameter validationDelegate: Indicates an object implementing the AVVideoCompositionValidationHandling protocol to receive information about troublesome portions of a video composition during processing of -determineValidityForAsset:. May be nil.
- Parameter completionHandler: A block that is invoked when a determination is made about whether the video composition is valid. If the isValid parameter is NO, either the video composition is not valid, in which case the error parameter will be nil, or the answer could not be determined, in which case the error parameter will be non-nil and will describe the failure that occurred.
Safety
completion_handler block must be sendable.
pub unsafe fn isValidForTracks_assetDuration_timeRange_validationDelegate(
    &self,
    tracks: &NSArray<AVAssetTrack>,
    duration: CMTime,
    time_range: CMTimeRange,
    validation_delegate: Option<&ProtocolObject<dyn AVVideoCompositionValidationHandling>>,
) -> bool
Available on crate features AVAssetTrack and objc2-core-media only.
Indicates whether the timeRanges of the receiver’s instructions conform to the requirements described for them immediately above (in connection with the instructions property) and also whether all of the layer instructions have a value for trackID that corresponds either to a track of the specified asset or to the receiver’s animationTool.
In the course of validation, the receiver will invoke its validationDelegate with reference to any trouble spots in the video composition. An exception will be raised if the delegate modifies the receiver’s array of instructions or the array of layerInstructions of any AVVideoCompositionInstruction contained therein during validation.
- Parameter tracks: Pass a reference to an AVAsset’s tracks if you wish to validate the trackIDs of the layer instructions against the asset’s tracks. Pass nil to skip that validation. This method throws an exception if the tracks are not all from the same asset.
- Parameter duration: Pass the duration of an AVAsset if you wish to validate the timeRanges of the instructions against the duration of the asset. Pass kCMTimeInvalid to skip that validation.
- Parameter timeRange: A CMTimeRange. Only those instructions with timeRanges that overlap with the specified timeRange will be validated. To validate all instructions that may be used for playback or other processing, regardless of timeRange, pass CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity).
- Parameter validationDelegate: Indicates an object implementing the AVVideoCompositionValidationHandling protocol to receive information about troublesome portions of a video composition during processing of -isValidForAsset:. May be nil.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall. Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations
impl AsRef<AVVideoComposition> for AVMutableVideoComposition
fn as_ref(&self) -> &AVVideoComposition
impl AsRef<AVVideoComposition> for AVVideoComposition
impl AsRef<AnyObject> for AVVideoComposition
impl AsRef<NSObject> for AVVideoComposition
impl Borrow<AVVideoComposition> for AVMutableVideoComposition
fn borrow(&self) -> &AVVideoComposition
impl Borrow<AnyObject> for AVVideoComposition
impl Borrow<NSObject> for AVVideoComposition
impl ClassType for AVVideoComposition
const NAME: &'static str = "AVVideoComposition"
type ThreadKind = <<AVVideoComposition as ClassType>::Super as ClassType>::ThreadKind
impl CopyingHelper for AVVideoComposition
type Result = AVVideoComposition
The immutable counterpart of the type, or Self if the type has no immutable counterpart.
impl Debug for AVVideoComposition
impl Deref for AVVideoComposition
impl Hash for AVVideoComposition
impl Message for AVVideoComposition
impl MutableCopyingHelper for AVVideoComposition
type Result = AVMutableVideoComposition
The mutable counterpart of the type, or Self if the type has no mutable counterpart.
impl NSCopying for AVVideoComposition
impl NSMutableCopying for AVVideoComposition
impl NSObjectProtocol for AVVideoComposition
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
Use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.