AVVideoComposition

Struct AVVideoComposition 

Source
pub struct AVVideoComposition { /* private fields */ }
Available on crate feature AVVideoComposition only.

An AVVideoComposition object represents an immutable video composition.

A video composition describes, for any time in the aggregate time range of its instructions, the number and IDs of the video tracks that are to be used to produce a composed video frame corresponding to that time. When AVFoundation’s built-in video compositor is used, the instructions an AVVideoComposition contains can specify a spatial transformation, an opacity value, and a cropping rectangle for each video source, and these can vary over time via simple linear ramping functions.

A client can implement their own custom video compositor by implementing the AVVideoCompositing protocol; a custom video compositor is provided with pixel buffers for each of its video sources during playback and other operations and can perform arbitrary graphical operations on them in order to produce visual output.

Subclasses of this type that are used from Swift must fulfill the requirements of a Sendable type.

See also Apple’s documentation

Implementations§

Source§

impl AVVideoComposition

Source

pub unsafe fn videoCompositionWithPropertiesOfAsset( asset: &AVAsset, ) -> Retained<AVVideoComposition>

👎Deprecated: Use videoCompositionWithPropertiesOfAsset:completionHandler: instead
Available on crate feature AVAsset only.

Returns a new instance of AVVideoComposition with values and instructions suitable for presenting the video tracks of the specified asset according to its temporal and geometric properties and those of its tracks.

The returned AVVideoComposition will have instructions that respect the spatial properties and timeRanges of the specified asset’s video tracks. It will also have the following values for its properties:

  • If the asset has exactly one video track, the original timing of the source video track will be used. If the asset has more than one video track, and the nominal frame rate of any of its video tracks is known, the reciprocal of the greatest known nominalFrameRate will be used as the value of frameDuration. Otherwise, a default frame rate of 30 fps is used.
  • If the specified asset is an instance of AVComposition, the renderSize will be set to the naturalSize of the AVComposition; otherwise the renderSize will be set to a value that encompasses all of the asset’s video tracks.
  • A renderScale of 1.0.
  • A nil animationTool.

If the specified asset has no video tracks, this method will return an AVVideoComposition instance with an empty collection of instructions.

  • Parameter asset: An instance of AVAsset. Ensure that the duration and tracks properties of the asset are already loaded before invoking this method.

  • Returns: An instance of AVVideoComposition.
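The frame-duration rule described above can be sketched in plain Rust. This is a hypothetical helper, not part of this crate; `frame_rates` stands in for the known `nominalFrameRate` values of the asset's video tracks:

```rust
/// Sketch of the documented frame-duration rule (hypothetical helper,
/// not part of this crate).
fn derived_frame_duration(frame_rates: &[f32]) -> f64 {
    // The reciprocal of the greatest known nominal frame rate is used;
    // if no frame rate is known, a default of 30 fps applies.
    let max_rate = frame_rates
        .iter()
        .copied()
        .filter(|r| *r > 0.0)
        .fold(0.0_f32, f32::max);
    if max_rate > 0.0 {
        1.0 / max_rate as f64
    } else {
        1.0 / 30.0
    }
}

fn main() {
    // Tracks at 24 and 60 fps: frameDuration becomes 1/60 s.
    assert_eq!(derived_frame_duration(&[24.0, 60.0]), 1.0 / 60.0);
    // No known frame rates: the 30 fps default applies.
    assert_eq!(derived_frame_duration(&[]), 1.0 / 30.0);
    println!("ok");
}
```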

Source

pub unsafe fn videoCompositionWithPropertiesOfAsset_completionHandler( asset: &AVAsset, completion_handler: &DynBlock<dyn Fn(*mut AVVideoComposition, *mut NSError)>, )

Available on crate features AVAsset and block2 only.

Vends a new instance of AVVideoComposition with values and instructions suitable for presenting the video tracks of the specified asset according to its temporal and geometric properties and those of its tracks.

The new AVVideoComposition will have instructions that respect the spatial properties and timeRanges of the specified asset’s video tracks. It will also have the following values for its properties:

  • If the asset has exactly one video track, the original timing of the source video track will be used. If the asset has more than one video track, and the nominal frame rate of any of its video tracks is known, the reciprocal of the greatest known nominalFrameRate will be used as the value of frameDuration. Otherwise, a default frame rate of 30 fps is used.
  • If the specified asset is an instance of AVComposition, the renderSize will be set to the naturalSize of the AVComposition; otherwise the renderSize will be set to a value that encompasses all of the asset’s video tracks.
  • A renderScale of 1.0.
  • A nil animationTool.

If the specified asset has no video tracks, this method will return an AVVideoComposition instance with an empty collection of instructions.

  • Parameter asset: An instance of AVAsset.
  • Parameter completionHandler: A block that is invoked when the new video composition has finished being created. If the videoComposition parameter is nil, the error parameter describes the failure that occurred.
§Safety

completion_handler block must be sendable.

Source

pub unsafe fn videoCompositionWithVideoComposition( video_composition: &AVVideoComposition, ) -> Retained<AVVideoComposition>

Pass-through initializer, for internal use in AVFoundation only

Source

pub unsafe fn customVideoCompositorClass(&self) -> Option<&'static AnyClass>

Available on crate feature AVVideoCompositing only.

Indicates a custom compositor class to use. The class must implement the AVVideoCompositing protocol. If nil, the default, internal video compositor is used

Source

pub unsafe fn frameDuration(&self) -> CMTime

Available on crate feature objc2-core-media only.

Indicates the interval at which the video composition, when enabled, should render composed video frames

Source

pub unsafe fn sourceTrackIDForFrameTiming(&self) -> CMPersistentTrackID

Available on crate feature objc2-core-media only.

If sourceTrackIDForFrameTiming is not kCMPersistentTrackID_Invalid, frame timing for the video composition is derived from the source asset’s track with the corresponding ID. This may be used to preserve a source asset’s variable frame timing. If an empty edit is encountered in the source asset’s track, the compositor composes frames as needed up to the frequency specified in the frameDuration property.

Source

pub unsafe fn renderSize(&self) -> CGSize

Available on crate feature objc2-core-foundation only.

Indicates the size at which the video composition, when enabled, should render

Source

pub unsafe fn renderScale(&self) -> c_float

Indicates the scale at which the video composition should render. May have a value other than 1.0 only for a video composition set on an AVPlayerItem

Source

pub unsafe fn instructions( &self, ) -> Retained<NSArray<ProtocolObject<dyn AVVideoCompositionInstructionProtocol>>>

Available on crate feature AVVideoCompositing only.

Indicates instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction’s end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated).
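The timing requirements above (the first instruction starts at or before the earliest processed time, each subsequent instruction starts exactly where the previous one ended, and the last covers the latest processed time) can be sketched as a plain-Rust check. The types are hypothetical stand-ins; the real API uses CMTimeRange:

```rust
/// Simplified stand-in for an instruction's time range (hypothetical;
/// the real API uses CMTimeRange).
#[derive(Clone, Copy)]
struct TimeRange {
    start: f64,
    duration: f64,
}

impl TimeRange {
    fn end(&self) -> f64 {
        self.start + self.duration
    }
}

/// Checks the documented ordering rules for the `instructions` array.
fn instructions_cover(ranges: &[TimeRange], earliest: f64, latest: f64) -> bool {
    // The first instruction's start must not be later than `earliest`.
    let Some(first) = ranges.first() else { return false };
    if first.start > earliest {
        return false;
    }
    // Each instruction must begin exactly where the previous one ended.
    for pair in ranges.windows(2) {
        if pair[1].start != pair[0].end() {
            return false;
        }
    }
    // The last instruction must extend to at least `latest`.
    ranges.last().map_or(false, |r| r.end() >= latest)
}

fn main() {
    let contiguous = [
        TimeRange { start: 0.0, duration: 5.0 },
        TimeRange { start: 5.0, duration: 5.0 },
    ];
    assert!(instructions_cover(&contiguous, 0.0, 10.0));

    // A gap between instructions violates the requirement.
    let gap = [
        TimeRange { start: 0.0, duration: 4.0 },
        TimeRange { start: 5.0, duration: 5.0 },
    ];
    assert!(!instructions_cover(&gap, 0.0, 10.0));
    println!("ok");
}
```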

Source

pub unsafe fn animationTool( &self, ) -> Option<Retained<AVVideoCompositionCoreAnimationTool>>

Indicates a special video composition tool for use of Core Animation; may be nil

Source

pub unsafe fn sourceSampleDataTrackIDs(&self) -> Retained<NSArray<NSNumber>>

List of all track IDs for tracks from which sample data should be presented to the compositor at any point in the overall composition. The sample data will be delivered to the custom compositor via AVAsynchronousVideoCompositionRequest.

Source

pub unsafe fn outputBufferDescription(&self) -> Option<Retained<NSArray>>

The output buffers of the video composition can be specified with the outputBufferDescription. The value is an array of CMTagCollectionRef objects that describes the output buffers.

If the video composition will output tagged buffers, the details of those buffers should be specified with CMTags. Specifically, the StereoView (eyes) and ProjectionKind must be specified. The behavior is undefined if the output tagged buffers do not match the outputBufferDescription. The default is nil, which means monoscopic output. Note that an empty array is not valid. An exception will be thrown if the objects in the array are not of type CMTagCollectionRef. Note that tagged buffers are only supported for custom compositors.

Source

pub unsafe fn spatialVideoConfigurations( &self, ) -> Retained<NSArray<AVSpatialVideoConfiguration>>

Available on crate feature AVSpatialVideoConfiguration only.

Indicates the spatial configurations that are available to associate with the output of the video composition.

A custom compositor can output spatial video by specifying one of these spatial configurations. A nil spatial configuration, or one whose values are all nil, indicates that the video is not spatial. NOTE: If this property is not empty, the client must attach one of the spatial configurations in this array to every output pixel buffer; otherwise an exception will be thrown.

Source§

impl AVVideoComposition

Methods declared on superclass NSObject.

Source

pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>

Source

pub unsafe fn new() -> Retained<Self>

Source§

impl AVVideoComposition

AVVideoCompositionColorimetry.

Indicates the color space of the frames output from the video composition.

Collectively the properties colorPrimaries, colorYCbCrMatrix, and colorTransferFunction define the color space that the rendered frames will be tagged with. For custom video compositing these properties are also used to specify the required color space of the source frames.

For examples of common color spaces see AVVideoSettings.h.

How to preserve the color space of the source frames:

Decide which color space is to be preserved by examining the source asset’s video tracks. Copy the source track’s primaries, matrix, and transfer function into the video composition’s colorPrimaries, colorYCbCrMatrix, and colorTransferFunction respectively.

  • When using custom video compositing, setting these properties will cause source frames to be converted into the specified color space and tagged as such. New frames allocated using -[AVVideoCompositionRenderContext newPixelBuffer] will also be tagged correctly.

  • When using Core Image via videoCompositionWithAsset:options:applyingCIFiltersWithHandler:, setting these properties will cause source frames to be converted into the specified color space and tagged as such. The source frames provided as CIImages will have the appropriate CGColorSpace applied. The color space is preserved when the output CIImage is finally rendered internally.

  • When using basic compositing (i.e. AVVideoCompositionLayerInstruction), setting these properties will ensure that the internal compositor renders (or passes through) frames in the specified color space and tags them as such.

Source

pub unsafe fn colorPrimaries(&self) -> Option<Retained<NSString>>

Rendering will use these primaries and frames will be tagged as such. If the value of this property is nil then the source’s primaries will be propagated and used.

Default is nil. Valid values are those suitable for AVVideoColorPrimariesKey. Generally set as a triple along with colorYCbCrMatrix and colorTransferFunction.

Source

pub unsafe fn colorYCbCrMatrix(&self) -> Option<Retained<NSString>>

Rendering will use this matrix and frames will be tagged as such. If the value of this property is nil then the source’s matrix will be propagated and used.

Default is nil. Valid values are those suitable for AVVideoYCbCrMatrixKey. Generally set as a triple along with colorPrimaries and colorTransferFunction.

Source

pub unsafe fn colorTransferFunction(&self) -> Option<Retained<NSString>>

Rendering will use this transfer function and frames will be tagged as such. If the value of this property is nil then the source’s transfer function will be propagated and used.

Default is nil. Valid values are those suitable for AVVideoTransferFunctionKey. Generally set as a triple along with colorPrimaries and colorYCbCrMatrix.
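The nil-fallback behavior shared by all three color properties can be sketched in plain Rust. This is a hypothetical helper, not part of this crate, and the string values are illustrative:

```rust
/// Sketch of the documented fallback rule (hypothetical helper, not part
/// of this crate): when a composition's color property is None, the
/// source track's value is propagated and used instead.
fn effective_color_value(composition: Option<&str>, source: &str) -> String {
    composition.unwrap_or(source).to_string()
}

fn main() {
    // A non-nil composition value overrides the source's primaries...
    assert_eq!(
        effective_color_value(Some("ITU_R_709_2"), "SMPTE_C"),
        "ITU_R_709_2"
    );
    // ...while None lets the source track's value pass through.
    assert_eq!(effective_color_value(None, "SMPTE_C"), "SMPTE_C");
    println!("ok");
}
```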

Source

pub unsafe fn perFrameHDRDisplayMetadataPolicy( &self, ) -> Retained<AVVideoCompositionPerFrameHDRDisplayMetadataPolicy>

Configures policy for per frame HDR display metadata on the rendered frame

Allows the system to identify situations where HDR metadata can be generated and attached to the rendered video frame. Default is AVVideoCompositionPerFrameHDRDisplayMetadataPolicyPropagate, which causes any HDR metadata attached to the composed frame to be propagated to the rendered video frames.

Source§

impl AVVideoComposition

AVVideoCompositionFiltering.

Source

pub unsafe fn videoCompositionWithAsset_applyingCIFiltersWithHandler( asset: &AVAsset, applier: &DynBlock<dyn Fn(NonNull<AVAsynchronousCIImageFilteringRequest>)>, ) -> Retained<AVVideoComposition>

👎Deprecated: Use videoCompositionWithAsset:applyingCIFiltersWithHandler:completionHandler: instead
Available on crate features AVAsset and AVVideoCompositing and block2 only.

Returns a new instance of AVVideoComposition with values and instructions that will apply the specified handler block to video frames represented as instances of CIImage.

The returned AVVideoComposition will cause the specified handler block to be called to filter each frame of the asset’s first enabled video track. The handler block should use the properties of the provided AVAsynchronousCIImageFilteringRequest and respond using finishWithImage:context: with a “filtered” new CIImage (or the provided source image for no effect). In the event of an error, respond to the request using finishWithError:. The error can be observed via AVPlayerItemFailedToPlayToEndTimeNotification; see AVPlayerItemFailedToPlayToEndTimeErrorKey in the notification payload.

NOTE: The returned AVVideoComposition’s properties are private and support only CIFilter-based operations. Mutations are not supported, either in the values of properties of the AVVideoComposition itself or in its private instructions. If rotations or other transformations are desired, they must be accomplished via the application of CIFilters during the execution of your specified handler.

The video composition will also have the following values for its properties:

  • The original timing of the asset’s first enabled video track will be used.
  • A renderSize that encompasses the asset’s first enabled video track respecting the track’s preferredTransform.
  • A renderScale of 1.0.

The default CIContext has the following properties:

  • iOS: Device RGB color space
  • macOS: sRGB color space

Example usage:

playerItem.videoComposition = [AVVideoComposition videoCompositionWithAsset:srcAsset applyingCIFiltersWithHandler:
    ^(AVAsynchronousCIImageFilteringRequest *request) {
        NSError *err = nil;
        CIImage *filtered = myRenderer(request, &err);
        if (filtered)
            [request finishWithImage:filtered context:nil];
        else
            [request finishWithError:err];
    }];
  • Parameter asset: An instance of AVAsset. For best performance, ensure that the duration and tracks properties of the asset are already loaded before invoking this method.

  • Returns: An instance of AVVideoComposition.

§Safety

applier block must be sendable.

Source

pub unsafe fn videoCompositionWithAsset_applyingCIFiltersWithHandler_completionHandler( asset: &AVAsset, applier: &DynBlock<dyn Fn(NonNull<AVAsynchronousCIImageFilteringRequest>)>, completion_handler: &DynBlock<dyn Fn(*mut AVVideoComposition, *mut NSError)>, )

Available on crate features AVAsset and AVVideoCompositing and block2 only.

Vends a new instance of AVVideoComposition with values and instructions that will apply the specified handler block to video frames represented as instances of CIImage.

The new AVVideoComposition will cause the specified handler block to be called to filter each frame of the asset’s first enabled video track. The handler block should use the properties of the provided AVAsynchronousCIImageFilteringRequest and respond using finishWithImage:context: with a “filtered” new CIImage (or the provided source image for no effect). In the event of an error, respond to the request using finishWithError:. The error can be observed via AVPlayerItemFailedToPlayToEndTimeNotification; see AVPlayerItemFailedToPlayToEndTimeErrorKey in the notification payload.

NOTE: The returned AVVideoComposition’s properties are private and support only CIFilter-based operations. Mutations are not supported, either in the values of properties of the AVVideoComposition itself or in its private instructions. If rotations or other transformations are desired, they must be accomplished via the application of CIFilters during the execution of your specified handler.

The video composition will also have the following values for its properties:

  • The original timing of the asset’s first enabled video track will be used.
  • A renderSize that encompasses the asset’s first enabled video track respecting the track’s preferredTransform.
  • A renderScale of 1.0.

The default CIContext has the following properties:

  • iOS: Device RGB color space
  • macOS: sRGB color space

Example usage:

[AVVideoComposition videoCompositionWithAsset:srcAsset applyingCIFiltersWithHandler:
    ^(AVAsynchronousCIImageFilteringRequest *request) {
        NSError *err = nil;
        CIImage *filtered = myRenderer(request, &err);
        if (filtered)
            [request finishWithImage:filtered context:nil];
        else
            [request finishWithError:err];
    } completionHandler:
    ^(AVVideoComposition * _Nullable videoComposition, NSError * _Nullable error) {
        if (videoComposition != nil) {
            playerItem.videoComposition = videoComposition;
        } else {
            // handle error
        }
    }];
  • Parameter asset: An instance of AVAsset.
  • Parameter completionHandler: A block that is invoked when the new video composition has finished being created. If the videoComposition parameter is nil, the error parameter describes the failure that occurred.
§Safety
  • applier block must be sendable.
  • completion_handler block must be sendable.
Source§

impl AVVideoComposition

AVVideoCompositionValidation.

Source

pub unsafe fn isValidForAsset_timeRange_validationDelegate( &self, asset: Option<&AVAsset>, time_range: CMTimeRange, validation_delegate: Option<&ProtocolObject<dyn AVVideoCompositionValidationHandling>>, ) -> bool

👎Deprecated: Use isValidForTracks:assetDuration:timeRange:validationDelegate: instead
Available on crate features AVAsset and objc2-core-media only.

Indicates whether the timeRanges of the receiver’s instructions conform to the requirements described for them immediately above (in connection with the instructions property) and also whether all of the layer instructions have a value for trackID that corresponds either to a track of the specified asset or to the receiver’s animationTool.

In the course of validation, the receiver will invoke its validationDelegate with reference to any trouble spots in the video composition. An exception will be raised if the delegate modifies the receiver’s array of instructions or the array of layerInstructions of any AVVideoCompositionInstruction contained therein during validation.

  • Parameter asset: Pass a reference to an AVAsset if you wish to validate the timeRanges of the instructions against the duration of the asset and the trackIDs of the layer instructions against the asset’s tracks. Pass nil to skip that validation. Clients should ensure that the keys @“tracks” and @“duration” are already loaded on the AVAsset before validation is attempted.
  • Parameter timeRange: A CMTimeRange. Only those instructions with timeRanges that overlap with the specified timeRange will be validated. To validate all instructions that may be used for playback or other processing, regardless of timeRange, pass CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity).
  • Parameter validationDelegate: Indicates an object implementing the AVVideoCompositionValidationHandling protocol to receive information about troublesome portions of a video composition during processing of -isValidForAsset:. May be nil.
Source

pub unsafe fn determineValidityForAsset_timeRange_validationDelegate_completionHandler( &self, asset: Option<&AVAsset>, time_range: CMTimeRange, validation_delegate: Option<&ProtocolObject<dyn AVVideoCompositionValidationHandling>>, completion_handler: &DynBlock<dyn Fn(Bool, *mut NSError)>, )

👎Deprecated
Available on crate features AVAsset and block2 and objc2-core-media only.

Determines whether the timeRanges of the receiver’s instructions conform to the requirements described for them immediately above (in connection with the instructions property) and also whether all of the layer instructions have a value for trackID that corresponds either to a track of the specified asset or to the receiver’s animationTool.

In the course of validation, the receiver will invoke its validationDelegate with reference to any trouble spots in the video composition. An exception will be raised if the delegate modifies the receiver’s array of instructions or the array of layerInstructions of any AVVideoCompositionInstruction contained therein during validation.

  • Parameter asset: Pass a reference to an AVAsset if you wish to validate the timeRanges of the instructions against the duration of the asset and the trackIDs of the layer instructions against the asset’s tracks. Pass nil to skip that validation.
  • Parameter timeRange: A CMTimeRange. Only those instructions with timeRanges that overlap with the specified timeRange will be validated. To validate all instructions that may be used for playback or other processing, regardless of timeRange, pass CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity).
  • Parameter validationDelegate: Indicates an object implementing the AVVideoCompositionValidationHandling protocol to receive information about troublesome portions of a video composition during processing of -determineValidityForAsset:. May be nil.
  • Parameter completionHandler: A block that is invoked when a determination is made about whether the video composition is valid. If the isValid parameter is NO, either the video composition is not valid, in which case the error parameter will be nil, or the answer could not be determined, in which case the error parameter will be non-nil and describe the failure that occurred.
§Safety

completion_handler block must be sendable.

Source

pub unsafe fn isValidForTracks_assetDuration_timeRange_validationDelegate( &self, tracks: &NSArray<AVAssetTrack>, duration: CMTime, time_range: CMTimeRange, validation_delegate: Option<&ProtocolObject<dyn AVVideoCompositionValidationHandling>>, ) -> bool

Available on crate features AVAssetTrack and objc2-core-media only.

Indicates whether the timeRanges of the receiver’s instructions conform to the requirements described for them immediately above (in connection with the instructions property) and also whether all of the layer instructions have a value for trackID that corresponds either to a track of the specified asset or to the receiver’s animationTool.

In the course of validation, the receiver will invoke its validationDelegate with reference to any trouble spots in the video composition. An exception will be raised if the delegate modifies the receiver’s array of instructions or the array of layerInstructions of any AVVideoCompositionInstruction contained therein during validation.

  • Parameter tracks: Pass a reference to an AVAsset’s tracks if you wish to validate the trackIDs of the layer instructions against the asset’s tracks. Pass nil to skip that validation. This method throws an exception if the tracks are not all from the same asset.
  • Parameter duration: Pass the duration of an AVAsset if you wish to validate the timeRanges of the instructions against it. Pass kCMTimeInvalid to skip that validation.
  • Parameter timeRange: A CMTimeRange. Only those instructions with timeRanges that overlap with the specified timeRange will be validated. To validate all instructions that may be used for playback or other processing, regardless of timeRange, pass CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity).
  • Parameter validationDelegate: Indicates an object implementing the AVVideoCompositionValidationHandling protocol to receive information about troublesome portions of a video composition during processing of -isValidForAsset:. May be nil.
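The overlap filter described for the timeRange parameter (only instructions whose timeRanges intersect the specified range are validated) can be sketched in plain Rust. This is a hypothetical stand-in for an intersection test on CMTimeRange values:

```rust
/// Two half-open ranges [start, start + duration) overlap when each
/// starts before the other ends (hypothetical stand-in for checking
/// that the CMTimeRange intersection is non-empty).
fn overlaps(a_start: f64, a_dur: f64, b_start: f64, b_dur: f64) -> bool {
    a_start < b_start + b_dur && b_start < a_start + a_dur
}

fn main() {
    // An instruction covering [5, 10) overlaps a validation range of [8, 12)...
    assert!(overlaps(5.0, 5.0, 8.0, 4.0));
    // ...but an instruction covering [12, 15) does not.
    assert!(!overlaps(12.0, 3.0, 8.0, 4.0));
    println!("ok");
}
```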

Methods from Deref<Target = NSObject>§

Source

pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !

Handle messages the object doesn’t recognize.

See Apple’s documentation for details.

Methods from Deref<Target = AnyObject>§

Source

pub fn class(&self) -> &'static AnyClass

Dynamically find the class of this object.

§Panics

May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).

§Example

Check that an instance of NSObject has the precise class NSObject.

use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
Source

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where T: Encode,

👎Deprecated: this is difficult to use correctly, use Ivar::load instead.

Use Ivar::load instead.

§Safety

The object must have an instance variable with the given name, and it must be of type T.

See Ivar::load_ptr for details surrounding this.

Source

pub fn downcast_ref<T>(&self) -> Option<&T>
where T: DowncastTarget,

Attempt to downcast the object to a class of type T.

This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.

§Mutable classes

Some classes have immutable and mutable variants, such as NSString and NSMutableString.

When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.

So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.

See Apple’s documentation on mutability and on isKindOfClass: for more details.

§Generic classes

Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.

You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.

§Panics

This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.

§Examples

Cast an NSString back and forth from NSObject.

use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.

use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.

Downcast when processing each element instead.

use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);

for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§

Source§

impl AsRef<AVVideoComposition> for AVMutableVideoComposition

Source§

fn as_ref(&self) -> &AVVideoComposition

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AVVideoComposition> for AVVideoComposition

Source§

fn as_ref(&self) -> &Self

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<AnyObject> for AVVideoComposition

Source§

fn as_ref(&self) -> &AnyObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl AsRef<NSObject> for AVVideoComposition

Source§

fn as_ref(&self) -> &NSObject

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl Borrow<AVVideoComposition> for AVMutableVideoComposition

Source§

fn borrow(&self) -> &AVVideoComposition

Immutably borrows from an owned value. Read more
Source§

impl Borrow<AnyObject> for AVVideoComposition

Source§

fn borrow(&self) -> &AnyObject

Immutably borrows from an owned value. Read more
Source§

impl Borrow<NSObject> for AVVideoComposition

Source§

fn borrow(&self) -> &NSObject

Immutably borrows from an owned value. Read more
Source§

impl ClassType for AVVideoComposition

Source§

const NAME: &'static str = "AVVideoComposition"

The name of the Objective-C class that this type represents. Read more
Source§

type Super = NSObject

The superclass of this class. Read more
Source§

type ThreadKind = <<AVVideoComposition as ClassType>::Super as ClassType>::ThreadKind

Whether the type can be used from any thread, or from only the main thread. Read more
Source§

fn class() -> &'static AnyClass

Get a reference to the Objective-C class that this type represents. Read more
Source§

fn as_super(&self) -> &Self::Super

Get an immutable reference to the superclass.
Source§

impl CopyingHelper for AVVideoComposition

Source§

type Result = AVVideoComposition

The immutable counterpart of the type, or Self if the type has no immutable counterpart. Read more
Source§

impl Debug for AVVideoComposition

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Deref for AVVideoComposition

Source§

type Target = NSObject

The resulting type after dereferencing.
Source§

fn deref(&self) -> &Self::Target

Dereferences the value.
Source§

impl Hash for AVVideoComposition

Source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl Message for AVVideoComposition

Source§

fn retain(&self) -> Retained<Self>
where Self: Sized,

Increment the reference count of the receiver. Read more
Source§

impl MutableCopyingHelper for AVVideoComposition

Source§

type Result = AVMutableVideoComposition

The mutable counterpart of the type, or Self if the type has no mutable counterpart. Read more
Source§

impl NSCopying for AVVideoComposition

Source§

fn copy(&self) -> Retained<Self::Result>
where Self: Sized + Message + CopyingHelper,

Returns a new instance that’s a copy of the receiver. Read more
Source§

unsafe fn copyWithZone(&self, zone: *mut NSZone) -> Retained<Self::Result>
where Self: Sized + Message + CopyingHelper,

Returns a new instance that’s a copy of the receiver. Read more
Source§

impl NSMutableCopying for AVVideoComposition

Source§

fn mutableCopy(&self) -> Retained<Self::Result>

Returns a new instance that’s a mutable copy of the receiver. Read more
Source§

unsafe fn mutableCopyWithZone( &self, zone: *mut NSZone, ) -> Retained<Self::Result>

Returns a new instance that’s a mutable copy of the receiver. Read more
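The `CopyingHelper` and `MutableCopyingHelper` associated types make the copy methods return the correct counterpart class without a cast. A sketch, assuming the `objc2`, `objc2-foundation`, and `objc2-av-foundation` crates (Apple platforms only, so not compiled here), and assuming `AVMutableVideoComposition` follows the usual objc2 convention of copying back to the immutable class:

```rust
use objc2::rc::Retained;
use objc2_av_foundation::{AVMutableVideoComposition, AVVideoComposition};
use objc2_foundation::{NSCopying, NSMutableCopying};

/// `MutableCopyingHelper` fixes the result of `mutableCopy` to the
/// mutable counterpart, `AVMutableVideoComposition`.
fn editable(source: &AVVideoComposition) -> Retained<AVMutableVideoComposition> {
    source.mutableCopy()
}

/// Conversely, `copy` on the mutable class yields an immutable
/// `AVVideoComposition` snapshot.
fn frozen(edited: &AVMutableVideoComposition) -> Retained<AVVideoComposition> {
    edited.copy()
}
```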

impl NSObjectProtocol for AVVideoComposition


fn isEqual(&self, other: Option<&AnyObject>) -> bool
where Self: Sized + Message,

Check whether the object is equal to an arbitrary other object. Read more

fn hash(&self) -> usize
where Self: Sized + Message,

An integer that can be used as a table address in a hash table structure. Read more

fn isKindOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of the class, or one of its subclasses. Read more

fn is_kind_of<T>(&self) -> bool
where T: ClassType, Self: Sized + Message,

👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref
Check if the object is an instance of the class type, or one of its subclasses. Read more

fn isMemberOfClass(&self, cls: &AnyClass) -> bool
where Self: Sized + Message,

Check if the object is an instance of a specific class, without checking subclasses. Read more

fn respondsToSelector(&self, aSelector: Sel) -> bool
where Self: Sized + Message,

Check whether the object implements or inherits a method with the given selector. Read more

fn conformsToProtocol(&self, aProtocol: &AnyProtocol) -> bool
where Self: Sized + Message,

Check whether the object conforms to a given protocol. Read more

fn description(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object. Read more

fn debugDescription(&self) -> Retained<NSObject>
where Self: Sized + Message,

A textual representation of the object to use when debugging. Read more

fn isProxy(&self) -> bool
where Self: Sized + Message,

Check whether the receiver is a subclass of the NSProxy root class instead of the usual NSObject. Read more

fn retainCount(&self) -> usize
where Self: Sized + Message,

The reference count of the object. Read more
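The `isKindOfClass` check underlies typed downcasting from an erased object, which is what the deprecation note on `is_kind_of` points to. A sketch, assuming the `objc2` and `objc2-av-foundation` crates (Apple platforms only, so not compiled here):

```rust
use objc2::runtime::AnyObject;
use objc2_av_foundation::AVVideoComposition;

/// Recovers a typed reference from an erased Objective-C object.
/// `downcast_ref` (enabled by the `DowncastTarget` impl) performs an
/// `isKindOfClass:` check and returns `None` on mismatch.
fn as_composition(object: &AnyObject) -> Option<&AVVideoComposition> {
    object.downcast_ref::<AVVideoComposition>()
}
```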

impl PartialEq for AVVideoComposition


fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl RefEncode for AVVideoComposition


const ENCODING_REF: Encoding = <NSObject as ::objc2::RefEncode>::ENCODING_REF

The Objective-C type-encoding for a reference of this type. Read more

impl DowncastTarget for AVVideoComposition


impl Eq for AVVideoComposition

Auto Trait Implementations§

Blanket Implementations§


impl<T> Any for T
where T: 'static + ?Sized,


fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<'a, T> AnyThread for T
where T: ClassType<ThreadKind = dyn AnyThread + 'a> + ?Sized,


fn alloc() -> Allocated<Self>
where Self: Sized + ClassType,

Allocate a new instance of the class. Read more

impl<T> Borrow<T> for T
where T: ?Sized,


fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,


fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T


fn from(t: T) -> T

Returns the argument unchanged.


impl<T, U> Into<U> for T
where U: From<T>,


fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.


impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,


type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.

impl<T, U> TryFrom<U> for T
where U: Into<T>,


type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,


type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> AutoreleaseSafe for T
where T: ?Sized,