pub struct AVAssetReaderVideoCompositionOutput { /* private fields */ }
Available on crate feature AVAssetReaderOutput only.
AVAssetReaderVideoCompositionOutput is a concrete subclass of AVAssetReaderOutput that defines an interface for reading video frames that have been composited together from the frames in one or more AVAssetTracks of an AVAssetReader’s AVAsset.
Clients can read the video frames composited from one or more asset tracks by adding an instance of AVAssetReaderVideoCompositionOutput to an AVAssetReader using the -[AVAssetReader addOutput:] method.
See also Apple’s documentation
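In Rust, the basic flow looks roughly like the following hedged sketch. It assumes the objc2, objc2-foundation and objc2-av-foundation crates with the relevant features enabled; obtaining the AVAssetReader and the array of video tracks from an AVAsset is elided, and addOutput is assumed to be the Rust name corresponding to -[AVAssetReader addOutput:].
use objc2::rc::Retained;
use objc2_av_foundation::{AVAssetReader, AVAssetReaderVideoCompositionOutput, AVAssetTrack};
use objc2_foundation::NSArray;

unsafe fn attach_composition_output(
    reader: &AVAssetReader,
    video_tracks: &NSArray<AVAssetTrack>,
) -> Retained<AVAssetReaderVideoCompositionOutput> {
    // None for the settings dictionary: a convenient uncompressed format is
    // chosen based on the tracks (see the constructor docs below).
    let output = AVAssetReaderVideoCompositionOutput::assetReaderVideoCompositionOutputWithVideoTracks_videoSettings(
        video_tracks,
        None,
    );
    // Assumed to correspond to -[AVAssetReader addOutput:]; the output derefs
    // to AVAssetReaderOutput, so it can be passed directly.
    reader.addOutput(&output);
    // After the reader starts reading, frames are pulled with
    // copyNextSampleBuffer (sketched further down).
    output
}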
Implementations
impl AVAssetReaderVideoCompositionOutput
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn assetReaderVideoCompositionOutputWithVideoTracks_videoSettings(
    video_tracks: &NSArray<AVAssetTrack>,
    video_settings: Option<&NSDictionary<NSString, AnyObject>>,
) -> Retained<Self>
Available on crate feature AVAssetTrack only.
Creates an instance of AVAssetReaderVideoCompositionOutput for reading composited video from the specified video tracks and supplying media data according to the specified video settings.
Parameter tracks: An NSArray of AVAssetTrack objects from which the resulting AVAssetReaderVideoCompositionOutput should read video frames for compositing.
Parameter videoSettings: An NSDictionary of video settings to be used for video output. See AVVideoSettings.h for more information about how to construct a video settings dictionary.
Returns: An instance of AVAssetReaderVideoCompositionOutput.
Each track must be one of the tracks owned by the target AVAssetReader’s asset and must be of media type AVMediaTypeVideo.
A value of nil for videoSettings configures the output to return samples in a convenient uncompressed format, with properties determined according to the properties of the specified video tracks. Initialization will fail if the video settings cannot be used with the specified tracks.
AVAssetReaderVideoCompositionOutput can only produce uncompressed output. This means that the video settings dictionary must follow the rules for uncompressed video output, as laid out in AVVideoSettings.h. In addition, the following keys are not supported:
- AVVideoCleanApertureKey
- AVVideoPixelAspectRatioKey
- AVVideoScalingModeKey
Safety
video_settings generic should be of the correct type.
pub unsafe fn initWithVideoTracks_videoSettings(
    this: Allocated<Self>,
    video_tracks: &NSArray<AVAssetTrack>,
    video_settings: Option<&NSDictionary<NSString, AnyObject>>,
) -> Retained<Self>
Available on crate feature AVAssetTrack only.
Creates an instance of AVAssetReaderVideoCompositionOutput for reading composited video from the specified video tracks and supplying media data according to the specified video settings.
Parameter tracks: An NSArray of AVAssetTrack objects from which the resulting AVAssetReaderVideoCompositionOutput should read video frames for compositing.
Parameter videoSettings: An NSDictionary of video settings to be used for video output. See AVVideoSettings.h for more information about how to construct a video settings dictionary.
Returns: An instance of AVAssetReaderVideoCompositionOutput.
Each track must be one of the tracks owned by the target AVAssetReader’s asset and must be of media type AVMediaTypeVideo.
A value of nil for videoSettings configures the output to return samples in a convenient uncompressed format, with properties determined according to the properties of the specified video tracks. Initialization will fail if the video settings cannot be used with the specified tracks.
AVAssetReaderVideoCompositionOutput can only produce uncompressed output. This means that the video settings dictionary must follow the rules for uncompressed video output, as laid out in AVVideoSettings.h.
This method throws an exception for any of the following reasons:
- any video track is not of media type AVMediaTypeVideo
- any video track is not part of this asset reader output’s AVAsset
- track output settings would cause the output to yield compressed samples
- video settings does not follow the rules for uncompressed video output (AVVideoSettings.h)
- video settings contains any of the following keys:
- AVVideoCleanApertureKey
- AVVideoPixelAspectRatioKey
- AVVideoScalingModeKey
- AVVideoDecompressionPropertiesKey
Safety
video_settings generic should be of the correct type.
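The Allocated<Self> route is equivalent to the class method above; a minimal sketch, assuming objc2's AllocAnyThread trait supplies alloc() (older objc2 releases exposed this through ClassType::alloc, so check the version in use):
use objc2::rc::Retained;
use objc2::AllocAnyThread;
use objc2_av_foundation::{AVAssetReaderVideoCompositionOutput, AVAssetTrack};
use objc2_foundation::NSArray;

unsafe fn output_via_init(
    tracks: &NSArray<AVAssetTrack>,
) -> Retained<AVAssetReaderVideoCompositionOutput> {
    AVAssetReaderVideoCompositionOutput::initWithVideoTracks_videoSettings(
        AVAssetReaderVideoCompositionOutput::alloc(),
        tracks,
        None, // nil settings: a convenient uncompressed format is chosen
    )
}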
pub unsafe fn videoTracks(&self) -> Retained<NSArray<AVAssetTrack>>
Available on crate feature AVAssetTrack only.
The tracks from which the receiver reads composited video.
The value of this property is an NSArray of AVAssetTracks owned by the target AVAssetReader’s asset.
pub unsafe fn videoSettings(
    &self,
) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
The video settings used by the receiver.
The value of this property is an NSDictionary that contains values for keys as specified by AVVideoSettings.h. A value of nil indicates that the receiver will return video frames in a convenient uncompressed format, with properties determined according to the properties of the receiver’s video tracks.
pub unsafe fn videoComposition(&self) -> Option<Retained<AVVideoComposition>>
Available on crate feature AVVideoComposition only.
The composition of video used by the receiver.
The value of this property is an AVVideoComposition that can be used to specify the visual arrangement of video frames read from each source track over the timeline of the source asset.
This property throws an exception if a value is set after reading has started.
pub unsafe fn setVideoComposition(
    &self,
    video_composition: Option<&AVVideoComposition>,
)
Available on crate feature AVVideoComposition only.
Setter for videoComposition.
This is copied when set.
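A short sketch of attaching a composition before reading starts; how the AVVideoComposition itself is obtained is outside the scope of this page, so it is taken as a parameter here.
use objc2_av_foundation::{AVAssetReaderVideoCompositionOutput, AVVideoComposition};

unsafe fn use_composition(
    output: &AVAssetReaderVideoCompositionOutput,
    composition: &AVVideoComposition,
) {
    // Must happen before the attached AVAssetReader starts reading; the
    // setter throws after that point, and the composition is copied on set.
    output.setVideoComposition(Some(composition));
    // Reading the property back returns the (copied) composition, or None if
    // none has been set.
    let _current = output.videoComposition();
}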
pub unsafe fn customVideoCompositor(
    &self,
) -> Option<Retained<ProtocolObject<dyn AVVideoCompositing>>>
Available on crate feature AVVideoCompositing only.
Indicates the custom video compositor instance used by the receiver.
This property is nil if there is no video compositor, or if the internal video compositor is in use.
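For example, a caller could check whether a custom compositor ended up being used; a minimal sketch relying only on the getter above:
use objc2_av_foundation::AVAssetReaderVideoCompositionOutput;

unsafe fn uses_custom_compositor(output: &AVAssetReaderVideoCompositionOutput) -> bool {
    // None when no video composition is set or the built-in compositor is in use.
    output.customVideoCompositor().is_some()
}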
Methods from Deref<Target = AVAssetReaderOutput>
pub unsafe fn mediaType(&self) -> Retained<AVMediaType>
Available on crate feature AVMediaFormat only.
The media type of the samples that can be read from the receiver.
The value of this property is one of the media type strings defined in AVMediaFormat.h.
pub unsafe fn alwaysCopiesSampleData(&self) -> bool
Indicates whether or not the data in buffers gets copied before being vended to the client.
When the value of this property is YES, the AVAssetReaderOutput will always vend a buffer with copied data to the client. Data in such buffers can be freely modified by the client. When the value of this property is NO, the buffers vended to the client may not be copied. Such buffers may still be referenced by other entities. The result of modifying a buffer whose data hasn’t been copied is undefined. Requesting buffers whose data hasn’t been copied when possible can lead to performance improvements.
The default value is YES.
This property throws an exception if a value is set after reading has started (the asset reader has progressed beyond AVAssetReaderStatusUnknown).
pub unsafe fn setAlwaysCopiesSampleData(&self, always_copies_sample_data: bool)
Setter for alwaysCopiesSampleData.
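When the vended buffers are only read, turning the copy behaviour off can avoid per-buffer copies; a minimal sketch using the two accessors above (the caveat about not modifying uncopied buffers applies):
use objc2_av_foundation::AVAssetReaderOutput;

unsafe fn prefer_zero_copy(output: &AVAssetReaderOutput) {
    if output.alwaysCopiesSampleData() {
        // Default is true. With false, buffers may still be referenced
        // elsewhere, so they must not be modified; set before reading starts.
        output.setAlwaysCopiesSampleData(false);
    }
}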
pub unsafe fn copyNextSampleBuffer(&self) -> Option<Retained<CMSampleBuffer>>
Available on crate feature objc2-core-media only.
Copies the next sample buffer for the output synchronously.
Returns: A CMSampleBuffer object referencing the output sample buffer.
The client is responsible for calling CFRelease on the returned CMSampleBuffer object when finished with it. This method will return NULL if there are no more sample buffers available for the receiver within the time range specified by its AVAssetReader’s timeRange property, or if there is an error that prevents the AVAssetReader from reading more media data. When this method returns NULL, clients should check the value of the associated AVAssetReader’s status property to determine why no more samples could be read.
In certain configurations, such as when outputSettings is nil, copyNextSampleBuffer may return marker-only sample buffers as well as sample buffers containing media data. Marker-only sample buffers can be identified by CMSampleBufferGetNumSamples returning 0. Clients who do not need the information attached to marker-only sample buffers may skip them.
This method throws an exception if this output is not added to an instance of AVAssetReader (using -addOutput:) and -startReading is not called on that asset reader.
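A hedged sketch of the read loop described above; num_samples is assumed to be objc2-core-media's rendering of CMSampleBufferGetNumSamples, so check the crate before relying on that name.
use objc2_av_foundation::AVAssetReaderOutput;

unsafe fn drain_output(output: &AVAssetReaderOutput) {
    while let Some(sample) = output.copyNextSampleBuffer() {
        // Marker-only buffers carry attachments but no media data; skip them
        // if that information is not needed (assumed binding of
        // CMSampleBufferGetNumSamples).
        if sample.num_samples() == 0 {
            continue;
        }
        // ... process the composited frame in `sample` ...
    }
    // None: either the reader's time range is exhausted or an error occurred;
    // consult the attached AVAssetReader's status property to tell them apart.
}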
pub unsafe fn supportsRandomAccess(&self) -> bool
Indicates whether the asset reader output supports reconfiguration of the time ranges to read.
When the value of this property is YES, the time ranges read by the asset reader output can be reconfigured during reading using the -resetForReadingTimeRanges: method. This also prevents the attached AVAssetReader from progressing to AVAssetReaderStatusCompleted until -markConfigurationAsFinal has been invoked.
The default value is NO, which means that the asset reader output may not be reconfigured once reading has begun. When the value of this property is NO, AVAssetReader may be able to read media data more efficiently, particularly when multiple asset reader outputs are attached.
This property throws an exception if a value is set after reading has started (the asset reader has progressed beyond AVAssetReaderStatusUnknown) or after an AVAssetReaderOutput.Provider is attached.
pub unsafe fn setSupportsRandomAccess(&self, supports_random_access: bool)
Setter for supportsRandomAccess.
pub unsafe fn resetForReadingTimeRanges(&self, time_ranges: &NSArray<NSValue>)
Starts reading over with a new set of time ranges.
Parameter timeRanges: An NSArray of NSValue objects, each representing a single CMTimeRange structure
This method may only be used if supportsRandomAccess has been set to YES and may not be called after -markConfigurationAsFinal has been invoked.
This method is often used in conjunction with AVAssetWriter multi-pass (see AVAssetWriterInput category AVAssetWriterInputMultiPass). In this usage, the caller will invoke -copyNextSampleBuffer until that method returns NULL and then ask the AVAssetWriterInput for a set of time ranges from which it thinks media data should be re-encoded. These time ranges are then given to this method to set up the asset reader output for the next pass.
The time ranges set here override the time range set on AVAssetReader.timeRange. Just as with that property, for each time range in the array the intersection of that time range and CMTimeRangeMake(kCMTimeZero, asset.duration) will take effect.
If this method is invoked after the status of the attached AVAssetReader has become AVAssetReaderStatusFailed or AVAssetReaderStatusCancelled, no change in status will occur and the result of the next call to -copyNextSampleBuffer will be NULL.
This method throws an exception if the following conditions are not honored:
- each item in time ranges must be an NSValue
- the start of each time range must be numeric - see CMTIME_IS_NUMERIC
- the duration of each time range must be nonnegative and numeric, or kCMTimePositiveInfinity
- the start of each time range must be greater than or equal to the end of the previous time range
- start times must be strictly increasing
- time ranges must not overlap
- cannot be called before -startReading has been invoked on the attached asset reader
- cannot be called until all samples of media data have been read (i.e. copyNextSampleBuffer returns NULL and the asset reader has not entered a failure state)
- cannot be called without setting “supportsRandomAccess” to YES
- cannot be called after calling -markConfigurationAsFinal
pub unsafe fn markConfigurationAsFinal(&self)
Informs the receiver that no more reconfiguration of time ranges is necessary and allows the attached AVAssetReader to advance to AVAssetReaderStatusCompleted.
When the value of supportsRandomAccess is YES, the attached asset reader will not advance to AVAssetReaderStatusCompleted until this method is called.
When the destination of media data vended by the receiver is an AVAssetWriterInput configured for multi-pass encoding, a convenient time to invoke this method is after the asset writer input indicates that no more passes will be performed.
Once this method has been called, further invocations of -resetForReadingTimeRanges: are disallowed.
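Putting supportsRandomAccess, resetForReadingTimeRanges and markConfigurationAsFinal together, a multi-pass style flow might look like the sketch below. Producing the NSArray<NSValue> of CMTimeRange values is platform-specific and is passed in here; supportsRandomAccess must have been enabled before the reader started.
use objc2_av_foundation::AVAssetReaderOutput;
use objc2_foundation::{NSArray, NSValue};

unsafe fn reread_ranges(output: &AVAssetReaderOutput, ranges: &NSArray<NSValue>) {
    // Precondition (done before -startReading): output.setSupportsRandomAccess(true);

    // First pass: read until no more samples are vended.
    while output.copyNextSampleBuffer().is_some() {}

    // Reconfigure with strictly increasing, non-overlapping CMTimeRange values
    // and read the requested ranges again.
    output.resetForReadingTimeRanges(ranges);
    while output.copyNextSampleBuffer().is_some() {}

    // No further passes: let the attached reader advance to the completed status.
    output.markConfigurationAsFinal();
}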
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations
impl AsRef<AVAssetReaderOutput> for AVAssetReaderVideoCompositionOutput
fn as_ref(&self) -> &AVAssetReaderOutput
impl Borrow<AVAssetReaderOutput> for AVAssetReaderVideoCompositionOutput
fn borrow(&self) -> &AVAssetReaderOutput
impl ClassType for AVAssetReaderVideoCompositionOutput
const NAME: &'static str = "AVAssetReaderVideoCompositionOutput"
type Super = AVAssetReaderOutput
type ThreadKind = <<AVAssetReaderVideoCompositionOutput as ClassType>::Super as ClassType>::ThreadKind
impl NSObjectProtocol for AVAssetReaderVideoCompositionOutput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.