pub struct AVAssetReaderTrackOutput { /* private fields */ }
Available on crate feature AVAssetReaderOutput only.
AVAssetReaderTrackOutput is a concrete subclass of AVAssetReaderOutput that defines an interface for reading media data from a single AVAssetTrack of an AVAssetReader’s AVAsset.
Clients can read the media data of an asset track by adding an instance of AVAssetReaderTrackOutput to an AVAssetReader using the -[AVAssetReader addOutput:] method. The track’s media samples can either be read in the format in which they are stored in the asset, or they can be converted to a different format.
See also Apple’s documentation
Implementations
impl AVAssetReaderTrackOutput
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn new() -> Retained<Self>
pub unsafe fn assetReaderTrackOutputWithTrack_outputSettings(
    track: &AVAssetTrack,
    output_settings: Option<&NSDictionary<NSString, AnyObject>>,
) -> Retained<Self>
Available on crate feature AVAssetTrack only.
Returns an instance of AVAssetReaderTrackOutput for reading from the specified track and supplying media data according to the specified output settings.
Parameter track: The AVAssetTrack from which the resulting AVAssetReaderTrackOutput should read sample buffers.
Parameter outputSettings: An NSDictionary of output settings to be used for sample output. See AVAudioSettings.h for available output settings for audio tracks or AVVideoSettings.h for available output settings for video tracks and also for more information about how to construct an output settings dictionary.
Returns: An instance of AVAssetReaderTrackOutput.
The track must be one of the tracks contained by the target AVAssetReader’s asset.
A value of nil for outputSettings configures the output to vend samples in their original format as stored by the specified track. Initialization will fail if the output settings cannot be used with the specified track.
AVAssetReaderTrackOutput can only produce uncompressed output. For audio output settings, this means that AVFormatIDKey must be kAudioFormatLinearPCM. For video output settings, this means that the dictionary must follow the rules for uncompressed video output, as laid out in AVVideoSettings.h. AVAssetReaderTrackOutput does not support the AVAudioSettings.h key AVSampleRateConverterAudioQualityKey or the following AVVideoSettings.h keys:
- AVVideoCleanApertureKey
- AVVideoPixelAspectRatioKey
- AVVideoScalingModeKey
When constructing video output settings the choice of pixel format will affect the performance and quality of the decompression. For optimal performance when decompressing video the requested pixel format should be one that the decoder supports natively to avoid unnecessary conversions. Below are some recommendations:
For H.264 use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, or kCVPixelFormatType_420YpCbCr8BiPlanarFullRange if the video is known to be full range. For JPEG on iOS, use kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
For other codecs on macOS, kCVPixelFormatType_422YpCbCr8 is the preferred pixel format for video and is generally the most performant when decoding. If you need to work in the RGB domain, kCVPixelFormatType_32BGRA is recommended.
ProRes-encoded media can contain up to 12 bits per channel. If your source is ProRes-encoded and you wish to preserve more than 8 bits per channel during decompression, use one of the following pixel formats: kCVPixelFormatType_4444AYpCbCr16, kCVPixelFormatType_422YpCbCr16, kCVPixelFormatType_422YpCbCr10, or kCVPixelFormatType_64ARGB. AVAssetReader does not support scaling with any of these high-bit-depth pixel formats. If you use them, do not specify kCVPixelBufferWidthKey or kCVPixelBufferHeightKey in your outputSettings dictionary. If you plan to append these sample buffers to an AVAssetWriterInput, note that only the ProRes encoders support these pixel formats.
ProRes 4444 encoded media can contain a mathematically lossless alpha channel. To preserve the alpha channel during decompression use a pixel format with an alpha component such as kCVPixelFormatType_4444AYpCbCr16 or kCVPixelFormatType_64ARGB. To test whether your source contains an alpha channel check that the track’s format description has kCMFormatDescriptionExtension_Depth and that its value is 32.
Safety
The output_settings generic must be of the correct type.
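As a rough sketch of how this constructor might be called from Rust — hedged, not verbatim crate API: it assumes the objc2-av-foundation crate with the relevant features enabled, that `addOutput` mirrors the Objective-C selector as the crate's other methods do, and that `reader` and `track` come from the same asset:

```rust
use objc2::rc::Retained;
use objc2_av_foundation::{AVAssetReader, AVAssetReaderTrackOutput, AVAssetTrack};

/// Create a passthrough track output (`None` output settings vends samples
/// in their stored format) and attach it to the reader.
unsafe fn attach_passthrough_output(
    reader: &AVAssetReader,
    track: &AVAssetTrack,
) -> Retained<AVAssetReaderTrackOutput> {
    let output = AVAssetReaderTrackOutput::assetReaderTrackOutputWithTrack_outputSettings(
        track, // must be one of the tracks of the reader's own asset
        None,  // nil settings: original format, no conversion can fail
    );
    reader.addOutput(&output); // -[AVAssetReader addOutput:] (assumed binding name)
    output
}
```

Passing `None` sidesteps the settings-compatibility failure modes described above, at the cost of receiving samples in whatever format the track stores.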
pub unsafe fn initWithTrack_outputSettings(
    this: Allocated<Self>,
    track: &AVAssetTrack,
    output_settings: Option<&NSDictionary<NSString, AnyObject>>,
) -> Retained<Self>
Available on crate feature AVAssetTrack only.
Returns an instance of AVAssetReaderTrackOutput for reading from the specified track and supplying media data according to the specified output settings.
Parameter track: The AVAssetTrack from which the resulting AVAssetReaderTrackOutput should read sample buffers.
Parameter outputSettings: An NSDictionary of output settings to be used for sample output. See AVAudioSettings.h for available output settings for audio tracks or AVVideoSettings.h for available output settings for video tracks and also for more information about how to construct an output settings dictionary.
Returns: An instance of AVAssetReaderTrackOutput.
The track must be one of the tracks contained by the target AVAssetReader’s asset.
A value of nil for outputSettings configures the output to vend samples in their original format as stored by the specified track. Initialization will fail if the output settings cannot be used with the specified track.
AVAssetReaderTrackOutput can only produce uncompressed output. For audio output settings, this means that AVFormatIDKey must be kAudioFormatLinearPCM. For video output settings, this means that the dictionary must follow the rules for uncompressed video output, as laid out in AVVideoSettings.h. AVAssetReaderTrackOutput does not support the AVAudioSettings.h key AVSampleRateConverterAudioQualityKey or the following AVVideoSettings.h keys:
- AVVideoCleanApertureKey
- AVVideoPixelAspectRatioKey
- AVVideoScalingModeKey
When constructing video output settings the choice of pixel format will affect the performance and quality of the decompression. For optimal performance when decompressing video the requested pixel format should be one that the decoder supports natively to avoid unnecessary conversions. Below are some recommendations:
For H.264 use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, or kCVPixelFormatType_420YpCbCr8BiPlanarFullRange if the video is known to be full range. For JPEG on iOS, use kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
For other codecs on macOS, kCVPixelFormatType_422YpCbCr8 is the preferred pixel format for video and is generally the most performant when decoding. If you need to work in the RGB domain, kCVPixelFormatType_32BGRA is recommended.
ProRes-encoded media can contain up to 12 bits per channel. If your source is ProRes-encoded and you wish to preserve more than 8 bits per channel during decompression, use one of the following pixel formats: kCVPixelFormatType_4444AYpCbCr16, kCVPixelFormatType_422YpCbCr16, kCVPixelFormatType_422YpCbCr10, or kCVPixelFormatType_64ARGB. AVAssetReader does not support scaling with any of these high-bit-depth pixel formats. If you use them, do not specify kCVPixelBufferWidthKey or kCVPixelBufferHeightKey in your outputSettings dictionary. If you plan to append these sample buffers to an AVAssetWriterInput, note that only the ProRes encoders support these pixel formats.
ProRes 4444 encoded media can contain a mathematically lossless alpha channel. To preserve the alpha channel during decompression use a pixel format with an alpha component such as kCVPixelFormatType_4444AYpCbCr16 or kCVPixelFormatType_64ARGB. To test whether your source contains an alpha channel check that the track’s format description has kCMFormatDescriptionExtension_Depth and that its value is 32.
This method throws an exception for any of the following reasons:
- the output settings dictionary contains an unsupported key mentioned above
- the output settings dictionary does not contain any recognized key
- output settings are not compatible with track’s media type
- track output settings would cause the output to yield compressed samples
Safety
The output_settings generic must be of the correct type.
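For decoded video, the pixel format is requested through kCVPixelBufferPixelFormatTypeKey in the output settings dictionary. The following is a hedged sketch only: the `NSDictionary::from_slices` helper and the raw key string "PixelFormatType" (the assumed string value of kCVPixelBufferPixelFormatTypeKey) should be checked against your objc2-foundation version and the CoreVideo headers:

```rust
use objc2::rc::Retained;
use objc2::runtime::AnyObject;
use objc2_foundation::{ns_string, NSDictionary, NSNumber, NSString};

// Four-char code '420v' = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.
const PIXEL_FORMAT_420V: u32 = 0x3432_3076;

/// Build video output settings requesting the decoder-friendly bi-planar
/// 4:2:0 video-range format recommended above.
fn video_output_settings() -> Retained<NSDictionary<NSString, AnyObject>> {
    let key = ns_string!("PixelFormatType"); // assumed kCVPixelBufferPixelFormatTypeKey
    let value = NSNumber::new_u32(PIXEL_FORMAT_420V);
    let obj: &AnyObject = &value; // deref coercion NSNumber → … → AnyObject
    NSDictionary::from_slices(&[key], &[obj])
}
```

The resulting dictionary would be passed as the `output_settings` argument; since the dictionary requests only an uncompressed pixel format, it stays within the rules laid out above.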
pub unsafe fn track(&self) -> Retained<AVAssetTrack>
Available on crate feature AVAssetTrack only.
The track from which the receiver reads sample buffers.
The value of this property is an AVAssetTrack owned by the target AVAssetReader’s asset.
pub unsafe fn outputSettings(&self) -> Option<Retained<NSDictionary<NSString, AnyObject>>>
The output settings used by the receiver.
The value of this property is an NSDictionary that contains values for keys as specified by either AVAudioSettings.h for audio tracks or AVVideoSettings.h for video tracks. A value of nil indicates that the receiver will vend samples in their original format as stored in the target track.
pub unsafe fn audioTimePitchAlgorithm(&self) -> Retained<AVAudioTimePitchAlgorithm>
Available on crate feature AVAudioProcessingSettings only.
Indicates the processing algorithm used to manage audio pitch for scaled audio edits.
Constants for various time pitch algorithms, e.g. AVAudioTimePitchAlgorithmSpectral, are defined in AVAudioProcessingSettings.h. An NSInvalidArgumentException will be raised if this property is set to a value other than the constants defined in that file.
The default value is AVAudioTimePitchAlgorithmSpectral.
This property throws an exception for any of the following reasons:
- a value is set after reading has started
- a value is set other than AVAudioTimePitchAlgorithmSpectral, AVAudioTimePitchAlgorithmTimeDomain, or AVAudioTimePitchAlgorithmVarispeed.
pub unsafe fn setAudioTimePitchAlgorithm(&self, audio_time_pitch_algorithm: &AVAudioTimePitchAlgorithm)
Available on crate feature AVAudioProcessingSettings only.
Setter for audioTimePitchAlgorithm.
This is copied when set.
Methods from Deref<Target = AVAssetReaderOutput>
pub unsafe fn mediaType(&self) -> Retained<AVMediaType>
Available on crate feature AVMediaFormat only.
The media type of the samples that can be read from the receiver.
The value of this property is one of the media type strings defined in AVMediaFormat.h.
pub unsafe fn alwaysCopiesSampleData(&self) -> bool
Indicates whether or not the data in buffers gets copied before being vended to the client.
When the value of this property is YES, the AVAssetReaderOutput will always vend a buffer with copied data to the client. Data in such buffers can be freely modified by the client. When the value of this property is NO, the buffers vended to the client may not be copied. Such buffers may still be referenced by other entities. The result of modifying a buffer whose data hasn’t been copied is undefined. Requesting buffers whose data hasn’t been copied when possible can lead to performance improvements.
The default value is YES.
This property throws an exception if a value is set after reading has started (the asset reader has progressed beyond AVAssetReaderStatusUnknown).
pub unsafe fn setAlwaysCopiesSampleData(&self, always_copies_sample_data: bool)
Setter for alwaysCopiesSampleData.
pub unsafe fn copyNextSampleBuffer(&self) -> Option<Retained<CMSampleBuffer>>
Available on crate feature objc2-core-media only.
Copies the next sample buffer for the output synchronously.
Returns: A CMSampleBuffer object referencing the output sample buffer.
The client is responsible for calling CFRelease on the returned CMSampleBuffer object when finished with it. This method will return NULL if there are no more sample buffers available for the receiver within the time range specified by its AVAssetReader’s timeRange property, or if there is an error that prevents the AVAssetReader from reading more media data. When this method returns NULL, clients should check the value of the associated AVAssetReader’s status property to determine why no more samples could be read.
In certain configurations, such as when outputSettings is nil, copyNextSampleBuffer may return marker-only sample buffers as well as sample buffers containing media data. Marker-only sample buffers can be identified by CMSampleBufferGetNumSamples returning 0. Clients who do not need the information attached to marker-only sample buffers may skip them.
This method throws an exception if this output is not added to an instance of AVAssetReader (using -addOutput:) and -startReading is not called on that asset reader.
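The behavior described above might translate into a read loop like the following sketch (hedged: it assumes `reader` and `output` are attached and `startReading` succeeded, and that `status` is exposed on the binding with that name). One Rust-specific point worth noting: with `Retained<CMSampleBuffer>`, the CFRelease mentioned above happens automatically when the buffer is dropped.

```rust
use objc2_av_foundation::{AVAssetReader, AVAssetReaderTrackOutput};

/// Pull sample buffers until the output is exhausted or the reader fails.
unsafe fn drain(reader: &AVAssetReader, output: &AVAssetReaderTrackOutput) {
    while let Some(sample) = output.copyNextSampleBuffer() {
        // When outputSettings is nil, marker-only buffers may appear here;
        // they report 0 via CMSampleBufferGetNumSamples and can be skipped
        // if the attached information is not needed.
        // ... process `sample` ...
        let _ = sample; // Retained: released automatically on drop
    }
    // None means either end of the configured time range or failure; the
    // reader's status property distinguishes the two cases.
    let _status = reader.status();
}
```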
pub unsafe fn supportsRandomAccess(&self) -> bool
Indicates whether the asset reader output supports reconfiguration of the time ranges to read.
When the value of this property is YES, the time ranges read by the asset reader output can be reconfigured during reading using the -resetForReadingTimeRanges: method. This also prevents the attached AVAssetReader from progressing to AVAssetReaderStatusCompleted until -markConfigurationAsFinal has been invoked.
The default value is NO, which means that the asset reader output may not be reconfigured once reading has begun. When the value of this property is NO, AVAssetReader may be able to read media data more efficiently, particularly when multiple asset reader outputs are attached.
This property throws an exception if a value is set after reading has started (the asset reader has progressed beyond AVAssetReaderStatusUnknown) or after an AVAssetReaderOutput.Provider is attached.
pub unsafe fn setSupportsRandomAccess(&self, supports_random_access: bool)
Setter for supportsRandomAccess.
pub unsafe fn resetForReadingTimeRanges(&self, time_ranges: &NSArray<NSValue>)
Starts reading over with a new set of time ranges.
Parameter timeRanges: An NSArray of NSValue objects, each representing a single CMTimeRange structure.
This method may only be used if supportsRandomAccess has been set to YES and may not be called after -markConfigurationAsFinal has been invoked.
This method is often used in conjunction with AVAssetWriter multi-pass (see AVAssetWriterInput category AVAssetWriterInputMultiPass). In this usage, the caller will invoke -copyNextSampleBuffer until that method returns NULL and then ask the AVAssetWriterInput for a set of time ranges from which it thinks media data should be re-encoded. These time ranges are then given to this method to set up the asset reader output for the next pass.
The time ranges set here override the time range set on AVAssetReader.timeRange. Just as with that property, for each time range in the array the intersection of that time range and CMTimeRangeMake(kCMTimeZero, asset.duration) will take effect.
If this method is invoked after the status of the attached AVAssetReader has become AVAssetReaderStatusFailed or AVAssetReaderStatusCancelled, no change in status will occur and the result of the next call to -copyNextSampleBuffer will be NULL.
This method throws an exception if the following conditions are not honored:
- each item in time ranges must be an NSValue
- the start of each time range must be numeric - see CMTIME_IS_NUMERIC
- the duration of each time range must be nonnegative and numeric, or kCMTimePositiveInfinity
- the start of each time range must be greater than or equal to the end of the previous time range
- start times must be strictly increasing
- time ranges must not overlap
- cannot be called before -startReading has been invoked on the attached asset reader
- cannot be called until all samples of media data have been read (i.e. copyNextSampleBuffer returns NULL and the asset reader has not entered a failure state)
- cannot be called without setting “supportsRandomAccess” to YES
- cannot be called after calling -markConfigurationAsFinal
pub unsafe fn markConfigurationAsFinal(&self)
Informs the receiver that no more reconfiguration of time ranges is necessary and allows the attached AVAssetReader to advance to AVAssetReaderStatusCompleted.
When the value of supportsRandomAccess is YES, the attached asset reader will not advance to AVAssetReaderStatusCompleted until this method is called.
When the destination of media data vended by the receiver is an AVAssetWriterInput configured for multi-pass encoding, a convenient time to invoke this method is after the asset writer input indicates that no more passes will be performed.
Once this method has been called, further invocations of -resetForReadingTimeRanges: are disallowed.
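The random-access workflow described above might be sketched as follows (hedged: method names are assumed to mirror the Objective-C selectors as elsewhere in the crate, and `ranges` is a hypothetical prebuilt NSArray<NSValue> of CMTimeRange values — the NSValue construction helper is deliberately omitted):

```rust
use objc2_av_foundation::{AVAssetReader, AVAssetReaderTrackOutput};
use objc2_foundation::{NSArray, NSValue};

/// Two-pass reading skeleton over a single output.
unsafe fn two_pass(
    reader: &AVAssetReader,
    output: &AVAssetReaderTrackOutput,
    ranges: &NSArray<NSValue>, // each element wraps a CMTimeRange
) {
    output.setSupportsRandomAccess(true); // must be set before startReading
    let _ok = reader.startReading();

    while output.copyNextSampleBuffer().is_some() {} // drain first pass

    output.resetForReadingTimeRanges(ranges); // only legal once pass 1 is drained
    while output.copyNextSampleBuffer().is_some() {} // second pass

    output.markConfigurationAsFinal(); // allows the reader to reach Completed
}
```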
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎 Deprecated: this is difficult to use correctly, use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.
See Apple’s documentation on mutability and on isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations
impl AsRef<AVAssetReaderOutput> for AVAssetReaderTrackOutput
fn as_ref(&self) -> &AVAssetReaderOutput
impl AsRef<AnyObject> for AVAssetReaderTrackOutput
impl AsRef<NSObject> for AVAssetReaderTrackOutput
impl Borrow<AVAssetReaderOutput> for AVAssetReaderTrackOutput
fn borrow(&self) -> &AVAssetReaderOutput
impl Borrow<AnyObject> for AVAssetReaderTrackOutput
impl Borrow<NSObject> for AVAssetReaderTrackOutput
impl ClassType for AVAssetReaderTrackOutput
const NAME: &'static str = "AVAssetReaderTrackOutput"
type Super = AVAssetReaderOutput
type ThreadKind = <<AVAssetReaderTrackOutput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVAssetReaderTrackOutput
impl Deref for AVAssetReaderTrackOutput
impl Hash for AVAssetReaderTrackOutput
impl Message for AVAssetReaderTrackOutput
impl NSObjectProtocol for AVAssetReaderTrackOutput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎 Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.