#[repr(C)]
pub struct AVSampleBufferAudioRenderer { /* private fields */ }
Available on crate feature AVSampleBufferAudioRenderer only.
Implementations§
impl AVSampleBufferAudioRenderer
pub unsafe fn status(&self) -> AVQueuedSampleBufferRenderingStatus
Available on crate feature AVQueuedSampleBufferRendering only.
pub unsafe fn error(&self) -> Option<Retained<NSError>>
pub unsafe fn audioOutputDeviceUniqueID(&self) -> Option<Retained<NSString>>
Specifies the unique ID of the Core Audio output device used to play audio.
By default, the value of this property is nil, indicating that the default audio output device is used. Otherwise the value of this property is an NSString containing the unique ID of the Core Audio output device to be used for audio output.
Core Audio’s kAudioDevicePropertyDeviceUID is a suitable source of audio output device unique IDs.
Modifying this property while the timebase’s rate is not 0.0 may cause the rate to briefly change to 0.0.
On macOS, the audio device’s clock may be used as the clock for the timebases of the AVSampleBufferRenderSynchronizer and of all attached AVQueuedSampleBufferRendering objects. If audioOutputDeviceUniqueID is modified, the clocks of all of these timebases may also change.
If multiple AVSampleBufferAudioRenderers with different values for audioOutputDeviceUniqueID are attached to the same AVSampleBufferRenderSynchronizer, audio may not stay in sync during playback. To avoid this, ensure that all synchronized AVSampleBufferAudioRenderers are using the same audio output device.
pub unsafe fn setAudioOutputDeviceUniqueID(
    &self,
    audio_output_device_unique_id: Option<&NSString>,
)
Setter for audioOutputDeviceUniqueID.
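The following is a rough usage sketch, not part of the crate’s documentation. It assumes a renderer created elsewhere, and uses a placeholder device UID; a real UID would come from Core Audio’s kAudioDevicePropertyDeviceUID.

use objc2_foundation::NSString;

// `renderer` is assumed to be an AVSampleBufferAudioRenderer created elsewhere.
// The UID below is a placeholder, not a real Core Audio device UID.
let device_uid = NSString::from_str("placeholder-device-uid");
unsafe { renderer.setAudioOutputDeviceUniqueID(Some(&device_uid)) };

// Passing None reverts to the system default output device.
unsafe { renderer.setAudioOutputDeviceUniqueID(None) };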
pub unsafe fn audioTimePitchAlgorithm(&self) -> Retained<AVAudioTimePitchAlgorithm>
Available on crate feature AVAudioProcessingSettings only.
Indicates the processing algorithm used to manage audio pitch at varying rates.
Constants for various time pitch algorithms, e.g. AVAudioTimePitchSpectral, are defined in AVAudioProcessingSettings.h.
The default value for applications linked on or after iOS 15.0 or macOS 12.0 is AVAudioTimePitchAlgorithmTimeDomain. For iOS versions prior to 15.0 the default value is AVAudioTimePitchAlgorithmLowQualityZeroLatency. For macOS versions prior to 12.0 the default value is AVAudioTimePitchAlgorithmSpectral.
If the timebase’s rate is not supported by the audioTimePitchAlgorithm, audio will be muted.
Modifying this property while the timebase’s rate is not 0.0 may cause the rate to briefly change to 0.0.
pub unsafe fn setAudioTimePitchAlgorithm(
    &self,
    audio_time_pitch_algorithm: &AVAudioTimePitchAlgorithm,
)
Available on crate feature AVAudioProcessingSettings only.
Setter for audioTimePitchAlgorithm.
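A rough sketch, assuming the crate re-exports the Objective-C time pitch constants (such as AVAudioTimePitchAlgorithmTimeDomain) as extern statics under the objc2_av_foundation path; verify the generated names before relying on them.

// Assumed import path and constant name; check the crate's generated bindings.
use objc2_av_foundation::AVAudioTimePitchAlgorithmTimeDomain;

unsafe {
    // If the timebase's rate is not supported by this algorithm, audio is muted.
    renderer.setAudioTimePitchAlgorithm(AVAudioTimePitchAlgorithmTimeDomain);
}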
pub unsafe fn allowedAudioSpatializationFormats(&self) -> AVAudioSpatializationFormats
Available on crate feature AVAudioProcessingSettings only.
Indicates the source audio channel layouts allowed by the receiver for spatialization.
Spatialization uses psychoacoustic methods to create a more immersive audio rendering when the content is played on specialized headphones and speaker arrangements.

When an AVSampleBufferAudioRenderer’s allowedAudioSpatializationFormats property is set to AVAudioSpatializationFormatMonoAndStereo, the AVSampleBufferAudioRenderer will attempt to spatialize content tagged with a stereo channel layout, two-channel content with no layout specified, as well as mono. It is considered incorrect to render a binaural recording with spatialization. A binaural recording is captured using two carefully placed microphones at each ear, where the intent, when played on headphones, is to reproduce a naturally occurring spatial effect. Content tagged with a binaural channel layout will ignore this property value.

When an AVSampleBufferAudioRenderer’s allowedAudioSpatializationFormats property is set to AVAudioSpatializationFormatMultichannel, the AVSampleBufferAudioRenderer will attempt to spatialize any decodable multichannel layout. Setting this property to AVAudioSpatializationFormatMonoStereoAndMultichannel indicates that the sender allows the AVSampleBufferAudioRenderer to spatialize any decodable mono, stereo or multichannel layout.

This property is not observable. The default value for this property is AVAudioSpatializationFormatMultichannel.
pub unsafe fn setAllowedAudioSpatializationFormats(
    &self,
    allowed_audio_spatialization_formats: AVAudioSpatializationFormats,
)
Available on crate feature AVAudioProcessingSettings only.
Setter for allowedAudioSpatializationFormats.
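A rough sketch, assuming the crate generates AVAudioSpatializationFormats as a bitflags-style struct with associated constants; the exact constant names are an assumption to verify against the generated bindings.

// Assumed import path and constant name; check the crate's generated bindings.
use objc2_av_foundation::AVAudioSpatializationFormats;

unsafe {
    // Restrict spatialization to mono and stereo sources.
    renderer.setAllowedAudioSpatializationFormats(
        AVAudioSpatializationFormats::MonoAndStereo,
    );
}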
impl AVSampleBufferAudioRenderer
Methods declared on superclass NSObject.
impl AVSampleBufferAudioRenderer
AVSampleBufferAudioRendererVolumeControl.
impl AVSampleBufferAudioRenderer
AVSampleBufferAudioRendererQueueManagement.
pub unsafe fn flushFromSourceTime_completionHandler(
    &self,
    time: CMTime,
    completion_handler: &Block<dyn Fn(Bool)>,
)
Available on crate features block2 and objc2-core-media only.
Flushes enqueued sample buffers with presentation time stamps later than or equal to the specified time.
Parameter completionHandler: A block that is invoked, possibly asynchronously, after the flush operation completes or fails.
This method can be used to replace media data scheduled to be rendered in the future, without interrupting playback. One example of this is when the data that has already been enqueued is from a sequence of two songs and the second song is swapped for a new song. In this case, this method would be called with the time stamp of the first sample buffer from the second song. After the completion handler is executed with a YES parameter, media data may again be enqueued with timestamps at the specified time.
If NO is provided to the completion handler, the flush did not succeed and the set of enqueued sample buffers remains unchanged. A flush can fail because the source time was too close to (or earlier than) the current time, or because the current configuration of the receiver does not support flushing at a particular time. In these cases, the caller can choose to flush all enqueued media data by invoking the -flush method.
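A rough sketch of the song-replacement pattern described above, assuming a renderer and a CMTime (second_song_start, e.g. taken from the first sample buffer of the replacement song) obtained elsewhere.

use block2::RcBlock;
use objc2::runtime::Bool;

// `renderer` and `second_song_start` (a CMTime) are assumed to exist already.
let handler = RcBlock::new(|flushed: Bool| {
    if flushed.as_bool() {
        // Flush succeeded: sample buffers at the flushed time may be enqueued
        // again, so enqueue the replacement song's data here.
    } else {
        // Flush failed; the enqueued data is unchanged. Fall back to `flush()`
        // if replacing all enqueued media data is acceptable.
    }
});
unsafe {
    renderer.flushFromSourceTime_completionHandler(second_song_start, &handler);
}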
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations§
impl AVQueuedSampleBufferRendering for AVSampleBufferAudioRenderer
Available on crate feature AVQueuedSampleBufferRendering only.

unsafe fn timebase(&self) -> Retained<CMTimebase>
Available on crate feature objc2-core-media only.
unsafe fn enqueueSampleBuffer(&self, sample_buffer: &CMSampleBuffer)
Available on crate feature objc2-core-media only.
unsafe fn flush(&self)
unsafe fn isReadyForMoreMediaData(&self) -> bool
unsafe fn stopRequestingMediaData(&self)
impl AsRef<AnyObject> for AVSampleBufferAudioRenderer
impl AsRef<NSObject> for AVSampleBufferAudioRenderer
impl Borrow<NSObject> for AVSampleBufferAudioRenderer
impl ClassType for AVSampleBufferAudioRenderer
const NAME: &'static str = "AVSampleBufferAudioRenderer"
type ThreadKind = <<AVSampleBufferAudioRenderer as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVSampleBufferAudioRenderer
impl Deref for AVSampleBufferAudioRenderer
impl Hash for AVSampleBufferAudioRenderer
impl NSObjectProtocol for AVSampleBufferAudioRenderer
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.