pub struct AVAudioPlayerNode { /* private fields */ }
Available on crate features AVAudioNode and AVAudioPlayerNode only.
Play buffers or segments of audio files.
AVAudioPlayerNode supports scheduling the playback of AVAudioBuffer instances,
or segments of audio files opened via AVAudioFile. Buffers and segments may be
scheduled at specific points in time, or to play immediately following preceding segments.
FORMATS
Normally, you will want to configure the node’s output format with the same number of
channels as are in the files and buffers to be played. Otherwise, channels will be dropped
or added as required. It is usually better to use an AVAudioMixerNode to
do this.
Similarly, when playing file segments, the node will sample rate convert if necessary, but it is often preferable to configure the node’s output sample rate to match that of the file(s) and use a mixer to perform the rate conversion.
When playing buffers, there is an implicit assumption that the buffers are at the same sample rate as the node’s output format.
TIMELINES
The usual AVAudioNode sample times (as observed by lastRenderTime)
have an arbitrary zero point. AVAudioPlayerNode superimposes a second “player timeline” on
top of this, to reflect when the player was started, and intervals during which it was
paused. The methods nodeTimeForPlayerTime: and playerTimeForNodeTime:
convert between the two.
This class’ stop method unschedules all previously scheduled buffers and
file segments, and returns the player timeline to sample time 0.
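The relationship between the two timelines can be sketched with plain sample-frame arithmetic. This is a simplified model of ours, not the crate's API: the player timeline lags the node timeline by the node time at which playback started plus any time spent paused.

```rust
// Simplified model (not part of objc2-av-foundation): both timelines are
// measured in sample frames; the player timeline is the node timeline
// shifted by the start time and by the total time spent paused.
#[derive(Debug, Clone, Copy)]
struct PlayerTimeline {
    start_node_time: i64, // node sample time when the player was started
    paused_samples: i64,  // total sample frames elapsed while paused
}

impl PlayerTimeline {
    fn player_time_for_node_time(&self, node_time: i64) -> i64 {
        node_time - self.start_node_time - self.paused_samples
    }

    fn node_time_for_player_time(&self, player_time: i64) -> i64 {
        player_time + self.start_node_time + self.paused_samples
    }
}

fn main() {
    let t = PlayerTimeline { start_node_time: 48_000, paused_samples: 24_000 };
    // 120_000 frames on the node timeline = 48_000 frames into playback
    assert_eq!(t.player_time_for_node_time(120_000), 48_000);
    assert_eq!(t.node_time_for_player_time(48_000), 120_000);
}
```

The two methods are exact inverses of each other, which is why `stop` (which resets the player timeline to 0) invalidates previously computed player times.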
TIMESTAMPS
The “schedule” methods all take an AVAudioTime “when” parameter. This is
interpreted as follows:
- nil:
- if there have been previous commands, the new one is played immediately following the last one.
- otherwise, if the node is playing, the event is played in the very near future.
- otherwise, the command is played at sample time 0.
- sample time:
- relative to the node’s start time (which begins at 0 when the node is started).
- host time:
- ignored unless the sample time is invalid when the engine is rendering to an audio device.
- ignored in manual rendering mode.
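The nil-handling rules above can be expressed as a small resolver. This is a hypothetical sketch (the names and types are ours, not the framework's) of where a newly scheduled command starts:

```rust
// Hypothetical sketch of the "when" interpretation rules (not framework code).
#[derive(Clone, Copy)]
enum When {
    Nil,
    SampleTime(i64), // relative to the node's start time
}

fn resolve_start(
    when: When,
    last_scheduled_end: Option<i64>, // sample time where the previous command ends
    is_playing: bool,
    current_sample_time: i64,
) -> i64 {
    match when {
        When::SampleTime(t) => t,
        When::Nil => match last_scheduled_end {
            Some(end) => end,                          // right after the last command
            None if is_playing => current_sample_time, // "the very near future"
            None => 0,                                 // sample time 0
        },
    }
}

fn main() {
    assert_eq!(resolve_start(When::Nil, Some(44_100), true, 10_000), 44_100);
    assert_eq!(resolve_start(When::Nil, None, true, 10_000), 10_000);
    assert_eq!(resolve_start(When::Nil, None, false, 10_000), 0);
    assert_eq!(resolve_start(When::SampleTime(22_050), None, false, 10_000), 22_050);
}
```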
ERRORS
The “schedule” methods can fail if:
- a buffer’s channel count does not match that of the node’s output format.
- a file can’t be accessed.
- an AVAudioTime specifies neither a valid sample time nor a valid host time.
- a segment’s start frame or frame count is negative.
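The channel-count and segment-bounds conditions above can be restated as a pre-flight check. This is our own illustrative helper, not part of the crate:

```rust
// Sketch (not framework code) of the schedule-time failure conditions.
fn validate_schedule(
    buffer_channels: Option<u32>, // Some(..) when scheduling a buffer
    output_channels: u32,
    start_frame: i64,  // 0 when scheduling a whole buffer or file
    frame_count: i64,
) -> Result<(), &'static str> {
    if let Some(ch) = buffer_channels {
        if ch != output_channels {
            return Err("buffer channel count does not match the node's output format");
        }
    }
    if start_frame < 0 || frame_count < 0 {
        return Err("segment start frame and frame count must be non-negative");
    }
    Ok(())
}

fn main() {
    assert!(validate_schedule(Some(2), 2, 0, 1024).is_ok());
    assert!(validate_schedule(Some(1), 2, 0, 1024).is_err()); // channel mismatch
    assert!(validate_schedule(None, 2, -1, 1024).is_err());   // negative start frame
}
```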
BUFFER/FILE COMPLETION HANDLERS
The buffer or file completion handlers (see scheduling methods) are a means to schedule
more data if available on the player node. See AVAudioPlayerNodeCompletionCallbackType
for details on the different buffer/file completion callback types.
Note that a player should not be stopped from within a completion handler callback because it can deadlock while trying to unschedule previously scheduled buffers.
OFFLINE RENDERING
When a player node is used with the engine operating in the manual rendering mode, the
buffer/file completion handlers, lastRenderTime and the latencies (latency and
outputPresentationLatency) can be used to track how much data the player has rendered and
how much more data is left to render.
See also Apple’s documentation
Implementations

impl AVAudioPlayerNode
pub unsafe fn init(this: Allocated<Self>) -> Retained<Self>
pub unsafe fn scheduleBuffer_completionHandler(
&self,
buffer: &AVAudioPCMBuffer,
completion_handler: AVAudioNodeCompletionHandler,
)
Available on crate features AVAudioBuffer and AVAudioTypes and block2 only.
Schedule playing samples from an AVAudioBuffer.
Parameter buffer: the buffer to play
Parameter completionHandler: called after the buffer has been consumed by the player or the player is stopped. may be nil.
Schedules the buffer to be played following any previously scheduled commands.
It is possible for the completionHandler to be called before rendering begins or before the buffer is played completely.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleBuffer_completionCallbackType_completionHandler(
&self,
buffer: &AVAudioPCMBuffer,
callback_type: AVAudioPlayerNodeCompletionCallbackType,
completion_handler: AVAudioPlayerNodeCompletionHandler,
)
Available on crate features AVAudioBuffer and block2 only.
Schedule playing samples from an AVAudioBuffer.
Parameter buffer: the buffer to play
Parameter callbackType: option to specify when the completion handler must be called
Parameter completionHandler: called after the buffer has been consumed by the player or has finished playing back or
the player is stopped. may be nil.
Schedules the buffer to be played following any previously scheduled commands.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleBuffer_atTime_options_completionHandler(
&self,
buffer: &AVAudioPCMBuffer,
when: Option<&AVAudioTime>,
options: AVAudioPlayerNodeBufferOptions,
completion_handler: AVAudioNodeCompletionHandler,
)
Available on crate features AVAudioBuffer and AVAudioTime and AVAudioTypes and block2 only.
Schedule playing samples from an AVAudioBuffer.
Parameter buffer: the buffer to play
Parameter when: the time at which to play the buffer. see the discussion of timestamps, above.
Parameter options: options for looping, interrupting other buffers, etc.
Parameter completionHandler: called after the buffer has been consumed by the player or the player is stopped. may be nil.
It is possible for the completionHandler to be called before rendering begins or before the buffer is played completely.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleBuffer_atTime_options_completionCallbackType_completionHandler(
&self,
buffer: &AVAudioPCMBuffer,
when: Option<&AVAudioTime>,
options: AVAudioPlayerNodeBufferOptions,
callback_type: AVAudioPlayerNodeCompletionCallbackType,
completion_handler: AVAudioPlayerNodeCompletionHandler,
)
Available on crate features AVAudioBuffer and AVAudioTime and block2 only.
Schedule playing samples from an AVAudioBuffer.
Parameter buffer: the buffer to play
Parameter when: the time at which to play the buffer. see the discussion of timestamps, above.
Parameter options: options for looping, interrupting other buffers, etc.
Parameter callbackType: option to specify when the completion handler must be called
Parameter completionHandler: called after the buffer has been consumed by the player or has finished playing back or
the player is stopped. may be nil.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleFile_atTime_completionHandler(
&self,
file: &AVAudioFile,
when: Option<&AVAudioTime>,
completion_handler: AVAudioNodeCompletionHandler,
)
Available on crate features AVAudioFile and AVAudioTime and AVAudioTypes and block2 only.
Schedule playing of an entire audio file.
Parameter file: the file to play
Parameter when: the time at which to play the file. see the discussion of timestamps, above.
Parameter completionHandler: called after the file has been consumed by the player or the player is stopped. may be nil.
It is possible for the completionHandler to be called before rendering begins or before the file is played completely.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleFile_atTime_completionCallbackType_completionHandler(
&self,
file: &AVAudioFile,
when: Option<&AVAudioTime>,
callback_type: AVAudioPlayerNodeCompletionCallbackType,
completion_handler: AVAudioPlayerNodeCompletionHandler,
)
Available on crate features AVAudioFile and AVAudioTime and block2 only.
Schedule playing of an entire audio file.
Parameter file: the file to play
Parameter when: the time at which to play the file. see the discussion of timestamps, above.
Parameter callbackType: option to specify when the completion handler must be called
Parameter completionHandler: called after the file has been consumed by the player or has finished playing back or
the player is stopped. may be nil.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleSegment_startingFrame_frameCount_atTime_completionHandler(
&self,
file: &AVAudioFile,
start_frame: AVAudioFramePosition,
number_frames: AVAudioFrameCount,
when: Option<&AVAudioTime>,
completion_handler: AVAudioNodeCompletionHandler,
)
Available on crate features AVAudioFile and AVAudioTime and AVAudioTypes and block2 only.
Schedule playing a segment of an audio file.
Parameter file: the file to play
Parameter startFrame: the starting frame position in the stream
Parameter numberFrames: the number of frames to play
Parameter when: the time at which to play the region. see the discussion of timestamps, above.
Parameter completionHandler: called after the segment has been consumed by the player or the player is stopped. may be nil.
It is possible for the completionHandler to be called before rendering begins or before the segment is played completely.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn scheduleSegment_startingFrame_frameCount_atTime_completionCallbackType_completionHandler(
&self,
file: &AVAudioFile,
start_frame: AVAudioFramePosition,
number_frames: AVAudioFrameCount,
when: Option<&AVAudioTime>,
callback_type: AVAudioPlayerNodeCompletionCallbackType,
completion_handler: AVAudioPlayerNodeCompletionHandler,
)
Available on crate features AVAudioFile and AVAudioTime and AVAudioTypes and block2 only.
Schedule playing a segment of an audio file.
Parameter file: the file to play
Parameter startFrame: the starting frame position in the stream
Parameter numberFrames: the number of frames to play
Parameter when: the time at which to play the region. see the discussion of timestamps, above.
Parameter callbackType: option to specify when the completion handler must be called
Parameter completionHandler: called after the segment has been consumed by the player or has finished playing back or
the player is stopped. may be nil.
§Safety
completion_handler must be a valid pointer or null.
pub unsafe fn stop(&self)
Clear all of the node’s previously scheduled events and stop playback.
All of the node’s previously scheduled events are cleared, including any that are in the middle of playing. The node’s sample time (and therefore the times to which new events are to be scheduled) is reset to 0, and will not proceed until the node is started again (via play or playAtTime).
Note that pausing or stopping all the players connected to an engine does not pause or stop the engine or the underlying hardware. The engine must be explicitly paused or stopped for the hardware to stop.
pub unsafe fn prepareWithFrameCount(&self, frame_count: AVAudioFrameCount)
Available on crate feature AVAudioTypes only.
Prepares previously scheduled file regions or buffers for playback.
Parameter frameCount: The number of sample frames of data to be prepared before returning.
pub unsafe fn playAtTime(&self, when: Option<&AVAudioTime>)
Available on crate feature AVAudioTime only.
Start or resume playback at a specific time.
Parameter when: the node time at which to start or resume playback. nil signifies “now”.
This node is initially paused. Requests to play buffers or file segments are enqueued, and any necessary decoding begins immediately. Playback does not begin, however, until the player has started playing, via this method.
Note that providing an AVAudioTime which is past (before lastRenderTime) will cause the player to begin playback immediately.
E.g. to start a player X seconds in the future:

// start engine and player
NSError *nsErr = nil;
[_engine startAndReturnError:&nsErr];
if (!nsErr) {
    const float kStartDelayTime = 0.5; // sec
    AVAudioFormat *outputFormat = [_player outputFormatForBus:0];
    AVAudioFramePosition startSampleTime = _player.lastRenderTime.sampleTime + kStartDelayTime * outputFormat.sampleRate;
    AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:startSampleTime atRate:outputFormat.sampleRate];
    [_player playAtTime:startTime];
}
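The start-time arithmetic used above is just "delay in seconds times sample rate, added to the last render sample time". A plain-Rust restatement (no AVFoundation types, our own helper name):

```rust
// Sketch of the delayed-start arithmetic: a delay in seconds becomes an
// absolute sample time relative to the node's last render time.
fn start_sample_time(last_render_sample_time: i64, delay_secs: f64, sample_rate: f64) -> i64 {
    last_render_sample_time + (delay_secs * sample_rate).round() as i64
}

fn main() {
    // 0.5 s delay at 44_100 Hz, starting from render sample time 10_000
    assert_eq!(start_sample_time(10_000, 0.5, 44_100.0), 32_050);
}
```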
pub unsafe fn pause(&self)
Pause playback.
The player’s sample time does not advance while the node is paused.
Note that pausing or stopping all the players connected to an engine does not pause or stop the engine or the underlying hardware. The engine must be explicitly paused or stopped for the hardware to stop.
pub unsafe fn nodeTimeForPlayerTime(
&self,
player_time: &AVAudioTime,
) -> Option<Retained<AVAudioTime>>
Available on crate feature AVAudioTime only.
Convert from player time to node time.
Parameter playerTime: a time relative to the player’s start time
Returns: a node time
This method and its inverse playerTimeForNodeTime: are discussed in the
introduction to this class.
If the player is not playing when this method is called, nil is returned.
pub unsafe fn playerTimeForNodeTime(
&self,
node_time: &AVAudioTime,
) -> Option<Retained<AVAudioTime>>
Available on crate feature AVAudioTime only.
Convert from node time to player time.
Parameter nodeTime: a node time
Returns: a time relative to the player’s start time
This method and its inverse nodeTimeForPlayerTime: are discussed in the
introduction to this class.
If the player is not playing when this method is called, nil is returned.
Methods from Deref<Target = AVAudioNode>
pub unsafe fn inputFormatForBus(
&self,
bus: AVAudioNodeBus,
) -> Retained<AVAudioFormat>
Available on crate features AVAudioFormat and AVAudioTypes only.
Obtain an input bus’s format.
pub unsafe fn outputFormatForBus(
&self,
bus: AVAudioNodeBus,
) -> Retained<AVAudioFormat>
Available on crate features AVAudioFormat and AVAudioTypes only.
Obtain an output bus’s format.
pub unsafe fn nameForInputBus(
&self,
bus: AVAudioNodeBus,
) -> Option<Retained<NSString>>
Available on crate feature AVAudioTypes only.
Return the name of an input bus.
pub unsafe fn nameForOutputBus(
&self,
bus: AVAudioNodeBus,
) -> Option<Retained<NSString>>
Available on crate feature AVAudioTypes only.
Return the name of an output bus.
pub unsafe fn installTapOnBus_bufferSize_format_block(
&self,
bus: AVAudioNodeBus,
buffer_size: AVAudioFrameCount,
format: Option<&AVAudioFormat>,
tap_block: AVAudioNodeTapBlock,
)
Available on crate features AVAudioBuffer and AVAudioFormat and AVAudioTime and AVAudioTypes and block2 only.
Create a “tap” to record/monitor/observe the output of the node.
Parameter bus: the node output bus to which to attach the tap
Parameter bufferSize: the requested size of the incoming buffers in sample frames. Supported range is [100, 400] ms.
Parameter format: If non-nil, attempts to apply this as the format of the specified output bus. This should
only be done when attaching to an output bus which is not connected to another node; an
error will result otherwise.
The tap and connection formats (if non-nil) on the specified bus should be identical.
Otherwise, the latter operation will override any previously set format.
Parameter tapBlock: a block to be called with audio buffers
Only one tap may be installed on any bus. Taps may be safely installed and removed while the engine is running.
Note that if you have a tap installed on AVAudioOutputNode, there could be a mismatch between the tap buffer format and AVAudioOutputNode’s output format, depending on the underlying physical device. Hence, instead of tapping the AVAudioOutputNode, it is advised to tap the node connected to it.
E.g. to capture audio from the input node:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = [engine inputNode];
AVAudioFormat *format = [input outputFormatForBus:0];
[input installTapOnBus:0 bufferSize:8192 format:format block:^(AVAudioPCMBuffer *buf, AVAudioTime *when) {
    // 'buf' contains audio captured from the input node at time 'when'
}];
...
// start engine
§Safety
tap_block must be a valid pointer.
pub unsafe fn removeTapOnBus(&self, bus: AVAudioNodeBus)
Available on crate feature AVAudioTypes only.
Destroy a tap.
Parameter bus: the node output bus whose tap is to be destroyed
pub unsafe fn engine(&self) -> Option<Retained<AVAudioEngine>>
Available on crate feature AVAudioEngine only.
The engine to which the node is attached (or nil).
pub unsafe fn numberOfInputs(&self) -> NSUInteger
The node’s number of input busses.
pub unsafe fn numberOfOutputs(&self) -> NSUInteger
The node’s number of output busses.
pub unsafe fn lastRenderTime(&self) -> Option<Retained<AVAudioTime>>
Available on crate feature AVAudioTime only.
Obtain the time for which the node most recently rendered.
Will return nil if the engine is not running or if the node is not connected to an input or output node.
pub unsafe fn AUAudioUnit(&self) -> Retained<AUAudioUnit>
Available on crate feature objc2-audio-toolbox and non-watchOS only.
An AUAudioUnit wrapping or underlying the implementation’s AudioUnit.
This provides an AUAudioUnit which either wraps or underlies the implementation’s AudioUnit, depending on how that audio unit is packaged. Applications can interact with this AUAudioUnit to control custom properties, select presets, change parameters, etc.
No operations that may conflict with state maintained by the engine should be performed directly on the audio unit. These include changing initialization state, stream formats, channel layouts or connections to other audio units.
pub unsafe fn latency(&self) -> NSTimeInterval
The processing latency of the node, in seconds.
This property reflects the delay between when an impulse in the audio stream arrives at the input vs. output of the node. This should reflect the delay due to signal processing (e.g. filters, FFT’s, etc.), not delay or reverberation which is being applied as an effect. A value of zero indicates either no latency or an unknown latency.
pub unsafe fn outputPresentationLatency(&self) -> NSTimeInterval
The maximum render pipeline latency downstream of the node, in seconds.
This describes the maximum time it will take for the audio at the output of a node to be presented. For instance, the output presentation latency of the output node in the engine is:
- zero in manual rendering mode
- the presentation latency of the device itself when rendering to an audio device (see AVAudioIONode presentationLatency)
The output presentation latency of a node connected directly to the output node is the output node’s presentation latency plus the output node’s processing latency (see latency).
For a node which is exclusively in the input node chain (i.e. not connected to engine’s output node), this property reflects the latency for the output of this node to be presented at the output of the terminating node in the input chain.
A value of zero indicates either an unknown or no latency.
Note that this latency value can change as the engine is reconfigured (started/stopped, connections made/altered downstream of this node etc.). So it is recommended not to cache this value and fetch it whenever it’s needed.
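The composition rule above sums processing latencies downstream of the node and adds the device's presentation latency. A sketch (our own helper, not framework code):

```rust
// Sketch of the output-presentation-latency composition rule: sum the
// processing latencies of every node downstream of this one, then add the
// device's own presentation latency (zero in manual rendering mode).
fn output_presentation_latency(downstream_processing_latencies: &[f64], device_latency: f64) -> f64 {
    downstream_processing_latencies.iter().sum::<f64>() + device_latency
}

fn main() {
    // e.g. one downstream effect at 5 ms, the output node at 0 ms, a 20 ms device
    let latency = output_presentation_latency(&[0.005, 0.0], 0.020);
    assert!((latency - 0.025).abs() < 1e-9);
}
```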
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎 Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}

Trait Implementations
impl AVAudio3DMixing for AVAudioPlayerNode

unsafe fn renderingAlgorithm(&self) -> AVAudio3DMixingRenderingAlgorithm
Available on crate feature AVAudioMixing only.
unsafe fn setRenderingAlgorithm(&self, rendering_algorithm: AVAudio3DMixingRenderingAlgorithm)
Available on crate feature AVAudioMixing only. Setter for renderingAlgorithm.
unsafe fn sourceMode(&self) -> AVAudio3DMixingSourceMode
Available on crate feature AVAudioMixing only.
unsafe fn setSourceMode(&self, source_mode: AVAudio3DMixingSourceMode)
Available on crate feature AVAudioMixing only. Setter for sourceMode.
unsafe fn pointSourceInHeadMode(&self) -> AVAudio3DMixingPointSourceInHeadMode
Available on crate feature AVAudioMixing only.
unsafe fn setPointSourceInHeadMode(&self, point_source_in_head_mode: AVAudio3DMixingPointSourceInHeadMode)
Available on crate feature AVAudioMixing only. Setter for pointSourceInHeadMode.
unsafe fn rate(&self) -> c_float
Available on crate feature AVAudioMixing only.
unsafe fn setRate(&self, rate: c_float)
Available on crate feature AVAudioMixing only. Setter for rate.
unsafe fn reverbBlend(&self) -> c_float
Available on crate feature AVAudioMixing only.
unsafe fn setReverbBlend(&self, reverb_blend: c_float)
Available on crate feature AVAudioMixing only. Setter for reverbBlend.
unsafe fn obstruction(&self) -> c_float
Available on crate feature AVAudioMixing only.
unsafe fn setObstruction(&self, obstruction: c_float)
Available on crate feature AVAudioMixing only. Setter for obstruction.
unsafe fn occlusion(&self) -> c_float
Available on crate feature AVAudioMixing only.
unsafe fn setOcclusion(&self, occlusion: c_float)
Available on crate feature AVAudioMixing only. Setter for occlusion.
unsafe fn position(&self) -> AVAudio3DPoint
Available on crate features AVAudioMixing and AVAudioTypes only.
unsafe fn setPosition(&self, position: AVAudio3DPoint)
Available on crate features AVAudioMixing and AVAudioTypes only. Setter for position.

impl AVAudioMixing for AVAudioPlayerNode

unsafe fn destinationForMixer_bus(&self, mixer: &AVAudioNode, bus: AVAudioNodeBus) -> Option<Retained<AVAudioMixingDestination>>
Available on crate features AVAudioNode and AVAudioTypes and AVAudioMixing only.

impl AsRef<AVAudioNode> for AVAudioPlayerNode
fn as_ref(&self) -> &AVAudioNode

impl AsRef<AVAudioPlayerNode> for AVAudioPlayerNode
impl AsRef<AnyObject> for AVAudioPlayerNode
impl AsRef<NSObject> for AVAudioPlayerNode

impl Borrow<AVAudioNode> for AVAudioPlayerNode
fn borrow(&self) -> &AVAudioNode

impl Borrow<AnyObject> for AVAudioPlayerNode
impl Borrow<NSObject> for AVAudioPlayerNode

impl ClassType for AVAudioPlayerNode
const NAME: &'static str = "AVAudioPlayerNode"
type Super = AVAudioNode
type ThreadKind = <<AVAudioPlayerNode as ClassType>::Super as ClassType>::ThreadKind

impl Debug for AVAudioPlayerNode
impl Deref for AVAudioPlayerNode
impl Hash for AVAudioPlayerNode
impl Message for AVAudioPlayerNode

impl NSObjectProtocol for AVAudioPlayerNode
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎 Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref