pub struct AVAudioOutputNode { /* private fields */ }
Available on crate features AVAudioIONode and AVAudioNode only.
A node that performs audio output in the engine.
When the engine is rendering to/from an audio device, this node connects to the system’s audio output. When the engine is operating in manual rendering mode, this node performs output in response to the client’s requests.
This node has one element. The format of the output scope reflects:
- the audio hardware sample rate and channel count, when connected to the hardware
- the engine’s manual rendering mode output format (see AVAudioEngine’s manualRenderingFormat), when in manual rendering mode
The format of the input scope is initially the same as that of the output, but you may set it to a different format, in which case the node will convert.
See also Apple’s documentation
Implementations§
Methods from Deref<Target = AVAudioIONode>§
pub unsafe fn presentationLatency(&self) -> NSTimeInterval
The presentation or hardware latency, applicable when the engine is rendering to/from an audio device.
This corresponds to kAudioDevicePropertyLatency and kAudioStreamPropertyLatency. See <CoreAudio/AudioHardwareBase.h>.
pub unsafe fn audioUnit(&self) -> AudioUnit
Available on crate feature objc2-audio-toolbox and non-watchOS only.
The node’s underlying AudioUnit, if any.
This is only necessary for certain advanced usages.
pub unsafe fn isVoiceProcessingEnabled(&self) -> bool
Indicates whether voice processing is enabled.
pub unsafe fn setVoiceProcessingEnabled_error(
    &self,
    enabled: bool,
) -> Result<(), Retained<NSError>>
Enable or disable voice processing on the IO node.
Parameter enabled: Whether voice processing is to be enabled.
Parameter outError: On exit, if the IO node cannot enable or disable voice processing, a description of the error.
Returns: YES for success
If enabled, the input node does signal processing on the incoming audio (taking out any of the audio that is played from the device at a given time from the incoming audio). Disabling this mode on either of the IO nodes automatically disables it on the other IO node.
Voice processing requires both input and output nodes to be in the voice processing mode. Enabling this mode on either of the IO nodes automatically enables it on the other IO node. Voice processing is only supported when the engine is rendering to the audio device and not in the manual rendering mode. Voice processing can only be enabled or disabled when the engine is in a stopped state.
The output format of the input node and the input format of the output node have to be the same and they can only be changed when the engine is in a stopped state.
Methods from Deref<Target = AVAudioNode>§
pub unsafe fn inputFormatForBus(
    &self,
    bus: AVAudioNodeBus,
) -> Retained<AVAudioFormat>
Available on crate features AVAudioFormat and AVAudioTypes only.
Obtain an input bus’s format.
pub unsafe fn outputFormatForBus(
    &self,
    bus: AVAudioNodeBus,
) -> Retained<AVAudioFormat>
Available on crate features AVAudioFormat and AVAudioTypes only.
Obtain an output bus’s format.
pub unsafe fn nameForInputBus(
    &self,
    bus: AVAudioNodeBus,
) -> Option<Retained<NSString>>
Available on crate feature AVAudioTypes only.
Return the name of an input bus.
pub unsafe fn nameForOutputBus(
    &self,
    bus: AVAudioNodeBus,
) -> Option<Retained<NSString>>
Available on crate feature AVAudioTypes only.
Return the name of an output bus.
pub unsafe fn installTapOnBus_bufferSize_format_block(
    &self,
    bus: AVAudioNodeBus,
    buffer_size: AVAudioFrameCount,
    format: Option<&AVAudioFormat>,
    tap_block: AVAudioNodeTapBlock,
)
Available on crate features AVAudioBuffer and AVAudioFormat and AVAudioTime and AVAudioTypes and block2 only.
Create a “tap” to record/monitor/observe the output of the node.
Parameter bus: the node output bus to which to attach the tap
Parameter bufferSize: the requested size of the incoming buffers in sample frames. Supported range is [100, 400] ms.
Parameter format: If non-nil, attempts to apply this as the format of the specified output bus. This should only be done when attaching to an output bus which is not connected to another node; an error will result otherwise. The tap and connection formats (if non-nil) on the specified bus should be identical; otherwise, the latter operation will override any previously set format.
Parameter tapBlock: a block to be called with audio buffers
Only one tap may be installed on any bus. Taps may be safely installed and removed while the engine is running.
Note that if you have a tap installed on AVAudioOutputNode, there could be a mismatch between the tap buffer format and AVAudioOutputNode’s output format, depending on the underlying physical device. Hence, instead of tapping the AVAudioOutputNode, it is advised to tap the node connected to it.
E.g. to capture audio from input node:
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = [engine inputNode];
AVAudioFormat *format = [input outputFormatForBus: 0];
[input installTapOnBus: 0 bufferSize: 8192 format: format block: ^(AVAudioPCMBuffer *buf, AVAudioTime *when) {
    // 'buf' contains audio captured from input node at time 'when'
}];
// ...
// start engine
§Safety
tap_block must be a valid pointer.
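The bufferSize parameter above is specified in sample frames, while the supported range is stated in milliseconds, so the right frame count depends on the bus’s sample rate. A small conversion sketch in plain Rust (the 44.1 kHz rate is a hypothetical example, not something this API reports):

```rust
/// Convert a duration in milliseconds to a frame count at a given sample rate.
fn ms_to_frames(ms: f64, sample_rate: f64) -> u32 {
    (ms / 1000.0 * sample_rate).round() as u32
}

fn main() {
    let sample_rate = 44_100.0; // hypothetical hardware sample rate
    // The supported tap buffer range of [100, 400] ms maps to these frame counts:
    println!("{}", ms_to_frames(100.0, sample_rate)); // 4410
    println!("{}", ms_to_frames(400.0, sample_rate)); // 17640
    // The Objective-C example's request of 8192 frames is ~186 ms at 44.1 kHz.
}
```

Note that the engine may still deliver a different buffer size than requested; the [100, 400] ms window only bounds what may be requested.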
pub unsafe fn removeTapOnBus(&self, bus: AVAudioNodeBus)
Available on crate feature AVAudioTypes only.
Destroy a tap.
Parameter bus: the node output bus whose tap is to be destroyed
pub unsafe fn engine(&self) -> Option<Retained<AVAudioEngine>>
Available on crate feature AVAudioEngine only.
The engine to which the node is attached (or nil).
pub unsafe fn numberOfInputs(&self) -> NSUInteger
The node’s number of input busses.
pub unsafe fn numberOfOutputs(&self) -> NSUInteger
The node’s number of output busses.
pub unsafe fn lastRenderTime(&self) -> Option<Retained<AVAudioTime>>
Available on crate feature AVAudioTime only.
Obtain the time for which the node most recently rendered.
Will return nil if the engine is not running or if the node is not connected to an input or output node.
pub unsafe fn AUAudioUnit(&self) -> Retained<AUAudioUnit>
Available on crate feature objc2-audio-toolbox and non-watchOS only.
An AUAudioUnit wrapping or underlying the implementation’s AudioUnit.
This provides an AUAudioUnit which either wraps or underlies the implementation’s AudioUnit, depending on how that audio unit is packaged. Applications can interact with this AUAudioUnit to control custom properties, select presets, change parameters, etc.
No operations that may conflict with state maintained by the engine should be performed directly on the audio unit. These include changing initialization state, stream formats, channel layouts or connections to other audio units.
pub unsafe fn latency(&self) -> NSTimeInterval
The processing latency of the node, in seconds.
This property reflects the delay between when an impulse in the audio stream arrives at the input vs. the output of the node. This should reflect the delay due to signal processing (e.g. filters, FFTs, etc.), not delay or reverberation which is being applied as an effect. A value of zero indicates either no latency or an unknown latency.
pub unsafe fn outputPresentationLatency(&self) -> NSTimeInterval
The maximum render pipeline latency downstream of the node, in seconds.
This describes the maximum time it will take for the audio at the output of a node to be presented. For instance, the output presentation latency of the output node in the engine is:
- zero, in manual rendering mode
- the presentation latency of the device itself, when rendering to an audio device (see AVAudioIONode’s presentationLatency)
The output presentation latency of a node connected directly to the output node is the output node’s presentation latency plus the output node’s processing latency (see latency).
For a node which is exclusively in the input node chain (i.e. not connected to engine’s output node), this property reflects the latency for the output of this node to be presented at the output of the terminating node in the input chain.
A value of zero indicates either an unknown or no latency.
Note that this latency value can change as the engine is reconfigured (started/stopped, connections made/altered downstream of this node, etc.). So it is recommended not to cache this value but to fetch it whenever it’s needed.
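The composition rule above, for a node feeding the output node directly while rendering to a device, is a simple sum of the two latency properties. A sketch with hypothetical figures (the function and parameter names below are stand-ins, not the bindings’ accessors):

```rust
/// Output presentation latency seen by a node connected directly to the
/// output node: the output node's device presentation latency (see
/// presentationLatency) plus its processing latency (see latency), in seconds.
fn upstream_presentation_latency(presentation: f64, processing: f64) -> f64 {
    presentation + processing
}

fn main() {
    // Hypothetical figures: 10 ms device latency, 1 ms processing latency.
    let total = upstream_presentation_latency(0.010, 0.001);
    println!("{:.3} s", total); // prints "0.011 s"
}
```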
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects
returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString,
while not unsound, is generally frowned upon unless you created the
string yourself, or the API explicitly documents the string to be
mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations§
impl AsRef<AVAudioIONode> for AVAudioOutputNode
fn as_ref(&self) -> &AVAudioIONode
impl AsRef<AVAudioNode> for AVAudioOutputNode
fn as_ref(&self) -> &AVAudioNode
impl AsRef<AVAudioOutputNode> for AVAudioOutputNode
impl AsRef<AnyObject> for AVAudioOutputNode
impl AsRef<NSObject> for AVAudioOutputNode
impl Borrow<AVAudioIONode> for AVAudioOutputNode
fn borrow(&self) -> &AVAudioIONode
impl Borrow<AVAudioNode> for AVAudioOutputNode
fn borrow(&self) -> &AVAudioNode
impl Borrow<AnyObject> for AVAudioOutputNode
impl Borrow<NSObject> for AVAudioOutputNode
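These AsRef and Borrow impls, combined with Deref, are how the objc2 bindings emulate the Objective-C class hierarchy: an AVAudioOutputNode reference can be used wherever an AVAudioIONode, AVAudioNode, NSObject, or AnyObject reference is expected, and superclass methods appear in the “Methods from Deref” sections above. A stand-in sketch of the pattern using plain Rust types (all names here are illustrative, not the real bindings):

```rust
use std::ops::Deref;

// Illustrative stand-ins for a superclass chain like
// AVAudioOutputNode -> AVAudioIONode -> AVAudioNode.
struct Node {
    number_of_outputs: usize,
}
struct IONode {
    superclass: Node,
}
struct OutputNode {
    superclass: IONode,
}

impl Deref for IONode {
    type Target = Node;
    fn deref(&self) -> &Node {
        &self.superclass
    }
}
impl Deref for OutputNode {
    type Target = IONode;
    fn deref(&self) -> &IONode {
        &self.superclass
    }
}

// Upcasting to any ancestor works through deref coercion.
impl AsRef<Node> for OutputNode {
    fn as_ref(&self) -> &Node {
        self // &OutputNode coerces to &IONode, then to &Node
    }
}

fn main() {
    let node = OutputNode {
        superclass: IONode {
            superclass: Node { number_of_outputs: 1 },
        },
    };
    // Superclass fields (and, in the real bindings, methods) are reachable
    // directly on the subclass value via Deref chaining.
    println!("{}", node.number_of_outputs); // prints "1"
    let upcast: &Node = node.as_ref();
    println!("{}", upcast.number_of_outputs); // prints "1"
}
```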
impl ClassType for AVAudioOutputNode
const NAME: &'static str = "AVAudioOutputNode"
type Super = AVAudioIONode
type ThreadKind = <<AVAudioOutputNode as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVAudioOutputNode
impl Deref for AVAudioOutputNode
impl Hash for AVAudioOutputNode
impl Message for AVAudioOutputNode
impl NSObjectProtocol for AVAudioOutputNode
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref.