Module libpulse_binding::stream
Audio streams for input, output and sample upload.
Overview
Audio streams form the central functionality of the sound server. Data is routed, converted and mixed from several sources before it is passed along to a final output. Currently, there are three forms of audio streams:
- Playback streams: Data flows from the client to the server.
- Record streams: Data flows from the server to the client.
- Upload streams: Similar to playback streams, but the data is stored in the sample cache. See ::context::scache for more information about controlling the sample cache.
Creating
To access a stream, a Stream object must be created using Stream::new or Stream::new_extended. new is for PCM streams only, while new_extended can be used for both PCM and compressed audio streams. At this point the application must specify what stream format(s) it supports. See ::sample and ::channelmap for more information on the stream format parameters.

FIXME: Those references only talk about PCM parameters; we should also have an overview page for how the ::format::Info based stream format configuration works. Bug filed.
This first step will only create a client-side object, representing the stream. To use the stream, a server-side object must be created and associated with the local object. Depending on which type of stream is desired, a different function is needed:
- Playback stream: Stream::connect_playback
- Record stream: Stream::connect_record
- Upload stream: Stream::connect_upload (see ::context::scache)
Similar to how connections are done in contexts, connecting a stream will not generate an ::operation::Operation object. Also like contexts, the application should register a state change callback, using Stream::set_state_callback, and wait for the stream to enter an active state.
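Putting the steps above together, the connect-then-wait pattern can be sketched with a small self-contained mock. The State names mirror the crate's stream states, but MockStream and its method signatures are simplified stand-ins invented for this illustration, not libpulse_binding's real API, which drives these transitions asynchronously through a mainloop:

```rust
// Illustrative mock of the connect-then-wait pattern described above.
// `MockStream` is NOT part of libpulse_binding; it simulates the server
// driving the stream state forward so the callback pattern can be shown.

#[derive(Clone, Copy, PartialEq, Debug)]
enum State { Unconnected, Creating, Ready, Failed, Terminated }

struct MockStream {
    state: State,
    state_callback: Option<Box<dyn FnMut(State)>>,
}

impl MockStream {
    fn new() -> Self {
        MockStream { state: State::Unconnected, state_callback: None }
    }

    // Analogous to Stream::set_state_callback.
    fn set_state_callback(&mut self, cb: Box<dyn FnMut(State)>) {
        self.state_callback = Some(cb);
    }

    fn set_state(&mut self, s: State) {
        self.state = s;
        if let Some(cb) = self.state_callback.as_mut() { cb(s); }
    }

    // Analogous to Stream::connect_playback: kicks off asynchronous setup.
    // The real server answers via the mainloop; here we jump straight to Ready.
    fn connect_playback(&mut self) {
        self.set_state(State::Creating);
        self.set_state(State::Ready);
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    let reached_ready = Rc::new(Cell::new(false));
    let flag = reached_ready.clone();

    let mut stream = MockStream::new();
    stream.set_state_callback(Box::new(move |s| {
        if s == State::Ready { flag.set(true); }
    }));
    stream.connect_playback();

    assert!(reached_ready.get());
    println!("stream is ready");
}
```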
Note: there is a user-controllable slider in mixer applications such as pavucontrol corresponding to each of the created streams. Multiple (especially identically named) volume sliders for the same application might confuse the user. Also, the server supports only a limited number of simultaneous streams. Because of this, it is not always appropriate to create multiple streams in one application that needs to output multiple sounds. The rough guideline is: if there is no use case that would require separate user-initiated volume changes for each stream, perform the mixing inside the application.
Buffer Attributes
Playback and record streams always have a server-side buffer as part of the data flow. The size of this buffer needs to be chosen in a compromise between low latency and sensitivity for buffer overflows/underruns.
The buffer metrics may be controlled by the application. They are described with a ::def::BufferAttr structure.

If flags::ADJUST_LATENCY is set, then the tlength/fragsize parameters of this structure will be interpreted slightly differently than otherwise when passed to Stream::connect_record and Stream::connect_playback: the overall latency, which is comprised of the server-side playback buffer length, the hardware playback buffer length and additional latencies, will be adjusted so that it matches tlength or fragsize respectively.
Set flags::ADJUST_LATENCY if you want to control the overall playback latency of your stream. Unset it if you want to control only the latency induced by the server-side, rewritable playback buffer. The server will try to fulfill the client's latency requests as well as possible. However, if the underlying hardware cannot change its buffer length, or can change it only within a limited range, the resulting latency might differ from what the client requested. Thus, for synchronization, clients always need to check the actual measured latency via Stream::get_latency or a similar call, and not make assumptions about the latency available. The function Stream::get_buffer_attr will always return the actual size of the server-side per-stream buffer in tlength/fragsize, regardless of whether flags::ADJUST_LATENCY is set or not.
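As an illustration of the units involved, the following sketch converts a desired latency into a byte count of the kind used for tlength (playback) or fragsize (record). The S16 format choice and the latency_to_bytes helper are assumptions for this example, not part of the library; with flags::ADJUST_LATENCY set, such a byte count is interpreted as the overall latency target.

```rust
// Sketch: convert a desired latency in milliseconds into a byte count
// suitable for a BufferAttr's tlength or fragsize field.
// Assumes signed 16-bit (S16) samples; adjust bytes_per_sample otherwise.

fn latency_to_bytes(latency_ms: u32, rate_hz: u32, channels: u32) -> u32 {
    let bytes_per_sample: u64 = 2; // S16 format
    let frame_size = bytes_per_sample * channels as u64;
    ((rate_hz as u64 * latency_ms as u64 / 1000) * frame_size) as u32
}

fn main() {
    // 50 ms at 44.1 kHz stereo S16:
    let tlength = latency_to_bytes(50, 44_100, 2);
    println!("{tlength}"); // prints 8820
}
```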
The server-side per-stream playback buffers are indexed by a write and a read index. The application writes to the write index and the sound device reads from the read index. The read index is increased monotonically, while the write index may be freely controlled by the application. Subtracting the read index from the write index will give you the current fill level of the buffer. The read/write indices are 64-bit values measured in bytes; they will never wrap. The current read/write index may be queried using Stream::get_timing_info (see below for more information). In case of a buffer underrun the read index is equal to or larger than the write index. Unless the prebuf value is 0, PulseAudio will temporarily pause playback in such a case, and wait until the buffer is filled up to prebuf bytes again. If prebuf is 0, the read index may be larger than the write index, in which case silence is played. If the application writes data to indices lower than the read index, the data is immediately lost.
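The index arithmetic described above can be sketched directly (plain helper functions written for this illustration, not library API):

```rust
// Sketch of the fill-level arithmetic: indices are 64-bit byte counters
// that never wrap, so the signed difference is the current fill level.

fn fill_level(write_index: i64, read_index: i64) -> i64 {
    write_index - read_index
}

// Underrun: the read index has caught up with (or passed) the write index.
// With a non-zero prebuf, the server pauses playback until the buffer
// refills to `prebuf` bytes; with prebuf 0, silence is played instead.
fn is_underrun(write_index: i64, read_index: i64) -> bool {
    read_index >= write_index
}

fn main() {
    assert_eq!(fill_level(10_000, 4_000), 6_000);
    assert!(is_underrun(4_000, 4_000)); // read caught up with write
    println!("ok");
}
```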
Transferring Data
Once the stream is up, data can start flowing between the client and the server. Two different access models can be used to transfer the data:
- Asynchronous: The application registers a callback using Stream::set_write_callback and Stream::set_read_callback to receive notifications that data can either be written or read.
- Polled: Query the library for available data/space using Stream::writable_size and Stream::readable_size and transfer data as needed. The sizes are stored locally, in the client end, so there is no delay when reading them.
It is also possible to mix the two models freely.
Once there is data/space available, it can be transferred using either Stream::write for playback, or Stream::peek / Stream::discard for record. Make sure you do not overflow the playback buffers, as data will be dropped.
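In the polled model this amounts to clamping each write to the space the server reports, so the playback buffer is never overflowed. A minimal sketch, where chunk_to_write is a hypothetical helper and writable_size stands in for the value Stream::writable_size would return:

```rust
// Sketch of the polled transfer model: never write more than the space
// the server currently reports, since excess data would be dropped.

fn chunk_to_write(pending: usize, writable_size: usize) -> usize {
    pending.min(writable_size)
}

fn main() {
    let mut pending = 10_000usize; // bytes the application still has to play
    let reported = [4_096usize, 4_096, 4_096]; // successive writable_size values
    for avail in reported {
        let n = chunk_to_write(pending, avail);
        pending -= n; // in real code: stream.write(&buf[..n], ...)
        println!("wrote {n}");
    }
    assert_eq!(pending, 0);
}
```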
Buffer Control
The transfer buffers can be controlled through a number of operations:
- Stream::cork: Stop the playback or recording.
- Stream::uncork: Start the playback or recording.
- Stream::trigger: Start playback immediately and do not wait for the buffer to fill up to the set trigger level.
- Stream::prebuf: Re-enable the playback trigger level.
- Stream::drain: Wait for the playback buffer to go empty. Will return an ::operation::Operation object that will indicate when the buffer is completely drained.
- Stream::flush: Drop all data from the playback or record buffer. Do not wait for it to finish playing.
Seeking in the Playback Buffer
A client application may freely seek in the playback buffer. To accomplish that, the Stream::write function takes a seek mode and an offset argument. The seek mode is one of:
- SeekMode::Relative: seek relative to the current write index.
- SeekMode::Absolute: seek relative to the beginning of the playback buffer (i.e. the first byte that was ever played in the stream).
- SeekMode::RelativeOnRead: seek relative to the current read index. Use this to write data to the output buffer that should be played as soon as possible.
- SeekMode::RelativeEnd: seek relative to the last byte ever written.

If an application just wants to append some data to the output buffer, SeekMode::Relative and an offset of 0 should be used.
After a call to Stream::write the write index will be left at the position right after the last byte of the written data.
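The seek modes can be summarized by where each one places the write index. The following sketch models that arithmetic with plain values; SeekMode here is a local mirror of the crate's enum, and target_index is a hypothetical helper, not library API:

```rust
// Sketch of where each SeekMode places the write index, given the current
// read and write indices and the total number of bytes ever written (`end`).

#[derive(Clone, Copy)]
enum SeekMode { Relative, Absolute, RelativeOnRead, RelativeEnd }

fn target_index(mode: SeekMode, offset: i64, read: i64, write: i64, end: i64) -> i64 {
    match mode {
        SeekMode::Relative => write + offset,       // from current write index
        SeekMode::Absolute => offset,               // from buffer start
        SeekMode::RelativeOnRead => read + offset,  // played as soon as possible
        SeekMode::RelativeEnd => end + offset,      // from last byte ever written
    }
}

fn main() {
    let (read, write, end) = (1_000, 4_000, 4_000);
    // Appending: Relative with an offset of 0 writes at the current write index.
    assert_eq!(target_index(SeekMode::Relative, 0, read, write, end), 4_000);
    // Play as soon as possible: just after the read index.
    assert_eq!(target_index(SeekMode::RelativeOnRead, 0, read, write, end), 1_000);
    println!("ok");
}
```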
Latency
A major problem with networked audio is the increased latency caused by the network. To remedy this, PulseAudio supports an advanced system of monitoring the current latency.
To get the raw data needed to calculate latencies, call Stream::get_timing_info. This will give you a ::def::TimingInfo structure that contains everything that is known about the server-side buffer transport delays and the backend active in the server. (Among other things, it contains the write and read index values mentioned above.)

This structure is updated every time a Stream::update_timing_info operation is executed. (I.e. before the first call to this function, the timing information structure is not available!)
Since it is a lot of work to keep this structure up-to-date manually, PulseAudio can do that automatically for you: if flags::AUTO_TIMING_UPDATE is passed when connecting the stream, PulseAudio will automatically update the structure every 100 ms and every time a function is called that might invalidate the previously known timing data (such as Stream::write or Stream::flush). Please note, however, that there always is a short time window when the data in the timing information structure is out-of-date. PulseAudio tries to mark these situations by setting the write_index_corrupt and read_index_corrupt fields accordingly.
The raw timing data in the ::def::TimingInfo structure is usually hard to deal with. Therefore a simpler interface is available: you can call Stream::get_time or Stream::get_latency. The former will return the current playback time of the hardware since the stream has been started. The latter returns the overall time a sample that you write now takes to be played by the hardware. These two functions base their calculations on the same data that is returned by Stream::get_timing_info. Hence the same rules for keeping the timing data up-to-date apply here. In case the write or read index is corrupted, these two functions will fail.
Since updating the timing info structure usually requires a full network round trip and some applications monitor the timing very often, PulseAudio offers a timing interpolation system. If flags::INTERPOLATE_TIMING is passed when connecting the stream, Stream::get_time and Stream::get_latency will try to interpolate the current playback time/latency by estimating the number of samples that have been played back by the hardware since the last regular timing update. It is especially useful to combine this option with flags::AUTO_TIMING_UPDATE, which will enable you to monitor the current playback time/latency very precisely and very frequently without requiring a network round trip every time.
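The interpolation idea reduces to simple arithmetic: estimate how many samples the hardware has consumed since the last regular update from the elapsed wall-clock time and the sample rate. A sketch of the principle (estimated_samples is a hypothetical helper, not the crate's implementation):

```rust
// Sketch of the estimate behind flags::INTERPOLATE_TIMING: between regular
// timing updates, the number of samples played since the last update is
// approximated from elapsed wall-clock time and the sample rate.

fn estimated_samples(elapsed_us: u64, rate_hz: u64) -> u64 {
    elapsed_us * rate_hz / 1_000_000
}

fn main() {
    // 30 ms have passed since the last update, at 44.1 kHz:
    let samples = estimated_samples(30_000, 44_100);
    assert_eq!(samples, 1_323);
    println!("~{samples} samples played since last update");
}
```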
Overflow and underflow
Even with the best precautions, buffers will sometimes over- or underflow. To handle this gracefully, the application can be notified when this happens. Callbacks are registered using Stream::set_overflow_callback and Stream::set_underflow_callback.
Synchronizing Multiple Playback Streams
PulseAudio allows applications to fully synchronize multiple playback streams that are connected to the same output device. That means the streams will always be played back sample-by-sample synchronously. If stream operations like Stream::cork are issued on one of the synchronized streams, they are simultaneously issued on the others.
To synchronize a stream to another, just pass the "master" stream as the last argument to Stream::connect_playback. To make sure that the freshly created stream doesn't start playback right away, make sure to pass flags::START_CORKED and, after all streams have been created, uncork them all with a single call to Stream::uncork for the master stream.
To make sure that a particular stream doesn't stop playing when a server-side buffer underrun happens on it while the other synchronized streams continue playing and hence deviate, you need to pass a ::def::BufferAttr with prebuf set to 0 when connecting.
Disconnecting
When a stream has served its purpose it must be disconnected with Stream::disconnect. If you only unreference it, then it will live on and eat resources both locally and on the server until you disconnect the context. This is done automatically upon drop of the stream object.
Modules

- event_names: Common event names supplied to the
- flags: Some special flags for stream connections.

Structs

- Stream: An opaque stream for playback or recording.

Enums

- Direction
- Latency: Result type for
- PeekResult: Result type for the
- SeekMode
- State: The state of a stream.

Type Definitions

- FlagSet