Struct quinn::TransportConfig

pub struct TransportConfig { /* fields omitted */ }

Parameters governing the core QUIC state machine

Default values should be suitable for most internet applications. Application protocols that forbid remotely-initiated streams should set max_concurrent_bidi_streams and max_concurrent_uni_streams to zero.

In some cases, performance or resource requirements can be improved by tuning these values to suit a particular application and/or network connection. In particular, data window sizes can be tuned for a particular expected round trip time, link capacity, and memory availability. Tuning for higher bandwidths and latencies increases worst-case memory consumption, but does not impair performance at lower bandwidths and latencies. The default configuration is tuned for a 100Mbps link with a 100ms round trip time.
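To make the sizing rule concrete: the relevant quantity is the bandwidth-delay product, and for the default target (100 Mbps at 100 ms RTT) it works out to about 1.25 MB. A quick sketch of the arithmetic (the helper name is illustrative, not part of quinn):

```rust
// Bandwidth-delay product: roughly the number of bytes that must be in
// flight to keep a link fully utilized. (Illustrative helper, not part
// of the quinn API.)
fn bdp_bytes(bandwidth_bits_per_sec: u64, rtt_ms: u64) -> u64 {
    bandwidth_bits_per_sec / 8 * rtt_ms / 1000
}

fn main() {
    // The defaults target a 100 Mbps link with a 100 ms round trip time.
    let window = bdp_bytes(100_000_000, 100);
    assert_eq!(window, 1_250_000); // ~1.25 MB of receive window
}
```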

Implementations

impl TransportConfig

pub fn max_concurrent_bidi_streams(
    &mut self,
    value: u64
) -> Result<&mut TransportConfig, ConfigError>

Maximum number of bidirectional streams that may be open concurrently

Must be nonzero for the peer to open any bidirectional streams.

Worst-case memory use is directly proportional to max_concurrent_bidi_streams * stream_receive_window, with an upper bound proportional to receive_window.
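A rough illustration of that bound (the helper function is hypothetical, for illustration only):

```rust
// Worst-case buffering attributable to stream receive windows: every
// allowed bidirectional stream could buffer a full stream window, but
// the connection-wide receive window caps the total. (Hypothetical
// helper, not part of the quinn API.)
fn worst_case_stream_memory(max_bidi_streams: u64, stream_window: u64, conn_window: u64) -> u64 {
    (max_bidi_streams * stream_window).min(conn_window)
}

fn main() {
    // 100 streams at 1.25 MB each would permit 125 MB of buffering, but
    // a 10 MB connection-wide window bounds the actual worst case.
    assert_eq!(worst_case_stream_memory(100, 1_250_000, 10_000_000), 10_000_000);
}
```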

pub fn max_concurrent_uni_streams(
    &mut self,
    value: u64
) -> Result<&mut TransportConfig, ConfigError>

Variant of max_concurrent_bidi_streams affecting unidirectional streams

pub fn max_idle_timeout(
    &mut self,
    value: Option<Duration>
) -> Result<&mut TransportConfig, ConfigError>

Maximum duration of inactivity to accept before timing out the connection.

The true idle timeout is the minimum of this and the peer’s own max idle timeout. None represents an infinite timeout.

WARNING: If a peer or its network path malfunctions or acts maliciously, an infinite idle timeout can result in permanently hung futures!
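Putting the signature together, a minimal sketch (the 10-second value is an arbitrary choice for illustration):

```rust
use std::time::Duration;

use quinn::TransportConfig;

fn configure() -> Result<(), quinn::ConfigError> {
    let mut config = TransportConfig::default();
    // Time out after 10 seconds of inactivity; the effective timeout is
    // the minimum of this value and the peer's own maximum.
    config.max_idle_timeout(Some(Duration::from_secs(10)))?;
    Ok(())
}
```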

pub fn stream_receive_window(
    &mut self,
    value: u64
) -> Result<&mut TransportConfig, ConfigError>

Maximum number of bytes the peer may transmit without acknowledgement on any one stream before becoming blocked.

This should be set to at least the expected connection latency multiplied by the maximum desired throughput. Setting this smaller than receive_window helps ensure that a single stream doesn’t monopolize receive buffers, which may otherwise occur if the application chooses not to read from a large stream for a time while still requiring data on other streams.

pub fn receive_window(
    &mut self,
    value: u64
) -> Result<&mut TransportConfig, ConfigError>

Maximum number of bytes the peer may transmit across all streams of a connection before becoming blocked.

This should be set to at least the expected connection latency multiplied by the maximum desired throughput. Larger values can be useful to allow maximum throughput within a stream while another is blocked.
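For example, targeting roughly 10 MB/s per stream at a 200 ms RTT suggests a stream window of about 2 MB. A sketch under those assumed numbers, sizing the connection-wide window to let several streams run at full rate:

```rust
use quinn::TransportConfig;

fn configure() -> Result<(), quinn::ConfigError> {
    // Assumed target: ~10 MB/s per stream at 200 ms RTT => ~2 MB window.
    const STREAM_WINDOW: u64 = 2_000_000;
    let mut config = TransportConfig::default();
    config
        .stream_receive_window(STREAM_WINDOW)?
        // Keep stream_receive_window well below the connection-wide
        // limit so no single stream can monopolize receive buffers.
        .receive_window(8 * STREAM_WINDOW)?;
    Ok(())
}
```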

pub fn send_window(&mut self, value: u64) -> &mut TransportConfig

Maximum number of bytes to transmit to a peer without acknowledgment

Provides an upper bound on memory when communicating with peers that issue large amounts of flow control credit. Endpoints that wish to handle large numbers of connections robustly should take care to set this low enough to guarantee memory exhaustion does not occur if every connection uses the entire window.

pub fn max_tlps(&mut self, value: u32) -> &mut TransportConfig

Maximum number of tail loss probes before an RTO fires.

pub fn packet_threshold(&mut self, value: u32) -> &mut TransportConfig

Maximum reordering in packet number space before FACK-style loss detection considers a packet lost. Should not be less than 3, per RFC 5681.

pub fn time_threshold(&mut self, value: f32) -> &mut TransportConfig

Maximum reordering in time space before time-based loss detection considers a packet lost, as a factor of RTT.

pub fn initial_rtt(&mut self, value: Duration) -> &mut TransportConfig

The RTT used before an RTT sample is taken

pub fn persistent_congestion_threshold(
    &mut self,
    value: u32
) -> &mut TransportConfig

Number of consecutive PTOs after which network is considered to be experiencing persistent congestion.

pub fn keep_alive_interval(
    &mut self,
    value: Option<Duration>
) -> &mut TransportConfig

Period of inactivity before sending a keep-alive packet

Keep-alive packets prevent an inactive but otherwise healthy connection from timing out.

None (the default) disables keep-alives. Only one side of any given connection needs keep-alives enabled for the connection to be preserved. Must be set lower than the max_idle_timeout of both peers to be effective.
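A sketch combining the two settings (the 30 s / 10 s intervals are arbitrary choices for illustration):

```rust
use std::time::Duration;

use quinn::TransportConfig;

fn configure() -> Result<(), quinn::ConfigError> {
    let mut config = TransportConfig::default();
    // 30 s idle timeout with keep-alives every 10 s; the keep-alive
    // interval must stay below both peers' idle timeouts to be effective.
    config.max_idle_timeout(Some(Duration::from_secs(30)))?;
    config.keep_alive_interval(Some(Duration::from_secs(10)));
    Ok(())
}
```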

pub fn crypto_buffer_size(&mut self, value: usize) -> &mut TransportConfig

Maximum quantity of out-of-order crypto layer data to buffer

pub fn allow_spin(&mut self, value: bool) -> &mut TransportConfig

Whether the implementation is permitted to set the spin bit on this connection

This allows passive observers to easily judge the round trip time of a connection, which can be useful for network administration but sacrifices a small amount of privacy.

pub fn datagram_receive_buffer_size(
    &mut self,
    value: Option<usize>
) -> &mut TransportConfig

Maximum number of incoming application datagram bytes to buffer, or None to disable incoming datagrams

The peer is forbidden to send single datagrams larger than this size. If the aggregate size of all datagrams that have been received from the peer but not consumed by the application exceeds this value, old datagrams are dropped until it is no longer exceeded.

pub fn datagram_send_buffer_size(
    &mut self,
    value: usize
) -> &mut TransportConfig

Maximum number of outgoing application datagram bytes to buffer

While datagrams are sent ASAP, it is possible for an application to generate data faster than the link, or even the underlying hardware, can transmit them. This limits the amount of memory that may be consumed in that case. When the send buffer is full and a new datagram is sent, older datagrams are dropped until sufficient space is available.
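A sketch setting both datagram buffers (the 64 KiB figures are arbitrary choices for illustration):

```rust
use quinn::TransportConfig;

fn configure() {
    let mut config = TransportConfig::default();
    config
        // Buffer up to 64 KiB of received-but-unread datagrams; passing
        // None instead would disable incoming datagrams entirely.
        .datagram_receive_buffer_size(Some(64 * 1024))
        // Bound memory consumed by outgoing datagrams queued faster than
        // the link can transmit them.
        .datagram_send_buffer_size(64 * 1024);
}
```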

pub fn congestion_controller_factory(
    &mut self,
    factory: impl ControllerFactory + Send + Sync + 'static
) -> &mut TransportConfig

How to construct new congestion::Controllers

Typically the refcounted configuration of a congestion::Controller, e.g. a congestion::NewRenoConfig.

Example

use std::sync::Arc;

use quinn::{congestion, TransportConfig};

let mut config = TransportConfig::default();
config.congestion_controller_factory(Arc::new(congestion::NewRenoConfig::default()));

Trait Implementations

impl Debug for TransportConfig

impl Default for TransportConfig

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>, 

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,