Struct RiotApiConfig

pub struct RiotApiConfig { /* private fields */ }

Configuration for instantiating RiotApi.

Implementations

impl RiotApiConfig

pub const RIOT_KEY_HEADER: &'static str = "X-Riot-Token"

Request header name for the Riot API key, "X-Riot-Token".

When using set_client_builder, the supplied builder should include this default header with the Riot API key as the value.

pub const DEFAULT_BASE_URL: &'static str = "https://{}.api.riotgames.com"

Default base URL, including the {} placeholder for the region platform.

pub const DEFAULT_RETRIES: u8 = 3

Default number of retries.

pub const DEFAULT_RATE_USAGE_FACTOR: f32 = 1.0

Default rate limit usage factor.

pub const PRECONFIG_BURST_BURST_FACTOR: f32 = 0.99

Default burst_factor, also used by preconfig_burst.

pub const PRECONFIG_BURST_DURATION_OVERHEAD: Duration

Default duration_overhead (989 ms), also used by preconfig_burst.

pub const PRECONFIG_THROUGHPUT_BURST_FACTOR: f32 = 0.47

burst_factor used by preconfig_throughput.

pub const PRECONFIG_THROUGHPUT_DURATION_OVERHEAD: Duration

duration_overhead (10 ms) used by preconfig_throughput.

pub fn with_key(api_key: impl AsRef<[u8]>) -> Self

Creates a new RiotApiConfig with the given api_key and the following defaults:

  • retries = 3 (RiotApiConfig::DEFAULT_RETRIES).
  • burst_factor = 0.99 (preconfig_burst).
  • duration_overhead = 989 ms (preconfig_burst).

api_key should be a Riot Games API key from https://developer.riotgames.com/, and should look like "RGAPI-01234567-89ab-cdef-0123-456789abcdef".
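A minimal sketch of typical usage (this assumes riven's RiotApi::new, which accepts anything convertible into RiotApiConfig; the key shown is a dummy placeholder):

```rust
use riven::{RiotApi, RiotApiConfig};

fn main() {
    // Dummy key for illustration; real keys come from
    // https://developer.riotgames.com/.
    let config = RiotApiConfig::with_key("RGAPI-01234567-89ab-cdef-0123-456789abcdef")
        .set_retries(5); // override the default of 3
    // RiotApi::new accepts the config (or a bare key) via Into<RiotApiConfig>.
    let api = RiotApi::new(config);
    let _ = api;
}
```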

pub fn with_client_builder(client_builder: ClientBuilder) -> Self

Creates a new RiotApiConfig with the given client builder.

The client builder default headers should include a value for RiotApiConfig::RIOT_KEY_HEADER ("X-Riot-Token"), otherwise authentication will fail.

  • retries = 3 (RiotApiConfig::DEFAULT_RETRIES).
  • burst_factor = 0.99 (preconfig_burst).
  • duration_overhead = 989 ms (preconfig_burst).
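A sketch of wiring the header manually, assuming riven builds on reqwest's ClientBuilder (the key is a dummy placeholder):

```rust
use reqwest::header::{HeaderMap, HeaderValue};
use riven::{RiotApi, RiotApiConfig};

fn main() {
    // with_client_builder does NOT add the key for us, so attach it as a
    // default header under RIOT_KEY_HEADER ("X-Riot-Token").
    let mut headers = HeaderMap::new();
    headers.insert(
        RiotApiConfig::RIOT_KEY_HEADER,
        HeaderValue::from_static("RGAPI-01234567-89ab-cdef-0123-456789abcdef"),
    );
    let builder = reqwest::ClientBuilder::new().default_headers(headers);
    let api = RiotApi::new(RiotApiConfig::with_client_builder(builder));
    let _ = api;
}
```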

pub fn preconfig_burst(self) -> Self

Sets rate limiting settings to preconfigured values optimized for burst, low latency:

  • burst_factor = 0.99 (PRECONFIG_BURST_BURST_FACTOR).
  • duration_overhead = 989 ms (PRECONFIG_BURST_DURATION_OVERHEAD).

Returns

self, for chaining.

pub fn preconfig_throughput(self) -> Self

Sets the rate limiting settings to preconfigured values optimized for high throughput:

  • burst_factor = 0.47 (PRECONFIG_THROUGHPUT_BURST_FACTOR).
  • duration_overhead = 10 ms (PRECONFIG_THROUGHPUT_DURATION_OVERHEAD).

Returns

self, for chaining.

pub fn set_base_url(self, base_url: impl Into<String>) -> Self

Set the base URL for requests. The string should contain a "{}" literal, which will be replaced with the region platform name. (Multiple or zero "{}" placeholders may be included if needed.)

Returns

self, for chaining.
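For instance, requests could be routed through a hypothetical local proxy while keeping the platform substitution (sketch; the localhost URL and dummy key are assumptions for illustration):

```rust
use riven::RiotApiConfig;

fn main() {
    let config = RiotApiConfig::with_key("RGAPI-01234567-89ab-cdef-0123-456789abcdef")
        // "{}" is still replaced with the platform name, e.g. "na1".
        .set_base_url("http://localhost:8080/{}");
    let _ = config;
}
```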

pub fn set_retries(self, retries: u8) -> Self

Set the number of times to retry failed requests. Only retryable failures are retried: responses with a 5xx status code, or 429 after waiting out any retry-after headers. A value of 0 means one request will be sent and it will not be retried if it fails.

Returns

self, for chaining.

pub fn set_rate_usage_factor(self, rate_usage_factor: f32) -> Self

The rate limit usage percentage controls how much of the API key’s rate limit will be used. The default value of 1.0 means the entirety of the rate limit may be used if it is needed. This applies to both the API key’s rate limit (per route) and to endpoint method rate limits.

Setting a value lower than 1.0 can be useful if you are running multiple API instances on the same API key.

For example, four instances, possibly running on different machines, could each have a value of 0.25 to share an API key’s rate limit evenly.

Note that if you have multiple instances hitting different methods, you should use Self::set_app_rate_usage_factor() and Self::set_method_rate_usage_factor() separately, as this sets both.

This also can be used to reduce the chance of hitting 429s, although 429s should be rare even with this set to 1.0.

Panics

If rate_usage_factor is not in range (0, 1].

Returns

self, for chaining.
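The four-instance scenario above might look like this on each worker (sketch; the key is a dummy placeholder):

```rust
use riven::RiotApiConfig;

fn main() {
    // One of four workers sharing a single key: each instance is allowed
    // at most a quarter of the key's rate limit.
    let config = RiotApiConfig::with_key("RGAPI-01234567-89ab-cdef-0123-456789abcdef")
        .set_rate_usage_factor(0.25);
    let _ = config;
}
```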

pub fn set_app_rate_usage_factor(self, app_rate_usage_factor: f32) -> Self

See Self::set_rate_usage_factor. Setting this is useful if you have multiple instances sharing the app rate limit, but are hitting distinct methods and therefore do not need their method usage decreased.

Panics

If app_rate_usage_factor is not in range (0, 1].

Returns

self, for chaining.

pub fn set_method_rate_usage_factor(self, method_rate_usage_factor: f32) -> Self

See Self::set_rate_usage_factor and Self::set_app_rate_usage_factor. This method is mainly provided for completeness, though it may be useful in advanced use cases.

Panics

If method_rate_usage_factor is not in range (0, 1].

Returns

self, for chaining.

pub fn set_burst_factor(self, burst_factor: f32) -> Self

Burst percentage controls how many burst requests are allowed and therefore how requests are spread out. Higher equals more burst, less spread. Lower equals less burst, more spread.

The value must be in the range (0, 1]: greater than 0 and at most 1. However, values should generally be larger than 0.25.

Burst percentage behaves as follows:
A burst percentage of x% means, for each token bucket, “x% of the tokens can be used in x% of the bucket duration.” So, for example, if x is 90%, a bucket would allow 90% of the requests to be made without any delay. Then, after waiting 90% of the bucket’s duration, the remaining 10% of requests could be made.

A burst percentage of 100% results in no request spreading, which would allow for the largest bursts and lowest latency, but could result in 429s as bucket boundaries occur.

A burst percentage of near 0% results in high spreading causing temporally equidistant requests. This prevents 429s but has the highest latency. Additionally, if the number of tokens is high, this may lower the overall throughput due to the rate at which requests can be scheduled.

Therefore, for interactive applications like summoner & match history lookup, a higher percentage may be better. For data-collection apps like champion winrate aggregation, a medium-low percentage may be better.

Panics

If burst_factor is not in range (0, 1].

Returns

self, for chaining.
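The spreading rule can be sketched numerically. This mirrors the description above, not Riven's internal scheduler:

```rust
// For a bucket of `total` tokens, a burst factor `f` allows `f * total`
// requests immediately; the remainder is released only after `f` of the
// bucket's duration has elapsed.
fn burst_allowance(total: u32, burst_factor: f32) -> (u32, u32) {
    let immediate = (total as f32 * burst_factor).round() as u32;
    (immediate, total - immediate)
}

fn main() {
    // 100-token bucket at burst_factor = 0.9: 90 go out now, 10 must wait
    // until 90% of the bucket's duration has passed.
    let (now, later) = burst_allowance(100, 0.9);
    println!("immediate: {now}, delayed: {later}");
}
```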

pub fn set_duration_overhead(self, duration_overhead: Duration) -> Self

Sets the additional bucket duration to consider when rate limiting. Increasing this value will decrease the chances of 429s, but will lower the overall throughput.

In a sense, the duration_overhead is how much to “widen” the temporal width of buckets.

Given a particular Riot Game API rate limit bucket that allows N requests per D duration, when counting requests this library will consider requests sent in the past D + duration_overhead duration.

Returns

self, for chaining.

pub fn set_rso_clear_header(self, rso_clear_header: Option<String>) -> Self

Sets the header to clear for RSO requests (if Some), or will not override any headers (if None).

This is a bit of a hack. The client used by Riven is expected to include the API key as a default header. However, if the API key is included in an RSO request the server responds with a 400 “Bad request - Invalid authorization specified” error. To avoid this the rso_clear_header header is overridden to be empty for RSO requests.

This is set to Some(Self::RIOT_KEY_HEADER) by default.

Returns

self, for chaining.

Trait Implementations

impl Debug for RiotApiConfig

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<T: AsRef<[u8]>> From<T> for RiotApiConfig

fn from(api_key: T) -> Self

Converts to this type from the input type.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> ErasedDestructor for T
where T: 'static,

impl<T> MaybeSendSync for T