Struct riven::RiotApiConfig
pub struct RiotApiConfig { /* fields omitted */ }
Configuration for instantiating RiotApi.
Implementations
"https://{}.api.riotgames.com"
    Default base URL, including a {} placeholder for the region platform.

3
    Default number of retries.

1.0
    Default rate limit usage factor.

0.99
    Default burst_factor, also used by preconfig_burst.

989 ms
    Default duration_overhead, also used by preconfig_burst.

0.47
    burst_factor used by preconfig_throughput.

10 ms
    duration_overhead used by preconfig_throughput.
Creates a new RiotApiConfig with the given api_key and the following configuration:

retries = 3 (RiotApiConfig::DEFAULT_RETRIES).
burst_factor = 0.99 (preconfig_burst).
duration_overhead = 989 ms (preconfig_burst).

api_key should be a Riot Games API key from https://developer.riotgames.com/, and should look like "RGAPI-01234567-89ab-cdef-0123-456789abcdef".
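The key shape shown above can be sanity-checked before building a config. The helper below is an illustrative sketch, not part of Riven's API, and assumes the usual "RGAPI-" prefix followed by a UUID-like hex string (Riot does not guarantee this format):

```rust
/// Rough shape check for a Riot API key: "RGAPI-" prefix followed by a
/// UUID-like 8-4-4-4-12 hex string. Illustrative only.
fn looks_like_riot_key(key: &str) -> bool {
    let Some(uuid) = key.strip_prefix("RGAPI-") else {
        return false;
    };
    let lens = [8usize, 4, 4, 4, 12];
    let parts: Vec<&str> = uuid.split('-').collect();
    parts.len() == 5
        && parts
            .iter()
            .zip(lens)
            .all(|(p, n)| p.len() == n && p.chars().all(|c| c.is_ascii_hexdigit()))
}

fn main() {
    assert!(looks_like_riot_key("RGAPI-01234567-89ab-cdef-0123-456789abcdef"));
    assert!(!looks_like_riot_key("not-a-key"));
}
```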
Creates a new RiotApiConfig with the given client builder.

The client builder's default headers should include a value for RiotApiConfig::RIOT_KEY_HEADER, otherwise authentication will fail.

retries = 3 (RiotApiConfig::DEFAULT_RETRIES).
burst_factor = 0.99 (preconfig_burst).
duration_overhead = 989 ms (preconfig_burst).
Sets rate limiting settings to preconfigured values optimized for burst and low latency:

burst_factor = 0.99 (PRECONFIG_BURST_BURST_FACTOR).
duration_overhead = 989 ms (PRECONFIG_BURST_DURATION_OVERHEAD_MILLIS).

Returns self, for chaining.
Sets the rate limiting settings to preconfigured values optimized for high throughput:

burst_factor = 0.47 (PRECONFIG_THROUGHPUT_BURST_FACTOR).
duration_overhead = 10 ms (PRECONFIG_THROUGHPUT_DURATION_OVERHEAD_MILLIS).

Returns self, for chaining.
Set the base URL for requests. The string should contain a "{}" literal which will be replaced with the region platform name. (However, multiple or zero "{}"s may be included if needed.)

Returns self, for chaining.
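The placeholder substitution described above amounts to a simple string replacement. A minimal sketch (the platform name "na1" is just an example):

```rust
// Fill the "{}" placeholder in a base URL with a region platform name.
// With zero placeholders the URL is returned unchanged; with multiple,
// each occurrence is replaced.
fn apply_platform(base_url: &str, platform: &str) -> String {
    base_url.replace("{}", platform)
}

fn main() {
    let url = apply_platform("https://{}.api.riotgames.com", "na1");
    assert_eq!(url, "https://na1.api.riotgames.com");
}
```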
Set the number of times to retry requests. Naturally, only retryable requests will be retried: responses with status codes 5xx or 429 (after waiting for retry-after headers). A value of 0 means one request will be sent and it will not be retried if it fails.

Returns self, for chaining.
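The retry rule above can be sketched as a small predicate. This is an illustrative reconstruction, not the library's actual retry loop:

```rust
// Only 5xx and 429 responses are retryable, and at most `retries`
// retries follow the initial request (so `retries = 0` means the one
// request is sent and never retried).
fn should_retry(status: u16, attempts_so_far: u32, retries: u32) -> bool {
    let retryable = status == 429 || (500..600).contains(&status);
    retryable && attempts_so_far <= retries
}

fn main() {
    // retries = 0: the single request is not retried even on a 503.
    assert!(!should_retry(503, 1, 0));
    // retries = 3 (the default): a 429 after the first attempt is retried.
    assert!(should_retry(429, 1, 3));
    // A 404 is not retryable regardless of the retry budget.
    assert!(!should_retry(404, 1, 3));
}
```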
The rate limit usage percentage controls how much of the API key's rate limit will be used. The default value of 1.0 means the entirety of the rate limit may be used if it is needed. This applies both to the API key's rate limit (per route) and to endpoint method rate limits.

Setting a value lower than 1.0 can be useful if you are running multiple API instances on the same API key. For example, four instances, possibly running on different machines, could each use a value of 0.25 to share an API key's rate limit evenly.

Note that if you have multiple instances hitting different methods, you should use [set_app_rate_usage_factor()] and [set_method_rate_usage_factor()] separately, as this sets both.

This can also be used to reduce the chance of hitting 429s, although 429s should be rare even with this set to 1.0.

Panics

If rate_usage_factor is not in the range (0, 1].

Returns self, for chaining.
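The even-split example above works out as follows. effective_limit is a hypothetical helper, and the 100-request app limit is illustrative, not a real Riot limit:

```rust
// Scale a shared rate limit by a usage factor. Four instances each
// using factor 0.25 split a hypothetical 100-request limit into 25
// requests per instance.
fn effective_limit(limit: u32, rate_usage_factor: f64) -> u32 {
    assert!(rate_usage_factor > 0.0 && rate_usage_factor <= 1.0);
    (limit as f64 * rate_usage_factor).floor() as u32
}

fn main() {
    assert_eq!(effective_limit(100, 0.25), 25);
    assert_eq!(effective_limit(100, 1.0), 100);
}
```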
Burst percentage controls how many burst requests are allowed and therefore how requests are spread out. Higher means more burst and less spread; lower means less burst and more spread.

The value must be in the range (0, 1], i.e. between 0, exclusive, and 1, inclusive. However, values should generally be larger than 0.25.

Burst percentage behaves as follows: a burst percentage of x% means, for each token bucket, "x% of the tokens can be used in x% of the bucket duration." So, for example, if x is 90%, a bucket would allow 90% of the requests to be made without any delay. Then, after waiting 90% of the bucket's duration, the remaining 10% of requests could be made.

A burst percentage of 100% results in no request spreading, which allows for the largest bursts and the lowest latency, but could result in 429s as bucket boundaries occur.

A burst percentage near 0% results in high spreading, causing temporally equidistant requests. This prevents 429s but has the highest latency. Additionally, if the number of tokens is high, this may lower overall throughput due to the rate at which requests can be scheduled.

Therefore, for interactive applications like summoner & match history lookup, a higher percentage may be better. For data-collection apps like champion winrate aggregation, a medium-low percentage may be better.

Panics

If burst_factor is not in the range (0, 1].

Returns self, for chaining.
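The x% rule above can be sketched numerically. burst_schedule is a hypothetical helper using a made-up 100-tokens-per-10-seconds bucket; values are rounded for the sketch, and the library's real scheduler is more involved:

```rust
// With burst factor f, a bucket of n tokens per d seconds allows
// roughly f*n tokens without delay; the remainder becomes available
// only after f*d seconds of the bucket duration have elapsed.
fn burst_schedule(n: u32, d_secs: f64, f: f64) -> (u32, u32, f64) {
    assert!(f > 0.0 && f <= 1.0);
    let burst = (f * n as f64).round() as u32;
    (burst, n - burst, f * d_secs)
}

fn main() {
    // f = 0.9, 100 tokens per 10 s: 90 immediate, 10 after ~9 s.
    let (burst, rest, wait) = burst_schedule(100, 10.0, 0.9);
    assert_eq!((burst, rest), (90, 10));
    assert!((wait - 9.0).abs() < 1e-9);
}
```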
Sets the additional bucket duration to consider when rate limiting. Increasing this value will decrease the chances of 429s, but will lower overall throughput.

In a sense, the duration_overhead is how much to "widen" the temporal width of buckets.

Given a particular Riot Games API rate limit bucket that allows N requests per D duration, when counting requests this library will consider requests sent in the past D + duration_overhead duration.

Returns self, for chaining.
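The widened counting window can be sketched as follows. Both helpers are hypothetical, and the 120-second bucket is illustrative:

```rust
// For a bucket of N requests per D, the limiter counts requests sent
// within the past D + duration_overhead.
fn window_millis(d_millis: u64, overhead_millis: u64) -> u64 {
    d_millis + overhead_millis
}

// Count how many recorded send times fall inside the widened window.
fn count_in_window(send_times_ms: &[u64], now_ms: u64, window_ms: u64) -> usize {
    send_times_ms
        .iter()
        .filter(|&&t| now_ms.saturating_sub(t) <= window_ms)
        .count()
}

fn main() {
    // 120 s bucket with the preconfig_burst overhead of 989 ms.
    assert_eq!(window_millis(120_000, 989), 120_989);
    // A request sent 120.5 s ago still counts against the widened window...
    assert_eq!(count_in_window(&[0, 100_000], 120_500, 120_989), 2);
    // ...but not against the bare 120 s bucket duration.
    assert_eq!(count_in_window(&[0, 100_000], 120_500, 120_000), 1);
}
```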