//! Skillratings provides a collection of well-known (and lesser-known) skill rating algorithms that allow you to assess a player's skill level instantly.\
//! You can easily calculate skill ratings for 1vs1 matches, Team vs Team matches, or tournaments / rating periods.\
//! This library is incredibly lightweight (no dependencies by default), user-friendly, and, of course, *blazingly fast*.
//!
//! Currently supported algorithms:
//!
//! - [Elo](https://docs.rs/skillratings/latest/skillratings/elo/)
//! - [Glicko](https://docs.rs/skillratings/latest/skillratings/glicko/)
//! - [Glicko-2](https://docs.rs/skillratings/latest/skillratings/glicko2/)
//! - [TrueSkill](https://docs.rs/skillratings/latest/skillratings/trueskill/)
//! - [Weng-Lin (OpenSkill)](https://docs.rs/skillratings/latest/skillratings/weng_lin/)
//! - [FIFA Men's World Ranking](https://docs.rs/skillratings/latest/skillratings/fifa/)
//! - [Sticko (Stephenson Rating System)](https://docs.rs/skillratings/latest/skillratings/sticko/)
//! - [Glicko-Boost](https://docs.rs/skillratings/latest/skillratings/glicko_boost/)
//! - [USCF (US Chess Federation Ratings)](https://docs.rs/skillratings/latest/skillratings/uscf/)
//! - [EGF (European Go Federation Ratings)](https://docs.rs/skillratings/latest/skillratings/egf/)
//! - [DWZ (Deutsche Wertungszahl)](https://docs.rs/skillratings/latest/skillratings/dwz/)
//! - [Ingo](https://docs.rs/skillratings/latest/skillratings/ingo/)
//! - [Yuksel](https://docs.rs/skillratings/latest/skillratings/yuksel/)
//!
//! Most of these are known from their usage in chess and various other games.\
//! Click on the documentation for the modules linked above for more information about the specific rating algorithms, and their advantages and disadvantages.
//!
//! ## Table of Contents
//!
//! - [Installation](#installation)
//! - [Serde Support](#serde-support)
//! - [Usage and Examples](#usage-and-examples)
//! - [Player vs. Player](#player-vs-player)
//! - [Team vs. Team](#team-vs-team)
//! - [Free-For-Alls and Multiple Teams](#free-for-alls-and-multiple-teams)
//! - [Expected Outcome](#expected-outcome)
//! - [Rating Period](#rating-period)
//! - [Switching between different rating systems](#switching-between-different-rating-systems)
//! - [Contributing](#contributing)
//! - [License](#license)
//!
//!
//! ## Installation
//!
//! If you are on Rust 1.62 or higher use `cargo add` to install the latest version:
//!
//! ```bash
//! cargo add skillratings
//! ```
//!
//! Alternatively, you can add the following to your `Cargo.toml` file manually:
//!
//! ```toml
//! [dependencies]
//! skillratings = "0.29"
//! ```
//!
//! ### Serde support
//!
//! Serde support is gated behind the `serde` feature. You can enable it like so:
//!
//! Using `cargo add`:
//!
//! ```bash
//! cargo add skillratings --features serde
//! ```
//!
//! By editing `Cargo.toml` manually:
//!
//! ```toml
//! [dependencies]
//! skillratings = {version = "0.29", features = ["serde"]}
//! ```
//!
//! ## Usage and Examples
//!
//! Below you can find some basic examples of the use cases of this crate.\
//! There are many more rating algorithms available with lots of useful functions that are not covered here.\
//! For more information head over to the modules linked above or below.
//!
//! ### Player-vs-Player
//!
//! Every rating algorithm included here can be used to rate 1v1 games.\
//! We use *Glicko-2* in this example here.
//!
//! ```rust
//! use skillratings::{
//! glicko2::{glicko2, Glicko2Config, Glicko2Rating},
//! Outcomes,
//! };
//!
//! // Initialise a new player rating.
//! // The default values are: 1500, 350, and 0.06.
//! let player_one = Glicko2Rating::new();
//!
//! // Or you can initialise it with your own values of course.
//! // Imagine these numbers being pulled from a database.
//! let (some_rating, some_deviation, some_volatility) = (1325.0, 230.0, 0.05932);
//! let player_two = Glicko2Rating {
//! rating: some_rating,
//! deviation: some_deviation,
//! volatility: some_volatility,
//! };
//!
//! // The outcome of the match is from the perspective of player one.
//! let outcome = Outcomes::WIN;
//!
//! // The config allows you to specify certain values in the Glicko-2 calculation.
//! let config = Glicko2Config::new();
//!
//! // The glicko2 function will calculate the new ratings for both players and return them.
//! let (new_player_one, new_player_two) = glicko2(&player_one, &player_two, &outcome, &config);
//!
//! // The first player's rating increased by ~112 points.
//! assert_eq!(new_player_one.rating.round(), 1612.0);
//! ```
//!
//! ### Team-vs-Team
//!
//! Some algorithms like TrueSkill or Weng-Lin allow you to rate team-based games as well.\
//! This example shows a 3v3 game using *TrueSkill*.
//!
//! ```rust
//! use skillratings::{
//! trueskill::{trueskill_two_teams, TrueSkillConfig, TrueSkillRating},
//! Outcomes,
//! };
//!
//! // We initialise Team One as a Vec of multiple TrueSkillRatings.
//! // The default values for the rating are: 25, 25/3 ≈ 8.33.
//! let team_one = vec![
//! TrueSkillRating {
//! rating: 33.3,
//! uncertainty: 3.3,
//! },
//! TrueSkillRating {
//! rating: 25.1,
//! uncertainty: 1.2,
//! },
//! TrueSkillRating {
//! rating: 43.2,
//! uncertainty: 2.0,
//! },
//! ];
//!
//! // Team Two will be made up of 3 new players, for simplicity.
//! // Note that teams do not necessarily have to be the same size.
//! let team_two = vec![
//! TrueSkillRating::new(),
//! TrueSkillRating::new(),
//! TrueSkillRating::new(),
//! ];
//!
//! // The outcome of the match is from the perspective of team one.
//! let outcome = Outcomes::LOSS;
//!
//! // The config allows you to specify certain values in the TrueSkill calculation.
//! let config = TrueSkillConfig::new();
//!
//! // The trueskill_two_teams function will calculate the new ratings for both teams and return them.
//! let (new_team_one, new_team_two) = trueskill_two_teams(&team_one, &team_two, &outcome, &config);
//!
//! // The rating of the first player on team one decreased by ~1.2 points.
//! assert_eq!(new_team_one[0].rating.round(), 32.0);
//! ```
//!
//! ### Free-For-Alls and Multiple Teams
//!
//! The *Weng-Lin* algorithm supports rating matches with multiple Teams.\
//! Here is an example showing a 3-Team game with 3 players each.
//!
//! ```rust
//! use skillratings::{
//! weng_lin::{weng_lin_multi_team, WengLinConfig, WengLinRating},
//! MultiTeamOutcome,
//! };
//!
//! // Initialise the teams as Vecs of WengLinRatings.
//! // Note that teams do not necessarily have to be the same size.
//! // The default values for the rating are: 25, 25/3 ≈ 8.33.
//! let team_one = vec![
//! WengLinRating {
//! rating: 25.1,
//! uncertainty: 5.0,
//! },
//! WengLinRating {
//! rating: 24.0,
//! uncertainty: 1.2,
//! },
//! WengLinRating {
//! rating: 18.0,
//! uncertainty: 6.5,
//! },
//! ];
//!
//! let team_two = vec![
//! WengLinRating {
//! rating: 44.0,
//! uncertainty: 1.2,
//! },
//! WengLinRating {
//! rating: 32.0,
//! uncertainty: 2.0,
//! },
//! WengLinRating {
//! rating: 12.0,
//! uncertainty: 3.2,
//! },
//! ];
//!
//! // Using the default rating for team three for simplicity.
//! let team_three = vec![
//! WengLinRating::new(),
//! WengLinRating::new(),
//! WengLinRating::new(),
//! ];
//!
//! // Every team is assigned a rank, depending on their placement. The lower the rank, the better.
//! // If two or more teams tie with each other, assign them the same rank.
//! let rating_groups = vec![
//! (&team_one[..], MultiTeamOutcome::new(1)), // team one takes the 1st place.
//! (&team_two[..], MultiTeamOutcome::new(3)), // team two takes the 3rd place.
//! (&team_three[..], MultiTeamOutcome::new(2)), // team three takes the 2nd place.
//! ];
//!
//! // The weng_lin_multi_team function will calculate the new ratings for all teams and return them.
//! let new_teams = weng_lin_multi_team(&rating_groups, &WengLinConfig::new());
//!
//! // The rating of the first player of team one increased by ~2.9 points.
//! assert_eq!(new_teams[0][0].rating.round(), 28.0);
//! ```
//!
//! ### Expected outcome
//!
//! Every rating algorithm has an `expected_score` function that you can use to predict the outcome of a game.\
//! This example is using *Glicko* (*not Glicko-2!*) to demonstrate.
//!
//! ```rust
//! use skillratings::glicko::{expected_score, GlickoRating};
//!
//! // Initialise a new player rating.
//! // The default values are: 1500, and 350.
//! let player_one = GlickoRating::new();
//!
//! // Initialising a new rating with custom numbers.
//! let player_two = GlickoRating {
//! rating: 1812.0,
//! deviation: 195.0,
//! };
//!
//! // The expected_score function will return two floats between 0 and 1 for each player.
//! // A value of 1 means guaranteed victory, 0 means certain loss.
//! // Values near 0.5 mean draws are likely to occur.
//! let (exp_one, exp_two) = expected_score(&player_one, &player_two);
//!
//! // The expected score for player one is ~0.25.
//! // If these players played 100 games, player one would be expected to score around 25 points.
//! // (Win = 1 point, Draw = 0.5, Loss = 0)
//! assert_eq!((exp_one * 100.0).round(), 25.0);
//! ```
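//!
//! For reference, the expected score above can be reproduced by hand. Here is a
//! self-contained sketch of the standard Glicko expectation (which combines both
//! players' deviations before weighting the rating difference); it does not use
//! this crate at all, but arrives at the same ~0.25:
//!
//! ```rust
//! let q = 10_f64.ln() / 400.0;
//! let (r1, rd1) = (1500.0_f64, 350.0_f64);
//! let (r2, rd2) = (1812.0_f64, 195.0_f64);
//!
//! // Combine both deviations, then dampen the rating difference with g().
//! let rd = (rd1 * rd1 + rd2 * rd2).sqrt();
//! let g = 1.0 / (1.0 + 3.0 * q * q * rd * rd / std::f64::consts::PI.powi(2)).sqrt();
//! let exp_one = 1.0 / (1.0 + 10_f64.powf(-g * (r1 - r2) / 400.0));
//!
//! assert_eq!((exp_one * 100.0).round(), 25.0);
//! ```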
//!
//! ### Rating period
//!
//! Every rating algorithm included here has a `..._rating_period` function that allows you to calculate a player's new rating from a list of results.\
//! This can be useful in tournaments, or if you only update ratings at the end of a certain rating period, as the name suggests.\
//! We are using the *Elo* rating algorithm in this example.
//!
//! ```rust
//! use skillratings::{
//! elo::{elo_rating_period, EloConfig, EloRating},
//! Outcomes,
//! };
//!
//! // We initialise a new Elo Rating here.
//! // The default rating value is 1000.
//! let player = EloRating { rating: 1402.1 };
//!
//! // We need a list of results to pass to the elo_rating_period function.
//! let mut results = Vec::new();
//!
//! // And then we populate the list with tuples containing the opponent,
//! // and the outcome of the match from our perspective.
//! results.push((EloRating::new(), Outcomes::WIN));
//! results.push((EloRating { rating: 954.0 }, Outcomes::DRAW));
//! results.push((EloRating::new(), Outcomes::LOSS));
//!
//! // The elo_rating_period function calculates the new rating for the player and returns it.
//! let new_player = elo_rating_period(&player, &results, &EloConfig::new());
//!
//! // The rating of the player decreased by ~40 points.
//! assert_eq!(new_player.rating.round(), 1362.0);
//! ```
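//!
//! Under the hood, each result applies the classic Elo update. As a self-contained
//! sketch of that math (the k-factor of 32 used here is an assumption; the crate's
//! actual default is set by `EloConfig`):
//!
//! ```rust
//! // Classic Elo: move the rating towards the actual score by a k-weighted step.
//! fn elo_update(player: f64, opponent: f64, score: f64, k: f64) -> f64 {
//!     // Logistic expectation of the player scoring against the opponent.
//!     let expected = 1.0 / (1.0 + 10_f64.powf((opponent - player) / 400.0));
//!     player + k * (score - expected)
//! }
//!
//! // A 1000-rated player beating a 1200-rated player gains ~24 points.
//! let new_rating = elo_update(1000.0, 1200.0, 1.0, 32.0);
//! assert_eq!(new_rating.round(), 1024.0);
//! ```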
//!
//! ### Switching between different rating systems
//!
//! If you want to switch between different rating systems, for example to compare results or to do scientific analysis,
//! we provide Traits to make switching as easy and fast as possible.\
//! All you have to do is provide the right Config for your rating system.
//!
//! _**Disclaimer:**_ For more accurate and fine-tuned calculations it is recommended that you use the rating system modules directly.\
//! The Traits are primarily meant to be used for comparisons between systems.
//!
//! In the following example, we are using the `RatingSystem` (1v1) Trait with Glicko-2:
//!
//! ```rust
//! use skillratings::{
//! glicko2::{Glicko2, Glicko2Config},
//! Outcomes, Rating, RatingSystem,
//! };
//!
//! // Initialise a new player rating with a rating value and uncertainty value.
//! // Not every rating system has an uncertainty value, so it may be discarded.
//! // Some rating systems might consider other values too (volatility, age, matches played etc.).
//! // If that is the case, we will use the default values for those.
//! let player_one = Rating::new(Some(1200.0), Some(120.0));
//! // Some rating systems might use widely different scales for measuring a player's skill.
//! // So if you always want the default values for every rating system, use None instead.
//! let player_two = Rating::new(None, None);
//!
//! // The config needs to be specific to the rating system.
//! // When you swap rating systems, make sure to update the config.
//! let config = Glicko2Config::new();
//!
//! // For 1v1 matches we are using the `RatingSystem` trait with the provided config.
//! // If no config is available for the rating system, pass in empty brackets.
//! // You may also need to use a type annotation here for the compiler.
//! let rating_system: Glicko2 = RatingSystem::new(config);
//!
//! // The outcome of the match is from the perspective of player one.
//! let outcome = Outcomes::WIN;
//!
//! // Calculate the expected score of the match.
//! let expected_score = rating_system.expected_score(&player_one, &player_two);
//! // Calculate the new ratings.
//! let (new_one, new_two) = rating_system.rate(&player_one, &player_two, &outcome);
//!
//! // After that, access new ratings and uncertainties with the functions below.
//! assert_eq!(new_one.rating().round(), 1241.0);
//! // Note that because not every rating system has an uncertainty value,
//! // the uncertainty function returns an Option<f64>.
//! assert_eq!(new_one.uncertainty().unwrap().round(), 118.0);
//! ```
//!
//! ## Contributing
//!
//! Contributions of any kind are always welcome!
//!
//! Found a bug or have a feature request? [Submit a new issue](https://github.com/atomflunder/skillratings/issues/).\
//! Alternatively, [open a pull request](https://github.com/atomflunder/skillratings/pulls) if you want to add features or fix bugs.\
//! Leaving other feedback is of course also appreciated.
//!
//! Thanks to everyone who takes their time to contribute.
//!
//! ## License
//!
//! This project is licensed under either the [MIT License](/LICENSE-MIT), or the [Apache License, Version 2.0](/LICENSE-APACHE).
#[cfg(feature = "serde")]
use serde::{de::DeserializeOwned, Deserialize, Serialize};

/// The possible outcomes for a match: Win, Draw, Loss.
///
/// Note that this is always from the perspective of player one.\
/// That means a win is a win for player one and a loss is a win for player two.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum Outcomes {
    /// A win for player one.
    WIN,
    /// A draw between both players.
    DRAW,
    /// A loss for player one.
    LOSS,
}

/// Outcome for a free-for-all match or a match that involves more than two teams.
///
/// Every team is assigned a rank, depending on their placement. The lower the rank, the better.\
/// If two or more teams tie with each other, assign them the same rank.
///
/// For example: Team A takes 1st place, Team C takes 2nd place, Team B takes 3rd place,
/// and Teams D and E tie with each other and both take the 4th place.\
/// In that case you would assign Team A = 1, Team B = 3, Team C = 2, Team D = 4, and Team E = 4.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct MultiTeamOutcome(usize);

impl MultiTeamOutcome {
    /// Makes a new `MultiTeamOutcome` from the given rank.
    #[must_use]
    pub const fn new(rank: usize) -> Self {
        Self(rank)
    }

    /// Returns the rank that corresponds to this `MultiTeamOutcome`.
    #[must_use]
    pub const fn rank(self) -> usize {
        self.0
    }
}

/// Measure of a player's skill.
///
/// 📌 _**Important note:**_ Please keep in mind that some rating systems use widely different scales for measuring ratings.\
/// Please check out the documentation for each rating system for more information, or use `None` to always use default values.
///
/// Some rating systems might consider other values too (volatility, age, matches played etc.).
/// If that is the case, we will use the default values for those.
pub trait Rating {
    /// A single value representing the player's skill.
    fn rating(&self) -> f64;
    /// A value representing the uncertainty of the rating, if the rating system tracks one.
    fn uncertainty(&self) -> Option<f64>;
    /// Initialises a new rating; `None` falls back to the rating system's default values.
    fn new(rating: Option<f64>, uncertainty: Option<f64>) -> Self;
}

/// Rating system for 1v1 matches.
///
/// 📌 _**Important note:**_ The `RatingSystem` Trait only implements the `rate` and `expected_score` functions.\
/// Some rating systems might also implement additional functions (confidence interval, match quality, etc.) which you can only access by using those directly.
pub trait RatingSystem {
    /// The rating type produced and consumed by this rating system.
    type RATING: Rating;
    /// The config type for this rating system.
    type CONFIG;
    /// Initialises a new rating system with the given config.
    fn new(config: Self::CONFIG) -> Self;
    /// Calculates both players' new ratings from the outcome of a match.
    fn rate(
        &self,
        player_one: &Self::RATING,
        player_two: &Self::RATING,
        outcome: &Outcomes,
    ) -> (Self::RATING, Self::RATING);
    /// Calculates the expected outcome of the match for both players.
    fn expected_score(&self, player_one: &Self::RATING, player_two: &Self::RATING) -> (f64, f64);
}

/// Rating system for rating periods.
///
/// 📌 _**Important note:**_ The `RatingPeriodSystem` Trait only implements the `rate` and `expected_score` functions.\
/// Some rating systems might also implement additional functions which you can only access by using those directly.
pub trait RatingPeriodSystem {
    /// The rating type produced and consumed by this rating system.
    type RATING: Rating;
    /// The config type for this rating system.
    type CONFIG;
    /// Initialises a new rating system with the given config.
    fn new(config: Self::CONFIG) -> Self;
    /// Calculates a player's new rating from a list of results.
    fn rate(&self, player: &Self::RATING, results: &[(Self::RATING, Outcomes)]) -> Self::RATING;
    /// Calculates the player's expected score against each of the given opponents.
    fn expected_score(&self, player: &Self::RATING, opponents: &[Self::RATING]) -> Vec<f64>;
}

/// Rating system for two teams.
///
/// 📌 _**Important note:**_ The `TeamRatingSystem` Trait only implements the `rate` and `expected_score` functions.\
/// Some rating systems might also implement additional functions which you can only access by using those directly.
pub trait TeamRatingSystem {
    /// The rating type produced and consumed by this rating system.
    type RATING: Rating;
    /// The config type for this rating system.
    type CONFIG;
    /// Initialises a new rating system with the given config.
    fn new(config: Self::CONFIG) -> Self;
    /// Calculates both teams' new ratings from the outcome of a match.
    fn rate(
        &self,
        team_one: &[Self::RATING],
        team_two: &[Self::RATING],
        outcome: &Outcomes,
    ) -> (Vec<Self::RATING>, Vec<Self::RATING>);
    /// Calculates the expected outcome of the match for both teams.
    fn expected_score(&self, team_one: &[Self::RATING], team_two: &[Self::RATING]) -> (f64, f64);
}

/// Rating system for more than two teams.
///
/// 📌 _**Important note:**_ The `MultiTeamRatingSystem` Trait only implements the `rate` and `expected_score` functions.\
/// Some rating systems might also implement additional functions which you can only access by using those directly.
pub trait MultiTeamRatingSystem {
    /// The rating type produced and consumed by this rating system.
    type RATING: Rating;
    /// The config type for this rating system.
    type CONFIG;
    /// Initialises a new rating system with the given config.
    fn new(config: Self::CONFIG) -> Self;
    /// Calculates all teams' new ratings from the ranks achieved in a match.
    fn rate(
        &self,
        teams_and_ranks: &[(&[Self::RATING], MultiTeamOutcome)],
    ) -> Vec<Vec<Self::RATING>>;
    /// Calculates the expected outcome of the match for every team.
    fn expected_score(&self, teams: &[&[Self::RATING]]) -> Vec<f64>;
}