  #139  
09-19-2019, 04:13 PM
jeverett is offline
Double Eagle Member
 
Join Date: Sep 2009
Location: Eugene, OR
Years Playing: 10.5
Courses Played: 27
Throwing Style: LHBH
Posts: 1,201
Niced 48 Times in 32 Posts

I know I have some old posts around here about rating systems, but as Chess rating systems were mentioned, I thought I would chime in a bit.

One key difference between the rating systems used for games like Chess and the PDGA system is that one is a 'player' rating system, and the other is a 'round' rating system.

Player rating systems like Elo (chess) or TrueSkill are typically associated with 1v1 competition, though TrueSkill (among others) can handle group and even team-based competition as well. The really important point about these systems is that they revolve around win/loss outcomes: behind the scenes, the rating difference between any two players dictates the probability that one player beats the other. It would be much easier to implement a system like this in disc golf if we only cared about whether player A beat player B and completely ignored the question of "by how much?". The second big feature of player rating systems is that they also track, behind the scenes, a measure of confidence in each rating, i.e. how confident am I that player A's rating is 'correct'? The more confident I am in a particular player's rating (e.g. because of the number of matches they've already played), the more weight that value carries when adjusting the ratings of the participants after a match.
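To make the win/loss idea concrete, here's a minimal sketch using Elo's expected-score formula, with a K-factor that shrinks as a player accrues games as a crude stand-in for the confidence idea. TrueSkill's real machinery (a full Gaussian skill belief per player) is more involved, so treat this purely as illustration:

[code]
# Minimal Elo-style sketch: rating difference -> win probability, then update.
# The shrinking K-factor is only a crude stand-in for the "confidence" idea;
# TrueSkill actually tracks a mean AND an uncertainty (mu, sigma) per player.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B, given only the rating difference."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def k_factor(games_played: int) -> float:
    """More games -> more confidence in the rating -> smaller adjustments."""
    return 32.0 if games_played < 30 else 16.0

def update(rating_a: float, rating_b: float, games_a: int, a_won: bool) -> float:
    e_a = expected_score(rating_a, rating_b)
    return rating_a + k_factor(games_a) * ((1.0 if a_won else 0.0) - e_a)

# A 1000-rated player beating a 1200-rated player is an upset, so the winner
# gains a lot; the expected result would have moved the ratings only slightly.
print(expected_score(1000, 1200))                   # ~0.24
print(update(1000, 1200, games_a=10, a_won=True))   # ~1024
[/code]

Note that nowhere in there does "margin of victory" appear; the whole system runs on who beat whom.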

By contrast, in a 'round' rating system we're interested in players' actual projected round scores, rather than simply the percentage of the time they would beat some other player. With a big field of players we get far more data than a rank-ordered list of who beat who, but it's also extremely difficult to sort out which of it is valuable/useful information because of the myriad of variables involved (e.g. the course, the participant pool, etc.). The PDGA system also does not include any measure of how confident it is in a player's rating (i.e. all 'propagators' are treated equally in terms of the value of their round data). A toy version of the idea is sketched below.
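This is NOT the PDGA's actual formula, just a hypothetical sketch of the 'round rating' concept: assume a straight-line relationship between score and rating, fit it from the propagators' established ratings and their scores on this round (all weighted equally, per the point above), and then rate any score off that line.

[code]
# Hypothetical sketch of a "round rating" calculation (not the PDGA's formula).
# Every propagator counts equally, mirroring the lack of per-player confidence.

from statistics import mean

def round_rating(score: int, propagators: list[tuple[int, float]]) -> float:
    """propagators: list of (round_score, established_rating) pairs."""
    scores = [s for s, _ in propagators]
    ratings = [r for _, r in propagators]
    # Least-squares slope of rating vs. score (rating points per throw, negative
    # because a lower score is better).
    s_bar, r_bar = mean(scores), mean(ratings)
    slope = sum((s - s_bar) * (r - r_bar) for s, r in propagators) / \
            sum((s - s_bar) ** 2 for s in scores)
    return r_bar + slope * (score - s_bar)

props = [(52, 1000.0), (55, 975.0), (58, 940.0), (60, 930.0)]
print(round(round_rating(54, props)))  # ~982: better than the field average
[/code]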

Finally, one (psychological) consideration that I think rating system designers underestimate is accuracy vs. (upward) progression. Most players don't actually like 'true' rating systems, because of how rapidly they plateau a player's rating: players generally want to see some kind of continual upward progression, yet rating systems are built around locking in on a stable and accurate depiction of a player's skill. In disc golf terms, consider that in the PDGA system, 90% of your rounds (once you already have a few rated rounds in your rating) will come in within +/- 30 rating points of your own rating. e.g. If you're a 950-rated golfer and you throw a 980-rated round (or a 920-rated round), that could still simply be you performing at your 'correct' skill level. So a more 'true' player rating system might simply call both of those rounds a '9'. This is why rating scales such as TrueSkill's are deliberately small (TrueSkill's default scale is centered on 25), i.e. there are only a few possible 'steps' between the rating minimum and maximum.
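As a toy illustration of what a coarse scale does (not any real system's math), notice how a 100-point-wide step swallows that +/- 30 point round-to-round variance entirely:

[code]
# Hypothetical sketch: collapse PDGA-style ratings onto wide skill steps, so
# ordinary variance around a player's true skill all reads as the same value.

def coarse_skill(round_rating: float) -> int:
    """Truncate a PDGA-style rating to a 100-point-wide skill step."""
    return int(round_rating // 100)

for r in (920, 950, 980):
    print(r, "->", coarse_skill(r))   # all three print 9
[/code]

Accurate? Arguably more so. Satisfying to watch week over week? Not really, which is the designer's dilemma.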
