
Why Ratings are awful

This makes sense.

6 pages in and I finally got something that makes sense.

So you needed someone you liked to say the same thing that several other posters you don't like have already said to be able to understand it. Putting that communications degree to good use...
 
I assume you're talking about forcing players up when they win or place well at local events, forcing them into divisions they're not ready for and often causing them to stop playing competitively. Since you've capitalized it, perhaps it has some other very specific meaning to you...

Close.

I would love to explain it if you are interested. But I guess I wouldn't communicate it well enough for you or anyone else to understand.
 
So they don't matter yet virtually every division has ratings standards for who can and can't enter it?

They only have to be accurate to within 50 points to separate people according to those guidelines. I think the system does a fine job of getting within 50 points. Also, you don't have to be rated to play most events; just pay the extra fee and play wherever you want. That's why they don't matter to me much. It's not like the divisions are strict. Wanna play up? Play up. Wanna play down? Don't get rated. Pretty simple.
 
Yes it's a lot like the BCS. The BCS is terrible but way better than when people couldn't play each other due to conference bowl affiliations.

But as we now see, the BCS isn't the answer.

Funny, I liked the pre-BCS, vote-and-argue system. I'm pretty sure I'll like the playoff system more than the BCS.

Problem is, I haven't seen anything better than ratings offered up, anywhere. If "par" weren't such a mess, and there was a universally agreed upon and applied standard that worked well over the tremendous variety of courses we have, we could use it. But that's a whole 'nother thread.
 
So you needed someone you liked to say the same thing that several other posters you don't like have already said to be able to understand it. Putting that communications degree to good use...

He went into further detail and instead of using general Chuck mumbo jumbo, he referenced my example and addressed it.

Reading is fun!
 
If "par" weren't such a mess, and there was a universally agreed upon and applied standard that worked well over the tremendous variety of courses we have, we could use it. But that's a whole 'nother thread.

Agreed. The root of this issue is the lack of standardization of par in disc golf.

A buddy texted me today and said he shot 80 on the golf course. He didn't tell me where. It didn't matter. Golf has a standardization of par.

But if a buddy told me he shot 54 on a course in disc golf, I would need to know a lot more to know how he played.
 
He went into further detail and instead of using general Chuck mumbo jumbo, he referenced my example and addressed it.

Reading is fun!

Exactly. You didn't need any details from chuck to extrapolate his very specific message to you into a big general issue with the ratings system, but you couldn't generalize the many points about small sample sizes to understand the point David explained in detail.
 
The hyperinflation of NT rounds and the upward trend of ratings has a lot to do with the fact that the NT events typically have very highly rated players in them, thus the highly rated propagators tend to spiral the ratings upward and upward.
In a local C-tier or B-tier a player must destroy everyone to get a 1020 round.

Generally the ratings are a good thing, but ask any pro that plays in a tourney with the NT pack and they know that if they play decently a highly rated round will be registered. If the same round was shot with local ams, there's not nearly as good a chance of a high rating.

It all comes down to who you play with, and playing with highly rated players begets higher ratings. This is why there will eventually be NT guys with 1080 ratings.
 
My local course has an SSA of roughly 48.5 - 49. Hundreds of rounds played there and always right around there.

When we had a NT here it was magically 51. Highest it's ever been.
 
The hyperinflation of NT rounds and the upward trend of ratings has a lot to do with the fact that the NT events typically have very highly rated players in them, thus the highly rated propagators tend to spiral the ratings upward and upward.
In a local C-tier or B-tier a player must destroy everyone to get a 1020 round.

Generally the ratings are a good thing, but ask any pro that plays in a tourney with the NT pack and they know that if they play decently a highly rated round will be registered. If the same round was shot with local ams, there's not nearly as good a chance of a high rating.

It all comes down to who you play with, and playing with highly rated players begets higher ratings. This is why there will eventually be NT guys with 1080 ratings.

I've heard this argument a few times, but I've never seen any numbers that don't completely disagree with it. We have more players with ratings near the current highest rated player, but that rating is not much higher than Climo's rating when he was at the top. Do you think it's possible that putting the top 10 guys in the same event makes them all elevate their game to stay with each other?
 
My degree in Communications extremely disagrees with you.
That and a quarter will get you a pack of gum.

Yes it's a lot like the BCS. The BCS is terrible but way better than when people couldn't play each other due to conference bowl affiliations.

But as we now see, the BCS isn't the answer.
The BCS is designed to do one thing: match the top two teams at the end of the year. Nothing more. Perfect? No, but quite good at its sole purpose.
_MTL_ said:
Wah, wah wah wa wah. Me me me, you no no, wah wah.

Good for entertainment value, of the same ilk as a 24-hour Britney Spears webcam, but no one's getting smarter from it.
 
The hyperinflation of NT rounds and the upward trend of ratings has a lot to do with the fact that the NT events typically have very highly rated players in them, thus the highly rated propagators tend to spiral the ratings upward and upward.
In a local C-tier or B-tier a player must destroy everyone to get a 1020 round.

Generally the ratings are a good thing, but ask any pro that plays in a tourney with the NT pack and they know that if they play decently a highly rated round will be registered. If the same round was shot with local ams, there's not nearly as good a chance of a high rating.

It all comes down to who you play with, and playing with highly rated players begets higher ratings. This is why there will eventually be NT guys with 1080 ratings.

Interestingly enough, I shot a 65 in the last round of the VO am side rated 985. I then shot a 65 in the first round of the NT event, which was rated 976. I think it has more to do with getting a varied set of propagators rather than the actual ratings of the unvaried set of propagators.
 
Just look at the ratings! It's that simple, isn't it?

Well then I don't have to look. The higher rated round is more likely to not be reproduced in identical settings. That's just about what round ratings tell you when compared between courses and even just different rounds. Between the same round you can tell which one was better. That's about it.

And the ratings system works and averages everything out. If you single out a tournament round at any given course, you can rate the players based on their performances (scores). Do this again for a different round at the same course. Repeat again and again, and soon you will start to see trends. After a certain number of rounds (technically a sample size > 10) you can start to average individual players' scores, do some stats, and find their average round and the standard deviation of what they shoot, along with some other stuff that's irrelevant.

So now you've got a list of players that you have rated based on their rounds, and they are rated on their average. A good round will bring their rating up; a bad one will bring it down.
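That averaging step can be sketched in a few lines. The round ratings below are made-up numbers for illustration, not real PDGA data:

```python
import statistics

# Hypothetical round ratings for one player (assumed values, >10 rounds).
round_ratings = [948, 962, 955, 971, 940, 958, 966, 950, 973, 945, 960]

# The player's rating is just the average of their rated rounds;
# the standard deviation describes the spread of what they typically shoot.
player_rating = round(statistics.mean(round_ratings))
spread = statistics.stdev(round_ratings)

print(player_rating)  # averages out to 957 for these sample rounds
```

A good round above the average pulls the mean up, a bad one pulls it down, exactly as described.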

Let's throw a curveball in here. Our group of players learns of the existence of another course. They decide to play this one for a while, and they do the same thing, keeping track of their scores. After they record enough rounds, they find that they shoot different averages, so they have different ratings at this new course. How can they make the two ratings jibe?

This is where we are running into the problem: the courses are so different that any metric isn't going to provide a perfect picture. But we can make a model that works well enough most of the time. And that's exactly what happens. This is the SSA.

And I've lost my train of thought. That, and I am talking from my anus. I really don't know much about the inner workings of the ratings system and SSAs, but this is how it works out in my head and makes logical sense.
 
This analysis is based on actual results data from Pro Worlds 2013.

....and for anyone who thinks PDGA Player Ratings are not accurate check this out:

MPO
1040+ players (5 of them) finished in the top 5 spots
Of 1030+ rated (13 of them) only 1 (Rovere - 1034) finished outside the grouping and only one lower rated was in that grouping (Colglazier - 1025)
Of 51 1000+ rated players, 7 did not cash
Of 79 999- rated players, 8 cashed
Of 6 players rated 950-, they all finished in the bottom 14
Joe Bishop breaks the curve, being one of the 4 lowest rated players that did not finish in the 4 lowest spots. He finished 10th from the bottom.

FPO
3 highest players finished in the top 3 spots
11 lowest players (902-), all finished in the bottom 13 spots

MPM
2 highest rated finished in top 2
7 highest rated finished in top 7
Of 9 1000+ players only 1 was not in that group (Bygde) and only 1 lower rated made it into that group (Weimer - 995 rated)
Bottom 10 (950-) all finished in bottom 20. If not for TD of the Year, Sam Nicholson, they would have all finished in the bottom 13. (Sam ran combined Worlds last year and did a TON of course work in preparation....and his game suffered. Now it is rebounding back to the 975 level he has played at for years)

FPM
Other than the winner (local Barrett White winning by 1), they finished in order of their ratings (division of 6)

MPG - the most sporadic division with no great correlation of ratings to finish place
Top 11 players (985+) all finished in the top 20
Bottom 9 players (935-) all finished in the bottom 12

FPG
Finished in ratings order (division of 4 players)

MPS
Highest rated player won
Highest 5 rated players finished in top 5
Lowest 5 rated players (900-) finished in bottom 5

This data is for all players who completed 6 rounds (7 if they made the cut into the Semis, and 7.5 if they were in the Finals). This amount of DG evens out any individual/isolated hot or cold rounds.

As in the first post, I hope this makes sense for everyone interested in analyzing and discussing.
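One way to put a single number on "ratings predict finish" for a division like the ones above is a Spearman rank correlation. This is a sketch with hypothetical (rating, finish place) pairs mimicking the pattern described, not the actual Pro Worlds 2013 results:

```python
def spearman(ratings, places):
    """Spearman rank correlation between player ratings and finish places.
    +1 means finish order exactly matched ratings order. Assumes no ties."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx = ranks([-r for r in ratings])  # higher rating -> better (lower) rank
    ry = ranks(places)
    n = len(ratings)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical division: mostly rating-ordered with a couple of swaps.
ratings = [1040, 1035, 1030, 1010, 1000, 985, 960, 950]
places = [1, 3, 2, 4, 6, 5, 7, 8]
print(round(spearman(ratings, places), 3))  # close to 1 despite the swaps
```

A near-perfect correlation over 6+ rounds is the same point the post makes: isolated hot or cold rounds wash out, and finish order tracks ratings.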
 
I personally think the ratings system does an excellent job considering all the variation in our sport from round to round. Most of the imperfections (like unusually high ratings at the Memorial) can easily be logically explained and are pretty much never important. Most of the whining I hear comes from people who aren't happy with their ratings and therefore blame the system.
 
My guess is that the standard deviation of the field's rounds at Maple Hill is higher, so SSA points per stroke has to be lower--and it looks to be about 8 points per stroke. At the Memorial, the field's scores are more compressed due to it being Par 54, so the standard deviation is lower and the SSA points per stroke is higher--and it is 10 points per stroke.

Let's say there are two courses where par is 1000 rated: a par 54 course and a par 59 course and getting a birdie on each hole on each course is equally challenging. On the par 54 course, shooting a -15 gets you a 1150. On the par 59 course, it gets you a 1120. Change the numbers a little, and that seems to be approximately what's going on with the Memorial and Vibram Open rounds.

The key element of the PDGA ratings is the assumption of constant SSA points per stroke; i.e., you earn the same number of additional points for going from +4 to +3 as you get for going from -14 to -15. This system probably works very well for determining round ratings in the middle of the pack, but things probably get weird when it comes to assigning a rating to a really hot (or even a really cold) round.
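Working the quoted example through under that constant points-per-stroke assumption (the 1000-rated baseline and the 8 vs. 10 points-per-stroke figures are the estimates from the post, not official PDGA constants):

```python
def round_rating(score_vs_par, pps, par_rating=1000):
    """Round rating under a constant points-per-stroke model:
    each stroke under the 1000-rated score is worth `pps` points."""
    return par_rating - score_vs_par * pps

# Par-54 course, compressed scores -> higher points per stroke (assumed 10):
print(round_rating(-15, 10))  # 1150

# Par-59 course, more score spread -> lower points per stroke (assumed 8):
print(round_rating(-15, 8))   # 1120
```

Same -15, a 30-point difference in round rating, which is roughly the Memorial vs. Vibram Open gap being described.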
 
My guess is that the standard deviation of the field's rounds at Maple Hill is higher, so SSA points per stroke has to be lower--and it looks to be about 8 points per stroke. At the Memorial, the field's scores are more compressed due to it being Par 54, so the standard deviation is lower and the SSA points per stroke is higher--and it is 10 points per stroke.

Let's say there are two courses where par is 1000 rated: a par 54 course and a par 59 course and getting a birdie on each hole on each course is equally challenging. On the par 54 course, shooting a -15 gets you a 1150. On the par 59 course, it gets you a 1120. Change the numbers a little, and that seems to be approximately what's going on with the Memorial and Vibram Open rounds.

The key element of the PDGA ratings is the assumption of constant SSA points per stroke; i.e., you earn the same number of additional points for going from +4 to +3 as you get for going from -14 to -15. This system probably works very well for determining round ratings in the middle of the pack, but things probably get weird when it comes to assigning a rating to a really hot (or even a really cold) round.

Just FYI, the points per stroke is based solely on the SSA and scales linearly (with two different slopes on either side of a specific point near 50.4); it doesn't vary based on the variance in any given round.
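The shape being described is a piecewise-linear function of SSA. Here's a sketch of that structure; the 50.4 breakpoint comes from the post, but the slopes and the 10-point baseline are made-up placeholders, not the real PDGA constants:

```python
def points_per_stroke(ssa, breakpoint=50.4):
    """Illustrative piecewise-linear points per stroke as a function of SSA.
    Only the breakpoint near 50.4 comes from the discussion above; the
    slopes below are assumed for illustration, not the actual formula."""
    if ssa <= breakpoint:
        return 10.0 - 0.05 * (breakpoint - ssa)  # assumed gentle slope below
    return 10.0 - 0.10 * (ssa - breakpoint)      # assumed steeper slope above

def rate_round(score, ssa):
    # Constant points per stroke for a given SSA, per the posts above.
    return 1000 + (ssa - score) * points_per_stroke(ssa)
```

The key property this captures: for a fixed SSA the per-stroke value is constant, so the stroke from +4 to +3 is worth the same as the stroke from -14 to -15; only a different SSA changes the per-stroke value.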
 