
Tournament ratings question

I'm seeing some tournaments organized as "Pro C-tier / Amateur B-tier" all on the same course, same tournament. Does this cause the tiers to be rated separately?

If you see them on the same results page, then they're grouped together as the same tournament.

For unofficial ratings, the logic is:
1 - Same tournament (same 5-digit ID number in the URL)
2 - Same round number
3 - Same layout name

When the ratings go official, the separate rounds will likely be grouped together into a "super-round" if the rounds are statistically close to each other. At that point, the logic is:
1 - Same tournament
2 - Same layout name
3 (optional) - Same round number, only if separate rounds show significantly different results on the same layout (a rough sketch of both groupings follows below)
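For illustration only, here's a rough sketch of both groupings in Python. The record fields (tournament_id, round, layout) are placeholders made up for the example, not the PDGA's actual data model:

```python
from collections import defaultdict

def group_unofficial(scores):
    """Unofficial ratings: pool scores by tournament ID, round number, and layout name."""
    pools = defaultdict(list)
    for s in scores:
        pools[(s["tournament_id"], s["round"], s["layout"])].append(s)
    return pools

def group_official(scores, split_rounds=False):
    """Official ratings: pool by tournament and layout into a 'super-round',
    keeping rounds separate only if they scored significantly differently."""
    pools = defaultdict(list)
    for s in scores:
        key = (s["tournament_id"], s["layout"])
        if split_rounds:
            key += (s["round"],)
        pools[key].append(s)
    return pools
```

Either way, a "Pro C-tier / Amateur B-tier" event with one tournament ID and one layout name ends up pooled together.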
 
Dixon Jowers via Facebook: "Whilst crunching some numbers I came across something interesting. The common refrain is that if Paige was playing MPO her rating would be much higher. That seems to make sense in theory but it is speculation.

At the LVC, however, we had an interesting opportunity. The FPO played the same layouts, albeit with different pars, as the MPO. This should give us a 1 to 1 comparison.
Here is what I found...(course, raw score, FPO rating, MPO rating, difference)
Infinite...52...1031...1036...-5
Innova...69...948...909...+39
Factory...57...1017...1013...+4
Innova...62...977...990...-13

This means for the same score, the FPO rating was actually 25 points higher total for the event. What am I missing as far as variables that could be at work that I'm not seeing?
I know the wind kicked up in the late afternoon during round 2, but the lead MPO group was around the turn at the time. I would be shocked if the rating was that drastically different when players in both divisions were on the course concurrently."
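Just to check the arithmetic as quoted, the per-round FPO-minus-MPO differences do sum to +25, with the 948-vs-909 Innova round accounting for most of it:

```python
# (course, raw score, FPO rating, MPO rating) as listed above
rounds = [
    ("Infinite", 52, 1031, 1036),
    ("Innova",   69,  948,  909),
    ("Factory",  57, 1017, 1013),
    ("Innova",   62,  977,  990),
]
diffs = [fpo - mpo for _, _, fpo, mpo in rounds]
print(diffs)       # [-5, 39, 4, -13]
print(sum(diffs))  # 25 -> FPO rated 25 points higher in total for the same raw scores
```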

Jay Yeti's Comment via Facebook: "Are we really shocked? Some of us have been saying this for years. PDGA members must demand an audit of the ratings system now more than ever. I'm not hating. If we are going to have a ratings system, I want it to be as accurate as possible. Especially now that ratings are very much being used with financial implications. The flaws we all spoke about years ago were brushed under the table with this quoted sentiment: "When ratings start affecting real money players are earning, the system deserves to be scrutinized. Until then, it's the best system we have." We ARE HERE...."
 
At the LVC, however, we had an interesting opportunity. The FPO played the same layouts

Did they though? I didn't look at the layouts, but I know for the Memorial, there are some holes where the females have different tee pads - for example: Fountain Hills Hole 1 - one tee pad for the male pros and one tee pad for the male ams and female pros/ams. For that hole, the FPO difficulty would have been the same as male ams, not MPO. So that would change the distance along with the difficulty.
 

I played LVC last year and I can confirm that everyone plays the exact same course: men, women, pro, am, juniors. This year the pars changed for FPO for the first time, but the course is the same. I'd wager the ratings difference just comes down to the weather in the morning when most men played vs. the afternoon when the women played. On day two, for example, there was almost no wind for most MPO cards and the wind picked up a lot for all the FPO cards. I know for sure that the first day had similar weather the whole day, hence the ratings being much closer. I don't remember about days 3 or 4.
 
That could also very well be the case. Though I feel like most people's thinking is: if they play the same course, just throw everyone into the same bucket to calculate the ratings. Varying weather conditions between when the divisions play make that kind of silly sometimes. If most of MPO plays in the morning with almost no wind and FPO plays in the afternoon with nearly 20 mph winds (as was the case on the second day of LVC; I just double-checked the weather from that day), I would definitely expect a big swing in ratings like we saw. I also double-checked day three, which had very similar weather the whole day, and day four, which had 5-10 mph less wind for the FPO field.

I think even a day like today at WACO, where there wasn't very much wind in the morning and it picked way up in the afternoon, could be given a split in the calculations to give more "accurate" ratings among just the MPO field. Though deciding where to make that split would probably create more headaches than it's worth.
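As a very rough sketch of what that kind of split might look like, here's one way to partition a round's scores by tee time before computing each pool's scoring average. The cutoff hour and field names are invented for illustration; this is not how the PDGA actually computes ratings:

```python
from statistics import mean

def split_by_tee_time(scores, cutoff_hour=12):
    """Partition one round's scores into morning and afternoon pools by tee time,
    so each pool could be rated against its own scoring average."""
    morning = [s for s in scores if s["tee_hour"] < cutoff_hour]
    afternoon = [s for s in scores if s["tee_hour"] >= cutoff_hour]
    return morning, afternoon

def pool_scoring_average(pool):
    """Average raw score for a pool; a ratings formula would then compare each
    player's score against this pool-specific baseline instead of the whole field's."""
    return mean(s["strokes"] for s in pool) if pool else None
```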
 
I think the main idea is to assign players ratings, not to be a precise number for a given round.

If round ratings are imprecise, over time they'll average out. So if ratings for a single round are subject to a variability of 20 points, then over a 4-round tournament that should average down to about 5 points, and over 10 tournaments to about 0.5 points; in other words, it becomes inconsequential.
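A quick simulation of that idea, assuming each round's rating carries an independent random error with a 20-point spread. Under that assumption the average error shrinks with the square root of the number of rounds (roughly 10 points over 4 rounds, about 3 over 40) rather than strictly linearly, but the broader point stands: over many rounds the noise becomes small.

```python
import random
from statistics import mean, pstdev

def spread_of_average_error(n_rounds, per_round_sd=20, trials=10_000):
    """Standard deviation of a player's average rating error when each round
    carries an independent random error with the given per-round spread."""
    averages = [mean(random.gauss(0, per_round_sd) for _ in range(n_rounds))
                for _ in range(trials)]
    return pstdev(averages)

for n in (1, 4, 40):  # a single round, a 4-round event, roughly ten such events
    print(n, round(spread_of_average_error(n), 1))
# prints roughly 20.0, 10.0, 3.2 -- shrinking like 20 / sqrt(n)
```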
 
I pretty much agree with that. I guess I worded my bit on WACO poorly; I was making the suggestion mostly in response to the "the system is broken" angle that others in this thread and the quote from Yeti are coming from.
 

The late cards in MPO and FPO got beat up by the wind today in Waco.

Regarding LVC, I think both it and the Memorial had surprisingly little wind compared to most years.
 

I'm just guessing. There could be more to it, for all I know.

The main value of ratings is to separate amateurs into competitive divisions, and I think it does that quite well. A problem has arisen with the attention paid to a single high-rated round, when I don't think ratings for a single round are precise enough to warrant it.

I'm less certain about the claims that certain circumstances allow for higher or lower round ratings. I've seen a lot of claims to that effect over the years, but little proof. Which isn't to say it isn't true; just unproven (in my experience).
 
