
Hey hey ho ho round ratings have got to GO!

The point is that, with enough money + cajoling + threatening (I assure you that I don't have the power, wealth, or desire to make this happen), I could convince twenty people (10 of them rated 800 and 10 rated 1000) to "play" a round in a tournament I've set up and post identical scores. It would be utterly futile and tell us nothing at all about the rating system, but it is possible.

Even without this, someone could design a course on which competitive players would probably post the exact same score. Say 18 ten-foot holes, as an example. Again, stupid, but possible, i.e. not impossible.

Further, in a real-world tournament, on a proper course with competitive players, I do not expect to see this occur in my lifetime, or even before the heat death of the universe, but it's not impossible.

Also, your understanding of 99.999% is wrong. Regardless of all the real-world possibilities for error, that percentage leaves open the possibility of an alternative result. In a practical sense you should feel supremely confident that you are the father, but it is not a certainty. By DEFINITION.

I suggest you don't say 'by definition' and ignore the definition.

So, let's start with your "suggestion."
From Webster's:

definition (n.): a statement expressing the essential nature of something;
root word,
define (v., tr.): to determine or identify the essential qualities or meaning of.

And for complete clarification --
essential (adj.): of, relating to, or constituting essence; INHERENT
and
essence (n.): the permanent as contrasted with the accidental element of being; OR the individual, real, or ultimate nature of a thing, especially as opposed to its existence.


So it seems to me that the definition of something has to do with what is real -- not what is theoretical or hypothetical. No need to lecture me on the definition of "definition." Neither my education nor my knowledge of English is in question here.

Recall that I've never stated that you couldn't plug those numbers into the formula and have it "spit out" a result. I've stated that those results, the ones NOT IN THE REAL WORLD, would not be valid or reliable statistics. Hence, those in hypothesy (sic) have no meaning whatsoever. When I said certain things were impossible I was speaking in the context of this thread -- drawing conclusions about the PDGA ratings system. To me that only includes valid, reliable conclusions.

Now, one at a time.

The part in RED renders your experiment unreliable: you've manipulated the subjects of the experiment.
The part in PURPLE manipulates the conditions or environment of the experiment. That makes it not valid.
The part in LIME GREEN seems to echo what I've been saying: it cannot happen in reality. Like the definitions of "definition" above (sans your qualifier)!
The part in BLUE (again sans your qualifier) is about the mathematics of the calculation NOT the reality of the outcome. There exists no other possibility, in reality, other than YOU ARE THE FATHER. The fact that it says 99.999% instead of 100.00% is about how those results are calculated, and that only.

Just for fun I figured we could consider a set of Bernoulli trials.
Assume:
- All holes played by each player are independent (a big assumption)
- Each player's score is independent of all others (another big assumption)
- There is an 18-hole course where the following are true (probably an ok assumption, but I bet there's good data somewhere to be used instead):
  - 800-rated players score par 20% of the time (and mostly worse the rest)
  - 1000-rated players score par 20% of the time (and mostly better the rest)
- The tournament consists of one round.
- We only count the "same score" as all players scoring par on every hole (another big assumption).

Then, using a binomial distribution with 360 (20 x 18) trials and probability 20%, the probability of all trials being successful is 2.35 x 10^(-252), or 1 in 4.26 x 10^251. For reference, the number of particles in the observable universe has been estimated at 3.28 x 10^80. So if every particle in the observable universe had its own observable universe, and every particle in those universes had its own observable universe, and each of those innermost particles ran 12 billion of these tournaments, we should expect that one of them would produce this result. So, definitely possible.
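For anyone who wants to check that figure, here's a minimal sketch of the same calculation in Python, under the same assumptions (independent holes, a flat 20% par rate for every player on every hole):

```python
from math import log10

p_par = 0.20        # assumed probability that any player pars any given hole
trials = 20 * 18    # 20 players x 18 holes = 360 independent Bernoulli trials

# Probability that every single trial is a par (everyone shoots the same 54)
p_all_par = p_par ** trials

print(p_all_par)             # ~2.35e-252
print(log10(1 / p_all_par))  # ~251.6, i.e. odds of roughly 1 in 4.3 x 10^251
```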

Since that is absolutely rife with assumptions, none of which are consistent with how the formula was developed, I won't even go there. Suffice it to say that all the plugging into a computer in the world cannot make that scenario occur. Why should this ratings system, developed for real-life application, have to have any consistency with a purely hypothetical, not-real-life occurrence? I say it shouldn't. There is no such thing as a REAL course where better players and worse players (BOTH GROUPS) will score par 20% of the time. That's not possible. In fact, I think this is where you missed what I meant when I said, "by definition." I was talking about players -- the better players (i.e., those rated 1000) and the worse players (i.e., those rated 800). Perhaps you disagree with me that the 1000-rated players are better players than the 800-rated players, or that 800-rated players are worse players than 1000-rated players; if so, you are incorrect. And I know the definitions of "better" and "worse."

Of course not. My point about it being mathematically possible was to point out the flaw/bug/anomaly (whatever you want to call it) that exists in the ratings system: as long as divisions are rated separately, identical scores can yield different round ratings, and they will always favor the division with higher rated players. As far as inflation, bubbling up, whatever you want to call it, it's pretty easy to grasp the concept that as long as the higher rated players are always rated among themselves (DGPT events, for example), combined with a ratings system that knows no limits, well...the rich will get richer. ;)

I don't see that as a flaw in the system. They are better players. And that's true for anyone who does the same. It works for everybody.


It just doesn't follow that the ratings would necessarily inflate; they might, but I don't see any guaranteed mechanism by which they would. Identical scores yielding different round ratings doesn't necessarily favor the division with higher rated players. I'm sure that stat could be pulled. If we look at round ratings for fields with higher rated players and round ratings for fields with lower rated players, do identical scores on identical courses average higher when scored by the higher rated field? (Averaging over enough separate events should minimize the effect of course conditions.)

I can do this analysis quite easily if there's a good source for the data and it's easy to confirm that identical courses are being played. Anyone know how/where it's available in an easy-to-consume form? (The format on the PDGA website isn't ideal; I don't think it's easy to identify identical course layouts, and writing a script to scrape it just doesn't seem appealing right now.)
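If the data ever does turn up in a friendly format, a rough sketch of the comparison I have in mind might look like the following. Everything here is a placeholder: the file name and the column names (layout_id, round_score, field_avg_rating, round_rating) are hypothetical stand-ins for whatever the real export actually provides.

```python
import pandas as pd

# Hypothetical input: one row per (event, player round), with assumed columns
# layout_id, round_score, field_avg_rating, and the round_rating the system assigned.
df = pd.read_csv("rounds.csv")  # placeholder file name

# Split fields into "lower" and "higher" rated halves by field average rating
df["field_bucket"] = pd.qcut(df["field_avg_rating"], 2, labels=["lower", "higher"])

# For identical (layout, score) pairs, compare the average round rating each bucket received
pivot = (df.groupby(["layout_id", "round_score", "field_bucket"])["round_rating"]
           .mean()
           .unstack("field_bucket")
           .dropna())

# A consistently positive difference would suggest identical scores rate higher in stronger fields
print((pivot["higher"] - pivot["lower"]).describe())
```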

Since I do not know the formula I cannot say, but I'd guess there is an asymptote out there somewhere. But that's purely a guess.
 
. . . There is no such thing as a REAL course where better players and worse players (BOTH GROUPS) will score par 20% of the time. . . .

Maybe. But picture a basket set in a bowl behind a dense stand of closely-spaced trees. 1000-rated players have an easy throw over the top, and the bowl directs most errant shots to the basket. I could see 80% birdies.

800-rated players lack the power to go over the top, so they play poke and hope. If the trees are dense enough, I could see 20% pars and the rest worse.

Does such a hole currently exist? I dunno. Could such a hole exist? Yes. Would the scores be exactly 20% pars? Um, maybe. But the exact percentage is not the point.

. . . That's not possible.

As Inigo Montoya wisely said, "You keep using that word. I do not think it means what you think it means." ;)

If you substituted "prohibitively unlikely" or even "completely unrealistic" for "impossible", I would probably agree with your point unreservedly.

Apologies for focusing on the word "impossible", but much real-world anguish has been caused by the difference between impossible and highly improbable. For an example, see the demise of the Long Term Capital Management hedge fund in the 1998 financial crisis.

Fortunately, possible flaws in disc golf player rating systems are unlikely to disrupt worldwide financial markets. :D
 
...
Does such a hole currently exist?
...

Maybe one. "Nine Holes In" at Kaposia Park. Very short with many little trees surrounding the target.

Everyone can reach it, no one can hit any gap with regularity.

Of the thousands and thousands of holes I've studied, it is the only one that shows no difference in scoring by rating.

I say "maybe" because I don't have data from the very best players.
 
For those interested in the calculation of round ratings, I think this is the mechanism used. I have seen this quoted and it fits with data I have looked at, but I am not 100% sure that I have everything correct.

Step 1: Find the SSA
Plot the propagators' player ratings vs. their round scores and then fit a line to the data. There are better ways to do it, but for simplicity, you can use Microsoft Excel: plot the data, choose fit trend line, choose linear, and then choose "display equation on chart".
This gives you Y = mX+b
SSA = m*1000 + b
An example of a perfect fit:
There are 6 players rated [800 850 900 950 1000 1050] and they respectively score [70 65 60 55 50 45]. This is a perfect linear fit that yields Y = -0.1*X + 150.
SSA= -0.1*1000+150 = 50


Step 2: Find the Rating Increment
This is the part I am not sure about, but I have seen it quoted.
If SSA >= 50.3289725: Rating increment = -0.225067*SSA+21.3858
If SSA < 50.3289725: Rating increment = -0.487095*SSA+34.5734
This only works if there are 18 holes in the round. When there are a different number of holes, you have to scale the SSA by 18/Number_Holes.


Step 3: Find Round Rating
Round Rating = (SSA-Round_Score)*Rating_Increment+1000
So given the scenario where SSA=50, Rating_Increment=10, and you shoot a 52:
Round Rating = (50-52)*10+1000 = 980 since each stroke counts 10 points

The linear fit part to get SSA is pretty straightforward. I assume there is historical background involved in the Rating Increment calculations, but I don't know where those numbers came from. Probably has something to do with getting 10 points per stroke on certain courses. If SSA=50, then you basically get 10 points per stroke. In the Idlewild Open, SSA~=66.5, so it was about 6.5 points per stroke.
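Here's a rough Python sketch of those three steps as I understand them, using the example numbers above. Again, this is not an official implementation; the Step 2 constants are just the quoted values taken on faith, and I'm assuming the 18/Number_Holes scaling applies to the SSA used for the increment.

```python
import numpy as np

def round_rating(prop_ratings, prop_scores, your_score, num_holes=18):
    """Sketch of the three steps described above (not an official PDGA implementation)."""
    # Step 1: linear fit of propagator ratings (x) vs. their scores (y);
    # SSA is the fitted score of a 1000-rated player
    m, b = np.polyfit(prop_ratings, prop_scores, 1)
    ssa = m * 1000 + b

    # Step 2: rating increment (points per stroke), using the quoted piecewise formula;
    # scale to an 18-hole equivalent first if the round isn't 18 holes
    ssa18 = ssa * 18 / num_holes
    if ssa18 >= 50.3289725:
        increment = -0.225067 * ssa18 + 21.3858
    else:
        increment = -0.487095 * ssa18 + 34.5734

    # Step 3: rating of the round in question
    return (ssa - your_score) * increment + 1000

# Example from above: six propagators, perfect linear fit, SSA = 50
ratings = [800, 850, 900, 950, 1000, 1050]
scores  = [70, 65, 60, 55, 50, 45]
print(round_rating(ratings, scores, 52))  # ~979.6, i.e. roughly 10 points per stroke at SSA 50
```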
 
...
There are 6 players rated [800 850 900 950 1000 1050] and they respectively score [70 65 60 55 50 45]. This is a perfect linear fit that yields Y = -0.1*X + 150.
SSA = -0.1*1000 + 150 = 50
...

I think that's making it a little more complicated than it is.
In your scenario:
Average the player/propagator ratings = 925
Average the round scores = 57.5
925/57.5 = 16.09 points per throw.
1000-925 = 75
75/16.09 = 4.7 throws
1000 rated round is 57.5 - 4.7 = 52.8 (numbers are rounded)
A round of 50 would be 1000 + 2.8*16.09 = 1046 rated round
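As a quick check, here's that same arithmetic as a small Python sketch, using the six propagators from the quoted example (just my rough method above, not anything official):

```python
# Sketch of the simpler "points per throw" method described above
ratings = [800, 850, 900, 950, 1000, 1050]
scores  = [70, 65, 60, 55, 50, 45]

avg_rating = sum(ratings) / len(ratings)   # 925
avg_score  = sum(scores) / len(scores)     # 57.5
pts_per_throw = avg_rating / avg_score     # ~16.09 points per throw

# Estimate the score a 1000-rated round would require, then rate a round of 50
score_1000 = avg_score - (1000 - avg_rating) / pts_per_throw   # ~52.8
print(1000 + (score_1000 - 50) * pts_per_throw)                # ~1046
```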

I could be wrong.
 
...
Since I do not know the formula I cannot say, but I'd guess there is an asymptote out there somewhere. But that's purely a guess.

Asymptote would be nice. My money is on a peaked function - at some point round ratings would start going down with better play by one player and horribler play by all the others. But, it's way out there where it doesn't matter, couldn't happen, and would be thrown out if it did.
 
...You cannot create granularity where the game does not already have that granularity.

You can if you tap into other sources of information besides just scores. For example, the percentage of players who got each score. See post #157.

Another source would be information about where each throw landed and how it got there.

Maybe launch speed, angle, and spin could help tell us who is the better player with more precision.

Extra credit for playing well while in a romantic relationship?

Look at all the stuff they use to rate baseball players; score hardly matters there.
 
Maybe one. "Nine Holes In" at Kaposia Park. Very short with many little trees surrounding the target.

Everyone can reach it, no one can hit any gap with regularity.

Of the thousands and thousands of holes I've studied, it is the only one that shows no difference in scoring by rating.

I say "maybe" because I don't have data from the very best players.

Just to be clear, I didn't say 1000-rated and 800-rated players average the same score on a hole; I assumed a hole where 1000-rated players have a 20% probability of playing poorly and scoring only a par, and 800-rated players have a 20% probability of playing well and scoring a par. It didn't seem that outlandish an overlap; I would guess I've played a few such holes, and I don't have many courses played.
 
Step 2: Find the Rating Increment
This is the part I am not sure about, but I have seen it quoted.
If SSA >= 50.3289725: Rating increment = -0.225067*SSA+21.3858
If SSA < 50.3289725: Rating increment = -0.487095*SSA+34.5734
This only works if there are 18 holes in the round. When there are a different number of holes, you have to scale the SSA by 18/Number_Holes.

Where did you get this portion from? I've always wondered about this part but have never actually found any explicit method detailing it. I look forward to applying this to my data set.
 
Math was never my forte, so I only have a surface understanding of this system (like how propagators and scores work in general to produce a rating that's fairly commensurate with your skill), but I will say that I like it and understand why we have this system as opposed to the unfeasibly expensive course ratings that ball golf has.

I like saying I'm a ____-rated amateur Disc Golfer. Now that I'm back into playing a few tournaments it gives me a little motivation to concentrate on every stroke.

I also like how we're all interconnected mathematically. When I watch the best players in the world on Jomez they're participating in the same math formula that I am and that's cool.

One thing I don't understand and wouldn't mind one of you calculus types explaining is whether or not a 920 rating today is commensurate with a 920 rating 20 years ago. I suspect that it isn't, perhaps because competition is much more populated today?
 
One thing I don't understand and wouldn't mind one of you calculus types explaining is whether or not a 920 rating today is commensurate with a 920 rating 20 years ago. I suspect that it isn't, perhaps because competition is much more populated today?


I think the easiest way to test this would be to find a major tournament whose layout is relatively unchanged throughout the years. Looks like the ratings started in 1999. Maybe start from 2005 to give the ratings some time to reach a steady state. That would give you 15 or 16 years of ratings. Any major tournaments out there with roughly the same layout from 2005 to 2020?
 
I think the easiest way to test this would be to find a major tournament whose layout is relatively unchanged throughout the years. Looks like the ratings started in 1999. Maybe start from 2005 to give the ratings some time to reach a steady state. That would give you 15 or 16 years of ratings. Any major tournaments out there with roughly the same layout from 2005 to 2020?

Good idea. Does the fact that we had way fewer disc options back then factor into that too? Would Climo have been 1060 rated if he and everybody else had Destroyers in 1998? And if the Mach III's caught like today's new baskets? Would Ams rated 920 actually have been 950 in the same scenario, or would it all even out because the same people were playing anyway and they'd all decrease their scores proportionally together?

Masters Cup at DeLaveaga, perhaps, would be a good one?

I was about to say DGLO at Toboggan, but I seem to remember seeing the final 9 coverage from 2006 with Feldberg and Al Schack on the same card, still at Hudson Mills. I don't know exactly when DGLO moved exclusively to Toboggan. (A good friend of mine was with the two of them in like 1998 in Ludington when Schack was giving pointed, serious advice to Feldberg as an Am on the course...) Anthon and one other were on that card too. Maybe Matty-O? Anyway...they've been using Toboggan for the big Am tournament every year for a long time, so that would count in this scenario too, I guess.
 
If you are interested in the ratings system, 2 hours well spent...



Chuck is a member here and posts about how the ratings system works. I think it's a real honor to have people so fundamental to this sport participating here.
 
One thing I don't understand and wouldn't mind one of you calculus types explaining is whether or not a 920 rating today is commensurate with a 920 rating 20 years ago. I suspect that it isn't, perhaps because competition is much more populated today?

All sports evolve. You can't make a direct comparison between the two, but the ratings system represents the spectrum of people's ability to play DG with the tools at the time and on the courses at the time.

IMO a person capable of being a 1000 plus rated player in 1998 had the ability to be a 1000 rated player today.

The difference today is there are 10x (just a guess) as many people playing DG professionally, thus there are more people rated 1000 plus today than 20 years ago. But it's still a tiny fraction of people that achieve that status.
 
Just to be clear, I didn't say 1000-rated and 800-rated players average the same score on a hole; I assumed a hole where 1000-rated players have a 20% probability of playing poorly and scoring only a par, and 800-rated players have a 20% probability of playing well and scoring a par. It didn't seem that outlandish an overlap; I would guess I've played a few such holes, and I don't have many courses played.

It's not outlandish.

There really aren't a lot of holes that both 1000-rated and 800-rated players play in competition. (Most of those are in European events.) Out of 5,862 holes, only 53 had data for both 1000-rated and 800-rated players. That's less than 1%.

Across all those 53 holes where 1000-rated and 800-rated players both played, a 1000-rated player would get the same score (whether par or not) as an 800-rated player about 25% of the time. For one particular hole, it was 60%. At the other end was one hole where it was less than 7%.
 
One thing I don't understand and wouldn't mind one of you calculus types explaining is whether or not a 920 rating today is commensurate with a 920 rating 20 years ago.

Discs fly further now, and competition is not equal (you could be playing against stronger or weaker fields). Ratings for top players have no ceiling. But...you're talking 20 points over a rec rating, so I would guess it's close enough. The bigger question might be what it took to be rated 1038 in 2000.

Not the same tournament, obviously, but interesting data nonetheless.

(2000) 56 Players, 5 rated 1000+. Ken Climo avg 1047 per round. 3 of those rounds avg 1058.
https://www.pdga.com/tour/event/2043

(2020) 110 Players, 42 players rated 1000+. Paul avg 1068 per round.
https://www.pdga.com/tour/event/44653

Double the total players, but many more players over 1000 rated. Obviously that affects the "ratings in / ratings out" math. Wish it was easy to pull data from the PDGA archives. Would be fun to build graphs on the data.
 
Something that tends to be overlooked in these discussions is the impact the design/terrain/rules have on the resulting round ratings. Many here have heard or understand the overall concept of Propagator Ratings IN ~ Propagator Round Ratings OUT. But that's only the starting and ending points. The full process is a dual "funnel":

Propagator Ratings IN & Course Factors/Design/Rules > Scores (&Stats) > SSA < Propagator Round Ratings OUT

Several sports compete on a course or playfield with fixed dimensions and theoretically similar surfaces where the course factors and their version of SSA are ignored or the differences are not considered big enough to be measured and accounted for in their stats tracking. Ideally, for our DG system to work well, the course factors, designs and rules should operate within as consistent a range as possible. When that's the case, the scores produced on a course over time should correlate well with the rating of the player shooting that score on average or at least within statistical expectations.

We have developed a pretty solid baseline for the predictable variances in a propagator's performance. For example, the odds of a field of only 5 propagators all shooting more than 3 strokes worse or better than their rating in a single round are around 1 in 7,800 (1/6 to the 5th power). So when you see odd round rating results, especially in fields of 20 or more propagators, something about the course factors, design or rules was different enough from our baseline course layouts to produce it. In other words, a group of props are relatively stable thermometers, gauges, measuring sticks; since the same ratings calculation math has been applied consistently for 20+ years, their scoring distribution can indicate there's something different about a course where the round ratings seem odd.

For those who think the wrong algorithms have been applied for calculations, we have tested alternatives because in the beginning we didn't know if what we tried would work. And once we got even more data, we've constantly checked. It's clear that overall, it becomes incrementally more difficult to shoot another stroke lower. However, it's the same relative to a player's rating as long as the course is challenging enough to allow high rated players room to approach the "perfect" 36.

One validation test is whether a player can average their rating over the range of course lengths and difficulties present in our game. We've checked a few times over the years and regardless of the player's rating above 700 and courses played from 42 to 70 SSA, they can average their rating on any course, the caveat being that the course factors, design and rules fall in our established range.

The new frontier in terms of ratings improvements will be to discover what factors, designs or rules cause a course's round ratings to deviate too far from the norm. For example, we're pretty sure that excessive OB courses are different enough from our historical norms that they are likely the prime contributor to high end ratings inflation due to the extra stroke and sometimes lost distance "padding" that artificially boosts the SSA on those courses. The question is whether to develop an appropriate way to counteract that effect in the ratings calculations or (preferably) break out those ratings separately from conventional courses. Either choice will be an improvement among other tweaks not discussed here.
 
...
Something that tends to be overlooked in these discussions is the impact the design/terrain/rules have on the resulting round ratings. Many here have heard or understand the overall concept of Propagator Ratings IN ~ Propagator Round Ratings OUT.
...


Are you saying that the ratings evaluation has actual numerical factors for courses that are incorporated into the calculation? Or that the ratings results reflect the course design inherently?
 
...
Something that tends to be overlooked in these discussions is the impact the design/terrain/rules have on the resulting round ratings. Many here have heard or understand the overall concept of Propagator Ratings IN ~ Propagator Round Ratings OUT.
...

Chuck, you know the most about this, but it seems to me like it's hard to get good ratings if higher rated propagators aren't present.

I threw a round that was rated 945 where several 950+ rated players were present, and then shot the same score on the same course in almost identical conditions, but the highest rated PDGA players present were in the 930s, and even though I got 3rd it was only a 915-rated round.

Do they really weight that heavily based on who is playing?
 