Hey hey, ho ho, round ratings have got to GO!

Maybe. But picture a basket set in a bowl behind a dense stand of closely-spaced trees. 1000-rated players have an easy throw over the top, and the bowl directs most errant shots to the basket. I could see 80% birdies.

800-rated players lack the power to go over the top, so they play poke and hope. If the trees are dense enough, I could see 20% pars and the rest worse.

Does such a hole currently exist? I dunno. Could such a hole exist? Yes. Would the scores be exactly 20% pars? Um, maybe. But the exact percentage is not the point.

If said SINGLE HOLE existed (as Steve W says it might), that's all well and good. But I said a COURSE. No such course exists. Not one on which this ratings system has validity and internal reliability. And if you were to construct one, then you have, once again, rendered the experiment invalid -- so your example is meaningless when it comes to drawing valid and reliable conclusions about the ratings system.

As Inigo Montoya wisely said, "You keep using that word. I do not think it means what you think it means." ;)

If you substituted "prohibitively unlikely" or even "completely unrealistic" for "impossible", I would probably agree with your point unreservedly.

Apologies for focusing on the word "impossible", but much real-world anguish has been caused by the difference between impossible and highly improbable. For an example, see the demise of the Long Term Capital Management hedge fund in the 1998 financial crisis.

Fortunately, possible flaws in disc golf player rating systems are unlikely to disrupt worldwide financial markets. :D

Dude. I have a PhD from one of the top 25 colleges of education in this country. I know what the words mean. Quit questioning what you think I mean, or whether I know what it means, and have a discussion with what I mean. Which, ironically, is what I am saying (imagine that, eh?). I even gave an example outside of disc golf. While a DNA paternity test may only calculate the probability at 99.999% that you are the father (the calculation itself being a mathematical model), the REALITY is that there exists no other possibility. It's certain. You the daddy.

I don't mean "prohibitively unlikely" or even "completely unrealistic." In this context (see post #161, purple) it is not possible. And I work now in finance. So don't give me a "certainty" about any hedge fund. You are comparing apples and oranges talking about theoretical mathematical constructs versus the reality of financial markets. No comparability in that analogy at all.
 
Chuck, you know the most about this, but it seems to me like it's hard to get good ratings if higher rated propagators aren't present.

I threw a round that was rated 945 where several 950+ rated players were present, and then shot the same score on the same course with almost identical conditions, but the highest-rated current PDGA players were in the 930s, and even though I got 3rd it was only a 915 rating.

Do they really weight that heavily based on who is playing?
Higher rated propagators are more likely to produce a lower SSA (and thus lower round ratings) than lower rated propagators, but it has more to do with the course than the player ratings. When higher rated players come to town, the courses are tricked out, typically with more OB and some longer pin placements. The lower rated players, on average, typically play worse than their ratings in these conditions. Remember that the course rating is directly based on the number of throws the propagators make. Who is more likely to take proportionately more of the penalties, the elite players or the donators, especially regional pros rated under 1000 who are traveling to the course versus playing their home turf?

You don't have to take my word for it, because you can watch it happen at any elite tee time event starting in round 2. Jot down the ratings for a few scores posted by the early groups that come in. As the day goes on, the ratings typically go down as the better players finish. Our MP60 group was able to watch this progression in reverse this past weekend. The average rating of our MP60 group was higher than the other divisions playing our course at Highbridge. We were the first group done, and my 60 was temporarily rated 913. I told the guys to monitor how much higher their prelim ratings would go. As the lower rated MA60, MA50 and MA40 cards came in, my 60 moved to 918, 921, 926 and ended at 928 unofficial.

If you don't see this happen, it's more likely due to course-related factors like weather and/or time of day than to the ratings calculations, which are a fixed process.
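
To make that mechanism concrete, here is a rough Python sketch. This is not the official PDGA formula -- it just assumes roughly 10 rating points per throw near an SSA of 50 and estimates SSA from each propagator's rating and score, and the propagator ratings and scores below are made-up illustrative numbers -- but it shows why the same 60 drifts upward as lower-rated cards post scores a bit worse than their ratings predict:

```python
# Toy model only -- NOT the official PDGA formula. It assumes a flat
# ~10 rating points per throw (roughly true near an SSA of 50) and skips
# the outlier filtering and SSA-dependent scaling the real system uses.

POINTS_PER_THROW = 10.0  # assumed conversion between throws and rating points

def estimate_ssa(propagators):
    """Estimate SSA (the score a 1000-rated player would shoot) from
    (player_rating, score) pairs for the propagators in the field."""
    estimates = [score - (1000 - rating) / POINTS_PER_THROW
                 for rating, score in propagators]
    return sum(estimates) / len(estimates)

def round_rating(score, ssa):
    """Rate a round relative to the estimated SSA."""
    return 1000 - (score - ssa) * POINTS_PER_THROW

# Early in the day: only a higher-rated card has posted (illustrative numbers).
early_field = [(950, 56), (935, 58), (920, 60), (905, 60)]
print(round_rating(60, estimate_ssa(early_field)))   # ~912.5

# Later: lower-rated cards come in, shooting a bit worse than their ratings
# predict (tricked-out pins, OB, travel), which pulls the SSA estimate up.
late_field = early_field + [(880, 66), (860, 68), (840, 71), (820, 72)]
print(round_rating(60, estimate_ssa(late_field)))    # ~927.5 -- same 60, higher rating
```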
 
Are you saying that the ratings evaluation has actual numerical factors for courses that are incorporated into the calculation? Or that the ratings results reflect the course design inherently?
There are no course factors entered into the calculations. So any oddities, such as a smaller-than-normal correlation between prop ratings and their round ratings (lots of trees, too-narrow fairways), a wider-than-"normal" scoring spread (lots of OB), or even zero correlation with ratings like the hole that Steve West mentioned, will be course related.
 
While a DNA paternity test may only calculate the probability at 99.999% that you are the father (the calculation itself being a mathematical model), the REALITY is that there exists no other possibility. It's certain. You the daddy.

Unless you have an identical twin. If you do, the probability is cut in half. Which again illustrates the difference between "impossible" and "highly unlikely".

And I work now in finance. So don't give me a "certainty" about any hedge fund.

Are you familiar with Long Term Capital Management? They were run by finance PhDs and two Nobel prize winners. And yet they managed to lose $4.4 billion out of $4.7 billion in assets in a year by being a little too sure they were right.

Look, I respect your knowledge of disc golf and statistics, and life is too short to spend it arguing on the internet. As I stated previously, I agree with your underlying point about the rating system.

So I respectfully suggest using a different word than "impossible" for that which is supremely unlikely to ever happen. But you are a grownup and may certainly do as you wish.
 
Finally coming to the dark side I see.... ;)

The only side I'm ever trying to be on is above the cash line, but it would be nice to have blazing rounds recognized as such even if the field is not rated super high.
 
What about course conditions and weather conditions?
Nope. They have never been involved in the calcs. All of those factors are inherently incorporated in the scores the propagators shoot. That's the key to making the system work without needing to try to figure out how much each of the thousands of tiny factors impacts the course rating during that time period.
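
A tiny toy example of that point, using the same simplifying assumptions as the sketch above (about 10 rating points per throw, SSA estimated from the propagators' own scores; again, not the official formula and made-up numbers): the weather never appears as an input, yet the same raw 55 rates higher on the day the propagators struggled, because their scores carry the conditions into the SSA.

```python
# Toy illustration (not the official formula): the weather is never an input,
# yet a calm-day 55 and a windy-day 55 rate differently, because the SSA is
# estimated from what the propagators actually shot that round.
def rating(score, propagators, ppt=10.0):  # ppt = assumed rating points per throw
    ssa = sum(s - (1000 - r) / ppt for r, s in propagators) / len(propagators)
    return 1000 - (score - ssa) * ppt

calm_props  = [(1000, 50), (970, 53), (940, 56), (910, 59)]  # everyone near their rating
windy_props = [(1000, 54), (970, 57), (940, 61), (910, 64)]  # everyone several throws worse

print(rating(55, calm_props))    # 950.0 -- a 55 on the calm day
print(rating(55, windy_props))   # 995.0 -- the same 55 rates higher in the wind
```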
 
The only side I'm ever trying to be on is above the cash line, but it would be nice to have blazing rounds recognized as such even if the field is not rated super high.
Take a look at Darrell Nodland's stats on PDGA. He's maintained an elite rating for almost 20 years playing North Dakota events where his rating is usually 30 to 50 points higher than the average rating of propagators that affect his round rating.
 
Higher rated propagators are more likely to produce a lower SSA (and thus lower round ratings) than lower rated propagators, but it has more to do with the course than the player ratings. When higher rated players come to town, the courses are tricked out, typically with more OB and some longer pin placements. The lower rated players, on average, typically play worse than their ratings in these conditions. Remember that the course rating is directly based on the number of throws the propagators make. Who is more likely to take proportionately more of the penalties, the elite players or the donators, especially regional pros rated under 1000 who are traveling to the course versus playing their home turf?

You don't have to take my word for it, because you can watch it happen at any elite tee time event starting in round 2. Jot down the ratings for a few scores posted by the early groups that come in. As the day goes on, the ratings typically go down as the better players finish. Our MP60 group was able to watch this progression in reverse this past weekend. The average rating of our MP60 group was higher than the other divisions playing our course at Highbridge. We were the first group done, and my 60 was temporarily rated 913. I told the guys to monitor how much higher their prelim ratings would go. As the lower rated MA60, MA50 and MA40 cards came in, my 60 moved to 918, 921, 926 and ended at 928 unofficial.

If you don't see this happen, it's more likely due to course-related factors like weather and/or time of day than to the ratings calculations, which are a fixed process.

I guess that makes sense, as in the lower-rated round I was two strokes off the lead, and in the higher-rated round I was four strokes off and the guy who won was rated 970-something.

The layout didn't have any differences other than one OB line being gone, and it was the same course, so that's why I found a swing that big to be odd.
 
Nope. They have never been involved in the calcs. All of those factors are inherently incorporated in the scores the propagators shoot. That's the key to making the system work without needing to try to figure out how much each of the thousands of tiny factors impacts the course rating during that time period.

Mind blown... over the years I've so often been told that course conditions and weather play a role in the ratings calc. Better wording would be that they play a role in how well the players play the course. But the course being harder or easier based on conditions (slippery tees, water, etc.) doesn't play a factor at all. Interesting, thx for the clarification.
 
Something else to consider is that the location you land in for every throw is different in each round on the same course, other than the 18 throws that hole out. So no one actually plays the same throw sequence on the same course twice in a tournament day. That's separate from any weather-related differences and changes you may make to your disc and shot selection in the second round. Interestingly, we've found more second rounds with a higher SSA than the first round, even though you would think players learned something that would have helped them play better in the second round.

Think about the area where each player is likely to land based on their average distance and backhand/forehand spin. If there are any differences in ground surface, OB or vertical obstacles in each person's likely landing area, they will regularly face different challenges within that area when they have a good throw, let alone a poor throw. Two feet left and you're up against or behind a tree and have to flick instead of backhand. Players who can't make it, or decide not to throw across water, lay up. That tree at 350 feet only affects those who can throw past it from the tee. If you only throw 300 max, you deal with it from a much closer distance and angle, hopefully, on your next throw.

Everyone is playing a different course on the same layout, with the exception of the basket positions. Even then, are they coming at it into the wind or uphill some rounds? The point is that the system relies on gathering as many numbers as possible to average out these random happenings, which can only be measured in toto by accumulating round scores, rolling all of those random effects into a single number per player.
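
As a purely illustrative simulation of that averaging (the per-hole "luck" level and the all-1000-rated field below are made-up simplifications, not real PDGA data), here is a quick sketch showing how the spread of the field-wide SSA estimate shrinks as more propagators' rounds are accumulated:

```python
# A quick simulation of the "everyone plays a different course" point:
# each hole score gets some random luck (kicks, lies, gusts), but those
# effects wash out when round scores from many propagators are averaged.
# Purely illustrative numbers -- not calibrated to real PDGA data.
import random
import statistics

def simulate_round(expected_per_hole=3.0, holes=18, luck_sd=0.5):
    """One player's round total: an expected score per hole plus random luck."""
    return sum(expected_per_hole + random.gauss(0, luck_sd) for _ in range(holes))

def ssa_estimate(n_propagators):
    """Average round score over the field (all players treated as 1000-rated
    here for simplicity, so each round total is itself an SSA estimate)."""
    return statistics.mean(simulate_round() for _ in range(n_propagators))

for n in (5, 20, 100):
    spread = statistics.stdev(ssa_estimate(n) for _ in range(2000))
    print(f"{n:3d} propagators -> SSA estimate wobbles by about +/-{spread:.2f} throws")
```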
 
Mind blown... over the years I've so often been told that course conditions and weather play a role in the ratings calc. Better wording would be that they play a role in how well the players play the course. But the course being harder or easier based on conditions (slippery tees, water, etc.) doesn't play a factor at all. Interesting, thx for the clarification.
All of those items play a factor. We just don't know how much each one plays a factor in each propagator's score. Imagine trying to figure out how much each one made a difference?
 
Unless you have an identical twin. If you do, the probability is cut in half. Which again illustrates the difference between "impossible" and "highly unlikely".

Again, Mr. Mono, I understand the difference between impossible and highly unlikely. I am not an idiot, sir, so please stop the patronizing. By the way, DNA science can definitively determine paternity, even in the case of identical twins. One twin is excluded from paternity (0 chance out of 100, aka 0%) and one twin is the daddy (mathematically calculated to 99.999% because there is no reciprocal for 0/100).

https://www.eurofins.com/scientific-impact/scientific-innovation/busting-the-identical-twin-myth/

Are you familiar with Long Term Capital Management? They were run by finance PhDs and two Nobel prize winners. And yet they managed to lose $4.4 billion out of $4.7 billion in assets in a year by being a little too sure they were right.

Look, I respect your knowledge of disc golf and statistics, and life is too short to spend it arguing on the internet. As I stated previously, I agree with your underlying point about the rating system.

So I respectfully suggest using a different word than "impossible" for that which is supremely unlikely to ever happen. But you are a grownup and may certainly do as you wish.

Mono, I am going to respectfully ask you to understand that, as things currently are situated and until I have some REALITY that shows otherwise, I mean impossible. That's what I mean. As in this Webster's Dictionary meaning of the word, identified here as 1a: https://www.merriam-webster.com/dictionary/impossible.

I am not, as you continue to disrespectfully suggest, utilizing the word "impossible" as a substitute for "improbable," "supremely unlikely," "highly unlikely", "completely unrealistic," "prohibitively unlikely," or any other iteration that you may *think* I mean -- either intentionally or unintentionally -- not in this context. If I am proven wrong someday, then I'll admit I am wrong. I won't cower down to "no, I didn't mean 'truly impossible'" just to save face, because that is what I DO mean -- not possible in the real world. So if I were to substitute any of those words you propose I use, then I'd be lying about the conclusion that my opinion, evidence, and training have brought me to. And I prefer not to lie about issues like this.

As far as the Long Term Capital Management or whatever it was, there is NO comparison herein. As I said earlier, to my knowledge no such sure thing investment exists. None. If people believed what those PhDs told them, then someone got sold a bill of goods. And if those PhDs really honestly believed the investments to be 100% certain, then what purpose would there even be in telling others to invest therein? They'd have only used their own money. Duh! Why would they need investors for a 100% certain investment???
 
Not a personal attack at all.

I feel very strongly that a very small green fee (I pay $110 annually, total, for 5 courses) should be considered by many more parks departments. Some of the appeal of disc golf is the 'free'-to-play aspect - but I think the sport also needs additional, local investment beyond what taxes pay for. This would also help the volunteer pools, which always seem to be the same few people doing extra course maintenance - the parks department would have a significantly larger budget to allocate for work and improvement using the model we have here. All of the money that is raised from daily/annual fees is put back into the courses. The improvements I've seen locally in the 4 years I've been here have been dramatic and very much appreciated.

You're basically describing the fishing license model, right?
 
Again, Mr. Mono, I understand the difference between impossible and highly unlikely. I am not an idiot, sir, so please stop the patronizing.

There was no intent whatsoever on my part to be patronizing. If anything I wrote came across that way, I apologize sincerely.

And please keep in mind that I have stated - TWICE - that I have no disagreement with the substance of what you have written about the ratings system.

By the way, DNA science can definitively determine paternity, even in the case of identical twins. One twin is excluded from paternity (0 chance out of 100, aka 0%) and one twin is the daddy (mathematically calculated to 99.999% because there is no reciprocal for 0/100).

https://www.eurofins.com/scientific-impact/scientific-innovation/busting-the-identical-twin-myth/

That is an interesting article, thank you for posting. I learned something, and I always appreciate that.

If I read the article correctly, it sounds like they are sequencing the whole genome, or close to it, to detect subtle genetic differences between identical twins. Standard paternity tests examine a much more limited number of markers and would be unlikely to distinguish between identical twins.

Mono, I am going to respectfully ask you to understand that, as things currently are situated and until I have some REALITY that shows otherwise, I mean impossible. That's what I mean. As in this Webster's Dictionary meaning of the word, identified here as 1a: https://www.merriam-webster.com/dictionary/impossible.

The first definition of "impossible", per your link, is "incapable of being or of occurring". I will go back to the probability that the air in a room suddenly rushes to the corners, leaving most of the room in a vacuum. This has probably never happened. It will probably never happen while humans exist, and might never happen while our universe exists. But it could happen tomorrow, and the probability can be calculated.

Would you consider such an event "incapable of being or of occurring"? That could be the essence of our communication disconnect.

I am not, as you continue to disrespectfully suggest, utilizing the word "impossible" as a substitute for "improbable," "supremely unlikely," "highly unlikely", "completely unrealistic," "prohibitively unlikely," or any other iteration that you may *think* I mean -- either intentionally or unintentionally -- not in this context.

I am genuinely baffled why you interpret what I have written as disrespectful. That was not my purpose or intent. I respect your knowledge of disc golf and statistics, as I specifically stated previously. But we clearly differ on what "impossible" means. Which is pretty far from the substance of this thread.

As far as the Long Term Capital Management or whatever it was, there is NO comparison herein. . . .

I completely agree that no "sure thing investment" exists. Given your interest in finance and statistics, you might find the Long-Term Capital Management saga rather interesting. Or perhaps not. In any case, I have not read this book but it is supposed to be quite good.

At Long-Term, Meriwether & Co. truly believed that their finely tuned computer models had tamed the genie of risk, and would allow them to bet on the future with near mathematical certainty. And thanks to their cast--which included a pair of future Nobel Prize winners--investors believed them.

When Genius Failed: The Rise and Fall of Long-Term Capital Management

The Nobel Prize winners were Merton and Scholes, who received the Nobel Prize in Economics for developing the Black-Scholes-Merton model for valuing derivatives.
 
Is it possible for a group of 4 players (850,900,950,1000) to play an 18 hole round and the 850 player has the best score and 1000 rated player has the worst score?

Of course it is. It's sports. If the trend continues in rated rounds, then the ratings will change over time. To suggest otherwise would imply that it's not a competitive event, but a series of fixed outcomes.
 
Is it possible for a group of 4 players (850,900,950,1000) to play an 18 hole round and the 850 player has the best score and 1000 rated player has the worst score?

Of course it is. It's sports. If the trend continues in rated rounds, then the ratings will change over time. To suggest otherwise would imply that it's not a competitive event, but a series of fixed outcomes.
On a competent course with an SSA around 50, the odds are about 1 in 6 that the 1000 rated player will shoot worse than 970. The odds are the same for the 850 shooting better than 880. That's a 1 in 36 chance those two players will shoot within 90 points of each other. It's around 1 in 400 rounds that the 1000 rated player shoots below 940 and the 850 shoots above 910. It's about 1 in 1000 rounds that the 1000 rated player shoots below 920 and the 850 player shoots above 930 to win that round. So it's possible, and it has probably happened a few times over 20 years. However, that anomaly would usually be buried in the stats, where the rest of the props hovered around their normal scoring range, and few ever saw the anomaly.
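
For anyone who wants to sanity-check odds like these, here is a rough sketch assuming each player's round ratings are roughly normal around their rating with a standard deviation near 30 points (both the distribution and the SD are my assumptions, not official figures). It lands in the same ballpark as the 1-in-6 and 1-in-36 numbers above; the rarer scenarios are much more sensitive to the assumed spread, which is typically wider for lower-rated players.

```python
# Rough sanity check of odds like the ones above, assuming each player's round
# ratings are roughly normal around their rating with a standard deviation of
# about 30 points. Both the distribution and the SD are assumptions.
from math import erf, sqrt

SD = 30.0  # assumed spread of a single player's round ratings, in rating points

def prob_below(threshold, player_rating, sd=SD):
    """P(a round rates below `threshold`) under the normal assumption."""
    z = (threshold - player_rating) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF

p_1000_below_970 = prob_below(970, 1000)      # ~0.16, about 1 in 6
p_850_above_880 = 1 - prob_below(880, 850)    # ~0.16, about 1 in 6
print(p_1000_below_970, p_850_above_880)
print("both in one round:", p_1000_below_970 * p_850_above_880)  # roughly 1 in 40

# The rarer scenarios above are very sensitive to the assumed SD; lower-rated
# players typically have wider spreads, which makes those long shots more likely.
```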
 
On a competent course with an SSA around 50, the odds are about 1 in 6 that the 1000 rated player will shoot worse than 970. The odds are the same for the 850 shooting better than 880. That's a 1 in 36 chance those two players will shoot within 90 points of each other. It's around 1 in 400 rounds that the 1000 rated player shoots below 940 and the 850 shoots above 910. It's about 1 in 1000 rounds that the 1000 rated player shoots below 920 and the 850 player shoots above 930 to win that round. So it's possible, and it has probably happened a few times over 20 years. However, that anomaly would usually be buried in the stats, where the rest of the props hovered around their normal scoring range, and few ever saw the anomaly.

Thanks for the input. I buy into the idea that the proof that the ratings system works is in the historical results.

It is the KISS principle in action, along with the corollary: keep it as simple as possible, but no simpler.
 
On a competent course with an SSA around 50, the odds are about 1 in 6 that the 1000 rated player will shoot worse than 970. The odds are the same for the 850 shooting better than 880. That's a 1 in 36 chance those two players will shoot within 90 points of each other. It's around 1 in 400 rounds that the 1000 rated player shoots below 940 and the 850 shoots above 910. It's about 1 in 1000 rounds that the 1000 rated player shoots below 920 and the 850 player shoots above 930 to win that round. So it's possible, and it has probably happened a few times over 20 years. However, that anomaly would usually be buried in the stats, where the rest of the props hovered around their normal scoring range, and few ever saw the anomaly.

As I understand it, abnormally bad rounds are dropped from ratings calculations, whereas abnormally good rounds are included. Do you think some ratings inflation is due to this practice?
 
