
DGPT: The Memorial Championship presented by Discraft 27-Feb to 01-Mar-2020

Also, didn't he have a signature disc? The phantom menace or what was it? (Obvious intentional mistake) Did he ever throw that? Must be a great disc. :/


I think that's just a Legacy Rival with a special stamp.

Might be a slightly different plastic blend too.
 
That was my whole point, Chuck. Solely basing the system on how the players play compared to each other, without at least part of that equation being a course difficulty rating, is flawed... and yes, I realize how hard it would be to assign a course rating for every course in the world where people play tourneys. It's not your fault. You did the best you could with what you had.
Again, your lack of understanding is showing. How do you measure a course rating unless it's done by expert measurers? What people don't seem to understand is that the rating process has two independent steps. Expert measurers each determine how the course played that round, the same as if people with thermometers walked around and took temperature readings at the tee and basket of each hole. The average from our expert measurers is little different from the average temperature of the course from all of those readings.

Just like thermometers, we've shown that enough expert measurers with the same mix of ratings, regardless of where they come from, will produce the same course rating for that round as any other mix of expert measurers with the same average rating. We had to show this was true early on when developing this rating process. It turned out that it works within reasonable statistical limits, even with a small number of expert measurers.
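The "thermometer" claim above can be sketched in a few lines. This is only an illustration, not the actual PDGA formula: the function, the 1000-point anchor, and the 10-points-per-throw scale are all assumptions chosen to show that the estimate depends only on the group's average rating and average score, not on which particular measurers show up.

```python
# Sketch: two different mixes of "expert measurers" (propagators) with the
# same average rating and average score produce the same course-rating
# estimate for a round. Hypothetical formula, not the PDGA's.

def course_ssa_estimate(ratings, scores, points_per_throw=10.0):
    """Each propagator 'measures' the course as their own score adjusted
    by how far their rating sits from a 1000-rated player's."""
    readings = [s - (r - 1000) / points_per_throw
                for r, s in zip(ratings, scores)]
    return sum(readings) / len(readings)

# Two groups with the same average rating (960) and average score (54):
print(course_ssa_estimate([1000, 920], [50, 58]))  # 58.0
print(course_ssa_estimate([980, 940], [52, 56]))   # 58.0
```

Because the estimate is just an average of per-player readings, any mix with the same averages lands on the same number, which is the point being made about interchangeable measurers.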

The course rating measurement is essentially an independent process from calculating the round ratings for all of the players who played the round, not just the propagators. We know that when the course has a rating of X, a score of Y will always get a rating of Z. That's automatic. You don't even need to know who played. From the player's standpoint, it's as if they played any course with that SSA course rating that round. Shoot Y, get Z.
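The "shoot Y, get Z" mechanics can be shown with a toy mapping. The actual PDGA conversion constants aren't given in this thread, so the linear form and the 10-points-per-throw scale below are assumptions purely for illustration:

```python
# Illustrative only: given a round's course rating (SSA), every score maps
# to a round rating mechanically, with no need to know who shot it.

def round_rating(score, ssa, points_per_throw=10.0):
    """Shoot Y (score) on a course rated X (ssa), get Z, player-independent."""
    return 1000 - (score - ssa) * points_per_throw

print(round_rating(50, 55))  # 1050.0 -- five throws under the SSA
print(round_rating(58, 55))  # 970.0  -- three throws over the SSA
```

Whatever the real constants are, the structure is the same: the rating is a function of score and that round's course rating alone.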

When you consider how a fixed course rating would be determined, you would need dozens of rounds on a course by players with known skill levels, likely played under different conditions, or the resulting average wouldn't mean much. Here's the thing: if the data gathered for a single round is good enough to feed the overall course rating average, why isn't it good enough to produce the specific course rating for that round? We've tested it over tens of thousands of rounds, and it has been shown to be good enough. The PDGA rating system essentially uses the specific data for the course conditions of each round to generate round ratings better matched to the conditions than would be produced by using some fixed average built from a bunch of other conditions.
 
It's not a lack of understanding; it's a different view of what should be rewarded. I keep making the mistake of saying you get rewarded for playing shorter courses at the top, when it's actually just an appearance of that because of the compression. But you SHOULD be rewarded for shooting low numbers on longer, harder courses, whether that's Idlewild, Iron Hill, Milo, or wherever, and yes, I agree that what players are shooting on a given day should have a role in the system. But let's look at a course like Toboggan. Look at what Paul got for his -18 45 round at Toboggan (1108), and then what he got at Fountain for his -17 39 (1132)... He was also 7 shots clear of the field that round at Toboggan and 4 clear at Fountain. Those numbers alone give the appearance of Toboggan being harder and of how much Paul was on another level that day...

Does anybody really think it's harder to shoot a 39 at Fountain than it is to shoot a 45 at Toboggan? Because that's what those rating numbers make it look like. At best, I'd say it's a wash, but I would tend to give Toboggan the edge, unless the conditions at Fountain were just insane that day, which I don't recall them being. It sure as heck shouldn't be a 24-point difference.
 
Read these articles which go in depth on ways to compare exceptional performances.
Fair Ways for Statistics - Part 1
Fair Ways for Statistics - Part 2
Fair Ways for Statistics - Part 3

Ultimately, the tours may not have recognized how the array of stats look among significantly different lengths and types of courses, or they don't think they can fix it, or don't want to fix it. If they think it's hurting their tour or certain tour stops, or it could look better, they either need to establish a tight SSA range for any course that gets accepted for their tour or require that everyone in contention for the tour play a similar mix among a wider array of course types to earn tour points. They now have the stats to do either of these things if they wish. I guess we'll see down the road if any feel it's necessary to do it.
 
I read those when you first wrote them. In fact, I think I posted them here.

I think it would maybe be something the touring players themselves would need to spearhead, since it's their livelihood that's based on the ratings. I doubt that will happen anytime soon. For the most part, the players I've met seem to be very down-to-earth people who are happy they can make a living at DG (of course there are some more business/long-term-minded people), but we've already seen them take one stand this season with the Weema Rule. I don't recall if it was first brought up by the players or the PDGA, but I do remember someone who was there saying a lot of the players got behind it. If, however, we start getting more and more players coming into DG from other sports, that "laid back" viewpoint could change pretty quickly.

I do understand why you made the ratings to begin with. It was for players like myself at the lower and middle levels wanting a fairer playing field, and I do appreciate what it must've taken to come up with the system. I just feel like the touring-level pros maybe need something different... especially if the sport keeps growing like it has recently. The more money on the line, the more people I can see voicing opinions similar to mine...

But I guess what it comes down to is that I need to just get over it for now. ;)
 
He is still getting the molds from his sponsors worked out. His bag will change significantly once they are ready. He also says that his deal with Infinite enables him to throw any disc they sell, not just their own molds.

He talks about it all here:
 
Some stats on Anhyzer TV about the Memorial:


I have more if you want them. Here is the one asked for by Big Jerm.



Bounceback stats are in the far-right columns; a bounce back is a birdie or better immediately following a bogey or worse.

Birdie streak % is the rate of streaks of three or more consecutive birdies per round.

A Bonus Birdie is a birdie earned when 25% or less of the field birdied the same hole in the same round.

A Must-Get birdie drop is a par or worse when 75% or more of the field birdied the same hole in the same round.
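The per-hole definitions above translate directly into code. This is a hypothetical sketch (the function names and the score encoding are mine, not from the post): hole scores are relative to par, so -1 is a birdie, 0 a par, +1 a bogey.

```python
# Hypothetical helpers matching the stat definitions above.
# Hole scores relative to par: -1 = birdie, 0 = par, 1 = bogey, etc.

def bounce_backs(round_scores):
    """Count birdies-or-better immediately following a bogey-or-worse."""
    return sum(1 for prev, cur in zip(round_scores, round_scores[1:])
               if prev >= 1 and cur <= -1)

def birdie_streaks(round_scores, length=3):
    """Count runs of `length` or more consecutive birdies-or-better."""
    streaks = run = 0
    for s in list(round_scores) + [0]:   # par sentinel flushes the last run
        if s <= -1:
            run += 1
        else:
            if run >= length:
                streaks += 1
            run = 0
    return streaks

scores = [-1, -1, -1, 1, -1, 0, 2, -1]
print(bounce_backs(scores))    # 2 (holes 5 and 8 follow a bogey or worse)
print(birdie_streaks(scores))  # 1 (the opening three-birdie run)
```

Bonus Birdies and Must-Get drops would additionally need the whole field's per-hole results to compute the 25%/75% thresholds, which is why those columns depend on event-wide scoring data like UDisc's.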

I will try to give derived stats on something for every tournament that at least UDisc covers.
 
Your support of tough wooded courses has affected your understanding. There are no extra ratings rewards for the field regardless of the course played. It's not possible. If the propagators average a 990 rating at a course, their round-rating average will be the same at Fountain or Idlewild, probably a few points above 990. That's the way the math works. However, Fountain will produce higher highs and lower lows at the ratings extremes than Idlewild, because the more shots thrown, the harder it is to maintain exceptional shooting. That narrows the range of ratings on longer courses but does not decrease the average generated.

The PGA Tour has known this for 100 years. They don't include par-3 or executive par-64 courses on their tour. Their courses are essentially like our Idlewild. But that restriction to par 70-72 courses doesn't work yet for our disc golf tours and may never be the case. The rating system is not the culprit; it's just the messenger indicating the differences.


It's not a secret that this happens. And within this scope of explanation, it makes sense.

However, in golf, handicaps (ratings) are based on the index (basically, the SSA) of a course. In disc golf, they're based on everyone's ratings competing at the same time.

So the fact that someone may have played nothing but Idlewild or Fountain affects their individual rating, but then we average all those rounds together, despite their not being comparable, and then use that average to rate everyone else's rounds.

So there's that.
 
I agree that a 950 propagator who exclusively plays their rounds on Fountain-type courses will likely average a lower rating when playing Idlewild/Iron Hill-type courses. But there are typically over 100 props at tour events whose ratings have been produced from playing a mixed bag of courses, so the impact of the odd tour propagator with ratings based on a specific course type, if any, would be minimal.
 

I understand that it won't happen. The issue is that the system is set up with this flaw and that it could happen.

I mean I'm playing an event Saturday with 12 people....
 
You'll need to go back to your statistics class and check your math. What are the odds that 5 propagators all have poor days and shoot more than 3 shots, roughly 27 rating points, worse than some hypothetical true value? We know from actual data that propagators shoot 3 or more shots worse than their rating about 1 round in 6. The odds that all 5 will do that in the same round are 1 in 6 to the 5th power, or 1 round in 7,776. For 12 to do it, it's 1 in 6 to the 12th power (about 1 in 2.2 billion). We currently have ratings on just under 5 million rounds in the PDGA database over 20 years.
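The arithmetic in that post checks out, and it's quick to verify. Assuming, as the post does, independent 1-in-6 odds per propagator:

```python
# Verify the odds quoted above: if a propagator shoots 3+ throws worse than
# their rating about 1 round in 6, the chance that every propagator in the
# group does so in the same round shrinks geometrically (assuming the bad
# rounds are independent, as the post does).

odds_all_5 = 6 ** 5      # all 5 propagators have that bad a day
odds_all_12 = 6 ** 12    # all 12 do

print(odds_all_5)        # 7776 -> 1 round in 7,776
print(odds_all_12)       # 2176782336 -> roughly 1 in 2.2 billion
```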
 
I love it when people want the ratings system to be something it's not, and create absurd hypotheticals to support their false narrative. It's even better when these people are PDGA committee members.

What if a propagator had only ever played at Fountain Hills...then they show up at Idlewild?

That ranks right up there with the gem from last year:

What if a bunch of 920 rated players beat a bunch of 1020 rated players?

The ratings calculation is just that: a calculation. If you feed it a bogus premise it will give you a bogus result. Garbage in --> garbage out. That doesn't mean that the ratings system isn't serving its purpose in the real world with real data.
 

Exactly.

And I have this running argument with a buddy of mine who LOVES to create tons of "what if" scenarios. He will tell me about times he was TD'ing an event and, as he entered more and more scores, the "ratings" would continually change. He doesn't get (or doesn't accept) that those calculations done without all the propagators were not the "rating"; they were just intermediate math. The rating is not the rating until, by definition, it includes all propagators for that day. And thus, ratings don't change during the day.
 

Ratings are totally fine for 99% of the objectives they serve. However, it's illogical to say that different courses rate differently, so you can't compare one to the other, and then average them all together anyway.

I don't like that they affect where someone can and cannot play. However, I don't have a better solution, and I freely admit that the ratings guidelines for division structure are miles and miles ahead of the local bump rule that was the standard prior to this.
 
