
Trusted Reviewer - Groupthink

Which TR do you trust the most? (more than one choice allowed)


Total voters: 66
Also, some higher-skilled players themselves do not fully understand design concepts for lower-skilled players. What we need are college courses/apprenticeships/degrees/certifications in order to truly decipher who knows what better than whom. The DGCD is a good start, but a lot of courses are still being put in by the local Joe Golfer who lives down the street.

Anybody have a couple million laying around to start a school for disc golf course design?
 
I'll again point out you're taking a small percentage of TRs and making assumptions about the group as a whole based on a small sample.

Yeah....I read it the first time you posted. I assume you want a response since you are posting it again.

Yes....it is normal in statistical analysis to make observations based on a subset of data (randomly collected). These TRs were basically randomly selected, other than the fact that they had written the most reviews (it is not the Diamond or Gold TRs, or the ones with the most courses played, or the 40-year-olds, or those living in the Pacific Northwest, or those with PDGA ratings over 935, etc.).

These top 20 have 4498 reviews between them. Amazingly, you need to go down the list and grab the next 3 pages (60 reviewers) before they have exactly 4498 reviews amongst them.

It would be cool to have a bigger set of numbers to make the data even more valid......but the lower down the list you go, the less valid the data becomes. These guys only have 53-55 reviews each (timg is in this group), so their number of courses with 4.0+ ratings becomes pretty small.

Do you have a concern about the validity of the data as presented? What is the specific concern?

Of course there are people who do not fit the profile that this data portrays. There rarely is a specific person who fits the profile of a whole bunch of data averaged together. If this data is instructive and helpful to you, use it. If not, ignore it (or laugh at it). I never expect anything anyone posts on an internet forum to change someone's mind if their mind is already made up.
 
Again, you are on my wavelength totally. What I learned here is that lots of reviewers do not rate from their own personal perspective and their own personal preference points....they rate (the number) based on what they think the masses will think. I think the data I went to great pains to gather bears that out.

What lots of reviewers write (the words) is intended to be helpful to all. I have no beef with that. Players of all skill levels can benefit from knowing that a given course holds water in puddles/mud for a week after a rain, or has no bathrooms, or has parking issues, or gets really overcrowded on weekends, or is easy to get lost on (so bringing a map is advised), or has major mosquito problems, or is very thorny, or offers little shade and has no drinking fountains near hole 9, or is very hilly and exhausting but has no benches, etc.

But when it comes to design, these same reviewers try hard to appeal to the masses too. This simply does not work. Like you pointed out with your Justin Bunnell post, a hole that can be perfect for a mid-level player is boring both for a higher-rated player AND for a lower-level player. Also, many reviewers simply do not understand design in the framework of testing player skills.....they write about things like flow/fairway routing, erosion control, and safety. Not that those are bad in and of themselves, but those things belong on a designer group forum, not in a users/players forum.

So, for a player to rate a course based on how well he/she can compete against the course (the single most important aspect in the sport of disc golf), he/she needs to let readers know how they rate the course in this area. And that rating has to be made for a specific skill level (and that level disclosed).

Well.....that was a whole bunch of rambling to let you know that you will not get the very logical thing you ask for from "Trusted Reviewers" ratings since they write (and rate) for the masses (the lump sum average.....the fat part of the bell curve).

And.....all that said, I still maintain that DGCR ratings do extremely well at finding courses for me that I really enjoy. I have found that anything rated >3.5 will only very rarely disappoint and that <2.5 are not worth me going out of my way and spending precious time on if I have other options.

So take, say, Flip City: I did not enjoy it to the 5.0 experience that DGCR rates it at (it was an A-grade, 4.0 experience), and Idlewild I enjoyed as a B+ great experience....these are still wonderful courses, but their scoring experience for me is the primary thing that knocked them down. Conversely, Yadkin County Park is only rated 3.57, but it has all the elements (especially scoring euphoria) that make it an incredibly addictive experience for me......if I lived nearby I would go there over and over and over in hopes of conquering it. The better a course is at giving me that reaction, the better the course.....so I gave it an A+ (5.0).

In response to the part I bolded, I would argue that the rest of your post is stuff that belongs more on a designer forum. This site is about finding the courses that are enjoyable to play, and things like navigation and safety issues are certainly a part of that. The vast majority of people on this site are much more qualified to assess the experience of a course than the design. I try to evaluate the design in a way not all that different from your stated philosophy, though I look at it from both my own skill level and how it plays for an 850ish rated player, since that's where my fiancee is and she plays every new course with me.

The largest part of my personal rating by far is based on the design and how it plays for my level. I also really enjoy gold-level courses; I have the distance and driving skills of a low gold-level player but don't have the short game and putting to be rated that way yet, so I feel like I have a pretty good handle on how a design plays for that level. In my reviews, I do try to look at it for different skill levels, but you'll find that info down in the other thoughts section. You'll also find a bunch of other things in my pros and cons that don't necessarily affect my rating but that I think are helpful to know when choosing a course to play.

All that to basically say that you mentioned several times in this thread that you wanted to focus specifically on ratings, but now you're conflating that with the reviews and drawing some conclusions on how people rate based on the verbiage of their reviews. I don't think those are as connected as you're implying.
 
...This is a list of top Trusted Reviewers and an analysis of their course ratings of 4.0 and greater. It compares them against what the course is rated by all reviewers for each course...

Can the question be re-phrased to ask which reviewer is best at assigning the "correct" rating?

Can we assume that TR ratings are more "correct" than those of all reviewers?

Isn't the ability to give "correct" low ratings just as important as singling out 4.0 courses?

If so, wouldn't the similarity of all of a TR's ratings to those of other TRs be a better measure?

You could even do a second round of refinement, by only comparing to the subset of TR's that scored best on the first round.
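
A rough sketch of how that two-round comparison could be computed (made-up ratings and reviewer names, nothing DGCR actually runs): score each TR by how closely their ratings track the average of the other TRs on shared courses, then re-score everyone against only the best-agreeing subset from round one.

Code:
from statistics import mean

# ratings[reviewer][course] = that reviewer's rating (hypothetical data)
ratings = {
    "tr_a": {"course_1": 4.5, "course_2": 5.0, "course_3": 3.5},
    "tr_b": {"course_1": 5.0, "course_2": 4.5, "course_3": 4.0},
    "tr_c": {"course_1": 4.0, "course_2": 4.0, "course_3": 3.0},
}

def agreement_score(reviewer, peers):
    """Mean absolute gap between this reviewer's ratings and the peer-group
    average on shared courses (lower = closer agreement)."""
    gaps = []
    for course, my_rating in ratings[reviewer].items():
        peer_ratings = [ratings[p][course] for p in peers
                        if p != reviewer and course in ratings[p]]
        if peer_ratings:
            gaps.append(abs(my_rating - mean(peer_ratings)))
    return mean(gaps) if gaps else float("inf")

everyone = list(ratings)
# Round 1: compare each TR against all other TRs.
round_1 = sorted(everyone, key=lambda r: agreement_score(r, everyone))
# Round 2: re-score against only the TRs who agreed best in round 1.
top_subset = round_1[:2]          # arbitrary cutoff for the example
round_2 = sorted(everyone, key=lambda r: agreement_score(r, top_subset))
print(round_1, round_2)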
 
To me, the skill level of the reviewer should be disclosed. It matters because you need to know where their game is to understand where they are coming from. I am not saying change the rating system to give more weight to higher-rated players; their viewpoint is just as important, but it means different things to players of different levels. For instance, a sidearm 300' flick player may love a course that others don't care for.
Besides being ludicrous, it's unnecessary. Not everything needs to be spelled out in black and white; you can figure these things out by *gasp* reading the reviews and the course data. Using your example, if a short course with right-ending holes is rated/reviewed favorably by a RHFH-dominant player, then you can probably put two and two together. But you can never divine all the info you need to know from one quantity (the rating) and the reviewer's baseball card.
Do you really think people give up that easily and are so weak-minded? I don't really agree with this. Although I think we do need some shorter beginner courses, you gotta give newer players a little more credit. If every new golfer is this fickle, easily frustrated type of person, then they'll never last anyway, and I say why waste time catering to them.
If most courses were championship-level courses, really long and open and really punishing, most n00bs would A) get tired of throwing big sky hyzers 5-6 times before they get to putt, hole after hole, and B) probably lose the discs they just bought way more frequently. Some would gut it out, especially if they see a good player throw and see that it can be done, but I would bet that most n00bs are your college kids, families, and other kids who think they're just gonna throw frisbees around. Even with most courses being red/blue level, there's already a problem with n00bs coming to the park with dollar store frisbees, getting totally overwhelmed, and probably never giving disc golf another thought for the rest of their lives. Why waste time catering to these people? B/c they buy way more discs than the small fraction of us obsessed DG internet nerds do.
Again, you are on my wavelength totally. What I learned here is that lots of reviewers do not rate from their own personal perspective and their own personal preference points....they rate (the number) based on what they think the masses will think. I think the data I went to great pains to gather bears that out.
Not really. They rate the courses based on the description that goes beside the number (decent, good, best of the best, etc.); they just happen to come to general agreement on that because words mean things.
 
Can the question be re-phrased to ask which reviewer is best at assigning the "correct" rating?

Can we assume that TR ratings are more "correct" than those of all reviewers?

Isn't the ability to give "correct" low ratings just as important as singling out 4.0 courses?

If so, wouldn't the similarity of all of a TR's ratings to those of other TRs be a better measure?

You could even do a second round of refinement, by only comparing to the subset of TR's that scored best on the first round.

I don't think there's nearly enough data yet to make those comparisons, though they do sound like an interesting way to look at this. There are very few courses with more than a couple TR reviews and almost none with 10+ TR reviews.
 
It would be cool to have a bigger set of numbers to make the data even more valid......but the lower down the list you go, the less valid the data becomes. These guys only have 53-55 reviews each (timg is in this group), so their number of courses with 4.0+ ratings becomes pretty small.

The counterpoint criticism of my data analysis based on the aggregate is that Mashnut & Harr dominate the # of datapoints........so that analysis is weighted more toward measuring them.
 
The counterpoint criticism of my data analysis based on the aggregate is that Mashnut & Harr dominate the # of datapoints........so that analysis is weighted more toward measuring them.

If you take just the top 3 by # of reviews, Harr, TVK and I have a third of the total reviews out of the group you looked at. Not sure how much that matters, just throwing it out there.
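
To show what that weighting concern looks like in practice, here's a tiny sketch (invented numbers, purely illustrative): a pooled, per-review average gets pulled toward whoever wrote the most reviews, while averaging each reviewer first gives every TR equal weight.

Code:
from statistics import mean

deviations_by_reviewer = {          # |TR rating - DGCR average| per review (made up)
    "prolific_tr": [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.1],
    "casual_tr_1": [0.8, 0.9],
    "casual_tr_2": [0.7, 1.0],
}

pooled = mean(d for devs in deviations_by_reviewer.values() for d in devs)
per_reviewer = mean(mean(devs) for devs in deviations_by_reviewer.values())

print(f"pooled (review-weighted): {pooled:.2f}")        # pulled toward prolific_tr
print(f"per-reviewer (equal weight): {per_reviewer:.2f}")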
 
Can the question be re-phrased to ask which reviewer is best at assigning the "correct" rating?

Can we assume that TR ratings are more "correct" than those of all reviewers?

Isn't the ability to give "correct" low ratings just as important as singling out 4.0 courses?

If so, wouldn't the similarity of all of a TR's ratings to those of other TRs be a better measure?

You could even do a second round of refinement, by only comparing to the subset of TR's that scored best on the first round.

As usual, your questions (and thoughts) are very insightful....both in what you say and in what you don't say (what your questions lead to).

Your theme seems to be "correct" ratings. I will try to succinctly contrast 2 approaches to arriving at "correct".

1) The reviewer provides a rating that is solely and honestly (brutally sometimes) theirs......their reaction alone to the playing experience of the course for them. No speculation on how it will play for others. From that (and all the others doing the same thing), the average of everyone combined together becomes the DGCR average....the "correct" average.

2) The reviewer takes into account what he/she thinks the "correct" rating for a course is, based on their thoughts of what the masses like. Then people argue with everyone else about why their ratings are wrong if they differ from the DGCR average (which is also very close to the rating they provided).

#1 is what I try to do and what crashzero is talking about. This is like what movie reviewers or food critics do. It allows users to follow one or more critics and use their guidance in a more useful way. I think it would be cool for timg to create a way you could select favorite reviewers and see ratings based on just them (like he does with the TR checkbox.....except much expanded from that in functionality).

#2 is what the data shows that TRs do. It is helpful to the masses....but is not as helpful as if they were more personal with their ratings and simply let the magic AVG function do the work of providing the "correct" rating.
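
As a rough illustration of #1 and the favorite-reviewers idea (hypothetical reviewer names and ratings, not an actual DGCR feature): each reviewer posts a purely personal number, and the "correct" rating is just the average, optionally computed over only the reviewers you choose to follow.

Code:
from statistics import mean

# course -> {reviewer: purely personal rating}  (hypothetical data)
course_ratings = {
    "course_a": {"reviewer_1": 5.0, "reviewer_2": 3.5, "reviewer_3": 3.5},
    "course_b": {"reviewer_1": 4.0, "reviewer_2": 5.0, "reviewer_3": 4.5},
}

def site_average(course, favorites=None):
    """DGCR-style average; optionally restricted to a favorite-reviewer set."""
    ratings = course_ratings[course]
    if favorites:
        ratings = {r: v for r, v in ratings.items() if r in favorites}
    return round(mean(ratings.values()), 2)

print(site_average("course_a"))                               # everyone
print(site_average("course_a", {"reviewer_1", "reviewer_2"}))  # favorites only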
 
2) The reviewer takes into account what he/she thinks the "correct" rating for a course is, based on their thoughts of what the masses like. Then people argue with everyone else about why their ratings are wrong if they differ from the DGCR average (which is also very close to the rating they provided).

In an effort to be succinct, I realize that this might come across as harsh and judgmental to those whose personal preference in courses happens to match up extremely well with the masses. That was not the intent.

I would think, however, that if a person has discerning tastes at all, there would be a large minority of courses that do not suit that reviewer's tastes.....just basing that on the huge variety of course types out there and the wide variety of reviewers writing reviews and rating.
 
This is how the absolute variation from the DGCR average ("correct" rating) might look for the 2 approaches (when looking at courses raters rated 4.0+):

Code:
              <0.25  .25-.49  .50-.74  .75-.99  >0.99
Approach 1     35%     22%      18%      20%     6%
Approach 2     53%     30%      13%      4%

Note how 83% (a solid majority) of the Approach #2 ratings either match the DGCR average or land within half a point of it.
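
For what it's worth, here is roughly how a table like that could be generated (invented rating/average pairs, just to show the bucketing): take each rating's absolute gap from the DGCR course average and count how many fall into each range.

Code:
from collections import Counter

buckets = [(0.25, "<0.25"), (0.50, ".25-.49"), (0.75, ".50-.74"),
           (1.00, ".75-.99"), (float("inf"), ">0.99")]

def bucket_label(gap):
    for upper, label in buckets:
        if gap < upper:
            return label

# (reviewer rating, DGCR average) pairs -- invented for illustration
pairs = [(4.0, 4.1), (4.5, 3.8), (5.0, 3.9), (4.0, 4.0)]
gaps = [abs(tr - avg) for tr, avg in pairs]
counts = Counter(bucket_label(g) for g in gaps)
for _, label in buckets:
    print(f"{label:>8}: {100 * counts.get(label, 0) / len(gaps):.0f}%")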
 
