
Weighted Reviews

Maybe it still works out ok? If so, it might be possible to do a half-assed version of this system with the data that's already on here. The algorithm could take each reviewer's ratings as tiers of preference and discount the actual number value of the ratings. If someone has only two reviews, say a 1 and a 4.5, it would only count that they prefer (or recommend, or rate higher, however you want to think about it) the second one. Reviewers with only one review couldn't be counted.

I'll reiterate what I think the positives of this system might be:

- It might reduce outliers because you can only say a course is better or worse than another course.
- It would spread the ratings out evenly across the full number spectrum. (Grading on a curve, basically. There'd be the same number of courses rated 0 to 1 as 1 to 2, 2 to 3, and so on.) This may not accurately reflect the quality of the courses (an argument for having both kinds of ratings, maybe?) but it would make the collective preferences easier to distinguish.

I don't know but I suspect these things together would mostly squeeze the hump in the bell curve toward the bottom end and a lot of the baskets-around-a-soccer-field type courses would go down. It may not make a huge difference when considering all courses everywhere at once but when you look at smaller areas it might.
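The tiers-of-preference idea above could be sketched roughly like this: treat every pair of ratings by the same reviewer as a single "prefers A over B" vote, ignore the numeric gap, and skip one-review reviewers. This is just an illustration with made-up reviewers, courses, and ratings, not anything DGCR actually runs:

```python
from collections import defaultdict

# Hypothetical input: (reviewer, course, rating) triples.
reviews = [
    ("alice", "Maple Hill", 4.5), ("alice", "Soccer Field DGC", 1.0),
    ("bob",   "Maple Hill", 4.0), ("bob",   "Borderland", 4.5),
    ("bob",   "Soccer Field DGC", 2.0),
    ("carol", "Borderland", 5.0),  # only one review: no relative preference
]

def preference_scores(reviews):
    # Group each reviewer's ratings together.
    by_reviewer = defaultdict(list)
    for reviewer, course, rating in reviews:
        by_reviewer[reviewer].append((course, rating))

    wins = defaultdict(float)
    comparisons = defaultdict(int)
    for ratings in by_reviewer.values():
        if len(ratings) < 2:
            continue  # reviewers with one review can't be counted
        # Each pair of ratings by the same reviewer is one "A over B" vote;
        # the size of the numeric gap (1 vs 4.5) is deliberately ignored.
        for i, (course_a, r_a) in enumerate(ratings):
            for course_b, r_b in ratings[i + 1:]:
                comparisons[course_a] += 1
                comparisons[course_b] += 1
                if r_a > r_b:
                    wins[course_a] += 1
                elif r_b > r_a:
                    wins[course_b] += 1
                else:  # tie: half a win each
                    wins[course_a] += 0.5
                    wins[course_b] += 0.5

    # Win rate per course in [0, 1]. "Grading on a curve" would then map
    # these win rates onto a uniform 0-5 spread by percentile rank.
    return {c: wins[c] / comparisons[c] for c in comparisons}

scores = preference_scores(reviews)
```

With this toy data, Borderland wins every comparison it appears in, the soccer-field course loses every one, and Maple Hill lands in between, which is the "better or worse than another course" effect described above.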

I don't think there's anything wrong with the current system. I still feel the best improvement would be to add some 0.25 increments (e.g. give a course a 3.75 or 4.25) to help differentiate courses. In the end, the system ain't broke so it doesn't need fixing as far as I can tell. The real solution is increased reviews.

You're right. I find the best courses in an area by choosing a minimum number of reviews (usually 5) and then sorting by rating. Sure, there are some 3.8 courses that are better than 4.1 courses, but that's rare, especially with higher numbers of reviews. In the end, there are other factors that are more important than the number or even overall course quality, e.g. course location and course type. For course type, I go to reviews and read a few good ones (sort by "most helpful") to get an idea of what I'm in for as far as exhaustion factor and whether it's short or long, technical or open. This method has never steered me wrong. Making anything more complicated is more likely to decrease sample size (number of reviews), which would be the only way to make DGCR less helpful imho.
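That filter-then-sort approach is simple enough to show in a few lines. The course names and numbers here are invented just to demonstrate why the review-count cutoff matters:

```python
# Hypothetical (course, rating, num_reviews) rows.
courses = [
    ("Maple Hill", 4.7, 120),
    ("Local Park DGC", 4.9, 2),   # high rating, but tiny sample size
    ("Borderland", 4.4, 85),
    ("Soccer Field DGC", 2.1, 14),
]

MIN_REVIEWS = 5

# Keep only courses with enough reviews, then sort by rating, best first.
shortlist = sorted(
    (c for c in courses if c[2] >= MIN_REVIEWS),
    key=lambda c: c[1],
    reverse=True,
)
```

The 4.9 course drops out because two reviews aren't enough to trust the number, which is exactly the point about sample size being what makes the ratings useful.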
 
I'm not so sure it matters how much the reviewer weighs? I guess if it's a really tough course to hike, then it would be nice to know, but for the most part I'm just not sure if it matters.
 
I've seen a similar style of ranking methods called MaxDiff. An adapted method to do a comparative analysis of courses would be pretty interesting. I don't think the rating/ranking system needs to change here, but I am curious to see if there would be any significant differences in the ranking lists.
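For anyone curious, the simplest count-based MaxDiff scoring looks something like this: each respondent sees a small set of items and picks the best and worst of that set, and an item's score is (best picks minus worst picks) divided by the number of times it was shown. The courses and responses below are made up purely to illustrate the mechanics:

```python
from collections import defaultdict

# Hypothetical MaxDiff responses: (set of courses shown, best pick, worst pick).
responses = [
    ({"Maple Hill", "Borderland", "Pyramids"}, "Maple Hill", "Pyramids"),
    ({"Borderland", "Pyramids", "Flip City"}, "Flip City", "Pyramids"),
    ({"Maple Hill", "Flip City", "Borderland"}, "Maple Hill", "Borderland"),
]

def maxdiff_scores(responses):
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for shown_set, b, w in responses:
        for course in shown_set:
            shown[course] += 1
        best[b] += 1
        worst[w] += 1
    # Count-based MaxDiff score in [-1, 1]:
    # (times picked best - times picked worst) / times shown.
    return {c: (best[c] - worst[c]) / shown[c] for c in shown}

scores = maxdiff_scores(responses)
```

Real MaxDiff studies fit a choice model rather than raw counts, but even this toy version shows how comparative picks separate courses without anyone assigning a number.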
 

It depends on what "significant" means, but since the top courses are separated by tiny fractions of a rating, it would almost certainly make a noticeable difference.
 

Of course people can choose different definitions of significance but we need a different thread to define that :D

I think the strength of the MaxDiff method is helping sort the middle-of-the-pack courses, not necessarily the high/low tails of the distribution.
 
My $0.02 on weighing reviews...

Some people's reviews are light and airy.

Others' (e.g. Wellsbranch's and mine) have considerable heft, as we tend to break courses down into specific attributes and provide details describing a particular course with regard to each of those attributes.

Judging from the results, DGCR's voting members seem to prefer reviews that "carry some weight," while light & airy reviews don't seem to be as well received.

Just my personal observations.
 