Nevertheless, rating and ranking courses in that way simply doesn't cut it. Courses are so different, especially when comparing across regions, that I still value the review more highly than the number in every case. When I get to a new region, I've often found a trusted reviewer that I find I can, pardon the obvious wordplay, trust. In Florida, it was reposado. The rating numbers actually meant very little. What mattered was gathering significant information from a trustworthy source who had played all the courses in the region, in addition to many more nationwide.
I do something like that: if I run across a review that speaks to me, I'll read the other reviews from that person in the area. But as far as that particular manner of rating not cutting it, couldn't you just as well say that about the current system? Or maybe that's what you're saying. But if you're looking for new courses to play, don't you start out by looking at the higher rated ones in the area first? If I'm going on a road trip somewhere and hope to hit a course in the area or on the way, I look at my route on the DGCR map and see what's there. If there are a lot, I might filter out all the ones below X.X rating and start looking at what remains.
The way I see it, rating a course already gives some sort of comparison between all courses I've played. I think this holds for any reviewer who has reviewed more than a handful of courses. It would be redundant for me to say "I prefer course A over course B" when I've already rated one 4.0 and one 3.5.
I get what you mean, but with the half-disc units you can only put courses in tiers, so you're going to end up with bunches of courses ranked the same numerically in your individual list. That's just fine on its own, but I don't know how much it would mathematically gum up the works if you tried to connect everybody's rankings together, which is the idea.
Maybe it still works out ok? If so, it might be possible to do a half-assed version of this system with the data that's already on here. The algorithm could take each reviewer's ratings as tiers of preference and discount the actual numeric value of the ratings. If someone only has two reviews, one's a 1 and one's a 4.5, it would only count that they prefer/recommend/rate-higher (however you want to think about it) the second one. Reviewers with only one review couldn't be counted.
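To make that concrete, here's a rough sketch of the tier-extraction step (the function name and data shape are my own invention, not anything DGCR actually runs): each reviewer's ratings are only used to order their courses, ties stay ties, and a reviewer with fewer than two reviews contributes nothing.

```python
from itertools import combinations

def pairwise_prefs(reviews):
    """Turn one reviewer's ratings into pairwise preferences.

    reviews: dict mapping course name -> rating. The numeric value
    is only used to order courses, never as a score in itself.
    Returns a list of (preferred, other) pairs; equal ratings are
    the same tier, so they record no preference. A reviewer with
    fewer than two reviews yields an empty list.
    """
    prefs = []
    for a, b in combinations(reviews, 2):
        if reviews[a] > reviews[b]:
            prefs.append((a, b))
        elif reviews[b] > reviews[a]:
            prefs.append((b, a))
    return prefs

# the two-review example from above: a 1 and a 4.5 only tell us
# that the second course is preferred over the first
print(pairwise_prefs({"Course A": 1.0, "Course B": 4.5}))
# [('Course B', 'Course A')]
```

Connecting everybody's rankings together would then mean aggregating these pairs across all reviewers, which is where the tied tiers might or might not gum things up.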
I'll reiterate what I was thinking the positives of this system might be:
- It might reduce outliers because you can only say a course is better or worse than another course.
- It would spread the ratings out evenly across the full number spectrum (grading on a curve, basically: there'd be the same number of courses rated 0 to 1 as 1 to 2, 2 to 3, and so on). This may not accurately reflect the quality of the courses (an argument for having both kinds of ratings, maybe?), but it would make the collective preferences easier to distinguish.
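The curve idea in the second bullet could be sketched like this (again a hypothetical helper, assuming DGCR's 0-to-5 scale): once you have a collective ranking, you just space the scores evenly from top to bottom.

```python
def curve_ratings(ranked_courses, top=5.0):
    """Spread a collective ranking evenly across the rating scale.

    ranked_courses: list of course names ordered best-first.
    The best course gets `top`, the worst gets 0, and everything
    in between is evenly spaced -- graded on a curve, regardless
    of the courses' absolute quality.
    """
    n = len(ranked_courses)
    if n == 1:
        return {ranked_courses[0]: top}
    step = top / (n - 1)
    return {c: round(top - i * step, 2) for i, c in enumerate(ranked_courses)}

# six ranked courses land at 5.0, 4.0, 3.0, 2.0, 1.0, 0.0,
# filling every rating band with the same number of courses
print(curve_ratings(["A", "B", "C", "D", "E", "F"]))
```

That's the whole "same number of courses in every band" property in one line of arithmetic.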
I don't know, but I suspect these things together would mostly squeeze the hump in the bell curve toward the bottom end, and a lot of the baskets-around-a-soccer-field type courses would go down. It may not make a huge difference when considering all courses everywhere at once, but when you look at smaller areas it might.
Ok, I think I'm done pitching this for now. Sorry for the interruption. You can go back to discussing par. Me, I'm such a casual (and bad) player I haven't bothered to keep track of my score in 15 years.