#31  
Old 05-22-2018, 12:07 PM
DavidSauls
* Ace Member *
 
Join Date: Mar 2008
Location: Newberry, SC
Years Playing: 24.7
Courses Played: 125
Posts: 15,317
Niced 3,690 Times in 1,586 Posts

Interesting how very slightly the actual ratings change (for the most part)---yet, because the rankings are sorted to the hundredth of a point, they get shuffled.

So it's a huge deal when a course increases from 4.61 to its rightful 4.63.
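
A quick Python sketch of that shuffle (toy numbers and made-up course names, just to illustrate):

Code:
# Three courses packed within 0.02 of each other; a tiny correction
# to the ratings completely reverses the sorted order.
before = {"Course A": 4.63, "Course B": 4.62, "Course C": 4.61}
after = {"Course A": 4.61, "Course B": 4.62, "Course C": 4.63}

def rank(ratings):
    return sorted(ratings, key=ratings.get, reverse=True)

print(rank(before))  # ['Course A', 'Course B', 'Course C']
print(rank(after))   # ['Course C', 'Course B', 'Course A']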
  #32  
Old 05-22-2018, 12:26 PM
wellsbranch250
Birdie Member
 
Join Date: Jul 2016
Location: Huntsville Alabama
Years Playing: 6.1
Courses Played: 420
Throwing Style: RHBH
Posts: 498
Niced 857 Times in 304 Posts

From running all these data sets I've realized that 6 of the current top 7 are pretty much in the top 10 no matter how you slice it (the exception is Harmon Hills). After that, though, there are some major changes. Mostly, like you stated, they just get shuffled around a little bit, but there are a handful of courses that have been greatly impacted by drive-by 5s and vendetta 0.5s in this data set, and I'm sure there are hundreds more when factoring in all 6,140 listed courses.
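
For a sense of scale (made-up numbers, not any real course): one vendetta 0.5 on a course with only 19 reviews drags the straight average down by a full 0.2.

Code:
# Hypothetical course: 19 genuine reviews averaging 4.5, plus one 0.5.
ratings = [4.5] * 19
with_vendetta = ratings + [0.5]
print(sum(ratings) / len(ratings))              # 4.5
print(sum(with_vendetta) / len(with_vendetta))  # 4.3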

Quote:
Originally Posted by DavidSauls View Post
Interesting how very slightly the actual ratings change (for the most part)---yet, because the rankings are sorted to the hundredth of a point, they get shuffled.

So it's a huge deal when a course increases from 4.61 to its rightful 4.63.
  #33  
Old 05-22-2018, 12:46 PM
DavidSauls
* Ace Member *
 
Join Date: Mar 2008
Location: Newberry, SC
Years Playing: 24.7
Courses Played: 125
Posts: 15,317
Niced 3,690 Times in 1,586 Posts

I'm not knocking it---it's more an observation of how tightly the Top 25 is packed, and how little difference there is between courses ranked 10 spots apart.

So any way you filter it, some course will jump into the top 10 and some other course will fall out, even though their ratings are virtually identical. (This is probably true of many things where there are thousands of entrants and you try to rank the top 10: there's not a big gap between #9 and #29.)
  #34  
Old 05-22-2018, 01:12 PM
BogeyNoMore
* Ace Member *
 
Join Date: Jul 2009
Location: Walled Lake, MI
Years Playing: 15.7
Courses Played: 330
Throwing Style: RHBH
Posts: 10,115
Niced 2,645 Times in 1,267 Posts

Quote:
Originally Posted by DavidSauls View Post
...it's more an observation on how tightly the Top 25 is packed, and how little difference there is between courses ranked 10 spots apart.
Once you get into the top 10 (or perhaps even the top 25), while it's easy to do mathematically, you're essentially splitting hairs trying to rank them. I'm not saying they're all the same - they're not. Many of them are quite different, and I say:
vive la difference!
  #35  
Old 05-22-2018, 01:19 PM
scarpfish
Resident Grouch
 
Join Date: Jan 2009
Location: Brownbackistan
Years Playing: 16.6
Courses Played: 360
Posts: 8,146
Niced 738 Times in 259 Posts

If we can learn anything from this thread, it's that the existing system, whatever its faults, works as well as circumstances will allow, and whatever we may propose to fix it does little to change the status quo.

This is why I don't gospelize these numbers that much. They are, after all, a quantified result derived from a salad of glorified opinions. In an alternate universe, we might see somewhat the same batch of top 100 courses in a somewhat different order.

Niced: (1)
  #36  
Old 05-22-2018, 01:23 PM
DavidSauls
* Ace Member *
 
Join Date: Mar 2008
Location: Newberry, SC
Years Playing: 24.7
Courses Played: 125
Posts: 15,317
Niced 3,690 Times in 1,586 Posts

Quote:
Originally Posted by BogeyNoMore View Post
Once you get into the top 10 (or perhaps even the top 25), while it's easy to do mathematically, you're essentially splitting hairs trying to rank them. I'm not saying they're all the same - they're not. Many of them are quite different, and I say:
vive la difference!
Yes... I should have said little difference, rating-wise.
  #37  
Old 05-22-2018, 01:30 PM
timg
*Administrator*
 
Join Date: May 2007
Location: Cortland, NY
Years Playing: 17.7
Courses Played: 257
Throwing Style: RHBH
Posts: 9,990
Niced 929 Times in 303 Posts

For your second data set, what weight did you give the < 5 crowd? 0.5?

A critics' list would be challenging just because the way I display TRs on the site is based on a formula rather than a flag. I guess once they hit Bronze I could just add a TR flag to their account, which would make life easier as I'd no longer be trying to hit a moving target. It's possible, but very, very rare, that people actually lose TR status.
  #38  
Old 05-22-2018, 03:16 PM
wellsbranch250
Birdie Member
 
Join Date: Jul 2016
Location: Huntsville Alabama
Years Playing: 6.1
Courses Played: 420
Throwing Style: RHBH
Posts: 498
Niced 857 Times in 304 Posts

The weighting factors for all three graphics (Photoshop image, Excel graphic, Excel graphic 2):

Data set 1:
- 0.1 weight for those with fewer than 5 reviews
- 0.2 for those with 5 or more reviews who are not a TR
- 0.4 for Bronze TRs
- 0.6 for Silver
- 0.8 for Gold
- 1.0 for Diamond

Data set 2:
- 0.1 for those with fewer than 5 reviews
- 0.5 for those with 5 or more reviews who are not a TR
- 1.0 for TRs

Data set 3:
- 0.1 for those with fewer than 5 reviews
- 1.0 for everyone else
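
For anyone who wants to reproduce the numbers, here's a minimal Python sketch of how I'd express the data set 1 weighting as a weighted average (the tier keys and the example reviews are made up, not the site's actual code):

Code:
# Data set 1 weights by reviewer tier.
WEIGHTS = {"<5": 0.1, "5+": 0.2, "bronze": 0.4,
           "silver": 0.6, "gold": 0.8, "diamond": 1.0}

def weighted_rating(reviews):
    """reviews: list of (reviewer_tier, rating) pairs for one course."""
    total = sum(WEIGHTS[tier] * rating for tier, rating in reviews)
    weight = sum(WEIGHTS[tier] for tier, _ in reviews)
    return total / weight

# A diamond 4.5 and a bronze 5.0 mostly drown out a drive-by 0.5:
print(round(weighted_rating([("diamond", 4.5), ("bronze", 5.0), ("<5", 0.5)]), 2))  # 4.37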

Quote:
Originally Posted by timg View Post
For your second data set, what weight did you give the < 5 crowd? 0.5?
  #39  
Old 09-01-2019, 02:36 PM
Ptronius
Newbie
 
Join Date: Aug 2015
Posts: 9
Niced 5 Times in 4 Posts

How about a separate ranking system based on "which course would you rather play?" It's been a looong time since I took a statistics class, so I don't really care what this system is called, but I *think* it's a thing. Every time you go to rate a new course, it would go down the list of courses you've played and, one by one, you choose whether you like that one or the new one more. Everyone's comparisons get aggregated, and then the # of discs can be assigned from the resulting rankings. The one that ends up at the top of the list is 5 discs; the one at the bottom is zero.

Because reviewers' regions overlap, every course would (hopefully, eventually) be compared to every other course through some degrees of separation, and regional flattening would hopefully be compensated for.

Written reviews wouldn't have to be a requirement to contribute to the rankings, which could be a negative, but it might encourage a lot more people (like me) to contribute and would reduce outliers caused by low numbers of ratings.

For someone to give a crazy-high ranking to their local baskets-in-an-open-field course, they would have to pretty brazenly say they prefer it to some other highly ranked course. IOW, it takes outright dishonesty to throw the rankings, as opposed to innocent enthusiasm for a new local course.

An extremely simplified example of how this works: 3 reviewers, 4 courses, each reviewer has played a different pair of courses.
- First reviewer has played courses A and B and says they like A more than B; A is now 5 in the overall ratings and B is 0.
- Second reviewer has played B and C and says they like B more than C; A is now 5, B is 2.5, and C is 0.
- Third reviewer has played C and D and says they like C more than D; A is now 5, B is 3.33, C is 1.67, and D is 0.


You would get weird things at first when there are missing connections, but I would think that would go away pretty quickly as connections are made. One hitch I can see is when a new reviewer reviews a new course; then you can have a "floater." Say they have only played C and E and they prefer E. All you know is that E is better than C; you don't know where it goes in comparison to A and B until you get some connecting comparisons.
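
For what it's worth, here's a minimal Python sketch of one way to formalize this (the general idea is pairwise-comparison ranking, in the family of Bradley-Terry or Elo models; the comparisons below are the A/B/C/D example above, and ties, cycles, and floaters are ignored for brevity):

Code:
from graphlib import TopologicalSorter  # Python 3.9+

# (preferred, other) pairs from the three reviewers in the example.
comparisons = [("A", "B"), ("B", "C"), ("C", "D")]

# Each course "depends on" every course preferred over it,
# so the topological order comes out best-first.
graph = {}
for winner, loser in comparisons:
    graph.setdefault(loser, set()).add(winner)
    graph.setdefault(winner, set())

order = list(TopologicalSorter(graph).static_order())
step = 5 / (len(order) - 1)
ratings = {course: round(5 - i * step, 2) for i, course in enumerate(order)}
print(ratings)  # {'A': 5.0, 'B': 3.33, 'C': 1.67, 'D': 0.0}

That reproduces the 5 / 3.33 / 1.67 / 0 spread from the example; a floater like E would just sit in a disconnected component until a connecting comparison shows up.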

Niced: (2)

Last edited by Ptronius; 09-01-2019 at 02:41 PM.
 

  #40  
Old 09-01-2019, 03:18 PM
DiscGolfCraig
Double Eagle Member
 
Join Date: Feb 2008
Location: Charlotte, NC
Years Playing: 15.7
Courses Played: 332
Posts: 1,296
Niced 292 Times in 123 Posts

Quote:
Originally Posted by Ptronius View Post
How about a separate ranking system based on "which course would you rather play?" ...
First post and it's beating a dead horse. Wish someone had told you to save your energy before writing all this.

Niced: (1)