This analysis is based on actual results data from Pro Worlds 2013.
I did my own version of the stats for Worlds as well (long and not well put together, sorry, and still all based on unofficial ratings).
The largest variation above rating was +4.48% (avg 1022 for Worlds, rated 978).
The largest variation below rating was -2.97% (avg 936, rated 965).
McBeth averaged 1053, 0.94% (9.8 points) over his rating of 1043.
57.6% (72 of 125) of players who finished all 6 rounds (excluding semis and finals) shot within 1% of their rating, above or below.
4.8% shot at least 20 points below their rating.
12.7% shot at least 20 points above their rating.
36 players averaged more than 10 points over their rating.
16 players averaged more than 10 points below their rating.
Avg rating of all competitors who finished 6 rounds = 990.
Avg rating of tourney rounds = 993, i.e. 0.3% above rating.
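For anyone who wants to check the math, the "variation" percentages above are just points over (or under) rating divided by the rating itself. A quick sketch, using McBeth's numbers from above:

```python
# Variation percentage = points over rating / rating * 100.
# Using McBeth's numbers from the stats above as a sanity check.
def variation_pct(points_over: float, rating: float) -> float:
    """Percent variation of a tournament average from a player's rating."""
    return points_over / rating * 100

# McBeth: 9.8 points over a 1043 rating
print(round(variation_pct(9.8, 1043), 2))  # -> 0.94
```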
Rating Range     #    Avg Rating    Avg Round    Variance
1019+           25        1030.5       1034.0        +3.5
1002-1017       24        1007.8       1010.4        +2.6
980-1001        25         989.8        992.7        +2.9
969-979         25         974.0        978.7        +4.7
<969            26         950.1        953.8        +3.6
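For the curious, the band table above can be reproduced from (official rating, tournament round average) pairs by grouping players into rating bands and averaging each group. The sample players in this sketch are made up for illustration; they are not the actual Worlds data.

```python
from statistics import mean

# Rating bands matching the table above (label, low, high).
BANDS = [("1019+", 1019, 10_000), ("1002-1017", 1002, 1017),
         ("980-1001", 980, 1001), ("969-979", 969, 979), ("<969", 0, 968)]

def band_stats(players):
    """For each band, return (label, n, avg rating, avg round, variance).

    `players` is a list of (official rating, avg tournament round rating).
    """
    rows = []
    for label, lo, hi in BANDS:
        group = [(r, a) for r, a in players if lo <= r <= hi]
        if group:
            avg_r = mean(r for r, _ in group)
            avg_a = mean(a for _, a in group)
            rows.append((label, len(group), round(avg_r, 1),
                         round(avg_a, 1), round(avg_a - avg_r, 1)))
    return rows

# Hypothetical sample, not real results:
sample = [(1043, 1053), (1025, 1030), (1010, 1008),
          (995, 999), (975, 981), (960, 955)]
for row in band_stats(sample):
    print(row)
```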
Since everyone loves talking about Joe Bishop: he was the 6th highest over his rating, averaging 943 against his 913 rating. (Again, I shot a round with him, his 991, and it was a damn good time; most fun round of the week, and a crappy scoring round for me.)
So the takeaway for me is that either the ratings system is accurate, or it is inaccurate in such a consistent way that it might as well be. Either way, I think it's pretty interesting, and more often than not (given a solid sample size) it's a good predictor of how someone would do compared to me on a normal course.
I do think this brings up the fact that we need a more accurate way to rate an individual course on any given day; that way, our ratings would mean something. If SSE is the "par" (i.e., what a 1000-rated player would shoot), then I should be able to figure out pretty quickly what I should shoot based on my rating. We aren't there yet, and once we are, I think ratings will be more accepted as accurate descriptions of ability.
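The back-of-the-envelope math above can be sketched out directly. This assumes each throw is worth roughly 10 rating points, which is a simplification; the actual conversion varies by course and is not something I'm asserting as the official formula.

```python
# Rough sketch: if the scratch scoring average is what a 1000-rated
# player shoots, and each throw is worth roughly 10 rating points
# (an assumption -- the real conversion varies by course), then a
# player's expected score follows directly from their rating.
POINTS_PER_THROW = 10  # assumed; not the official per-course value

def expected_score(ssa: float, rating: int) -> float:
    """Score a player of `rating` should roughly shoot on a course with this SSA."""
    return ssa + (1000 - rating) / POINTS_PER_THROW

# e.g. a 950-rated player on a course where a 1000-rated player shoots 54:
print(expected_score(54, 950))  # -> 59.0 (5 throws above the scratch average)
```

That kind of quick lookup is exactly what would make ratings feel meaningful round to round.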