That was my whole point, Chuck. Basing the system solely on how the players play compared to each other, with no part of that equation being a course difficulty rating, is flawed... and yes, I realize how hard it would be to place a course rating on every course in the world that people play tourneys at. It's not your fault. You did the best you could with what you had.
Again, your lack of understanding is showing. How do you measure a course rating unless it's done by expert measurers? What people don't seem to understand is that the rating process has two independent steps. Expert measurers each determine how the course played that round, the same as if people with thermometers walked around and took temperature readings at the tee and basket of every hole. The average from our expert measurers is little different from the average course temperature you'd get from all of those thermometer readings.
Just like thermometers, we've shown that enough expert measurers with the same mix of ratings, regardless of where they come from, will produce the same course rating for that round as any other mix of expert measurers with the same average rating. We had to show this was true early on when developing the rating process, and it turned out to hold within reasonable statistical limits, even with a small number of expert measurers.
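To make the thermometer analogy concrete, here's a toy simulation (my own sketch, not the actual PDGA math) where each expert measurer produces a noisy estimate of the course's true difficulty for that round, and two independent panels with the same statistical makeup land on nearly the same course rating:

```python
# Toy model of the "thermometer" analogy: each expert measurer (propagator)
# yields a noisy estimate of the course's true difficulty for that round.
# All names and numbers here are hypothetical illustrations.
import random

def panel_course_rating(true_ssa, n_measurers, noise_sd, rng):
    """Average the per-measurer estimates, like averaging thermometer readings."""
    estimates = [true_ssa + rng.gauss(0, noise_sd) for _ in range(n_measurers)]
    return sum(estimates) / len(estimates)

rng = random.Random(42)
true_ssa = 54.0  # hypothetical "true" difficulty of the course that round

# Two independent panels with the same size and spread of measurers...
panel_a = panel_course_rating(true_ssa, 20, 1.5, rng)
panel_b = panel_course_rating(true_ssa, 20, 1.5, rng)

# ...produce course ratings that agree within reasonable statistical limits.
print(round(panel_a, 1), round(panel_b, 1))
```

The point of the sketch is just the averaging behavior: as long as the panels have the same statistical makeup, which specific measurers show up doesn't matter much.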
The course rating measurement is essentially independent from the calculation of the round ratings for all of the players who played the round, not just the propagators. We know that when the course has a rating of X, a score of Y will always get a rating of Z. That's automatic. You don't even need to know who played. From the player's standpoint, it's as if they played any course with that SSA course rating that round. Shoot Y, get Z.
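The "shoot Y, get Z" step can be sketched as a plain function. To be clear, the ~10 points per throw and the linear shape below are rough illustrative assumptions, not the actual PDGA formula (the real per-throw value varies with SSA):

```python
# Sketch of the fixed score -> rating mapping: once the course rating (SSA)
# for the round is known, a given score always maps to the same round rating,
# with no knowledge of who played.
POINTS_PER_THROW = 10  # assumed constant here purely for simplicity

def round_rating(score, ssa):
    """Treat SSA as the score a 1000-rated player would shoot; each throw
    above or below SSA moves the rating by a fixed step."""
    return 1000 + (ssa - score) * POINTS_PER_THROW

print(round_rating(54, 54))  # shooting exactly SSA
print(round_rating(50, 54))  # four throws better
print(round_rating(58, 54))  # four throws worse
```

Notice the function takes no player information at all, which is the whole point: shoot Y on a course rated X, get Z, automatically.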
When you consider how a fixed course rating would be determined, you would need dozens of rounds on a course by players with known skill levels, likely played under different conditions, or the resulting average wouldn't mean much as a fixed course rating. Here's the thing: if the data gathered for a single round is good enough to contribute to the overall course rating average, why isn't it good enough to produce the specific course rating for that round? We've tested it over tens of thousands of rounds, and it has been shown to be good enough. The PDGA rating system essentially uses the specific data for the course conditions of each round to generate round ratings that fit those conditions better than a fixed average produced under a bunch of other conditions would.
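Here's a toy comparison of the two approaches, using the same hypothetical 10-points-per-throw step as above (again, my illustration, not actual PDGA values). The same thrown score means different things on a calm day versus a windy one, and only the per-round measurement captures that:

```python
# Why per-round course ratings beat a fixed average: conditions change what
# a given score is worth. All SSA values and the per-throw step are made up.
def round_rating(score, ssa, points_per_throw=10):
    return 1000 + (ssa - score) * points_per_throw

fixed_avg_ssa = 50     # long-run average over many rounds and conditions
calm_round_ssa = 48    # measured from this round's propagators: easy day
windy_round_ssa = 52   # measured from this round's propagators: hard day

score = 50
print(round_rating(score, calm_round_ssa))   # rated below 1000 on the calm day
print(round_rating(score, windy_round_ssa))  # rated above 1000 on the windy day
print(round_rating(score, fixed_avg_ssa))    # fixed average calls both days equal
```

The fixed average rates both rounds identically and erases the conditions; the per-round SSA rewards the harder round and discounts the easier one.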