RD Contest – State Scoring formula

Is the RD contest state score formula correct, and is it fair?
There has been a lot of concern about the RD contest state score formula, and conjecture about how it can be improved.
I have written this to clarify my thoughts and to show others what my analysis of the scoring formula reveals.

A fundamental principle is that the scoring formula has to work in the simplest of all situations, i.e. when all states perform equally.  We know the number of licensees differs from state to state, so we cannot simply add up the logs and award the contest to the state with the biggest total.  That would hand the result to the largest state, and that’s not fair to the smaller ones.

To analyse the situation we have to start with some assumptions.

  • Assume all operators across all states had the same average points per log
  • Assume all states had the same participation rate, i.e. logs submitted divided by licensees

Should we expect that to result in equal scores?  It seems reasonable, doesn’t it?

Let’s assume each log contains 10 points (this can be an average, if you wish) and that the participation rate in each state is 10%.
Table 1. Examples of varying state sizes and scores
State   Licensees   Logs   Participation rate (PR)   Points per log   Score (total points * PR)
A       3000        300    10%                       10               3000 * 10% = 300
B       1000        100    10%                       10               1000 * 10% = 100
C        200         20    10%                       10                200 * 10% = 20
We find that the smaller states cannot win if all operators score the same average points per log as those in the larger state.  Their populations are a fraction of the bigger state’s, so they need a correspondingly higher average points per log to compete.
This simple example shows that the current scoring formula does not allow for different sized states.
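To check the arithmetic, here is a short Python sketch of the current formula applied to the Table 1 figures.  The numbers are the illustrative ones from the table, not real contest data.

```python
# Current RD formula: state score = participation rate * total points on logs.
# Illustrative figures from Table 1: every state has 10% participation and
# 10 points per log, so a size-neutral formula would score them equally.

table1 = {  # state: (licensees, logs, points_per_log)
    "A": (3000, 300, 10),
    "B": (1000, 100, 10),
    "C": (200, 20, 10),
}

def state_score(licensees, logs, points_per_log):
    pr = logs / licensees                  # participation rate
    total_points = logs * points_per_log   # points on all submitted logs
    return total_points * pr               # current formula

scores = {name: state_score(*row) for name, row in table1.items()}
print(scores)  # equal effort per licensee, yet A scores 300, B 100, C 20
```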
In most RD contests, the two larger states have had low participation rates and several mid-sized states have had better rates.  To examine how those participation rates affect the outcomes, we can insert those factors into the model and see how the state scores change.
We can go further and examine what a smaller state needs to do in order to score higher than a larger one.

Table 2. Further examples showing how state score is affected by participation rate (PR)

State   Licensees   Logs   Participation rate (PR)   Points per log   Score (total points * PR)
A       3000        150    5%                        10               1500 * 5% = 75
B       1000        100    10%                       10               1000 * 10% = 100
C        200         20    10%                       50               1000 * 10% = 100

Firstly, to explain states A and B of this example: the participation rate of state A has dropped to half its level in the first table, but note that its computed state score has been cut to a quarter of its original value.  It was 300 when its participation rate was 10%, but with half the participation the final score is 75, one quarter of the original value.  We can observe that the state score changes in proportion to the square of the participation rate.  Hold that thought.

This change in state A’s participation has had a dramatic effect on its final score, allowing state B to outscore state A even though both states have the same average points per log.

Secondly, for the much smaller state C, I have illustrated how it can achieve the same score as a state five times its size.  Its logs contain five times as many points per log as the mid-level state’s.  Here we can observe that the state score is proportional to the average points per log, provided the participation rate remains constant.

Why is the state score not proportional to the Participation Rate?

As we know, the formula for the state score is:

State score = PR * (total points of all submitted logs)

However, the total of all logs already reflects the participation rate: if the PR were higher or lower, the log total would be correspondingly higher or lower.  Indeed, if the number of logs were halved, the total points would halve.  Putting that another way:

Total points of submitted logs = (total points of all possible logs from the state) * PR

Substituting this into the state score formula, it can be rewritten as:

State score = (total points of all possible logs from the state) * PR^2  (i.e. PR squared)

From this it is apparent that the state score is proportional to PR squared.

This is why the state score is affected so dramatically by the participation rate PR.
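This can be verified numerically.  The sketch below expresses the current formula in terms of PR and shows that halving state A’s participation rate (Table 1 to Table 2) cuts its score to a quarter, not a half.  The figures are the illustrative ones used above.

```python
# State score = (total possible points from the state) * PR^2.
# The current formula, expressed in terms of the participation rate PR:
def state_score(licensees, pr, points_per_log):
    logs = licensees * pr                 # logs submitted at this PR
    return logs * points_per_log * pr     # total points * PR

full = state_score(3000, 0.10, 10)  # state A in Table 1
half = state_score(3000, 0.05, 10)  # state A in Table 2, PR halved
print(full, half)                   # score drops to a quarter, not half
assert abs(full / half - 4.0) < 1e-9
```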

How can the state score formula be improved?

Clearly the formula currently does not compensate for the different sizes of each state.
Teams of differing sizes can be compared only by normalising results to the average effort of each team member.  In the case of the states competing for the RD trophy, this translates to the average number of points earned per licensee in each state.
i.e. Average score per licensee = (total points on logs) divided by (total licensees in that state).
With that formula, going back to Table 1 above, the average score per licensee works out to 1.0 in every state.  A tie.
And since all three states produced the same average effort per licensee, a tie is exactly the correct result.  This measure works well for the case where all states perform equally.  How does it work if some states perform differently?
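The Table 1 tie can be checked directly with a short sketch of the proposed measure (illustrative figures only):

```python
# Average score per licensee = total points on submitted logs / licensees.
table1 = {  # state: (licensees, logs, points_per_log)
    "A": (3000, 300, 10),
    "B": (1000, 100, 10),
    "C": (200, 20, 10),
}

ratings = {s: logs * ppl / lic for s, (lic, logs, ppl) in table1.items()}
print(ratings)  # every state rates exactly 1.0 - a tie
```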
Let’s now rate a wider set of states with different participation rates and different points per log.
Table 3. Sample results with Average Score per licencee
State   Licensees   Logs   Participation rate (PR)   Points per log   Average score per licensee
A       3000        150    5%                        10               0.5
B       1000        200    20%                       10               2.0
C       1000        220    22%                       10               2.2
D        200         20    10%                       50               5.0

Outcomes:

  • The anomalous results seen in Table 2 have gone.
  • State A: large state, low participation, logs of average value; overall rating 0.5.
  • State B: medium-sized state with higher participation than state A, and a much higher rating of 2.0.
  • State C: like state B but with 10% more logs.  Note that its rating is 10% higher.
  • State D: the exaggerated example of a small state with very high-value logs scores best of all at 5.0.

As can be seen, the results are linear: any increase in a state’s total points produces a proportional increase in its rating.
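The Table 3 figures can be reproduced with a short sketch of the proposed measure (illustrative numbers only, not real contest data):

```python
# Average score per licensee = total points on submitted logs / licensees.
table3 = {  # state: (licensees, logs, points_per_log)
    "A": (3000, 150, 10),
    "B": (1000, 200, 10),
    "C": (1000, 220, 10),
    "D": (200, 20, 50),
}

ratings = {s: logs * ppl / lic for s, (lic, logs, ppl) in table3.items()}
for state, rating in sorted(ratings.items(), key=lambda kv: kv[1]):
    print(state, rating)  # A 0.5, B 2.0, C 2.2, D 5.0
```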

Where to from here?

The state score formula should be changed to the following:

State score = (total points from logs submitted) divided by (number of licensees in the state) [see note below]

I would like to see this analysis considered by contest managers and other decision makers within the WIA.  I believe the state scoring formula is fundamentally flawed because it is based on incorrect mathematics.

It is recommended that this part of the contest rules be corrected at a suitable time, to reflect the results of this analysis. It may be too late for the rules to be changed for 2012, but perhaps this anomaly can be corrected for subsequent years.

This change would make it more feasible for the contest to be won by different states.  I believe the run of wins for VK6 has occurred due to good promotion of the contest in VK6 combined with a severe penalty for the larger states imposed by the erroneous formula discussed here.  I feel sure that with a more appropriate formula, competition would be enlivened and the contest would be a healthier and better supported event.


Note: the number of licensees is adjusted to remove licences that cannot participate in contests, such as repeaters and beacons.  This is already catered for by the current rules, but I did not wish to complicate the description above.

Reference:  Current rules for the RD Contest, rule 14.1 defines the state score calculation. http://www.wia.org.au/members/contests/rdcontest/documents/RDcontest2012Rules.pdf