RD Contest – State Scoring formula

Is the RD contest state score formula correct, and is it fair?
There has been a lot of concern about the RD contest state score formula, and conjecture about how it can be improved.
I have written this to clarify my thoughts and to show others what my analysis of the scoring formula reveals.

A fundamental principle is that the scoring formula has to work in the simplest of all situations, i.e. when all states perform equally.  We know the number of licencees differs between states, so we cannot simply add up their logs and award the contest to the state with the biggest total.  That would hand the result to the largest state, and that's not fair to the smaller ones.

To analyse the situation we have to start with some assumptions.

  • Assume all operators across all states had the same average points per log
  • Assume all states had the same participation rate, i.e. logs submitted divided by licencees

Should we expect that to result in equal scores?  Seems reasonable doesn’t it?

Let’s assume each log contains 10 points (this can be an average, if you wish) and that the participation rate in each state is 10%.
Table 1. Examples of varying state sizes and scores
State   Licencees   Logs   Participation rate (PR)   Points per log   Score (total points * PR)
A       3000        300    10%                       10               3000 * 10% = 300
B       1000        100    10%                       10               1000 * 10% = 100
C        200         20    10%                       10                200 * 10% = 20
We find that the smaller states cannot win if all operators score the same average points per log as the larger state.  Their populations are a fraction of the bigger state’s, so to match its score they need a correspondingly higher average points per log than the larger state does.
This simple example shows that the current scoring formula does not allow for different sized states.
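The effect is easy to check in a few lines of Python. This is a minimal sketch of the current formula as described above (score = total points * participation rate), using the hypothetical figures from table 1:

```python
def current_state_score(licencees, logs, points_per_log):
    """Current formula: (total points across all submitted logs) * participation rate."""
    participation_rate = logs / licencees
    total_points = logs * points_per_log
    return total_points * participation_rate

# Table 1: every state has identical performance (10% participation, 10 points/log).
states = {"A": (3000, 300, 10), "B": (1000, 100, 10), "C": (200, 20, 10)}
for name, (licencees, logs, points) in states.items():
    print(name, current_state_score(licencees, logs, points))
# A scores 300, B scores 100, C scores 20: with equal effort, the biggest state always wins.
```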
In most RD contests, the two larger states have had low participation rates while several mid-sized states have had better rates.  To examine how those participation rates affect the outcomes, we can insert those factors into the model and see how they affect the state scores.
We can go further and examine what a smaller state needs to do in order to score higher than a larger one.

Table 2. Further examples showing how state score is affected by participation rate (PR)

State   Licencees   Logs   Participation rate (PR)   Points per log   Score (total points * PR)
A       3000        150     5%                       10               1500 * 5% = 75
B       1000        100    10%                       10               1000 * 10% = 100
C        200         20    10%                       50               1000 * 10% = 100

Firstly, to explain states A and B of this example: the participation rate of state A has dropped to half its level in the first table, but note that its computed state score has been cut to a quarter of its original value.  It was 300 when its participation rate was 10%, but with half the participation the final score is 75, one quarter of the original value.  We can observe that the state score changes in proportion to the square of the participation rate.  Hold that thought.

This change in state A’s participation has had a dramatic effect on its final score, allowing state B to score better than state A with the same average points per log.

Secondly, for the much smaller state C, I have illustrated how it can achieve the same score as a state five times its size.  Its logs contain five times as many points per log as the mid-sized state’s.  Here we can observe that the state score is proportional to the average points per log for each state, provided the participation rate remains constant.

Why is the state score not proportional to the Participation Rate?

As we know the formula for state score is:

State score = PR * (total points from all submitted logs)

However, the total points from the submitted logs already reflect the participation rate: if the PR were higher or lower, the log total would be correspondingly higher or lower.  Indeed, if the number of logs were halved, the total points would halve.  Putting that another way:

Total points from submitted logs = (total points if every licencee submitted a log) * PR

Substituting this into the state score formula, it can be rewritten as:

State score = (total points if every licencee submitted a log) * PR^2  (i.e. PR squared)

From this it is apparent that the state score is proportional to PR squared.

This is why the state score is affected so dramatically by the participation rate PR.
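The PR-squared behaviour can be verified numerically. This short Python sketch applies the current formula to state A’s hypothetical figures from the tables above, halves its participation rate, and shows the score dropping to a quarter:

```python
def current_state_score(licencees, logs, points_per_log):
    """Current formula as described above: total points * participation rate."""
    participation_rate = logs / licencees
    return logs * points_per_log * participation_rate

# State A: 3000 licencees, 10 points per log.
full = current_state_score(3000, 300, 10)  # 10% participation
half = current_state_score(3000, 150, 10)  #  5% participation

print(full)         # 300.0
print(half)         # 75.0
print(full / half)  # 4.0 -- halving the PR quarters the score
```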

How can the state score formula be improved?

Clearly the formula currently does not compensate for the different sizes of each state.
Teams of differing sizes can be compared only by normalising results to the average effort of each team member.  In the case of the states competing for the RD trophy, this translates to the average number of points earned by each state licencee.
i.e. Average score per licencee = (total points on logs) divided by (total licencees in that state).
With that formula, going back to table 1 above, the average score per licencee is 1.0 in every state.  A tie.
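As a quick check, here is that tie computed in Python (a sketch; the state figures are the hypothetical ones from table 1):

```python
def avg_score_per_licencee(licencees, logs, points_per_log):
    """Proposed measure: total points on submitted logs divided by total licencees."""
    return logs * points_per_log / licencees

# Table 1: every state has 10% participation and 10 points per log.
print(avg_score_per_licencee(3000, 300, 10))  # state A -> 1.0
print(avg_score_per_licencee(1000, 100, 10))  # state B -> 1.0
print(avg_score_per_licencee(200, 20, 10))    # state C -> 1.0
```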
And since all the states produced the same average effort per licencee, a tie is exactly the right result.  This measure works well for the case where all states perform equally.  How does it work if some states perform differently?
Let’s apply it to a scenario like table 2, where the states have different participation rates and different points per log.
Table 3. Sample results with Average Score per licencee
State   Licencees   Logs   Participation rate (PR)   Points per log   Average score per licencee
A       3000        150     5%                       10               0.5
B       1000        200    20%                       10               2.0
C       1000        220    22%                       10               2.2
D        200         20    10%                       50               5.0

Outcomes:

  • The anomalous results shown in Table 2 have gone.
  • State A: large state, low participation, logs submitted are average value, overall rating 0.5.
  • State B: medium size state with higher participation than state A, and a much higher score – 2.0.
  • State C: like state B but with 10% more logs.  Note that the score is 10% higher.
  • State D: the exaggerated example of a small state with very high average logs, scores best of all at 5.0.

As can be seen, the results are linear: an increase in participation or in points per log produces a proportional increase in state score.
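The table 3 figures can be reproduced with the proposed measure in a few lines of Python (a sketch; all figures are the hypothetical ones from the table):

```python
def avg_score_per_licencee(licencees, logs, points_per_log):
    """Proposed formula: total points on submitted logs divided by total licencees."""
    return logs * points_per_log / licencees

# Hypothetical figures from table 3: (licencees, logs, points per log).
table3 = {"A": (3000, 150, 10), "B": (1000, 200, 10),
          "C": (1000, 220, 10), "D": (200, 20, 50)}

for state, figures in table3.items():
    print(state, avg_score_per_licencee(*figures))
# A 0.5, B 2.0, C 2.2, D 5.0 -- score now scales linearly with effort.
```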

Where to from here?

The state score formula should be changed to the following:

State score = (total points from logs submitted) divided by (number of licencees in the state) [see note below]

I would like to see this analysis considered by contest managers and other decision makers within the WIA.  I believe the state scoring formula is fundamentally flawed because it is based on incorrect mathematics.

It is recommended that this part of the contest rules be corrected at a suitable time, to reflect the results of this analysis. It may be too late for the rules to be changed for 2012, but perhaps this anomaly can be corrected for subsequent years.

This change would make it more feasible for the contest to be won by different states.  I believe the run of wins for VK6 has occurred due to good promotion of the contest in VK6 combined with a severe penalty for the larger states imposed by the erroneous formula discussed here.  I feel sure that with a more appropriate formula, competition would be enlivened and the contest would be a healthier and better supported event.


Note: the number of licencees is adjusted to remove licences that cannot participate in contests, such as repeaters and beacons.  This is already catered for by the current rules, but I did not wish to complicate the description above.

Reference:  Current rules for the RD Contest, rule 14.1 defines the state score calculation. http://www.wia.org.au/members/contests/rdcontest/documents/RDcontest2012Rules.pdf

8 thoughts on “RD Contest – State Scoring formula”

  1. Yep. Well done. It is simple and fair and it encourages participation.

    You can’t ask for more than that.

    Thanks and 73,
    Alek. VK6APK
    RD Contest Manager for 11 years in the dim dark past.

  2. Interesting breakdown, but as someone from a state with no neighbours, I note that your discussion does not appear to take into account the situation where there are many states close together.

    Out here in the West we have to make do with what we have, and most of the contacts are made within Perth on 2m and 70cm. In the more populous states those same bands can be used for interstate contacts, and HF can be used with much less effort than in WA, where intrastate HF contacts are not counted, even though the same distance, north to south across WA, covers several states in the East.

  3. Hi VK6FLAB (sorry but I don’t have your name)
    You make some very good points about what contacts are valid.

    However that is a different discussion. The focus of my analysis is on how the state scores are calculated, once the valid contacts have been made. If we can sort out the state scoring formula we will be in a better position to fine tune other parts of the rules.
    73, Andrew

  4. Excellent scoring method. Makes sense. I think the new rules allow intrastate contacts as well. (Paragraph 9.7 in the new rules)

  5. Andrew,

    My point is that the two issues, scoring and, as you put it, valid contacts, are not separate: both end up in the state ranking. You can fix the scoring all you like, but it assumes that all states are created equal, and clearly they’re not.

    Let me try to say this in another way.

    The argument you make relates to participation rate per state and attempts to even out between states with different population sizes. That’s fine. But the population size is not the only metric; the population density affects the individual state ranking just as much, if not more.

    What I’m trying to say is that a state like VK6 where the population density is extremely low with basically one pocket of people (Perth), has no incentive to make HF contacts because getting from Perth to Broome is as hard as getting from Perth to Sydney.

    So a participant in Sydney can make contacts to Adelaide, Brisbane, Hobart, Canberra and Melbourne, and get inter-state points for their effort, where a person in Perth, working the same distance gets to Kalgoorlie, Albany and Geraldton, all within the state, none of which are scored in the same way as the Sydney participant.

    So, your discussion deals with how many people are turning on their radio, compared to how many radios there are in the state, but it does not take into account the disparity between a VK2 and a VK6 operator.

    The two issues cannot be dealt with separately, they go hand in hand and should both affect the state ranking formula.

    That was my point.

    73, Onno

  6. Hi Onno,

    My aim was not to define a complete scoring system, it was just to examine the formula used in rule 14.1 and to suggest improvements from a mathematical viewpoint, assuming the goal of the rule is to fairly compare the point scores earned by each state and decide on a winning state.

    The other rules define the legal contacts and what points they earn. If the point scores earned for the contacts made were roughly proportional to the size of the state when a similar proportion of the state actually operated, then that part of the rules could be considered to be working correctly. This could be the subject of a separate simulation or analysis to see if it is indeed operating in that way.

    For much more fascinating reading and research on this topic I refer you to the scoring table used in the 60s and 70s for this contest. I knew the VK2 table intimately. That table attempted to award points on the basis of difficulty, distance, rarity, probability, population, activity, fairness, wind and tide etc. It was controversial and was eventually dropped. Then there was the system introduced in the early 80s. That attempted to set a long term participation rate for each state and rewarded the biggest improver with the win. That system was not without its critics either. The RD contest has a long history of complex rules being changed periodically to try to find the elusive perfect system.

    Andrew
