As the year draws to a close, I have been reflecting on some of the highlights.
I went skydiving, discovered foraging, did some fun escape rooms, and started a new job at Monash University. Also some notable recent successes: a Service Award from the Statistical Society of Australia, and a paper in Nature (with long-time colleague Stephen Leslie, our former PhD student Georgia Tsambos, and several other colleagues).
However, I’d like to highlight one piece of work in particular: a new statistical method for auditing instant-runoff voting (IRV) elections.
A challenge with such elections is the very large number of ways that voters can cast their vote. With 10 candidates there are 10! ≈ 3.6 million possible ways to rank them in order. With 12 candidates this goes up to about 479 million, and it increases rapidly as you add more candidates. The number of ways that the election count could evolve (as a sequence of candidate eliminations) is correspondingly very large.
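The growth described above is just the factorial function: each complete ranking of n candidates is a permutation, so there are n! of them. A quick sketch to see how fast this explodes (the candidate counts here are illustrative):

```python
from math import factorial

# Each complete ranking of n candidates is a permutation of the
# candidate list, so the number of possible rankings is n!.
for n in (10, 12, 15):
    print(f"{n} candidates: {factorial(n):,} possible rankings")

# 10 candidates:  3,628,800 possible rankings
# 12 candidates:  479,001,600 possible rankings
# 15 candidates:  1,307,674,368,000 possible rankings
```

Note that in practice the space can be even larger, since many jurisdictions also allow ballots that rank only a subset of the candidates.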
This makes it tricky to do any statistical inference from a sample of the ballots. The best existing method relied on having access to a digitised version of all of the ballots in the election (for example, if they were all digitally scanned) in order to pick an optimal set of hypotheses to test. Our new method, which we have called AWAIRE, uses a special technique to ‘learn’ such a set from the sample alone. That means we do not require a digitised copy of the ballots, allowing our method to be used for more elections. Moreover, the statistical ‘price’ we pay for this generality (for doing the learning) is relatively small.
We presented our method in October at the E-Vote-ID 2023 conference and received the Best Paper Award in the technical track.