# To decide between two candidates, long live the simple majority!

In his carte blanche column, the mathematician Étienne Ghys reviews the different ways, from the fairest to the least fair, of electing a representative from among two competitors.

https://www.lemonde.fr/sciences/article/2020/11/18/pour-departager-deux-candidats-vive-la-majorite-simple_6060148_1650684.html

By Étienne Ghys (perpetual secretary of the Academy of Sciences, director of research (CNRS) at ENS Lyon)

Carte blanche. Can mathematics shed some light on the American election soap opera? Let’s imagine a population voting for two candidates and assume that each voter flips a coin to choose one or the other. At the end of the ballot, the votes are counted and the candidate with the most votes is elected. Now suppose that, during the counting, the scrutineers make a few mistakes (or commit a little fraud), say by misreading one ballot in 10,000. What is the likelihood that these small errors distort the overall result, so that the other candidate is elected? It turns out that this probability is of the order of 6 in 1,000 (for the curious, it is 2/π times the square root of 1/10,000, about 0.0064). Is this an acceptable risk in a democracy?
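Readers who want to check this figure can do so numerically. The following Monte Carlo sketch (ours, not the article’s; the electorate size and number of trials are arbitrary choices) draws many elections with coin-flip voters, misreads each ballot independently with probability 1/10,000, and compares the observed flip rate with the formula (2/π)√(1/10,000) ≈ 0.0064.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1_000_001      # electorate size (odd, so a tie is impossible)
eps = 1e-4         # probability of misreading a single ballot
trials = 200_000   # number of simulated elections

# True counts: each voter picks candidate A or B by flipping a fair coin.
votes_a = rng.binomial(n, 0.5, size=trials)
votes_b = n - votes_a

# Counting noise: every ballot is independently misread with probability eps.
a_read_as_b = rng.binomial(votes_a, eps)
b_read_as_a = rng.binomial(votes_b, eps)
counted_a = votes_a - a_read_as_b + b_read_as_a

# How often does the announced winner differ from the true winner?
flips = (votes_a > votes_b) != (counted_a > n - counted_a)
print(f"simulated flip probability: {flips.mean():.4f}")
print(f"(2/pi) * sqrt(eps)        : {2 / np.pi * np.sqrt(eps):.4f}")
```

With these parameters the simulated rate should land close to 0.0064, the asymptotic value for a large electorate.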

American elections are two-tiered. Each state elects its representatives by majority, and these in turn elect the president. Assuming the same reading error rate of one in 10,000 (which is reasonable when looking at American ballots), what is the probability of distorting the final result? The existence of this second level makes the probability much worse: about one election in 20 would be distorted! This is far too much.
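Where does “one in 20” come from? A plausible back-of-the-envelope reading (our gloss; the article does not spell it out) is that the square-root law is applied twice: each state’s majority flips with probability about (2/π)√(1/10,000) ≈ 0.0064, and, treating the states themselves as independent coin flips, the top-level majority then flips with probability about (2/π)√0.0064 ≈ 0.05.

```python
import math

eps = 1e-4  # per-ballot misreading rate
# First level: a state's simple majority flips with prob ~ (2/pi) * sqrt(eps).
state_flip = 2 / math.pi * math.sqrt(eps)
# Second level: the majority over the states flips by the same rule, with the
# states' own flip probability now playing the role of the noise.
national_flip = 2 / math.pi * math.sqrt(state_flip)
print(f"state flip probability   : {state_flip:.4f}")    # ~ 0.0064
print(f"national flip probability: {national_flip:.4f}") # ~ 0.05, one in 20
```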

## Noise sensitivity

Of course, all of this rests on very unrealistic assumptions and in no way substantiates Donald Trump’s allegations of fraud! Assuming that voters flip coins is obviously meaningless, even if one can be amazed by the near-tie in Georgia, for example. Still, it illustrates a phenomenon highlighted by mathematicians some twenty years ago: the “noise sensitivity” of various decision-making processes, a question that goes far beyond elections and concerns computer science, combinatorics, statistical physics and the social sciences. When a large number of “agents”, who may be human beings or neurons for example, hold “opinions”, which processes allow a global decision to be reached in a stable manner? Stability means that we want the decision to be as insensitive as possible to noise, i.e. to the small errors that we cannot control.
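The article stays informal, but the notion has a standard formalization in the Boolean-functions literature (see, e.g., Ryan O’Donnell’s book Analysis of Boolean Functions; the notation below is ours, not the article’s). A decision process over n binary opinions is a function f, and its stability under an error rate ε is measured as follows:

```latex
% A decision rule is a function f : \{-1,1\}^n \to \{-1,1\}.
% x is a uniformly random vector of opinions; y is x with each coordinate
% flipped independently with probability \varepsilon (the "noise").
\[
  \mathrm{Stab}_{1-2\varepsilon}(f) \;=\; \mathbb{E}\big[f(x)\,f(y)\big],
  \qquad
  \Pr\big[f(y) \neq f(x)\big] \;=\; \frac{1 - \mathrm{Stab}_{1-2\varepsilon}(f)}{2}.
\]
```

For simple majority over n voters, this flip probability tends to arccos(1 − 2ε)/π ≈ (2/π)√ε as n grows, which is where the 6-in-1,000 figure above comes from.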

One can imagine many electoral processes. For example, each neighborhood could elect a representative; these representatives would then elect a city representative, who would take part in electing a representative for the canton, then for the department, and so on. It would be a sort of sports tournament in successive stages, a bit like the American elections but with many more levels. This method turns out to be extremely sensitive to noise, and it must absolutely be avoided: the slightest proportion of counting errors would lead to a very high probability of getting the final result wrong. That is unacceptable for a vote, but it is part of the charm of sports tournaments: it is not always the best who wins, and that’s just as well.
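To see just how badly a many-level tournament behaves, one can model it as a recursive majority-of-three (a toy model of ours, not the article’s): three opinions elect a representative, three representatives elect a super-representative, and so on for k levels. Conditioning on the input pattern of a single 3-way majority gives an exact recursion for the flip probability, which can simply be iterated:

```python
def flip_after_majority3(p):
    """Probability that a 3-way majority flips, given that each of its three
    inputs is an independent fair coin that flips with probability p."""
    all_same = 3 * p**2 - 2 * p**3        # inputs +++ or ---: need >= 2 flips
    two_one = p**2 + 2 * p * (1 - p)**2   # inputs split 2 against 1
    return 0.25 * all_same + 0.75 * two_one

p = 1e-4  # error rate on the individual ballots, at the very bottom
for level in range(1, 31):
    p = flip_after_majority3(p)
    if level % 5 == 0:
        print(f"after {level:2d} levels: flip probability = {p:.4f}")
# A small error rate is multiplied by ~1.5 at every level, so the flip
# probability climbs toward 1/2: the final result becomes a coin toss.
```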

What, then, is the best method, the most stable one? The answer is a bit distressing and shows that the question is badly posed: it suffices to ask a dictator to decide alone. This “method” is indeed very stable because, to change the result, an error is needed on the only ballot that counts, which happens once in 10,000 times. The question must therefore be rephrased by restricting to equitable methods, those that give the same power to every voter. About ten years ago, three mathematicians proved a difficult theorem in this context, whose conclusion is ultimately just common sense: to decide between two candidates, simple majority voting is the most stable of all fair methods. Long live the majority!
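The theorem alluded to is known in the literature as “Majority is Stablest”, proved by Elchanan Mossel, Ryan O’Donnell and Krzysztof Oleszkiewicz (Annals of Mathematics, 2010). Informally, and in the notation introduced above (“fair” is captured by requiring every voter’s influence to be small):

```latex
% "Majority is Stablest" (informal statement; our notation).
% For every noise rate \varepsilon \in (0, 1/2) and every \tau > 0 there is a
% \delta > 0 such that: if f : \{-1,1\}^n \to \{-1,1\} is balanced
% (\mathbb{E}[f] = 0) and no voter has influence greater than \delta, then
\[
  \Pr\big[f(y) \neq f(x)\big]
    \;\ge\; \frac{\arccos(1-2\varepsilon)}{\pi} \;-\; \tau
    \;\approx\; \frac{2}{\pi}\sqrt{\varepsilon},
\]
% a bound that simple majority attains as n \to \infty.
```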
