Fair and accurate elections, statistically speaking

Electoral College map of the 2000 election, one of the most disputed in U.S. history. A uniquely American institution, the Electoral College consists of popularly elected representatives apportioned to each state according to the size of its congressional delegation. It is the electors who formally elect the President of the United States. According to Berkeley statistician Elchanan Mossel, this system of electing the president is significantly more likely to result in an erroneous election outcome than the simple majority voting system.

The political controversy surrounding the Electoral College -- the institution whereby we elect the president of the United States -- is as old as the republic. In spite of recent contentious elections that raised the controversy to new heights, the debate is unlikely to reach a resolution given the compelling political considerations on both sides. But rarely if ever does the public debate on this subject take into account objective, mathematical considerations.

UC Berkeley’s Elchanan Mossel, an associate professor in the departments of Statistics and Computer Science and an expert in probability theory, believes there is an important contribution statisticians can make to the debate. He is not alone. Statisticians have subjected voting-related issues to complex mathematical calculations at least since the 18th century, when the Marquis de Condorcet, a French philosopher and mathematician, began using probability theory in the context of voting.

Mossel’s analyses pit the Electoral College system against the simple majority-voting system in an attempt to test the strength of our electoral system in one key aspect: how prone to error is it and, in turn, what are the odds that the outcome of an election will actually be flipped by such random error?

“There are many ways of voting,” Mossel says. “You can vote by majority vote, Electoral College, weighted voting, even dictatorship. The statistical question is, ‘Which voting method is most robust to errors?’”

Mossel’s assumption is that any voting model is intrinsically subject to a finite error, meaning that the votes cast by a small number of voters in each election will end up being recorded differently from what those voters intended. This may be due to human error, hanging chads, or voting machines that flip some votes randomly. In a landslide election such unfortunate occurrences make no statistical difference. But in a close election, the likes of which we have often had in recent election cycles, such errors may wreak havoc with the result, sometimes with and sometimes even without our knowledge.
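As a toy illustration of this error model (a minimal sketch, not code from Mossel’s papers; the electorate size, the ±1 vote encoding, and the noise rate are all arbitrary choices), each recorded vote independently flips with some small probability:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def record_with_noise(intended, eps):
    # Flip each intended vote (+1 or -1) independently with probability eps,
    # modeling human error, hanging chads, or a faulty voting machine.
    flips = rng.random(intended.shape) < eps
    return np.where(flips, -intended, intended)

# 10,001 voters (odd, so a majority always exists), each vote
# misrecorded with probability 1 in 10,000.
intended = rng.choice([-1, 1], size=10_001)
recorded = record_with_noise(intended, eps=1e-4)
```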

“Statistically, the most robust system in the world is a dictatorship,” Mossel says, not without a measure of amusement. “Under such a system, the results never depend on how people vote.”

But since most of us would prefer an alternative to dictatorship in spite of the system’s robustness, the question then becomes which voting system in a democracy is most likely to produce accurate results. To that end Mossel compares all of the possible voting systems, including the two voting methods we are most familiar with: the simple majority vote and the Electoral College system, both of which offer voters two alternatives to pick from.

Before running his analysis, Mossel first sets out to test the model to ensure it satisfies some basic statistical requirements for fair elections. One such mathematical criterion corresponds to the notion of “fairness among all the alternatives,” meaning that the model must ensure that all alternatives (i.e., candidates) receive the same treatment.

“Let’s say that, under a given system, some people voted for Candidate A, some people voted for Candidate B, and the winner was Candidate A. Now we replace the people who voted for B with those who voted for A and vice versa, and we want the result to flip, too. It’s a natural notion of fairness that is also common in economics. The results should not depend on the names of the candidates.”
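In the two-candidate setting this symmetry takes a compact form: if votes are encoded as +1 (Candidate A) and -1 (Candidate B), swapping the candidates simply negates every vote, so a fair rule f must satisfy f(-x) = -f(x). A minimal sketch of the check for simple majority (illustrative code, with an arbitrary electorate size):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def majority(votes):
    # +1 if Candidate A leads, -1 if Candidate B leads (odd n rules out ties)
    return 1 if votes.sum() > 0 else -1

votes = rng.choice([-1, 1], size=101)

# Swapping the two candidates' supporters is the same as negating every vote,
# so fairness among the alternatives requires the winner to flip as well.
assert majority(-votes) == -majority(votes)
```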

Another way to factor in democracy is transitivity, which assumes that every two people play the same role mathematically and no one person has a greater chance of changing the outcome than anyone else. One example of transitivity, Mossel says, is to imagine people seated in a circle. Then he rotates everyone (or every person’s opinion) one seat to the left. “We want the voting function to be transitive, meaning that the result is the same if we rotate people.”
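The rotation picture translates directly into code: shifting every ballot one seat around the circle permutes who holds which opinion but changes no opinion, so a transitive rule such as majority must return the same winner (again an illustrative sketch, not code from the papers):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def majority(votes):
    # +1 if the first candidate leads, -1 otherwise (odd n rules out ties)
    return 1 if votes.sum() > 0 else -1

votes = rng.choice([-1, 1], size=101)

# Rotate every voter (or every person's opinion) one seat to the left; the
# multiset of votes is unchanged, so the result must be the same.
assert majority(np.roll(votes, 1)) == majority(votes)
```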

Once criteria for democracy are factored in, the problem of finding the most robust voting system becomes a problem of mathematical analysis. The reasoning is not simple. Mathematicians do not rely on standard Euclidean geometry to solve social problems of such complexity, which makes voting analysis difficult to explain on national television. Instead they apply what’s known as Gaussian geometry, or the geometry of spheres in very high dimensions. This methodology is employed when studying the aggregate behavior of large numbers of people.

In the context of the robustness of voting, a key role is played by geometric isoperimetric theorems, which study the relationships between volumes and surface areas. (“Isoperimetric” means having the same perimeter.) To make his point, Mossel reduces the highly complex problem to a very simple and amusing hypothetical question.

“We have the cold war all over again,” he smiles. “The U.S. and Russia decide to partition the world exactly in half, 50-50 each. The two states must have the exact same area, including the oceans. And they try to minimize the border between the two states so they need the fewest number of border guards.”

The optimal solution to this problem is obvious: split the world along the line of the equator.

“The mathematics we developed for the robustness problem in some sense corresponds to the partitioning of very high-dimensional spheres.”

After running his analysis, Mossel says, the answer is unequivocal. It also deals a mathematical mortal blow to the American system of electing a president.

“Applying isoperimetric theory tells us the majority voting method is optimal. It is the most robust function.”

The difference between this common voting method and the Electoral College system is in fact stunning. The first person to determine a way to calculate the error for these voting methods was statistician W. F. Sheppard back in 1899. He determined that majority voting takes a noise rate of x to an error rate that is approximately the square root of x. So under majority voting, if the voting machine flips votes with a probability of 1 in 10,000, the chance that the result of the election will be flipped is roughly the square root of that probability, or 1 in 100.
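In symbols, Sheppard’s classical formula for correlated Gaussians gives the limiting flip probability for a large electorate; the asymptotic form below is a standard statement of the result, and the article’s “1 in 100” simply drops the constant factor:

$$
\Pr[\text{majority flips}] \;\longrightarrow\; \frac{\arccos(1-2\varepsilon)}{\pi} \;\approx\; \frac{2}{\pi}\sqrt{\varepsilon},
\qquad
\varepsilon = \tfrac{1}{10{,}000} \;\Rightarrow\; \frac{2}{\pi}\sqrt{\varepsilon} \approx \frac{1}{157},
$$

which is on the order of 1 in 100.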

“With Electoral College voting, in essence you’re doing majority twice,” Mossel says. “First you do majority in each state and then you do the majority of the majority, so you take the square root of the square root. So you take square root of 1/10,000 once and get 1/100, and then you take square root again and get 1/10.”
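The gap is easy to see in a simulation. The sketch below (illustrative code; the 51 equal-size “states,” the electorate sizes, and the 1% noise rate are all arbitrary choices, the noise rate picked large enough that flips occur often within a modest number of trials) estimates the flip probability for both rules on the same total electorate:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def noisy(votes, eps):
    # Misrecord each +1/-1 vote independently with probability eps.
    return np.where(rng.random(votes.shape) < eps, -votes, votes)

def simple_majority(votes):
    return 1 if votes.sum() > 0 else -1

def electoral(votes):
    # votes has shape (states, voters_per_state): majority within each state,
    # then majority of the state winners -- "majority of the majority".
    state_winners = np.where(votes.sum(axis=1) > 0, 1, -1)
    return 1 if state_winners.sum() > 0 else -1

def flip_rate(rule, shape, eps=0.01, trials=10_000):
    flips = 0
    for _ in range(trials):
        votes = rng.choice([-1, 1], size=shape)
        flips += rule(votes) != rule(noisy(votes, eps))
    return flips / trials

# Same electorate of 51 * 201 = 10,251 voters under both rules.
print("simple majority:       ", flip_rate(simple_majority, (10_251,)))  # ~ sqrt(eps) scale
print("majority of majorities:", flip_rate(electoral, (51, 201)))        # ~ eps ** 0.25 scale
```

With these illustrative numbers the two-tier rate should come out several times larger than the single-tier rate, echoing the square-root-of-a-square-root effect Mossel describes.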

The Electoral College appears to fail miserably based on the robustness-to-error criterion.

“We don’t have the best system,” Mossel says.

Yet even in the face of his own analysis he remains highly philosophical about how meaningful this apparently whopping difference between the two systems really is. “Philosophically it may not be morally relevant,” he says. “If the election is so close anyway and people don’t have a strong preference, maybe it doesn’t really matter?”

But to the extent that the democratic ideal is for the outcome to reflect the intent of the voter as much as humanly possible, then the difference in Mossel’s robustness-to-error test could give political pundits food for thought.

Voting theory is only one example of Mossel’s vast work applying probability theory to a wide range of both scientific and social problems. These range from theoretical computer science and evolutionary biology to game theory and social choice, the latter of which includes topics such as voting and economic problems.


ScienceMatters@Berkeley is published online by the College of Letters and Science at the University of California, Berkeley. The mission of ScienceMatters@Berkeley is to showcase the exciting scientific research underway in the College of Letters and Science.

More information: Mossel’s statistical analyses can be found in the following papers: "Maximally Stable Gaussian Partitions with Discrete Applications," written in collaboration with Marcus Isaksson, and "Noise stability of functions with low influences: invariance and optimality," written with Ryan O’Donnell and Krzysztof Oleszkiewicz.

Provided by University of California, Berkeley

Citation: Fair and accurate elections, statistically speaking (2011, February 18) retrieved 19 April 2024 from https://phys.org/news/2011-02-fair-accurate-elections-statistically.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
