Theory of Voting paper in two parts again this week, because it's a long one - but also my penultimate (you'll be glad to hear)
What System of Apportionment of US Congressional Districts to States is the Best?
Balinski and Young point out “The ultimate choice in any nation must, of course, depend upon its political, social, and legal heritage” (p.93[1]). This calls into question whether there is any one best system. Further, there is a difference between arguing in the abstract for one principle or method of apportionment, and arguing for what should be applied in a particular context, e.g. the US Congress – which is more constrained by cultural possibilities, and subject to particular details (e.g. representing minority groups). Nonetheless, my discussion will be rather abstract, though focused on the US.
First I outline six historical procedures, and compare their merits, following Balinski and Young. I question, however, some of these authors’ normative claims, in particular about fairness and lotteries. I go on to discuss a radical alternative, proposed by Andrew Rehfeld[2], for random constituencies. I criticise many details of his plan, not least his neglect of equality, but think there is something to be said for randomness, and I use this against Balinski and Young’s rapid dismissal of ‘roulette methods’.
A). Balinski and Young
Article 1, section 2 of the Constitution provides that “Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers” (quoted p.5). This is, however, totally vague, in that it doesn’t specify how representatives are to be apportioned according to numbers. There are infinitely many logical possibilities (pp.63-5); we shall see some of the important methods historically proposed, but for now note that the clause doesn’t even logically require direct proportion between representatives and population (though this is obviously what’s normatively desirable) – some form of inverse proportion would also be ‘according to numbers’.
Six historical methods:
(1) Jefferson (p.18) – fix a divisor (e.g. one seat per 34,000 people) then find the resulting number of seats for each state. E.g. a state with 76,000 people would have two. All fractions are rounded down.
(2) Hamilton (p.17) – fix the number of seats and then find a quotient (US population divided by number of seats). Give each state the whole number part of their quotient. Distribute remaining seats to those with the largest fractions (so if one state should have 1.88 seats and another 6.27, the actual distribution is 2 and 6).
(3) Adams (p.27) – fix a divisor and give each state their quotient, with fractions rounded up – thus 1.02 becomes two and 5.98 six.
(4) Webster (p.32) – find a divisor such that the quotients, rounded to the nearest whole numbers, add up to the required total.
(5) Dean (p.29) – make each state’s average district size (population divided by seats) as near to the divisor as possible, e.g. in 1830 Massachusetts had a population of 610,408, which divided by 13 was 46,954 and divided by 12 was 50,867 – thus they got 13 seats, as 46,954 was closer to the ‘target’ divisor of 47,700.
(6) Hill(-Huntington) (p.47) – make the ratio of representatives to population as near to uniform across states as possible, with divergences measured in relative percentage terms (unlike Dean’s absolute differences).
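The divisor methods above – Jefferson, Adams and Webster – differ only in how fractions are rounded, which makes them easy to sketch. The following is a minimal illustration in Python (the function names, and the toy populations in the second example, are mine, not Balinski and Young’s); the search assumes a divisor achieving the exact total exists, which ties can in principle prevent.

```python
import math

def apportion(populations, divisor, rounding):
    """Seats per state for a fixed divisor and a rounding rule:
    math.floor gives Jefferson, math.ceil gives Adams, and rounding
    halves upward gives Webster."""
    return [int(rounding(p / divisor)) for p in populations]

# Jefferson's example from the text: one seat per 34,000 people, so a
# state of 76,000 (quotient 2.24) is rounded down to two seats.
assert apportion([76_000], 34_000, math.floor) == [2]

def find_divisor(populations, seats, rounding):
    """Binary-search for a divisor at which the rounded quotients sum
    to a fixed house size (how a divisor method is applied when the
    total number of seats is decided in advance).
    NB: if no divisor yields the exact total (a tie), this loops."""
    lo, hi = 1e-9, float(sum(populations))
    while True:
        d = (lo + hi) / 2
        total = sum(apportion(populations, d, rounding))
        if total == seats:
            return d
        if total > seats:      # divisor too small: too many seats
            lo = d
        else:                  # divisor too large: too few seats
            hi = d

def webster(x):
    """Round to the nearest whole number, halves upward."""
    return math.floor(x + 0.5)

# Toy populations: with 10 seats, both Jefferson and Webster happen
# to agree here.
d = find_divisor([7, 5, 2], 10, webster)
print(apportion([7, 5, 2], d, webster))
```

Note that only the rounding rule changes between methods; everything else – the search for a workable divisor included – is shared.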
Balinski and Young reject Hamilton’s method in favour of the divisor methods because of what’s come to be known as the ‘Alabama paradox’ (pp.39-42, c.f. pp.68-9). In 1880, it was noticed that the Hamilton method would give Alabama eight seats out of 299, but only seven out of 300. This is because Alabama’s quota out of 299 seats was 7.646, which was rounded up to eight (being one of the largest remainders). An extra seat increased every state’s quota by about 0.33%, but because this was a larger absolute increase for larger states, Illinois and Texas ‘leapfrogged’ Alabama in the queue for extra seats – they now had fractions rounded up, while Alabama’s was rounded down. This isn’t an isolated incident – in 1900 it was particularly worrying to find that Colorado would get two seats out of 357, but three out of either 356 or 358, and that as the total changed from 350 to 400 Maine’s delegation moved back and forth between three and four (pp.40-41).
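The paradox is easy to reproduce. Here is a sketch of Hamilton’s method in Python with toy populations chosen to trigger it (these are illustrative numbers of mine, not the 1880 census figures): the small state loses a seat purely because the house grows.

```python
import math
from fractions import Fraction

def hamilton(populations, seats):
    """Hamilton's method: give each state the whole part of its exact
    quota, then hand the leftover seats to the states with the largest
    fractional remainders."""
    total = sum(populations)
    quotas = [Fraction(p * seats, total) for p in populations]
    alloc = [math.floor(q) for q in quotas]
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in by_remainder[:seats - sum(alloc)]:
        alloc[i] += 1
    return alloc

# With populations 6, 6 and 2, the small state gets two seats out of a
# house of 10, but only one out of a house of 11: the extra seat
# inflates the big states' remainders past the small state's.
print(hamilton([6, 6, 2], 10))   # [4, 4, 2]
print(hamilton([6, 6, 2], 11))   # [5, 5, 1]
```

Exact `Fraction` arithmetic is used so that the comparison of remainders isn’t muddied by floating-point error.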
One consequence of this is that state A can grow in size relative to state B, and yet still lose a seat to B. In 1900, for example, Virginia’s share of 386 seats was 9.599 (rounded up to ten), while Maine’s share had been 3.595 (rounded down to three). Virginia’s population was growing by 1.07% p.a. and Maine’s by 0.67% p.a., both less than the national average (2.02% p.a.). Thus by 1901 their shares were 9.509 and 3.548 – Virginia had declined more in absolute terms because it was larger, and so lost a seat! Another perversity is that the admission of Oklahoma in 1907, which added five new seats to the legislature, resulted in New York transferring a seat to Maine (p.43).
I think Balinski and Young may be too troubled by these paradoxes. They claim:
“Intrinsic to the idea of fair division is that it stand up under comparisons. If an estate is divided up fairly among heirs, then there should be no reason for them to want to trade afterward[3]… [Further] any part of a fair division should be fair” (p.44)
If we take fairness to mean equality, then it is obviously a relational ideal. One can’t be equal without being equal compared to someone. There are, however, other possible meanings. If fairness were a matter of satisfying desert then, assuming we can measure desert without comparison, we can also judge fairness without comparison. If I deserve ten, and have ten, then my absolute claim is satisfied[4]. Whatever we think of the first claim, the second – that parts of a fair whole must be fair – seems even more controversial. Suppose we divide £30 between three people, £10 each. Is it fair that the first person in isolation gets £10? It seems we can only call the part fair in virtue of the whole. Perhaps that’s just sophistic, but consider a further case of compensating inequalities. Suppose we are dividing food between two people – we have one piece of meat and three potatoes. A fair division may well be to give one person the meat and one potato, the other two potatoes[5]. Can we say that the potato part of the division (1,2) is fair, though?
In any case, Balinski and Young go on to conclude “No apportionment method is reasonable that gives some state fewer seats when there are more seats to go around” (p.42). Presumably, this is meant with a ceteris paribus condition implied – for it seems fine to give one state fewer seats if it has shrunk in absolute or relative population. The reason the Alabama paradox is problematic is that representation varies (up and down) simply with the total size of the House, nothing else changing. It is true that these paradoxes are worrying, but it’s also true that deviation from quota is worrying. Although divisor methods are required to avoid the population paradoxes, they do not (unlike the Hamilton method) guarantee that each state’s representation is within +/-1 of its quota (i.e. the quota rounded up or down) (p.79).
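The ‘staying within quota’ property is easy to state precisely, and checking it is mechanical. A minimal Python sketch (the function name and the toy allocations are mine, for illustration):

```python
import math
from fractions import Fraction

def within_quota(populations, seats, allocation):
    """True iff every state's seat count equals its exact quota rounded
    either down or up - the guarantee Hamilton's method provides but
    divisor methods can violate."""
    total = sum(populations)
    for pop, got in zip(populations, allocation):
        lower = math.floor(Fraction(pop * seats, total))
        if not lower <= got <= lower + 1:
            return False
    return True

# Quotas for populations [6, 6, 2] and 10 seats are 4.29, 4.29, 1.43:
# [4, 4, 2] stays within quota, while [6, 4, 0] violates it twice.
print(within_quota([6, 6, 2], 10, [4, 4, 2]))   # True
print(within_quota([6, 6, 2], 10, [6, 4, 0]))   # False
```

The trade-off in the text is then between methods that pass this check by construction (Hamilton) and methods that avoid the population paradoxes (the divisor methods) – no method does both.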
Balinski and Young advocate the Webster method, because it avoids the population paradoxes, is unlikely to deviate from quota (about one time in 1,600 (p.81)), and (when it does so) shows no bias in favour of either large or small states. Their table 9.2 (p.77) illustrates expected biases:
Method      Chance of favouring small   Expected bias to small
Adams       100%                        28.2%
Dean        93.9%                       7.0%
Hill        78.5%                       3.5%
Webster     50%                         0.0%
Jefferson   0%                          -20.8%
The Webster method, they argue, is intuitive because it treats fractions in a natural manner – rounding up those over 0.5 and down those under. Further, this is unbiased, as every state is as likely to have a remaining fraction over 0.5 as under. Being more generous with rounding up favours small states; being less generous favours large ones (p.76).
There is, however, another option that they admit is unbiased (p.74) yet reject, and that is an option involving some form of lottery[6]. As they spell this out:
“[C]onstruct a roulette wheel divided into fifty slots, one for each state, the size of each slot being exactly proportional to the population of the state. Spin the wheel and drop a small ball onto it: the state at which it comes to rest “wins” and is awarded one seat. Do this 435 times consecutively and the house is apportioned. The method is perfectly unbiased: every state is treated fairly; no one can complain that the method discriminates against it” (p.66)
This is not the method in its most plausible form, however – indeed, they deride it as a gambler’s strategy. They immediately acknowledge that one could distribute the whole-number parts of the quotas directly, and only randomise the fractions (p.66)[7]. On this interpretation, no state will depart from its quota (rounded up/down), and there is no bias; though population paradoxes remain, they will be partly the consequence of chance (and as such, it’s not so clear they’re problematic).
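This more plausible variant can be sketched as follows (a minimal illustration of my own; the function name and toy populations are assumptions, and the rule that no state may win more than one leftover seat is what keeps every state within its quota rounded up):

```python
import math
import random
from fractions import Fraction

def randomised_hamilton(populations, seats, rng=random):
    """The 'randomise the fractions' variant: the whole parts of the
    quotas are assigned directly; the leftover seats are raffled, each
    state's chance being proportional to its fractional remainder, and
    no state may win more than one."""
    total = sum(populations)
    quotas = [Fraction(p * seats, total) for p in populations]
    alloc = [math.floor(q) for q in quotas]
    remainders = {i: q - a for i, (q, a) in enumerate(zip(quotas, alloc))}
    for _ in range(seats - sum(alloc)):
        states = list(remainders)
        weights = [float(remainders[i]) for i in states]
        winner = rng.choices(states, weights=weights)[0]
        alloc[winner] += 1
        del remainders[winner]   # keeps every state within quota
    return alloc

# Any draw sums to the house size and stays within quota; only which
# states get the leftover fractional seats varies from spin to spin.
print(randomised_hamilton([6, 6, 2], 10, random.Random(0)))
```

Hamilton’s method is simply this procedure with the lottery replaced by ‘largest remainder wins’; here a state with quota 4.7 gets four seats for certain and a fifth with probability 0.7.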
[1] All references in parentheses refer to M. L. Balinski and H. P. Young (1982) Fair Representation: Meeting the Ideal of One Man, One Vote.
[2] A. Rehfeld (2005) The Concept of Constituency: Political Representation, Democratic Legitimacy, and Institutional Design. All references in square brackets refer to this book.
[3] C.f. Dworkin’s envy-test: “No division of resources is an equal division if, once the division is complete, any immigrant would prefer someone else’s bundle of resources to his own bundle” (2000) Sovereign Virtue p.67.
[4] This claim is complicated if we distinguish absolute and relative desert. (I shall illustrate this with a case of non-desert claims, for simplicity). Imagine A owes B £10 and C £20, but A only has £15. He could give B £10, which would fully satisfy B’s claim. This, we might say, would be unfair to C, however, who will only be 25% satisfied. Other things equal, it would be better to give B £5 and C £10, satisfying each 50% of their claim. I put this aside too.
[5] Assume this is fair, by hypothesis. The envy test would make it depend on parties’ preferences between meat and potatoes. If both very much want the meat, it might be better to make the ‘choice’ between meat or all three potatoes. I think it’s worrying that if one person’s vegetarian, the fact he doesn’t want the meat effectively makes it cheaper for the other – implying (meat + 1.5 potatoes) for the omnivore and (1.5 potatoes) for the vegetarian are equal as neither envies what the other has…
[6] In fact, this is just one option preserving proportionality elsewhere in the system – here in chances. There might be other alternatives sites of proportionality, e.g. one could weight representatives’ votes to reflect populations. I leave that problem to next week…
[7] Rawls mentions something similar in A Theory of Justice (1971 p.223/1999 p.196) “the precept of one elector one vote implies, when strictly adhered to, that each vote has approximately the same weight in determining the outcome of elections. And this in turn requires, assuming single member territorial constituencies, that members of the legislature (with one vote each) represent the same number of electors… [S]afeguards are needed to prevent gerrymandering, since the weight of the vote can be as much affected by feats of gerrymander as by districts of disproportionate size… [In the ideal world] Political parties cannot adjust boundaries to their advantage in the light of voting statistics; districts are defined by means of criteria already agreed to in the absence of this sort of information. Of course, it may be necessary to introduce certain random elements, since the criteria for designing constituencies are no doubt to some extent arbitrary. There may be no other fair way to deal with these contingencies.” Note how readily he accepts random methods!