Sunday, November 27, 2005

BA (Hons) Spam

Today I received the below email:

Subject: Admissions Office - emailing
From: "Patrice Durham - MBA, PhD"
Date: Sun, November 27, 2005 5:09 pm
You've been nominated,
Thanks to a private nomination, you are now eligable to obtain an official University Degree.
Obtain a prosperous future, increase money-earning power, and the [sic] enjoy the prestige that comes with having the career position you've always dreamed of. The degree will be awarded to you based on your present knowledge and life experience, bachelors, masters, phd and more are available.
If you are serious about this, please call us back ASAP at 1-206-338-3579
Patrice Durham - MBA, PhD - MBA, PhD
-Admissions Officer

I already have a BA (1st class) and, though not yet graduated, an MPhil in Politics. I'm working toward my PhD. Somehow, buying a dodgy degree online doesn't seem so appealing... Nice of them to try though!

More mild amusement, this list of ill-chosen URLs.

7th Week Round-up

Thank goodness it’s now the start of 8th week. I’m looking forward to the end of term… Last week was crazily busy. On Tuesday I ended up getting a late spot on the New College exchange dinner, and met a guy (Paul) from my department just starting a PhD there. I’d seen him in seminars, but having dinner together gave us a chance to get to know each other properly.

Thursday, after our Political Theory Workshop, we traditionally go to the Kings Arms for drinks. This week I’d been planning on getting food too, since I was going on to the Jurisprudence Discussion Group, but Paul suggested New College bar, since it was cheaper and closer than KA. So I ended up back there, and had to go to the law seminar on wine but no real food… It was a good talk though, about deliberative democracy, and I got to say rather a lot (since those lawyers don’t work so much on democracy). Prof John Gardner was there, and joined us in the KA afterwards – where on leaving I met Paul and some of the other Politics students, also leaving the KA where they’d gone after New!

Prof Gardner not only bought me a drink, but invited me to his annual Jurisprudence party Friday, so that gave me another night out, and again without dinner… And Saturday was the CSSJ conference, so again no dinner, and New College bar afterwards, for the third time in a week!

Saturday, November 26, 2005

Man City 0-1 Liverpool

I didn’t see any of this match, owing to being at the launch of the Oxford Centre for the Study of Social Justice conference.

From the sounds of it, Liverpool were hardly fluid, but ground out a 1-0 win, which is exactly what we need to be doing to maintain our progress up the table.

The main talking point seemed to be the minute’s silence for George Best cut down to about 20 seconds. It was hardly surprising that a Man City/Liverpool crowd wouldn’t be keen to respect someone so connected with Man Ure. I know a number of people were unhappy with the decision for a national minute’s silence, given he wasn’t even English, and had pissed his life away.

It’s interesting to note he was applauded at Wolves, Celtic and (I think) Old Trafford. This, I’m told, is the continental approach, and far more sensible. It allows any who don’t agree to ‘voice’ that by not clapping, yet without spoiling the overall effect – as with those who talked in the silence.

Friday, November 25, 2005

Rehfeld on Random Constituencies

B). Rehfeld

At this stage, we could consider even more radical proposals. Andrew Rehfeld[1], for example, proposes abandoning territorial representation altogether. He argues that large constituencies don’t represent communities anyway. Rather, the consequence of territorial representation is that legislators are more concerned with pushing local pork (like our ‘divide the 10 Euros’ game last week) than promoting the common good. As an alternative, he proposes random constituencies: on coming of age, every American will be assigned to one of 435 constituencies, which will be theirs for life. As a consequence, each representative will truly represent a cross-section of the nation, providing “self-regarding incentives to act as if they cared about the common good” [p.xiv].

This radical suggestion challenges many of our ordinary assumptions. Rehfeld seems right to point out that territorial representation – even if deeply embedded in the US by the federal system – isn’t necessary; the slogan ‘all politics is local’ is true in consequence of the system, and we could go for representation by occupation or ethnicity. Rehfeld goes even further in disputing the presuppositions of our current topic, when he suggests district equality doesn’t matter:

“Despite its democratic-sounding framing, this “equal chance” claim in fact reflects one of the least democratic values we could imagine, as if everyone should have an equal chance of individually deciding an election, each of us standing an equal chance of being our own petty tyrant for a day… [W]e properly should not worry about unequal distributions of inconsequential goods, and an individual vote in a large election is as inconsequential good as any” [p.11].

This poses an interesting challenge, and radical alternative, to our present concern. My disagreements with Rehfeld will require more thought (and for me to finish reading his book first) – though it goes without saying that, while I’m happy with the use of randomisation, I’ll have to dispute the above claim. For now, some provisional remarks:

If votes aren’t equal, we aren’t giving all people equal concern and respect. If votes are ‘inconsequential’, it isn’t clear they’re any good. But votes aren’t just of symbolic value. Suffragettes surely wouldn’t have been happy if women were granted ‘half-votes’.

Moreover, equalising the value of votes seems the safest way to make sure they all count at all. One problem with weighted voting is that one’s share of power need not be proportional to one’s number of votes – a fact Rehfeld seems either ignorant of, or to forget [p.42][2]. If votes are split 3, 3, 3 and 1 (with 6 needed for a motion to pass), the fourth person has no voting power – they are never vital to any passing motion, as it always requires at least two of the others, who are sufficient by themselves[3]. If we allow even what seems a small inequality, it may have a great effect, to the extreme of denying some any influence whatsoever. If we want votes to have more than symbolic effect, the best thing to do is make them equal.
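To see why the fourth voter in the 3, 3, 3, 1 example has no power, one can count ‘swings’ in the style of Banzhaf’s analysis (cited in note 3). This is my own minimal Python sketch of that count, not code from any of the cited works:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Banzhaf swing count: for each voter, count the coalitions in which
    that voter is decisive (the coalition meets the quota with them,
    but falls below it without them)."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    return swings

# The 3, 3, 3, 1 split with quota 6: the fourth voter is never decisive.
print(banzhaf([3, 3, 3, 1], 6))  # [4, 4, 4, 0]
```

The same routine reproduces the 1958 EEC example from note 3: with weights 4, 4, 4, 2, 2, 1 and a quota of 12, Luxembourg’s swing count is zero.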

Finally, I think Rehfeld’s proposal would give majorities too much power. Territorial districting works because the population isn’t evenly distributed. If the population is clustered, one can achieve approximately proportional results by districting accordingly. If, however, the minority are split over many districts, and lose them all, they have no representation at all. Rehfeld suggests that some minorities may be better served by having a voice in all districts, rather than controlling a few but being anonymous in most [p.11]. This is an empirical question, but I’m still unhappy with the consequence he envisages and accepts – that a 51% majority could win 100% of the seats [p.244].

In any case, while Rehfeld seems happy to accept majority-rule in his random constituencies, and declares himself not bothered by any inequality of votes, his proposal does not directly contradict my lottery-voting. He is at pains to stress that defining districts is something independent from, and prior to, determining voting procedures [e.g. pp.7, 21]. Once a district is randomly constituted, it is still an open question whether to adopt majority rule[4].

In commenting on PR and group rights, Rehfeld says:

“If constituencies are defined by their members’ similarity of voice (if African American representatives, for example, come from predominantly African American districts), then we promote diversity of voice within a representative body by denying it within the constituency. The demand that representative bodies should be diverse thus subordinates the deliberative diversity within a constituency to that of the legislature. Yet, if good and proper deliberation requires that all voices are heard, then it would appear that we have to choose between diversity within the legislature and diversity among their electoral constituents. Or, in terms of exclusion, the question becomes, do we exclude “voice” from the representative body itself, or from the constituent groups who select their representatives?” [p.27]

He may be right that a diverse legislature is often ensured by creating homogeneous electoral groups, and further that such groups (exposed only to their own viewpoints, not others) may radicalise, making legislative compromise harder. However, the suggestion we must choose between a diverse legislature and diverse constituencies is a false dichotomy – and even if it were not, Rehfeld is not necessarily right to opt for the latter, given the former is where real decision making (if not local participation) takes place. It’s quite possible to adopt randomly-assigned constituencies (a la Rehfeld) and lottery-voting, and thereby produce diverse constituencies that mirror the whole nation, and a legislature that also includes members of all these groups. (Since both procedures rely on randomisation, neither are logically guaranteed – but the numbers concerned make these generalisations pretty much absolute).

In any case, Rehfeld’s preference for diverse constituencies seems to presuppose that democratic deliberation has to take place with fellow constituents [p.51]. Non-territorial constituencies, he supposes, only became possible with mass media and particularly the internet – through which he imagines most debate and campaigning taking place [pp.243-4, c.f. p.60]. I haven’t yet found any reason why deliberation has to be with fellow constituents, rather than simply with any others (preferably perhaps of somewhat opposing views). While it’s true that the internet has allowed for much democratic debate, e.g. political blogs[5], it seems unlikely to me that citizens would deliberately seek out others with whom they had no more than a randomly-assigned constituency in common – it’s far more likely they will converse with those local to them and/or sharing similar interests/views.

I think Rehfeld and I disagree about as much as we agree on[6], but his work is useful because it highlights that the very question we are here answering – how to assign representatives between states – may itself be flawed. However, I suggested arguments above against his indifference to the (in)equality of votes. If we are concerned about equality, then even if we were to adopt non-territorially based constituencies (on whatever lines you might imagine, e.g. profession, ethnicity or randomisation), then we would still need a method of apportionment between these groups. I don’t believe Rehfeld’s under-developed criticism of equality is successful, so I do not think he casts doubt on the importance of our present inquiry – only raises an important question about whether we should be apportioning to territory, as opposed to something else.

C). Conclusion

The use of territorial constituencies is just one historical prejudice too often accepted as a given. A further value in Rehfeld’s work is bringing random methods to more prominence, even though I don’t agree with exactly where he uses them. This takes me back to the lottery method so quickly dismissed by Balinski and Young (p.66, p.74). They say “A basic problem with a betting man’s method is that, although it may be “fair” over the long run, the immediate outcome is almost sure to be unfair” (p.67). However, the only reason they say it is ‘almost sure’ to be unfair is their less plausible assumption that all seats (rather than just remaining fractions) should be assigned randomly. The latter method, because it sticks within the quotas (rounded up/down), can be said to be fair in both the short term and the long term. While the lottery may seem to go in favour of a state with a weaker claim on one occasion, there is no systematic bias involved.

To dismiss such (partly) random methods out of hand seems, to me, far too quick. Of course, to return to their quotation (p.93) with which I opened, electoral systems as a whole are confined by feasibility – in part, what people will accept. If the US public are hostile to the perceived ‘irrationality’ of random methods, then it is unlikely they will be the best to adopt. While it’s true there is a tendency to choice in modern society[7], there is also a counter-current against ‘hyper-rationality’, with many advocating random methods when ‘reason(s) run out’[8]. It’s far from clear what public opinion is when it comes to lotteries. Whatever the rationality of such procedures, it’s almost universally accepted that they’re fair. If we want each person to be counted equally – in the sense of having an equal chance to make a difference – there seems to be no objection to bringing more chance into the procedure.
[1] See note 2.
[2] This gives me a feeling of smug superiority, even if it doesn’t itself discredit his whole argument!
[3] Such examples can be found in J. F. Banzhaf III (1965) ‘Weighted Voting Doesn’t Work: A Mathematical Analysis’ Rutgers Law Review 19 317-343 and A. D. Taylor (1995) Mathematics and Politics: Strategy, Voting, Power, and Proof ch.4, where he shows Luxembourg had no power in the 1958 EEC split of 4, 4, 4, 2, 2, 1.
[4] Rehfeld (2005) p.7 “Maybe they would use majority rule or plurality rule. Maybe they would select a representative by lottery. Our concern here is not, then, with voting rules or the questions of single-member or multimember representation. It concerns the prior question of how constituent groupings themselves affect the legitimacy of a political regime.”
[5] E.g. a few quick links take me to (amongst others):
[6] Nonetheless what two people share can be as illuminating as what they disagree on. It was recently put to me that G. A. Cohen shares, with many pragmatists and continental philosophers (e.g. Rorty and – I think – Habermas) a belief that if there’s no objective truth, normative political philosophy is in need of major revision. This agreement is itself significant, though they disagree about the conditional (i.e. Cohen thinks there is a truth…).
[7] See, e.g., A. Buchanan, D. Brock, N. Daniels and D. Wikler (2002) work on designer babies, From Chance to Choice: Genetics and Justice.
[8] See, e.g. N. Duxbury (2002) Random Justice: On Lotteries and Legal Decision-Making; J. Elster (1979) Ulysses and the Sirens: Studies in rationality and irrationality and (1989) Solomonic Judgements: Studies in the limits of rationality; D. Heyd ‘When Practical Reason Plays Dice’ in E. Ullmann-Margalit (ed.) (2000) Reasoning Practically; and O. Neurath (1913) ‘The Lost Wanderers of Descartes and the Auxiliary Motive (On the Psychology of Decision)’ in his (1983) Philosophical Papers 1913-46 [ed. and trans. R. S. Cohen and M. Neurath].

Thursday, November 24, 2005

US Apportionment

Theory of Voting paper in two parts again this week, because it's a long one – but it's also my penultimate one (you'll be glad to hear).

What System of Apportionment of US Congressional Districts to States is the Best?

Balinski and Young point out “The ultimate choice in any nation must, of course, depend upon its political, social, and legal heritage” (p.93[1]). This questions whether there is any one best system. Further, there is a difference between arguing in the abstract for one principle or method of apportionment, and arguing for what should be applied in a particular context, e.g. the US Congress – which is more constrained by cultural possibilities, and subject to particular details (e.g. representing minority groups). Nonetheless, my discussion will be rather abstract, but focus on the US.

First I outline six historical procedures, and compare their merits, following Balinski and Young. I question, however, some of these authors’ normative claims, in particular about fairness and lotteries. I go on to discuss a radical alternative, proposed by Andrew Rehfeld[2], for random constituencies. I criticise many details of his plan, not least his neglect of equality, but think there is something to be said for randomness, and I use this against Balinski and Young’s rapid dismissal of ‘roulette methods’.

A). Balinski and Young

Article 1, section 2 of the Constitution provides that “Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers” (quoted p.5). This is, however, totally vague, in that it doesn’t specify how representatives are to be apportioned according to numbers. There are infinite logical possibilities (pp.63-5). We shall see some of the important methods historically proposed, but for now note that it doesn’t even logically require direct proportion between representatives and population (though this is obviously what’s normatively desirable) – some form of inverse proportion would also be ‘according to numbers’.

Six historical methods:

(1) Jefferson (p.18) – fix a divisor (e.g. one seat per 34,000 people) then find the resulting number of seats for each state. E.g. a state with 76,000 people would have two. All fractions are rounded down.

(2) Hamilton (p.17) – fix the number of seats and then find a quotient (US population divided by number of seats). Give each state the whole number part of their quotient. Distribute remaining seats to those with the largest fractions (so if one state should have 1.88 seats and another 6.27, the actual distribution is 2 and 6).

(3) Adams (p.27) – fix a divisor and give each state their quotient, with fractions rounded up – thus 1.02 becomes two and 5.98 six.

(4) Webster (p.32) – find the divisor so whole numbers nearest quotients add up to the required total.

(5) Dean (p.29) – give each state the number of seats that makes its average district size (population divided by seats) as near to the divisor as possible. E.g. in 1830 Massachusetts had a population of 610,408, which divided by 13 was 46,954 and divided by 12 was 50,867 – thus they got 13, as this was closer to the ‘target’ divisor of 47,700.

(6) Hill(-Huntington) (p.47) – make each state’s ratio of population to seats as near to uniform as possible, with divergences measured in relative percentage terms (unlike Dean’s absolute differences).
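The divisor methods (1), (3) and (4) differ only in their rounding rule, so they can be sketched as a single routine that searches for a workable divisor. This is my own minimal Python illustration (the test figures are the toy ones from the descriptions above, plus made-up populations for the Webster case), not code from Balinski and Young:

```python
import math

def divisor_apportion(pops, house, rnd):
    """Generic divisor method: find a divisor d so that the rounded
    quotients pop/d sum to the house size, then return those seat counts.
    rnd is the rounding rule: math.floor (Jefferson), math.ceil (Adams),
    or round-to-nearest (Webster)."""
    lo, hi = 1e-9, float(sum(pops))   # bracket the divisor
    for _ in range(200):              # bisection: seats shrink as d grows
        d = (lo + hi) / 2
        seats = [rnd(p / d) for p in pops]
        total = sum(seats)
        if total == house:
            return seats
        if total > house:
            lo = d    # divisor too small, too many seats awarded
        else:
            hi = d
    raise ValueError("no suitable divisor found (tied case)")

def webster_round(x):
    """Round fractions over 0.5 up, under 0.5 down."""
    return math.floor(x + 0.5)

# Jefferson's toy example: 76,000 people at one seat per 34,000 gives two.
print(divisor_apportion([76000, 34000], 3, math.floor))  # [2, 1]
```

The Adams example from (3) comes out the same way: populations 102 and 598 with eight seats give 2 and 6, the 1.02 rounded up to two.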

Balinski and Young reject Hamilton’s method in favour of the divisor methods because of what’s come to be known as the ‘Alabama paradox’ (pp.39-42, c.f. pp.68-9). In 1880, it was noticed that the Hamilton method would give Alabama eight out of 299 seats, but only seven out of 300. This is because Alabama’s quotient out of 299 seats was 7.646, which was rounded up to eight (being one of the largest remainders). An extra seat increased this by 0.33%, but because this was a larger absolute increase for larger states, Illinois and Texas ‘leapfrogged’ Alabama in the queue for extra seats – they now had fractions rounded up, while Alabama’s was rounded down. This isn’t an isolated incident – in 1900 it was particularly worrying to find that Colorado would get two seats out of 357, but three out of either 356 or 358, and that as the total changed from 350 to 400 Maine’s delegation moved back and forth between three and four (pp.40-41).

One consequence of this is that state A can grow in size relative to state B, and yet still lose a seat to B. In 1900, for example, Virginia’s share of 386 seats was 9.599 (rounded up to ten), while Maine’s was 3.595 (rounded down to three). Virginia’s population was growing by 1.07% p.a. and Maine’s by 0.67% p.a., both less than the national average (2.02% p.a.). Thus by 1901 their shares were 9.509 and 3.548 – Virginia had declined more in absolute terms because it was larger, and so lost a seat! Another perversity is that the admission of Oklahoma in 1907, which added five new seats to the legislature, resulted in New York transferring a seat to Maine (p.43).
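The Alabama paradox is easy to reproduce. Here is my own minimal Python sketch of Hamilton’s method, run on made-up populations of 6, 6 and 2 (toy figures for illustration, not Balinski and Young’s data): the smallest state gets two seats out of ten, but only one out of eleven.

```python
def hamilton(pops, house):
    """Hamilton's method: give each state the whole part of its quota,
    then hand the remaining seats to the largest fractional remainders."""
    quotas = [p * house / sum(pops) for p in pops]
    seats = [int(q) for q in quotas]
    leftovers = house - sum(seats)
    # states ranked by fractional remainder, largest first
    order = sorted(range(len(pops)),
                   key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in order[:leftovers]:
        seats[i] += 1
    return seats

# Enlarging the house from 10 to 11 costs the smallest state a seat:
print(hamilton([6, 6, 2], 10))  # [4, 4, 2]
print(hamilton([6, 6, 2], 11))  # [5, 5, 1]
```

At house size 10 the small state’s fraction (0.429) is the largest and gets rounded up; at 11 every quota grows, but the two large states’ fractions (0.714) overtake it – exactly the ‘leapfrogging’ described above.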

I think Balinski and Young may be too troubled by these paradoxes. They claim:

“Intrinsic to the idea of fair division is that it stand up under comparisons. If an estate is divided up fairly among heirs, then there should be no reason for them to want to trade afterward[3]… [Further] any part of a fair division should be fair” (p.44)

If we take fairness to mean equality, then it is obviously a relational ideal. One can’t be equal without being equal compared to someone. There are, however, other possible meanings. If fairness was a matter of satisfying desert then, assuming we can measure desert without comparison, we can also judge fairness. If I deserve ten, and have ten, then my absolute claim is satisfied[4]. Whatever we think of the first claim, the second – that parts of a fair whole must be fair – seems even more controversial. Suppose we divide £30 between three people, £10 each. Is it fair that the first person in isolation gets £10? It seems we can only call the part fair in virtue of the whole. Perhaps that’s just sophistic, but consider a further case of compensating inequalities. Suppose we are dividing food between two people – we have one bit of meat and three potatoes. A fair division may well be to give one person the meat and one potato, the other two potatoes[5]. Can we say that the division of potatoes part (1,2) is fair though?

In any case, Balinski and Young go on to conclude “No apportionment method is reasonable that gives some state fewer seats when there are more seats to go around” (p.42). Presumably, this is meant with a ceteris paribus condition implied – for it seems fine to give one state fewer seats if it has shrunk in absolute or relative population. The reason the Alabama paradox is problematic is that representation varies (up and down) simply with the total size of the House, nothing else changing. It is true that these paradoxes are worrying, but it’s also true that deviation from quotients is worrying. Although divisor methods are required to avoid the population paradoxes, they don’t (unlike the Hamilton method) guarantee that each state’s representation is within +/-1 of its initial quotient (i.e. rounded up or down) (p.79).

Balinski and Young advocate the Webster method, because it avoids the population paradoxes, is unlikely to deviate from quotas (one time in 1,600 (p.81)), and (when it does so) shows no bias in favour of either large or small states. Their table 9.2 (p.77) illustrates expected biases:

Method      Chance of favouring small   Expected bias to small
Adams       100%                        28.2%
Dean        93.9%                       7.0%
Hill        78.5%                       3.5%
Webster     50%                         0.0%
Jefferson   0%                          -20.8%

The Webster method, they argue, is intuitive because it treats fractions in a natural manner – rounding up those over 0.5 and down those under. Further this is unbiased, as every state is as likely to have a remaining fraction of over 0.5 as under. Being more generous with rounding up favours small states, being less favours large ones (p.76).

There is, however, another option that they admit is unbiased (p.74) yet reject, and that is an option involving some form of lottery[6]. As they spell this out:

“[C]onstruct a roulette wheel divided into fifty slots, one for each state, the size of each slot being exactly proportional to the population of the state. Spin the wheel and drop a small ball onto it: the state at which it comes to rest “wins” and is awarded one seat. Do this 435 times consecutively and the house is apportioned. The method is perfectly unbiased: every state is treated fairly; no one can complain that the method discriminates against it” (p.66)

This is not necessarily to describe the method in its most plausible form – indeed, they deride it as a gambler’s strategy. They immediately acknowledge, however, that one could distribute whole numbers from the quotients directly, and only randomise the fractions (p.66)[7]. On this interpretation, no state will depart from its quota (rounded up/down), and there is no bias; though population paradoxes remain, they will be partly the consequence of chance (and as such, it’s not so clear they’re problematic).
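This fractions-only variant can be sketched as follows. The code is my own illustration of the idea (the toy populations and the weighting-by-fraction rule are my assumptions, not Balinski and Young’s specification): whole quotas are assigned directly, and the leftover seats are raffled with each state’s chance proportional to its fractional remainder.

```python
import random

def lottery_apportion(pops, house, rng=random):
    """Roulette for fractions only: assign each state the whole part of
    its quota, then award the leftover seats by lottery, each remaining
    state's chance proportional to its fractional remainder.  No state
    can end up outside its quota rounded down or up."""
    quotas = [p * house / sum(pops) for p in pops]
    seats = [int(q) for q in quotas]
    fractions = [q - s for q, s in zip(quotas, seats)]
    for _ in range(house - sum(seats)):
        # draw one state, weighted by its fraction
        i = rng.choices(range(len(pops)), weights=fractions)[0]
        seats[i] += 1
        fractions[i] = 0  # a state can gain at most one extra seat
    return seats

# Toy populations; every draw stays within quota rounded up or down.
print(lottery_apportion([6, 6, 2], 10))
```

Whatever the draw, each state receives either the floor or the ceiling of its quota, which is the sense in which the method is fair in the short run as well as the long run.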

[1] All references in parentheses refer to M. L. Balinski and H. P. Young (1982) Fair Representation: Meeting the Ideal of One Man, One Vote.
[2] A. Rehfeld (2005) The Concept of Constituency: Political Representation, Democratic Legitimacy, and Institutional Design. All references in square brackets refer to this book.
[3] C.f. Dworkin’s envy-test: “No division of resources is an equal division if, once the division is complete, any immigrant would prefer someone else’s bundle of resources to his own bundle” (2000) Sovereign Virtue p.67.
[4] This claim is complicated if we distinguish absolute and relative desert. (I shall illustrate this with a case of non-desert claims, for simplicity). Imagine A owes B £10 and C £20, but A only has £15. He could give B £10, which would fully satisfy B’s claim. This, we might say, would be unfair to C, however, who will only be 25% satisfied. Other things equal, it would be better to give B £5 and C £10, satisfying each 50% of their claim. I put this aside too.
[5] Assume this is fair, by hypothesis. The envy test would make it depend on parties’ preferences between meat and potatoes. If both very much want the meat, it might be better to make the ‘choice’ between meat or all three potatoes. I think it’s worrying that if one person’s vegetarian, the fact he doesn’t want the meat effectively makes it cheaper for the other – implying (meat + 1.5 potatoes) for the omnivore and (1.5 potatoes) for the vegetarian are equal as neither envies what the other has…
[6] In fact, this is just one option preserving proportionality elsewhere in the system – here in chances. There might be other alternatives sites of proportionality, e.g. one could weight representatives’ votes to reflect populations. I leave that problem to next week…
[7] Rawls mentions something similar in A Theory of Justice (1971 p.223/1999 p.196) “the precept of one elector one vote implies, when strictly adhered to, that each vote has approximately the same weight in determining the outcome of elections. And this in turn requires, assuming single member territorial constituencies, that members of the legislature (with one vote each) represent the same number of electors… [S]afeguards are needed to prevent gerrymandering, since the weight of the vote can be as much affected by feats of gerrymander as by districts of disproportionate size… [In the ideal world] Political parties cannot adjust boundaries to their advantage in the light of voting statistics; districts are defined by means of criteria already agreed to in the absence of this sort of information. Of course, it may be necessary to introduce certain random elements, since the criteria for designing constituencies are no doubt to some extent arbitrary. There may be no other fair way to deal with these contingencies.” Note how readily he accepts random methods!

Wednesday, November 23, 2005

Liverpool 0-0 Real Betis

We need at least a draw tonight. Hopefully we should get it, and wrap up qualification for the last 16 of the CL. I'm going round my friend Rob's to watch it.

In the meantime, this Liverpool squad selector made me laugh.

(My, more realistic, team for tonight: here)

UPDATE: The result was a fairly boring 0-0 draw but it was enough to ensure qualification. A shame Crouch still didn't break his duck - though he was stopped only by a handball from taking one of several good chances we created.

Saturday, November 19, 2005

Thesis Pointers

I keep getting the nagging feeling my PhD will carry on for ever, as I haven't really written anything since finishing my Masters in April. At the moment, the intention is to start remedying this over the Christmas 'vac' - though I don't work too well in the cold (it's taken me all day to cobble together 2,000 words about what I was recently reading for next week's Theory of Voting class paper - to be posted here of course), and I'm sure plenty of other distractions will come up (academic and less so).

Anyway, for those similarly struggling, Crooked Timber has what looks like some very good advice here.

Liverpool 3-0 Portsmouth

I only followed the game online, while working, and am going to a birthday party tonight so won’t get to see MotD. Still, sounds like another good result for Liverpool, with Cisse and Morientes both scoring.

Too bad Crouch’s duck continues – with a saved penalty and all – but by most accounts he played pretty well. I don’t think it counts as an assist (to the much improved Zenden, who got the rebound) though.

My main concerns are injuries to Garcia and Alonso. No word from LFC on how bad those are as of yet, but hopefully they’ll be fit for next weekend. I think since we can probably afford to rest them (on the bench) for the Betis clash we should.

From the BBC 606 Messageboard: "Alonso was clearly stuggling and limping off the pitch and Finnan passes the ball to him when he can hardly walk! Fair play to Alonso, got the ball, back heeled it to one of our players then hobbled off."

Commiserations to my flatmate Glyn, who saw Southampton turn a 3-0 lead into a 3-4 defeat in the last 20 minutes this afternoon.

Friday, November 18, 2005

Philosopher's Humour 4

Theory of Voting – 11/11/05

1. Professor: “Do you know what Euclidean means?”
Ian: “Of or relating to Euclid”

2. [Having drawn a complicated diagram on the board] “And if we combine… no… that way madness lies”

Moral Philosophy Seminar – 14/11/05

1. Susan Wolf: “When I said my daughter was getting a SUV over my dead body, I didn’t mean it literally”

2. [Wolf again, I think…] “I’m not wholly unsympathetic to this response, though I prefer not to see it as an objection”

Berlin Lectures – 15/11/05

1. Allen Wood: “I thank you for your warm welcome and tough question. Because that’s the way philosophers welcome each other, by giving each other a hard time”

Political Theory Graduate Workshop – 17/11/05

1. [In reference to the title ‘On the Notion of Basic Structure’] Foreign presenter asks “Is it bad English?”; Dr Butt (chair) “I don’t know, I can’t tell”

2. “A Cohensian… Is that the opposite of Rawlsian here?… Cohen himself, let’s say”

Herbert Spencer Lecture (Jonathan Glover) – 18/11/05

1. “When people don’t know what to believe, they don’t call for philosophers. It’s not an option on the emergency services.”

2. [Having just talked about Creationists and Communists who didn’t think Germany was a threat in 1939] “If you’re determined enough, and have little sense of plausibility, you can defend any belief”

3. “If you believe Prince Charles is invading your mind, the problem with you isn’t just that you’re poor at testing evidence for your hypotheses”

Why so much Stability?

Who do you find more Convincing, Riker or Mackie?

Riker claims that cycles and manipulation are pervasive in politics, and that the only reason they’re not more obvious is the paucity of data on voter preferences (since many electoral systems only record first preferences, and can’t distinguish whether what’s expressed is sincere or sophisticated). This possibility calls into question the meaningfulness of democratic politics, and allows scope for politicians skilled in ‘heresthetic’ (the art of political manipulation) to secure their favoured outcomes by altering the agenda or issue dimensions.

Despite the difficulties of observing such instability and manipulation, Riker claims to have identified certain historical examples – such as the 17th Amendment, Wilmot Proviso and Lincoln election – that are best explained by postulating voter cycles and heresthetic manipulation.

Gerry Mackie’s historical analysis of these cases calls Riker’s findings into question[1]. For example, Riker assumes that, in the 1860 election that Lincoln won, Lincoln voters ranked John Bell above Stephen Douglas, and he generates a cycle based on this assumption. Mackie points out that this is implausible. While this shows there may not have been a Bell, Douglas, Lincoln cycle (as Riker would have liked), it doesn’t harm the essential point of Riker’s story – which is that Lincoln was not a Condorcet winner, but won through a combination of the institutional system and heresthetics[2].

As McLean points out, “Riker’s case studies are falsifiable, and some of them have been falsified. But they are not verifiable”[3]. If Riker’s stories weren’t falsifiable, then they would have little meaning – at least, they would be untestable hypotheses, and as Popper observes, a prediction that can’t be falsified isn’t useful in social (or other) science. In trying to give any historical example, then, Riker is always vulnerable to the claim that he has got the analysis wrong, and that the case doesn’t reveal what he wants. On the other hand, no matter how many particular cases he has got wrong, one can’t prove that Rikerian cycles never take place.

Mackie’s more powerful criticisms go beyond simple quibbling with Riker’s examples, and strike to the heart of the Rochester methodology – arguing that Riker’s analysis fails or is self-defeating on its own terms. For example, Mackie points out that Riker places unrealistically high standards – for those of us outside of epistemology classes – on what we need in order to know someone’s preferences. Riker says we can’t know someone’s real preferences just from how they vote.

It’s true that we can distinguish someone’s action (what they were trying to do, in their self-description) from their observed movements. As Mackie says, “When someone buys a Cadillac, what choice has been revealed: a means of transportation, a status symbol, a dating ploy, a nostalgic memory of the buyer’s father, a tax dodge, a mistake?”[4].

Judging someone’s inner state from their outward movements is always a matter of interpretation, and always leaves the possibility of mistakes. In everyday life, however, we do think we can make reasonable assumptions about what may have motivated another’s movements. For example, if I see you reach across the table and pour a glass of water, it is natural to assume you wanted a drink. There may, of course, be other explanations, but we are surely ordinarily justified in taking the most obvious.

If we apply such a strict standard of knowledge, however, it doesn’t just block the possibility of inferring preference from how one votes, it seems to block the very possibility of recognising a vote from one’s movements: “raising one’s hand might be a sincere vote, might be a strategic vote, might be a mistake, might be a yawn and a stretch, might be a sign of a follower of St. John the Baptist, might be a joke, might be an involuntary reflex”[5].

Riker’s point is that we can’t know, just from someone casting a certain order of votes (say, in an STV election) what their real preferences are – we can’t know the person they put second really is their second preference, or if they are put there strategically. The ad hominem reply is that on such a strict requirement for knowledge, we can’t even know – simply from someone walking into a booth and writing numbers next to different names – that they are intentionally voting at all, rather than merely doing something to alleviate boredom. Our electoral officials will take it as voting, because that is the natural explanation of what they do. Similarly, the natural explanation of why someone puts 1 next to one name and 5 next to another is that they prefer the first.

Moreover, the idea that we could recognise someone as voting strategically is parasitic on the idea that we at least understand (even if we can’t always recognise) sincere voting. As Mackie puts it, “We could not discover that choices may strategically misrepresent preferences unless we have information from beyond the single instance… we know that choices sometimes misrepresent preferences only because we know that choices sometimes do represent preferences”[6].

If we thought what someone wrote on their ballot paper was simply arbitrary, we’d have no reason to pay their marks any attention. We interpret their pen strokes as expressing preferences because that’s the usual explanation. We assume they actually rank the first choice higher than the fifth, because again that’s the normal case. It is only against this background assumption that we can understand strategic misrepresentation. If we took the marks as meaningless, then swapping two votes would make no difference. Strategic voting only works because the sophisticated voter knows his/her marks will be interpreted as expressing sincere preferences. Compare a feint (in football, or sword-fighting, say) – faking a move to your right (before actually going left) only works if others believe from the move that you’re actually going to your right.

All these points by Mackie are well made, I believe. To say we can never infer someone’s preferences, and that all votes must be taken as meaningless, is surely too strong. From the fact that we can be wrong about any particular case, it doesn’t necessarily follow that we can be wrong about every case[7]. I might believe P and ¬P. Either of those beliefs could be false, but they can’t both be.

So, it seems Riker tries to draw conclusions that go beyond his evidence. We can’t simply discard all our beliefs about votes, preferences and outcomes, simply because some of them are called into question. On the other hand, he still seems to have enough resources to draw more modest conclusions, sufficient for his purposes. The universal possibility of error doesn’t entail the possibility of universal error, but it does mean that, while we might be right to trust our intuitions as a whole, we should be wary of relying on any one in particular – we must always acknowledge that the outcome of any particular vote could be meaningless.

A similar point can be made about Riker’s obsession with cycles. Perhaps he simply liked their elegance, or the fact they could undermine the very possibility of a Condorcet winner[8], but Riker was always looking, not just for instability or manipulation, but cycles. Mackie may be right that actual cycles are far less prevalent than Riker’s sometimes-fanciful examples would have us believe. These findings, however, don’t challenge Riker’s fundamental conclusions. As McLean observes, the cycles are inessential elements of the stories: “There are heresthetic moves that do not entail cycles: some of them are in Riker’s own stories… All that a Rikerian needs to show is that multidimensional issue space offers the potential to construct a new winning majority”[9].

Thus far, it seems Mackie’s criticisms should have some bite, in limiting the scope of Riker’s conclusions, and forcing him to be more modest in his attacks on democracy. Nonetheless, we haven’t seen anything that forces us to give up the Rikerian project, or deny that its lessons can be of great practical import or be applied to new (past or future) cases[10]. We still have perhaps Mackie’s most powerful ad hominem point to come, however.

Riker’s methodological approach seems to presuppose the falsity of the very claims he is trying to support!
“Riker’s further analysis of the Powell amendment makes an implicit auxiliary assumption that elected representatives represent the interests of their districts. He goes on to confidently identify five “natural political groups”… and the preferences of each group over three alternatives (Riker 1986, 118-122). He seems to have forgotten, among other things, that on his account there is no such thing as a district interest that could be discovered by electing a representative. It is a delicious irony that his analysis is forced to assume that Congressional districts have identifiable interests… [I]n his attempts to show the empirical relevance of Arrow’s logical possibility result, Riker is forced to assume the empirical irrelevance of Arrow’s result”[11]

As a rhetorical point, this is brilliant – that someone is forced (by their self-defeating argument) to deny the very thing they’re trying to establish is more fatal than simply observing that they beg the question by supposing its truth, since at least the latter allows that the proposition could still be true (though the argument for it fails), whereas the former suggests an incoherence in what is to be proved. It’s therefore no surprise that this point comes at the end of Mackie’s chapter 2, on ‘The doctrine of democratic irrationalism’, as the key point in his refutation of Riker’s general argumentative strategy.

The question is, however, whether it’s as effective as it first seems. If Riker stuck to his position that it was impossible ever to know a group’s preferences, then it would indeed establish that Riker’s analytic narratives were incoherent, for he is forced to give up that commitment and postulate the very thing he denies in order to explain cycles. If he gave up the search for cycles, however, it’s not clear he would have to engage in such problematic postulations. Moreover, we’ve already established that Riker’s conclusions are too strong – he should give up the claim that democratic outcomes are always arbitrary, irrational and meaningless, in favour of the weaker claim that we can never know they aren’t. If this is so, then it seems his stories aren’t so implausible – we can suppose certain groups of voters who have stated preference patterns, and see how the final result is still chaotic and manipulable. If we deny that the voting groups can even be given such ordered preferences, then in a sense we make the final conclusion even stronger. If the parts are arbitrary and meaningless, so a fortiori is the whole derived – albeit itself in an arbitrary and meaningless manner – from them.

There is much truth in Mackie’s criticisms, and it would be right to be more cautious in stating Rikerian conclusions than Riker himself was, but Mackie has not refuted Riker’s analysis. What Mackie has shown is that democracy is not impossible – that society can choose, and stable collective orderings may be possible. Riker should never have denied this, for the Arrow result doesn’t say it is impossible to reach a collective decision (it can be done if, e.g., voter preferences are relatively homogeneous, without violating Condition U). Rather, Arrow showed it is impossible to guarantee an ordering – that is, an ordering is possible but not necessary. What we learn from this is not that democratic outcomes can’t occur, but that they needn’t always occur – no matter how we design our institutions.

The search for historical cases demonstrating cycling and instability may be inconclusive. Indeed, given the limits of what we know, it may even be necessarily so – it may be that we can’t possibly find a definite real-world case. This only demonstrates that stable outcomes are possible, not that cycles and instability can’t occur. So even if Mackie is right about the historical instances Riker cites, Riker’s conclusions are still relevant and important – particularly when it comes to designing future institutions (the most normatively important question).

[1] His results are summarised by I. McLean (2002) ‘Review Article: William H. Riker and the Invention of Heresthetic(s)’ British Journal of Political Science 32 table 1 (p.549) and G. Mackie (2003) Democracy Defended table 1.6 (pp.18-9). Since I am not qualified to comment on these historical examples, the following discussion draws more on abstract methodology, rather than interpretation of these data.
[2] I. McLean (2002) p.553.
[3] I. McLean (2002) p.555.
[4] G. Mackie (2003) p.38.
[5] G. Mackie (2003) p.38.
[6] G. Mackie (2003) p.39.
[7] It doesn’t follow from the fact that anyone could be an eldest sibling that everyone can be.
[8] The identification of which Riker took as an essential normative requirement of an acceptable voting system.
[9] I. McLean (2002) p.555, c.f. p.553.
[10] See I. McLean (2001) Rational Choice and British Politics, who argues various possible explanations of stability “do not invalidate Riker’s central claim. Once in a while there comes a politician who… can see opportunities where others do not, in opening up or closing down political dimensions” p.231.
[11] G. Mackie (2003) p.43.

Sunday, November 13, 2005

Political Incorrectness

Vaguely in the vein of Quentin Skinner's Liberty Before Liberalism, here's Political Theory before Political Correctness:
since the majority of people everywhere, however excellent the education they may have obtained, are of very restricted intelligence, it is more likely than not that they will not only be ignorant of the best means to the good, but also of the good itself.

- John Plamenatz (1938) Freedom, Consent and Political Obligation p.159 [Appendix to ch.7]
It's actually an argument for representative, democratic government(!), but he goes on to say immediately after:
An educated person, because he thinks about more things, has more opinions than an uneducated one, but there is little reason for supposing that he is more likely to be right about matters that interest them both. He is just as likely to be wrong, but for different and more complicated reasons.

Saturday, November 12, 2005


It's only occasionally, when something goes wrong, that one realises how important email is here in Oxford. The college were carrying out scheduled maintenance work this afternoon, involving a power cut and hence loss of email (run through the central college server). It was scheduled for 2-4 hours from 2pm today. Well, five hours later, the internet's still working (because our off-site flats run through a separate hub), but the email's still down. I don't know if this means power is still off in college - in which case I pity the poor people in the cold and dark - or if they just buggered something up. Either way, the lack of access to my email is annoying. Of course, I have other accounts, but the college one is my primary one - and I don't know how to contact many other people than via their (also not working) accounts... I honestly have difficulty imagining university before the days of email, internet and mobile phones.


Explain the McKelvey-Schofield (chaos) theorems

Riker employs McKelvey and Schofield’s chaos theorems to argue that once there is no unique stable optimum or transitive ordering, election results are effectively arbitrary and meaningless. As he puts it, “not only is a Condorcet winner unlikely, but also, when one does not exist, anything can happen”. If there is no equilibrium, then we can move away from any status quo in any direction, and all possible points seem to be included in a possible cycle.

Mackie, however, criticises the realism of the assumptions in these models: “In the absence of friction most initial states result in nonstationary orbits or cycles that would continue forever in disequilibrium… it is a mistake to argue that the counterfactual world of no friction somehow reveals a more fundamental truth about the world of friction”.

It takes only a small amount of homogeneity to produce stability. Rather than assuming an ‘impartial culture’, if just 5% of voters have identical preferences, the Condorcet efficiency of many voting procedures is greatly increased. Presumably this also increases stability. Further, super-majority rules can ensure stability. If there is only one dimension, >50% ensures stability. This can be generalised to d/(d+1), where d is the number of dimensions; thus if there are two dimensions, a 2/3rds majority suffices for equilibrium.
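
The d/(d+1) rule is simple arithmetic; a minimal sketch (the function name is mine):

```python
from fractions import Fraction

def supermajority_threshold(dimensions):
    """Vote share sufficient for a stable equilibrium in d issue dimensions,
    per the d/(d+1) generalisation cited above."""
    return Fraction(dimensions, dimensions + 1)

print(supermajority_threshold(1))  # 1/2  (simple majority, one dimension)
print(supermajority_threshold(2))  # 2/3
print(supermajority_threshold(3))  # 3/4
```

Note the cost, though: as the number of dimensions grows, the required share approaches unanimity, so stability is bought at the price of an ever-stronger status-quo bias.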

Certainly empirical evidence suggests that while somewhat unexpected outcomes can emerge – for example, the USA’s recent repeal of estate tax – normally political outcomes are fairly predictable. Knowing just a little about voters/decision makers’ preferences, we usually feel we have a good idea of the kind of outcomes they might arrive at. While strategic manipulation may swing results more in favour of one party than another, we certainly don’t expect any outcome to be possible. For example, while we may not be greatly shocked by a deviation from the median voter, we would be extremely surprised if the outcome was to the left (or right) of the furthest left (or right) voter.

Friday, November 11, 2005

Philosopher's Humour 3

Nuffield Political Theory Seminar 7/11/05

1. We can’t compare cardinal utilities. It’s like saying ‘what if we were all dragons?’ We’re not!

An applied ethics class on drugs in sport 9/11/05

1. “I don’t know what sport is” – Julian Savulescu, Professor of Applied Ethics

Political Theory Graduate Workshop 10/11/05

1. [After a clarification] “That’s fine. I can’t work out if I agree with it or not”

2. “I can hardly think of a theory of justice that doesn’t have at least one perverse implication”

Law and the State 11/11/05

1 “...and the rest, as they say, is a cliché”

2. Raz’s account is just what it would be for something to be objectively legitimate. He could turn round and say ‘I visited the world for the first time yesterday, and nothing lives up to this’. After all, that’s an empirical question – and would you trust a legal philosopher to give you advice on that? – Prof John Gardner

3. There are billions of people who aren’t lawyers, as unlikely as that seems to us.


Explain the Gibbard-Satterthwaite (manipulability) theorems

Gibbard and Satterthwaite independently proved the susceptibility of almost all voting procedures to manipulation. As McLean summarises it, “A voting procedure would be strategy-proof if it satisfied the following conditions. For all individual preference profiles it would ensure that whenever an option became more popular its chances of success would at least get no less; and it would ensure that the result could not be manipulated by adding or withdrawing options. But this turns out to be the same as saying that such a procedure must satisfy conditions U, P, and I in Arrow’s Theorem. Therefore if there are more than two options any strategy-proof voting procedure might throw up a dictator”.

In British General Elections, strategic voting is quite common: for example, if a voter ranks the parties (Labour, Lib Dem, Conservative), but thinks that the Lib Dems and Conservatives are the two most likely winners, he can vote Lib Dem – effectively reporting (Lib Dem, Labour, Conservative). This is by no means unique to FPTP plurality elections. If anything, the Borda count is even more susceptible, as here voters give a complete ordering, so there is more scope for shifting an option up or down one’s ranking. E.g. one might report (Lib Dem, Labour, Green, UKIP, BNP, Conservative) even though one actually preferred the Conservatives to UKIP or the BNP – if one thought the Conservatives were the bigger threat, it might make more sense to minimise their points score.
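
This ‘burying’ strategy can be made concrete. In the sketch below (an invented four-candidate profile, not election data), candidate C wins the sincere Borda count, but voter 1, who prefers A, flips the result by demoting C – their sincere second choice – to last place:

```python
def borda_scores(profile, candidates):
    """Borda count: with n candidates, 1st place earns n-1 points, last earns 0."""
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for place, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - place
    return scores

candidates = ["A", "B", "C", "D"]
sincere = [
    ["A", "C", "B", "D"],   # voter 1's true ranking: A first, C second
    ["C", "A", "B", "D"],
    ["C", "D", "A", "B"],
    ["A", "C", "D", "B"],
]
print(borda_scores(sincere, candidates))    # C: 10, A: 9 -- C wins sincerely

# Voter 1 'buries' C at the bottom of their reported ranking:
strategic = [["A", "B", "D", "C"]] + sincere[1:]
print(borda_scores(strategic, candidates))  # A: 9, C: 8 -- now A wins
```

By misreporting, voter 1 costs C two points and gets their favourite elected instead.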

The consequences of strategic voting are unclear. Sometimes co-ordinated strategic voting (e.g. log-rolling) can produce better outcomes for all involved, maybe even for everyone. Alternatively, winners can gain at the expense of losers, and if what they gain is less than others lose the overall consequences can be bad. Aside from the overall consequences, the purpose of strategic voting is for one person to get what they want when they otherwise wouldn’t have. This distorts the winners and losers, and some have worried it is unfair. In theory, perfectly informed strategic voters on either side can cancel out, but in practice the worry is that those who are better informed are more likely to get their way, being better able to manipulate the system. Some hold that this violates voter equality; however, it isn’t entirely obvious that this is so – everyone has a vote, and the fact that voters in fact make differential use of it doesn’t show that it isn’t of equal potential value.

In any case, it is fairly easy to avoid strategic voting, at least if we are willing to give up other possible requirements. Mackie quotes Hinich and Munger’s summary of the Gibbard-Satterthwaite theorem, but then goes on to point to how this conclusion can be avoided by a system like lottery-voting. Here voters express only first preferences, and since every vote for A increases A’s chances, there is no possible way a voter can be better off by voting for B if he sincerely prefers A. Riker too notes the possibility of probabilistic solutions, but claims they violate the independence axiom. (It is not clear to me why he thinks this, or that it is necessarily a greater problem than the manipulability it overcomes. It is, however, true that such procedures could be said to violate Riker’s further ‘citizen sovereignty’ axiom – since the outcome depends not only on citizens’ votes but also on a randomising device.)
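
Why lottery-voting is strategy-proof can be checked directly: the winner is a single ballot drawn at random, so each option’s chance of winning is just its vote share, and moving your vote away from your favourite can only lower your favourite’s probability. A minimal sketch, with hypothetical ballots:

```python
from fractions import Fraction

def win_chances(ballots):
    """Lottery voting: one ballot is drawn at random and its choice enacted,
    so each option's winning probability is simply its share of the ballots."""
    n = len(ballots)
    chances = {}
    for choice in ballots:
        chances[choice] = chances.get(choice, Fraction(0)) + Fraction(1, n)
    return chances

others = ["A", "B", "B", "C"]   # however the other four voters happen to vote

# If I sincerely prefer A, voting A gives A a 2/5 chance of winning;
# any misreport (say, voting B instead) drops A's chance to 1/5.
print(win_chances(others + ["A"])["A"])         # 2/5
print(win_chances(others + ["B"]).get("A", 0))  # 1/5
```

Since this holds whatever the other ballots look like, sincere voting is always a best response – exactly the property Gibbard-Satterthwaite shows ordinary deterministic procedures must lack.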

Wednesday, November 09, 2005

Monty Python Character

You are the Abuse Clerk. You dish out verbal abuse all day long as the customer keeps paying. AAH, what satisfying work!

What Monty Python Sketch Character are you?
brought to you by Chris Brooke

Tuesday, November 08, 2005

Amusing Stupidity

One problem with democracy is that everyone gets an equal vote: you, me, and even complete idiots. I suspect the individual involved here is below the current voting age, but this is still surely the funniest thread of the century.

Someone by username of 'ChelseaUnbeatable' was posting wind-ups on the BBC's Liverpool messageboard. He changed it to 'ChelseaWin99%OfGames' in light of recent events, but was challenged as follows:

Screen name still wrong. As you've played 17 games this season (12 in the EPL, 1 in CC, 4 in CL) and lost 3 then ChelseaWin87percentofTheirGames is more appropriate. [message 17]

His reply was:

I don't know how to do percentages but we went on a winning streak at the beginning of season it was 100% now it has dropped about 1 percent to 99% with
recent blip, common sense. Even the government fiddle figures, but the facts speak don't lie. [message 23]

Before going on to say:

I meant last year we won more silverware, in future all the silverware is ours too you will just have old grotty rusty silverware all ours is brand new fresh shiny silverware from the 20th century. [message 31]

When it was pointed out to him that we're actually in the 21st century, his initial reaction was:

We are in the 20th century you muppet 20th is 2000 onwards, 21st is 2100 onwards just add two zero [message 38]

Before eventually recanting:

I just checked on Google yes we are in the 21st century, I thought you add two
zero's [message 49]
Despite this, however, a seeming Man Utd fan leapt (surprisingly) to his defence:

we are in the 20th century until 2009 and then when we change to 2010 we are then in the 21st century. [message 103]

It's funnier because it's a Chelsea fan, but whatever age he is (or century he's in) I worry about the state of our education system, not to mention democracy...

Monday, November 07, 2005

Ranking and Comparison

Today I was at a paper by Keith Dowding all about luck egalitarianism, expensive tastes and utility functions. He argued that if two people are identical in external behaviour, we have no reason to posit different utility functions.

The problem is one of comparing different people. It’s like chalk and cheese.

It was with this in mind I was interested to see that Liverpool are officially ranked as the number one club side in the world. The methodology of this ranking (essentially assigning points for wins/draws in the usual way, weighted according to perceived difficulty of the competition) is explained here. At the bottom, it says

The World Club Ranking is a precise classification showing the real level of the clubs free of any subjective influence.
How ironic…

Saturday, November 05, 2005

A Villa 0-2 Liverpool

Another good result for Liverpool today. I didn’t catch much of the game, being at the conference, but I was able to follow the last 30 minutes on the BBC website, and see the goals on the news tonight.

By all accounts, not a vintage performance – attacking-wise at least (Carragher was apparently outstanding, as ever) and 2-0 probably flattered us slightly. Villa were caught out at the end, as we have been before (e.g. against Chelsea). And I’d take a poor performance and 3 points over a good performance and one point (as in our opening game at the Riverside) any day.

Milan Baros didn’t come back to haunt us, whereas Crouch – while missing probably the easiest chance of the game – won the penalty and was involved in Alonso’s goal.

Going to my mate Rob’s house party tonight to watch fireworks, so that’s it for now. More conference tomorrow…

Conference / Philosopher's Humour 2

Today I'll be spending most of the day at the Philosophy Faculty's Graduate Conference. In the meantime, the latest amusing sayings, from yesterday's Law and the State (legal philosophy) seminar:

1. After someone had made a point by referring to "that case where..." John Gardner's response was "That's the first time a case has ever been mentioned in this class. And it's very appropriate that you didn't even know the name..."

2. John Gardner again, this time his comment "That's unusual" after the presenter had said "The reason I like [Jim] Harris is that he is a lawyer".

From today's conference:

[Michael Ferry on Reasons and Supererogation]

1. If you're allowed to count costs to yourself twice (from a neutral and a personal perspective), we don't think you're morally required to throw yourself on a grenade to save one other person and a cat. "As long as you're worth more than a cat"

2. "It's not social in the sense that it involves other people. Ok, that's inconsistent, but you know what I mean."

And an interesting quote I just came across here: "The nation that makes a great distinction between its scholars and its warriors will have its thinking done by cowards and its fighting done by fools"-Thucydides.

Friday, November 04, 2005

Arrow's Theorem 2 - Evaluation

Here's the second part...

Since Arrow’s impossibility theorem is a proven theorem, we can’t say it’s wrong. It can be taken as showing that there’s no such thing as a perfect SWF or choice procedure[1], if we accept Arrow’s four conditions. We can, however, question or weaken these axioms.

Non-dictatorship is also relatively uncontroversial, so long as we are dealing with democracy. Once one person always determines the ‘social’ ordering – like an absolute monarch – we seem to have left behind the area we’re interested in[2].

Arrow’s Pareto requirement is so weak that it doesn’t seem objectionable. If everyone unanimously prefers x to y, it would be very strange for this not to be society’s ordering. This, however, assumes that the only way for a social outcome to be better is for it to be better for the people in it. One might argue a situation can be better, though better for no one – for example, by being more equal or in keeping with desert[3]. It’s very hard for a social decision mechanism (as opposed to betterness ranking) to take such factors into account. It would seem to incorporate citizens’ ‘external preferences’ – i.e. those they have over how well off others are. For practical purposes, it seems we must assume if everyone is better off then the outcome is socially preferred, even if it might not be from the external viewpoint of an impartial spectator.

More plausible lines of attack might be found against Arrow’s independence or universal domain requirements.

The latter seems obviously democratic. As Riker says, “Any rule or command that prohibits a person from choosing some preference order is morally unacceptable (or at least unfair) from the point of view of democracy”[4]. In fact, however, this allows some seemingly unlikely or ‘irrational’ preferences. In particular, it permits individual inputs to be intransitive[5]; thus it is hardly surprising that we have difficulty producing a transitive social ordering, even if there were total unanimity of preference orderings!

Further, liberals have argued there are normative justifications for excluding certain preferences, e.g. those based on religious doctrines that do not pass the standard of ‘public reason’ (Rawls), or those over how other people should behave in self-regarding matters (Mill, Dworkin, Sen). Perhaps another possibility is that prior deliberation, bringing out everyone’s preferences and concerns, will allow us to reach acceptable compromises and in effect restrict the domain. Greater homogeneity of preferences makes Arrovian results far less likely[6].

The other option is to weaken the independence of irrelevant alternatives requirement, as we spoke about two weeks ago (Borda vs. Condorcet). It might be fair enough to suppose that choice between x and y should depend only on their relative merits, not on, for example, z. This, however, rules out methods like the Borda count that use such information as a proxy for intensity. Arrow restricted information to ordinal rankings out of a scepticism about interpersonal comparison[7]. Thus Sen argues, “the impossibility can be seen as resulting from combining a version of welfarism ruling out the use of non-utility information [see above comments on Pareto] with making the utility information remarkably poor (particularly in ruling out interpersonal comparisons)”[8].
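
Why the Borda count violates independence can be shown with a small hypothetical profile (my own invented numbers): with z on the ballot, x outscores y; delete the ‘irrelevant’ z and recount, and y now beats x.

```python
def borda_scores(profile, candidates):
    """Borda count: with n candidates, 1st place earns n-1 points, last earns 0."""
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for place, c in enumerate(ranking):
            scores[c] += n - 1 - place
    return scores

# Five hypothetical voters, best first.
profile = [
    ["x", "z", "y"],
    ["x", "z", "y"],
    ["y", "x", "z"],
    ["y", "x", "z"],
    ["y", "x", "z"],
]
print(borda_scores(profile, ["x", "y", "z"]))  # x: 7, y: 6 -- x wins

# Remove z from every ballot and recount over the remaining two candidates:
without_z = [[c for c in r if c != "z"] for r in profile]
print(borda_scores(without_z, ["x", "y"]))     # x: 2, y: 3 -- now y wins
```

The x vs. y ranking flips even though no voter changed their mind about x and y – z’s presence mattered because the points x earns for beating z stand proxy for intensity.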

I would contend the dogma that interpersonal comparisons are meaningless or impossible is fairly obviously false. Think of someone you know who’s happy – who has a job they enjoy, plenty of money, a loving family, etc. Now think of a starving, abused orphan child somewhere in the Third World. Ask yourself who has higher utility… Admittedly, comparisons are to a certain extent subjective, and necessarily vague. You certainly can’t quantify how much better off the former is than the latter, but it is still possible to make comparisons.

Sometimes when a small group are making decisions, they can take into account that a certain course has more effect on one than another. This might apply, for example, in a group of three flatmates, or Brian Barry’s example of five people in a train carriage, trying to decide whether or not it allows smoking – if one is asthmatic, his interest might hold sway even if numerically out-voted[9]. This is democratic – it treats each equally, but in doing so recognises inequalities between them.

So it seems sometimes egalitarian decision procedures can operate, on the basis of either a narrower range of preferences than Arrow allows, or if there is consensus over intensities. These are not, however, conditions that necessarily hold when it comes to decision-making in a large, heterogeneous democratic state. What can we do then?

Arrow’s result demonstrates an impossibility. If we find all his axioms normatively compelling, we will be disappointed to learn that we cannot satisfy them together. The search now is not for a ‘perfect’ system, but the best we can do.

In this case, I offer lottery-voting. This satisfies U, P and I, as any preferences can be expressed, and the chance of any option winning depends on the number of votes it gets. This could be called a ‘random dictator’ model, suggesting that it fails condition D. It is, of course, the case that just one person preferring y to x could be picked, yielding such a result against everyone else’s opposition (note this satisfies weak Pareto, as that’s a unanimity requirement). Such random or changing dictatorship avoids what I think is the serious problem of one person always determining the social outcome, however.
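
The ‘random dictator’ worry can be put in numbers with a toy sketch (invented figures): with 99 voters for x and one for y, lottery-voting still gives y a 1-in-100 chance of winning against everyone else’s opposition.

```python
from fractions import Fraction

ballots = ["x"] * 99 + ["y"]   # hypothetical electorate of 100 voters

# Each ballot is equally likely to be drawn, so the lone y-voter is
# 'dictator' with probability 1/100 -- possible, but appropriately rare.
p_y = Fraction(ballots.count("y"), len(ballots))
print(p_y)  # 1/100
```

The point is that while any given draw may install a ‘dictator’, no one person is guaranteed to determine the outcome across decisions, which is what condition D is meant to rule out.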

The problem with lottery-voting is that it doesn’t produce a stable welfare function. One can get different results from the same preference profile on different occasions, and intransitivities or inconsistencies between outcomes. Shepsle and Bonchek comment that “There is, in social life, a tradeoff between social rationality and the concentration of power”[10]. While this might be an objection to individual rational choice, it’s not so clear we should expect such consistency from social choices, as societies are essentially pluralistic, not single actors. As Mueller notes, “Although obviously arbitrary, the general popularity of random decision procedures to resolve conflictual issues suggests that “fairness” may be an ethical norm that is more basic than the norm captured by the transitivity axiom for decisions of this sort”[11].

[1] E.g. P. Samuelson (1977) Collected Scientific Papers IV “what Kenneth Arrow proved once and for all is that there cannot possibly be found… an ideal voting scheme. The search of [some] great minds of recorded history for the perfect democracy, it turns out, is the search for a chimera, for a logical self-contradiction” pp.935, 938. Quoted G. Mackie (2003) Democracy Defended p.10.
[2] T. Pratchett (1987) Mort “Ankh-Morpork had dallied with many forms of government and had ended up with that form of democracy known as One Man, One Vote. The Patrician was the Man; he had the Vote” p.176, fn.
[3] Consider, for example, a modification of Larry Temkin’s ‘sinners and saints’ example. In world one, sinners have 2 units of good and saints 10. In world two, sinners have 12 and saints 11. Everyone is better off in world two, but we might all things considered prefer world one.
G. E. Moore’s view is that anything intrinsically valuable is valuable independently of human interaction or appreciation. Thus the world would be a better one if there was breathtaking natural scenery on Pluto, even though no sentient creature would ever see it.
[4] Riker (1982) p.117.
[5] See D. Saari (1994) Geometry of Voting p.327.
[6] For the deliberative response, see D. Miller (1992) ‘Deliberative Democracy and Social Choice’ in Political Studies 40 [reprinted in D. Estlund (ed.) (2002) Democracy]. On homogeneity, see G. Mackie (2003) especially pp.47-55.
[7] G. Mackie (2003) “Historically, Arrow’s theorem is the consequence of noncomparabilist dogma in the discipline of economics, that it is meaningless to compare one person’s welfare to another’s, that interpersonal utility comparisons are impossible” p.8.
[8] A. Sen (1982) p.330.
[9] B. Barry ‘Is Democracy Special?’ in his (1991) Democracy and Power p.38. This fits Fleurbaey’s argument for democratic votes to be weighted according to the interests one has at stake. Note, however, that even if we can agree the relative interests people have at stake, we don’t necessarily want to accord with the person who has most at stake. The convicted criminal, for example, may lose more for going to prison than any of us – individually or collectively – gain by his doing so. (But then, this is not simply a democratic decision; it brings in independent requirements of justice).
[10] K. Shepsle & M. Bonchek (1997) Analyzing Politics: Rationality, behavior and institutions p.67. Quoted in G. Mackie (2003) p.14.
[11] D. C. Mueller (2003) Public Choice III p.588.

Agnostic Mythology

I went to see my friend Karl play at the Phoenix Picturehouse last night. He certainly is a 'folk-punk' legend. Check out his personal site (linked above), which includes MP3s and lyrics.

Thursday, November 03, 2005

Arrow's Theorem - Proof

I decided I'd once again split my Theory of Voting paper into two more manageable parts. 'Enjoy':

Prove and Evaluate Arrow’s Theorem

Arrow’s theorem states that no (transitive and complete) Social Welfare Function (SWF) satisfies four desirable conditions:
U: universal domain – all preference orderings are possible
P: weak Pareto principle – “if every individual in a society prefers x to y, the social choice procedure should pick x over y”[1]
I: independence of irrelevant alternatives – choice between x and y depends only on voters’ rankings of x and y, not comparison to z.
D: non-dictatorship – there is no voter such that when xPiy society prefers x (i.e. xPy) whatever other voters’ preferences.
This is because, assuming a finite number of voters, conditions U, P and I imply a dictator.

Suppose group M prefers x to y, and group N prefers y to x. Suppose x is the social preference (it doesn’t matter which way round this is done; the argument can be run with all variables reversed). Group M is almost decisive over this pair, where almost decisiveness means they determine the social preference when all others (here, N) are opposed[2].

Theorem 1: If M are almost decisive over one pair, they are over all pairs.
Suppose M rank (a, x, y, b) and N rank (y, b, a, x). We know xPy (since M are almost decisive over x and y, and N oppose them). We must show this yields aPb for the arbitrary pair (a, b). Now a is socially better than x, and y is socially better than b – both by Pareto, since every voter agrees on those rankings. From aPxPyPb, we have aPb by transitivity; and since a and b were arbitrary, M are almost decisive over every pair.
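Set out explicitly, the chain of social preferences runs (this block is just a restatement of the proof step above, with a and b arbitrary):

```latex
\begin{align*}
aPx &\quad \text{(weak Pareto: every voter ranks $a$ above $x$)}\\
xPy &\quad \text{($M$ almost decisive over $(x,y)$; $N$ opposed)}\\
yPb &\quad \text{(weak Pareto: every voter ranks $y$ above $b$)}\\
\therefore\; aPb &\quad \text{(transitivity)}
\end{align*}
```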

Theorem 2: If M are almost decisive over a pair, they are decisive over that pair.
It would certainly be odd for M to be almost decisive but not decisive – it would mean their choice is the social choice only when opposed, but not when unopposed, which would imply a perversity about social responsiveness to N’s preferences.

Suppose M rank (x, z, y) and N rank z top (with any preferences between x and y). By Pareto, zPy, since everyone ranks z above y. By Theorem 1, M’s almost decisiveness over one pair extends to all pairs, so in particular xPz (M rank x above z, N the reverse). From xPzPy, we get xPy by transitivity. Thus xPy holds irrespective of how members of N rank x and y – so, by independence of irrelevant alternatives, M are decisive over this pair.
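The chain of inferences, set out explicitly:

```latex
\begin{align*}
zPy &\quad \text{(weak Pareto: every voter ranks $z$ above $y$)}\\
xPz &\quad \text{(Theorem 1: $M$ almost decisive over $(x,z)$; $N$ opposed)}\\
\therefore\; xPy &\quad \text{(transitivity, whatever $N$'s rankings of $x$ and $y$)}
\end{align*}
```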

So far: if M are almost decisive over one pair, they are almost decisive over all pairs (Theorem 1), and if they are almost decisive over a pair, they are decisive over it (Theorem 2) – thus if they’re almost decisive over a single pair, they’re decisive over all pairs. This establishes that a group can be ‘dictatorial’ – which is obvious and uncontroversial in one sense, as Arrow assumes the whole group are dictatorial in this way[3]. The problem comes when smaller groups can dictate over the rest, irrespective of the rest’s preferences. However:

Theorem 3: If M is a decisive group (any size over one), there must be a subgroup of M that is decisive without the rest.
Let us subdivide M into M1 and M2. M1 rank (x, y, z) and M2 (y, z, x) – with the others, N, still ranking (z, x, y). Since M are assumed decisive over y and z, then yPz[4].

It’s either the case that yPx or xRy (where xRy means ‘x is at least as good as y’, i.e. xPy or xIy). If yPx, then M2 are almost decisive over x and y, since only they prefer y to x. If xRy, then from xRyPz we can conclude xPz by transitivity, and thus M1 are almost decisive over x and z, since only they prefer x to z. Either way, some subgroup of M, either M1 or M2, is bound to be almost decisive – and hence, by Theorems 1 and 2, decisive.
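With the profile M1: (x, y, z), M2: (y, z, x), N: (z, x, y), the two cases can be tabulated:

```latex
\begin{align*}
&yPz \quad \text{(all of $M$ rank $y$ above $z$, and $M$ is decisive)}\\
&\text{Case 1: } yPx \;\Rightarrow\; M_2 \text{ almost decisive over } (x,y)
  \text{, since only $M_2$ rank $y$ above $x$}\\
&\text{Case 2: } xRy \;\Rightarrow\; xPz \text{ (from } xRyPz\text{)}
  \;\Rightarrow\; M_1 \text{ almost decisive over } (x,z)
  \text{, since only $M_1$ rank $x$ above $z$}
\end{align*}
```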
If we keep repeating this division of the decisive group to its logical limit – i.e. a single decisive person – we have violated non-dictatorship. That is, “the set of all members of society is decisive (Condition P). But this set can always be partitioned in such a way that one of the subsets is decisive (by Theorem 3) unless the decisive subgroup has only one member, which violates Condition D.”[5]

[1] I. McLean (1987) Public Choice: An Introduction p.173 (I have italicised the variables for consistency).
[2] Note this is logically weaker than decisiveness, which requires M determine the outcome, whatever N’s preferences (i.e. in cases where N are opposed and cases where they aren’t).
[3] I. McLean (1987) p.176/A. Sen (1982) ‘Personal Utilities and Public Judgements: or What’s Wrong with Welfare Economics?’ in his Choice, Welfare and Measurement p.334, see below. W. Riker (1982) Liberalism against Populism labels this ‘citizen sovereignty’, though for the reason just explained it is unnecessary as a separate axiom.
[4] All in M prefer y to z, so it follows from Pareto that the group (M) does and thus so does society.
[5] McLean (1987) p.176.

Wednesday, November 02, 2005

Philosopher's Humour

I've decided I'll try to update semi-regularly (if not frequently) with little things from my studies that amuse me - whether amusing examples or paradoxes, things said in seminars, or whatever. Now I have a few readers, I might as well entertain them (or should that be 'you', assuming you're reading...?)

Here are the first two from this week (all are paraphrases):

Monday 31/10/05 Moral Philosophy Seminar
John Broome accuses Doug MacLean (the author over at Left2Right) of an intransitivity in his betterness ranking, to which MacLean responds "I think we're using 'equal to' and 'better than' in different ways", to which Broome replies "Well, I'm using them in ways conformable to logic, that's true".

Wednesday 2/11/05 Saving the Greater Number seminar
After discussing David McCarthy's ingenious mathematical tricks - which I won't pretend to understand, but which can be found at the Equality Exchange - one participant said "I think he just came up with this nice mathematical thing, and then was looking for an ethical problem where he could use it"

And a bonus one from a couple of weeks ago - my friend Kieran to my friend Rob - "I love the fact that the second [hideously complicated] thing you just said was supposed to clarify the first"

Liverpool 3-0 Anderlecht

It’s hardly a surprise that Liverpool waited for another European night to turn on the style – it seems to be their habit of late. Still, after a fairly solid performance against West Ham, this comfortable 3-0 will have done them a power of good.

Morientes and Crouch both missed a few chances, but at least they were getting into the positions to take them – and the way the former took his goal, he didn’t look like a striker on a barren run. The defence were solid, with the full backs getting forward well, and Garcia scoring another impressive goal.

The introduction of Cisse to exploit the inevitable space in the Anderlecht defence made perfect sense. I was happy to see him score – straight through the ‘keeper’s legs, suggesting either that he can place the ball after all, or at least that he’s lucky – both handy qualities! Several times I thought he linked up quite well with Kewell too, which is a sign of a promising relationship we’d only seen glimpses of before.

Of course, it wasn’t perfect. Perhaps the biggest disappointment is that Chelsea’s shock defeat – while always nice – means it isn’t quite job done. That’s a shame; I was hoping we’d be able to concentrate on domestic football, and maybe even blood some youngsters in our last two group games. As it is, a point at home to Betis should do it – or if we lose to them, we’d travel to Stamford Bridge needing a win but knowing we could eliminate Chelsea – an enticing prospect, but not one I want to rely on…

Best not to worry too much about results elsewhere. Tonight’s 3-0 was a strong performance, putting us in a good position – clear above Chelsea in a league table for the first time in a while!

Tuesday, November 01, 2005

Oswald Hanfling RIP

I regret to report that I only learned today of the sad death of Prof Oswald Hanfling last Tuesday (25/10/05). Just the night before he'd been contributing fully to the Oxford Moral Philosophy Seminar. His presence will be missed.