Tuesday, March 3, 2015

Interview on Robot Overlordz Episode 150: Tech Unemployment and Enhancement




I have the good fortune of being a guest on the Robot Overlordz podcast this week. Some people might be interested in checking it out. The link is here.

Robot Overlordz is a biweekly podcast, hosted by Mike Johnston and Matt Bolton. As they say themselves:

Robot Overlordz is a podcast about the future. On the show, we take a look at how society is changing. Everything from pop culture reviews to political commentary, technology trends to social norms. All in about 30 minutes or less, every Tuesday and Thursday.

I recommend checking out their website and listening in regularly to their show. In my episode, I discuss three things with Mike and Matt:


  • Tech Unemployment: Are human workers going to be replaced by robots? Is this happening now? Are fears about tech unemployment misplaced? And doesn't this simply point to a tension or contradiction inherent in capitalism?

  • Neuroenhancement and the Extended Mind: Could smartphones and wearable tech be a part of our mind? Is interfering with someone's iPhone ethically equivalent to interfering with someone's mind?

  • Enhancement in Sports and Education: Should we allow the use of enhancement technologies in sport or in education? Are traditional methods of educational assessment fit for purpose?


Saturday, February 28, 2015

Human Life and the Quest for Immortality


Romano - Allegory of Immortality

Human beings have long desired immortality. In his book on the topic, cleverly-titled Immortality, Stephen Cave argues that this desire has taken on four distinct forms over the course of human history. In the first, people seek immortality by simply trying to stay alive, either through the help of magic or science. In the second, people seek resurrection, sometimes in the same physical form and sometimes in an altered plane of existence. In the third, people seek solace through the metaphysical/religious concept of the soul as an entity that houses the essence of our personalities and which will live on beyond the death of our physical bodies. And in the fourth, people seek immortality through their work or artistic creations.

With the exception of the last of these forms, most versions of the quest for immortality share the belief that the immortal existence of the self — i.e. the human person — is something worth pursuing. But some philosophers reject this notion. They do so not because they wish to die or think that death is a good thing, but because they think that without death there is no possibility of a recognisably human life. That is to say: they believe that the quest for an immortal human life is incoherent.

One such philosopher is Samuel Scheffler. In his recent(ish) book Death and Afterlife, Scheffler tries to defend the claim that an immortal life would be no life at all. More precisely, he tries to argue that temporal scarcity is a condition of value in human life, and that without the “threat” of death, it would be difficult to make sense of our existence. In this post, I will try to outline Scheffler’s argument and consider its implications for those who seek to promote radical life extension.


1. What is an immortal life anyway?
One thing I have noticed in the debate about life extension and immortality is a tendency for the participants to talk past one another. This is chiefly because the participants often conceive of an “immortal life” or the quest for “immortality” in very different ways. It’s important that we try to avoid this mistake here.

Let’s suppose that there are four types of human life that we could be arguing about (I am aware that this fourfold distinction doesn’t exhaust the possibilities, but I think it suffices for now):

Ordinary Contingent Human Life: This is the kind of life we all currently lead. We are organic beings, whose bodies are susceptible to injury, disease and decay. We can stave off some of these existential threats, but eventually our bodies will give up and we will die. At present, we can expect to live (roughly) between 80 and 100 years. With medical advancements we might expect to increase that life expectancy (maybe even up to 150 years), but still we will eventually die.

Necessarily Immortal Human Life: This is the kind of life in which we continue to exist in something roughly equivalent to our current form, but we do so forever, without the risk or possibility of death. In other words, it is the kind of life in which we must continue to exist, irrespective of our wishes.

Contingently Immortal Human Life (Type 1): This is the kind of life in which we continue to exist in something roughly equivalent to our current form, and we do so with the continuing risk of death by injury or, maybe, some diseases. In other words, our bodies no longer decay or degrade over time, but they are still vulnerable to some external existential threats (perhaps the main one being the risk of fatal attack from other human beings).

Contingently Immortal Human Life (Type 2): This is the kind of life in which we continue to exist in something roughly equivalent to our current form, without the risk of death by injury or disease, but with the periodic option of ending our lives. In other words, it is the kind of life in which we are free from all existential threats, apart from “threats” realised by our own volition.

Let’s agree, for the sake of argument, that we want to escape the limitations of the first option and live forever. Which of the remaining three options do we hope for? In my experience, most life extensionists and scientifically-inclined immortalists argue for something like the third and fourth options, i.e. lives of indefinite duration with the lingering possibility of death. Most of them don’t really consider the second option. On the other hand, many religious believers seem more committed to the second possibility. The most obvious examples of this are those that believe in the traditional conceptions of heaven and hell, which often seem to require involuntary immortality.

So which type of existence is the subject of Scheffler’s argument? The answer is the second. It is the necessarily immortal human life that he deems to be an incoherent concept. This immediately limits the audience for his argument. Most life extensionists will be nonplussed by what he has to say because it doesn’t touch upon the sort of life they wish to live; religious believers (at least, those who are committed to the idea of immortality) will be rather more plussed by what he has to say.

I think it is important to acknowledge these limitations at the outset as it helps to avoid potential misinterpretations of Scheffler’s argument. That said, in making the case against the necessarily immortal life, Scheffler says some things about temporal scarcity and conditions of value that could have a (lesser) effect on contingently immortal lives. This is something worth bearing in mind.


2. The Argument for Incoherence
With that clarification out of the way, we can proceed to address Scheffler’s argument for the incoherence of a necessarily immortal life. Scheffler doesn’t present this with any degree of formality. Instead, he adduces a number of considerations and tries to informally draw out some conclusions. I’ll try to adopt a more formal approach here. I take it that the argument is something like this:


  • (1) Much of what is central to our conception of human life (including our conception of value in life) tacitly assumes that that life will come to an end, and/or is persistently vulnerable to existential threat.
  • (2) A necessarily immortal life is one that does not come to an end and is not persistently vulnerable to existential threat.
  • (3) Therefore, much of what is central to our conception of human life (including our conception of value) would be lost if we lived necessarily immortal lives.
  • (4) If a form of existence entails the loss of much of what is central to our conception of human life, it is not clear that that form of existence can be deemed “human”.
  • (5) Therefore, there may be no such thing as a necessarily immortal human life (i.e. the concept of a necessarily immortal human life may be incoherent).


The argument can be dragged in different directions once this main conclusion is reached. In particular, it can be used to support the claim that we ought not to desire a necessarily immortal life, or, perhaps more interestingly, to argue against certain religious doctrines that presuppose such an existence (Brian Ribeiro does this in his article “The Problem of Heaven”).

But I am not too interested in those possibilities. I am more interested in how Scheffler defends the main premises of the argument. In particular, I am interested in how he defends premise (1). I take it that the rest of the argument is relatively uncontroversial. Premise (2) is true by definition. (3) looks to be a valid conclusion from (1) and (2). (4) might be controversial because it is vague, but that can be corrected by having a more detailed account of what is taken from the concept of human life by immortality. That is something that the defence of (1) helps to provide. And, finally, (5) also looks to be a reasonable inference from (3) and (4).
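Since the validity claims above can be checked mechanically, it may help to see the inference spelled out. The following Lean sketch is my own toy reconstruction, not anything in Scheffler's book: the predicate names are invented, and it compresses "comes to an end and/or is persistently vulnerable to existential threat" into a single predicate `Bounded`. It shows that, granting (1), (2) and (4), conclusion (5) does indeed follow.

```lean
variable (Life : Type)
variable (Central Bounded NecImmortal Human : Life → Prop)

-- Premise (1): the features central to our conception of human life presuppose boundedness.
-- Premise (2): a necessarily immortal life is not bounded.
-- Premise (4): a life lacking those central features does not count as human.
theorem scheffler_conclusion
    (p1 : ∀ l, Central l → Bounded l)
    (p2 : ∀ l, NecImmortal l → ¬ Bounded l)
    (p4 : ∀ l, ¬ Central l → ¬ Human l) :
    -- Conclusion (5): no necessarily immortal life is a human life.
    ∀ l, NecImmortal l → ¬ Human l := by
  intro l hImm
  -- (3) follows as an intermediate step: such a life cannot have the central features.
  exact p4 l (fun hC => p2 l hImm (p1 l hC))
```

The philosophical weight, of course, falls entirely on whether the premises are true, which is why the defence of premise (1) matters so much.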

So let’s consider the defence of premise (1). In the book, Scheffler offers three reasons in support of premise (1). The first is:


  • (6) Our conception of life, and of success in life, is bound up with the notion that life has stages that come to an end.


Scheffler provides more detail on what he is talking about in the book. He notes how the standard conception of a human life has a finite duration, i.e. a beginning (birth) and an end (death). Between these two endpoints, the living person passes through a number of stages: childhood, adolescence, adulthood, and so on. These stages, and their durations, vary somewhat from culture to culture. Nevertheless, all cultures share the notion that life is broken down into distinct stages and that these stages come to an end. More importantly, our sense of accomplishment and satisfaction is often intimately linked to our conception of these stages. Thus, what counts as an achievement for a child (first words, learning to read) would not count as an achievement for an adult, and vice versa. As Scheffler puts it:

Our collective understanding of the range of goals, activities, and pursuits that are available to the person, the challenges that he faces, and the satisfactions that he may reasonably hope for are all indexed to these stages. The very fact that the accomplishments and satisfactions of each stage count as accomplishments and satisfactions depends on their association with the stage in question…
(Scheffler 2013, p. 96)

Scheffler’s point is that this division of life into stages, each with its own characteristic virtues and vices, would be lost if we lived necessarily immortal lives.

So much for the first reason. The second reason Scheffler offers is linked to the concepts of loss, illness, injury and so on:


  • (7) Concepts such as loss, illness, injury, harm, health, gain, security, safety (and so on), all of which are central to how we understand value in life, derive a good deal of their content from the assumption that life is temporally limited.


This is probably a more significant claim than the first since it focuses directly on things that are deemed to be of value (or disvalue) in human life. Scheffler doesn’t offer much in the way of support for this claim. He simply points out that much of human life is spent trying to avoid things like loss, illness, injury and harm, and trying to pursue health, gain, security and safety. And then adds that these concepts “derive much of their content from our standing recognition that our lives are temporally bounded, that we are subject to death at any moment, and that we are certain to succumb to it in the end.” (p. 97) Since that temporal boundedness is lost in a necessarily immortal life it follows that such a life would consist in a radically altered set of values. (It should also be added that if you are necessarily immortal you would presumably be free from many physical limitations and needs, e.g. hunger and thirst).

Arising from this is Scheffler’s third reason for supporting premise (1). This reason has to do with human planning and decision-making:


  • (8) Much of human decision-making and planning only makes sense against a background assumption of temporal scarcity.


As I say, this arises from the same set of considerations as the previous reason, and so it may not be fair to treat it as a distinct ground for supporting premise (1). Still, I think it is worth separating it out because the planning and making of decisions (sacrifices, choices etc.) is central to human existence and may well be radically altered in a necessarily immortal life. The reasoning would be, roughly, that whenever we plan or decide to do something we do so on the basis that we must “give up” something (what economists call the “opportunity cost”). The presence of that opportunity cost lends some normative significance to our decision-making and adds to our sense of urgency and motivation. These things would be lost if we lived forever because we would always have a second chance (or a third or fourth or fifth…). Some people might welcome this fact, but even still it would make for a very different type of existence. (I wrote about this argument in much more detail before).


3. Concluding Thoughts
So what are we to make of all this? Let me close with two reflections on Scheffler’s argument. First, I accept most of Scheffler’s argument and I think it says something important about the desire for a (necessarily) immortal life. For example, I accept that a necessarily immortal life would be free from many of the limitations that currently shape our conception of what is or is not of value. Consequently, I am largely persuaded by his use of (7) and (8). I am less persuaded by his use of (6). It seems conceivable to me that an immortal life could still be broken down into stages of finite duration. Maybe it would be more difficult to then associate those stages with distinctive accomplishments and satisfactions, but sufficient ingenuity may make it possible. That said, I believe this to be a minor point. The larger point is that a necessarily immortal life would be radically different from what we currently have, and there is no doubt that much of our current understanding of value would be altered. Although this might not make the desire for a necessarily immortal life incoherent, it may make it silly or misconceived: such a life cannot hope to preserve the things we value about our present lives.

Second, although I accept he was not arguing about contingently immortal lives, I think it is worth asking how much of his argument would carry over to such lives. It seems fair to say that some of it would. After all, a contingently immortal life would reduce at least some of the existential threats that Scheffler thinks are essential to our conception of a human life. For instance, it would force some restructuring of the stages of life and their associated accomplishments and satisfactions (I considered this before). Likewise, with a contingently immortal life of the second type (i.e. one free from all involuntary existential threats) things like loss, illness, harm, safety, health and so on would be deeply affected. How exactly they would be affected is difficult to say. There would still, presumably, be some values that are independent of existential threats (e.g. the intrinsic value of pleasure, or of enhancing theoretical knowledge), but we may find that a good deal of our values are lost or radically altered. We may also, of course, acquire new values that compensate for these losses. But there is something of a bet taking place: we risk trading one set of familiar values for another, less familiar, set.

Anyway, that’s all I have to say for now.

Sunday, February 22, 2015

Two Interpretations of the Extended Mind Hypothesis




I’m trying to wrap my head around the extended mind hypothesis (EMH). I’m doing so because I’m interested in its implications for the debate about enhancement and technology. If the mind extends into the environment outside the brain/bone barrier, then we are arguably enhancing our minds all the time by developing new technologies, be they books and abacuses or smartphones and wearable tech. Consequently, we should have no serious principled objection to technologies that try to enhance directly inside the brain/bone barrier.

Or so some have argued. I explored their arguments in a previous post. In this post, I want to do something a little different. I want to consider how exactly one should interpret the claim that the human mind can extend into the external environment. To do this, I’m calling upon the help of Katalin Farkas, who has recently written an excellent little article entitled “Two Versions of the Extended Mind”. In it, she argues that there are two interpretations of the EMH, both extant in the current literature. The first makes a wholly plausible and, according to Farkas, uncontentious claim that can be endorsed by pretty much everyone. The second is rather more contentious and, arguably, more significant.

In the remainder of this post, I will go through both interpretations.


1. Some Key Concepts
Before I get to those interpretations, I need to run through a few key conceptual distinctions. First, I need to distinguish between different types of mental event. We all know what the mind is: it is that “thing” that thinks, feels, believes, perceives, dreams, and intends. Mental events are the events that happen within the mind, i.e. the thinking, feeling, believing, perceiving and so on. By describing it in this way, I do not mean to rule out the possibility that the mind is itself an extended set of events (and so not a “thing” per se). The mind could well be an extended set of events and still consist of sub-events like believing, perceiving, dreaming, intending and so forth.

Anyway, although there are many different mental events, they seem to fall into two broad categories:

Events in the Stream of Consciousness: As the name suggests, these are the mental events that form part of the subject’s occurrent conscious life. They include things like the taste of chocolate, the feeling of warmth, the perception of red and so on.
Standing Events: These are mental events that need not form part of the subject’s occurrent conscious life. The classic examples are beliefs and desires, which are generally taken to characterise a subject even when they are not directly conscious of them (they are taken to be dispositions). For example, I can be said to “desire a meaningful relationship with my children” even when I am asleep and not consciously representing that desire. (Farkas refers to these as standing “states” not “events”; her terminology may be more correct)

This distinction turns out to be important when it comes to understanding what is being “extended” when we talk about the extension of the mind. That is to say, it is important when it comes to understanding the content of the mental extension. It is, however, less important than the next distinction when it comes to understanding the two competing interpretations of the EMH.

That next distinction arises from the functionalist theory of mind. According to that theory, whether something counts as a mental event or not depends on the role that it plays in fulfilling some function. Thus, for example, something counts as a “belief” not because it is made of a particular substance (res extensa or res cogitans) but because it has a particular role in an overarching mental mechanism. Thus, it can be said to count as a belief because it is capable of producing certain conscious states, action plans and decisions.

Functionalists distinguish between two things that are necessary for mental events/states:

The Mental Realiser: This is the object or mechanism that realises (i.e. constitutes) the mental event. In other words, it is the physical or mental stuff that the event is made out of.
The Mental Role: This is the position (or locus of causal inputs and effects) that something occupies in the mental system.

This distinction is important when it comes to understanding the method or nature of mental extension. In fact, a very simple way to understand Farkas’s main contention is that when it comes to extending the mind, there is a significant difference between realiser-extension and role-extension. The former is trivial, and can arguably be embraced by non-functionalists. The latter is more significant. Let’s try to see why.
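The realiser/role distinction maps neatly onto a familiar programming idea: an interface (the role) versus the particular object that implements it (the realiser). The sketch below is my own analogy, not anything from Farkas's paper, and all the class and method names are invented for illustration. The point it makes is the functionalist one of multiple realizability: downstream behaviour depends only on the role something plays, not on what it is made of.

```python
from abc import ABC, abstractmethod

class MemoryStore(ABC):
    """The mental ROLE: anything that can be queried for stored information."""
    @abstractmethod
    def recall(self, query: str) -> str:
        ...

class NeuralMemory(MemoryStore):
    """One REALISER: the ordinary brain-based store."""
    def __init__(self):
        self._traces = {"MoMA": "53rd Street"}
    def recall(self, query: str) -> str:
        return self._traces[query]

class NotebookMemory(MemoryStore):
    """A different REALISER: an external store filling the same role."""
    def __init__(self):
        self._pages = {"MoMA": "53rd Street"}
    def recall(self, query: str) -> str:
        return self._pages[query]  # read off a page rather than introspected

def find_exhibition(memory: MemoryStore) -> str:
    """Behaviour downstream of the role is blind to the realiser."""
    return memory.recall("MoMA")

# Both realisers occupy the same role, so the system behaves identically:
print(find_exhibition(NeuralMemory()))    # 53rd Street
print(find_exhibition(NotebookMemory()))  # 53rd Street
```

On this analogy, realiser-extension swaps in a new implementation of an existing interface, whereas role-extension (discussed below) loosens the interface itself, i.e. what it takes for something to count as a `MemoryStore` at all.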


2. The Trivial Interpretation: Extending Mental Realisers
As mentioned in the introduction, the gist of the extended mind hypothesis is that the mind can extend out beyond the brain/bone barrier. There may be sound evolutionary reasons for our minds to be limited to that space, but according to proponents of the EMH there is simply no good, in-principle, reason to suppose that the mind has to remain confined to the three-and-a-half-pound lump of squidgy biomass that we call the “brain”.

The easiest way to interpret that claim is to interpret it as a claim about mental realisers, i.e. as a claim that mental realisers can extend beyond the skull:

Extended Mind Hypothesis (1): The physical basis for mental events can extend beyond the boundaries of our organic bodies.

Farkas appeals to an example used by Andy Clark (one of the original proponents of the extended mind hypothesis) to illustrate this version:

Diva’s Case: There is a documented case (from the University of California’s Institute for Nonlinear Science) of a California spiny lobster, one of whose neurons was deliberately damaged and replaced by a silicon circuit that restored the original functionality: in this case, the control of rhythmic chewing. (...) now imagine a case in which a person (call her Diva) suffers minor brain damage and loses the ability to perform a simple task of arithmetic division using only her neural resources. An external silicon circuit is added that restores the previous functionality. Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system. 
(Clark, 2009 - quoted from Farkas)

Here, the hybrid bio-technological system constitutes an extended mental realiser for the performance of mental arithmetic. The word “constitutes” is important. The claim is not merely that the extended system causally precedes the mental event; it is that the extended system either is or grounds the mental event. What’s more, this claim applies to standing states just as much as it applies to events in the stream of consciousness. Indeed, in Diva’s case it is a mental event in the stream of consciousness that is getting its realiser extended beyond the brain/bone barrier.

Farkas argues that this version of the EMH is fairly trivial. Indeed, she goes so far as to say that non-functionalists and some dualists may be able to embrace it. All it is saying is that if some physical realiser is necessary for mental events (and many theories of mind accept that a physical realiser is necessary, even if it is not sufficient) then there is no reason to think that the realiser has to be made up of neurons, or glia or whatnot. Not unless you think that neurons have some magical mentality-exuding stuff attached to them.


3. The Significant Interpretation: Extending Mental Roles
The more significant interpretation of the EMH claims that more things can count as performing a mental role, even when they seem remarkably distinct from what traditionally seems to perform that role. As Farkas sees it, this is largely a claim about what counts as a standing mental state and, more precisely, as a claim about the possibility of extending the set of things that can count as a standing mental state.

Extended Mind Hypothesis (2): “the typical role of standing states can be extended to include states that produce conscious manifestations in a somewhat different way than normal beliefs and desires do.”

It will take a little longer to understand this version, but we can start by looking at the most famous thought experiment in the debate about the extended mind. This is the Inga vs Otto thought experiment from Clark and Chalmers’s original 1998 paper:

Inga and Otto: Imagine there is a man named Otto, who suffers from some memory impairment. At all times, Otto carries with him a notebook. This notebook contains all the information Otto needs to remember on any given day. Suppose one day he wants to go to an exhibition at the Museum of Modern Art in New York but he can’t remember the address. Fortunately, he can simply look up the address in his notebook. This he duly does, sees that the address is on 53rd Street and attends the exhibition. Now compare Otto to Inga. She also wants to go to the exhibition, but has no memory problems and is able to recall the location using the traditional, brain-based recollection system.

The essence of Clark and Chalmers’s original paper was that there is no significant difference between what happens in Otto’s case and what happens in Inga’s case. They can both be said to “believe” that the Museum of Modern Art is on 53rd Street, prior to “looking up” the information. It just so happens that in Otto’s case the recollection system extends beyond his brain.

Farkas argues that this is a very different type of extension when compared with that of realiser-extension. In this instance, it is not simply that the notebook replaces the brain with a functionally equivalent bio-technological hybrid; it is that the notebook mediates the recollection process in a very different way. Consequently, to say that Otto’s mind extends into the notebook is to say that we should be more liberal in our understanding of what kinds of system can count as fulfilling a mental role.

To see this, it helps to consider some of the important differences between the recollections of Otto and Inga. First, note how they are phenomenologically distinct. Inga gains access to the relevant information through direct mental recall, not mediated by any other sensory process; Otto needs to literally see the information written down in his notebook before he can be said to “recall” it. Second, note how Inga’s “belief” is more automatically integrated with the rest of her mental system than Otto’s. If Inga learns that she got the wrong address, this will affect a whole suite of other beliefs and desires she might have had. In Otto’s case, learning that he has the wrong address will simply involve deleting the entry from his notebook and correcting it. This will not immediately affect other entries in the notebook that relied on the same information.

One could point to other differences too, but these suffice for now. Some people would argue that these differences should lead us to re-evaluate the Inga-Otto thought experiment. In particular, they should lead us to say that Otto does not really believe that the Museum of Modern Art is on 53rd Street and that Inga does. The problem is that proponents of the EMH can come back and highlight how focusing on phenomenological and integration-based differences between Otto and Inga can affect how we interpret other cases. For example, the phenomenology of recollection varies greatly from case to case. I remember all of Macbeth’s “full of sound and fury” monologue from Act 5, Scene 5. But in order to remember the fifth line (“To the last syllable of recorded time”) I actually need to speak, out loud, the first four lines. Does that sensory intermediation deny me the status of an ordinary mental recollection? Likewise, with respect to automatic integration, it is possible that Otto could have a “smart” organiser that automatically updates other entries with the new information. This is something that is increasingly a feature of smart devices with cloud-based syncing.

And this is where the EMH becomes significant. By responding to critics of their position in such a manner, proponents of the EMH are arguing that we should be much more ecumenical when it comes to determining what can count as a standing mental state. That’s what EMH (2) is claiming. Furthermore, this time round the claim is that the extension is limited to standing states and does not also encompass events in the stream of consciousness. There is a good reason for this. As Farkas sees it, EMH (2) isn’t really about physical expansion outside the brain-bone barrier in the way that EMH (1) is. For all that proponents of EMH (2) care, the notebook-lookup system could be located entirely within the confines of Otto’s skull. That wouldn’t make a difference to their claim. What matters for them is that we are less restrictive when it comes to determining what counts as providing the basis for a mental standing state.


4. Conclusion
Where does that leave us? Well, it leaves us with two versions of EMH. The first is relatively straightforward and simply claims that mental-realisers need not be confined to the brain. The second is more contentious and claims that we should have a more expansive conception of what can count as a standing mental state.

To really understand the significance of the second version it helps if you consider Clark and Chalmers’s original criteria for assessing whether something could count as a standing state. They suggested that anything that was readily accessible and automatically endorsed could form part of the extended mind loop that constituted a belief or desire. This could include something like the information in Otto’s notebook, the information stored on the Web, and also, more controversially, the information stored in someone else’s head. Imagine a really close couple who are in the habit of relying upon the mental content stored in their partners’ heads. In many ways, they would be just like Otto and his notebook.
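The criteria just mentioned can be made concrete with a toy predicate. The sketch below is purely illustrative: the attribute names and the example cases are my own inventions, and the criteria are simplified relative to the original paper (which also appeals to things like reliable availability and past endorsement).

```python
from dataclasses import dataclass

@dataclass
class InfoStore:
    name: str
    readily_accessible: bool      # consulted about as easily as biological memory?
    automatically_endorsed: bool  # contents trusted without double-checking?

def counts_as_standing_state(store: InfoStore) -> bool:
    """Toy rendering of the accessibility-and-endorsement criteria."""
    return store.readily_accessible and store.automatically_endorsed

ottos_notebook = InfoStore("Otto's notebook", True, True)
library_book = InfoStore("a library reference book", False, False)
partners_head = InfoStore("a trusted partner's memory", True, True)

print(counts_as_standing_state(ottos_notebook))  # True
print(counts_as_standing_state(library_book))    # False
print(counts_as_standing_state(partners_head))   # True: the controversial case
```

The last case is the one that generates the ethical questions below: on these criteria, another person's memory can satisfy the predicate just as well as a notebook.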

But if it is possible for all such closely connected information-exchange partnerships to form part of an extended mind, we will find ourselves in some pretty tricky ethical and social waters. What does it mean for privacy? Individual autonomy? Responsibility and blame? Praise and reward? All of these concepts would need to be revised if we fully embraced the implications of EMH (2). That is some serious food for thought.

Saturday, February 21, 2015

How I Write for Peer Review




Publish or perish, or so they say. That’s the rule in academia. But not all publications are created equal. I’ve “published” over 700 posts on this blog (and republished many on other blogs), and although I think there are advantages to having done so, I’d be lying if I said these publications were academically “significant”. They’re certainly not significant from the perspective of the administrators and overseers lurking within the groves of academe. If you want to please these people you must produce peer-reviewed publications (preferably double or triple-blind peer-reviewed publications) in high impact academic journals. That’s where the game is.

This is not to disparage the system. Suffice to say that, despite my rather cynical and world-weary tone in the opening paragraph, I’m something of a fan of peer review. With some limited exceptions, I can safely say that most of the articles I’ve published in academic journals have been improved by the process. But, in any event, my focus here is not on the merits or demerits of peer review. Instead, my focus is on the practical matter of how one actually goes about producing and publishing material in peer-reviewed journals.

Or, more precisely, my focus is on how I go about doing this. I focus on myself not because I am a self-obsessed egomaniac (though I may be) but because I don’t think there is a definitive set of “how tos” for success in peer review. Different people will have different methods, some of which may work for them but not for others. What I offer here is just my approach. I offer it in the hope that it will be useful to some, and in the spirit of transparency.

I’m not particularly successful in producing peer-reviewed publications (probably 12 good pieces in the past four and a half years, and 4 other less good pieces), but I have some experience that might be worth sharing. One thing I noticed as a struggling PhD student — and this may all just have been in my head — was how guarded and competitive people seemed to be about their attempts at publication. It’s as if everyone knows that everyone else is grinding away at it, but no one is willing to share the stories of their failures. Given the levels of neurosis and impostor syndrome in the academy, it would be nice (I think) if people were more open about these things.

So in that spirit of openness, here are the steps I generally follow when writing for peer-review.


Step One: Prepare for Failure
Trying to publish in academic journals is often a soul-destroying process. Acceptance rates at the top journals are absurdly low (under 5% in some cases), and even at middle-ranked journals one is far more likely to be rejected than not. I might like to think that I am special, that my paper is truly exceptional and that the peer reviewers will be overwhelmed by the subtlety and sophistication of its argument, but the likelihood is that I am wrong. I have now come to accept this, and before I even start writing I prepare for failure.

I put this as step one because I think coping with failure is the most important thing to master. It’s certainly something I wish I had been better able to deal with before I embarked on an academic career. Throughout my school and undergraduate days, I was used to success. I had never failed an exam; I was (nearly) always at the top end of my class. The notion that something I had written would be rejected — often with peer-reviewers questioning my basic competence and understanding — was completely alien to me. Now, I’ve learned to deal with the fact that it will be a fairly regular occurrence.

Part of the trick is, I think, to frame failure in a positive way. In his book How to Write a Lot, Paul J. Silvia suggests that your goal should be to become the most rejected author in your department. Why? Because being the most rejected means you are also likely to be the most published. The reason for this is twofold. First, if you are throwing a lot of darts at the dartboard, one of them will eventually hit the target. Second, most papers that get rejected from one journal will end up being published elsewhere. Although it may be painful and disappointing at first, taking on board the reviewers’ criticisms, revising the paper and submitting it elsewhere will often result in success. That was true of the most recent paper I had published. Indeed, one of my biggest regrets is that I lacked persistence with some of my earlier papers. In 2010 I had three papers rejected from journals that, in retrospect, could have been published elsewhere. But I just gave up on them after one round of rejection.


Step Two: Generating Ideas
Although writing for peer review can seem like a tedious and grubbily instrumentalist process, it’s important to realise that at its foundation there is (one hopes) some genuine intellectual passion and interest. That’s certainly true for me. I write papers about topics that interest me (even if only for a while) and generating ideas for papers is something that provides me with great joy.

Unfortunately, I have no idea how I manage to generate such ideas. I don’t think I have ever generated an idea for a paper through brute intellectual force (i.e. by simply sitting down and demanding that I come up with an idea). They just seem to occur to me. That said, there are some discernible patterns. I usually come up with ideas when out running or cycling (when I’m free from other distractions), or else late at night (typically between 11:00 pm and 1:00 am). I don’t know why this is. Most of my ideas come from combining and manipulating arguments that I have found in other papers that I have read. I don't think I've ever had a truly original thought. For example, my paper on sex work and technological unemployment came from combining ideas that I came across when reading on the ethics and economics of prostitution with other ideas I encountered when reading about robots and technological unemployment.

By an “idea” for a paper I mean a basic argument I wish to defend. Sometimes those ideas come to me in a conclusion-first format. That is to say, I’ll think “wouldn’t it be interesting if someone were to argue X” and then work backwards from there. Other times, the ideas come in a premise-first format. That is to say, I’ll think “what does principle X imply?” and try to work out the details. Again, I don’t know how or why things occur to me in these formats. I believe that wide reading and ongoing curiosity are the key, but I don’t have a precise algorithm or strategy.


Step Three: Writing the Abstract
Once I think I have the kernel of a good idea (and this idea could be percolating around my brain for several months before I take it any further), I will try to write the abstract for the final paper. Some people might find this odd, but it is a habit I picked up from reading Wendy Belcher’s book Writing Your Journal Article in 12 Weeks, which I recommended on a previous occasion. As she notes in that book, a good abstract should provide a summary of the argument you wish to defend in the paper. To me it makes sense to try to generate that summary first. Hence why I write the abstract first. I may, of course, revise it at a later point, if my argument changes slightly.

To help me to write the abstract, I will often use Steven Posusta’s “Instant Thesis Generator” (something I also picked up from Belcher’s book). This comes from something called the Procrastinator’s Guide to Writing an Effective Term Paper, which as you may gather is a book for lazy undergraduates. Again, it might seem odd that I would rely on this, but I find it surprisingly beneficial because it provides a useful heuristic for thinking about a good argument. It is illustrated in the diagram below.





You can click on the image for a fuller explanation, but the basic gist of it is that you should be able to summarise your paper using a fill-in-the-blanks sentence: “Although….Nevertheless…Because”. The first part is a statement of the contrary point(s) of view (the ones you will be opposing in your article); the second part is a statement of your main conclusion(s); and the third part is a list of the reasons, evidence and argumentation you will use to support your conclusion(s). Of course, I will rarely actually write the abstract out in this exact format. But I will use it to structure how I think about the abstract. In essence, I think that if I can’t summarise my paper using the instant thesis generator, I probably don’t have an idea worth pursuing in any more detail. It may need some further percolation.


Step Four: Planning the Article
Once I have the abstract in place, I will proceed to write out a more detailed plan for the article. I will do this using pen and paper, and I will usually do it late at night (after 11:00 p.m.). For whatever reason, I find it easiest to work on plans at this time. I believe this is because article planning is, for me, an exercise in sustained thinking, and I find it easier to do this when I am free of all other distractions (emails, work, cooking etc.).

Unlike generating article ideas, I find that planning an article is something that can be done through brute intellectual force. Most articles I have written have been planned out in about one to two hours, by literally just sitting down and forcing myself to think through each step of the argument I wish to make. In other words, using my abstract as a starting point, I force myself to think about each of the premises I need to defend, the counterarguments that are likely to be thrown at me, and my responses to them.

My plans are pretty detailed. For a 10,000 word article, I’ll usually scribble out anywhere between 4 and 10 A4 pages-worth of a plan. I do this mainly because I like the actual writing process to be as mechanical and methodical as possible: simply a question of filling in the details and smoothing out the edges of the plan.


Step Five: Writing the First Draft
With a detailed plan in place, I can proceed to sit down and write the first draft. In many ways, I find this to be the easiest and possibly least exciting part of the process (though there is a nice sense of accomplishment that comes with it). I follow pretty much the same timetable for writing all my articles. I set aside about two hours every morning (usually between 9 and 11 or 10 and 12), and I write approximately 1,000 to 2,000 words every day. In this way, I typically manage to produce a first draft of an article in about two weeks. Obviously, it all depends somewhat on the time of year and the presence of other distractions, but I think I can genuinely say that every one of my published articles had its first draft produced in a two-week period.


Step Six: Revising and Seeking Feedback
Once I have finished the first draft of the paper, I will set it to one side for a period of time. The period of time varies. Sometimes it will be a month; sometimes it could be six months. Ideally, I aim for the shorter end of this spectrum: if it takes me six months to get back to a paper, that’s usually a sign that I didn’t really like what I wrote and I need more distance from it. Anyway, once I return to the paper, I will read through it and revise it however I see fit. This process usually takes only a couple of hours, though it may take longer if I think the paper needs substantive revisions. When revising, I do the usual things: I look for weak points in the argument and think of possible amendments (oftentimes I’ll have flagged these in bold in the initial draft), I try to improve the overall “flow” of the writing, and I try to correct any spelling mistakes and referencing errors/omissions. I am, however, notoriously bad at that and nearly everything I have ever published (including articles that ultimately passed through copy-editing and proof-reading) will have some typographical and spelling errors.

At this point I face a choice. I can either send the paper around to some colleagues for feedback, or I can submit it to a journal. I’m guessing the former is preferable, and it seems to be a common practice among some of my peers, but I don’t do it that often. I usually send things out to journals before seeking feedback from others. Indeed, even though I have participated in various “work in progress” seminars over the years, in most cases the papers I was discussing had already been submitted.

I think the reason I prefer to submit without soliciting feedback is that I’m impatient and I’m all too aware of how slow the peer review process can be. I’m also a little bit insecure. I’m now pretty good at taking critical feedback from anonymous reviewers but I'm still less good at taking it from people I know. Once I’m happy enough with a piece I’ve written myself, I just want to get the peer-review ball rolling.

That said, once I’ve submitted the piece I will sometimes seek feedback (in anticipation of the need to revise once the journal gets back to me), or else I will seek feedback once I have received a “revisions requested” or “revise and resubmit” notification from a journal. This may, once again, reveal my insecurities: I’m less bothered by what people I know say about what I’ve written if I have some affirmation from the peer reviewers.


Step Seven: Submission
I’ve jumped the gun to this step already but just to reiterate, once I’ve revised the paper and I’m happy with it myself, I will submit it to a journal. I’m not hugely strategic about the choice of journal. I want something with a decent reputation and an international audience, but beyond that I’m not too fussy. When I’m writing I will usually have a number of options in mind (in fact, if I’m honest, I will rarely proceed with writing a paper if I can’t think of a possible target journal in advance), and I’ll start by picking the best of those and then working my way down the hierarchy afterwards if needs be.

Once I have submitted, I’m forced to play the waiting game. Unfortunately, this waiting game can take a long time. In terms of my own publications, the best experiences I have had when it comes to the time from submission to initial decision were with Neuroethics (accepted with revisions), Religious Studies (accepted) and Journal of Medical Ethics (rejected), all of which got back to me within a month and a half. The worst experiences I’ve had, alas, were with Sophia and Law and Philosophy, where in both cases I was waiting nearly nine months for an initial decision. That said, both pieces were ultimately accepted and I know that these are far from being the worst waiting times. There may have been good reasons for them too (e.g. waiting for reviewers to write up their reports).


Step Eight: Prepare for Failure Again
When I have submitted the paper I will prepare myself, once again, for the possibility of failure. I think it is important to remind myself of this after the initial submission because, once I have written the paper, I may be lulled into a false sense of immodesty. I’ll think that what I’ve written is wonderful and important and bound to be accepted. It’s important that I disabuse myself of such notions. If I don’t, the reviewers of my paper will be quick to do so.


Step Nine: Responding to the Decision
…eventually I’ll receive a decision about my paper. If all goes well, the paper will either be accepted outright (this has never happened to me), accepted with revisions (major or minor) or I’ll be asked to revise and resubmit. If I receive one of those latter two decisions, I will parse the reviewers’ comments immediately and then I’ll go through the five stages of grief: denial (“they couldn’t have said that about my paper”); anger (“the reviewers must be idiots”); bargaining (“if only I could get them to see this point”); depression (“that’s it, I’m never writing another paper again”) and, finally, acceptance (“well, I suppose I better get on with it and revise the damn thing”). In the early days, it would take me a while to move through those five stages. Nowadays, I’m usually onto stage five within 24 hours.

I find responding to the reviewers to be one of the more difficult parts of the process. Unlike the writing of the first draft, I can only manage revisions in concentrated bursts. I usually manage them over two to three days (normally at weekends during teaching semesters). I’ll sit down on day one, read through my paper and the reviewers’ comments and try to come up with some strategy for responding to them. Then, on day two, I’ll add my revisions to the paper, taking in a third day if necessary. By way of illustration, on my two most recent papers (“Robotic Rape and Robotic Child Sexual Abuse” and “The Normativity of Linguistic Originalism”), I managed to complete the revisions over two and three days, respectively. Ideally, I like to do this pretty soon after I receive the comments, but sometimes it takes me a while to schedule the two or three days I need. I have no tips to offer on making your revision more likely to be accepted. The only thing I do — which I suspect everyone does — is to prepare a separate document with a very detailed response to each of the reviewers’ comments. The goal of this is to show how I have taken on board everything they have said, and where exactly in the paper I address the point. In this document, I will say things like “in the penultimate paragraph of pg. 22, I have addressed reviewer one’s point about X by saying the following”. This level of precision and detail is something I picked up, in particular, from Matthew Kramer, who was a reviewer on a paper I once wrote.

If the paper is rejected (either initially or after revisions), I will also go through the five stages of grief, with the main difference being that the acceptance stage either consists of: (a) deciding to revise the paper and submit it elsewhere; or (b) abandoning it completely. As I mentioned at the outset, I’m less inclined to go for the nuclear option of (b) now, though it used to be my norm. That said, I am tempted by it on occasion. For example, I’m currently in the process of submitting a paper I wrote to a third journal, after it was rejected twice before. If it is rejected a third time I will certainly abandon it completely. The world must be trying to tell me that it’s not a very good idea.


Conclusion
So that’s it. Those are the nine steps I usually follow when writing for peer reviewed journals. I’ll close with my favourite comment from an anonymous reviewer of one of my papers (if you’re interested it was about this paper). The comment is a positive one, but in a backhanded way that is typical among certain philosophers:

“This is a good paper. In the opinion of this reviewer, it is wrong at nearly every important point, but it is wrong in ways that are interesting and important -- a genuine contribution to the philosophical discussion.”

It almost makes it all worthwhile.

Sunday, February 15, 2015

The Logical Space of Democracy: A Map




Democracy is the worst form of government except for all those other forms which from time to time we have tried. Granting this, we might be inclined to wonder: what sorts of democratic decision-making procedures are possible? This is a question that Christian List sets out to answer in his paper “The Logical Space of Democracy”. In this post, I want to share the logical space alluded to in his title.

To do so, I need to briefly recap my previous post, which looked at something called the “democratic trilemma”. This trilemma is a generalised version of the Condorcet voting paradox. It applies to any collective decision-making procedure in which inputs (i.e. attitudes towards propositions) are taken from individuals and then aggregated together to form some collective output. The trilemma starts by suggesting that there are three things that proponents of democracy would like to get out of any such aggregation method:

1. Robustness to Pluralism: The procedure should be able to accept any possible combination of individual attitudes about propositions which are on the “agenda” for any given decision problem.
2. Basic Majoritarianism: A necessary condition for the collective acceptance of any proposition should be its majority acceptance.
3. Collective Rationality: The collective output should be a consistent and complete set of attitudes on the propositions on the agenda.

The conclusion of the trilemma is that it is impossible to satisfy all three of these requirements at the same time (at least when dealing with reasonably complicated decision problems). At best, it is possible to satisfy two at the same time. I tried to explain why this conclusion was true in the previous post.

The important thing for now is that this conclusion provides us with a framework for thinking about the logical space of possible democratic decision procedures. If we know that it is impossible to satisfy all three requirements at any one time, the question is: which two requirements will we prioritise? This implies that there are three major “territories” within the logical space of democracy, each occupied by a number of different sub-types of decision procedure. Let’s explore these three major territories now.


1. Procedures that limit the possible inputs
The first major territory is marked by its willingness to give up the need for robustness to pluralism. As mentioned previously, robustness to pluralism is the notion that a decision-making procedure should be able to accommodate any combination of attitudes towards any of the propositions on a particular decision-making agenda. Procedures that drop this requirement place constraints on the permissible combinations of attitudes. Doing so then eliminates possible inconsistencies at the group level.

To explain what this might look like, recall the decision problem from the previous post. It involved a cabinet trying to reach a collective decision about each of the following propositions:

Proposition A: “Country X has weapons of mass destruction”
Proposition B: “We should invade country X if and only if it has weapons of mass destruction.”
Proposition C: “We should invade country X.”

As we saw, problems arise if the collective decision results from the complete, consistent and majoritarian aggregation of all possible individual attitudes toward each of these propositions. Specifically, allowing any combination of attitudes to be fed into the decision-making procedure could have resulted in a situation in which the collective rejected proposition A, but accepted propositions B and C. This would result in collective inconsistency.
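To make the inconsistency concrete, here is a minimal sketch in Python (the individual attitude profile is invented purely for illustration): each minister holds an internally consistent set of judgments, yet proposition-by-proposition majority voting yields a collective that rejects A while accepting B and C.

```python
def majority(votes):
    """True if a strict majority of the booleans is True."""
    return sum(votes) > len(votes) / 2

# Each minister's judgments on A ('X has WMD'), B ('invade iff WMD')
# and C ('invade'). Every individual set is internally consistent.
ministers = [
    {"A": False, "B": True,  "C": False},  # no WMD, accepts the principle, so no invasion
    {"A": False, "B": False, "C": True},   # rejects the principle; invade anyway
    {"A": True,  "B": True,  "C": True},   # WMD, accepts the principle, so invade
]

# Aggregate proposition by proposition with majority voting.
collective = {p: majority([m[p] for m in ministers]) for p in ("A", "B", "C")}
print(collective)  # {'A': False, 'B': True, 'C': True}

# The collective rejects A yet accepts B and C: given B, rejecting A
# entails rejecting C, so the collective set is inconsistent even though
# every individual minister was consistent.
assert collective == {"A": False, "B": True, "C": True}
```

The sketch is deliberately minimal, but it captures the structure of the problem: consistency at the individual level does not survive proposition-wise aggregation.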

You could avoid this inconsistency by placing limitations on the combinations of attitudes that are allowed to filter up to the group level. The simplest way of doing this would be to require unanimous opinions on each of the propositions. In other words, everyone has to accept proposition A in order for it to be accepted by the group, and so on. This would ensure consistency at the group level.

Another, more laboured but perhaps more realistic, way of doing it would be to allow for some meta-consensus between different collections of attitudes. In other words, to allow for some cohesiveness between the different attitudes adopted by the individuals within the group. This is something you will be familiar with from real-world politics. It has often been noted that those on the left and those on the right have opposing views on whole sets of propositions. Thus, for example, those on the (American) right are likely to be more hawkish, in favour of gun rights, against church-state separation, in favour of free markets, against universal healthcare and so on. Those on the left are likely to take the opposing view on each of those issues. This correlation (meta-consensus) of opinions may help to avoid possible inconsistencies at the group level.

To return to the example above, it may be that people who reject proposition A, are also more likely to accept B and reject C; whereas those who reject B, will be more likely to accept C and A (or some such). These correlations in attitudes will then filter up to the group level, preventing a scenario in which the collective accepts the inconsistent set of propositions. To be more precise, and as List notes, if we assume that attitudes toward relevant propositions are aligned along a unidimensional axis (like the left-right axis) then the majority attitudes will tend to coincide with the individual attitudes of whoever occupies the median position on the relevant axis.
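List's point about unidimensional alignment can be illustrated with a toy preference model (all numbers invented): if voters prefer options closer to their ideal point on a single left-right axis, the pairwise majority winner coincides with the option nearest the median voter.

```python
def pairwise_majority_winner(options, ideal_points):
    """Return the option that beats every other option in pairwise
    majority votes (the Condorcet winner), assuming each voter prefers
    whichever option lies closer to their ideal point."""
    def prefers(voter, x, y):
        return abs(voter - x) < abs(voter - y)
    for x in options:
        if all(
            sum(prefers(v, x, y) for v in ideal_points) > len(ideal_points) / 2
            for y in options if y != x
        ):
            return x
    return None  # no Condorcet winner exists

options = [0.0, 0.25, 0.5, 0.75, 1.0]  # policy positions, left to right
voters = [0.1, 0.2, 0.45, 0.8, 0.9]    # five voters' ideal points

winner = pairwise_majority_winner(options, voters)
median = sorted(voters)[len(voters) // 2]  # the median voter's ideal point
print(winner, median)  # 0.5 0.45

# The majority winner is the option closest to the median voter.
assert winner == min(options, key=lambda o: abs(o - median))
```

With attitudes single-peaked along one dimension, cycling majorities cannot arise, which is why this kind of meta-consensus rescues collective consistency.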

There are several dubious assumptions underlying this analysis. The obvious one being that attitudes toward whole sets of relevant propositions may not align along single dimensions. But there is, nevertheless, something close to the real-world about this analysis. It is often noted, at least in countries like the U.S. with a two-party system, that collective decision-making tends to align with a “centrist” viewpoint.

Anyway, this is all by the by, the important point is that within this territory of possible democratic decision-making procedures, the focus is on limiting possible combinations of attitudes. List suggests that there are two ways in which this can happen in the real world. First, there may be exogenous pressures that limit the possible attitudes possessed by those who participate in the decision-making procedure. This would be consistent with the views of certain communitarians or nationalists who think that there should be some level of group harmony prior to democratic governance. Second, there may be endogenous pressures that limit the possible attitudes. That is to say, democratic decision-making procedures may themselves force people to limit the possible combinations of attitudes they have toward different propositions. This would be consistent with the views of deliberative democrats, who believe that the open discussion about the propositions on a given agenda (prior to any attempted aggregation) is likely to make for more consistent group decision-making.


2. Procedures that relax the commitment to majoritarianism
The next major territory within the logical space of democracy is the one that casts off the need for majoritarianism. At first glance, this seems to be anathema to the very concept of democratic decision-making, but it’s possible to imagine systems which reject the need for majoritarianism and yet still retain a flavour of democracy.

To see this, it is once again worth dividing the methods for doing so into exogenous and endogenous types. Within the exogenous class we get decision-making methods that rely upon external factors to limit the need for strict majoritarianism. By strict majoritarianism I mean any method that requires that the collective decision reflect the majority opinion on each and every proposition on a given agenda. List mentions two exogenous methods for avoiding the need for strict majoritarianism.

The first method involves the prioritisation of certain majority attitudes over others. For example, in the decision problem given above, we might argue that the majority view about proposition C should take priority over the majority view on proposition B. In other words, we might decide that although all the propositions have some relevance to the collective, what is most important is the attitude towards the invasion of country X, and not the commitment to the principle for legitimate invasion stated in proposition B. Thus, if there is an inconsistency in the collective attitude, we should go with the collective attitude toward C and not towards B. List suggests that constitutional democracies, with their penchant for protecting certain fundamental rights and freedoms, could be viewed as providing democratic decision-making frameworks that lie within this region of the logical space of democracy.

A second exogenous method for avoiding strict majoritarianism would be to adopt a system of sequential priority procedures. These are decision-making procedures in which the collective attitude toward one proposition must be determined before proceeding to the next. The procedure would then be structured in a way that avoids the possibility of inconsistent collective attitudes. To return to the decision problem given above, a sequential priority procedure might require first ascertaining the majority view on proposition A, and then on proposition B. If the majority accepts both propositions, then the collective attitude towards C is determined. Likewise, if the majority accepts A but rejects B, then C should not be considered. In this manner, inconsistent collective judgments can be avoided.
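A sequential priority procedure of this kind can be sketched in a few lines of Python. This is a rough illustration rather than List's own formalisation, and the cabinet profile is invented: A and B are settled by majority vote, and the collective attitude toward C is then derived from those earlier decisions rather than voted on directly.

```python
def majority(votes):
    """True if a strict majority of the booleans is True."""
    return sum(votes) > len(votes) / 2

def sequential_priority(attitudes):
    """Settle A first, then B, by majority vote; the collective attitude
    toward C is then derived from those earlier decisions, so an
    inconsistent collective set cannot arise."""
    coll_A = majority([m["A"] for m in attitudes])
    coll_B = majority([m["B"] for m in attitudes])
    if coll_B:
        coll_C = coll_A  # 'invade iff WMD' accepted, so C must match A
    else:
        coll_C = None    # B rejected: the earlier decisions leave C unsettled
    return {"A": coll_A, "B": coll_B, "C": coll_C}

# Invented cabinet profile; each member is individually consistent.
cabinet = [
    {"A": False, "B": True,  "C": False},
    {"A": False, "B": False, "C": True},
    {"A": True,  "B": True,  "C": True},
]
print(sequential_priority(cabinet))
# {'A': False, 'B': True, 'C': False}

# Note that a direct majority vote on C alone (2 of 3) would have
# accepted it; the sequential procedure overrides that majority in
# order to preserve consistency.
assert majority([m["C"] for m in cabinet]) is True
```

The overridden vote on C is exactly where the commitment to strict majoritarianism gets relaxed.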

Turning then to endogenous methods for avoiding strict majoritarianism, List notes how certain legislative procedures may allow for groups to achieve some sort of “reflective equilibrium” between their attitudes towards certain propositions. So, in the case of the decision to invade country X, a parliament that decided that they rejected proposition A, but accepted propositions B and C, could enter into a period of further reflection on the inconsistency between their attitudes. This would force some revision in the collective attitudes towards the individual decisions.

The Borda count method of preference aggregation is also an example of an endogenous method: it ensures pluralistic and rational collective decisions, but at the cost of sometimes reversing majority preferences.
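A short sketch shows the Borda count in action, using an invented five-voter profile in which A beats B head-to-head (3 votes to 2), yet B wins on Borda scores: the majority preference between A and B is reversed.

```python
from collections import Counter

def borda_scores(ballots):
    """Each ballot ranks candidates best-first; a candidate gets
    (n_candidates - 1 - position) points per ballot."""
    scores = Counter()
    for ballot in ballots:
        n = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - position
    return dict(scores)

ballots = (
    [["A", "B", "C"]] * 3 +  # three voters: A > B > C
    [["B", "C", "A"]] * 2    # two voters:   B > C > A
)

scores = borda_scores(ballots)
print(scores)  # {'A': 6, 'B': 7, 'C': 2}

# A strict majority (3 of 5) ranks A above B, yet B has the higher
# Borda score, so the Borda winner reverses that majority preference.
assert scores["B"] > scores["A"]
```

The mechanism is visible in the scores: B picks up middle-rank points from every ballot, while A is ranked last by the two dissenting voters.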


3. Relaxing the need for collective rationality
The final territory within the logical space of democracy is the one that rejects the need for collective rationality. As we saw in the previous post, collective rationality is the notion that collective decisions should be complete and consistent. That is to say: they should cover every proposition on a given agenda, and they should ensure that the collective attitudes towards those propositions do not entail some inconsistency. The working example in which the cabinet rejects proposition A, but accepts propositions B and C, is a classic illustration of a complete yet inconsistent decision.

When it comes to loosening one of those requirements, List argues that completeness is the only one that can go. The reason for this is that if you loosen the need for consistency, it creates problems for the action-guiding-ness of collective decision-making. That said, he recognises the loosening of consistency as a logical possibility, just not one he cares to consider in any depth.

With that in mind, there are, once again, exogenous and endogenous methods for avoiding completeness. The exogenous methods would involve partitioning the agenda in advance so that the collective attitude only needs to be provided for certain propositions (not all). Thus, when deciding about invading country X, the cabinet could determine that proposition C is the only one that needs a collective decision. List refers to this as a “conclusion-based procedure” as it involves ignoring the empirical and normative grounding for a particular decision.

Endogenous methods might also achieve the same thing. These would not entail any advance partitioning of an agenda but, rather, an effective partitioning through the use of some decision-making rule. For instance, supermajoritarian or consensus-based decision-making might work by preventing the collective from coming to a particular view on a particular proposition without crossing some threshold of acceptance (e.g. 75% or 100%).
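A supermajoritarian rule of this kind is easy to sketch (the threshold and vote profiles below are invented for illustration): the collective only forms an attitude on a proposition when acceptance or rejection crosses the threshold, and otherwise returns no verdict at all, sacrificing completeness to preserve consistency.

```python
def supermajority_rule(votes, threshold=0.75):
    """votes: list of booleans. Returns True or False only if the share
    of acceptances (or rejections) meets the threshold; otherwise None,
    meaning the collective forms no attitude on the proposition."""
    share = sum(votes) / len(votes)
    if share >= threshold:
        return True
    if 1 - share >= threshold:
        return False
    return None

votes_on_A = [True, False, True, False]  # 50% acceptance: no collective view
votes_on_C = [True, True, True, False]   # 75% acceptance: accepted

print(supermajority_rule(votes_on_A))  # None
print(supermajority_rule(votes_on_C))  # True
```

The `None` verdict is the effective partitioning of the agenda: the proposition stays undecided rather than being forced into a possibly inconsistent collective set.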

A complete map of the logical space is provided in the diagram below.




4. Conclusion
To briefly recap, despite its popularity, true democracy is difficult to achieve in practice. This is because there are a number of things we would like to get out of a democratic decision-making procedure that we simply cannot get all at once. This is demonstrated by famous voting paradoxes and, more generally, by List’s democratic trilemma.

Because of these impossibilities, the space of possible democratic-like decision-making procedures is more limited than we might first realise. We really only have three options: (i) procedures that reject our commitment to pluralism; (ii) procedures that reject our commitment to majoritarianism or (iii) procedures that reject our commitment to collective rationality. We can see examples in the real world of decision-making procedures that fall into each of these three categories.

Wednesday, February 4, 2015

The Democratic Trilemma: Is Democracy Possible?

Nicolas de Condorcet


You may have heard of the Marquis de Condorcet (Nicolas de Condorcet). He was an 18th century French philosopher, mathematician and social theorist. He was a champion of the Enlightenment, and a leading participant in the French revolution. He is probably most famous today for three things. First, his jury theorem which showed how, under certain conditions, majority voting can get us closer to the truth. Second, his voting method which proposed that winners of elections be determined by pairing each candidate against every other candidate and figuring out who won each of those contests. And third his voting paradox which revealed how majority preferences could be inconsistent if there were more than two candidates or options to be voted upon.

You may not have heard of Christian List — though you really should have. He is a contemporary philosopher, operating out of the London School of Economics (as of 2015 anyway). He is a champion of a highly formal and axiomatised approach to philosophical theory. And he has expanded and refined many of Condorcet’s original ideas, as well as coming up with quite a few of his own. I like his work a lot, even if some of it is over my head. One of my favourite of his articles is “The Logical Space of Democracy”, in which he attempts to trace out the space of possible democratic decision procedures.

One key part of this article is List’s articulation of the democratic trilemma, which is a more general version of Condorcet’s voting paradox, applying to all collective decision-making procedures (it is also similar to, but distinct from, the famous Arrow Impossibility Theorem). The trilemma proposes that there are three things we would intuitively like from any democratic decision-making procedure, but that it is only possible to satisfy two of them at any one time.

In this post, I want to explain what the trilemma is and why it arises. I’ll use this as a springboard for explaining List’s logical space of democracies in a future post.


1. Collective Decision Problems and Collective Decision Procedures
To understand the trilemma we must first understand collective decision problems and collective decision procedures. A collective decision problem can be defined as follows:

Collective Decision Problem: Any situation in which two or more individuals must form intentional attitudes towards certain propositions and then use those attitudes to determine subsequent (collective) actions.

This is quite a general characterisation of collective decision problems, probably more general than you first realise. As List notes, the intentional attitudes at the heart of the decision problem can be representational or motivational in nature. In other words, they can be attitudes towards statements of fact like “lax regulation led to the financial crisis of ’08”; or they can be attitudes towards statements of desire, intention or principle like “we ought to introduce stricter regulations of the financial sector”. I say this is more general than you might first appreciate because collective decision-making — particularly in the political context — is often understood solely in terms of the ranking or ordering of preferences (“I prefer candidate A to candidate B” etc.), and not in terms of things like beliefs about the world.

Another point worth noting here is how assumptions about rational choice may influence our thinking about collective decision-making. At the individual level, it is commonly assumed that decisions are rational if they evince a harmony between sets of representational and motivational attitudes. For example, my decision to go to the shop to get some milk is rational if (a) I believe the store stocks milk and (b) I desire milk. The key question is whether harmony between sets of representational and motivational attitudes is possible at the collective level.

Moving on to collective decision procedures, these can be defined as follows:

Collective Decision Procedures: Any procedure which takes as its input a group of individuals’ intentional attitudes toward a set of propositions and which then adopts some aggregation function to issue a collective output (i.e. the group’s attitude toward the relevant propositions).

The basic schematic is illustrated in the diagram below.



One of the more interesting parts of List’s article is his illustration that the space of possible collective decision procedures — i.e. ways of going from the individual attitudes to the collective attitude — is vast. Much larger than any human can really comprehend. For example, if you have a simple collective decision problem, in which three people must state their preferences for options A and B, there are 2^8 = 256 possible collective decision procedures (each of the 2^3 = 8 possible combinations of individual votes can be mapped to either of the two outputs).

I won’t go through the full illustration of the point here, but to give you a sense of what List is talking about, imagine an even simpler decision problem in which two people have to vote on options A and B. There are four possible combinations of votes (as each voter has two options). And there are several possible ways to go from those combinations to a collective decision (2^4 = 16, to be precise). For example, you could adopt a constant A procedure, in which the collective attitude is always A, irrespective of the individual attitudes. Or you could have a constant B procedure, in which the collective attitude is always B, irrespective of the individual attitudes. We would typically exclude such possibilities because they seem undesirable or counterintuitive, but they do lie within the space of logically possible aggregation functions. Likewise, there are dictatorial decision procedures (always go with voter 1, or always go with voter 2) and inverse dictatorial decision procedures (always do the opposite of voter 1, or the opposite of voter 2).

You might find this slightly silly because, at the end of the day, there are still only two possible collective outputs (A or B). But it is important to realise that there are many logically possible ways to go from the individual attitudes to the collective one. This illustrates some of the problems that arise when constructing collective decision procedures. And, remember, this is just a really simple example involving two voters and two options. The logical space gets unimaginably large if we go to decision problems involving, say, ten voters and two options (List has the calculation in his paper; it is 2^1024).
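List’s counting argument is easy to verify mechanically. The following Python sketch (my own illustration, not code from List’s paper) enumerates the full space of aggregation functions for the two-voter, two-option case, picks out the constant and (inverse-)dictatorial procedures mentioned above, and checks the general formula for the size of the space:

```python
from itertools import product

# Two voters, each voting for option "A" or "B".
options = ["A", "B"]
profiles = list(product(options, repeat=2))  # the 4 possible combinations of votes

# An aggregation function assigns a collective output to each of the 4
# profiles, so there are 2^4 = 16 possible procedures. Enumerate them all.
procedures = list(product(options, repeat=len(profiles)))

def classify(proc):
    """Label a procedure (one output per profile) as constant,
    (inverse-)dictatorial, or 'other'."""
    outputs = dict(zip(profiles, proc))
    for opt in options:
        if all(o == opt for o in proc):
            return f"constant {opt}"
    for i in (0, 1):
        if all(outputs[p] == p[i] for p in profiles):
            return f"dictatorship of voter {i + 1}"
        if all(outputs[p] != p[i] for p in profiles):
            return f"inverse dictatorship of voter {i + 1}"
    return "other"

labels = [classify(proc) for proc in procedures]

def space_size(voters, num_options):
    """One collective output per possible profile, so the logical space
    contains num_options ** (num_options ** voters) procedures."""
    return num_options ** (num_options ** voters)

print(len(procedures))                 # 16 procedures in the two-voter case
print(space_size(3, 2))                # 256, as in List's three-voter example
print(space_size(10, 2) == 2 ** 1024)  # True: the ten-voter case
```

Of the sixteen procedures, six are the degenerate ones discussed above (two constant, two dictatorial, two inverse-dictatorial), leaving ten others, including majority-like rules with various tie-breaking conventions.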


2. Three Intuitive Requirements of Democratic Procedures
How can we constrain our exploration of the logical space of decision procedures? List recommends the axiomatic method. This involves starting with a set of axioms that specify what we would look for in our preferred type of procedure. In this particular instance, we are interested in decision procedures that fit with our conception of democracy. Democracy calls for individual input and control over collective decision-making. This means we can instantly rule out all procedures that would exclude, ignore or prioritise particular individual attitudes.

But this is to explore the negative side of the dialectic. What exactly would we be looking for in a democratic procedure? List argues that there are three initial, and intuitively plausible, requirements. The first is:

1. Robustness to Pluralism: The decision procedure accepts any possible combination of individual attitudes about propositions which are on the “agenda” for any given decision problem.

This is, in effect, an inclusiveness condition. It forces the aggregation procedure to include all combinations of views on a given set of propositions. If there are two voters, and they are being asked to consider three separate propositions, each of which can be either affirmed or denied, then all combinations of affirmation and denial should play a role in influencing the collective output. This fits with our intuitive conception of democracy because democracy is supposed to be about giving some power to all points of view.

The second requirement is:

2. Basic Majoritarianism: A necessary condition for the collective acceptance of any proposition is its majority acceptance.

This is arguably the most straightforward requirement. For most people, democracy is synonymous with majority-based decision-making. The one thing to note about this condition is that it states that majority acceptance is a necessary condition for collective acceptance, not a sufficient condition. This means that basic majoritarianism is not the same thing as simple majoritarianism (where all you need is >50% approval). Basic majoritarianism could also include unanimous or supermajoritarian decision-making procedures.

The final requirement is:

3. Collective Rationality: The collective output should be a consistent and complete set of attitudes on the propositions on the agenda.

This might be the trickiest condition to understand. In essence, it tries to carry over the requirements of individual rationality to the group level. So imagine you had to make a decision on two propositions (A and B), each of which could be affirmed or denied. Your individual decision would be complete and consistent if it covered both propositions and if the combination of attitudes towards those propositions did not entail some sort of contradiction. The collective rationality requirement is simply saying that the collective output should do the same.

And this is where things get interesting. List argues that it is impossible for any collective decision-making procedure to satisfy all three requirements if the decision-making problem has some complexity to it. Let’s see why this is the case.


3. The Trilemma Illustrated
Let’s start with a general statement of the trilemma (this is copied almost verbatim from List):

The Democratic Trilemma: For all but the simplest collective decision-making problems (e.g. voting on one proposition), there exists no decision procedure which satisfies the requirements of robustness to pluralism, basic majoritarianism and collective rationality. At most two of these requirements can be met at once.

There is a general proof of this, provided in an appendix to List’s paper. I’ll just run a relatively simple informal demonstration of the trilemma.

Suppose a multi-person government is confronted with a collective decision problem in which they have to form attitudes toward the following set of propositions:

Proposition A: “Country X has weapons of mass destruction”
Proposition B: “We should invade country X if and only if it has weapons of mass destruction.”
Proposition C: “We should invade country X.”

(Quick aside: note how this set of propositions is structured like a simple syllogism. This gets back to one of List’s key points: collective decision problems aren’t always about ranking options)

We can reasonably imagine that the members of a cabinet could have different attitudes toward each of these propositions. Some people might accept all three; some people might think that country X does not have weapons of mass destruction, but that this shouldn’t prevent an invasion on other grounds; and so on. Nevertheless, they must make some collective decision based on all three propositions.

To demonstrate how the trilemma arises we’ll use proof by contradiction. In other words, we will start with the assumption that the collective decision-making procedure satisfies the three requirements and show how that assumption leads to a contradiction. We can do this by imagining that the individual attitudes toward the propositions are as depicted in the table below. So, one third of the government accepts all three propositions; one third rejects propositions A and C but accepts proposition B; and one third accepts proposition C but rejects propositions A and B.

                 Prop A (WMD)   Prop B (invade iff WMD)   Prop C (invade)
One third:       Accept         Accept                    Accept
One third:       Reject         Accept                    Reject
One third:       Reject         Reject                    Accept

This is a perfectly sensible collection of individual attitudes. The first group represents all those who think that the possession of weapons of mass destruction is the only reason that could justify an invasion and that country X does indeed possess such weapons. The second group represents all those who think that the possession of weapons of mass destruction is the only reason that could justify an invasion and that country X does not have such weapons. And the third group represents all those who think that country X does not have such weapons, but that this is irrelevant to the question of invasion because there are other justifications for it.

According to the requirement of pluralism, our collective decision procedure should be able to factor in all these opinions. And according to basic majoritarianism, the collective attitude toward each proposition should (at a minimum) represent the majority view. The majority reject proposition A, accept proposition B, and accept proposition C.

But note how this leads to a contradiction when we think of the group output in terms of collective rationality. Effectively, the group attitudes are:

Collective Attitude 1: There are no weapons of mass destruction in country X
Collective Attitude 2: The only good reason for invading country X would be if country X had weapons of mass destruction.
Collective Attitude 3: We should invade country X.

This set of attitudes is clearly inconsistent. This demonstrates how the trilemma arises. List gives other examples in his paper, and they all highlight the same phenomenon.
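The inconsistency can also be checked mechanically. Here is a minimal Python sketch (my own illustration; the only assumption is the encoding of proposition B as the biconditional “C if and only if A”). Each group’s judgment set passes a simple satisfiability test, while the proposition-wise majority judgment fails it:

```python
# Judgment sets are triples (vA, vB, vC) of accept/reject verdicts on:
#   A: "country X has weapons of mass destruction"
#   B: "we should invade country X if and only if it has WMD" (i.e. C <-> A)
#   C: "we should invade country X"

def consistent(vA, vB, vC):
    # A and C are atomic, so we take their verdicts at face value; the set
    # is consistent iff the biconditional C <-> A then comes out with
    # exactly the truth value assigned to B.
    return (vC == vA) == vB

groups = [
    (True, True, True),    # one third: accept all three propositions
    (False, True, False),  # one third: reject A and C, accept B
    (False, False, True),  # one third: accept C, reject A and B
]

# Every individual judgment set is internally consistent...
print(all(consistent(*g) for g in groups))  # True

# ...but proposition-wise majority voting yields: reject A, accept B, accept C,
majority = tuple(sum(g[i] for g in groups) >= 2 for i in range(3))
print(majority)               # (False, True, True)

# ...which is an inconsistent collective judgment set.
print(consistent(*majority))  # False
```

This is the structure List and Pettit elsewhere call the “discursive dilemma”: majority voting on the premises and majority voting on the conclusion come apart.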


4. Conclusion
To briefly sum up, it is initially plausible to suppose that any democratic decision-making procedure should be able to consider all possible combinations of views; should (at a minimum) accept the majority view; and should conform with the requirements of collective rationality (i.e. be complete and consistent). But List has shown how this is impossible for all but the simplest decision problems. This is what gives rise to the democratic trilemma.

You may be curious about the relationship between the democratic trilemma as formulated here and Condorcet’s voting paradox. The answer is that Condorcet’s voting paradox is a special instance of the trilemma, one which is specifically concerned with collective decision-making procedures that involve the ranking of options. List’s trilemma is the more general problem.

Accepting the trilemma as a reality has some interesting implications for democratic theory. In particular, it suggests that there is a limited space of possible types of democratic decision-making. I’ll provide a map of that space in the next post.

Tuesday, February 3, 2015

Pettit on the Infrastructure of Democracy




Democracy is the preferred form of modern government. It pays homage to the notion that we are all moral equals: no one human has an intrinsic right to exercise domination or control over another, and no one has the right to impose coercive rules on others. But, of course, coercive rules are sometimes necessary if we are to have a workable social order. The key insight of democracy is that everyone should have some say in the construction of those rules.

The philosopher Philip Pettit offers a more precise account of the requirements of democracy. He says that any democracy worthy of the name should provide its citizenry with two powers:

Influence: Each citizen should have some influence over the public decision-making process. That is to say, their input should make some difference to the outcomes of that process.
Control: In addition to influencing the outcomes, the citizens should have some ability to direct the process toward their preferred outcomes.

Influence and control are not the same thing. One can influence a process without controlling it. Pettit gives the example of a man trying to control the traffic with hand signals. If the man is not a police officer, then by standing in the middle of the road and waving his arms about he may be able to exert some influence over the traffic. At a minimum, he will distract the drivers and alter their behaviour. But he will not be able to control their movements. Not unless they recognise his hand signals as having some sort of meaning and social authority.

And this is the key danger in any purported democratic system. It may be set up in such a way that individual citizens can influence a political or bureaucratic decision-making process without being able to control it. Politicians may react to the claims of individual voters, without directing their decisions toward outcomes desired by the polity. A true democracy needs to provide for both.

But there are many practical impediments to realising such a system of democratic control. Pettit discusses three such problems in his work. In this post I want to consider each of them along with Pettit’s proposed solutions. I do so largely in an effort to understand Pettit’s main claims; not in an effort to critically expose their weaknesses.


1. The Three Problems: Sticky Minorities, Party Interests and Influential Lobbies
An ideal system of democratic control might be thought to require direct, participatory decision-making. In other words, it might be thought to require each and every citizen having a vote on each and every decision. While this ideal may not have been historically possible, some people argue that it might now be possible via the wonders of modern information technology. Every time a decision needs to be made, the citizenry could be notified on their smartphones: swiping right to vote yes, swiping left to vote no.

Pettit rejects this ideal. He does so because such a system would not allow for serious deliberative decision-making. If we had to vote on literally everything, we would arguably become less engaged in the political process, more inclined to view it as a chore, less inclined to inform ourselves on the issues. Hence, Pettit thinks that the only feasible system is one involving periodic elections to representative assemblies. These periodic elections “[recruit] people to an enterprise of collective self-assertion and [help] to underwrite the need for government to win popular acceptance”.

But representative systems give rise to three inter-related problems:

The Sticky Minority Problem: Some people may find themselves locked into a sticky minority, i.e. a minority that is consistently outvoted by the majority and whose ability to influence and control governmental policy is thereby limited. The classic example would be members of a religious or ethnic minority, who are repeatedly shut out of government. Obviously not all such minorities are problematic. It’s presumably a good thing that members of, say, the Ku Klux Klan are in a minority. But other minorities lead to systemic injustices and need to be addressed.

The Party Interest Problem: If the representative system is based on political parties, then there is a great danger that, once in power, these parties may take steps to protect their party’s interests, not the interests of the polity. A classic example is the abuse of electoral districting powers in order to ensure that one party is more likely to win an election.

The Influential Lobby Problem: Political representatives are likely to rely on funding to get into power. Elections are an expensive business. The danger is that the politicians that are thereby elected will be pressured to adopt policies that favour the interests of the wealthy lobby groups that help pay for their elections. The result is a situation in which money has all the real influence and control.

All three of these problems undermine the influence and control that is necessary for a true democracy. All three of them are past or present realities of our democratic systems. What can be done to address them?


2. The Paradox of Democracy: Taking things off the political table
Pettit argues that the solution to these three problems lies in a paradox at the heart of the democratic electoral system. We embrace representative government because it allows for individual citizens to influence and control public decision-making processes, but in order to avoid those processes being dominated by sticky majorities, party interests and influential lobbyists, we must remove certain things from the political bargaining table. As Pettit puts it:

“The solution to such problems requires dividing, constraining, regulating and sometimes even sidestepping elected representatives.” 
(Pettit 2014, 125)

But how does this work in practice? Take the sticky minority problem. There are a couple of ways to address this. First and foremost, you could try to create constitutional protections for such minorities, protections that ensure their right to participate in the political process and allow them to escape discrimination. More practically, you could take steps to ensure that they can find a political voice via some public interest organisation, and you might set up a system of equality commissioners and ombudsmen to ensure that their complaints are heard and their interests protected. In this way, you prevent the easy dismantling of their control by the dominant majority.

Moving on then to the party interest problem, the solution is similar. Certain types of power should be put at arms-length from political parties, simply on the grounds that political parties will be too eager to abuse them for their own gains. Thus, for example, when it comes to electoral districting, this should not be under the control of whoever happens to be in power. They will simply gerrymander the districts in an effort to perpetuate their control. Instead, an independent electoral commission should be established, and it should be given a legal obligation to control districting in a way that ensures representativeness. Similar arguments apply to other powers. For example, control over the money supply — particularly in a fiat money system — is probably best left outside the immediate control of the elected party. Pettit argues that the same goes for sentencing powers and the power to collect economic data.

Finally, there is the influential lobby problem. Here, steps need to be taken to ensure that money does not control governmental policy. There should be limits placed on the degree of financial support that any one candidate can receive from any one person or institution. There could also, perhaps, be some system of state support for candidates that ensures they all receive a minimum allocation of funding. This is already common in some countries.

The suggestions here are somewhat vague and idealised. But you get the general idea. To solve the three problems mentioned, Pettit thinks that we need a robust system of institutions and laws that take certain issues off the political bargaining table.


3. The Problem of the Technocratic Elite
But isn’t there a problem with this too? If we create such institutions, don’t we effectively push them outside the sphere of democratic influence and control? Don’t we risk creating a worse version of the problem we were trying to solve? Some people think so. They think that this leads to the creation of a technocratic elite, which in turn leads to problems of unaccountability and opacity. They think it might be better to leave everything on the political bargaining table, despite its obvious problems.

Pettit has a more subtle reply to this objection. He distinguishes between two types of representative:

Responsive Representatives: These are people we pick to represent us on particular decision-making committees and who must consult with us and adopt our instructions on how to vote. In other words, they must be directly responsive to our wishes and needs.

Indicative Representatives: These are people we pick to represent us on particular decision-making committees who we think are likely to act as we might act, even without direct consultation or instruction. In other words, they are representatives that we believe to share a significant proportion of our own values and commitments, even when they are not at our direct beck and call.

Pettit’s argument is that we can ensure that the technocratic institutions are not completely beyond the scope of public influence and control by ensuring that they are staffed by indicative representatives. Indeed, he thinks that the legal documents setting up such institutions should probably insist upon this indicativeness (e.g. by demanding professional and “lay” representatives to serve on particular commissions), and should give the representatives very clear (and appropriately limited) “briefs”. Furthermore, we should ensure some residual accountability for such officials. For example, by insisting on regular public reports, facilitating freedom of information requests, and allowing for occasional grilling by elected representatives.

I think there is much to be said for all this, and there are many ways in which our present system tries to implement these solutions. I suspect, however, that the reality of being appointed as an indicative representative is going to be much grubbier and more political than Pettit’s idealised discussion lets on. For instance, I know that in Ireland the procedure through which officials are appointed to certain bureaucratic roles is sometimes dominated by party political concerns. But I suppose Pettit wouldn’t dispute that. It’s simply not his concern. He is trying to trace out the ideal infrastructure of democracy, not figure out a practical policy for political reform.

Anyway, that’s all for this post.