Sunday, January 29, 2012

Attempt Liability and Moral Luck (Part Two)



(Part One, Series Index)

This is the second part in my short series on attempt liability and moral luck. As explained in part one, attempt liability is a concept from criminal law. It holds that it is legitimate to hold someone criminally liable for an attempted crime as well as for a completed crime. This poses certain problems, which were highlighted last time. Those problems are connected to the more general problem of moral luck, which is the problem of determining how to account for the role of luck in our moral assessments.

Part one looked at three basic arguments in favour of attempt liability. In this part, we’ll look at three arguments on the other side of the debate. In compiling this list of arguments I’ve taken inspiration from the following article:

K. Kessler “The Role of Luck in the Criminal Law” (1994) 142 University of Pennsylvania Law Review 2183

But I haven’t covered every argument mentioned in that article; I’ve just selected the three I think are most interesting.


1. The No-Harm Argument
The first argument against attempt liability focuses (like the harm prevention argument from part one) on the purpose of the criminal justice system. More specifically, it focuses on the purpose of criminalisation. It argues that certain actions, results and states of affairs are criminalised because they are harmful; in other words, a crime is, by its very nature, something inherently harmful. The problem the argument then raises is that attempts are not, by nature, harmful. Therefore, it follows that attempts are not crimes.


  • (1) For any X (where X is an action, result or state of affairs), X is a crime if and only if X is a harm.
  • (2) Attempts are not harmful.
  • (3) Therefore, attempts are not crimes.


There are many problems with this argument. One problem is that (2) could be false. Although some attempts might not be harmful, it could well be that others are. Part of the issue here is how we define “harm”. Harms could be defined broadly to include psychological harms, or narrowly to include only physical harms (and harms to property). The choice could be crucial in deciding whether an attempt is harmful, since an attempt could cause psychological harm even if it didn’t cause physical harm. Then there’s the issue of how we define “attempt” itself. In the criminal law, an attempt is typically defined as something that is “more than merely preparatory” to an offence. Now, it’s quite possible that an action that is more than merely preparatory to murder could cause serious harm. So, under the definition just given, an attempt might be harmful. The catch, however, is that many of the harmful acts that fall short of murder are themselves criminalised. So it’s likely that if someone harmed another, but didn’t kill them, a lesser offence such as assault would be substituted. The only reason for choosing attempted murder over these lesser offences is that it leads to a higher punishment.

The bigger problem with this argument is that it is a conceptual/definitional argument, not a normative one. It focuses on the properties that an event or state of affairs must have in order to count as a crime. But the concept “crime” is not some kind of metaphysical necessity, etched into the fabric of the universe; it is instead a social construct, something whose boundaries we can change if we wish.

So for this argument to work we’d need to change premise (1) so that it becomes normative not definitional. For example:


  • (1*) For any X (where X is an action, result or state of affairs), X should be criminalised if and only if X is a harm.


But when we do this the harm-essentialist view of crime might seem less plausible. We could then ask: why shouldn’t increasing the risk of harm be enough for X to count as a crime? Is that not something our risk-averse society should seek? This leads us back to the harm prevention argument covered in part one.

Finally, another problem with the argument is that it focuses on the justification of criminalisation, not on the justification of punishment. When it comes to attempt liability it is the latter, not the former, that is important. This is significant because, if we adopted a utilitarian view of punishment, avoiding the risk of harm might be enough to justify the existence of attempt liability. (Note: it could well be that criminalisation and punishment are inextricably linked, so that justifying the one necessarily justifies the other. That’s an argument some people have made, but it’s not something I’ll get into here).



2. The Moral Luck Argument
This next argument links us directly back to the problem of moral luck, which was introduced in part one. As I noted there, one of the major developments in the 20th century analysis of moral luck was the realisation that luck plays an important role in our everyday moral judgments. In particular, there was the realisation that we already seem to believe that the presence of luck should alter our moral judgments, and that our intuitive reactions and social practices reveal this to be true.

Consider the following example, drawn from a previous post:

Conference Organisation (1): You take responsibility for organising a conference. You invite the keynote speakers, send out the call for papers, book the venue, arrange for travel and so on. Everything is going well up until the day of the conference. Unfortunately, on that day, a freak snowstorm hits. No one can travel to the conference and the event has to be called off.

Conference Organisation (2): Same as (1) except that there’s no snowstorm. The conference is a resounding success.

Luck clearly separates these two cases. The conference organiser in the first example was a “victim” of what we might call bad luck, whereas the organiser in the second was a “victim” of good luck. Yet, when it comes to assessing both individuals, it is highly likely that the person in the second case will be rewarded for the success of their conference, while the person in the first case won’t be. But why is that? They both did the exact same things; the only difference between them is the kind of luck they were exposed to. Why should one be rewarded and the other not? The only way to explain this practice is to suppose that luck plays an important part in how we morally assess agents.

But this is just to point out how things actually are. How do we translate an observation about what is the case into an argument about what should be the case? After all, just because we do react this way doesn’t mean we should. What will decide the matter is the weight we attach to intuitive reactions in our moral reasoning. If we attach a high weight to intuitions, then we might agree that luck should play a part in our moral reasoning. Hence, it would seem to follow that attempt liability is not (entirely) legitimate. On the other hand, if we attach more weight to principles (such as the control principle of responsibility) we might lean in the other direction.


3. The Deterrence Argument
People who read part one might be surprised to see deterrence cropping up again, since in part one a deterrence argument was used to support the legitimacy of attempt liability. How can we now turn around and say that deterrence-based considerations support the other side of the debate? Very easily, actually. Deterrence is a tricky concept: its applicability depends crucially on the assumptions we make about human responses to incentives. If those assumptions are wrong, or if there is some doubt about them, it is quite possible for a deterrence-based argument to work both ways.

So how does the deterrence argument against attempt liability work? Here’s a suggestion:


  • (1) In order for an agent to be deterred from doing X (where X is an option the agent can exercise), the overall utility of X must be less than the overall utility of ~X.
  • (2) If attempted offences are punished in the same way as completed offences, then the overall utility of an attempt is not more than (and possibly less than) the overall utility of a completed offence.
  • (3) Once an agent has begun to attempt an offence they have only two options: (i) don’t complete the offence; or (ii) complete the offence.
  • (4) Not completing an offence after one has begun attempting it does not have more utility than completing the offence, in fact, it might have slightly less (from 2).
  • (5) Therefore, an agent will not be deterred from completing an offence once they have begun to attempt the offence (from 1, 3, and 4).


The idea here is clear enough. If someone wants to murder another person, then, if attempts are punished in the same way as completed crimes, they will have no incentive to refrain from murdering that person if they have started an attempt. Why bother? If their punishment is going to be the same in both instances, why not finish the job? But, so the follow-up argument would go, this is perverse: we should prefer attempts to completed crimes since they are less harmful (even if they do cause some harm). So we shouldn’t allow for attempt liability.

Of course, this argument only really bites on the equivalency version of attempt liability, i.e. the version that holds that attempts should be punished in the exact same way as completed crimes. If a lesser punishment attaches to attempts, then the deterrent effect would return (even if it’s minimal) and you would get the added benefit of deterring attempts as well.
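The incentive structure behind this marginal-deterrence worry can be sketched with some toy numbers. Everything in the following sketch is an illustrative assumption on my part (the utility values, the probability of being caught, the function names); the point is only to make explicit how equivalent punishment erases the incentive to abandon an attempt, while a lesser punishment for attempts restores it.

```python
# Toy model of marginal deterrence. All numbers are hypothetical.

def expected_utility(benefit, punishment, p_caught):
    """Agent's expected utility: benefit minus expected punishment."""
    return benefit - p_caught * punishment

def preferred_option(attempt_punishment, completion_punishment,
                     benefit_of_completing=10, p_caught=0.5):
    """Compare abandoning mid-attempt with completing the offence."""
    abandon = expected_utility(0, attempt_punishment, p_caught)
    complete = expected_utility(benefit_of_completing,
                                completion_punishment, p_caught)
    return "abandon" if abandon > complete else "complete"

# Equivalency version: attempts punished exactly like completed crimes.
# Abandoning yields no benefit but the same expected punishment,
# so the agent has no reason to stop.
print(preferred_option(attempt_punishment=100, completion_punishment=100))
# -> complete

# A lesser punishment for attempts restores the incentive to abandon.
print(preferred_option(attempt_punishment=40, completion_punishment=100))
# -> abandon
```

Of course, real offenders are rarely this calculating; the sketch just lays bare the structure of the objection, and of the reply that applies only to the equivalency version.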

Okay, so that brings us to the end of this series. As you can see, there are some reasonable arguments to make on both sides. I’m not going to try to assess which side wins the debate. In an annoying move, I’m going to leave that up to you.

Saturday, January 28, 2012

Attempt Liability and Moral Luck (Part One)



(Series Index)

This post is the first in a short series on the combined issues of attempt liability and moral luck. Attempt liability is an idea arising out of criminal law. It holds that a person can be held criminally liable for attempting an offence, such as murder or rape, as well as for completing an offence. And moral luck is… well, moral luck is one of the more interesting conceptual developments in 20th century ethical philosophy. I’ll talk about it in more detail later.

The purpose of this series is to address a simple question: is it right to hold someone criminally liable for attempting, as opposed to completing, an offence? This entry introduces the basic problems associated with attempt liability, their connection to the problem of moral luck, and some basic arguments in favour of attempt liability. The second entry will consider the arguments against attempt liability.

I’m going to be working off a variety of sources for this post. I’ll provide relevant links as I go along.


1. The Problem(s) of Attempt Liability
So should an attempted murderer be punished in an equivalent manner to a successful murderer? Before you answer that, let’s pump some intuitions with the following case study:

The Poisoning in the Teacher’s Room (A): Mike and Marge both teach at the local high school. They don’t like each other very much, and clash repeatedly during their daily exchanges in the teacher’s room. Finding it all too much to take, Mike decides to poison Marge. In the teacher’s room there are two large pots, one containing loose-leaf tea and the other ground coffee. Marge drinks coffee every day. So Mike decides to place a quantity of poison in the coffee pot, just prior to her taking some to make her daily cup of coffee (he’ll replace the pot afterwards, before anyone else takes some poison). He does this and succeeds in poisoning and killing Marge.

The Poisoning in the Teacher’s Room (B): The exact same as above only at the last minute, for some unknown reason, Marge switches from drinking coffee and decides to drink tea instead.

How do you feel about these two cases? Do you think Mike is just as culpable in Case B as he is in Case A? If you do, then you are accepting (subject to defeaters) that attempt liability is a legitimate idea. But in doing so you open the door to some problems, chief among them the problem of distinguishing an attempt from a non-attempt.

One of the core values in a liberal society is that people are free to think and act as they wish as long as they do not harm anyone else in the process; one of the hallmarks of the totalitarian society is its attempt to regulate, control and punish thought and behaviour, irrespective of whether it harms anyone else. The problem with allowing for attempt liability is that you may begin the slide from liberalism to totalitarianism. While we may feel comfortable treating Mike from Case A the same as Mike from Case B, in doing so we might set a dangerous precedent for future cases, a precedent that blurs the boundary between an attempt and a non-attempt.

(See the discussion of precedential slippery slope arguments in this post for more on the arguments that might be made here).

Consider the following two variations on the poisoning case:

The Poisoning in the Teacher’s Room (C): Mike decides to poison Marge. He reads up about poisoning on the internet, buys some poison and plans out exactly what he is going to do. But at the last minute he gets cold feet and doesn’t put the poison in the coffee pot. Marge lives on oblivious, but Mike later tells a colleague of his plans and they inform the police (this last bit is probably irrelevant from a moral perspective, but important from an evidential one).

The Poisoning in the Teacher’s Room (D): Mike would love to poison Marge, but he just doesn’t have the courage. He fantasizes about it every day, planning the act meticulously in his mind, but never bringing it to fruition. He tells his therapist, but she sees no threat in his fantasies.

How do we feel about these two cases? I suspect we’d feel that Mike from Case D should be exempt from liability: his idle fantasies are exactly the kind of thing we want to protect from government intervention in a liberal society. But how about Mike from Case C? What he did was clearly less culpable than what he did in Case B, but we might still feel that some intervention, or some minimal form of liability, is merited. Indeed, the criminal law can allow for some liability through the offence of conspiracy. However, by allowing for this we’ve definitely begun to slide away from pure liberalism. That might be a perfectly acceptable thing to do (a capacity for nuance and an appreciation for the moral complexity of the real world are qualities we might like the criminal justice system to exemplify), but we have to consider where the boundaries should be drawn, if at all.

Distilling from the preceding discussion, there are perhaps three core boundary-line problems associated with attempt liability:

The Slippery Slope Problem: Should we even begin to punish attempts in addition to completed crimes?

The Equivalence Problem: Should we punish attempts in the exact same manner as completed crimes?

The Gradient Problem: Should we grade attempts in terms of their seriousness, and punish them in accordance with their location on the gradient?


Our primary focus will be on the first two problems, not so much on the third. I want to explore the arguments that propose different answers to those two problems. But before I do that I want to talk a little bit more about the issue of moral luck.


2. The Problem of Moral Luck
The problem of attempt liability links to a more general problem in moral philosophy, namely: the problem of moral luck. This is something which has garnered much attention in the relatively recent past. This largely began in the late 70s/early 80s when Thomas Nagel and Bernard Williams wrote a pair of classic articles on the topic.

The problem of moral luck can be simply stated: what role should luck play in the moral assessment of someone’s actions, accomplishments, failures, achievements and so on? Luck can be roughly defined as any action, event or state of affairs that is outside the control of the person being assessed. In his analysis of the issue, Nagel identified four main varieties of moral luck. They are (taken from the SEP article):

Resultant Luck: This is luck relating to the results of our actions. Poisoning cases A and B give us some idea of what is involved here. In both cases, Mike acted in the exact same way, but the results of his actions were different. In one instance, they led to the death of Marge, and in the other they did not.

Circumstantial Luck: This is luck relating to the circumstances in which one finds oneself. The classic example here being those who found themselves living in Nazi Germany in the 1930s. They likely did things that many of us would do (follow authority, act in their own self-interest) only they did so in appalling circumstances, circumstances which were outside of their control but made them complicit in an atrocity.

Constitutive Luck: This is luck relating to the kind of person that you are. Although we might like to think that we control our character traits and personality tics, our genes and our environment must play a considerable role in determining their content. These factors are beyond our control and could impact upon our moral choices.

Causal Luck: This is luck relating to the antecedent causes of who we are and how we act. Nagel views this as being equivalent to the issue of causation and determinism in the free will debate.

Now I’ll have to be honest and say I don’t see the need to distinguish between constitutive and causal luck. They seem like very similar concepts to me. Nevertheless, I think there is some utility to the resultant/circumstantial distinction. Indeed, within the criminal law, which is where the issue of attempt liability arises, this distinction is significant. This is because crimes are typically distinguished from one another on these kinds of grounds. For example, murder is a result-oriented offence: the actus reus of murder is the death of one person as caused by the actions of another. In contrast, rape is a conduct- and circumstances-oriented offence: the actus reus of rape is (usually) penetration of a bodily orifice of one person by the penis of another (conduct), without the consent of the person being penetrated (circumstance). So the conceptual and normative issues associated with resultant and circumstantial luck could be important when assessing attempt liability.

I previously said that the problem of moral luck has to do with whether luck should play a role in our moral assessments. That characterisation of the problem is sort of correct, but not quite all the way there. One of the realisations to emerge from the Nagel/Williams exchange in the 70s/80s was that luck does seem to play a substantial role in our everyday moral assessments. That is to say, we already seem to allow for moral blame to attach even in the presence of luck. So, to them and to most contemporary philosophers, the problem of moral luck is not “whether” luck has a role to play but, rather, how to account for the role that it does play. Of course, that’s not to say that the more general normative issue of “whether” is ignored — far from it — but it does suggest an alternative perspective is being taken on the problem.

Anyway, we now need to move away from this problem-setting stage and on to the problem-solving stage. We do this first by looking at three arguments in favour of punishing attempts and treating them as (roughly) equivalent to completed crimes. These arguments are: (i) the control argument; (ii) the harm prevention argument; and (iii) the deterrence argument. Let’s look at each in turn.


3. The Control Argument
The control argument appeals to the most common principle of responsibility, namely: an agent should only be liable for those results (and circumstances) that are within their control. From there it builds a case for treating attempts the same as completed crimes. Using the poisoning cases from earlier as a reference, this is the basic control argument:


  • (1) An agent is (only) liable for the results and circumstances that are within their control.
  • (2) In terms of results and circumstances, Mike from Case A exercised control over the exact same things (no more, no less) as Mike in Case B.
  • (3) Therefore, if Mike from Case A is liable for something, Mike from Case B is liable for the exact same thing.
  • (4) Mike from Case A is liable for the murder of Marge.
  • (5) Therefore, Mike from Case B is liable for the murder of Marge.


Although the logic here seems valid enough, the conclusion is strange. How can Mike be liable for murder in case B when Marge isn’t dead? After all, murder requires an actual death, doesn’t it? Since the conclusion seems strange we might be inclined to think that at least one of the premises is dodgy, but let’s not be too hasty. I suspect the reason for thinking that the conclusion is strange stems from confusing liability and responsibility. This is something I’ve spoken about before. Roughly, liability is concerned with the price one has to pay for one’s actions; whereas responsibility is concerned with the outcomes that one actually brought about. Since this argument is framed in terms of liability, not responsibility, I think it makes sense: Mike must pay the same price in both cases because he controlled the same things, despite the fact that his actions led to a different result.

Even still, there might be something wrong with the premises. For instance, we might argue that control is not the only thing relevant to liability, that utilitarian principles can also be used to determine who should pay the price for something. In that case, premise (1) would be faulty. But that wouldn’t necessarily spell the end for attempt liability because utilitarian principles might just as easily be used to support the case for attempt liability. Indeed, this is exactly what the next two arguments hold.


4. The Harm Prevention Argument
Like the control argument, the harm prevention argument works from a very simple idea. The idea is that the purpose of the criminal law is to identify those wrongs which we would prefer not to occur. And since those wrongs are usually (and probably preferably) linked to harms, it would seem that the criminal law is designed to prevent harm. Think about it like this. The reason we classify murder as a crime is because we don’t want people to kill one another. And the reason we do this is because deaths are harmful and we wish to prevent harm.

But then, if our goal is harm prevention, why should we wait until the harm has been caused before intervening? In other words, why doesn’t the following argument hold?


  • (1) The criminal justice system ought to prevent harm.
  • (2) Intervening before a crime has been completed (but after it has been attempted) prevents more harm than intervening after the crime has been completed.
  • (3) Therefore, the criminal justice system ought to be willing to intervene before a crime has been completed, not just after.


This argument is fine, in so far as it goes. The problem is that it doesn’t go far enough. While it might be true that, if we’re interested in harm prevention, we ought to try to prevent harm and not just step in after it occurs, this doesn’t say anything about whether we should hold someone liable for attempting a crime. In other words, the argument fails to answer the question: why can’t we just prevent the crime and leave it at that (without punishing the attempt)?

There are a number of possible replies (impracticality, epistemic hurdles etc). The next argument is one of them.


5. The Deterrence Argument
Let’s say we accept the basic tenets of the harm prevention argument. What we then need is some principle to plug the gap between intervention and liability. A deterrence argument might be exactly what we need. A deterrence argument will work off the idea that there are certain incentives that make people more likely to respond or behave in a particular way in the future. In many ways, the goal of any social engineer is to craft a network of incentives that encourages people to behave in ways you like, and deters them from behaving in ways you do not like.

When it comes to attempt liability, the proponent of deterrence is going to argue that intervention+punishment will be a more effective deterrent than intervention on its own. And if it is a more effective deterrent, then it will prevent more harm going into the future.

This leads us to the following argument:


  • (1) The criminal justice system ought to prevent as much harm as possible.
  • (2) Intervening and punishing attempts (i.e. creating a system of attempt liability) will prevent more harm than just intervening before crimes are completed (because it provides a greater deterrent).
  • (3) Therefore, the criminal justice system ought to create a system of attempt liability.


This argument, which is subtly different from the harm prevention argument, provides some justification for attempt liability. The second premise would be supported by the deterrence-based reasoning that I outlined in the two preceding paragraphs.
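The deterrence-based reasoning behind premise (2) can be illustrated with a toy model. Every number and name here is my own illustrative assumption, not anything drawn from the deterrence literature: a population of would-be offenders, each with a different benefit from crime, decides whether to attempt it; adding punishment on top of interception lowers the expected utility of attempting, so fewer attempts are made in the first place.

```python
# Toy model of ex ante deterrence. All values are hypothetical.

def attempts(n_agents, benefit, p_intercepted, punishment):
    """Count agents for whom attempting still has positive expected utility.
    Agent i's idiosyncratic benefit is i * benefit / n_agents."""
    count = 0
    for i in range(1, n_agents + 1):
        agent_benefit = i * benefit / n_agents
        # Interception frustrates the crime; punishment adds a further cost.
        eu = (1 - p_intercepted) * agent_benefit - p_intercepted * punishment
        if eu > 0:
            count += 1
    return count

# Intervention only: attempts are frustrated but never punished.
print(attempts(100, benefit=10, p_intercepted=0.5, punishment=0))   # -> 100
# Intervention plus punishment (a system of attempt liability).
print(attempts(100, benefit=10, p_intercepted=0.5, punishment=5))   # -> 50
```

On these (entirely stipulated) numbers, attempt liability halves the number of attempts, which is just premise (2) restated; whether real incentives work this way is exactly the empirical assumption the argument leans on.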

Is the argument any good? Well, note the change in premise (1) from the version in the harm prevention argument. One might argue that this change is both crucial and problematic. It is crucial because, without it, the deterrence-based objection to intervention without liability won’t work. It is problematic because it may claim too much for the criminal justice system. Should we really aim to prevent as much harm as possible? Probably not, especially if doing so will also prevent us from doing other things that we value. Whether deterrence-based liability actually does that is a question worth pursuing. All I’ll say here is that by using deterrence as the justification for imposing liability, we may slide down the slope towards totalitarianism. After all, totalitarian societies might be very safe places to live, but they achieve this at the expense of other values.

Okay, let’s leave it there for now. In part two, we’ll look at the arguments on the other side of the debate.

Friday, January 27, 2012

Free Will, Punishment and Responsibility (Series Index)



I've been blogging a lot recently on the philosophy of responsibility, punishment and, to a lesser extent, crime. Since I'm likely to continue addressing those topics in the coming weeks and months, I thought it was time to provide an index to all the posts I've written so far. I've divided them into three specific groups, but the divisions are far from pure. Anyway, you can expect this to grow somewhat in the near future.


1. Free Will and Moral Responsibility




2. Theories of Punishment




3. Criminal Responsibility and Liability



Thursday, January 26, 2012

Book Recommendations (Index)




I've decided to do a series of posts giving short book recommendations. Since I usually read far more interesting stuff than I could ever possibly write about in a substantive way, I figure a series of short posts just recommending things I've read might be worthwhile. Also, each post will give readers an opportunity to recommend things that they've read too.

I won't give any ratings or substantive criticisms of the books I recommend here, I'll just provide a couple of reasons for thinking the book is worthwhile. Also, while my main focus will be on philosophy books, I won't limit myself to those.


An Index to Recommended Books


1. Just the Arguments: 100 of the most important arguments in Western Philosophy

2. Contemporary Theories of Liberalism (Gaus)

3. The Art of Strategy (Dixit and Nalebuff)

4. Human Enhancement (Savulescu and Bostrom)

5. The Logic of Real Arguments (Fisher)

6. God in the Age of Science (Philipse)

7. The Ethics of Voting (Brennan)

8. On Politics (Ryan)

9. The Problem of Political Authority (Huemer)

10. Books on Writing (Various)

11. Philosophical Devices (Papineau)

12. Contractualism and the Foundations of Morality (Southwood)

13. Moral Tribes (Greene)

14. Gratuitous Suffering and the Problem of Evil (Frances)

15. The Mind-Body Problem (Goldstein)



Book Recommendations #1: Just the Arguments


(Series Index)

Regular readers of this blog will be aware of my penchant for argument analysis. It is perhaps a sad reflection on my hobbies, but there’s nothing I like more than taking a passage of argumentative writing, breaking it down into its key components (reasons, conclusions etc.), and then reconstructing it in a formal or diagrammatic way.

There’s something beautiful about a well-constructed formal argument. It compresses masses of information, provides a structure for persuasive and critical thinking, and reveals the strengths and weaknesses in someone’s reasoning. When you formalise an argument, you can usually instantly highlight points of disagreement and agreement. What’s more, since philosophy should be an inquisitive, not a persuasive, enterprise, the formalisation of arguments allows you to identify useful areas for future research and inquiry.

Given my love for arguments, it will come as little surprise to learn that I’m a big fan of Michael Bruce and Steven Barbone’s recent edited collection Just the Arguments: 100 of the Most Important Arguments in Western Philosophy. The book is definitely one that can be judged by its cover, or, more properly, its title. It features one hundred individual chapters by different authors, albeit with some repeat offenders along the way. Interestingly, this adds up to more than 100 arguments. For example, the opening chapter on Aquinas’s Five Ways contains, unsurprisingly, five separate arguments.

By and large, the content is good. It ranges over the full suite of philosophical topics: religion, metaphysics, epistemology, ethics, mind, science and language. The chapters are usually short enough to read through in five to ten minutes, although some of the longer ones will take up more time. The arguments are typically presented at the end of each chapter, and references to primary and secondary sources are provided at the start. The arguments themselves vary from the extraordinarily complex to the admirably brief. On the complex side of things, check out the chapter on Kant’s categorical imperative, which features 27 premises and 14 conclusions; on the brief side, check out the chapter on G.E. Moore’s anti-skeptical arguments, which have 2 premises and a conclusion.

I wouldn’t hesitate to recommend the book to anyone with an interest in philosophy. The editors pitch the book at students, suggesting it might provide an ideal revision tool since most philosophy courses can be boiled down to a few key arguments. But I think professional philosophers and interested lay people could easily benefit from it too. I can certainly imagine myself dipping into it some afternoon, and perhaps squeezing a blog post out of one of the chapters.

There are some criticisms to be made though. First, because the chapters are written by different contributors, the quality is somewhat patchy. And while it’s nice to have short chapters for ease of reference, I find the longer chapters with background and context are more enjoyable to read, particularly for topics outside my main areas of research. Also, some contributors are a little too sparing with their bibliographies when compared to others. I think it would be nice to have a reasonably detailed bibliography in each chapter for those who wish to follow things up in more detail.

Wednesday, January 25, 2012

Facebook Page


I've set up a Facebook page for the blog. I'll put links to all new blog posts on there, as I do with my Twitter page. Not sure how useful this will be, but if Facebook is your main portal to the web, and if you're a fan of this blog, you might like to check it out.

I've put a link to it in the sidebar, and below too:

Philosophical Disquisitions Facebook Page

Enhancement and Education: Lessons from the Kobayashi Maru (Part Two)



(Part One, Series Index)

This post is the second in my two-parter on the lessons we can draw from the Kobayashi Maru (KM) for the enhancement-in-education debate. The KM test is, of course, a part of the Star Trek canon. It is supposedly an unpassable test but, according to Trek lore, Kirk managed to pass it. The problem is that he did so by reprogramming it to make it passable. The question I’m considering now is whether this was legitimate.

In a change from my normal practice, I’m going to continue directly on from part one (this is reflected in the numbering of the sub-sections). So, in other words, you really need to read part one before attempting this.


3. Spock’s Argument Against Kirk
Should Kirk’s “pass” on the KM really be a “fail”? The obvious answer would appear to be “yes”, but let’s consider the reasoning behind the obvious answer in a little more depth.

According to the most recent film, Star Trek, Spock was the creator of the test, and he clearly thought that Kirk had defeated the true purpose of the test by cheating. As a result, his success was illusory and should be deemed illegitimate. Indeed, Spock directly challenged Kirk, through disciplinary proceedings, on this issue. This is revealed in the courtroom-like scene in the film where Kirk asks to confront his accuser, who turns out to be Spock. Unfortunately, I couldn’t find the relevant clip on YouTube, so you’ll have to look it up on your own time (assuming you have access to a copy of the film).

More important than the clip though is Spock’s reasoning. As creator of the test, Spock clearly had an intended learning outcome (ILO) that he wanted to achieve with the test. He states this pretty clearly at one point. He says the purpose of the test was to get the student to experience fear in the face of certain death and to see how they coped with it. This means that the KM-test was not intended to be a test of problem-solving skills, but, rather, a test of character. Consequently, the no-win scenario was a core part of Spock’s ILO.

This leads to the following argument:


  • (4) If you succeed in a test by avoiding the ILO of that test, your success is illegitimate and illusory.
  • (5) The ILO of the KM-test is to see whether someone has the psychological resilience to cope with a no-win (certain death) scenario, not to develop their problem-solving skills.
  • (6) By reprogramming the test, Kirk avoided having to cope with a no-win scenario.
  • (7) Therefore, Kirk’s success on the KM-test was illegitimate and illusory.


This argument appears to be valid. So if its premises are true, Spock’s accusation against Kirk is vindicated (then again, we’d expect this since Spock is, supposedly, “logical”).

So are the premises true? I think premises (5) and (6) are relatively uncontroversial: (5) is what the (admittedly fictional) Spock himself claims for his test, and (6) seems like a fair description of what Kirk did. Premise (4) is the tricky one. Its justification could be based on what I said in part one about constitutive regulations and ILOs, but I don’t want to spell out that argument just yet (for reasons that should become clear). Instead, I’ll offer a prima facie justification of it by way of analogy.

The analogy is as follows. Suppose I assess my philosophy 101 course with an essay. The essay asks the student to offer an argument for or against the following proposition: “Abortion is immoral”. The ILOs for the students taking this course would include: (i) acquire knowledge of relevant subject matter in ethical philosophy; (ii) develop the ability to think critically and reasonably about a controversial ethical topic; and (iii) develop the ability to present an argument about this issue in a readable manner. The essay is supposed to provide evidence as to whether those ILOs have been achieved.

Now suppose that instead of researching, thinking and writing about the topic in the required manner, a student simply buys an essay dealing with this topic from an online essay-mill and hands it in as if it is their own work. This essay might be quite good and might, if I can’t tell the difference between it and a genuine essay, garner a pass grade or higher. This means that the student would (officially) appear to have succeeded in the test. But then ask yourself: would that student’s success be legitimate? No, obviously not. It would be both illegitimate and illusory. And why is this? Because the success was gained by avoiding the ILOs. Thus, premise (4) would seem justifiable.

Is this enough for Spock’s argument to go through? Not quite: we still have to consider Kirk’s potential rebuttal and counterargument.


4. Kirk’s Rebuttal and Counterargument
To be clear, nowhere in the Star Trek canon does Kirk actually present anything like a formal rebuttal and counterargument to Spock. He does, however, present his reason for reprogramming the test. In Wrath of Khan — during one of the scenes set in the cave on the Genesis moon, if you must know — Kirk says, with admirable brevity, that he cheated because he doesn’t believe in the no-win scenario. I think we can expand upon this single reason to present a more formal response to Spock’s argument.

I’ll focus first on a potential rebuttal of Spock’s argument. A rebuttal is an argument or premise that challenges a premise in the main argument. In this instance, the rebuttal will be aimed at premise (4). You see, despite the prima facie justification offered above, there is some reason to doubt its truth. This reason is linked to some of my previous comments on ILOs and their connection to the constitutive regulation argument (see part one for details).

Basically, my feeling is this: an ILO can only be part of a constitutive regulation argument if the ILO is itself morally legitimate, and the problem with ILOs is that they need not always be morally legitimate. Indeed, if one of the key properties of an ILO is that it is linked to what a teacher intends for their students to get out of a course, then there’s no reason why an ILO can’t be morally arbitrary. For example, I could make learning the names of all my relatives one of the ILOs for the students on my philosophy course. I could even assess them on this by asking them to name my relatives at the end of their essay. However, this ILO would be morally arbitrary and the students would be right to challenge me on it (or to “cheat” on this part of the assessment if they so wished).

So premise (4) is rebutted by this:


  • (8) You are within your moral rights to bypass or avoid an ILO if that ILO is morally illegitimate or morally arbitrary.


And so premise (4) should be revised accordingly.

All of which leads us to the counterargument to Spock. A counterargument is an argument that stands in direct opposition to the conclusion of another argument. So in this case we want an argument that contradicts (7), from above. How can we construct such an argument? Well, we can start by picking up the trail left by the rebuttal that I just outlined. The rebuttal works because it assumes that some ILOs are legitimate and some are not. Furthermore, it assumes that a student can be within their moral rights to bypass an illegitimate ILO.

This raises an intriguing possibility. What if, in addition to having ILOs that are legitimate/illegitimate, we could also have hierarchical relationships among legitimate ILOs? In other words, what if some ILOs were more legitimate than others? And what if, recognising that there was a “more” legitimate ILO in place, a student achieved it by avoiding a lesser ILO? Might that make the practice of avoiding a legitimate ILO acceptable?

Like I said, it’s an intriguing possibility. It’s also a possibility that could work in Kirk’s favour. After all, I don’t think we can say that Kirk’s success was legitimate because Spock’s ILO was morally illegitimate — cultivating the psychological resiliency to face death seems like a legitimate ILO for command-track cadets in Starfleet — but we might be able to say that his success was legitimate because he achieved a greater ILO at the expense of a lesser one.

This raises the obvious question: what might the greater ILO be in the case of the KM-test? Here’s a suggestion. Most educators would agree that one of the supreme goals of any type of education is to cultivate the capacity for critical thought among students. Critical thought is understood to include the ability to question taken-for-granted assumptions through the use of reason and logic. If students can demonstrate this ability, most educators would be pleased.

So perhaps this is what Kirk was doing when he “cheated” on the KM-test. Perhaps through reprogramming the test he demonstrated a capacity for critical thought that should be rewarded, not punished. As follows:


  • (9) If a student succeeds on a test by avoiding one legitimate ILO for the sake of a more important ILO, then their success is not illegitimate or illusory (in fact, it’s the very opposite).
  • (10) Demonstrating the capacity for critical thought (an ILO for nearly all educational projects) is more important than demonstrating the psychological resiliency to cope with a no-win scenario (Spock’s ILO for the KM-test).
  • (11) The capacity for critical thought consists primarily in the ability to challenge taken-for-granted-assumptions.
  • (12) The implicit taken-for-granted assumption of the KM-test is that there is such a thing as a no-win scenario.
  • (13) By reprogramming the test, Kirk demonstrated an ability to challenge the assumption that there is such a thing as a no-win scenario.
  • (14) Therefore, Kirk demonstrated a capacity for critical thought (from 11, 12, and 13).
  • (15) Therefore, Kirk’s success on the KM-test was not illegitimate or illusory (from 9, 10 and 14).


Again, I think this is valid and it might make us more sympathetic to Kirk’s solution. I’ve illustrated the relationship between this argument and Spock’s argument in the diagram below.




For all the sympathy that Kirk’s counterargument might engender, it has a number of weak links. For starters, premise (10) might be controversial. Someone might argue that the hierarchy is the other way round or, worse, that hierarchical relationships cannot be established between ILOs because they are fundamentally incommensurable. Additionally, someone might dispute premise (13) and say that the mere act of reprogramming does not provide good evidence of critical thought.

These objections are certainly worthy of consideration, but I think there is a more interesting one. The argument above (premise 12) suggests that the KM-test comes with a taken-for-granted assumption about no-win scenarios. It further suggests (premise 13) that challenging that assumption would be the way to demonstrate critical thought. Consequently, since Spock’s ILO requires the assumption about the no-win scenario, demonstrating critical thought would have to come at the expense of Spock’s ILO.

But maybe that’s the wrong way to look at it. Maybe the KM-test is itself designed to challenge a taken-for-granted assumption. Most students would enter a simulated test like the KM with the assumption that there is some kind of “solution” to the problem. They might then carry this assumption with them into the real world when they should really be open to questioning it. Thus, it may be that in programming the test so that there is no solution, Spock actually gets them to both (a) challenge their own assumptions about these kinds of scenarios and (b) test their psychological resiliency by getting them to confront death. In other words, achieving Spock’s ILO need not come at the expense of critical thought. Indeed, it might actively require a capacity for critical thought.

In the end then, whether we side with Kirk or Spock might just come down to which assumptions we think are worthy of challenge and whose challenge would hence be more indicative of critical thought. So whether Kirk’s success was legitimate or not is, as I said at the outset, a close run thing.


5. Lessons for the Enhancement Debate

At last we come to the topic that I started out with: the legitimacy of enhancement in the educational context. Can the analysis of the Kobayashi Maru tell us anything interesting about this issue? On one level, the analysis of the Kobayashi Maru is just a bit of fun; but on another I think it has some interesting lessons for the enhancement debate. Here are the two that occur to me:

The Normative Irrelevance of ILOs: The first lesson has to do with the normative salience of ILOs. As should be clear from what I said, ILOs are not necessarily normatively significant. Since they are linked to the intentions of the teacher, it is possible for them to be normatively arbitrary. This has important consequences for anyone who wishes to object to enhancements on the grounds that they might cause students to bypass or avoid certain ILOs. They first have to make sure that the ILOs being avoided are normatively significant.

The Link Between Enhancement and ILOs: The second lesson has to do with how the use of enhancement might be linked to the avoidance of a normatively significant ILO (or, alternatively, how it might be linked to the achievement of an ILO). Kirk’s reprogramming of the KM-test was (depending on how you look at it) either directly indicative of a failure to achieve an ILO or directly indicative of success in achieving an ILO. It was directly indicative of a failure to confront death (Spock’s ILO), or directly indicative of an ability to think critically (a more general ILO). Thus, it is possible to make a fairly simple normative argument for or against Kirk’s actions. This is interesting because most enhancement technologies are unlikely to be directly linked to ILOs in this manner. The effect they actually have on achieving ILOs is likely to be indirect. Thus, making normative arguments for or against their use is likely to be more difficult.

And on that note, I shall conclude.

Tuesday, January 24, 2012

Enhancement and Education: Lessons from the Kobayashi Maru (Part One)



(Series Index)

Okay, so this is going to be the last two-part entry in my series on the use of enhancement in sports and education, and since it’s the last I’ve decided to have a little fun with the topic. This post is going to be a rather self-indulgent philosophical analysis of a key component of the Star Trek canon: the Kobayashi Maru Test. (Don’t worry, I’ll be explaining for all the naifs.) I suppose I should apologise in advance to all those non-Star Trek fans, but I recommend persevering with these two entries anyway since I think they contain some interesting material. Then again, I would say that.

To set this up properly, I need to summarise the purpose of this series so far. The series was written to help me investigate whether the use of cognitive/performance enhancing technology is legitimate in the educational context. To make this investigation more interesting, and to draw upon an already rich philosophical literature, I’ve been considering the analogies between sports and education. I’ve done this over quite a number of posts. I don’t know if I have any particularly strong conclusions to draw so far. My basic feeling is that the use of some performance enhancers might be illegitimate in (some) sports because they either breach the constitutive regulations of that sport, or because they lead to inter-temporal unfairness. But as to whether that carries over to education, I’m really not too sure. There may be important disanalogies between the two fields that make this impossible.

The next two posts will focus on those potential disanalogies, but won’t directly touch upon the enhancement issue. At least, not until the end. The goal, instead, is to expand upon the notion that breaching the constitutive regulations of some activities is morally illegitimate by exploring the fictional example of James T Kirk’s alleged “success” on the infamous Kobayashi Maru test. Using what has been said about the test in two of the Trek films (Wrath of Khan and the more recent reboot film Star Trek), I’ll suggest that there are good arguments on both sides of the issue. So Kirk’s success might be legitimate or it might not; it’s a close run thing. And the fact that it’s a close run thing has some interesting implications for the enhancement debate.

The remainder of the series is structured as follows. In section one, I revisit the concept of a constitutive regulation and explain why it is normatively significant. In section two, I give a brief outline of the structure of the Kobayashi Maru test and explain the circumstances behind Kirk’s alleged “success” in it. In section three, I outline Spock’s argument (from Star Trek) as to why Kirk’s success was illegitimate. In section four, I outline Kirk’s counterargument as to why his success was legitimate. And finally, in section five, I draw out the lessons of all this for the enhancement debate.

I’ll cover sections one and two today; sections three, four and five the next day.


1. The Constitutive Rule Argument
Some time ago, John Searle set down a very simple taxonomy of rules. According to Searle, the kinds of rules we use to regulate and control our activities can be broken into two broad classes: (i) regulative rules; and (ii) constitutive rules. Regulative rules take a pre-existing activity or set of activities and set down some rules so as to signal to us the (normatively) best way to perform that activity or set of activities. Constitutive rules are different: they set down rules so as to constitute (i.e. create) a new type of activity, a type of activity that wouldn’t exist without the rules.

Compare the rules of driving and the rules of chess. Driving is an activity that does not need rules to exist: we all know what it is to drive a car without having someone tell us that we ought to drive a car in a particular manner or at a particular speed. The rules of driving simply tell us how best to perform that activity. So, for instance, it is possible to drive a car while intoxicated, but this is a normatively inferior way of driving a car, hence there is a rule telling us not to do this. Contrast that with chess. While it is true that moving carved wooden pieces around a board makes a certain amount of sense without the rules of chess; it is also true to say that without following those rules any such activity is not chess. The rules of chess don’t just tell us how we ought to move wooden pieces around a board, they also create a unique kind of activity which we call chess. In other words, the rules of chess constitute a particular activity, they don’t just regulate it.

There is something attractive about the constitutive rule concept when it comes to understanding sport (and perhaps education - we’ll get to that in a minute). For example, the rules of soccer (football to the Brits) don’t simply regulate the activity of kicking a ball around a pitch; they also constitute a particular kind of activity we call soccer. But there’s something slightly unsatisfactory about using the constitutive rule concept when making a normative argument about sport. The problem is this: because of their nature, constitutive rules seem to be descriptive not prescriptive in nature. Thus, any argument made by appealing to them will be factual, not normative. For example, if we play cricket with a baseball bat, we’re not playing a normatively inferior kind of cricket; we’re just not playing cricket at all.

Or so it seems. But David Lauer has made an interesting argument about this in a recent paper. Lauer suggests that constitutive rules can be used as the basis of a normative argument, provided we distinguish between two kinds of constitutive rule. They are:

Constitutive Standards: These are constitutive rules that tell us the conditions that one kind of activity (X) has to satisfy in order to count as another kind of activity (Y). For example, moving carved pieces of wood around a check-patterned board counts as chess, if the movements correspond to the rules of chess. In this form, the constitutive rules are purely descriptive, not normative.

Constitutive Regulations: These are constitutive rules that remind us how an already intelligible activity (X) ought to be done in order to count as a good instance of X (call this Y). For example, hitting someone with your fists is a kind of activity that makes sense without the need for rules, but when you add rules it might constitute a new phenomenon that we call “boxing”, whilst at the same time creating a normatively superior form of hitting someone with your fists. In this form, the constitutive rule is not purely descriptive, it is also partially normative too.

I’d recommend reading Lauer’s paper for more on this conceptual division and the kind of work it can do. For now, I’ll suggest that we could use the constitutive regulation concept as the basis for a normative argument. As follows:


  • (1) It is wrong to perform an activity whilst breaching the constitutive regulations of that activity.
  • (2) X breaches the constitutive regulations of an activity.
  • (3) Therefore, X is wrong.


The interesting question is whether educational activities — specifically assessments — come with constitutive regulations. I think the answer is “maybe”. If we follow contemporary teaching theory, then each course we teach should come with a number of intended learning outcomes (ILOs). These are things you want your students to be able to do at the end of the course (usually they are abilities or capacities you want them to develop). In essence, they are the normative goals of the course. If the course is well-designed, then the assessment should essentially be like a “game” in which students are forced to demonstrate that they have achieved the outcomes. If they do not, they fail.

Assessments of this sort should, I think, bear some resemblance to an activity or set of activities governed by constitutive regulations. The assessment-regulations will take an independently intelligible set of capabilities (e.g. memorisation, analysis, critical thinking) and, through the constraints of rules, create a scenario in which there is a normatively superior way of demonstrating those capacities. These will be the test conditions and parameters. Consequently, if one breaches the constitutive regulations of the test, one should be deemed to have both: (a) subverted the purpose of the test; and (b) if the ILOs are normatively significant (a point to which I shall return), one should also be deemed to have done something normatively illegitimate.

For the remainder of this series, the key issue is to see whether Kirk’s actions in “passing” the Kobayashi Maru test did, in fact, breach the constitutive regulations of that test.


2. What is the Kobayashi Maru Test?
To address this issue, we first need to know what the Kobayashi Maru (KM) test actually is. The following description is based on the one from Memory Alpha (the Star Trek-wiki).

The KM is a test given to all command-track cadets in Starfleet. The test takes place in a simulated version of the USS Enterprise’s bridge. The test candidate assumes the role of captain for the duration of the simulation. The simulated scenario is as follows. The Enterprise is on patrol near the neutral zone between the Federation and the Klingon Empire. It receives a distress call from a civilian freighter named The Kobayashi Maru. The freighter, which is located within the neutral zone, has struck a gravitic mine and needs to be rescued, otherwise the crew and passengers will perish. While rescuing the ship is what every commander would like to do, the problem is that entering the neutral zone risks a confrontation with the Klingons. Sure enough, this is exactly what happens: when the Enterprise enters the neutral zone, three Klingon battle cruisers decloak and attack.

The video below, taken from the Wrath of Khan, shows what the test looks like.


The test is programmed in such a way that, once you enter the neutral zone, there is no way to “win”. In other words, there is no way to successfully rescue the Kobayashi Maru while at the same time avoiding death at the hands of the Klingons. This renders the test more a test of character than a test of problem-solving. Everyone is supposed to fail the test, at least superficially.

Now, while I’m willing to accept that the KM-test is a no-win situation, I must point out at least one potential flaw in the set-up so far. Donning a moral philosopher’s cap for a moment, I think the KM can be viewed as a kind of moral dilemma. In fact, I think a moral dilemma is the quintessential no-win scenario. A moral dilemma, strictly defined, is any decision-making context in which one’s choices are limited to two (or more) equally bad courses of action. As such, there is no morally correct solution: no way to “win” from a moral perspective and a genuine tragedy associated with any choice you make.

But if we adopt a consequentialist ethic, I think the KM-test is not a true moral dilemma, and hence not a true no-win scenario. Look at it like this: in the initial phase of the test you have two options: (i) enter the neutral zone and attempt a rescue or (ii) do not enter the neutral zone and do not attempt a rescue. If you go for option (i), you will be killed and so too will the crew and passengers of the KM. If you go for option (ii), you will not be killed, but the crew and passengers of the KM will be. Presumably. And while ideally no one should die, it’s surely preferable that the crew of only one ship die than that the crews of two ships do. The two options are not equally bad. One seems clearly better than the other.

Admittedly, this is a controversial solution. It’s much like the classic kill-one-to-save-five scenario depicted in the trolley problem. But as I said at the outset, this is only a “potential” flaw in the structure of the KM-test. It could easily be repaired. For one thing, the solution just outlined only works if we assume the captain knows what will happen when he/she enters the neutral zone (i.e. if we assume perfect information); if we assume the opposite — that the captain does not know what will happen — then the solution I pointed out above becomes much less obvious. For another thing, the test designers could easily reprogramme it so that the initial choice — that of entering the neutral zone or not — is eliminated. This way one is landed immediately into the no-win dynamic of the rescue.

Of course, one of the key bits of Star Trek lore is that Captain Kirk managed to “pass” the KM-test despite its no-win dynamic. How did he manage this? Well, as is reported in Wrath of Khan and depicted on screen in Star Trek, he “cheated”. He reprogrammed the test so that it was possible to defeat the Klingons and rescue the KM.


The question we need to ask is whether his “success” on the test was commendable or not. To do this, we’ll need to delve a little deeper into the ILOs and constitutive regulations of the KM-test, and assess their normative significance. We’ll do this in part two.

Friday, January 20, 2012

Vincent on the Responsibility-Liability Gap



When we make judgments about a person’s responsibility, are we making judgments about their liability too? Or is there some conceptual “gap” between responsibility and liability? If there is a gap, what could possibly fill that gap? These three questions form the basis of one of my ongoing research projects. And as part of that research project, I’m currently surveying some of the literature on this putative gap.

To that end, this post is going to look at Nicole Vincent’s discussion of the responsibility/liability gap in her paper “What do you mean I should take responsibility for my own ill health?”. Vincent, whose work I’ve discussed in the past, has done much in recent years to clarify the conceptual landscape of responsibility, so her work is about as good as any a place to start.

In the paper I’m going to look at today, Vincent’s primary targets are the views of the so-called luck egalitarians. The luck egalitarians have a particular view on the distributive justice debate, a view which, according to Vincent anyway, makes certain unjustified assumptions about the relationship between responsibility and liability.

I won’t be going through all of Vincent’s article in this post. Instead, I’ll be focussing on the section in which Vincent endorses the notion of a responsibility/liability gap. Nevertheless, I’ll try to give enough of a flavour of the rest of the paper for Vincent’s arguments to make sense.

The remainder of this post is divided into two parts. Section one offers a brief primer on the whole notion of luck egalitarianism. Section two presents Vincent’s argument for the existence of a responsibility/liability gap.


1. The Allure of Luck Egalitarianism
Distributional problems form the basis of many of our social policies: Who is entitled to welfare or unemployment payments? How much should they get? Who is entitled to publicly-funded healthcare? Who gets priority for organ transplants? Each of these questions forms the basis of a distributional problem and each answer forms the basis of a social policy.

Luck egalitarians have a particular view on distributional problems of this sort. Their view is driven partly by the belief that any scheme of distribution ought to minimise the effects of bad luck on a person’s life. “Bad luck” can be defined as any event or circumstance which is disadvantageous to a particular individual, but which is not their fault. In addition to being driven by the desire to minimise the effects of bad luck, luck egalitarians are also, typically, driven by the belief that any disadvantage that is the fault of a particular individual should be discounted or ignored when we try to solve our distributional problems.

The idea should become clearer if we consider the following two distributional problems:

The Gambler: Suppose that Ronald has squandered all his money by gambling on the racehorses. As a result Ronald is living in squalor. Suppose also that Richard has also lost all his money and is forced to live in squalor. But this is because he was made redundant after the head of his company was found to have committed massive fraud on his shareholders and workers. Suppose further that, although the state is willing to pay out welfare to people who live in squalor, they only have enough money to payout to one individual. Who should be entitled to it?

The Smoker: Suppose Ronald has spent most of his adult life smoking 40 cigarettes a day. As a result he has contracted lung cancer and needs ongoing healthcare. Suppose that Richard is in a similar predicament, only his cancer is attributable to asbestos exposure at the hands of a negligent employer. Once again, the state is willing to pay for the healthcare but only has enough money to pay for one individual. Who should be entitled to it?

I suspect that, for most people, the intuitive response to these questions is clear: Richard has a greater entitlement than Ronald. But why is this? Well, in both instances Richard’s predicament is attributable to bad luck, i.e. circumstances that are not his fault; whereas Ronald’s predicament appears to be attributable to certain lifestyle choices that he made, i.e. circumstances that are his fault. So our answer to the distributive question in both cases seems best explained by the idea that judgments of entitlement track judgments of bad luck/responsibility. This is the essence of luck egalitarianism.

There are a few forms that luck egalitarianism can take; I’ll mention two by way of illustration. The first I call “strict luck egalitarianism” and the second “prioritarianism”.

Strict Luck Egalitarianism: If X is responsible for their own misfortune, then X is not entitled to any distributional benefits that could alleviate that misfortune.

Prioritarianism: If X is responsible for their own misfortune, then X moves to the back of the queue in terms of entitlement to any distributional benefits that could alleviate that misfortune.

These formulations are my own, and it shows. They are quite cumbersome. Nonetheless, I hope it’s clear just how strict the first formulation really is: it suggests that those who are responsible for their own misfortune are never entitled to any benefits. This is probably much too harsh for most people. Are we really going to deny a smoker access to healthcare, even when we can afford to supply it? Prioritarianism, on the other hand, seems more sensible. It is quite popular in the literature (so I believe) and is most readily associated with the work of Richard Arneson.
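To make the contrast between the two formulations vivid, here is a small sketch of each as an allocation rule over claimants and limited resources. This is my own illustration, not anything from Vincent or the luck egalitarian literature; the function and field names are invented for the example.

```python
# Illustrative sketch (my own, not from the article): the two luck
# egalitarian formulations modelled as rules for allocating a limited
# number of benefit "slots" among claimants.
from dataclasses import dataclass

@dataclass
class Claimant:
    name: str
    at_fault: bool  # is the claimant responsible for their own misfortune?

def strict_luck_egalitarian(claimants, slots):
    """Those at fault are excluded outright: they are never entitled."""
    eligible = [c for c in claimants if not c.at_fault]
    return [c.name for c in eligible[:slots]]

def prioritarian(claimants, slots):
    """Those at fault move to the back of the queue, but stay eligible."""
    queue = sorted(claimants, key=lambda c: c.at_fault)  # False sorts first
    return [c.name for c in queue[:slots]]

smoker_case = [Claimant("Ronald", at_fault=True),
               Claimant("Richard", at_fault=False)]

# With one healthcare slot, both rules favour Richard:
print(strict_luck_egalitarian(smoker_case, 1))  # ['Richard']
print(prioritarian(smoker_case, 1))             # ['Richard']

# With two slots, the rules diverge: strict luck egalitarianism still
# excludes Ronald, whereas prioritarianism now covers him as well.
print(strict_luck_egalitarian(smoker_case, 2))  # ['Richard']
print(prioritarian(smoker_case, 2))             # ['Richard', 'Ronald']
```

The divergence in the two-slot case is the point of the contrast: the strict view denies the smoker healthcare even when it can be afforded, while prioritarianism only demotes him in the queue.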

When discussed in light of case studies such as the gambler and the smoker, luck egalitarianism is alluring. But is this allure enough? Is it prone to any objections? As it happens, there are a number of objections to luck egalitarianism in the literature (no surprises there) but Vincent isn’t too concerned with them in her article (she discusses them but dismisses them). Instead, she wants to develop an alternative objection to luck egalitarianism, one based on the alleged responsibility-liability gap. Let’s discuss this next.


2. The Responsibility-Liability Gap
Vincent’s argument proceeds in three distinct “steps”. Step one argues that the notion of outcome responsibility is conceptually distinct from the notion of liability responsibility. This is significant since luck egalitarianism relies on the latter not the former. Step two argues that there is an important logical gap between making claims about outcome responsibility and making claims about liability responsibility. And step three argues that luck egalitarians either ignore this logical gap or else fill it without engaging in an important moral debate. Consequently, their theory is improperly defended in at least one crucial respect. I’ll expand briefly on each of these steps.

Step one relies heavily on Vincent’s structured taxonomy of responsibility concepts (STRC), which I discussed in a previous blog post. Ideally, you should read that post first before continuing with this one, but things are rarely ideal, so here are three important points. First, the STRC highlights the conceptual complexity of everyday responsibility-talk. When people say that someone (or something) is “responsible”, they could be invoking any one of the following six responsibility concepts:

Virtue Responsibility: This refers to someone’s characteristics or traits. As in “X is a responsible individual, he takes his duties seriously and performs them diligently.”

Role Responsibility: This refers to someone’s duties. As in “X (a ship’s captain) is responsible for the well-being of his crew”.

Liability Responsibility: This refers to the actions someone must perform in order to take responsibility for something, e.g. how they must pay the penalty for something.

Causal Responsibility: This refers to the cause of some event or outcome. As in “The drought is responsible for the famine”.

Capacity Responsibility: This refers to the capacities or abilities someone needs in order to be a responsible agent.

Outcome Responsibility: This refers to the outcomes for which someone is held responsible. As in “X is responsible for murdering Y”.

Second, the STRC highlights the structural relationships between these responsibility concepts. Vincent summarises these relationships by using the following diagram. The diagram suggests that capacity-responsibility shapes causal- and role-responsibility, that causal- and role-responsibility determine outcome-responsibility, and that outcome- and virtue-responsibility determine liability-responsibility.
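The structural relationships just described can be written out as a small dependency map. This is only my rendering of the relationships summarised above, using invented labels, not Vincent’s own notation.

```python
# A minimal sketch (my own labels) of the dependency structure described
# above: which responsibility concepts feed into which others.
determined_by = {
    "causal":    ["capacity"],
    "role":      ["capacity"],
    "outcome":   ["causal", "role"],
    "liability": ["outcome", "virtue"],
}

def upstream(concept):
    """All concepts that directly or indirectly shape the given one."""
    seen = set()
    stack = list(determined_by.get(concept, []))
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(determined_by.get(c, []))
    return seen

# Liability-responsibility ultimately rests on every other concept:
print(sorted(upstream("liability")))
# ['capacity', 'causal', 'outcome', 'role', 'virtue']
```

One thing the map makes obvious is that liability-responsibility sits furthest downstream, which is why, on this taxonomy, claims about it cannot simply be read off from any single upstream concept.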



Third, these responsibility concepts have some significant differences in terms of their core properties. In particular, they vary in terms of their temporal direction and prescriptivity. For instance, causal- and outcome-responsibility are descriptive and backward-looking: in order to determine who is responsible for which outcomes, we must look back to the past and describe the events that happened in it. In contrast to this, role-responsibility and liability-responsibility are forward-looking and prescriptive: in order to determine what steps someone must take in order to pay the penalty for what they’ve done, or what steps they must take in order to fulfil their duties, we must look into the future and prescribe a course of action.

The important point for current purposes is how this taxonomy illustrates the significant conceptual distinction between outcome-responsibility and liability-responsibility: one is backward-looking and descriptive in nature, the other is forward-looking and prescriptive in nature. This is significant since luck egalitarianism relies on the liability concept of responsibility, not on the outcome one: to say that a smoker is (outcome) responsible for his ill-health is not the same as saying that he ought to take (liability) responsibility for his ill-health.

This presages the crucial second step in Vincent’s argument. The second step highlights the logical gap between outcome responsibility and liability responsibility. This gap is typically passed over by luck egalitarians because, as Vincent sees it, they are guilty of making something like the following argument:


  • (1) X (e.g. a smoker) is outcome responsible for A (e.g. lung cancer).

  • (2) Therefore, X is liability responsible for A.


Clearly, this argument is logically invalid: the conclusion (2) cannot be derived from premise (1). At least, it cannot be derived from that premise alone. A normative “bridging” premise is needed to plug the gap. This is something luck egalitarians need to provide. But even if they provided it for this case, they would face another gap that needs plugging. Why is this? Because liability responsibility also involves an agent taking certain steps in order to discharge their burden of liability. Thus, for instance, the intuition in the Smoker case is that the smoker must be denied priority in the assignment of healthcare in order to discharge their burden of liability. This is a particular course of action that is prescribed for the smoker.

To spell this out more formally, we say there is a gap between:


  • (2) Therefore, X is liability responsible for A.


And:


  • (3) Therefore, X is liability responsible in manner Y (e.g. forced to pay compensation, or denied healthcare benefits).


This gap must also be plugged.

This brings us to the third step in Vincent’s argument. Having illustrated the conceptual division among responsibility concepts, as well as the various logical gaps just described, Vincent goes on to point out how luck egalitarians tend to do one of two things. They either (a) carry on regardless and assume that outcome responsibility somehow entails liability responsibility; or (b) they try to plug the gap with some explicitly stated normative bridging premise.

She suggests that, in light of the intuitions pumped by the Smoker and Gambler cases, the most likely candidate for such a bridging premise is the following:


  • (4) If X is outcome responsible for A, then X is liability responsible in manner Y.


This would plug both of the gaps identified above, and Vincent calls it a reactive norm of liability. This is because it maintains that one’s liability is a direct function of what one did in the past. The problem with this reactive norm is that it in turn needs to be justified. And the only way to justify it is by looking at the kinds of arguments used in more general debates about desert (i.e. debates about how someone deserves to be treated). These arguments are usually consequentialist or retributive in nature.

Vincent does not attempt to show how the reactive norm may, or may not, be justified in light of desert theory more generally. She merely points out that the presence of the logical gaps identified earlier means the luck egalitarian cannot ignore this theory. Thus, as things presently stand, luck egalitarianism, despite its allure, seems under-motivated.

To sum up, Vincent’s article does two important things. First, it provides an illustration of the responsibility-liability gap and, second, it shows how this gap may be relevant to at least one area of normative ethics. I hope to explore more research on this alleged gap in future posts. For now, I’ll sign off.

Wednesday, January 18, 2012

Doping, Slippery Slopes and Moral Virtues



(Series Index)

Well, I’m still stuck on the enhancement-in-sports-and-education roundabout and probably will be until the end of January. So this post is, unfortunately, yet another addition to my ongoing series. This one is quite narrowly focused, looking at one argument from Chapter 10 of the following book:

Mike McNamee Sports, Virtues and Vices: Morality Plays (Routledge, 2008)

The book is actually pretty interesting. It makes the case for viewing elite sport as a kind of morality play: a forum in which the moral virtues are celebrated, and from which the general population can learn. It’s an idealistic view, but one with which those who complain about the grubby professionalism of modern sports are likely to agree.

I’m only going to zero in on a small part of the book’s overall thesis. The part in question comes from McNamee’s chapter on doping in elite sports. The chapter reviews some of the typical anti-doping arguments and dismisses them in relatively short order. I’m not too interested in this part of the chapter since I’ve covered such arguments elsewhere in this series. The chapter gets rather more interesting when it turns to consider one potentially novel (and, McNamee thinks, better) argument against doping: the slippery slope argument (SSA). It’s this argument that will be discussed here.

The remainder of this post is structured as follows: part one discusses SSAs in general; part two looks at McNamee’s SSA against doping; and part three looks at McNamee’s complaints about the vices of athletes who dope. Just note that although “doping” has a particular meaning in sport, one that may be thought distinct from “performance enhancement”, the terms are used interchangeably in what follows.


1. A Taxonomy of Slippery Slope Arguments
An SSA has a fairly standard form: it proposes that allowing something to be the case (in this instance, doping in sports) will lead to something else, usually undesirable, happening. And since this undesirable thing should not be allowed to happen, it follows that the first thing should not happen either. In other words, it proposes that there exists a slippery slope between two things (call them X and Y), and since the second of these is undesirable, the first shouldn’t be allowed. To put this more formally:


  • (1) If X is allowed to happen, then Y will happen.
  • (2) Y should not be allowed to happen.
  • (3) Therefore, X should not be allowed to happen.


There’s one important thing to note about this version of the argument: there can be more than one “slide” on the slippery slope before we reach the bottom. In the version given here, Y is directly connected to X and Y also lies at the bottom of the slippery slope. As such, the argument is suggesting that there is only one slide down to the bottom of the slippery slope. This need not be the case. There could be multiple stopping off points along the slope before we get to the bottom. All that matters is that the first slide — the one from the current status quo to X — leads, inexorably, to the bottom.

Now, I’ve had the opportunity to discuss SSAs before on the blog. In my series on John Corvino’s article (“The PIB Argument”) I looked at the use of SSAs in the same-sex marriage debate. There, I followed Corvino’s basic taxonomy and identified two forms such arguments could take: (i) the causal and (ii) the logical.

A causal SSA is one that proposes a causal link between X and Y. As an example, consider the following SSA: if you smoke one cigarette, you will want to smoke more cigarettes; if you smoke more cigarettes you will become addicted to nicotine and continue smoking for an extended period of time; and if you continue smoking for an extended period of time you dramatically increase your risk of lung cancer. So, since you should not wish to dramatically increase your risk of lung cancer, you should not smoke one cigarette. In this argument, the link between the points on the slippery slope is causal: one outcome, it is claimed, will naturally lead to another.

Contrast this with a logical SSA. In a logical SSA, the link between the points on the slippery slope is logical, not causal, in nature. This is most common in ethical SSAs. In an ethical SSA, the claim is usually that if we deem some practice to be morally acceptable, then we lose the principled basis upon which we object to other (more undesirable) practices. In other words, the logical barricade that currently exists between our principles and some undesirable practice is eroded. In the same-sex marriage debate, for instance, there are those who argue that in legalising same-sex marriage, we lose the principled basis on which we object to polygamy, incest or bestiality. This was discussed in the earlier series on the PIB argument.

Although I like the simplicity of the logical/causal division, McNamee adopts a slightly more complex taxonomy of SSAs. Relying initially on the work of Bernard Williams, he distinguishes between horrible result SSAs and arbitrary result SSAs. The distinction here depends on the kind of event or outcome that is thought to lie at the bottom of the slippery slope. Obviously enough, in a horrible result SSA, the outcome at the bottom of the slope is horrible, whereas in an arbitrary result SSA, the outcome is simply arbitrary. But that just raises the further question: what do we mean by “horrible” and what do we mean by “arbitrary”? Unfortunately, McNamee isn’t clear on this point, and since I haven’t read Williams’s earlier work, I’m not sure that I can clarify the matter all that much. My guess, however, is that a horrible result is one that is morally repugnant or objectionable, and that an arbitrary result is simply one that seems unconnected (either logically or causally) to the initial starting point.

In addition to dividing SSAs up into these two major kinds, McNamee (again borrowing from someone else) looks at three different ways in which the slide down the slope can be conceptualised. This leads to the following three sub-types of SSA:

1. Precedential SSAs: In a precedential SSA, the initial slide from the status quo to X, is thought to set a precedent for further slides down the slope. Imagine you are arguing with your teenage son. He wants to go to some nightclub, but you disapprove. He is persistent and eventually you relent, allowing him to go but only “just this once”. Unfortunately for you, this relenting sets a precedent to which your son can appeal in future cases. Before you know it, you are allowing him to go out whenever he asks, which is exactly what you didn’t want. This kind of SSA could also be referred to as a “thin end of the wedge”-SSA.

2. Sorites SSAs: In a Sorites SSA, the slide to the bottom of the slope is hastened by the conceptual ambiguity of some key term(s). Obviously, the allusion here is to the infamous Sorites paradox: if you remove grains of sand from a heap, one-by-one, at what point does the heap become a non-heap? Here, the problem is caused by the conceptual ambiguity of “heap”. As McNamee points out, conceptual ambiguity of this sort is often exploited by proponents of doping and performance enhancement. For example, proponents of enhancement often point to the fuzzy boundary that lies between “treatment” and “enhancement”. They then use most people’s acceptance of “treatment” to make the case for certain forms of “enhancement”.

3. Domino Effect SSAs: In a domino effect SSA, the slide to the bottom of the slope is causal in nature. One event leads to another, which leads to another, and so on. This corresponds pretty much exactly with my earlier description of a causal SSA, so I won’t give an example here.
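The Sorites dynamic behind sub-type 2 can be made concrete with a toy simulation. This is purely my own illustration: the predicate, the grain counts, and the cutoff are all invented, and the arbitrariness of the cutoff is precisely what the paradox trades on.

```python
# A toy illustration (my own, not McNamee's) of the Sorites dynamic:
# each single step looks harmless, yet iterating the steps carries us
# from a clear heap to a clear non-heap.

def looks_like_a_heap(grains, cutoff=1000):
    # Any sharp cutoff is arbitrary -- that is the point of the paradox.
    return grains >= cutoff

def find_boundary(start=10_000):
    """Remove grains one by one; report where the 'heap' judgment flips."""
    grains = start
    while grains > 0:
        if looks_like_a_heap(grains) and not looks_like_a_heap(grains - 1):
            return grains - 1  # a single grain "mattered" here
        grains -= 1
    return None

print(find_boundary())  # 999 -- but only because we forced a sharp cutoff
```

The pro-doping argument exploits the same structure: each step from “treatment” toward “enhancement” looks as harmless as removing one grain, and any line we draw along the way looks as arbitrary as the cutoff in the code.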

I’m not sure that McNamee’s more complex taxonomy brings with it any great advantages. Although the horrible/arbitrary distinction might be useful, the three sub-types listed above really only elaborate on the logical/causal division discussed earlier. Specifically, they just distinguish between two kinds of logical SSA: the precedential, in which endorsing one moral claim sets a precedent for endorsing further moral claims; and the Sorites, in which conceptual ambiguity means we lose faith in our more restrictive moral beliefs. The domino effect SSA is simply a redescription of the causal SSA.

Even though I’m somewhat sceptical about its utility, I’m still going to make use of McNamee’s taxonomy in what follows. This is for the obvious reason that he makes use of it in describing his argument, and since I want to explain his argument here, it’s easier if I follow suit.


2. The Arbitrary Result SSA against Doping in Sport
The section heading gives the game away but, just to state the obvious, having taxonomised SSAs, McNamee goes on to present his own arbitrary result SSA against the use of doping in sport. He does so in a slightly unusual way. He starts by developing a kind of Sorites SSA that is used by proponents of doping, and then he inverts it to support the anti-doping position.

Let’s look at the pro-doping SSA first. Of course, since the argument is being used to support something rather than object to it, it’s not really right to call it an SSA. The moniker “slippery slope” should probably only be applied in the negative case, not the positive. Nevertheless, with that concession to linguistic purity in mind, I’ll persevere in calling it the pro-doping SSA.

The pro-doping SSA discussed by McNamee relates to the ambiguity of the treatment/enhancement distinction. He uses a nice example to make his point:

Skeletal Reinforcement: Apparently (I’m not familiar with the sport) American Footballers can suffer from pretty serious injuries (broken bones etc.). Oftentimes these injuries affect their quality of life in their retirement. Suppose someone, pointing to the prevalence of such injuries, suggested a remedy. Instead of relying on the fragile construction of the human body, all players would have their key skeletal structures reinforced by new materials such as Kevlar. This would prevent serious long-term health effects from accruing to the players.

McNamee seems to accept that most people would look favourably on this kind of intervention. Furthermore, its advocates could probably sneak it in under the guise of its being a “treatment” not an enhancement. After all, reinforcing the key skeletal structures is being done to prevent injuries. And preventing injuries is surely a respectable form of treatment. This is the essence of pro-doping SSA:


  • (4) If you accept the moral legitimacy of treating sports injuries, then you ought to accept the moral legitimacy of preventing sports injuries.
  • (5) If you accept the legitimacy of preventing sports injuries, then you ought to accept the legitimacy of skeletal reinforcement in American football.
  • (6) You do accept the legitimacy of treating sports injuries.
  • (7) Therefore, you ought to accept the legitimacy of skeletal reinforcement in American football.


I’ve compressed the logic here, but you get the drift. The top of the slope is represented by the proposition “the treatment of sports injuries is legitimate” and the slide is brought about by the conceptual ambiguity of “treatment”. Why shouldn’t preventative interventions be viewed as a type of treatment?

The problem, at least from the perspective of the anti-doper, is that if you accept the moral legitimacy of skeletal reinforcement in this instance, you begin the inexorable slide to a pro-enhancement position. This leads to the inversion of the above reasoning. If you accept skeletal reinforcement for football players, what would you say about artificial toe extensions and arch reinforcement for sprinters? Surely, you’d have to accept them too, particularly if it was pointed out that this could reduce the effects of friction on the sprinters’ limbs. But these interventions are likely to have enhancing effects too. Before you know it, you have landed in the landscape of enhancement.

Two points here. First, this Sorites-style SSA might not be valid. Just because there is no sharp difference between preventative interventions and treatment does not mean that there is no difference between enhancing interventions and treatment. Indeed, that’s the whole point of the Sorites paradox in the first place: a single grain of sand is very definitely distinct from a heap of sand, even if the dividing line between being a heap and a non-heap is fuzzy.

Second, and more importantly, the pro-doping faction can easily ask the anti-dopers to point to the horrible result that lies at the bottom of the slope — the one that means we shouldn’t start the slide. In other words, they can issue the following challenge: If skeletal reinforcements for American football players aren’t troubling, then maybe we’re wrong to think that toe-extensions are troubling? And if we’re wrong about that, maybe we’re wrong about most forms of enhancement too?

Alive to this kind of pro-enhancement argument, McNamee suggests that the opponent of doping take a different tack. Instead of assuming that there must be a horrible result at the bottom of the slope, they can appeal to the possibility of arbitrary results: ones that seem to lose all connection to moral principle. In other words, they can say that by accepting the first slide down the slope, we will create conditions in which moral arbitrariness can thrive, i.e. conditions in which our decisions about which kinds of practices are permissible and which are impermissible seem to lack any firm grounding in reason.

To spell out the argument formally:


  • (8) If we accept X (some type of enhancement or quasi-treatment like skeletal reinforcement), then we will create conditions in which moral arbitrariness can thrive.
  • (9) We should not wish to create the conditions in which moral arbitrariness can thrive.
  • (10) Therefore, we should not accept X.


This is straightforward enough, but we need some reason for accepting (9). Why is it that moral arbitrariness is so problematic? McNamee has the following to say:

Let us take one brief example. There are aspects of our lives where, as a widely shared intuition, we might think that in the absence of good reasons we ought not to discriminate among people arbitrarily. Healthcare might be considered precisely one such case. Given the ever-increasing demand for public healthcare services and products it could be argued that access to them ought typically to be governed by publicly disputable criteria such as clinical need, or potential benefit, as opposed to choices of an arbitrary or subjective nature. (Sports, Virtues and Vices, pp. 186-187)

This is interesting. I think it appeals somewhat to the Rawlsian notion of public reason, i.e. the notion that practices need to be accepted for reasons that are accessible to all members of the public, not because of chance or whim. I probably agree with this, but then I’m forced to question premise (8): why would endorsing enhancement lead to moral arbitrariness? Why would we lose the ability to assess different forms of enhancement with publicly accessible reasons simply by accepting one particular form?

McNamee says the following:

Nothing in the pro-doping, or more broadly the pro-enhancement, position seems to allow for such objective dispute let alone prioritisation…In the absence of a defensible telos, over and above the mindless mantra of more medals, more glory (the narrowly conceived citius, altius, fortius), of clearly and substantively specified ends (beyond the banner of unrestrained “enhancement”), elite athletes, their coaches and their sports medical back-up teams alike ought to resist the potentially open-ended transformations of human nature and potentialities.

I don’t quite know what to make of this. On the one hand, McNamee demands that decisions about the permissibility of enhancement be open to public scrutiny, but says that this cannot be done until proponents of enhancement supply some clear, morally defensible telos. At the same time, he acknowledges that there is a kind of telos in sight (more medals, more glory, unrestrained enhancement), but dismisses this on, I hate to say it, arbitrary grounds. That is to say, he just seems to assume, without clear argument, that the very idea of enhancing and transforming human potentialities is not “morally defensible”.

Surely he must have a better reason to object to it than that?



3. The Vices of the Doper
Perhaps he does, but it is one that is somewhat distant from the slippery slope argument we have been discussing up till now. As I said at the outset, McNamee’s book defends the idea that sporting contests are a kind of morality play: a forum in which the moral virtues are celebrated, and the vices condemned. Thus it’s no real surprise to find that one of the main reasons he objects to doping (and enhancement) is that dopers exhibit certain moral vices. McNamee points to two vices in particular: pleonexia (injustice) and shamelessness. Let’s discuss these briefly.

The Vice of Pleonexia: To be virtuous, a person must be just. And to be just they must be willing to give others their due and accept what is due to them. The problem with the doper is that they are not willing to accept what is due to them: they demand more and more, and they shut out others in the process, denying them their due as well. So goes McNamee’s argument. The problem with this argument, though, is that it is forced to make certain questionable assumptions about the desert relation, i.e. the relation that determines whether somebody deserves a particular kind of punishment or reward. McNamee assumes that commitment, talent and dedication form the basis of the desert relation in sport and that those who engage in doping lack these traits. I think this is arguable on two grounds. First, on the ground that dopers may well display some kind of talent, commitment and dedication (is doping not evidence of dedication?); and second, on the ground that there may be some trait possessed by the doper that ought to be included in the desert relation.


The Vice of Shamelessness: Shame is tied to feelings of guilt or personal failure. The person who experiences shame will feel both that they have let themselves down and that they have let others down too. As a result, they will seek to hide themselves away. McNamee doesn’t think that we should encourage athletes to feel shame; rather, he thinks we should encourage them to have the capacity to feel shame. The problem with dopers is that they may lack this capacity. McNamee uses the example of Charlie Francis to illustrate what he means. Francis said that if Ben Johnson had simply stuck to schedule, knowing when to come off certain drugs and doses, he would not have been caught. This, as McNamee notes, is to treat doping as “a mere problem of timing, not one of ethics”. In contrast to this, McNamee thinks dopers should feel ashamed of any successes they gain by doping. They have let themselves and others down. There are many problems with this argument. The main one is that the belief that the doper should feel shame is usually driven by the belief that their successes are fraudulent or unfair to others. But this is exactly what the pro-doper would argue against. They would say that doped-up performances are neither necessarily fraudulent nor necessarily unfair.

This brings us to the end of McNamee’s arguments against doping. While there is some interesting material in there, I think McNamee has a long way to go before what he says is at all persuasive. His anti-doping SSA needs further work before the connection between enhancement and moral arbitrariness is shown; his claim that the doper exhibits the vice of injustice needs to clarify the nature of the desert relation that is being invoked; and his belief that dopers should feel shame is driven by assumptions that the pro-doper has no reason to accept.