Wednesday, December 17, 2014

Meaning, Value and the Collective Afterlife: Must others survive for our lives to have meaning?



Samuel Scheffler made quite a splash last year with his book Death and the Afterlife. It received impressive recommendations and reviews from numerous commentators, and was featured in a variety of popular outlets, including the Boston Review and the New York Review of Books. I’m a bit late to the party, having only got around to reading it in the past week, but I think I can see what all the fuss was about.

The book really does offer some interesting, and novel, insights into what it takes to live a meaningful life. The most interesting of those insights comes from Scheffler’s defence of the collective afterlife dependency thesis. According to this thesis, much of what makes our lives valuable is dependent on the existence of a collective afterlife. This collective afterlife is not, according to Scheffler, to be understood in supernatural or religious terms; it is to be understood in secular and naturalistic terms. It is the continued existence of beings like us in an environment which is roughly equivalent to the one in which we now live.

Scheffler is quite careful in his development of this thesis. He distinguishes three different versions of it, and clarifies (to some extent) exactly what needs to be preserved in this collective afterlife. I’m going to skip over some of this nuance in what follows. I’m just going to look at Scheffler’s defence of the unrefined version of the dependency thesis, as well as some criticisms of that idea. In particular, I’m going to look at Mark Johnston’s criticism, which claims that if Scheffler is right, then life is nothing more than a Ponzi scheme: it needs an infinite stream of future generations to “pay in” in order to make life meaningful for the current generation.


1. What is this “collective afterlife” you speak of?
Before looking at the argument proper, we need to clarify the central thesis. As I just said, it all hinges on the notion of a collective afterlife. Scheffler alludes to this idea several times in the book. He knows that his use of that term is contentious — “afterlife” brings with it a rich set of religious connotations — but that’s part of the fun. Here is a quick definition, based on my own reading between the lines:

Collective Afterlife: The continued existence of human-like beings in conditions roughly equivalent to those in which you now live, after your death.

A couple of points about this definition. First, note how it refers to “human-like beings”, not humans. This is my addition. Throughout the book Scheffler talks as if (or implies that) his imagined collective afterlife involves the existence of human beings, but I take it that it is not absolutely essential for the beings that exist in the collective afterlife to be human (i.e. genetic members of Homo sapiens). Human-like beings, with similar properties of personhood and similar goals and aspirations, would be sufficient. That brings us to the other part of the definition, which is also mine, and which claims that they must live in conditions roughly equivalent to those in which we now live. Exactly what conditions future generations must live in turns out to be a point of contention between Scheffler and his critics. It’s pretty clear that, in order to confer meaning on our lives, future generations must share at least some of our values, aspirations and needs, and must not live in a state of abject immiseration and deprivation, but they probably don’t need to have lives that are exactly the same as ours. I’ll return to this later when looking at Johnston’s criticism. Finally, note how the definition makes no appeal to the continued existence of humans who are particularly close to us (i.e. friends and family). This is important because one of the things Scheffler points out in his book is that, in order to confer value on our lives, the lives of future beings need not bear a close relation to us.

So much for that. What role does the collective afterlife play in our lives? Scheffler claims that it plays quite a big role. He claims that much of what we value in life (our plans, hopes, projects, activities and so on) depends for its value on the existence of a collective afterlife:

…our conception of a human life…relies on an implicit understanding of such a life as itself occupying a place in an ongoing human history, in a temporally extended chain of lives and generations. 
(Scheffler 2013, p. 43)

This is the dependency thesis:

The Collective Afterlife Dependency Thesis (CADT): The existence of a collective afterlife is an important condition for living a valuable life; without a collective afterlife our present lives would be denuded of much of their value.

To be clear, this is my definition of the thesis, not Scheffler’s. He is much more careful in his discussion. He distinguishes between attitudinal, evaluative and justificatory versions of the thesis. These distinctions turn on whether the collective afterlife is something that merely affects our attitudes to our lives, whether it actually affects what is valuable about our lives, and whether the actual (as opposed to believed) existence of the afterlife is essential. I’m going to ignore these distinctions for now. You’ll also note that my definition refers to the collective afterlife as an “important” condition for value in life. I use that term because I don’t think Scheffler intends for it to be understood as either a necessary or a sufficient condition; but he does clearly think it has a significant impact on the amount of value in our lives. Hence “important” seems like the most appropriate descriptor.


2. Scheffler’s argument for the CADT
Scheffler doesn’t present a formal argument for the CADT in his book. Instead, he presents a series of thought experiments and reflections upon those thought experiments. As always, I would like to recover as much formal structure from these reflections as possible. So in what follows I’ll try to show how those thought experiments can be used as part of a semi-formal defence of the CADT. There are two thought experiments that are particularly important for this purpose.

The first thought experiment is:

Doomsday Thought Experiment: Suppose that you will live a long, normal human life, but that 30 days after your death, all human life will be destroyed in some catastrophic event (for example, an asteroid collision). Suppose, further, that you know this catastrophic event will take place as you are living your life. What effect would this have?

Scheffler suggests, in a long and thoughtful analysis, that it would have a pretty devastating effect on your life. It would rob many of your projects and activities of their value, and would probably induce a significant amount of despair, grief and existential hand-wringing. He further contends that it is not really plausible to react to the scenario with indifference. As he puts it:

[F]ew of us would be likely to say… “So what? Since it won’t happen until thirty days after my death, it isn’t of any importance to me. I won’t be around to experience it, and so it doesn’t matter to me in the slightest.” 
(Scheffler 2013, p. 19)

Of course, it’s always dangerous when philosophers play these intuition-mongering games. There may be some people who do react with utter indifference (think Kirsten Dunst in Melancholia - if you think life is pretty pointless anyway you might not be too bothered). But I still sympathise with what Scheffler is saying. I certainly don’t think that I would react with utter indifference. The possibility of the doomsday scenario after my death would probably change my attitude to life.

Scheffler thinks these likely reactions tell us something interesting about what it takes to live a valuable life. In particular, he thinks they suggest that there is a strong nonexperiential aspect to what makes life worth living. In the doomsday scenario, your life and experiences are unaffected — you do not die prematurely — but nevertheless the value of your life is, somehow, affected. He also thinks that these reactions suggest that there is a significant conservatism to what makes our lives valuable. In other words, we want the things we currently value and care about to continue to exist after we die. Combined, these two implications provide some support for the CADT. They point to the need for the continued existence of beings like us, living lives like ours, in order for our lives to have as much value as we seem to think they do.



One problem with the doomsday thought experiment, however, is that it conflates the continued existence of beings who are close to us with the continued existence of beings whose lives are like our own. What do I mean by this? I mean it could be, for all the doomsday thought experiment suggests, that what induces all the despair and existential angst is the fact that our children, friends and family, or any other beings close to us, will die. Although Scheffler thinks the continued existence of such beings is an important part of what confers value on our lives, he thinks that their existence alone does not do justice to the CADT. This leads to the second thought experiment:

Collective Infertility Thought Experiment: Suppose that the entire human race is infertile. In other words, the current generation of humans is the last generation of humans that will ever live. (A situation depicted in the novel and film The Children of Men). What effect would that have on our lives?

Again, Scheffler suggests that it would have a pretty devastating effect. It would induce a significant amount of despair and existential angst. Indeed, this is something that the Children of Men tries to illustrate in some rich, imaginative detail. We are shown a world in which anarchy and anomie reign supreme, and in which only an extremely authoritarian government can keep control. In the book, it is said to give rise to ennui universel, and that only those who “lack imagination” or who are in the grip of an extreme egotism are immune from the negative effects.

In these respects, the collective infertility scenario is similar to the doomsday one. But there are some crucial differences. As Scheffler points out, the despair in the collective infertility scenario is not just caused by the prospective deaths of ourselves and people we care about. In fact, we already know that everyone we know and love will someday die and yet this, in and of itself, does not induce the same degree of existential angst. The despair in the collective infertility scenario is caused by the fact that everyone — including those with whom we have no special or personal connection — is gradually going extinct. The fact that we feel despair at this generalised extinction tells us something interesting. It tells us that there is a strong altruistic element to the role of the collective afterlife in our own lives. We care about the general fate of humankind, not just the fate of people we know and love. Once again, this seems to support the CADT.



To summarise all this in a simple formal argument, we could construct the following:


  • (1) If our intuitive reaction to certain thought experiments suggests that the continued existence of human-like beings in conditions roughly equivalent to those in which we now live is an important condition for meaning and value in our lives, then we are warranted in accepting the CADT.

  • (2) Our intuitive reactions to the Doomsday Thought Experiment and the Collective Infertility Thought Experiment suggest that the continued existence of human-like beings in conditions roughly equivalent to those in which we now live is an important condition for meaning and value in our lives.

  • (3) Therefore, we are warranted in accepting the CADT.



You might think it’s silly to spell out the argument in this level of detail. But one thing I like about this semi-formal reconstruction is that it renders transparent the type of inference that is taking place. Scheffler is defending the CADT on the basis of our reactions to certain thought experiments. Though this is a common methodology in philosophy, there are no doubt people who will worry about inferring such a significant thesis from such a limited set of reflections. All I can say to such people is that Scheffler’s reflections are much more detailed than I am making them out to be in this post, and even if his argument is ultimately lacking, it provides much food for thought.


3. The Ponzi Scheme Problem
There are several criticisms and commentaries on Scheffler’s argument. Some of them are modest in nature. For example, Susan Wolf — in a response contained within the original book — argues that much of what we value (e.g. certain intellectual and artistic pursuits) could still retain value in the face of the Doomsday scenario. This is modest insofar as it doesn’t completely deny that the collective afterlife plays a role in conferring value on our present lives. But there are also critics who take issue with the CADT as a whole. One of them is Mark Johnston who, in his review of the book, argues that if we take the CADT seriously, life ends up being akin to a Ponzi Scheme. And since he feels that this is implausible, he rejects the CADT.

Let’s try to make sense of this criticism. As best I can tell, it works as a reductio of the CADT:


  • (4) If the CADT is true, then the possibility of our lives being full of value and meaning is dependent on the existence of future generations living lives full of value and meaning.

  • (5) If the possibility of our lives being full of value and meaning depends on the existence of future generations living lives full of value and meaning, then life turns out to be a Ponzi scheme: we need an infinite stream of future generations to pay into the system in order to make our lives meaningful.

  • (6) But we are not going to have an infinite stream of future generations paying into the system.

  • (7) Therefore, our current lives are denuded of much of their value and meaning.

  • (8) It is implausible to think that our current lives are denuded of much of their value and meaning.

  • (9) Therefore, the CADT is implausible.


Johnston’s argument appeals to the “rough equivalence” concept that I introduced earlier on. As you’ll recall, I said that in order for the collective afterlife to confer value on our present lives, it cannot be the case that future generations live in a state of abject immiseration and deprivation, and that they must live lives that are roughly equivalent to those that we now live. Johnston is taking this a step further and arguing that their future lives must be very similar to our own, at least with respect to the amount of value and meaning in them. He then combines this with a transference principle for the conferral of value:

Transference Principle: If human generation n (Gn) lacks value and meaning in their lives, then so too does Gn-1, and Gn-2, all the way back to G1.

In other words, the lack of meaning and value in one future generation transfers back to the present generation. As Antti Kauppinen puts it, Johnston here seems to be endorsing a kind of Recursive Afterlifism. The question is whether this is itself a plausible construal of the CADT.

Kauppinen thinks that it is not, and I have similar feelings. While I appreciate the metaphor of the Ponzi scheme, I have a hard time accepting the transference principle upon which Johnston’s criticism is based. Kauppinen suggests in his commentary that future generations need not match us in terms of value and meaning in order for our activities and projects to have value conferred upon them by the existence of those future generations. For example, finding a cure for cancer in the present generation would be a valuable activity if it benefitted some future generations (e.g. 10 future generations). It would not be robbed of its value simply because there won’t be an infinite stream of happy future generations. What we end up with is a modified version of the transference principle. Instead of the amount of value and meaning in Gn being entirely determined by the amount of value in Gn+1, we have a situation in which the amount of value and meaning in Gn is partly determined by the amount of value and meaning in Gn+1. This more modest form of collective afterlifism has some disturbing implications. It suggests that life for the final generation of humans will indeed be devoid of much meaning and value, and that things won’t be much better for the second-to-last generation. But this is entirely consistent with the CADT. It simply suggests that the impact of the eventual demise of the human race attenuates as we go back in time. I find that to be a plausible construal of the CADT.
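To make the contrast between Johnston’s strict transference principle and the attenuated version concrete, here is a small illustrative sketch (a toy model of my own devising; neither Johnston nor Kauppinen puts it this way). Each generation has some intrinsic value, plus a discounted share of its successor’s value:

```python
# Toy model (my own, purely illustrative): the value of generation Gn is
# only *partly* determined by the value of its successor Gn+1, with the
# dependence controlled by a "transfer" weight between 0 and 1.

def generation_values(num_generations, intrinsic=1.0, transfer=0.5):
    """Return the value of each generation, computed backwards from the
    final one. The final generation gets no contribution from successors,
    so its value is diminished but (if transfer < 1) not zero; earlier
    generations add a discounted share of the next generation's value."""
    values = [0.0] * num_generations
    values[-1] = intrinsic * (1 - transfer)
    for n in range(num_generations - 2, -1, -1):
        values[n] = intrinsic * (1 - transfer) + transfer * values[n + 1]
    return values
```

Setting transfer to 1.0 recovers Johnston’s strict recursion: without an endless stream of successors, every generation’s value collapses to zero, which is the Ponzi scheme result. With transfer below 1, only the last few generations are significantly affected, and the impact of the eventual demise attenuates as we go back in time.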


4. Conclusion
I’m going to leave it there. To quickly recap, Scheffler’s book argues that the amount of value and meaning in our lives is highly dependent upon the existence of a collective afterlife. He defends this by analysing two thought experiments, in one of which the human race goes extinct 30 days after your death, and in the other of which the human race is collectively infertile and dying out. One thing I have not covered in this post is the role of our deaths in conferring meaning on our lives. This is another, probably more controversial, aspect of Scheffler’s book. He thinks that our deaths are important for conferring meaning on our lives, and that the collective afterlife is more significant than our (individual) continued existence. I hope to cover that argument in more detail another time.

Tuesday, December 16, 2014

Should we criminalise robotic rape and robotic child sexual abuse?


I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is going to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Hence why I wrote the article.

For the detail, you’ll have to read the original paper (available here, here, and here). But in an effort to entice you to do that, I thought I would use this post to provide a brief overview.


1. What is robotic rape and robotic child sexual abuse?
First things first, it is worth clarifying the phenomena of interest. I’m sure people have a general sense of what a sex robot is, and maybe some vaguer sense of what an act of robotic rape or child sexual abuse might be, but it’s worth being as clear as possible at the outset in order to head off potential sources of confusion. So let’s start with the notion of a sex robot. In the article, I define a sex robot as any artifact that is used for the purposes of sexual stimulation and/or release with the following three properties: (i) a human-like form; (ii) the ability to move; and (iii) some degree of artificial intelligence (i.e. an ability to interpret, process and act upon information from its environment).

As you can see from this definition, my focus is on human-like robots not on robots with more exotic properties, although I briefly allude to those possibilities in the article. This is because my argument appeals to the social meaning that might attach to the performance of sexual acts with human-like representations. For me, the degree of human-likeness is a function of the three properties included in my definition, i.e. the more human-like in appearance, movement and intelligence, the more human-like the robot is deemed to be. For my argument to work (if it works at all) the robots in question must cross some minimum threshold of human-likeness, but I don’t know where that threshold lies.

So much for sex robots. What about acts of robotic rape and robotic child sexual abuse? Acts of robotic rape are tricky to define given that legal definitions of rape differ across jurisdictions. I follow the definition in England and Wales. Thus, I view rape as being non-consensual sexual intercourse performed in the absence of a reasonable belief in consent. I then define robotic rape as sexual intercourse performed with a robot that mimics signals of non-consent, where it would be unreasonable for the performer of those acts to deny that the robot was mimicking signals of non-consent. I know there is some debate as to what counts as a signal of non-consent. I try to sidestep this debate in the article by focusing on what I call “paradigmatic signals of non-consent”. I accept that the notion of a paradigmatic signal of non-consent might be controversial. Acts of robotic child sexual abuse are easier to define. They arise whenever sexual acts are performed with robots that look and act like children.

Throughout the article, I distinguish robotic acts from virtual acts. The former are performed by a human actor with a real, physical robot partner. The latter are performed in a virtual world via an avatar or virtual character. There are, however, borderline cases, e.g. virtual acts performed using immersive VR technology with haptic sensors (such as those created by the Dutch company Kiiroo). I am unsure about the criminalisation argument in such cases, for reasons that will become clearer in a moment.


2. What is the prima facie argument for criminalisation?
With that definitional work out of the way, I can develop the main argument. That argument proceeds in a particular order. It starts by focusing on the purely robotic case, i.e. the case in which the robotic acts have no extrinsic effects on others. It argues that even in such a case, there may be grounds for criminalisation. That gives me a prima facie argument for criminalisation. After that, I focus on extrinsic effects, and suggest that they are unlikely to defeat this prima facie argument. Let’s see how all this goes.

The prima facie argument works like this:


  • (1) It can be a proper object of the criminal law to regulate conduct that is morally wrong, even if such conduct has no extrinsically harmful effects on others (the moralistic premise).

  • (2) Purely robotic acts of rape and child sexual abuse fall within the class of morally wrong but extrinsically harmless conduct that it can be a proper object of the criminal law to regulate (the wrongness premise).

  • (3) Therefore, it can be a proper object of the criminal law to regulate purely robotic acts of rape and child sexual abuse.


I don’t really defend the first premise of the argument in the article. Instead, I appeal to the work of others who have. For example, Steven Wall has defended a version of legal moralism that argues that actions involving harm to the performer’s moral character can, sometimes, be criminalised; likewise, Antony Duff has argued that certain public wrongs are apt for criminalisation even when they do not involve harm to others. I use both accounts in my article and suggest that if I can show that purely robotic acts of rape and child sexual abuse involve harm to moral character or fall within Duff’s class of public wrongs, then I can make the prima facie case for criminalisation.

This first premise is likely to be difficult for many, particularly those with a classic liberal or Millian approach to criminalisation. They will argue that only harm to others renders something apt for criminalisation. I sympathise with this view (which is why I am cagey about the argument as a whole) but, again, I appeal to others who have tried to argue against it by showing that a more expansive form of legal moralism need not severely limit individual liberty, and that it may be very difficult to hold the liberal view consistently. I also try to soften the blow by highlighting different possible forms of criminalisation at the end of the article (e.g. incarceration need not be the penalty). Still, even then I accept that my argument may simply lead some to question the moralistic principles of criminalisation upon which I rely.

Premise two is where I focus most of my attention in the article. I defend it in two ways, each way corresponding to a different version of legal moralism. First, I argue that purely robotic acts of rape and child sexual abuse may involve harm to moral character. This is either on the grounds that the performance of such acts encourages/requires the expression of a desire for the real-world equivalents, or on the grounds that the performance requires a troubling insensitivity to the social meaning of those acts. This is consistent with Wall’s version of moralism. Second, I build upon this by arguing that the insensitivity to social meaning involved in such acts (particularly acts of robotic rape) would allow for them to fall within Duff’s class of public wrongs. The idea being that in a culture that has condoned or belittled the problem of sexual assault, an insensitivity to the meaning of those acts demands some degree of public accountability.

In defending premise (2) I rely heavily on work that has been done on the ethics of virtual acts and fictional representations, particularly the work of Stephanie Patridge. This reliance raises an obvious objection. There are those — like Gert Gooskens — who argue that our moral characters are not directly implicated in the performance of virtual acts because there is some distance between our true self and our virtual self. I respond to Gooskens by pointing out that the distance is lessened in the case of robotic acts. I rely on some work in moral psychology to support this view.

That is my defence of the prima facie argument.




3. Can the prima facie argument be defeated?
But it is important to realise how modest that argument really is. It only claims that robotic rape and robotic child sexual abuse are apt for criminalisation all else being equal. It does not claim that they are apt for criminalisation all things considered. The argument is vulnerable to defeaters. I consider two general classes of defeaters in the final sections of the paper.

The first class of defeaters is concerned with the possible effects of robotic rape and robotic child sexual abuse on the real-world equivalents of those acts. What if having sex with a child-bot greatly reduced the real-world incidence of child sexual abuse? Surely then we would be better off permitting or facilitating such acts, even if they do satisfy the requirements of Duff’s or Wall’s versions of moralism? This sounds right to me, but of course it is an empirical question and we have no real evidence as of yet. All we can do for now is speculate. In the article, I speculate about three possibilities. Robotic rape and robotic child sexual abuse may: (a) significantly increase the incidence of real-world equivalents; (b) significantly reduce the incidence of real-world equivalents; or (c) have an ambiguous effect. I argue that if (a) is true, the prima facie argument is strengthened (not defeated); if (b) is true, the prima facie argument is defeated; and if (c) is true then it is either unaffected or possibly strengthened (if we accept a recent argument from Leslie Green about how we should use the criminal law to improve social morality).

The second class of defeaters is concerned with the costs of an actual criminalisation policy. How would it be policed and enforced? Would this not involve wasteful expenditure and serious encroachments on individual liberty and privacy? Would it not be overkill to throw the perpetrators of such acts in jail or subject them to other forms of criminal punishment? I consider all these possibilities in the article and suggest various ways in which the costs may not be as significant as we first think.




So that’s it. That is my argument. There is much more detail and qualification in the full version. Just to be clear, once again, I am not advocating criminalisation. I am genuinely unsure about how we should approach this phenomenon. But I think it is an issue worth debating and I wanted to provide a (provocative) starting point for that debate.

Sunday, December 7, 2014

Brain-based Lie Detection and the Mereological Fallacy




Some people think that neuroscience will have a significant impact on the law. Some people are more sceptical. A recent book by Michael Pardo and Dennis Patterson — Minds, Brains and Law: The Conceptual Foundations of Law and Neuroscience — belongs to the sceptical camp. In the book, Pardo and Patterson make a passionate plea for conceptual clarity when it comes to the interpretation of neuroscientific evidence and its potential application in the law. They suggest that most neurolaw hype stems from conceptual confusion. They want to throw some philosophical cold water on the proponents of this hype.

In many ways, I am sympathetic to their aims. I too am keen to downplay the neurolaw hype. Once upon a time, I wrote a thesis about criminal responsibility and advances in neuroscience. Half-way through that thesis, I realised that few, if any, of the supposedly revolutionary impacts of neuroscience on the law were all that revolutionary. Most were simply rehashed arguments about free will and responsibility, dressed up in neuroscientific garb, but which had been around for millennia. I also agree with the authors that there has been much misunderstanding and philosophical naivety on display.

That said, one area of neurolaw that I’m slightly more bullish about is the potential use of brain-based lie detection. But let me clarify. I’m not bullish about the use of “lie detection” per se, but rather EEG-based recognition detection tests or concealed information tests. I’ve written about them many times. Initially I doubted their practical importance and worried about their possibly mystifying effects on legal practice. But, more recently, I’ve come around to the possibility that they may not be all that bad.

Anyway, I’m participating in a conference next week about Pardo and Patterson’s book and so I thought I should really take a look at what they have to say about brain-based lie detection. That’s what this post is about. It’s going to be critical and will argue that Pardo and Patterson’s scepticism about EEG-based tests is misplaced. There are several reasons for this. One of the main ones is that they focus too much on fMRI lie detection, and not enough on the EEG-based alternatives; another is that they fail to really engage with the best current scientific work being done on the EEG tests. The result is that their central philosophical critique of these methods seems to lack purchase, at least when it comes to this particular class of tests.

But I’m getting ahead of myself. To develop this critique, I first need to review the basic techniques of brain-based lie detection and to summarise Pardo and Patterson’s main argument (the “Mereological Fallacy”-argument). Only then can I move on to my own critical musings.


1. What kinds of technologies are we talking about?
When debating the merits of brain-based lie detection techniques, it’s important to distinguish between two distinct phenomena: (i) the scanning technology and (ii) the testing protocol. The scanning technology is what provides us with data about brain activity. There are currently two main technologies in use in this field. Functional magnetic resonance imaging (fMRI) is used to track variations in the flow of oxygenated blood across different brain regions. This is typically thought to be a good proxy measure for underlying brain activity. Electroencephalography (EEG) tracks variations in electrical activity across the scalp. This too is thought to be a good measure of underlying brain activity, though the measure is cruder than that provided by fMRI (by “cruder” I mean less capable of being localised to a specific sub-region of the brain).

The testing protocol is how the data provided by the scanning technology is used to find out something interesting about the test subject. In the classic control question test (CQT) the data is used to make inferences as to whether a test subject is lying or being deceitful. This testing protocol involves asking the test subject a series of questions, some of which are relevant to a particular incident (e.g. a hypothetical or real crime), some of which are irrelevant, and some of which are emotionally salient and similar to the relevant questions. The latter are known as “control” questions. The idea behind the CQT is that the pattern of brain activity recorded from those who lie in response to relevant questions will be different from the pattern of activity recorded from those who do not. In this way, the test can help us to separate the deceptive from the honest.

This is to be contrasted with the concealed information test (CIT), which doesn’t try to assess whether a test subject is being deceptive or not. Instead, it tries to assess whether they, or more correctly their brain, recognises certain information. The typical CIT involves presenting a test subject with various stimuli (e.g. pictures or words). These stimuli are either connected to a particular incident (“probes”), not connected to a particular incident but similar to those that are (“targets”), or irrelevant to the incident (“irrelevants”). The subject will usually be asked to perform some task to ensure that they are paying attention to the stimuli (e.g. pressing a button or answering a question). The idea behind the CIT is that certain recorded patterns of activity (data signals) are reliably correlated with the recognition of the probe stimuli. In this way, the test can be used to separate those who recognise certain information from those who do not. Since the information in question will usually be tied to a crime scene, the test is sometimes referred to as the guilty knowledge test. But this name is unfortunate and should be avoided. The test does not prove guilt or even knowledge. At best, it proves recognition of information. Further inferences must be made in order to prove that a suspect has guilty knowledge. Indeed, calling it the “concealed” information test is not great either, since the suspect may or may not be “concealing” the information in question. For these reasons, I tend to prefer calling it something like a memory detection test or, better, a recognition detection test, but concealed information test is the norm within the literature so I’ll stick with that.
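To make the structure of the protocol concrete, here is a minimal Python sketch of how a single CIT block might be assembled. Every stimulus, ratio and function name here is invented for illustration; real protocols specify these parameters far more carefully:

```python
import random

# Hypothetical stimulus sets for a single CIT block: all items invented.
PROBES = ["red backpack"]      # crime-relevant detail only the culprit should recognise
TARGETS = ["blue satchel"]     # designated items requiring a special button-press
IRRELEVANTS = ["green duffel", "black briefcase", "grey holdall", "brown suitcase"]

def build_cit_sequence(n_trials=60, seed=0):
    """Build a randomised trial list of (stimulus, category) pairs.

    Probes and targets are kept rare (roughly 1 in 6 trials each), so a
    recognised probe is both rare and meaningful -- the conditions under
    which a P300 response is expected.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        r = rng.random()
        if r < 1 / 6:
            trials.append((rng.choice(PROBES), "probe"))
        elif r < 2 / 6:
            trials.append((rng.choice(TARGETS), "target"))
        else:
            trials.append((rng.choice(IRRELEVANTS), "irrelevant"))
    return trials

seq = build_cit_sequence()
counts = {}
for _, cat in seq:
    counts[cat] = counts.get(cat, 0) + 1
print(counts)  # counts per category
```

The rarity of probes relative to irrelevants is doing the real work here: it is what makes a recognition response to the probe stand out against the baseline.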

As I said above, scanning technologies and testing protocols are distinct. At present, it happens to be the case that EEGs are used to provide the basis for a CIT, and that fMRIs are used to provide the basis for a CQT. But this is only because of present limitations in what we can infer from the data provided by those scans. It is possible that fMRI data could provide the basis for a CIT; and it is possible that EEG data could provide the basis for a CQT. In fact, there are already people investigating the possibility of an fMRI-based CIT.

All that said, the technology I am most interested in, and the one that I will focus on for the remainder of this post, is the P300 CIT. This is an EEG-based technology. The P300 is a particular kind of brainwave (an “event-related potential”) that can be detected by the EEG. The P300 is typically detected when a subject views a rare and meaningful (i.e. recognised) stimulus in a sequence of other stimuli. As such, it is thought to provide a promising basis for a CIT. I won’t go into any great depth about the empirical evidence for this technique, though you can read about it in some of my papers, as well as in this review article from Rosenfeld et al 2013. I’m avoiding this because Pardo and Patterson’s criticisms of these technologies are largely conceptual in nature.
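For illustration only, here is a toy version of the kind of bootstrapped amplitude comparison used in the P300 CIT literature. The data are synthetic and the method is deliberately simplified (real analyses work on filtered, baseline-corrected ERP epochs); the point is just that the inference is statistical: probe trials versus irrelevant trials.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_diff(probe_amps, irrel_amps, n_boot=2000, seed=1):
    """Estimate how often resampled probe amplitudes exceed irrelevant ones.

    A crude stand-in for the bootstrapped amplitude-difference methods in
    the P300 CIT literature: if probe trials reliably show a larger
    positive deflection than irrelevants, recognition is inferred.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        p = mean(rng.choices(probe_amps, k=len(probe_amps)))
        i = mean(rng.choices(irrel_amps, k=len(irrel_amps)))
        if p > i:
            wins += 1
    return wins / n_boot

# Synthetic single-trial amplitudes (microvolts) in a post-stimulus window.
probe = [8.1, 9.5, 7.2, 10.3, 8.8, 9.1]   # larger deflection: "recognised"
irrelevant = [3.2, 2.8, 4.1, 3.5, 2.9, 3.7]
print(bootstrap_diff(probe, irrelevant))   # close to 1.0 -> infer recognition
```

Note what the output licenses: an inference that this brain responded differently to the probe, nothing more. The further steps to "knowledge" or "guilt" are exactly the inferences discussed below.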

Let’s turn to that critique now.


2. The P300 and the Mereological Fallacy
Before I get into the meat of Pardo and Patterson’s argument, I need to offer some warnings to the reader. Although the authors do lump the P300 CIT together with the fMRI CQT in the relevant chapter of their book, it is pretty clear that their major focus is on the latter, not the former. This is clear both from the time they spend covering the evidence in relation to the fMRI tests, and from their focus on the concept of lying in their critique of these tests. It is also clear from the fact that they assume that the P300 CIT is itself a type of lie detection. In this they are not entirely correct. It is true that one may be inclined to infer deceptiveness from the results of a P300 CIT, either because the testing protocol forces subjects to lie when responding to the stimuli, or because the subject themselves may deny recognising the target information. But inferring deceptiveness is neither the primary goal nor the primary forensic use of this test — inferring recognition is.

Pardo and Patterson’s preoccupation with lie detection should blunt some of the force of my critique. After all, my focus is on recognition detection, and so it may be fairly said that my defence of that technology misses their larger point, or does not really call into question their larger point. Nevertheless, I do think there is some value to what I am about to say. Pardo and Patterson do still talk about the P300 CIT and they do still argue that the mereological fallacy (which I’ll explain in a moment) could apply to the interpretation of evidence drawn from that test. The fact that they spend less time fleshing this out doesn’t mean the topic is irrelevant. Indeed, it may serve to bolster my critique since it suggests that their application of the mereological fallacy to the P300 CIT is not as well thought-out, nor as respectful of the current state of the art in research and scholarship, as it should be.

But what is this mereological fallacy and how does it affect their argument? Mereology is the study of part-whole relations, so as you might gather the mereological fallacy arises from a failure to appreciate the difference between a whole and one of its parts. For the sake of clarity, let’s distinguish between two versions of the mereological fallacy. The first, more general one, can be defined like this:

The General Mereological Fallacy: Arises whenever you ascribe properties that are rightly ascribed to a whole to some part or sub-part of that whole. For example, applying the predicate “fast” to a runner’s leg, rather than to the runner themselves.

The second, more specific one, can be defined like this:

The Neurolaw Mereological Fallacy: Arises whenever a neurolaw proponent ascribes behavioural or person-level properties to a state of the brain. (In other words, whenever they assume that a brain state is constitutive of or equivalent to a behavioural or person-level state). For example, applying the predicate “wise” to a state of the brain, as opposed to the person whose brain state it is.



This more specific version of the fallacy is the centrepiece of Pardo and Patterson’s book. Indeed, their book is effectively one long elaboration of how the neurolaw mereological fallacy arises in various aspects of the neurolaw literature. In basing their criticism on this fallacy, they are following the work of others. As far as I am aware, the mereological fallacy was first introduced into debates about the philosophy of mind by Bennett and Hacker. Pardo and Patterson are simply adapting and updating Bennett and Hacker’s critique, and applying it to the neurolaw debate. This is not a criticism of their work since they do that job with great care and aplomb; it is simply an attempt to recognise the origins of the critique.

Anyway, the neurolaw mereological fallacy provides the basis for Pardo and Patterson’s main critique of brain-based lie detection. Though they do not set this critique out with any formality, I think it can be plausibly interpreted as taking the following form (see pp. 99-105 for the details):


  • (1) If it is likely that the use of brain-based lie detection evidence would lead legal actors (lawyers, judges, juries etc) to commit the neurolaw mereological fallacy, then we should be (very) cautious about its forensic uses.

  • (2) The use of brain-based lie detection evidence is likely to lead legal actors to commit the neurolaw mereological fallacy.

  • (3) Therefore, we should be (very) cautious about the forensic uses of brain-based lie detection evidence.


Let’s go through the main premises of this argument in some detail.

The first premise is the guiding normative assumption. I am not going to challenge it here. I will simply accept it arguendo (“for the sake of argument”). Nevertheless, one might wonder why we should endorse it. Why is the mereological fallacy so normatively problematic? There are several reasons. The main one is that the law cares about certain concepts. These include the intentions of a murder suspect, the knowledge of the thief, and the credibility or potential deceptiveness of the witness. The application of these concepts to real people is what carries all the normative weight in a legal trial. The content of one’s intentions and the state of one’s knowledge are what separate the criminal from the innocent. The deceptiveness of one’s testimony is what renders it probative (or not) in legal decision-making. Pardo and Patterson maintain that all these concepts, properly understood, apply at the behavioural or personal level of analysis. For example, they argue that deceptiveness depends on a complex relationship between a person’s behaviour and the context in which that behaviour is performed. To be precise, being deceptive means saying or implying something that one believes to be false, in a social context in which truth-telling is expected or demanded. These behavioural-contextual criteria are what ultimately determine the correct application of the predicate “deceptive” to an individual.

If we make a mistake in the application of those predicates, it has significant normative implications. If we deem someone deceptive when, by rights, they are not, then we risk a miscarriage of justice (or something less severe but still malign). The concern that Pardo and Patterson have is that neurolaw will encourage people to make such mistakes. If they start using neurological criteria as the gold-standard in the application of normative, behavioural-level predicates like “intention” and “knowledge”, then they risk making normative errors. This is why we should be cautious about the use of neuroscientific evidence in the law.

But how cautious should we be? That’s something I’m not entirely clear about from my reading of Pardo and Patterson’s book. They are not completely opposed to the use of brain-based lie detection in the law. Far from it. They think it could, one day, be used to assist legal decision-making. But they do urge some level of caution. My sense from their discussion, and from their book as a whole, is that they favour a lot of caution. This is why I have put “very” in brackets in my statement of premise (1).

Moving on then to premise (2), this is the key factual claim about the use of brain-based lie detection evidence. In its current form it does not discriminate between the P300 CIT and the fMRI CQT. Pardo and Patterson’s concern is that evidence drawn from these tests will lead to legal actors confusing the presence of brain signal X with the subject’s meeting the criteria for the application of a behavioural predicate like “knowing” or “intending” or “deceiving”. In the case of the P300 CIT, the fallacy arises if the detection of the P300 is taken to be equivalent to the detection of a “knowledge”-state within the subject’s brain, instead of merely evidence that can be used to infer that the subject is in the appropriate behavioural knowledge state.

But do proponents of this technology commit the fallacy? Pardo and Patterson argue that they do. They offer support for this by quoting from an infamous proponent of the P300 CIT: Lawrence Farwell. When describing how the technology worked, Farwell once said that the “brain of the criminal is always there, recording events, in some ways like a video camera”. Hence, he argued that the P300 CIT reveals whether or not crime-relevant information is present in the brain’s recording. Farwell is committing the fallacy here because he thinks that the state of knowing crime-relevant information is equivalent to a brain state. But it is not:

This characterization depends on a confused conception of knowledge. Neither knowing something nor what is known — a detail about a crime, for example — is stored in the brain…Suppose, for example, a defendant has brain activity that is purported to be knowledge of a particular fact about a crime. But, suppose further, this defendant sincerely could not engage in any behavior that would count as manifestation of knowledge. On what basis could one claim and prove that the defendant truly had knowledge of this fact? We suggest that there is none; rather, as with a discrepancy regarding lies and deception, the defendant’s failure to satisfy any criteria for knowing would override claims that depend on the neuroscientific evidence
(Pardo and Patterson 2013, pp. 101-102)

Or as they put it again later, behavioural evidence is “criterial” evidence for someone knowing a particular fact (satisfaction of the behaviour criteria simply is equivalent to being in a state of knowledge); neuroscientific evidence is merely inductive evidence that can be used to infer what someone knows. People like Farwell are wont to confuse the latter with the former and hence wont to commit the mereological fallacy.

That, at any rate, would appear to be their argument. Is it any good?


3. Should we take the mereological fallacy seriously?
I want to make three criticisms of Pardo and Patterson’s argument. First, I want to suggest that the risk of proponents of the P300 CIT committing the mereological fallacy is, in reality, slight. At least, it is when one takes into account the most up-to-date work being done on the topic. Second, I want to push back against Pardo and Patterson’s characterisation of the mereological fallacy in the case of the P300 CIT. And third — and perhaps most significantly — I want to argue that in emphasising the risk of a neurolaw mereological fallacy, Pardo and Patterson ignore other possible — and arguably more serious — evidential errors in the legal system.

(Note: these criticisms are hastily constructed. They are my preliminary take on the matter. I hope to revise them after next week’s conference)

Turning to the first criticism, my worry is that in their defence of premise (2), Pardo and Patterson are constructing something of a straw man. For instance, they cite Lawrence Farwell as an example of someone who might confuse inductive neuroscientific evidence of knowledge with criterial behavioural evidence of knowledge. But this is a misleading example. Farwell’s characterisation of the brain as something which simply records and stores information has been criticised by leading proponents of the P300 CIT. For example, J. Peter Rosenfeld, himself a leading psychophysiologist and P300 researcher, wrote a lengthy critical appraisal of Farwell back in 2005. In it, he identified the problems with Farwell’s analogy, and noted that the act of remembering or recollecting information is highly fallible and reconstructive. There are also other P300 CIT researchers who have actually tried to check the vulnerability of the technique to false memories. Beyond this, Farwell has been more generally criticised by experts in the field. In a recent commentary on a review article written by Farwell, the authors (a group of leading P300 researchers) said this:

By selectively dismissing relevant data, presenting conference abstracts as published data, and most worrisome, deliberately duplicating participants and studies, he misrepresents the scientific status of brain fingerprinting. Thus, [Farwell] violates some of the cherished canons of science and if [he] is, as he claims to be, a ‘brain fingerprinting scientist’ he should feel obligated to retract the article. 
(Meijer et al, 2013)

Of course, Farwell isn’t a straw man: he really exists and he really has pushed for the use of this technology in the courtroom. So I’m not claiming that there is no danger here or that Pardo and Patterson are completely wrong to warn us about it. My only point is that Farwell isn’t particularly representative of the work being done in this field, and that there are others who are alive to the dangers of assuming that the P300 signal does anything more than provide inductive evidence of knowledge. To be fair, I have a dog in this fight since I have written positively about this technology. But I would never claim that the detection of a P300 is criterial evidence of guilty knowledge; I would always point out that further inferential steps are needed to reach such a conclusion. I am also keen to point out that this technology is not yet ready for forensic use. Along with other proponents, I think widespread field-testing — in which the results of a P300 are measured against other more conclusive forms of evidence (including behavioural evidence) in actual criminal/legal cases — would be needed before we seriously consider it.

This leads me to the second criticism, which is that I am not entirely sure about Pardo and Patterson’s characterisation of the mereological fallacy, at least as it pertains to the P300 CIT. They are claiming that there is an important distinction between a person knowing something and the neurological states of that person. Knowledge is a state pertaining to the whole, whereas neurological states are sub-parts of that whole. Fair enough. But as I see it, the P300 CIT is not a test of a subject’s knowledge at all. It is a recognition test. In fact, it is not even a test of whether a person recognises information; rather, it is a test of whether the person’s brain recognises information. A person’s brain could recognise a stimulus without the person themselves recognising the stimulus. Why? Because large parts of what the brain does are sub-conscious (sub-personal — if we assume the personal is defined by continuing streams of consciousness). Figuring out whether a subject’s brain recognises a stimulus seems forensically useful to me, and it need not be confused with assuming that the person recognises the stimulus.

The final criticism is probably the most important. A major problem I have with Pardo and Patterson’s discussion of brain-based lie detection is how isolated it feels. They highlight the empirical and conceptual problems with this form of evidence without considering that evidence in its appropriate context. I will grant that there is a slight risk that proponents of the P300 CIT will commit the mereological fallacy. But how important is that risk? Should it really lead us to be (very) cautious about the use of this technology? That’s something that can only be assessed in context. What other methods do we currently use for determining whether a witness or suspect recognises certain crime-relevant information? There are several. The most common are robust questioning, cross examination and interrogation. Verbal or behavioural responses from these methods are then used to make inferences about what someone knows or does not know. But these methods are not particularly reliable. Even if behavioural criteria determine what it means for a subject to know something, there are all sorts of behavioural signals that can mislead us. Is someone hiding something if they are being fidgety? Or if they look nervous and blink too often? What if they change their story? We routinely make inferences from these behavioural signals without knowing for sure how reliable they are or how likely they are to mislead us (though we may have some intuitive sense of this).

And this matters. One of the points that I, and others, have been making in relation to the P300 CIT is that it provides a neurological signal, from which we can make certain inferences, and that it comes with known error rates and precise protocols for its administration. In this respect it seems to have a comparative advantage over many of the other methods we use for making similar inferences. This is why we should take it seriously. In other words, even if it does carry with it the risk that legal actors will commit the mereological fallacy, that risk has to be weighed against the risks associated with other, similar, evidential methods. If the latter outweigh the former, Pardo and Patterson’s argument seem a good deal less significant.
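The point about known error rates can be made concrete with Bayes' theorem. The figures below are invented, not the test's actual error rates; the sketch simply shows how an explicit sensitivity and false-positive rate permit an auditable inference of a kind that fidgeting and blinking do not:

```python
def posterior_recognition(prior, sensitivity, false_positive_rate):
    """Posterior probability that the brain recognised the probe,
    given a positive P300 result, computed via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative (not empirical) figures: a 50/50 prior, an 85% hit rate,
# and a 5% false-positive rate.
print(round(posterior_recognition(prior=0.5,
                                  sensitivity=0.85,
                                  false_positive_rate=0.05), 3))  # 0.944
```

Nothing comparable can be computed for a hunch that a fidgety witness is hiding something, because the error rates of that "method" are unknown. That asymmetry is the comparative advantage being claimed here.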




4. Conclusion
To briefly sum up, Pardo and Patterson offer an interesting and philosophically sophisticated critique of brain-based lie detection. They argue that one of the dangers with this technology is that the legal actors who make use of it will be prone to commit the neurolaw mereological fallacy. This fallacy arises when they ascribe behavioural-level properties to brain states. Though I agree that this is a fallacy, I argue that it is not that dangerous, at least in the case of evidence drawn from the P300 CIT. This is for three reasons. First, I think the risk of actual proponents of the technology committing this fallacy is slight. With the exception of Lawrence Farwell — whom Pardo and Patterson critique — most proponents of the technology are sensitive to its various shortcomings. Second, Pardo and Patterson’s characterisation of the mereological fallacy — at least when it comes to this type of evidence — seems misleading. The P300 CIT provides a signal of brain-recognition of certain information, not person-recognition of information. And third, and most important, the risk of committing the mereological fallacy must be weighed against the risk of making faulty inferences from other types of evidence. I suggest that the latter risk is likely to be higher than the former.

Friday, December 5, 2014

The Philosophy of Sex (Series Index)




Once you've written nearly 700 posts, you begin to see patterns you never really appreciated. For example, I just realised that I've written quite a bit about the philosophy of sex (broadly construed). In doing so, I've covered a number of controversial debates and issues. These include: the permissibility of pornography; the criminalisation of prostitution; the punishment of rape and sexual assault; and the ethics of sex in virtual and robotic worlds.

Anyway, I thought it might be useful to group together everything I've written on the topic in this one post. I think it makes for some interesting reading. I've divided this up by theme, starting with the basic views on the ethics of sex, and then moving into more specialised debates. I haven't included the numerous posts I have written on the ethics of same-sex relations. There's another index-post that will give you links to them.


1. Introduction: General Issues in the Ethics of Sex


  • On Benatar's Two Views of Sexual Ethics - A look at David Benatar's classic paper which argued that a casual attitude toward sex implies that there is nothing particularly wrong about rape and child sexual abuse. I tried to resist Benatar's conclusions.




2. The Ethics of Pornography







3. Prostitution and the Ethics of Commercial Sex






4. Criminal Law: Rape, Sexual Assault and Incest



  • On Rubenfeld and the Riddle of Rape by Deception - My analysis and critique of Jed Rubenfeld's controversial article on rape by deception. Rubenfeld argued that rape law should not be premised on consent and the right to sexual autonomy. Instead, it should be based on the right to self-possession and bodily autonomy. 




5. Robotic and Virtual Sex


  • Will sex workers be replaced by robots? (A Precis) - A brief summary of my paper on the topic of sex work and technological unemployment. I try to argue -- contra others -- that sex work may be one of the few areas that is resistant to technological unemployment.







Wednesday, December 3, 2014

Are we innate retributivists? Review of the Psychological Evidence



Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.

The ethical debate about the merits of consequentialism vis-a-vis retributivism is long-running. I have written about it many times in the past. In this post, I want to take a slightly different perspective on the matter. I want to look at the psychological basis of our punishment practices. Why do ordinary people impose punishment on others? Do they do so for consequentialist reasons? Or are they all “innate” retributivists? (I use the term “innate” loosely: I do not mean to imply that there is a natural instinct or hardwired drive for retributivism — though that is possible; I simply mean to imply that retributivism might be the default or cognitively more felicitous approach for most people).

There has been much research on this matter over the past 20 years. I want to review some of it here. In doing this, I rely heavily on an article by Carlsmith and Darley (both leading researchers on this topic). I’ll talk about methodology first, then look at some studies.


1. How do you study the psychology of punishment?
In order to study the psychology of punishment you first need to clarify your hypothesis. The main research question in this area concerns the difference between retributivism and consequentialism. So let’s offer slightly more formal definitions of those concepts:

Retributivism: The belief that punishment should fit the crime. In other words, that punishment should only ever be imposed for historical wrongdoing, and that the amount of punishment imposed on an individual should be proportionate to the gravity of the wrong and the individual’s level of blameworthiness.

Consequentialism: The belief that punishment should help us to achieve some good outcome. In other words, that punishment should be future-oriented. The outcomes in question can vary depending on the particular theory (e.g. rehabilitation, general deterrence, special deterrence, incapacitation). The amount of punishment should be sensitive to the amount of good that can be achieved.

The innate retribution hypothesis is that most people make punishment decisions on a retributivist basis, rather than on a consequentialist basis. That is to say, they are sensitive to the gravity of the wrong and the degree of blameworthiness of the wrongdoer, not the amount of deterrence or rehabilitation (or whatever) that can be achieved through punishment. To give an example, for the retributivist the unknown person who committed ten intentional, carefully planned murders would deserve more punishment than the famous celebrity who was found driving under the influence, even if the latter’s punishment would have a greater deterrent effect.

How can we study this hypothesis? Carlsmith and Darley identify two methods:

Verbal report method: Simply ask people why they chose to punish someone in the way that they did.

Behavioural measure method: Ask people to perform various punishment-related tasks (e.g. ascribe punishment to an individual based on a vignette) and then try to infer their underlying cognitive processes from their behaviour on that task.

The verbal report method is easy to implement, but flawed. People often don’t have a good understanding of why they made the decisions they made; they often say what they think other people want to hear; and they often rationalise decisions after the fact. That’s why most researchers prefer the behavioural measure method, even though it is more difficult to implement.

A typical behavioural study will present subjects with a series of vignettes. The vignettes will describe a crime or wrong, and the subject will be asked to choose an appropriate punishment. The vignettes will vary either in terms of the wrong being done, the degree of blameworthiness, or the amount of good that could be achieved through punishment. By varying the vignettes in this manner, researchers can tease apart the role of retributive and consequentialist criteria in punishment-related decision-making.
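A toy simulation makes the logic of this design clear. Here a purely "retributivist" decision rule is applied to vignettes that cross offence gravity with reoffending risk (all values are invented); grouping the resulting ratings by each factor shows which factor the ratings track:

```python
# Vignettes cross offence gravity (retributive factor) with likelihood of
# reoffending (consequentialist factor). All numbers are illustrative.
vignettes = [
    {"gravity": g, "reoffend_risk": r}
    for g in (1, 2, 3)         # petty theft .. serious assault
    for r in (0.1, 0.5, 0.9)   # low .. high likelihood of reoffending
]

def retributivist_rating(v):
    # Punishment scales with gravity; reoffending risk is ignored.
    return 2 * v["gravity"]

ratings = [retributivist_rating(v) for v in vignettes]

def mean_by(key):
    """Mean punishment rating for each level of the chosen factor."""
    groups = {}
    for v, rating in zip(vignettes, ratings):
        groups.setdefault(v[key], []).append(rating)
    return {k: sum(vals) / len(vals) for k, vals in groups.items()}

print(mean_by("gravity"))        # means differ -> sensitive to gravity
print(mean_by("reoffend_risk"))  # means identical -> insensitive to risk
```

Real studies run this logic in reverse: the ratings come from subjects, and the researchers infer the decision rule from which factor the ratings co-vary with.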


2. What does the research show?
That should be enough by way of background. What does the research actually show? Does it support the innate retribution hypothesis? Carlsmith and Darley argue that it does. In particular, they argue that research up to now shows that people are naturally drawn to retribution-related information, that their judgments are sensitive to that information (and not consequentialism-related information), that they find it easier to understand retributivism, that there are few individual differences in this inclination, and that there is a disparity between what people believe about their punishment-related goals and their actual behaviour. Let’s look at the studies that back all this up:

People are drawn to retribution-related information: This finding comes from a study by Carlsmith, published in 2006. The study worked like this: Subjects entered the lab knowing only that a crime had taken place. They were then presented with different categories of information about that crime. Some categories covered deterrence-related information, some covered incapacitation-related information, and some covered retribution-related information. The finding was that subjects were overwhelmingly drawn to the retribution-related information. Indeed, 97% of subjects selected such information on a first trial, and strong majorities (64% and 57%) selected it on second and third trials. In a follow-up, it was also found that confidence in judgment went up more when subjects received the retribution-related information than when they received information drawn from the other categories.

Punishment decisions are highly sensitive to retribution-relevant criteria, not consequentialist criteria: This is supported by a couple of studies. Darley et al 2000 compared retribution-relevant criteria to incapacitation-related criteria (i.e. likelihood of an individual offender re-offending). Subjects were presented with 10 vignettes that varied in terms of the gravity of the crime (petty theft to assassination) and in terms of both the prior history of the offender and the future likelihood of reoffending. The study found that punishment decisions were highly sensitive to the retribution-related criteria and that subjects largely ignored the likelihood of reoffending. In a sister study, published two years later, Carlsmith et al (2002), found that the same was true when you compared retribution to general deterrence. This study also included more variations in the degree of wrongdoing and blameworthiness, and found that the sensitivity across these variations tracked the predictions of the innate retribution hypothesis.

People find it easier to understand the retributive theory of punishment: This finding comes from the Darley et al (2000) study. In that study, subjects were asked to review the same vignettes a second and third time from a particular theoretical perspective. Subjects were given an explanation of the different theories (retribution or incapacitation) and were then asked to assign punishment in each of the vignettes on the basis of those theories. Subjects found it much easier to adopt the retribution perspective. Indeed, even when explicitly instructed to ignore retribution, they were still found to pay some attention to retribution-relevant criteria (e.g. the gravity of the offence).

There are few, if any, individual differences in punishment style: The Carlsmith et al (2002) study tried to see whether there were systematic differences in punishment style between individuals. In other words, could it be that even though the majority were retributivists, there were a few consistent consequentialists? They investigated this by explaining the theories to the subjects and asking them to pick statements that best articulated their own views. This led to the categorisation of experimental subjects as either retributivists, consequentialists or pluralists. The experimenters then tracked the decisions of these different groups in relation to the different vignettes. They found practically no difference. People who identified with the deterrence position were slightly more sensitive to deterrence-related criteria, but the effect was small and dwarfed by their sensitivity to retribution-related criteria.

Self-reported justifications for punishment bear little relation to actual punishment-related behaviour: This is unsurprising in light of the previous finding, but it’s worth knowing a bit more about it. In a 2008 study, Carlsmith tried to see whether people’s self-reported justifications for punishment were consistent with their actual punishment-related decisions. He found that they were not, thereby highlighting the limitations of self-report studies. He also found that although people expressed support for deterrence-related policies (e.g. zero tolerance), they soon abandoned these policies once they realised that those policies failed to track proportionality.

People are unlikely to endorse a system of restorative justice that lacks retributive features: This is a slightly narrower finding. Restorative justice is the name given to approaches to criminal justice that try to minimise harsh treatment, aiming instead to repair the harm done to victims of crime and to restore the offender to their community. In a study done by Gromet and Darley in 2006, it was found that people were less likely to endorse such a system if it ignored retribution completely. The study looked at different offences and asked subjects to assign the offender to one of three courts. The first court was purely restorative (with no punitive elements), the second was mixed, and the third was traditional (i.e. punitive only). The experimenters found that people accepted the purely restorative system for low-level offending (e.g. “Halloween mischief”) but not for higher levels of offending (e.g. attempted rape or murder).



Taken together, these findings seem to make an impressive case for the innate retribution hypothesis. They suggest that people are cognitively inclined to the retributive point of view. There are, however, problems with this analysis.


3. Concluding thoughts
I need to preface this with a confession. I have not read all the details of these studies. I know that they try to control for various confounds and use multiple manipulations of the vignettes in order to rule out competing hypotheses. That said, in the review article, Carlsmith and Darley suggest that their studies proceed from a particular philosophical assumption: that retributivism and consequentialism are, fundamentally, irreconcilable. They accept that deterrence-based judgments could be sensitive to the level of wrongdoing, but they argue that this sensitivity is merely contingent. The ultimate goal is deterrence, and if you can deter more people by severely punishing a less-deserving offender, then so be it. Achievement of that goal trumps any sensitivity to wrongdoing. As they put it themselves:

We have argued, as have numerous philosophers, that retribution and utility cannot be effectively integrated and that logic dictates an “either/or” approach. This strikes many readers as overly simplistic because it is easy to think “I want both…”. There is certainly nothing wrong with this sentiment, and holding these desires simultaneously is not illogical. Indeed, it is frequently the case that a punishment can serve both functions effectively. Nonetheless, the two justifications are not isomorphic and frequently diverge on appropriate sentences. It is in these situations that one must make a choice, and that one motive will trump the other. 
(Carlsmith and Darley 2008, p. 202)

And it is these situations that their vignettes are designed to test.

This seems like a philosophically sophisticated position, but it may lead some to question the results being presented. There are those who think that consequentialism-based decision-making can incorporate a proportionality requirement. They will be inclined to think an appropriately designed study could account for this. This reminds me, somewhat, of the empirical study of free will and responsibility. Many of the early studies on people’s attitudes toward free will and responsibility assumed that deterministic explanations of human behaviour ruled out responsibility (i.e. that the two concepts were not reconcilable). But, of course, this ignored numerous sophisticated compatibilist views. More recent studies have tried to correct for this.

I wonder whether something similar could be true of the study of punishment. I am not sufficiently well-read in the psychological literature, but the analogy seems pretty direct. The logical, either/or incompatibility between free will and determinism is similar to the logical, either/or incompatibility between historical proportionality and consequentialism. Furthermore, the two domains of study seem to be importantly linked: beliefs about free will and responsibility often drive retributive judgments. That said, there are differences. I’m certainly more persuaded by the irreconcilability in the case of retributivism and consequentialism.

Another point to make is that the studies discussed above all seem to have been performed on samples taken from the US population (primarily college students in the studies that I looked at). Although some of these are claimed to be representative samples of the US population, one may still wonder about the extent of cultural determination in these cases. The US is, arguably, a more retributively inclined nation. Would the same results hold up in other countries? I found one study (on which Darley is a co-author) comparing US citizens to Canadians and Germans, which suggested that the differences across these groups are minimal, despite the general cultural disparities.

Anyway, those are just some quick thoughts on this topic. Carlsmith and Darley also use their empirical findings to make some normative claims about the law. I’m much more confident about my ability to assess those normative claims, so I’ll have a look at them in a future post.

Wednesday, November 26, 2014

The Epistemological Objection to Divine Command Theory




Regular readers will know that I have recently been working my way through Erik Wielenberg’s fascinating new book Robust Ethics. In the book, Wielenberg defends a robust non-natural, non-theistic moral realism. According to this view, moral facts exist as part of the basic metaphysical furniture of the universe. They are sui generis, not grounded in or constituted by other types of fact.

Although it is possible for a religious believer to embrace this view, many do not. One of the leading theistic theories holds that certain types of moral fact — specifically obligations — cannot exist without divine commands (Divine Command Theory or DCT). This is the view defended by the likes of Robert Adams, Stephen Evans, William Lane Craig, Glenn Peoples and many others.

In this post, I want to share one of Wielenberg’s objections to the DCT of moral obligations. This objection holds that DCT cannot provide a satisfactory account of obligations because it cannot account for the obligations of reasonable non-believers. This objection has been defended by others over the years, but Wielenberg’s discussion is the most up-to-date.

That said, I don’t think it is the most perspicuous discussion. So in what follows I’m going to try to clarify the argument in my usual fashion. In other words, you can expect lots of definitions, numbered premises and argument maps. This is going to be a long one.


1. Background: General Problems with Theological Stateism
Theological voluntarism is the name given to a general family of theistic moral theories. Each of these theories holds that a particular moral status (e.g. whether X is good/bad or whether X is permissible/obligatory) depends on one or more of God’s voluntary acts. The divine command theory belongs to this family. In its most popular contemporary form, it holds that the moral status “X is obligatory” depends on the existence of a divine command to do X (or refrain from doing X).

In his book, Wielenberg identifies a broader class of theistic moral theories, which he refers to under the label ‘theological stateism’:

Theological Stateism: The view that particular moral statuses (such as good, bad, right, wrong, permissible, obligatory etc) depend for their existence on one or more of God’s states (e.g. His beliefs, desires, intentions, commands etc).

Theological stateism is broader than voluntarism because the states appealed to may or may not be under voluntary control. For instance, it may be that God necessarily desires or intends that the torturing of innocent children be forbidden. It is not something that he voluntarily wills to be the case. Indeed, the involuntariness of the divine state is something that many theists find congenial because it helps them to avoid the horns of the Euthyphro dilemma (though it may lead to other theological problems). In any event, all voluntarist theories are subsumed within the class of theological stateism.

The foremost defender of the DCT is Robert M. Adams. As mentioned above, he and other DCT believers think that commands are necessary if moral obligations are to exist. The command must take the form of some sign that is communicated to a moral agent, expressing the view that X is obligatory.

Adams offers several interesting arguments in favour of this view. One of the main ones is that without commands we cannot tell the difference between an obligatory act (one that it is our duty to perform) and a supererogatory act (one that is above and beyond the call of duty). Here’s an analogy I have used to explain the gist of this argument:

Suppose you and I draw up a contract stating that you must supply me with a television in return for a sum of money. By signing our names to this contract we create certain obligations: I must supply the money; you must supply the TV. Now suppose that I would really like it if you delivered the TV to my house, rather than forcing me to pick it up. However, it was never stipulated in the contract that you must deliver it to my door. As it happens, you actually do deliver it to my door. What is the moral status of this? The argument here would be that it is supererogatory (above and beyond the call of duty), not obligatory. Without the express statement within the contract, the obligation does not exist.

Adams’s view is that what is true for you and me in the contract is also true when it comes to our relationship with God. He cannot create obligations unless he communicates the specific content of those obligations to us in the form of a command. This is why Adams critiques other stateist theories, such as divine desire theory. He does so on the grounds that they allow for the existence of obligations that have not been clearly communicated to the obligated. He thinks this is not a sound basis for the existence of an obligation.


2. Reasonable Non-Believers and the Epistemological Objection
The fact that communication is essential to Adams’s DCT creates a problem. If there are no communications, or if the communications are unrecognisable (for at least some segment of the population) then moral obligations do not exist (for at least some segment of the population). The claim made by several authors is that this is true for reasonable non-believers, i.e. those who do not believe in God but who do not violate any epistemic duty in their non-belief.

This has sometimes been referred to as the epistemological problem for DCT, but that can be misleading. The problem isn’t simply that reasonable non-believers cannot know their moral obligations. The problem is that, for them, moral obligations simply don’t exist. Though this objection is at the heart of Wielenberg’s discussion, and though it has been discussed by others in the past, I have nowhere seen it formulated in a way that explains clearly how it works or why it is a problem for DCT. To correct for that defect, I offer the following, somewhat long-winded, formalisation:


  • (1) According to DCT, for any given moral agent (S), an obligation to X (or to refrain from X) exists if and only if God commands S to X (or refrain from X).

  • (2) A theological stateist theory of moral obligations fails to account for the existence of obligations unless the moral agents to whom the obligation applies have knowledge of the relevant theological state.

  • (3) DCT is a theological stateist theory of moral obligations.

  • (4) Therefore, DCT fails to account for the existence of an obligation to X (or to refrain from X) unless S has knowledge of God’s commands (from 1, 2 and 3).

  • (5) If there are reasonable non-believers (i.e. people who don’t believe in God and who do not violate any epistemic duties), then they cannot have knowledge of God’s commands.

  • (6) There are reasonable non-believers.

  • (7) Therefore, on DCT, moral obligations fail to exist for reasonable non-believers (from 4, 5 and 6).

  • (8) DCT cannot be a satisfactory theory of moral obligations if it entails that moral obligations do not exist for reasonable non-believers.

  • (9) Therefore, DCT cannot be a satisfactory theory of moral obligations.





A word or two on each of the premises. Premise (1) is simply intended to capture the central thesis of DCT. I don’t think a defender of DCT would object. Premise (2) is based on Adams’s objections to other stateist theories (and, indeed, his more general defence of DCT). As pointed out above, he thinks awareness of the contents of the command is essential if we are to distinguish obligations from other possible moral statuses, and to avoid the unwelcome possibility of people being obliged to do X without being aware of the obligation. Premise (3) follows from the definition of stateist theories, and (4) then follows as an initial conclusion.

That brings us to premise (5), which is the most controversial of the bunch and the one that defenders of the DCT have been most inclined to dispute. We will return to it below. Premise (6) is also controversial. Many religious believers assume that non-believers have unjustifiably rejected God. This is something that has been thrashed out at length in the debate over Schellenberg’s divine hiddenness argument (which also relies on the supposition of reasonable non-belief). I’m not going to get into the debate here. I simply ask that the premise be accepted for the sake of argument.

The combination of (4), (5) and (6) gives us the main conclusion of the argument, which is that DCT entails the non-existence of moral obligations for reasonable non-believers. I’ve tacked a little bit extra on (in the form of (8) and (9)) in order to show why this is such a big problem. I don’t have any real argument for this extra bit. It just seems right to say that if moral obligations exist at all, then they exist for everybody, not just theists. In any event, and as we are about to see, theists have been keen to defend this view, so they must see something in it.
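Since the argument is essentially a chain of modus ponens steps, its validity (though of course not its soundness) can be checked mechanically. Here is a minimal sketch in the Lean theorem prover, with premises (1)–(3) folded into the intermediate conclusion (4); the propositional labels are my own shorthand, not Wielenberg’s notation:

```lean
-- Propositional placeholders (my labels):
--   Knows        : reasonable non-believers can know God's commands
--   NonBelievers : reasonable non-believers exist
--   Fails        : DCT fails to account for their obligations
--   Sat          : DCT is a satisfactory theory of moral obligations
theorem epistemological_objection
    (Knows NonBelievers Fails Sat : Prop)
    (p4 : ¬Knows → Fails)         -- (4): no knowledge of commands → DCT fails
    (p5 : NonBelievers → ¬Knows)  -- (5): reasonable non-believers lack that knowledge
    (p6 : NonBelievers)           -- (6): there are reasonable non-believers
    (p8 : Fails → ¬Sat)           -- (8): such a failure disqualifies DCT
    : ¬Sat :=                     -- (9): DCT is not satisfactory
  p8 (p4 (p5 p6))
```

The proof term is just function application: (5) and (6) give ¬Knows, which with (4) gives Fails (this is step (7)), and (8) then delivers the conclusion. Any dispute has to be with a premise, not the inference.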

That’s a first pass at the argument. Now let’s consider the views of three authors on the plausibility of premise (5): Wes Morriston, Stephen Evans and Erik Wielenberg.


3. Morriston on Why Reasonable Non-believers Cannot Know God’s Commands
We’ll start with Morriston who has, perhaps, offered the most sustained analysis of the argument. He tries to defend premise (5). To understand his defence, we need to step back for a moment and consider what it means for God to command someone to perform or refrain from performing some act. The obvious way would be for God to literally issue a verbal or written command, i.e. to state directly to us that we should do X or refrain from doing X. He could do this through some authoritative religious text or other unambiguous form of communication (just as I am unambiguously communicating with you right now). The problem is that it is not at all clear that we have such direct verbal or written commands. At the very least, this is something that reasonable non-believers reasonably deny.

As a result of this, most DCT defenders argue that we must take a broader view of what counts as a communication. According to this broader view, the urgings of conscience or deep intuitive beliefs that doing X would be wrong, could count as communications of divine commands. It may be more difficult for the reasonable non-believer to deny that they have epistemic access to those communications.

Morriston thinks that there is a problem here. His view can be summed up by the following argument:


  • (10) To know that a sign (e.g. an urging of conscience) is an obligation-conferring command, one must know that the sign emanates from the right source (an agent with the ability to issue such a command).

  • (11) A reasonable non-believer does not know that a sign (e.g. an urging of conscience) emanates from the right source.

  • (12) Therefore, a reasonable non-believer cannot know whether a sign (e.g. an urging of conscience) is an obligation-conferring command (and therefore (5) is true).





Premise (10) is key here. Morriston derives support for it from Adams’s own DCT. According to Adams, God’s commands have obligation-conferring potential because God is the right sort of being. He has the right nature (lovingkindness and maximal goodness), he has the requisite authority, and we stand in the right kind of relationship to him (he is our creator, he loves us, we prize his friendship and love). It is only in virtue of those qualities that he can confer obligations upon us through his commands. Hence, Morriston is right to say that knowledge of the source is essential if the sign is to have obligation-conferring potential.

Morriston uses a thought experiment to support his point:

Imagine that you have received a note saying, “Let me borrow your car. Leave it unlocked with the key in the ignition, and I will pick it up soon.” If you know that the note is from your spouse, or that it is from a friend to whom you owe a favour, you may perhaps have an obligation to obey this instruction. But if the note is unsigned, the handwriting is unfamiliar, and you have no idea who the author might be, then it’s as clear as day that you have no such obligation. 
(Morriston, 2009, 5-6)

And, of course, the problem for reasonable non-believers is that they do not know where the allegedly obligation-conferring signs are coming from. They might think that our moral intuitions arise from our evolutionary origins, not from the diktats of a divine creator.

The upshot of this is that premise (5) looks to be pretty solid.


4. Evans’s Response to Morriston
Stephen C. Evans tries to respond to Morriston. He does so with a lengthy thought experiment:

Suppose I am hiking in a remote region on the border between Iraq and Iran. I become lost and I am not sure exactly what country I am in. I suddenly see a sign, which (translated) reads as follows: “You must not leave this path.” As I walk further, I see loudspeakers, and from them I hear further instructions: “Leaving the path is strictly forbidden”. In such a situation it would be reasonable for me to form a belief that I have an obligation to stay on the path, even if I do not know the source of the commands. For all I know the commands may come from the government of Iraq or the government of Iran, or perhaps from some regional arm of government, or even from a private landowner whose property I am on. In such a situation I might reasonably believe that the commands communicated to me create obligations for me, even if I do not know for sure who gave the commands. 
(Evans 2013, p. 113-114)

Evans goes on to say that something similar could be true in the case of God’s commands. They may be communicated to people in a manner that makes it reasonable for them to believe that they have obligation-conferring potential, even if they don’t know for sure who the source of the command is.

Evans’s thought experiment is probably too elaborate for its own good. I’m not sure why it is necessary to set it on the border between Iraq and Iran, or to stipulate that the sign has to be translated. It’s probably best if we simplify its elements. What Evans really seems to be saying is that in any given scenario, if a sign with the general form of a command is communicated to an agent and if it is a live epistemic possibility for that agent that the sign comes from a source with the authority to create obligations (like the government or a landowner) then it is reasonable for that agent to believe that the sign creates an obligation. To express this in an argumentative form:


  • (13) In order for an agent to reasonably believe that a sign is an obligation-conferring command, two conditions must be met: (a) the agent must have epistemic access to the sign itself; and (b) it must be a live epistemic possibility for that agent that the sign emanates from a source with obligation-conferring potential.

  • (14) A reasonable non-believer can have epistemic access to signs that communicate commands and it is a live epistemic possibility for such agents that the signs emanate from God.

  • (15) Therefore, reasonable non-believers can reasonably believe in the existence of God’s obligation-conferring commands (and therefore (5) is false).




5. Wielenberg’s Criticisms of Evans
It is at this point that Wielenberg steps into the debate. And, somewhat disappointingly, he doesn’t have much to say. He makes two brief objections to Evans’s argument. The first is that Evans assumes (as did Morriston) that the sorts of signs available to reasonable non-believers will be understood by them to have a command-like structure. But it’s not clear that this will be the case.

Morriston and Evans both use thought experiments in which the communication to the moral agent takes the form of a sentence with a command-like structure (e.g. “You must not stray from the path”). This means that the agent knows they are being confronted with a command, even if they don’t know where it comes from. The same would not be true of something like a deep moral intuition or an urging of conscience. A reasonable non-believer might simply view that as a hard-wired or learned response to a particular scenario. Its imperative, command-like structure would be opaque to them.

The second point that Wielenberg makes is that Evans confuses reasonable belief in the existence of an obligation with reasonable belief in the existence of an obligation-conferring command. The distinction is subtle and obscured by the hiker thought experiment. In that thought experiment, the hiker comes to believe in the existence of an obligation to stay on the path because they recognise the possibility that the command-like signs they are hearing or seeing might come from a source with obligation-conferring powers. If you cut out the command-like signs — as Wielenberg says you must — you end up in a very different situation. Suppose that the landowner or government has mind control technology. Every time you walk down the path, you are sprayed with a mist of nanorobots that enter your brain and alter your beliefs in such a way that you think you have an obligation to stay on the path. In that case, there is no command-like communication, just a sudden belief in the existence of an obligation. Following Adams’s earlier arguments, that wouldn’t be enough to actually create an obligation: you would not have received the clear command. That’s more analogous to the situation of the reasonable non-believer.

At least, I think that’s how Wielenberg’s criticism works. Unfortunately, he isn’t too clear about it. Nevertheless, I think we can view it as a rebuttal to premise (13) of Evans’s argument.


  • (16) The reasonable non-believer cannot recognise the command-like structure of signs such as the urgings of conscience. At best, for them the urgings of conscience create strong beliefs in the existence of an obligation. Under Adams’s theory, strong belief is not enough for the existence of an obligation. There must be a clear command.




6. Concluding Thoughts
I think the epistemological objection to DCT is an interesting one. And I hope my summary of the debate is useful. Hopefully you can now see why the lack of knowledge of a command poses a problem for the existence of obligations under Adams’s modified DCT. And hopefully you can now see how proponents of the DCT try to rebut this objection.

What do I think about this? I’m not too sure. On the whole, the epistemological objection strikes me as something of a philosophical curio. It’s not the strongest or most rhetorically persuasive rebuttal of DCT. Furthermore, I’m unsure about Wielenberg’s contribution to the debate. I feel that his criticism misses one way of interpreting Evans’s response. I’ll try to explain.

To me, Evans is making a point about moral/philosophical risk and the effect it has on our belief in the existence of a command, not the contents of that command. I’ve discussed philosophical/moral risk in greater depth before. The main idea in discussions of philosophical/moral risk is that where you have a philosophically contentious proposition (like the possible existence of divine commands) there is usually some significant degree of uncertainty as to whether that proposition is true or false (i.e. there are decent arguments on either side). The claim then is that recognition of this uncertainty can lead to interesting conclusions. For instance, you might have no qualms about killing and eating sentient animals, but if you recognise the risk that this is morally wrong, you might nevertheless be obliged not to kill and eat an animal. The argument for this is that there is a considerable risk asymmetry between your options: eating meat might be perfectly innocuous, but the possibility that it might be highly immoral trumps this possible innocuousness and generates an obligation not to eat meat. Recognition of the risk generates this conclusion.

It might be that Evans’s argument makes similar claims about the philosophical risks pertaining to God’s existence and God’s commands. Even if reasonable non-believers do not believe in the existence of God or in the existence of divine commands, they might nevertheless recognise the philosophical risk (or possibility) that those things exist. And they might recognise it especially when it comes to interpreting the urgings of their own consciences. The result is that they recognise the philosophical risk that a particular sign is an obligation-conferring command, and this recognition is enough to generate the requisite level of knowledge. The fact that they do not really believe that a particular sign has a command-like structure is, contra Wielenberg, irrelevant. What matters is that they recognise the possibility that it has such a structure.

Just to be clear, I don’t think this improves things greatly for the defender of DCT. I think it would be very hard to defend the view that mere recognition of such philosophical risks/possibilities is sufficient to generate obligations for the reasonable non-believer (for one thing, there are far too many potential philosophical risks of this sort). Adams’s arguments seem to imply that a reasonable degree of certainty as to the nature of the command is necessary for any satisfactory theory of obligations. Recognition of mere possibilities seems to fall far short of this.