Wednesday, January 29, 2014

How Many Methods of Mind-Uploading?



I’ve written a few posts about mind-uploading, focusing mainly on its risks and philosophical problems. In each of these posts I’ve drawn distinctions between different varieties of “uploading” and suggested that some are less prone to risks and problems than others. So far, the distinctions I’ve drawn have been of my own choosing, based on what I’ve read about the topic over the years. But in his article “A Framework for Approaches to Transfer of a Mind’s Substrate”, Sim Bamford offers an alternative, slightly more sophisticated, framework for thinking about these issues. I want to share that framework in this post.

Bamford looks like a pretty interesting guy. His research is in the area of neural engineering. Consequently, he brings a much-needed practical orientation to this debate. Indeed, in the article in question, in addition to developing his framework, he offers glimpses into his own work on neural prosthetics, and freely admits to the limitations of the current technology. Needless to say, despite all this much-needed practicality, I’m still primarily interested in the more philosophical musings and that’s where the focus lies in this post.

In what follows we are going to do three things. First, we’ll offer a general characterisation of mind-uploading (or, as Bamford rightly prefers, “mind substrate transfer” or MST) and consider three different methods of MST that are advocated at the moment. Second, we’ll try to organise these different methods into a simple two-by-two matrix. And then third, we’ll see whether there are any other approaches to uploading that could occupy the unused space in this matrix.


1. Three Approaches to Mind-Uploading
MST is the process whereby “you” (or whatever it is that your identity consists in) are transferred from your current, biological substrate to an alternative, technological substrate. MST is (for now) purest science fiction. It’s important not to forget that at the outset. Nevertheless, there are various advocates and some of these advocates attach themselves to particular methods of MST. In each case, they believe that this method, pending further research and technological development, could allow us to preserve identity across different substrates.

Three methods seem particularly prominent in the current conversation. The first is what might be called the gradual replacement by parts method. The idea here is that the human brain (in which your identity is currently “housed”) could be replaced by neuroprosthetics. This is already being done to some extent, with things like cochlear implants and artificial limbs replacing the input and output channels of the nervous system. Bamford, being an expert in this area, also details some recent examples of “closed loop” prosthetics, which act as both input and output. The idea of the brain being gradually replaced by prosthetics is not completely outlandish, but it would require significant advances in current technology.

The second method of MST can be called the reconstruction via scan method. This is probably the method that has most captured public attention. The idea is that one creates a “copy” or model of the brain and then locates this copy in another medium. There are choices to be made about how fine-grained the copy really is (and this feeds into the discussion below). One proposal, discussed by Bostrom and Sandberg, is to emulate the brain right down to the level of individual neurons and synapses (“whole brain emulation”).

The third method of MST can be called the reconstruction from behaviour method. It tries to capture information at the behavioural-personal level and then use this information to recreate the person in another medium. This is obviously a much more abstract version of uploading, perhaps best not called uploading at all. It takes behavioural characteristics, traits and publicly disclosed thoughts — not brain states — to be the true essence of identity. It might seem a little bit odd — it certainly does to me — but it does have its fans. For example, Martine Rothblatt’s Terasem Foundation has an ongoing project researching this method.

Before we structure and order these methods, I want to pause to highlight an interesting test that Bamford proposes for determining their plausibility. To be fair, Bamford proposes this somewhat in passing, and doesn’t single it out and label it like I am doing, but I think it’s worth flagging it for special attention:

Bamford’s Test: Suppose that each of the above methods succeeds in producing a synthetic version of a particular person (A and synth-A). Suppose further that, when asked, synth-A insists that it shares an identity with A; that it simply is A. Then ask yourself: is there anything about the procedure that led to the creation of synth-A that makes their insistence any more plausible than the claim of someone alive today to be the reincarnation of Florence Nightingale?

Obviously this is intended somewhat in jest, and would need to be fleshed out in more detail before it became a useful test of plausibility. But it makes a serious point. Bamford claims that “identity” is a largely fictional construction of persons and societies, and so what really matters in this debate is whether the synthetic copy is socially accepted as having the same identity. Clearly the person claiming to be Florence Nightingale would not be. Are the proposed uploading methods enough to make a difference? My own feeling is that I’d be more inclined to accept the claim of synth-A if he or she had passed through the gradual replacement by parts method than any of the others. I say this partly because that method allows for constant checking of identity preservation. But that’s just my feeling. I’d be curious to hear what others think.


2. The Proposed Framework
Bamford’s suggestion is that the various methods of uploading can be categorised along two dimensions (or, for simplicity’s sake, in a two-by-two matrix). The first of those dimensions differentiates between “on-line” and “off-line” versions of uploading; the second between “bottom-up” and “top-down” versions. Let’s unpack these terms in a little more detail.

The on-line/off-line distinction refers to the nature of the connection between the original and synthetic versions of the mind. In the off-line case, information sufficient for creating the synthetic version is gathered first, and then the synthetic version is assembled. The two versions can run side-by-side, but there is no causal nexus between them such that they work together to implement the same individual (for at least some period of time). By way of contrast, in the on-line case there is a causal nexus between the two versions and they do operate in parallel to implement the same individual. It is easy to see that the two reconstruction methods of MST fall inside the “off-line” bracket, whereas the gradual replacement by parts method falls inside the “on-line” bracket.

The bottom-up/top-down distinction refers to the type of information that is being captured and copied by the procedure. The top-down method tries to capture the highest level of information about the person. The bottom-up method tries to capture the lowest level of information necessary for making a reliable simulation. Clearly, the reconstruction from behaviour method would count as being top-down, since the focus there is at the behavioural level. Contrariwise, the gradual replacement of parts method and the reconstruction via scan method, are bottom-up. They focus on lower, neurological levels of information.

That gives us the following two-by-two matrix, with the three methods categorised accordingly.
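(For readers who prefer it spelled out, here is a minimal sketch of the same categorisation in Python. The representation is purely illustrative and my own, not Bamford’s; only the dimension labels and method names come from the discussion above.)

```python
# A rough representation of the two-by-two matrix, reconstructed from the
# discussion above. The dictionary itself is just a convenient illustration.
mst_matrix = {
    ("off-line", "bottom-up"): "reconstruction via scan (e.g. whole brain emulation)",
    ("off-line", "top-down"):  "reconstruction from behaviour",
    ("on-line",  "bottom-up"): "gradual replacement by parts (neuroprosthetics)",
    ("on-line",  "top-down"):  None,  # the empty cell discussed in section 3 below
}

for (connection, level), method in mst_matrix.items():
    print(f"{connection:8} / {level:9} -> {method or 'unoccupied'}")
```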



3. Is there a fourth method?
The most striking fact about this proposed framework is that one of the cells in the matrix is empty. This raises the obvious question: is there an (as yet underexplored) fourth method for MST? Bamford suggests that there might be. The approach would involve the use of a synthetic robot partner (almost like a symbiont).

The idea is roughly this: there needs to be some generic human-like substrate (the robot), with the same sensory and motor capacities as a regular human being, and with a control system modelled closely on the human brain. This system, however, needs to be “blank” (or as “blank” as is possible). The generic robot would be paired with a human being. It would learn from that human being, observing that person’s behaviour, slowly acquiring the same action patterns and cognitive routines, until eventually it forms a complete “copy”. In this way it would exhibit the properties needed for a top-down method of transfer.

But how would it be on-line? For this, there needs to be some causal nexus between the robot and the original that allows them to work in parallel to implement the same identity. Bamford imagines a couple of mechanisms that might do the trick. The first would be to link the reward systems of the robot and the human. The thought is that this would constrain them to work toward common ends (presumably the original human’s, since he or she is the more cognitively developed party when the process begins). The second mechanism would be to force the human and robot bodies to overlap in some way. This could be achieved if the robot was initially like an exoskeleton over the human body, or if the robot’s control system was connected to sensors and actuators implanted in the human.

As Bamford notes, this method is highly speculative, and no one else seems to have proposed it. Nevertheless, if we are willing to accept the other possibilities within the matrix (and that’s a big “if”), this does look like another avenue that might be worth exploring.

Tuesday, January 28, 2014

The Gamer's Dilemma: Virtual Murder versus Virtual Paedophilia (Part Two)


(Part One)

The Gamer’s Dilemma is the title of an article by Morgan Luck. We covered that article in part one. In brief, the article argues that there is something puzzling about attitudes toward virtual acts which, if they took place in the real world, would be immoral.

To be precise, there is something puzzling about attitudes toward virtual murder and virtual paedophilia. Acts of virtual murder are common in many videogames, and often deemed morally permissible. Acts of virtual paedophilia are not common (at least not in mainstream games sold on the open market), and are generally deemed impermissible. But why should this be? In neither case is an actual human being harmed, so if one is permissible/impermissible, so too should the other. This is the Gamer’s Dilemma.

In this post, we are going to look at an attempted resolution of the dilemma. The resolution comes from the pen of Christopher Bartel. He argues that the dilemma may fail (he’s a little bit weasel-worded in how he states his thesis, hence the use of the word “may”) because virtual paedophilia is an instance of child pornography, whereas virtual murder is not. This means that there is a relevant moral distinction between the two; and this moral distinction can explain why one is permissible and the other is not.

We’ll break the discussion up into three parts. First, let’s see how the argument works as a response to Luck; then, let’s consider Bartel’s defence of the argument; and, finally, let’s cover some objections to Bartel from Luck and his colleague Nathan Ellerby (yes: they wrote a response).


1. What’s pornography got to do with it?
Luck’s original presentation of the Gamer’s dilemma can be interpreted as a simple formal argument, as follows:


  • (1) If there is no morally relevant distinction between virtual murder and virtual paedophilia, then they should have the same moral status attached to them.
  • (2) There is no morally relevant distinction between virtual murder and virtual paedophilia.
  • (3) Therefore, they should have the same moral status attached to them.


Where the argument goes after that, of course, depends on what we think about virtual murder and virtual paedophilia. If we are committed to the permissibility of virtual murder, then we should permit virtual paedophilia. If we are committed to the impermissibility of virtual paedophilia, then we should forbid virtual murder. Since people seem to be committed to both of these things we get a dilemma.

Bartel’s resolution is a rebuttal of premise (2). He claims that there is a morally relevant distinction between virtual paedophilia and virtual murder. Virtual paedophilia is, he claims, an instance of child pornography, and that makes a moral difference. This is because child pornography is morally wrong, and so, by extension, virtual paedophilia is morally wrong. Furthermore, he argues that this moral wrongness stems from a property that virtual murder lacks: child pornography is wrong because it reinforces sexual inequality, something that virtual murder does not do.

In essence then, Bartel’s rebuttal works like this:


  • (4) Virtual paedophilia is an instance of child pornography.
  • (5) Child pornography is morally wrong because it reinforces sexual inequality.
  • (6) Therefore, virtual paedophilia is morally wrong because it reinforces sexual inequality.
  • (7) Virtual murder does not reinforce sexual inequality.
  • (8) Therefore, premise (2) is false: there is a morally relevant distinction between virtual murder and virtual paedophilia.


The connection to Luck’s original argument is illustrated in the argument map below. (Note: Luck and Ellerby adopt a slightly different reconstruction of the dialectic in their response to Bartel. Nevertheless, I’ve followed the major points in their reconstruction so that, when we get to it, their critique of Bartel will make sense in light of this argument map.)



What we need to do now is to consider Bartel’s defence of the key premises of this rebuttal. These are (4), (5) and, to a lesser extent, (7).


2. Defending the rebuttal
Premise (4) claims that virtual paedophilia is an instance of child pornography. But how can we be sure? Clearly, pornography involves some sort of sexually provocative representation. But that alone is not enough. Something more is needed. What is it? To answer that we need to look at the distinguishing features of pornographic imagery. Unfortunately, neat categorical definitions of “pornography” are notoriously difficult to find. U.S. Supreme Court Justice Potter Stewart’s infamous line “I know it when I see it”, for all its obvious flaws, seems to capture much of the debate.

We won’t delve too far into the definitional complexities right now. This is for a couple of reasons: one is that Bartel doesn’t go too far into them himself, and the other is that this lack of depth is something taken up by another participant in the Gamer’s Dilemma debate, Stephanie Partridge, whose contribution we’ll be covering in part three. Nevertheless, some basic definitional approaches to what counts as pornography should be distinguished:

The Intention Approach: A sexual representation is pornographic if it is intended to sexually arouse in the interest of sexual release.
The Usage Approach: A sexual representation is pornographic if it is used in a pornographic way or if it can reasonably be believed that it is going to be put to use in such a way.

The first approach, which Bartel locates in the work of Jerrold Levinson, makes the pornographic quality of the representation contingent upon the intentions of its creator. The second approach, which Bartel locates in the work of Michael Rea, makes it contingent upon the user or consumer.

It seems to me that neither definition is particularly helpful. The former because it is often difficult to determine the intention of the creator, and to see why their intention should be dispositive. The latter because the concept of “pornographic use” is vague, and, perhaps more importantly, because it seems to me like human beings can sexually fetishise pretty much anything. Nevertheless, the usage approach is the one that Bartel adopts.

As he sees it, no matter how the usage approach might slice the pie between the pornographic and non-pornographic, virtual paedophilia will count as an instance of child pornography. This is because it will involve the graphic depiction of sexual acts with (virtual) children, and also because people who play games which allow them to commit acts of virtual paedophilia must derive some sort of intrinsic (sexual?) pleasure from so doing. That gives us the following argument in support of premise (4) (note: this argument tracks more closely Luck and Ellerby’s reconstruction of the dialectic):


  • (9) If something depicts sexual acts involving children, and if people derive some sort of intrinsic pleasure from these depictions, then it counts as child pornography.
  • (10) Virtual paedophilia depicts sexual acts involving children.
  • (11) People who commit acts of virtual paedophilia will derive some sort of intrinsic pleasure from doing so.
  • (4) Therefore, virtual paedophilia is an instance of child pornography.



So much for that. Let’s turn now to the defence of premise (5): the claim that child pornography is wrong. As Bartel sees it, pretty much everybody agrees that child pornography is wrong, even if philosophers haven’t spent a great deal of time explaining exactly why it is wrong. Still, it would seem that at least one of the reasons why it is deemed wrong — perhaps the chief reason — is that it is harmful to the children involved in the depictions. The problem is that this reason doesn’t apply in the virtual case: no real children are involved. So something else must be said.

It is here that Bartel turns to an argument originally proffered by Neil Levy in response to the US Supreme Court decision in Ashcroft v. Free Speech Coalition (which held that virtual child pornography is protected under the First Amendment). Levy argued that virtual child pornography was wrong for one of the same reasons that mainstream pornography is wrong: because it reinforces sexual inequality. This is a common argument, one that I’ve looked at several times before on the blog. The idea being that the representations of women in mainstream pornography dehumanise, degrade, and objectify. This sexual imagery then reinforces general socio-sexual inequality against women.

But how can this apply to child pornography? It is relatively easy to understand the argument against mainstream pornography: it actually depicts (adult) women in an (often) submissive and objectified state. It clearly could (though this is contentious) be harmful to women as a result. But virtual child pornography is slightly different. Isn’t it? Not according to Levy’s argument. Levy argues that virtual child pornography could, in a less direct fashion, also reinforce sexual inequality against women. How so? Well, children are necessarily treated as submissive, lesser beings: there is a generally and socially acceptable power differential between adults and children. That is okay, but when that differential is eroticised, as it must be in the case of child pornography (even of the virtual kind), it will in turn reinforce the general sexual inequality.

Bartel doesn’t flesh the argument out in any more detail. This is unfortunate, as I do not have access to Levy’s original article. Still, I think the basic idea is clear, even if it isn’t developed to the level of detail one might like.

Since it isn’t developed to that level of detail, I’m somewhat reluctant to critique it. Nevertheless, one thing I would be concerned about — and I want to preface this by saying that this isn’t a topic I like to ponder in any depth — is the linking of the wrongness of virtual child pornography to the wrongness of mainstream pornography. From what I’ve read, it seems like the wrongness of mainstream pornography is contentious in a way that the wrongness of child pornography should not be. If there is something wrong with strictly virtual child pornography, I suspect it should be unique, in some way, to that case (i.e. not dependent on the more general wrongness of sexually explicit imagery). Anything else would, I suspect, diminish the gravity of the wrong people attach to child pornography (this, then, gets back to points Luck made in his original defence).

Anyway, to wrap up on this point, the following would seem to be a fair reconstruction of Bartel’s argument in favour of (5):


  • (13) Anything that reinforces sexual inequality is morally wrong.
  • (14) Child pornography reinforces sexual inequality (because it depicts children as sexually dominated beings with a lesser moral status).
  • (5) Therefore, child pornography is morally wrong because it reinforces sexual inequality.




That leaves us with premise (7). I won’t say too much about this here. In one sense, it is obvious that virtual murder is very different from pornography (in general) and child pornography (in particular). It is difficult to see how virtual murder could reinforce sexual inequality (unless, perhaps, the game only allowed you to virtually murder women).

Still, there is one criticism, briefly raised by Bartel, that is worth mentioning here. Is it possible that virtual murder involves a kind of pornographic depiction of its own? And could it be that pornographic depictions of this kind are themselves wrong? It could be, but Bartel for one is sceptical. Although one does hear talk of “torture-porn” and “murder-porn”, he argues that this stretches the conceptual boundary in a manner that may trivialise the core concept of pornography.


3. Challenging Bartel’s Argument
Luck and Ellerby wrote a response to Bartel. Interestingly, their response didn’t focus on the definition or wrongness of child pornography. Instead, it focused on the tenacity of the original dilemma. They submit that even if Bartel is right in his account of child pornography, it does not follow that the dilemma is resolved. This is because there are different versions of the dilemma, broad and narrow, and Bartel’s argument, at best, only covers the narrow versions.

Their response centres around premises (10) and (11) of Bartel’s defence. As you’ll recall, these two premises claimed, respectively, that virtual paedophilia depicted sexual acts with children, and that people would derive intrinsic pleasure from such depictions. Luck and Ellerby argue that both claims are flawed. While some instances of virtual paedophilia may satisfy those conditions, not all will do so, and those that don’t will still seem troubling enough to motivate the Gamer’s Dilemma.

Consider first the claim that virtual paedophilia depicts sexual acts with children. Luck and Ellerby argue that this isn’t so:

[S]uppose a game allows players to approach virtual children, and after progressing through various bits of suggestive dialogue, they have the chance to initiate an instance of child molestation, upon which the game screen would fade to black and the game would recommence in such a way as to make it clear that the act had occurred.

Such a game would allow a player to clearly commit an act of virtual paedophilia, without actually depicting it. But would that make it any more morally acceptable? Luck and Ellerby suggest not. To be fair, Bartel did seem to be aware of this problem since he mentioned the possibility of off-screen acts of paedophilia and murder. He just didn’t think they were part of the dilemma. In this, he was wrong: the dilemma is broad enough to cover off-screen acts.

Consider now the second claim: that virtual paedophilia would involve players who derive some kind of enjoyment from their virtual acts. Again, Luck and Ellerby argue that the dilemma is broad enough to cover virtual acts from which no pleasure is derived. Luck points back to an example he used in his original article: the game where, in order to progress, you had to perform an act of virtual paedophilia. He says it is perfectly possible to imagine people not enjoying this process. But would that mean that a game like this would avoid the dilemma? Surely not.

In addition to these two critiques, Luck and Ellerby make a more general point: Bartel’s whole argumentative strategy is flawed. As you’ll note from the argument map I presented above, Bartel’s aim is to locate some reason for thinking that virtual paedophilia is wrong that does not apply to virtual murder. But this alone is not enough to resolve the dilemma. Even if virtual paedophilia should be prohibited because it reinforces sexual inequality, it does not follow that virtual murder should not be prohibited. An additional argument would be needed for that conclusion.

Consider an example: If I claimed that shoplifting was wrong because it involved the appropriation of property from another; and if I said (correctly) that this reason does not apply to murder; I could not then say that murder is not wrong. There could be another moral reason for thinking that murder is wrong. Bartel’s attempted resolution of the dilemma makes a move like this, which is clearly illegitimate.


4. Conclusion
To sum up, Bartel argued that the Gamer’s Dilemma could be resolved. He did so on the grounds that there is a moral distinction between virtual murder and virtual paedophilia: the latter is an instance of child pornography while the former is not.

As we have seen, Bartel’s definition of child pornography and his argument in favour of the wrongness of such pornography (even when entirely virtual) are, at best, incomplete. Furthermore, even if those arguments could be successfully completed, it would not follow that the dilemma is resolved. This is because the wrongness of child pornography does not explain away all instances of the dilemma.

That brings us to the end of this post. I’ll do one final post on this topic, looking at Stephanie Partridge’s analysis of both Bartel’s account of child pornography and the dilemma itself.

Saturday, January 25, 2014

The Gamer's Dilemma: Virtual Murder versus Virtual Paedophilia (Part One)

Screenshot from Grand Theft Auto

Modern video games give players the opportunity to engage in highly realistic depictions of violent acts. Among these is the act of virtual murder: the player’s character intentionally kills someone in the game environment without good cause. Most avid gamers don’t seem overly concerned about this (reputed links between video games and violence notwithstanding). Nevertheless, when the possibility of other immoral virtual acts — say virtual paedophilia — is raised, people become rather more squeamish. Why is this? And is this double-standard justified?

These are the questions that Morgan Luck sets out to answer in his article “The Gamer’s Dilemma”. In brief, Luck argues that if virtual murder is deemed morally permissible, then it is difficult to see why virtual paedophilia is not. Both can claim the same moral argument — no actual victim is harmed in either case — and it is difficult to find further moral distinctions that would justify the double-standard. The result is that video gamers are landed in a dilemma: either they reject the permissibility of virtual murder and virtual paedophilia, or they accept the permissibility of both.

Originally published in 2009, Luck’s article has already generated a good deal of academic debate (I count at least three follow-up articles). I want to cover some of that debate in this series of posts. I start today by looking at Luck’s initial defence of the dilemma.

The post is divided into five main sections. The first clarifies exactly what it is we are talking about; the remaining four look at a series of arguments that try to avoid the dilemma.


1. Virtual Murder and Virtual Paedophilia
Before we get into the dilemma itself, it is worth briefly clarifying the kinds of virtual activities with which we are concerned. No doubt there could be many “borderline” or “fuzzy” instances of virtual murder or virtual paedophilia. For the purposes of the dilemma, we are concerned with paradigmatic instances of both. This makes the dilemma more compelling, and less easy to avoid.

The following is the definition of virtual murder proposed by Luck (this is slightly modified from the text):

Virtual Murder: A player commits an act of virtual murder if s/he directs his/her game character to kill another virtual character in circumstances such that, if the game environment were real, it would count as murder (i.e. not justified killing). Stipulatively, we assume that:
(a) the virtual victim is an AI, not the avatar of another human player; 
(b) the virtual victim represents a human adult; 
(c) the virtual victim does not “respawn” (come back to life); 
(d) the game player is a human adult with full mental competency; and 
(e) the game player’s virtual character is a human adult.

The stipulations are designed to avoid distractions. Luck argues that there are clearly some video games that allow players to commit acts that fall within the scope of this definition. Grand Theft Auto is a well-known example. Luck also argues that many people deem virtual murder to be perfectly permissible.

The following is the definition of virtual paedophilia proposed by Luck (again, slightly modified from the text):

Virtual Paedophilia: A player commits an act of virtual paedophilia if s/he directs his/her game character to molest another virtual character in circumstances such that, if the game environment were real, the character would be deemed a paedophile. Stipulatively, we assume that:
(a) the virtual victim is an AI, not the avatar of another human; 
(b) the virtual victim represents a human child; 
(c) the game player is a human adult of full mental competency; and 
(d) the player’s virtual character is a human adult.

Again, the stipulations focus attention on the paradigmatic case. I have no idea whether there are any video games that allow players to engage in acts of virtual paedophilia, but the existence of such a video game is, of course, clearly possible.

Luck contends that many people, including those who are untroubled by virtual murder, are disturbed by virtual paedophilia. But why? Is there any way for them to slip between the horns of the gamer’s dilemma?

The Gamer’s Dilemma: It is either the case that virtual murder and virtual paedophilia are both morally permissible, or that they are both morally prohibited; it is not the case that virtual paedophilia is prohibited and virtual murder is permissible.


Luck considers five possible arguments. Each of them alleges that there is some moral principle that allows us to distinguish between the two cases. The first of the five arguments is, rightly, given short shrift. This is the argument alleging that the important distinction between virtual murder and virtual paedophilia is that one is socially acceptable while the other is not. While this may be true, “social acceptability” is not a significant moral distinction. Something else must be motivating the social acceptability to make this compelling. So let’s consider some possibilities.


2. The Significant Likelihood Argument
The second argument against the dilemma makes the classic consequentialist “turn”. It starts from the premise that the real moral problem with virtual acts is not so much what they depict, but what they might lead to. Specifically, the problem is that those who engage in such virtual acts will become more inclined to engage in the real versions of those acts. This isn’t a purely hypothetical concern: worries about virtual environments providing training grounds for paedophiles have arisen in the past.

The problem is that this premise by itself is too general. The key move is to claim that the virtual performance of an immoral act should be prohibited if it significantly raises the likelihood of someone engaging in the real version of the act. The argument then follows:


  • (1) If P’s virtual performance of an immoral act type X significantly raises the probability of P’s performing an actual version of act type X, then virtual performances of X ought to be prohibited.
  • (2) Virtual paedophilia significantly raises the probability of actual paedophilia; virtual murder does not significantly raise the probability of actual murder.
  • (3) Therefore, virtual paedophilia ought to be prohibited, but virtual murder should not.


There may be some reason to question the motivating principle here, but we’ll ignore that possibility for now. Luck suggests that the problems with the argument lie elsewhere.

Start with premise (2). Are we really so sure that virtual paedophilia does significantly raise the probability of actual paedophilia? Luck notes that the evidence is not particularly strong. Unfortunately, Luck only cites an article by the philosopher Neil Levy from 2002 on this. But I did some minimal online searching and found this article from Carla Reeves which offered a more comprehensive summary of the evidence. And her article reaches a similar conclusion: the evidence for a causal link is not robust. Couple this with the problem that there is also evidence (perhaps equally inconclusive) for a causal link between virtual violence and real-world violence and we have reasons for doubting whether the argument can resolve the dilemma.

Furthermore, Luck notes a potential counterargument. Suppose it was found that virtual performances of an immoral act actually reduced the likelihood of real performances. Would we then have to conclude that virtual performances were permissible? This is a serious objection since there is, arguably, some plausibility to the claim that the virtual performance provides an “outlet” for immoral desires.


3. The Argument from Moral Character
Another argument against the dilemma focuses not so much on the nature of the acts themselves, but on what they do to the individual performing them. Specifically, what they might do to his/her moral character over time. We can call this the argument from moral character. It claims that the important distinction between virtual murder and virtual paedophilia is that the former can have a positive (or neutral), virtue-building, effect, whereas the latter cannot. Indeed, the opposite would seem to be true: the person who performs virtual paedophilia will exhibit vicious, improper, and immoral character traits.


  • (4) If the virtual performance of an act of type X builds (or does not harm) moral character, then it is permissible; if it harms moral character, then it is not.
  • (5) The virtual performance of murder can have a positive or neutral effect on moral character; the virtual performance of paedophilic acts cannot.
  • (6) Therefore, virtual paedophilia is impermissible and virtual murder is permissible.


What are we to make of this? Well, let’s set to one side the motivating moral principle. It seems reasonable, though I have to say that it may mix questions of right/wrong with questions of good/bad in an improper manner. This, however, is more my problem than Luck’s, since nowhere in the article does he actually specify the moral principles underlying the respective arguments. I’ve had to speculate to make sense of them, but in this instance my speculation may be misleading.

The more important issue is with premise (5). You are probably already thinking: but surely virtual murder also develops an immoral character? There is, however, a response to this. It could be that the killing that takes place in video games is merely a necessary instrument for building some positive (or neutral) traits like competitiveness. Consider an analogy: to win at chess you must virtually “kill” your opponent’s pieces, but doing so doesn’t mean that you enjoy the “killing”. It is just part of the competitive, strategy-building quality of the game. The same could be true of the killing that takes place in video games.

There are two objections to this response. First, it means that instances of virtual paedophilia that are part of the competitive infrastructure of the gameworld would be permissible. Luck gives a hypothetical example of a game that requires you to steal the Crown Jewels from the Tower of London but forces you to seduce a Beefeater’s young daughter along the way in order to make progress. Second, many virtual murders are not part of the competitive infrastructure of the gameworld. The example of Grand Theft Auto springs to mind again. In that game, people wantonly kill pedestrians and other bystanders, without this advancing their cause within the game. Why aren’t such acts impermissible?


4. The Argument from Unfair Targeting
Now we get into some murky territory. While the preceding arguments tried to focus on the virtual acts and their consequences, the next two arguments don’t. Instead, they focus on differences between the real-world acts (particularly their effect on real-world people) and then try to transpose those differences back onto the virtual versions. Basically, they both claim that there is something especially wrong with child molestation, and that this makes virtual depictions of that act worse than virtual depictions of murder.

This argumentative strategy seems slightly odd to me, but since Luck discusses it I have to do the same. My main beef is that this type of argument looks pretty weak: how can the real-world differences be transferred back onto the virtual cases in this manner? One of the distinguishing marks of the virtual world is that it doesn’t involve real people who exemplify the properties that make the moral difference in the real world. That’s not to say that the virtual world could never exemplify those properties — with sophisticated AI it might someday do so — but at the moment it can’t, and that seems significant.

But leave that to one side. Let’s focus on the first of these two arguments, the argument from unfair targeting. This argument claims that one thing that makes child molestation especially wrong (vis-a-vis murder) is that it singles out a particular segment of the population for harmful treatment. It then claims that virtual depictions of unfair targeting are sufficiently serious to warrant prohibition. Luck says this has some intuitive support. Imagine a video game that allowed you to play as the Nazis and to plan and implement the virtual extermination of the Jews. A game like this would surely face staunch moral opposition. That gives us the following:


  • (7) The virtual depiction of immoral acts that specifically and unfairly target a segment of the population ought to be prohibited.
  • (8) Virtual paedophilia involves such unfair targeting but virtual murder does not.
  • (9) Therefore, virtual paedophilia should be prohibited, but virtual murder should not be.


There are three problems with this. First, as hinted at above, it doesn’t actually distinguish virtual paedophilia from virtual murder all that well: games involving the unfair targeting of murder victims would also have to be banned (and it’s hard to see why pedestrian massacres in GTA would not count as unfair targeting). Second, it’s not clear that the unfair targeting principle is all that compelling. Luck asks us to compare the slaughter of twelve adult humans with the molestation of twelve children. It’s not clear that the latter is really that much worse than the former (if, indeed, it is worse). Third, the argument would seem to imply that a game involving indiscriminate molestation (i.e. molestation of all people, regardless of age) would be acceptable. Surely that cannot be?


5. The Argument from Special Status
All of which brings us to the final argument. This argument claims that children exemplify certain key properties (intrinsic to the concept of childhood) that make immoral acts against them especially wrong and hence virtual depictions of those immoral acts especially worthy of prohibition. The properties in question are their innocence, defencelessness, immaturity and so on. All properties that are, rightly, thought to make children worthy of special moral concern.

That leads us to the following argument:


  • (10) Ceteris paribus, the virtual depiction of acts that target children is worse than the virtual depiction of acts that do not; hence those virtual depictions are especially worthy of prohibition.
  • (11) Virtual paedophilia targets children; virtual murder does not.
  • (12) Therefore, ceteris paribus, virtual paedophilia is especially worthy of prohibition.


The argument as it stands is in need of some repair. Obviously, acts of virtual child murder will be ruled out by the motivating principle as well. So the only kinds of virtual murder that would be permissible under this argument would be virtual acts of adult murder. That repair will probably seem like a small price to pay for those keen on defending the permissibility of some forms of virtual murder.

With the repair in place, does the argument as a whole succeed? There is definitely something to it: children are worthy of special moral respect. But the ceteris paribus clause (spotted by the keen-eyed among you) is all important. Luck argues that while it is no doubt true, all else being equal, that immoral acts against children are worse than immoral acts against adults; all else may not be equal in this case. The quality of the immoral acts themselves is a relevant factor. Thus, while it may be true that child molestation is worse than adult molestation; and that child murder is worse than adult murder; it does not follow that child molestation is worse than adult murder.

For that conclusion to follow, it would need to be shown that molestation is worse than murder. And that is not at all obvious.


6. Conclusion
It is important not to misinterpret the implications of the preceding discussion. The argument here is not that virtual paedophilia is permissible. Far from it. The argument is simply that there are no compelling moral distinctions between virtual paedophilia and virtual murder. Consequently, if one is impermissible, so too must the other be. That is the gamer’s dilemma.

That brings us to the end of Luck’s initial presentation of the dilemma.* In future entries, we’ll consider some responses.


* Luck also discusses the implications of the dilemma for passive and active forms of media. I’m not going to look at that.

Sunday, January 19, 2014

Big Data, Predictive Algorithms and the Virtues of Transparency (Part Two)


(Part One)

This is the second part in a short series of posts on predictive algorithms and the virtues of transparency. The series is working off some ideas in Tal Zarsky’s article “Transparent Predictions”. The series is written against the backdrop of the increasingly widespread use of data-mining and predictive algorithms and the concerns this has raised.

Transparency is one alleged “solution” to these concerns. But why is transparency deemed to be virtuous in this context? Zarsky’s article suggests four possible rationales for transparency. This series is reviewing all four. Part one reviewed the first, according to which transparency was virtuous because it helped to promote fair and unbiased policy-making. This part will review the remaining three.

To fully understand the discussion, one important idea from part one must be kept in mind. As you’ll recall, in part one it was suggested that the predictive process — i.e. the process whereby data is mined to generate some kind of prediction — can be divided into three stages: (i) a collection stage, in which data points/sets are collated and warehoused; (ii) an analytical stage, in which the data are mined and used to generate a prediction; and (iii) a usage stage, in which the prediction generated is put to some practical use. Transparency could be relevant to all three stages or only one, depending on the rationale we adopt. This is something emphasised below.
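To make that three-stage division a little more concrete, here is a minimal sketch in Python (my own, purely illustrative; the function names, data fields and threshold are hypothetical placeholders, not anything drawn from Zarsky’s article):

```python
# Illustrative sketch of the three stages of a predictive process.
# All names, fields and thresholds are hypothetical placeholders.

def collect(raw_records):
    """Collection stage: collate and warehouse the available data points."""
    return [record for record in raw_records if record is not None]

def analyse(dataset):
    """Analytical stage: mine the data and generate a prediction (here, a crude risk score)."""
    return {record["id"]: sum(record["flags"]) for record in dataset}

def use(scores, threshold=2):
    """Usage stage: put the prediction to practical use (e.g. select cases for scrutiny)."""
    return [case_id for case_id, score in scores.items() if score >= threshold]

records = [
    {"id": "A", "flags": [1, 1, 1]},
    {"id": "B", "flags": [0, 1]},
    None,  # a missing record, dropped at the collection stage
]
print(use(analyse(collect(records))))  # -> ['A']
```

On the rationales discussed below, transparency can attach to any one of these three functions, or to all of them.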


1. Transparency as a means of promoting innovation and crowdsourcing
The use of predictive algorithms is usually motivated by some goal or objective. To go back to the example from part one, the IRS uses predictive algorithms in order to better identify potential tax cheats. The NSA (or secret service agencies more generally) do something similar in order to better identify threats to national security. Although secrecy typically reigns supreme in such organisations, there is an argument to be made that greater transparency — provided it is of the right kind — could actually help them to achieve their goals.

This, at least, is the claim lying behind the second rationale for transparency. Those who are familiar with the literature on epistemic theories of democracy will be familiar with the basic idea. The problem with small, closed groups of people making decisions is that they must rely on a limited pool of expertise, afflicted by the biases and cognitive shortcomings of their members. Drawing from a larger pool of expertise, and a more diverse set of perspectives, can often improve decision-making. The wisdom of crowds and all that jazz.

Transparency is something that can facilitate this. In internet-speak, what happens is called “crowdsourcing”: a company or institution obtains necessary ideas, goods and services from a broad, undefined, group of online contributors. These contributors are better able to provide the ideas, goods and services than “in-house” employees. Or so the idea goes. One can imagine this happening in relation to predictive algorithms as well. An agency has a problem: it needs to identify potential tax cheats. It posts details about the algorithm they plan to use to an online community; solicits feedback and expertise from this community; and thereby improves the accuracy of the algorithm.

That gives us the following argument for transparency:


  • (4) We ought to ensure that our predictive protocols and policies are accurate (i.e. capable of achieving their underlying objectives).
  • (5) Transparency can facilitate this through the mechanism of crowdsourcing.
  • (6) Therefore, we ought to incorporate transparency into our predictive processes and policies.


Three things need to be said about this argument. First, the stage at which transparency is most conducive to crowdsourcing is the analytical stage. In other words, details about the actual mechanics of the data-mining process are what need to be shared in order to take advantage of the crowdsourcing effect. This is interesting because the previous rationale for transparency was less concerned with this stage of the process.

Second, this argument assumes that there is a sufficient pool of expertise outside the initiating agency or corporation. Is this assumption warranted? Presumably it is. Presumably, there are many technical experts who do not work for governmental agencies (or corporations) who could nevertheless help to improve the accuracy of the predictions.

Third, this argument is vulnerable to an obvious counterargument. Sharing secrets about the predictive process might be valuable if everybody shares the underlying goal of the algorithm. But this isn’t always going to be true. When it isn’t, transparency runs the risk of facilitating sabotage. For instance, many people don’t want to pay their taxes. It is entirely plausible to think that some such people would have the technical expertise needed to try to sabotage a predictive algorithm (though, of course, others who support the goal may be able to detect and resolve the sabotage). It is also possible that people who don’t share the goals of the algorithm will use shared information to “game the system”, i.e. avoid the scrutiny of the programme. This might be a good thing or a bad thing. It all depends on whether the objective or goal of the algorithm is itself virtuous. If the goal is virtuous, then we should presumably try to minimise the opportunity for sabotage.

Zarsky suggests that a limited form of transparency, to a trusted pool of experts, could address this problem. This seems pretty banal to me. Indeed, it is already being done, and not always with great success. After all, wasn’t Edward Snowden a (trusted/vetted?) contractor with the NSA? (Just to be clear: I’m not saying that what Snowden did was wrong or undesirable; I’m just saying that, relative to the argument being made by Zarsky, he is an interesting case study).


2. Transparency as a means of promoting privacy
Transparency and privacy are, from one perspective, antagonistic: one’s privacy cannot be protected if one’s personal information is widely shared with others. But from another perspective, privacy can provide an argument in favour of transparency. Predictive algorithms rely on the collection and analysis of personal information. Privacy rights demand that people have some level of control over their personal information. So, arguably, people have a right to know when their data is being used against them, in order to facilitate their control.

We can call this the “notice argument” for transparency:


  • (7) We ought to protect the privacy rights of those affected by our predictive policies and protocols.
  • (8) Transparency helps us to do this by putting people on “notice” as to when their personal data is being mined.
  • (9) Therefore, we ought to incorporate transparency into our predictive policies and protocols.


The kind of transparency envisaged here may, at first glance, seem to be pretty restrictive: only those actually affected by the algorithm have the right to know. But as Zarsky points out, since these kinds of algorithms can affect pretty much anyone, and since the same generic kinds of information are being collected, it could end up mandating a very broad type of transparency.

That would be fine if the argument weren’t fatally flawed. The problem is that premise (8) is false. There is no reason to think that putting people on notice as to when their personal data is being mined will help to protect their privacy rights. Notice without control (i.e. without the right to say “stop using my information”) would not protect privacy. That’s not to say that notice is devoid of value in this context. Notice could be very valuable if people are granted the relevant control. But this means that transparency is, at best, part of the solution, not the solution in and of itself.

In any event, although the control model of privacy is popular in some jurisdictions (Zarsky mentions the EU in particular), there are reasons for thinking that it is becoming less significant. As Zarsky notes, it seems to be losing ground in light of technological and social changes: people are increasingly willing to cede control over personal data to third parties.

Of course, this might just be because they don’t realise or fully appreciate the ramifications of doing so. Perhaps then there needs to be a robust set of privacy rights to protect people from the undue harm they may be doing themselves. Zarsky worries that this is too paternalistic: if people want to cede control, who are we to say that we know better? Furthermore, he thinks there is another rationale for transparency that can address the kinds of concerns implicit in this response. That is the last of our four rationales.


3. Transparency as a means of respecting autonomy
The last rationale has to do with respecting individual autonomy. It turns on certain moral principles that are fundamental to liberal political theory, and which I explored in more detail in my earlier post on the threat of algocracy, and in my article on democratic legitimacy. Zarsky, whose foundational principles seem more rooted in the US Constitution, expresses this rationale in terms of due process principles and the Fourth Amendment. I’ll stick with my own preferred vocabulary. The end point is pretty much the same.

Anyway, the idea underlying this rationale is that if predictive algorithms are going to have a tangible effect on an individual’s life, then that individual has a right to know why. The reasoning here is general: any procedure that results in a coercive measure being brought to bear on a particular individual needs to be legitimate; legitimacy depends on the instrumental and/or intrinsic properties of the procedure (e.g. does it result in substantively just outcomes? does it allow the affected party to participate? etc.); one of the key intrinsic properties of a legitimate procedure is its comprehensibility (i.e. does it explain itself to the individual affected); transparency, it is argued, facilitates comprehensibility.

To put this in more formal terms:


  • (10) We ought to ensure that our predictive protocols and policies are legitimate.
  • (11) One of the essential ingredients of legitimacy is comprehensibility.
  • (12) Transparency facilitates comprehensibility.
  • (13) Therefore, we ought to incorporate transparency into our predictive protocols and policies.


It is possible to argue against the normative principles in this argument, particularly the claim that comprehensibility is essential to legitimacy. But we’ll set those criticisms to the side for now. The main focus is on premise (12). As it happens, I have already written a post which pinpointed lack of comprehensibility as a key concern with the increasing use of predictive algorithms.

Zarsky seems less concerned in his discussion. But he does highlight the fact that disclosure at the collection stage may not necessarily facilitate comprehensibility. Knowing which of your personal data has been selected for inclusion in the relevant dataset might give you some idea about how the process works, but it won’t necessarily tell you why you were targeted by the algorithms. Disclosure at the analytical and usage stages will be needed for this. And the complexity of the underlying technology may be a real problem here.

Zarsky responds by claiming that interpretable analytical processes are needed so that “individuals can obtain sufficient insight into the process and how it relates to their lives”. But I suspect there are serious problems with this suggestion. Although interpretable processes might be desirable, the competing pressures that drive people toward more complex, less comprehensible (but hopefully more accurate) processes may be too great to make a demand for interpretability politically and socially feasible.

Zarsky also argues that the kind of transparency required by this rationale may not just be limited to those affected by the process. On the contrary, he suggests that broad disclosure, to the general population, may be desirable. This is because a more fulsome form of disclosure can combat the risks associated with partial and biased disclosures. It is important to combat those risks because, once information is disclosed to one individual, it is likely that it will leak out into the public sphere anyway.

That said, full disclosure could pose additional risks for the individuals or groups who are targeted by the algorithms, perhaps opening them up to social discrimination and stigmatisation. This highlights the potential vices of transparency, three of which are discussed in the final sections of Zarsky’s article. With luck, I’ll talk about them at a later date.


4. Conclusion
To sum up, transparency is often uncritically accepted as a virtue of the internet age. The last two posts have asked “why?”. Tying the discussion specifically to the role of transparency in the use of predictive algorithms, they have explored four different rationales for transparency. The first claiming that transparency could facilitate just and unbiased decision-making; the second claiming that it could facilitate crowdsourcing and innovation; the third claiming that it could help protect privacy rights; and the fourth claiming that it could help respect autonomy.

Although I have drawn attention to some criticisms of these rationales, a more detailed appraisal of the vices of transparency is required before we can make a decision in its favour. That, sadly, is a task for another day.

Book Recommendations #13: Moral Tribes by Joshua Greene

(Series Index)

This book doesn’t really need my recommendation; it has been widely promoted elsewhere. And, in fact, my recommendation is somewhat half-hearted: I’m not convinced that the overall argument is particularly novel or compelling. Nevertheless, if you’re interested in moral psychology and the implications it might have for moral philosophy more generally, then this is as good as any a place to start.

The book presents Joshua Greene’s attempt to solve the “tragedy of commonsense morality”. Greene is one of the doyens of contemporary experimental philosophy, and a pioneer in the field of fMRI studies of moral decision-making. The tragedy in question is of his own naming and is introduced in the prologue to the book. It concerns what goes wrong when different socio-cultural groups, each with its own distinctive moral code, based on shared innate moral machinery, interact with one another.

The book is divided into five parts. The first part looks at the evolution of human morality. Starting out with the classic tragedy of the commons (distinct from Greene’s own tragedy of commonsense morality), it reviews evidence relating to the various mechanisms humans have adopted to solve the tragedy. The second part builds upon this by looking more specifically at the neuro-psychology of moral decision-making. It is here that Greene introduces us to the well-known field of trolleyology, and reviews some of the studies he and others have done on the “dual-track” system of moral decision-making (an idea that will be familiar to anyone who has read Kahneman’s Thinking, Fast and Slow). The third part then switches focus from the facts of moral behaviour to the more traditional plane of normative moral philosophy. It proposes utilitarianism as the best candidate solution to the tragedy of commonsense morality. The fourth part then looks at a variety of challenges to utilitarianism. And the fifth part wraps things up by trying to apply Greene’s preferred brand of utilitarianism to some practical problems.

Unsurprisingly, I’m not convinced that Greene has presented a radical and compelling moral theory, nor that he has managed to solve the problems of utilitarianism. Nonetheless, I do recommend the book for two reasons. First, it provides a useful compendium of the empirical research that has been done on moral reasoning over the past twenty or so years: it is nice to have all the major research findings listed in one place. Second, chapter nine offers a really interesting analysis of trolley problems and what they say about the plausibility of utilitarianism. Indeed, as far as I’m concerned, chapter nine is worth the price of admission alone. But maybe that’s just me.

Saturday, January 18, 2014

Twitter n' stuff



I have had a twitter account for nearly two years now. Until recently, I never really posted on there, except to tweet links to my own blog posts. In the spirit of trying something new, and in an effort to garner more followers (what can I say? I am a budding social media megalomaniac), I've decided to change all that.

I am now going to be regularly tweeting links to all the interesting things I read in philosophy, science and law. I will, of course, continue to tweet updates for this blog, and I'd be happy to engage with people on twitter if they are so inclined. So if you're not already following me on twitter, could I suggest that you start now? The link is below and in the sidebar (while you're at it you might also consider following my other social media accounts: facebook, academia, google plus and so on).

@JohnDanaher


Big Data, Predictive Algorithms and the Virtues of Transparency (Part One)



Transparency is a much-touted virtue of the internet age. Slogans such as the “democratisation of information” and “information wants to be free” trip lightly off the tongue of many commentators; classic quotes, like Brandeis’s “sunlight is the best disinfectant”, are trotted out with predictable regularity. But why exactly is transparency virtuous? Should we aim for transparency in all endeavours? Over the next two posts, I look at four possible answers to those questions.

The immediate context for this is the trend toward “big data” projects, and specifically the trend toward the use of predictive algorithms by governmental agencies and corporations. Recent years have seen such institutions mine large swathes of personal data in an attempt to predict the future behaviour of citizens and customers. For example, in the U.S. (and elsewhere) the revenue service (IRS) uses data-mining algorithms to select individuals for potential audits. These algorithms work on the basis that certain traits and behaviours make it more likely that an individual is understating income on a tax return.

There are laws in place covering the use of such data, but I’m not interested in those here; I’m interested in the moral and political question as to whether such use should be “transparent”. In other words, should an institution like the IRS be forced to disclose how their predictive algorithms work? To answer that question, I’m going to enlist the help of Tal Zarsky’s recent article “Transparent Predictions”, which appeared in last year’s University of Illinois Law Review.

Zarsky’s article is a bit of a sprawling mess (typical, I’m afraid, of many law journal articles). It covers the legal, political and moral background on this topic in a manner that is not always as analytically sharp as it could be. Nevertheless, there are some useful ideas buried within the article, and I will draw heavily upon them here.

The remainder of this post is divided into two sections. The first looks at the nature of the predictive process and follows Zarsky in separating it out into three distinct stages. Each of those stages could raise different transparency issues. The second section then looks at the first of four possible justifications for transparency.

Note: the title of this post refers solely to the “virtues” of transparency. This is no accident: the series will deal specifically with the alleged virtues. There are, of course, alleged vices. I’ll address them at a later date.


1. Transparency and the Predictive Process
Take a predictive algorithm like the one used by the IRS when targeting potential tax cheats. Such an algorithm will work by collecting various data points in a person’s financial transactions, analysing those data and then generating a prediction as to the likelihood that the individual in question is guilty of tax fraud. This prediction will then be used by human agents to select cases for auditing. The subsequent auditing process will determine whether or not the algorithm was correct in its predictions.

The process here divides into three distinct stages, stages that will be shared by all data-mining and predictive programmes (such as those employed by other government agencies and corporations); a rough code sketch of the pipeline follows the list below. The stages are:

Collection Stage: Data points/sets are collected, cleansed and warehoused. Decisions must be made as to which data points are relevant and will be collected.
Analytical Stage: The collected data is “mined”, analysed for rules and associations and then processed in order to generate some prediction.
Usage Stage: The prediction that has been generated is used to guide particular decisions. Strategies and protocols are developed for the effective use of the predictions.
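
To make the three-stage structure a little more concrete, here is a minimal, purely illustrative sketch in Python. The record fields, the hand-written risk rules and the audit threshold are all invented for the purposes of illustration; nothing here reflects how the IRS (or any other agency) actually builds its algorithms.

    from dataclasses import dataclass

    @dataclass
    class TaxRecord:
        taxpayer_id: str
        reported_income: float
        cash_deposits: float
        deductions_claimed: float

    # Collection stage: decide which data points are relevant and gather them.
    def collect(raw_records):
        return [r for r in raw_records if r.reported_income is not None]

    # Analytical stage: "mine" the collected data to generate a prediction.
    # Here the "model" is a pair of hand-written rules; a real system would
    # learn its rules and weights from historical data.
    def analyse(record):
        risk = 0.0
        if record.cash_deposits > 2 * record.reported_income:
            risk += 0.5
        if record.deductions_claimed > 0.4 * record.reported_income:
            risk += 0.3
        return min(risk, 1.0)

    # Usage stage: a protocol for turning predictions into decisions.
    def use(records, audit_threshold=0.6):
        return [r.taxpayer_id for r in records if analyse(r) >= audit_threshold]

    records = collect([
        TaxRecord("A", 40000, 95000, 20000),
        TaxRecord("B", 60000, 20000, 10000),
    ])
    print(use(records))  # ['A'] under these toy rules

The point of the sketch is simply that a different kind of choice is made at each stage (which fields to collect, how to turn them into a risk score, and what score triggers an audit), and each choice is a potential site of transparency or opacity.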

For the time being, human agency is relevant to all three stages: humans programme the algorithms, deciding which data points/sets are to be used and how they are to be analysed, and humans make decisions about how the algorithmic predictions are to be leveraged. It is possible that, as technology develops, human agency will become less prominent in all three stages.

Transparency is also a factor at all three stages. Before we consider the specifics, we must consider some general issues. The most important of these is the question of to whom the process must be transparent. It could be the population as a whole, or some specific subset thereof. In general, the wider the scope of transparency, the more truly “transparent” the process is. Nevertheless, sometimes the twin objectives of transparency and privacy (or some other important goal) dictate that a more partial or selective form of transparency is desirable.

So how might transparency arise at each stage? At the first stage, transparency would seem to demand the disclosure of the data points or sets that are going to be used in the process. Thus, potential “victims” of the IRS might be told which of their financial details are going to be collected and, ultimately, analysed. At the second stage, transparency would seem to demand some disclosure of how the analytical process works. The analytical stage is quite technical. One form of transparency would be to simply release the source code to the general public in the hope that the interested few would be able to figure it out.

Nevertheless, as Zarsky is keen to point out, there are policy decisions to be made at this stage about how “opaque” the technical process really is. It is open to the programmers to develop an algorithm that is “interpretable” by the general public. In other words, it is open to them to build a programme with a coherent underlying theory of causation that can be communicated to, and understood by, any who care to listen. Finally, at the third stage, transparency would seem to require some disclosure of how the prediction generated by the algorithm is used and, perhaps more importantly, how accurate the prediction really is (how many false positives did it generate? etc.).

But all this is to talk about transparency in essentially descriptive and value-free terms. The more important question is: why bother? Why bother making the process transparent? What moral ends does it serve? Zarsky singles out four rationales for transparency in his discussion. They are: (i) to promote efficiency and fairness; (ii) to promote more innovation and crowdsourcing; (iii) to protect privacy; and (iv) to promote autonomy.

I’m not a huge fan of this quadripartite conceptual framework. For example, I don’t think Zarsky does nearly enough to delineate the differences between the first and second rationales. Since Zarsky doesn’t talk about procedural fairness as distinct from substantive fairness, nor explain why innovation and crowdsourcing are valuable, it seems to me like both rationales could collapse into one another. Both could just be talking about promoting a (non-specified) morally superior outcome through transparency.

Still, there probably is some distinction to be drawn. It is likely that the first rationale is concerned with substantive fairness and the minimisation of bias/discrimination; that the second rationale is concerned with enhancing overall societal levels of well-being (through innovation); and that the third and fourth rationales are about other specific moral ends, privacy and autonomy, respectively. Thus, I think it is possible to rescue the conceptual framework suggested by Zarsky from its somewhat imprecise foundations. That said, I think that more work would need to be done on this.


2. Transparency as a Means of Promoting Fairness
But let’s start with the first rationale, imprecise as it may be. This is the one claiming that transparency will promote fairness. The concern underlying this rationale is that opaque data-mining systems may contain implicit or explicit biases, ones which may unfairly discriminate against particular segments of the population. For example, the data points that are fed into the algorithm may be unfairly skewed towards a particular section of the population because of biases among those who engineer the programme. The claim made by this rationale is that transparency will help to stamp out this type of discrimination.

For this claim to be compelling, some specific mechanism linking transparency to the minimisation of bias (and the promotion of fairness) must be spelled out. Zarsky does this by appealing to the notion of accountability. He suggests that one of the virtues of transparency is that it forces public officials, bureaucrats and policy makers to take responsibility for the predictive algorithms they create and endorse. And how exactly does that work? Zarsky uses work done by Lessig to suggest that there are two distinct mechanisms at play: (a) the shaming mechanism; and (b) market and democratic forces. The first mechanism keeps the algorithms honest because those involved in their creation will want to avoid feeling ashamed for what they have created; and the second mechanism keeps things honest by ensuring that those who fail to promote fairness will be “punished” by the market or by the democratic vote.

To encapsulate all of this in a syllogism, we can say that proponents of the first rationale for transparency adopt the following argument:


  • (1) The predictive policies and protocols we adopt ought to promote fairness and minimise bias.
  • (2) Transparency promotes fairness and minimises bias through (a) shaming and (b) market and democratic forces.
  • (3) Therefore, we ought to incorporate transparency into our predictive policies and protocols.


At the moment, this argument is crucially vague. It fails to specify the extent of the transparency envisaged by the rationale, and it fails to specify at which stage of the predictive process transparency may become relevant. Until we add in these specifications, we will be unable to determine the plausibility of the argument. One major reason for this is that the argument, when left in its original form, seems to rest on a questionable assumption, viz. that a sufficient number of the population will take an interest in shaming and disciplining those responsible for implementing the predictive programme. Is this really true? We’ll only be able to tell if we take each stage of the predictive process in turn.

We start with stage one, the data collection stage. It seems safe to say that those whose behaviour is being analysed by the algorithm will take some interest in which bits of their personal data are being “mined” for predictive insights. Transparency at this stage of the process could take advantage of this natural interest and thereby be harnessed to promote fairness. This would seem to be particularly true if the predictions issued by the algorithm will have deleterious personal consequences: people want to avoid IRS audits, so they are probably going to want to know what kinds of data the IRS mines in order to single people out for auditing.

It is a little more uncertain whether transparency will have similar virtues if the predictions being utilised have positive or neutral consequences. Arguably, many people don’t pay attention to the data-mining exploits of corporations because the end result of such exploits seems to entail little personal loss (maybe some nuisance advertising, but little more) and some potential gain. This nonchalance could be a recipe for disaster.

Moving on then to stage two, the analytical stage. It is much more doubtful whether transparency will facilitate shaming and disciplining here. After all, the majority of people will not have the technical expertise needed to evaluate the algorithm and hold those responsible to account. Furthermore, if it is analysts and programmers who are responsible for many of the details of the algorithm, there may be a problem. Such individuals may be insulated from social shaming and discipline in ways that policy makers and politicians are not. For transparency to be effective at this stage, a sufficient number of technically adept persons would need to take an interest in the details and have some way to shame and discipline those responsible, perhaps through some “trickle down” mechanism (i.e. shaming and disciplining of public officials trickles down to those with the technical know-how).

Finally, we must consider stage three. It seems superficially plausible to say that people will take an interest in how the predictions generated by the algorithms are used and how accurate they are. Again, this would seem to be particularly true if the end result is costly to those singled out by the algorithm. That said, Zarsky suggests that the usage protocols might involve technical terms that are too subtle to generate shame and discipline.

There is also one big risk when it comes to using transparency to promote fairness: populism. There is a danger that the people who are held to account will be beholden to popular prejudices and be overly conservative in the policies they adopt. This may actually prevent the development of truly fair and unbiased predictive algorithms.

So that brings us to the end of this first alleged virtue of transparency. In part two, we will consider the remaining three.

Tuesday, January 14, 2014

Does Criminal Law Deter? (Part Two)



(Part One)

This is the second in a two-part series looking at Robinson and Darley’s article “Does Criminal Law Deter? A Behavioural Science Investigation”. It is commonly thought that changes to the substantive criminal law (i.e. the rules about what counts as a crime and how much punishment should be attached to the commission of a crime) can have a deterrent effect. At least, many policy debates and legislative changes proceed on that assumption. But how plausible is it really?

Robinson and Darley argue that it isn’t. While they accept that a criminal justice system, with conspicuous mechanisms of enforcement, can have a deterrent effect, they argue that it is unlikely that the substantive criminal law adds to that effect. And, certainly, the effects are nowhere near as fine-grained or precise as policy debates would typically have us presume.

As reviewed in part one, their argument for this conclusion is fairly simple. They claim that in order for the substantive criminal law to have a deterrent effect, three conditions must be met: (i) the knowledge condition, according to which potential criminals must know what the law demands; (ii) the immediacy condition, according to which potential criminals must be able to bring that knowledge to bear on their crime-relevant decisions; and (iii) the weighting condition, according to which the perceived costs of crime must outweigh the perceived benefits.

Robinson and Darley think it unlikely that all three of these conditions can be met. This means their argument looks something like this:


  • (1) In order for the substantive rules of criminal law to have a deterrent effect, three conditions must be met: (i) the knowledge condition; (ii) the immediacy condition; and (iii) the weighting condition.
  • (2) It is highly unlikely that the knowledge condition is met.
  • (3) It is highly unlikely that the immediacy condition is met.
  • (4) It is highly unlikely that the weighting condition is met.
  • (5) Therefore, it is highly unlikely that the substantive rules of criminal law have a deterrent effect.


We reviewed the case for premises (2) and (3) in part one. Today we’ll look at the case for premise (4).


1. The Rational Choice Model and the Weighting Condition
The deterrence assumption tends to work with a rationalistic approach to criminal behaviour. That is to say, it tends to presume that criminals make decisions about whether or not to commit a crime based on an analysis of the costs/benefits of the crime. We’ve already looked at ways in which this model of criminal behaviour could be flawed: if the knowledge and immediacy conditions are not met, then potential criminals won’t be able to engage in the kind of cost/benefit analysis required by the deterrence assumption.

Still, there is a distinction to be made between two kinds of rational choice model of criminal behaviour. The first focuses on the actual costs and benefits of the crime; the second focuses on the perceived costs and benefits. Any model focusing on the former is likely to be well off the mark, since potential offenders may not be able to appreciate the actual costs and benefits. But a model based on the latter could get off the ground, since the perceived costs and benefits are what (presumably) guide any particular decision.

Robinson and Darley’s discussion of the weighting condition focuses on the perceived costs and benefits. They hold that the following inequality is needed if the deterrence assumption is to prove correct:

Perceived Cost of Crime > Perceived Benefit of Crime

Furthermore, and following a suggestion originally put forward by Jeremy Bentham, they argue that the perceived cost will depend on three variables:

Perceived Cost = [Probability of Punishment] × [Delay Discount Factor] × [Total Amount of Punishment]

In other words, perceived cost is a function of the total amount of punishment, times the probability of being punished, discounted by some delay factor.
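
As a rough illustration of how these three variables might interact, here is a minimal sketch. The numbers, the hyperbolic form of the delay discount and the function names are my own assumptions for the sake of the example, not Robinson and Darley’s model.

    # Illustrative only: the discounting form and all parameter values are
    # assumptions for the sake of the example, not Robinson and Darley's model.
    def perceived_cost(p_punishment, punishment_severity, delay_years, k=0.5):
        # Delay discount: the further away the punishment, the less it weighs.
        delay_discount = 1.0 / (1.0 + k * delay_years)
        return p_punishment * delay_discount * punishment_severity

    def deterred(p_punishment, punishment_severity, delay_years, perceived_benefit):
        return perceived_cost(p_punishment, punishment_severity, delay_years) > perceived_benefit

    # Even a severe punishment (100 units) weighs little once a low punishment
    # probability and a two-year delay are factored in.
    print(perceived_cost(0.05, 100, 2))   # 0.05 * 0.5 * 100, i.e. roughly 2.5
    print(deterred(0.05, 100, 2, 10))     # False: a perceived benefit of 10 wins out

On this kind of calculation, the deterrence assumption only holds when the product of probability, discount and severity exceeds the perceived benefit, which is precisely the inequality Robinson and Darley go on to question.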

Their claim is that behavioural science has revealed that the effect of each of these variables on perceived cost is far more complex and (sometimes) more counterintuitive than is often believed to be the case. They support this by looking at a range of studies. Of course, as they note at the outset, good controlled studies on the effects of each variable on perceived cost are hard to come by: researchers are not permitted to punish research subjects as severely as we punish criminals in the real world. But it is possible to draw some conclusions based on animal studies, experiments involving punishments of moderate intensity (that subjects have consented to) and “natural” experiments.

So let’s look at the evidence.


2. Evidence Relating to the Probability of Punishment
When it comes to assessing the impact of punishment probabilities on behaviour, Robinson and Darley turn to conditioning studies. These are the classic animal-in-a-box experiments, which were so heavily associated with the behaviourist movement in psychology. The typical set-up in such an experiment is that an animal is trained to perform (or is already predisposed to perform) some kind of action (e.g. pressing a button), and this action is then linked to some punishment (e.g. a mild electric shock). In this experimental set-up, it is easy to vary the rate of punishment and see what effect this has on discouraging the action.

Such studies as have been done on this suggest that once the probability of punishment drops below a certain threshold, it has practically no deterrent effect. For example, in classic studies by Azrin, Holz and Hake it was found that a 10% punishment rate failed to suppress behaviour.

One problem with these studies, however, seems to have been that the punishments were not randomised (i.e. the punishments occurred at regular intervals) and, apparently (I have not confirmed this), studies involving randomised rates of punishment are hard to come by. But, reviewing the few that have been done, Lande found that there was less suppression of behaviour as the probability of punishment declined. Furthermore, he found that there were “response bursts” after punishment. In other words, the experimental subjects engaged in more of the relevant behaviour after having been punished. This may be because they indulged in something akin to the gambler’s fallacy, i.e. they assumed they were much less likely to be punished “the next time round”.

You can imagine where all this is going. Robinson and Darley argue that these studies have worrying implications for the criminal law. Citing actual conviction rates (across all crimes) of 1.3% in the U.S. and single digit punishment rates for even the most serious of crimes, Robinson and Darley suggest that the criminal law may have minimal deterrent effect. This is because the actual punishment rate is below the rate at which it will have a deterrent effect. They also cite anecdotal cases in which a criminal committed a crime just after having been punished, which seems to confirm Lande’s finding of “response bursts”.

So Robinson and Darley seem to support the following argument:


  • (6) If the punishment rate drops below a certain threshold (maybe 10%), the criminal law will not have a deterrent effect and may also have perverse effects (e.g. it may lead to “response bursts”).
  • (7) The actual punishment rate is below this threshold.
  • (8) Therefore, the criminal law will not have a deterrent effect.


There are several criticisms one could make of this argument. Robinson and Darley only address the most obvious: what if people overestimate the rate of punishment? What if people think the punishment rate is above the relevant threshold? They respond to this critique by suggesting that many potential criminals will suffer from overconfidence bias, and by pointing to David Anderson’s study (discussed in part one) of actual criminals’ attitudes toward punishment. Anderson found that 76% of active criminals and 89% of the most serious offenders either did not think they would be caught or didn’t even consider the possibility of punishment.

I have some other problems with the argument. First, I think it hangs too much on animal conditioning studies. More human data would have been nice (the studies on actual criminals’ attitudes toward punishment are welcome in this respect). Second, the relevant thresholds are a little too vague for my liking. And third, I think the argument diverges too much from the original thesis of Robinson and Darley’s article. Their original claim was about the deterrent effect of the substantive criminal law. But this argument really has to do with the effectiveness of the criminal justice system as a whole, not the rules of criminal law. This is especially odd given that they originally accepted that the criminal justice system as a whole does have a deterrent effect.


3. Evidence Relating to the Amount of Punishment
One of the assumptions shared by proponents of the deterrence hypothesis is that fine-grained modulation of the total amount of punishment attaching to an offence can make a real difference to how frequently that crime is committed. Thus, for example, increasing the sentence length for theft from five to ten years in prison is thought to make a difference to behaviour. One can see the attraction of this assumption. Simple cost-benefit reasoning tells us that ten years in jail is worse than five years in jail (roughly twice as bad), ergo potential criminals should think twice as hard about committing theft if the sentence is increased to ten years.

As attractive as this reasoning is, there are several problems with it. Again, the all-important factor here is how increases in the duration or amount of punishment are perceived by potential criminals, not what the reality is. Behavioural evidence suggests that these perceptions may be out of line with reality. Robinson and Darley review several strands of evidence in support of this contention.

They start with conditioning studies. The primary conclusion from these studies (they mention a few) is that increases in the severity of punishment do indeed increase deterrence. But there are some interesting nuances. It has been found that animals can adapt to the intensity of punishment. Thus, for example, you could start out giving a pigeon electric shocks below 60 volts and then gradually increase the voltage, even up to 300 volts, and find that the pigeon continued with the behaviour you were trying to deter. This would not be true if you started out punishing at the more intense level.

This doesn’t bode well for the criminal law. It is common for first-time offenders to be given more lenient punishments. Indeed, sometimes their punishments are completely suspended. This may create the conditions in which an “adaptation to intensity” effect can flourish. Furthermore, it’s not just animal studies that support this view. In a famous paper by Brickman and Campbell, it was suggested that human experience of affective changes follows something like a “hedonic treadmill”, in which there is gradual adaptation to a new affective baseline. So, for example, what was initially experienced as intensely pleasurable gradually becomes less so as the experiencer adjusts to the new “baseline” level of pleasure.

Robinson and Darley argue that two distinct types of adaptation could occur in the prison environment: (i) neutralisation; and (ii) hardening. The first type of adaptation arises when the prisoner adjusts their affective baseline to cope with what was initially felt to be a highly punitive state. Thus, the punitive effect of, say, prison is neutralised over time. This means that they will continue to experience positive and negative changes to their affective state, but those changes will be relative to the new (lower) baseline. They cite in support of this claim studies showing that the risk of suicide among prisoners is much higher in the first 24 hours of incarceration. “Hardening” is a different process, whereby the prisoner just becomes more immune to changes in their affective environment. Thus, as they go along, they are less affected by positive and negative events. Both of these types of adaptation could undermine the deterrent effect of punishment, particularly imprisonment, for repeat offenders.

Another problem has to do with the discrepancy between remembered pain and the actual duration of a punitive experience. Research by Daniel Kahneman and his colleagues suggests that remembered pain is not simply a direct function of the duration of pain. Instead, it seems to be a function of the most intense pain experienced and the most recent painful experience. The result is that a short, but intensely unpleasant, punishment is remembered as being much worse than a long, moderately unpleasant, punishment. This lends credence to the deterrent powers of the classic “short, sharp shock” approach to punishment, and detracts somewhat from the contemporary obsession with the duration of prison sentences.
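
To see how this “peak-end” pattern plays out, here is a minimal sketch. Approximating remembered pain as the average of the worst and the final moments is one common reading of Kahneman’s findings, and the pain scores themselves are invented for illustration.

    # Remembered pain approximated as the average of the peak moment and the
    # final moment (one common reading of the "peak-end" findings); the scores
    # are invented for illustration.
    def remembered_pain(pain_per_period):
        peak = max(pain_per_period)
        end = pain_per_period[-1]
        return (peak + end) / 2

    short_sharp = [9, 9]           # brief but intense
    long_moderate = [5] * 20       # ten times longer, but milder throughout

    print(remembered_pain(short_sharp))    # 9.0
    print(remembered_pain(long_moderate))  # 5.0: remembered as less bad,
                                           # despite lasting far longer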

The upshot of all this is that there are diminishing deterrent returns to be had by increasing the duration of punishment, and maybe no returns at all to be had from modulating the severity of the punishment in proportion to the gravity of the offence and the offender (due to adaptation effects). This is not to say that there aren’t sound moral reasons for modulating punishment in this manner (proportionality still feels like it is morally justified); but it is to say that such moral reasons are not easily grounded in deterrence.

One criticism of this argument is that it seems to dwell upon the deterrent effect of punishment on those who are undergoing or have undergone it (“special deterrence”) and not on the deterrent effect on the general population. Robinson and Darley wave this objection off by arguing that many of those who commit crimes are repeat offenders. But, unfortunately, that still doesn’t address the criticism. The general population will not have been privy to the kinds of experiences that lead to adaptation effects, so maybe they could be genuinely deterred by increases in the total amount of punishment? The discussion of discounting (below) may provide some response to this criticism.

Robinson and Darley also go on to argue that the social milieu from which many offenders are drawn can affect how they perceive the total amount of punishment. Many are socialised in an environment in which “doing time” is common among their peers, and the potential costs are consequently downplayed. Furthermore, many come from deprived backgrounds and so the costs of punishment may seem much less to them than to someone from a more affluent background. Again, these factors may combine to undermine the deterrent effect.


4. Evidence Relating to the Delay of Punishment
The final factor that affects the perceived cost of punishment is the delay between carrying out the act that is to be punished and the actual administration of punishment. Robinson and Darley are brief on this point since the behavioural evidence seems to be pretty clearcut: a combination of conditioning studies and cognitive psychology experiments indicates that the greater the delay between the act and the punishment, the lesser the deterrent effect.

Perhaps the most widely popularised model of this holds that humans (and other animals) hyperbolically discount the value of future rewards (and losses) relative to more immediate rewards and losses. In other words, they prefer “smaller sooner” rewards to “larger later” rewards. This is thought to account for addictive behaviours like cigarette smoking. Even though people are often aware of the long-term costs of smoking, and the long-term rewards of not smoking, their internal valuation mechanisms are such that the immediate reward of smoking seems more attractive than the long-term reward of quitting. The diagram below provides a representation of hyperbolic discounting curves for human preferences. Note how the value of the smaller reward exceeds that of the larger reward at a certain point.




The relevance of this to criminal behaviour and punishment should be obvious. Since there is often a significant delay between the commission of a crime and the punishment (if any) of that crime, the short-term rewards of the crime will often seem more attractive to a potential criminal than the long-term costs of the punishment. Consequently, the delay undermines the deterrent effect.
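
For readers who prefer numbers to curves, here is a minimal sketch of the same preference reversal, using the standard hyperbolic form V = A/(1 + kD). The amounts, delays and the discount rate k are all invented for illustration.

    # Standard hyperbolic discounting: V = A / (1 + k * D), where A is the
    # reward, D the delay and k the (invented) discount rate.
    def discounted_value(amount, delay, k=1.0):
        return amount / (1.0 + k * delay)

    # Smaller-sooner reward: 50 units arriving at time 10.
    # Larger-later reward: 100 units arriving at time 15.
    for now in (0, 9):
        ss = discounted_value(50, 10 - now)
        ll = discounted_value(100, 15 - now)
        preferred = "smaller-sooner" if ss > ll else "larger-later"
        print(f"time {now}: smaller-sooner={ss:.1f}, larger-later={ll:.1f} -> {preferred}")

    # At time 0 the larger-later reward looks better (6.2 vs 4.5), but by time 9,
    # when the smaller reward is imminent, the preference reverses (25.0 vs 14.3).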

I think Robinson and Darley are probably correct about this. However, I can’t help but point out that it, once again, has more to do with the criminal justice system as a whole than with the substantive criminal law. After all, substantive criminal laws tend not to stipulate that there must be a significant delay between the commission of a crime and the imposition of punishment.


5. Conclusion
Okay, so that brings us to the end of Robinson and Darley’s argument. To briefly recap, their central thesis is that the substantive criminal law is unlikely to deter criminal behaviour. This is because in order for the substantive law to have a deterrent effect it must satisfy three conditions: (i) the knowledge condition; (ii) the immediacy condition; and (iii) the weighting condition. Drawing from a range of behavioural science research, Robinson and Darley argue that it is unlikely that all three conditions are met.

Their argument is plausible, and it brings together some interesting strands in the behavioural science literature, but it still contains points of weakness. The most annoying of these is perhaps its tendency to stray from the central thesis (which was supposed to be about the substantive criminal law) into more general concerns about the criminal justice system.