Monday, July 27, 2015

The Psychology of Revenge: Biology, Evolution and Culture


The Murder of Agamemnon - A Revenge Killing?


“Revenge is a dish best served cold…” 
(Ancient Klingon Proverb)

When I was younger I longed for revenge. I remember school-companions doing unspeakably cruel things to me — stealing my lunch, laughing at my misfortune and so forth (hey, it all seemed cruel at the time). I would carefully plot my revenge. The revenge almost always consisted of performing some similarly unspeakably cruel act towards them. Occasionally, my thoughts turned to violence. Sometimes I even lashed out in response.

I’m less inclined towards revenge these days. Indeed, I am almost comically non-confrontational in all aspects of my life. But I still feel the pangs. When wronged, I’ll briefly get a bit hot under the collar and my thoughts will turn to violence once more. I’ll also empathise with the characters in the innumerable revenge narratives that permeate popular culture, willing them on and feeling a faint twinge of pleasure when they succeed. I don’t think I ever act on the impulses anymore, but I have come close. And I’m sure everyone has had similar feelings.

But why is this? Why do we so frequently seek revenge? And how can we stop ourselves from acting on the impulse? I want to look at some potential answers to those questions today. In particular, I want to cover three related topics. First, I want to consider the psychology and neurobiology of revenge, focusing on why revenge can oftentimes feel pleasurable. Second, I want to consider the supposed ‘rationality’ of revenge, i.e. why the instinct for revenge is sometimes a good thing, and why the instinct may have evolved. And third, I want to examine the various methods that can be used to minimise the amount of vengeance being sought in society at any given time.

In doing all this, I’ll be drawing heavily from the discussion in Steven Pinker’s book The Better Angels of our Nature, and from the various studies cited therein.


1. The Mechanics of Revenge
One thing that is noticeable about revenge is how common it is. Literary classics of the distant and recent past often extol its virtues in poetic terms; and it is a frequent motive for state and non-state violence (consider the use of reprisals in international conflicts). In addition to this, Pinker, following work by McCullough and Daly and Wilson, suggests that blood feuds — cases in which one tribe/gang kills the members of a rival tribe/gang in retaliation for a similar attack on themselves — are endorsed by around 95% of the world’s cultures.

The commonality of revenge suggests that there is something deep within the architecture of the typical human brain that facilitates it. This seems to be borne out by a variety of studies. For one thing, it is easy enough to provoke people into seeking revenge in simple psychological experiments. Once more citing the work of McCullough, Pinker mentions studies done on college students (as pretty much all psychological experiments are…) in which the students are first given an insulting evaluation written by a fellow student, and then given the opportunity to punish the evaluator in a variety of ways (electric shocks, blasts with an air horn). It is very easy to induce students to engage in such revenge attacks.

So which brain systems undergird this thirst for revenge? Pinker mentions two. The first is the so-called Rage Circuit. This is a pathway linking the midbrain to the hypothalamus and amygdala. The rage circuit works by receiving pain signals from other parts of the nervous system and then responding, rapidly, with aggressive behavioural patterns. If activated, it provokes an animal to lash out at the nearest available victim. Jaak Panksepp performed experiments on the rage circuits of cats. The experiments involved activating the rage circuit with an electrical current. This provoked an instantaneous reaction from the cat. It would leap towards Panksepp with its claws and fangs bared, while hissing and spitting. It is likely that the thirst for revenge starts with the rage circuit: when we are hurt, we have an instant urge to lash out.

But it doesn’t end there. It is known that the stimulation of the rage circuit is unpleasant and animals will often work to switch it off. But the desire for revenge can linger. The reason for this seems to be that other brain systems support the quest for revenge. In particular, there is the so-called ‘Seeking’ system, named by Panksepp. This is a network within the brain that facilitates reward and pleasure seeking behaviour and incorporates the mesolimbic and mesocortical dopamine systems. You have probably come across some description of them before. The original experimental work on them involved rats placed in Skinner boxes. Every time the rats pressed a lever in the box they would stimulate their dopamine systems. It was found that rats would do so until they dropped dead from exhaustion. For a long time, this was thought to provide the neurobiological basis for addiction, although nowadays scientists realise that addiction is a more complex phenomenon.

Anyway, the important point here is that revenge seems to activate the seeking system. People appear to crave revenge, hoping that it will prove satisfying and rewarding. Dominique de Quervain and his colleagues scanned the brains of men who had been wronged in a simple trust game (they had entrusted another player with some money and that player kept it for himself). The men were given the opportunity to punish the wrongdoer at some cost to themselves. It was found that part of the striatum (a key component of the brain’s seeking system) lit up as they pondered the opportunity, and that the more it lit up, the more likely the men were to punish the other player. This seems to indicate that reward seeking is part of the motivation for revenge.


2. The Rationality of Revenge
The commonality of revenge, and the fact that people seem to crave it, poses another question: why have we evolved (or been enculturated) to pursue revenge? After all, there is something of a paradox underlying our lust for revenge. It is a costly endeavour, and no matter how much pain we inflict on the wrongdoer, we can never really correct for the historical wrongdoing that provokes our revenge. And yet revenge persists.

Pinker favours a ‘deterrence’ explanation for revenge. We seek revenge, and derive pleasure from it, because it is an effective means of deterring would-be wrongdoers. Now, on a previous occasion, I discussed a whole range of psychological evidence suggesting that people’s punishment-related behaviours did not, in fact, follow the logic of deterrence. Au contraire, those studies suggested that people were natural-born retributivists: they sought revenge because they felt it was important for people to get their ‘just deserts’, and not because it would deter other wrongdoers. But the contradiction between these experimental findings and Pinker’s preferred explanation is more apparent than real. The studies discussed in that earlier post focused on the proximate psychological causes of revenge, i.e. on what best explained individual judgments and patterns of behaviour. Pinker’s explanation focuses on the ultimate societal causes of revenge, i.e. on what best explains the persistence of revenge in spite of its costly nature. His claim is that deterrence is the best ultimate explanation for this persistence. That is perfectly consistent with the claim that most individuals follow a retributivist (non-deterrentist) logic.

What evidence can be adduced in favour of the deterrence explanation? Pinker discusses two main pieces. Both come from studies of iterated prisoner’s dilemmas (IPDs) (note: I am not going to explain what the PD or IPD is here because I have discussed it on previous occasions - the important point is that PDs are thought to provide a good model for many social dilemmas). The first piece of evidence is largely theoretical, and focuses on computer-based simulations of IPDs. These computer-based simulations seem to confirm the long-term effectiveness of vengeance in achieving deterrence. The second is largely experimental, and focuses on how real people behave in lab-based IPDs. These also seem to confirm both the willingness to seek revenge and its effectiveness. (You may dispute my calling the computer-based simulations ‘theoretical’ as opposed to ‘experimental’ evidence. I guess they are a type of experiment, but they are experimental tests of highly formalised strategies, not tests of the behaviour of real people.)

The computer-based simulations of IPDs are fascinating, and have generated a rich literature over the years. As you probably know, the standard PD involves two players, each faced with two choices: cooperate or defect. Collectively, the best strategy is to cooperate; but, individually, the best strategy is to defect (it dominates all other choices). But this is only true if the PD is a once-off interaction. If the players repeatedly interact in PD-style games, over multiple rounds and with different opponents, then other strategies can prevail. This is the key insight from the computer-based simulations. One of the earliest, and most enduring, findings from those simulations was that a simple programme called TIT FOR TAT could beat out most competitors in an IPD tournament. The TIT FOR TAT programme embodied the logic of deterrence-based revenge. It cooperated on the first round of the tournament, and then simply copied whatever its opponent had done in the previous round on every subsequent round. Thus, for example, if the opponent defected in the first round, TIT FOR TAT would defect in the second round; if the opponent cooperated in the second round, TIT FOR TAT would switch back to cooperation in the third round; and so on. The idea is that this models deterrence-based revenge because it rewards and punishes opponents with a view to changing outcomes in future rounds.
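To make the logic concrete, here is a minimal sketch of TIT FOR TAT in Python. The payoff numbers are the classic illustrative values used in Axelrod-style tournaments, and the opponent strategy is my own toy example rather than an entry from any actual tournament:

```python
# Payoff to the first player for each (my move, their move) combination.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate on the first round; thereafter copy the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []          # the moves each player has seen from the other
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)  # hist_a holds B's past moves
        move_b = strategy_b(hist_b)  # hist_b holds A's past moves
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # TIT FOR TAT loses the first round, then retaliates
print(play(tit_for_tat, tit_for_tat))    # two TIT FOR TATs cooperate throughout
```

The retaliation is what does the deterrent work: an opponent who defects gains on one round but pays for it on the next.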

The success of TIT FOR TAT in IPDs is attributed to the fact that it is nice, clear, retaliatory and forgiving. But TIT FOR TAT is not an unbridled success. One difficulty is that it can easily degenerate into an endless cycle of defection (sometimes called a ‘death spiral’), particularly if two TIT FOR TAT players end up defecting on the same round (because of an error or a misread move, say): from then on, each keeps retaliating against the other’s last retaliation. Alternative strategies can be more effective in the right environments. For instance, GENEROUS TIT FOR TAT, which occasionally forgives a defection by cooperating anyway; TIT FOR TWO TATS, which avoids immediate retaliation by waiting to see whether its opponent defects in two successive rounds; and CONTRITE TIT FOR TAT, which tries to correct for its own mistakes, can all outperform it in such environments.
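For the curious, here is how two of those variants might look in code. They can be dropped straight into the toy play function from the previous sketch; the forgiveness probability is just an illustrative value, not one drawn from the literature:

```python
import random

def generous_tit_for_tat(history, forgiveness=0.1):
    """Like TIT FOR TAT, but occasionally forgives a defection by cooperating
    anyway, which can break a mutual-retaliation 'death spiral'."""
    if not history or history[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def tit_for_two_tats(history):
    """Only retaliates after two consecutive defections by the opponent."""
    if len(history) >= 2 and history[-1] == "D" and history[-2] == "D":
        return "D"
    return "C"
```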

I could go on about the details and variations, but that would be unnecessary. The important point is that all these strategies incorporate some degree of revenge (and, importantly, forgiveness), and can help to sustain long-term cooperation. This supports the deterrence explanation. I should probably note at this point that after Pinker published his book there was an interesting paper published by Press and Dyson on IPDs. The paper proved that extortionate strategies (so-called ‘Zero Determinant’ strategies), i.e. ones that are not simply vengeful and forgiving, can systematically outscore their opponents in some IPDs. There has been much hype about this result, and you can read explanations of it here, but it doesn’t completely undermine the long-term effectiveness of TIT FOR TAT and its variations.

So much for the theoretical bit of evidence; what about the work done on actual human beings? Since the late 1990s, a whole series of studies have been published showing that costly punishment can help to sustain cooperation in repeated PD-style interactions (researchers refer to the phenomenon as 'altruistic punishment'). The most famous study in this vein comes from Fehr and Gachter. The study involved a Public Goods game wherein people were given the opportunity to contribute to a common investment fund (which would benefit them all), or to free ride on the good will of others who invested. If experimental subjects were allowed to punish free riders, free-riding was eliminated over repeated plays of the game. Furthermore, other experiments have found that people are more likely to punish when they think others are watching. This demonstrates a willingness to seek a reputation for revenge in a social setting. This again seems to confirm the deterrence explanation because a reputation for revenge is important for deterrence.
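To see the incentive structure at work (though not the actual experimental design, whose parameters I have replaced with illustrative numbers), here is a toy version of a single public goods round, with and without costly punishment:

```python
def public_goods_round(contributions, multiplier=1.6, endowment=20):
    """Each player keeps whatever they don't contribute; the pot is multiplied
    and shared equally, so free riders earn the most in a single round."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def punish(payoffs, contributions, threshold=10, cost=1, fine=3):
    """Each high contributor pays `cost` to impose `fine` on each low contributor:
    costly for the punisher, more costly for the target."""
    out = payoffs[:]
    for i, ci in enumerate(contributions):
        for j, cj in enumerate(contributions):
            if i != j and ci >= threshold and cj < threshold:
                out[i] -= cost
                out[j] -= fine
    return out

contributions = [20, 20, 20, 0]          # three full contributors, one free rider
base = public_goods_round(contributions)
print(base)                               # the free rider does best...
print(punish(base, contributions))        # ...but punishment narrows the gap
```

In a single round the free rider may still come out ahead, but repeated over many rounds, with players adjusting their contributions, it is this punishment option that sustains cooperation in the experiments.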

The upshot is that deterrence — and the pursuit of mutually beneficial cooperation — look like reasonable explanations for the long-term persistence of revenge.


3. The Modulation of Revenge
Granting that revenge is common, and occasionally rational, there remains a challenge: how can we ensure that there is not too much of it? It is clear that too much revenge can be destructive. This is obvious to anyone who has lived through seemingly endless cycles of blood-feuding (the real-world equivalent of the TIT FOR TAT ‘death spirals’). It might be trite and simplistic to put it this way, but such cycles seem to be part of the reason for the persistence of sectarian violence in Northern Ireland. Or, at least, it seemed that way to me as a child growing up in the Republic of Ireland.

Is it possible to prevent such destructive cycles of revenge? Would it be possible to create a world in which there was no need to seek revenge, i.e. in which revenge lost its rationality? In his analysis, Pinker identifies five factors which seem to modulate and reduce the need for revenge. I won’t discuss them in too much detail here. Instead, I will simply give short descriptions and links to relevant supporting evidence:

A. The Presence of Leviathan: The Leviathan is, of course, Hobbes’s famous term for the state. The Leviathan effectively functions as a means for outsourcing violence (in particular revenge). We all have Leviathans in our lives. When I was a school-child, I did not necessarily need to lash out at the cruel behaviour of my companions; I could sometimes outsource my revenge to a teacher who could punish the bullies on my behalf. This outsourcing of revenge can have two major benefits. First, the Leviathan can function as a more effective deterrent if it can create the belief that it is ‘all-seeing’ and ‘all-knowing’ (or close enough) and capable of retaliating even if the wrongdoer crushes their victim. Second, the Leviathan may be less prone to the distorting biases that fuel cycles of revenge. It is well-known that victims often overestimate the degree of harm they have suffered, and consequently can punish wrongdoers excessively. Shergill et al. performed an experiment in which people placed their finger under a bar that applied a precise amount of force. They were then asked to press down on the finger of another experimental subject with the same amount of force. It was found that they used approximately eighteen times more force than they originally received, highlighting the gap between perceived harm and reality. Pinker refers to this as part of the ‘moralization gap’ and highlights further evidence in support of it. Leviathan, as a third party, may avoid the excesses of this gap.

B. Civic-Mindedness and Perceptions of Governmental Legitimacy: The mere presence of Leviathan is not enough in itself to eliminate destructive cycles of revenge. It is clear that the people who are subjected to the authority of Leviathan must have some degree of civic-mindedness, i.e. must be committed to the institutions that underpin Leviathan and perceive them to be legitimate. Herrmann, Thoni and Gachter performed a cross-cultural study of Public Goods games which highlighted this. They found, somewhat surprisingly, that in some cultures players actually punished people who contributed generously to the public investment fund. This is odd since generous contributors of this sort actually benefitted the group as a whole. When they dug into the data a little deeper, Herrmann, Thoni and Gachter found that a major predictor of this willingness to spitefully punish generous contributors was the degree of civic-mindedness in the respective cultures. In other words, players from cultures in which the commitment to the rule of law was weak (e.g. countries where people didn’t pay taxes, cheated on social welfare payments etc.) were more likely to engage in spiteful punishment.

C. Expanding the Circle of Empathy: This is an obvious one. It is well-known that we are more likely to forgive people who fall within our natural circle of empathy (kin, friends, etc.) for their transgressions. This modulates our desire for revenge. Thus, creating an expanded circle of empathy can help prevent destructive cycles of revenge. The question, of course, is how to do this. Various cultural practices and rituals can help to create ‘fictive kinships’, which are often effective means of expanding the circle of empathy. Religions have been good at this, and often explicitly invoke kinship metaphors (e.g. ‘brothers and sisters in Christ’). But there is a dark side to this too, as such practices can also create an excessive in-group/out-group mentality, which can in turn fuel revenge and associated forms of violence.

D. Shared Goals: A simple way to overcome excessive in-group/out-group mentalities is to generate common interests, i.e. to make the success of one group dependent on the success of another. There was a famous experiment to this effect performed on a group of boys at the Robbers Cave summer camp back in the 1950s. The boys were arbitrarily divided into two separate groups at the start of camp. This generated intense loyalty within the groups, and intense rivalry between them, with acts of provocation and retaliation following soon after. But the experimenters found that they could reduce this rivalry by bringing the groups together and forcing them to work together for mutual benefit, e.g. in having to restore the camp’s water supply. The value of such mutual interdependencies is often highlighted as a major reason why countries that trade with one another are less likely to go to war.

E. Creating a Perception of Harmlessness: A final way to reduce destructive cycles of revenge is to cultivate a reputation for non-violence. That is: to signal to the other side that you are not going to continue with a destructive conflict. Apologies and reconciliation events are central to this, but apologies are often deemed ‘cheap talk’. They are easy to make and easy to break. There is some suggestion that physiological responses like blushing are a way in which evolutionary forces facilitated costly signaling of apologies. There is also evidence from the study of international and civil conflicts that apologies and reconciliation events are more likely to work when they are costly, involve some symbolic (but incomplete) justice, and involve participants with some shared history. The work of Long and Brecke is the key source here.

I have illustrated these five modulators in the diagram below.





4. Conclusion
To briefly sum up, revenge seems to be common, occasionally rational and capable of being reduced. Its commonality is illustrated by its near-universal endorsement, and the ease with which it can be provoked. It seems to be undergirded by two major brain systems: the Rage circuit, which facilitates rapid violent responses to perceived harm; and the Seeking circuit, which facilitates reward-seeking behaviours. The rationality of revenge is illustrated by its utility as a deterrence mechanism in iterated versions of the prisoner’s dilemma. And the ability to reduce the amount of destructive revenge is illustrated by the five factors listed above.

Sunday, July 26, 2015

How to Study Algorithms: Challenges and Methods




(Series Index)

Algorithms are important. They lie at the heart of modern data-gathering and analysing networks, and they are fueling advances in AI and robotics. On a conceptual level, algorithms are straightforward and easy to understand — they are step-by-step instructions for taking an input and converting it into an output — but on a practical level they can be quite complex. One reason for this is the two translation problems inherent to the process of algorithm construction. The first problem is converting a task into a series of defined, logical steps; the second problem is converting that series of logical steps into computer code. This process is value-laden, open to bias and human error, and the ultimate consequences can be philosophically significant. I explained all these issues in a recent post.

Granting that algorithms are important, it seems obvious that they should be subjected to greater critical scrutiny, particularly among social scientists who are keen to understand their societal impact. But how can you go about doing this? Rob Kitchin’s article ‘Thinking critically about and researching algorithms’ provides a useful guide. He outlines four challenges facing anyone who wishes to research algorithms, and six methods for doing so. In this post, I wish to share these challenges and methods.

Nothing I say in this post is particularly ground-breaking. I am simply summarising the details of Kitchin’s article. I will, however, try to collate everything into a handy diagram at the end of the post. This might prove to be a useful cognitive aid for people who are interested in this topic.


1. Four Challenges in Algorithm Research
Let’s start by looking at the challenges. As I just mentioned, on a conceptual level algorithms are straightforward. They are logical and ordered recipes for producing outputs. They are, in principle, capable of being completely understood. But in practice this is not true. There are several reasons for this: some are legal/cultural, some are technical. Each of them constitutes an obstacle that the researcher must either avoid or, at least, be aware of.

Kitchin mentions four obstacles in particular. They are:

A. Algorithms can be black-boxed: Algorithms are oftentimes proprietary constructs. They are owned and created by companies and governments, and their precise mechanisms are often hidden from the outside world. They are consequently said to exist in a ‘black box’. We get to see their effects on the real world (what comes out of the box), but not their inner workings (what’s inside the box). The justification for this black-boxing varies: sometimes it is purely about protecting the property rights of the creators, other times it is about ensuring the continued effectiveness of the system. Thus, for example, Google are always concerned that if they reveal exactly how their Pagerank algorithm works, people will start to ‘game the system’, which will undermine its effectiveness. Frank Pasquale wrote an entire book about this black-boxing phenomenon, if you want to learn more.

B. Algorithms are heterogeneous and contextually embedded: An individual could construct a simple algorithm, from scratch, to perform a single task. In such a case, the resultant algorithm might be readily decomposable and understandable. In reality, most of the interesting and socially significant algorithms are not produced by one individual or created ‘from scratch’. They are, rather, created by large teams, assembled out of pre-existing protocols and patchworks of code, and embedded in entire networks of algorithms. The result is an algorithmic system that is much harder to decompose and understand.

C. Algorithms are ontogenetic and performative: In addition to being contextually embedded, contemporary algorithms are also typically ontogenetic. This is a somewhat jargonistic term, deriving from biology. All it means is that algorithms are not static and unchanging. Once they are released into the world, they are often modified or adapted. Programmers study user-interactions and update code in response. They often experiment with multiple versions of an algorithm to see which one works best. And, what’s more, some algorithms are capable of learning and adapting themselves. This dynamic and developmental quality means that algorithms are difficult to study and research. The system you study at one moment in time may not be the same as the system in place at a later moment in time.

D. Algorithms are out of control: Once they start being used, algorithms often develop and change in uncontrollable ways. The most obvious way for this to happen is if algorithms have unexpected consequences or if they are used by people in unexpected ways. This creates a challenge for the researcher insofar as generalisations about the future uses or effects of an algorithm can be difficult to make if one cannot extrapolate meaningfully from past uses and effects.

These four obstacles often compound one another, creating more challenges for the researcher.


2. Six Methods of Algorithm Research
Granting that there are challenges, the social and technical importance of algorithms is, nevertheless, such that research is needed. How can the researcher go about understanding the complex and contextual nature of algorithm-construction and usage? It is highly unlikely that a single research method will do the trick. A combination of methods may be required.

Kitchin identifies six possible methods in his article, each of which has its advantages and disadvantages. I’ll briefly describe these in what follows:

1. Examining Pseudo-Code and Source Code: The first method is the most obvious. It is to study the code from which the algorithm was constructed. As noted in my earlier post, there are two parts to this. First, there is the ‘pseudo-code’, which is a formalised set of human-language rules into which the task is translated (pseudocode follows some of the conventions of programming languages but is intended for human reading). Second, there is the ‘source code’, which is the computer code into which the human-language ruleset is translated. Studying both can help the researcher understand how the algorithm works. Kitchin mentions three more specific variations on this research method:
1.1 Deconstruction: Where you simply read through the code and associated documentation to figure out how the algorithm works.
1.2 Genealogical Mapping: Where you ‘map out a genealogy of how an algorithm mutates and evolves over time as it is tweaked and rewritten across different versions of code’ (Kitchin 2014). This is important where the algorithm is dynamic and contextually embedded.
1.3 Comparative Analysis: Where you see how the same basic task can be translated into different programming languages and implemented across a range of operating systems. This can often reveal subtle and unanticipated variations.
There are problems with these methods: code is often messy and requires a great deal of work to interpret; the researcher will need some technical expertise; and focusing solely on the code means that some of the contextual aspects of algorithm construction and usage are missed.

2. Reflexively Producing Code: The second method involves sitting down and figuring out how you might convert a task into code yourself. Kitchin calls this ‘auto-ethnography’, which sounds apt. Such auto-ethnographies can be more or less useful. Ideally, the researcher should critically reflect on the process of converting a task into a ruleset and a computer language, and think about the various social, legal and technical frameworks that shape how they go about doing this. There are obvious limitations to all this. The process is inherently subjective and prone to individual biases and shortcomings. But it can nicely complement other research methods.

3. Reverse-engineering: The third method requires some explanation. As mentioned above, one of the obstacles facing the researcher is that many algorithms are ‘black-boxed’. This means that, in order to figure out how the algorithm works, you will need to reverse engineer what is going on inside the black box. You need to study the inputs and outputs of the algorithm, and perhaps experiment with different inputs (a toy sketch of this kind of probing appears at the end of this post). People often do this with Google’s Pagerank, usually in an effort to get their own webpages higher up the list of search results. This method is also, obviously, limited in that it provides incomplete and imperfect knowledge of how the algorithm works.

4. Interviews and Ethnographies of Coding Teams: The fourth method helps to correct for the lack of contextualisation inherent in some of the preceding methods. It involves interviewing or carefully observing coding teams (in the style of a cultural anthropologist) as they go about constructing an algorithm. These methods help the researcher to identify the motivations behind the construction, and some of the social and cultural forces that shaped the engineering decisions. Gaining access to such coding teams may be a problem, though Kitchin notes one researcher, Takhteyev, who conducted a study while he was himself part of an open-source coding team.

5. Unpacking the full socio-technical assemblage: The fifth method is described, again, in somewhat jargonistic terms. The ‘socio-technical assemblage’ is the full set of legal, economic, institutional, technological, bureaucratic, political (etc) forces that shape the process of algorithm construction. Interviews and ethnographies of coding teams can help us to understand some of these forces, but much more is required if we hope to fully ‘unpack’ them (though, of course, we can probably never fully understand a phenomenon). Kitchin suggests that studies of corporate reports, legal frameworks, government policy documents, financing, biographies of key power players and the like are needed to facilitate this kind of research.

6. Studying the effects of algorithms in the real world: The sixth method is another obvious one. Instead of focusing entirely on how the algorithm is produced, and the forces affecting its production, you also need to study its effects in the real world. How does it impact upon the users? What are its unanticipated consequences? There are a variety of research methods that could facilitate this kind of study. User experiments, user interviews and user ethnographies would be one possibility. Good studies of this sort should focus on how algorithms change user behaviour, and also how users might resist or subvert the intended functioning of algorithms (e.g. how users try to ‘game’ Google’s Pagerank system).

Again, no one method is likely to be sufficient. Combinations will be needed. But in these cases one is always reminded of the old story about the blind men and the elephant. Each is touching a different part, but they are all studying the same underlying phenomenon.
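As promised above, here is a toy illustration of the reverse-engineering approach. The ‘black box’ below is entirely invented (it is not Pagerank or any real system); the point is just the basic move of holding the inputs fixed and varying one at a time to see how the output responds:

```python
def black_box_rank(query, page):
    """Stand-in for a proprietary ranking algorithm whose code we cannot read.
    (Hypothetical: a weighted mix of keyword matches and a 'popularity' score.)"""
    keyword_score = sum(word in page["text"] for word in query.split())
    return 0.7 * keyword_score + 0.3 * page["popularity"]

# Probe the black box: keep the page text constant, vary only its popularity,
# and infer from the outputs how much that input seems to matter.
base_page = {"text": "cheap flights to dublin", "popularity": 0.2}
for popularity in (0.0, 0.5, 1.0):
    page = dict(base_page, popularity=popularity)
    print(popularity, round(black_box_rank("cheap flights", page), 2))
```

Real reverse engineering is of course far messier than this, since the researcher never gets to see the function at all and the inputs are rarely so easy to isolate.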




Tuesday, July 21, 2015

Epistemology, Communication and Divine Command Theory


I have written about the epistemological objection to divine command theory (DCT) on a previous occasion. It goes a little something like this: According to proponents of the DCT, at least some moral statuses (like the fact that X is forbidden, or that X is bad) depend for their existence on God’s commands. In other words, without God’s commands those moral statuses would not exist. It would seem to follow that in order for anyone to know whether X is forbidden/bad (or whatever), they would need to have epistemic access to God’s commands. That is to say, they would need to know that God has commanded X to be forbidden/bad. The problem is that there is a certain class of non-believers — so-called ‘reasonable non-believers’ — who don’t violate any epistemic duties in their non-belief. Consequently, they lack epistemic access to God’s commands without being blameworthy for lacking this access. For them, X cannot be forbidden or bad.

This has been termed the ‘epistemological objection’ to DCT, and I will stick with that name throughout, but it may be a bit of a misnomer. This objection is not just about moral epistemology; it is also about moral ontology. It highlights the fact that at least some DCTs include a (seemingly) epistemic condition in their account of moral ontology. Consequently, if that condition is violated it implies that certain moral facts cease to exist (for at least some people). This is a subtle but important point: the epistemological objection does have ontological implications.

Anyway, in this post I want to take another look at this so-called epistemological objection. I do so through the lens of Glenn Peoples’s article, simply entitled ‘The Epistemological Objection to Divine Command Ethics’. Peoples is a theist and a proponent of DCT (or so I believe). He thinks that the epistemological objection fails. His paper focuses on two versions of the objection and two versions of DCT. The first version of the objection he views as being ‘crude’; the second is slightly more sophisticated and comes from work done by Wes Morriston.

I’m going to ignore what Peoples says about the ‘crude’ versions. I tend to agree that they are crude and, frankly, uninteresting. So I’ll focus on Morriston’s version instead. As will become clear, I am much more favourably disposed to Morriston’s line of argument than Peoples seems to be. I will try to explain why as I go along.

I’ll do so in three parts. First, I’ll try to explain the differences between the two versions of DCT mentioned in Peoples’s article. Second, I’ll outline and analyse Peoples’s argument for thinking that the epistemological objection fails in the case of the first version of the DCT. And third, I’ll outline and analyse his argument for thinking that it fails in the case of the second version of DCT. I’ll offer my own responses in each section.


1. Two Versions of Divine Command Theory
Sloppy terminology is abundant in philosophy. This is a real shame since it often means that participants in philosophical debates end up talking past each other. This is particularly true in debates about DCTs, where several of the theories that are grouped under that heading are not really properly called ‘command’ theories at all.

Obviously, DCTs all share the claim that certain (perhaps all) moral statuses depend on God in some way. On a previous occasion I followed Erik Wielenberg’s suggestion and drew a distinction between two classes of these divine-dependency theories. The first, and more general, class is that of ‘theological stateism’. All theories in this class claim that certain moral statuses depend for their existence on one or more of God’s states of being (e.g. his nature, his beliefs, his desires etc). The second, and more narrowly circumscribed class, is that of ‘theological voluntarism’. Theories in this class claim that certain moral statuses depend for their existence on one or more of God’s voluntary acts (e.g. his willing or intending X; his commanding X). Voluntarist theories are a subset of stateist theories, and DCTs are a further subset of voluntarist theories. I have tried to illustrate this below.




Hopefully that is reasonably clear. Within the class of command theories, Morriston and Peoples introduce a further distinction between two types of theory. They are:


Causal Divine Will Theories: These theories hold that some moral statuses (most commonly the status of being obligatory) are dependent for their existence on God’s willing that they be so. This sort of view was defended by Philip Quinn, and was referred to as a ‘command’ theory, but Morriston argues that it is not really about commands per se since on Quinn’s view the commands need not be communicated. Whether that is sufficient to disqualify it from being a ‘command’ theory is debatable. For now, I’ll view it as such.

Modified Divine Command Theories: These theories hold that some moral statuses (most commonly the status of being obligatory) are dependent for their existence on God’s commanding and communicating that they be so. This is the sort of view defended by Robert Adams and is, according to Morriston, properly called a ‘command’ theory since communication is essential to the creation of the particular moral status.


Adams’s view is worthy of further consideration here since it is quite popular among contemporary DCTers. I have discussed it on a few previous occasions. In essence, Adams thinks that axiological moral statuses (i.e. the status of being good or bad) do not depend for their existence on God’s commands. But he thinks that God’s commands are necessary for the creation of certain deontic moral statuses, in particular the status of being obligatory. Indeed, Adams argues that without commands from an authoritative agent we cannot know the difference between something’s being morally supererogatory (i.e. above and beyond our moral obligations) and morally obligatory. For instance, it might be a morally excellent thing for me to send half my income to charitable organisations in the developing world, but without an authoritative command we cannot say that it is obligatory.

Communication of commands is consequently essential to Adams’s theory since without being told (in some way) that X is obligatory we cannot know that it really is. This need for communication turns out to be important when assessing the strength of Morriston’s critique. I will return to it later.


2. The Epistemological Objection and Causalist Theories
Now that we have distinguished between these two versions of theological voluntarism, we can proceed to assess the strength of the epistemological objection in relation to each. We start with the causalist theory propounded by Quinn. Peoples argues that the epistemological objection has no real impact on this theory. I am less convinced of this.

First, though, we have to understand what he argues. Peoples, following Quinn, argues that divine will theories are purely ontological theories. In other words, they do not incorporate an epistemic condition into their account of moral ontology. He doesn’t put it in these terms, but that’s the gist of it. To illustrate, he offers the following quote from Quinn on the epistemological objection:


Our theory asserts that divine commands are conditions causally necessary and sufficient for moral obligations and prohibitions to be in force. It makes no claims at all about how we might come to know just what God has commanded. For all the theory says, it might be that we can come to know what God has commanded by first coming to know what is obligatory and forbidden. After all, it is a philosophical truism that the causal order and the order of learning need not be the same. 
(Quinn 2006, 44-45)


Quinn is clear in this passage that his theory (unlike Adams’s) makes ‘no claims at all’ about moral epistemology. It only claims that an act of the divine will is necessary to bring moral obligations into existence. How people come to learn of those obligations is irrelevant. I have tried to illustrate this in the diagram below. The bit in the shaded box represents Quinn’s account of moral ontology; ordinary moral agents sit outside this box. They may come to know what the moral truths are, or they may not. This does not upset the plausibility of the underlying ontological theory.



Peoples seems to think that this is right. He thinks that if Quinn says his theory contains no epistemic conditions, then his theory contains no epistemic conditions. The epistemological objection has no foothold against such a theory. In saying this, Peoples is assisted by the fact that Morriston himself concedes that the objection has no impact on Quinn’s theory. I’m less convinced about this. For one thing, I don’t believe that the proponent of a theory is always the final arbiter of what that theory does or does not entail. For another, I believe that any plausible account of moral ontology probably has to include some implicit epistemic condition.

I am not alone in this belief. It seems to be pervasive in contemporary metaethics. I wrote a series of posts on this topic a few years back. In them, I looked at typical methodological approaches in metaethics. Oftentimes, proponents of a particular metaethical theory will assess that theory relative to a number of plausibility conditions, i.e. things that they think any good metaethical theory should account for. Included in those conditions there is usually something about how moral facts ‘join up’ with the reasoning capacities of moral agents. This typically requires some plausible account of how a moral agent comes to know what its relevant moral obligations are. A failure to account for this renders a theory less plausible. This is why there is so much discussion of debunking arguments in the literature. It is also why I wrote so much about those debunking arguments. For instance, in the debate between moral realists and moral anti-realists, some anti-realists argue that realism is implausible because it doesn’t explain how evolved beings like us could come to have knowledge of moral reality.

It could be that this approach to metaethics is fundamentally misconceived. But if it is not, then it seems like epistemic conditions must be folded into any plausible account of moral ontology. Thus, we should not be so eager to embrace Quinn’s statement that his theory ‘makes no claims at all’ about moral epistemology. It probably has to, if it is to be plausible.


3. The Epistemological Objection to Modified Command Theories
Let’s move on to Adams’s theory. As I mentioned above, Adams seems to concede that his account of moral ontology includes an epistemic condition. For him, moral obligations do not exist unless they are commanded and communicated to a moral agent by God. Remember how the communication is necessary in order for the moral agent to be able to distinguish between what is supererogatory and what is obligatory. I’ve tried to illustrate this in the diagram below. You should be able to see from this how different Adams’s theory is from Quinn’s. Whereas Quinn left the agent’s awareness of the command out of his account of moral ontology, Adams incorporates it into his account.




Morriston seizes upon this in presenting his version of the epistemological objection. It goes a little something like this:



  • (1) According to Adams, in order for X (or not-X) to be a moral obligation it must be commanded by God and communicated to the moral agent to whom it applies.
  • (2) In order for a command to X (or not-X) to be communicated to a moral agent it must be communicated via a sign that the agent is capable of identifying and understanding.
  • (3) A reasonable non-believer has no epistemic vices, but cannot identify and/or understand divine commands.
  • (4) Therefore, a reasonable non-believer cannot have moral obligations (under the terms of Adams’s theory).



We need to clarify certain aspects of this argument before we can evaluate it. First, we need to clarify the concept of a reasonable non-believer. A reasonable non-believer is someone who honestly searches for proof of God’s existence, but cannot find any evidence that brings them to believe. In doing this, the reasonable non-believer does not violate any epistemic duties. They are not bitter or biased or closed to potential sources of evidence. They simply cannot find any. The reasonableness of these non-believers is crucial to Morriston’s argument. We can safely assume that Adams’s theory does not require that commands be understood by the insane or the morally evil. It is only those who are epistemically open that are affected. Another point of clarification is that the conclusion of the argument can be taken in a number of different ways. I like to use it to argue that the modified DCT fails to provide a fully plausible account of moral ontology. Others like to use it as something akin to a reductio of the modified DCT. In other words, they say things like ‘but of course reasonable non-believers have knowledge of moral obligations; therefore, the DCT is absurd’. Maybe there is no practical difference between these two positions. Just a difference in style.

Moving on to the evaluation of the argument, there is really only one premise that is at issue. That is premise (3). A proponent of the DCT could target the first part of premise (3) and argue that there is no such thing as a reasonable non-believer. Since I like to think of myself as a reasonable non-believer, I’m not inclined to accept that line of argument. But Peoples thinks there may be something to it, though he doesn’t discuss it at any great length. That leaves the second part of premise (3) as the other potential target. They could argue that a reasonable non-believer does in fact have the ability to identify and understand the relevant divine commands. To make this argument credible, they would need to offer a fuller account of what it means for an obligation to be communicated to a moral agent. This means they need to go back into premise (2) and flesh out the standard of communication that is being implied by that premise.

Now, in his discussion of the argument, Morriston seems to have a very narrow conception of the possible forms of divine communication. He seems to think that (on Adams’s theory) God must communicate his commands in the form of a speech act. Peoples, rightly in my opinion, argues that no proponent of the DCT has such a narrow conception of divine communication. Instead, they all talk about multiple possible forms of divine communication (e.g. via moral intuition, general revelation, special revelation, and natural law). So to make the epistemological objection compelling, you must show that communication fails across these multiple possible forms.

And this is where Peoples thinks the argument falls down. Morriston argues that in order to have the requisite knowledge of the divine command, the moral agent must know the source of the command. That is to say, they must know that the command emanated from God. But of course this is exactly what a reasonable non-believer cannot know. Peoples thinks this is wrong. He says they only need to have knowledge of the content of the command. To underscore his point, he relies on Adams’s brief sketch of what it takes for God to communicate a command to an agent:

Adams’s Communicative Standard: “In my opinion, a satisfactory account of [this standard] will have three main points: (1) A divine command will always involve a sign, as we may call it, that is intentionally caused by God; (2) In causing the sign God must intend to issue a command, and what is commanded is what God intends to command thereby; (3) The sign must be such that the intended audience could understand it as conveying the intended command.” (Adams, Finite and Infinite Goods).

Peoples makes much of condition (3). He points out that this condition says nothing about the agent needing to understand the source of the command:

“Adams did not say that a sign needs to be such that a person can understand that it conveys a divine command, but only that he can understand it as conveying “the intended command”. He does not even need to know that it is a command….In slogan form: People need knowledge of the command, not knowledge about the command.” 
(Peoples 2011)

He then goes on to give an example of how someone might know the content of a command without knowing its source:

“Consider for example the possibility that God conveys the ‘sign’ to people regarding some act (let’s pick murder) via a proper function of the human conscience. Nobody needs to know what conscience is, how we got one, or that God uses it to ensure that we have some true beliefs in order for them to know, via conscience, that murder is wrong.” 
(Peoples 2011)

What he is imagining here is a case in which someone has a really strong innate feeling that murder is forbidden, without knowing how or why they came to have it. Even so, God has successfully communicated his command to them. This is why Peoples thinks that Morriston’s argument fails. He goes on to point out that in such a case a reasonable non-believer might have incomplete moral knowledge, or might fail to appreciate how bad the violation of that command is, but that this is irrelevant to whether they satisfy the epistemic condition in Adams’s theory.

I have some problems with this. To repeat something I said earlier, I don’t think we can merely take Adams’s word for it regarding the communicative standard implied by his theory. He might think that knowledge of content is all that is required; but that doesn’t mean he is right. Remember the importance of the supererogation/obligation distinction. In his original work, Adams seems pretty clear that a command from a being with the right kind of authority is needed in order for an agent to be able to distinguish an obligation from an act of supererogation. As best I can tell, this implies that the agent must have knowledge of the source of the command as well as knowledge of its content. It is not enough that the agent knows that killing is really bad, or that giving money to charity is really good. They must know that these things are morally required of them. And under Adams’s theory, knowing that these things were commanded by the right kind of entity is critical to drawing the distinction between what is morally excellent and what is morally required.

Admittedly, this is merely the sketch of an argument. But it seems to be truer to the communicative demands of Adams’s theory. If so, the epistemological objection still has some bite because reasonable non-believers will be incapable of knowing that a command (be it communicated via speech or conscience or whatever) emanates from the right kind of source. This is something I discussed at much greater length in my previous post on this topic.


Right, I'm exhausted with this topic now. That's it for this post.

Monday, July 20, 2015

The Philosophical Importance of Algorithms


IBM's Watson (Image from Clockready via Wikipedia)

In the future, every decision that mankind makes is going to be informed by a cognitive system like Watson…and our lives will be better for it. 
(Ginni Rometty commenting on IBM’s Watson)

I’ve written a few posts now about the social and ethical implications of algorithmic governance (algocracy). Today, I want to take a slightly more general perspective on the same topic. To be precise, I want to do two things. First, I want to discuss the process of algorithm-construction and the two translation problems that are inherent to this process. Second, I want to consider the philosophical importance of this process.

In writing about these two things, I’ll be drawing heavily from the work done by Rob Kitchin, and in particular from the ideas set out in his paper ‘Thinking critically about and researching algorithms’. Kitchin is currently in charge of The Programmable City research project at Maynooth University in Ireland. This project looks closely at the role of algorithms in the design and function of ‘smart’ cities. The paper in question explains why it is important to think about algorithms and how we might go about researching them. I’ll be ignoring the latter topic in this post, though I may come back to it at a later stage.


1. Algorithm-Construction and the Two Translation Problems
The term ‘algorithm’ can have an unnecessarily mystifying character. If you tell someone that a decision affecting them was made ‘by an algorithm’, or if, like me, you talk about the rise of ‘algocracy’, there is a danger that you present an overly alarmist and mysterious picture. The reality is that algorithms themselves are relatively benign and easy to understand (at least conceptually). It is really only the systems through which they are created and implemented that give rise to problems.

An algorithm can be defined in the following manner:

Algorithm: A set of specific, step-by-step instructions for taking an input and converting it into an output.

So defined, algorithms are things that we use every day to perform a variety of tasks. We don’t run these algorithms on computers; we run them on our brains. A simple example might be the sorting algorithm you use for stacking books onto the shelves in your home. The inputs in this case are the books (and more particularly the book titles and authors). The output is the ordered sequence of books that ends up on your shelves. The algorithm is the set of rules you use to end up with that sequence. If you’re like me, this algorithm has two simple steps: (i) first you group books according to genre or subject matter; and (ii) you then sequence books within those genres or subject areas in alphabetical order (following the author’s surname). You then stack the shelves according to the sequence.
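For what it's worth, here is that two-step shelving algorithm written out in Python; the sample entries are purely illustrative:

```python
# A toy version of the shelving algorithm described above:
# (i) group books by genre/subject, (ii) alphabetise by author surname within each group.

books = [
    ("Pinker", "The Better Angels of Our Nature", "psychology"),
    ("Adams", "Finite and Infinite Goods", "philosophy"),
    ("Panksepp", "Affective Neuroscience", "psychology"),
]

def shelve(books):
    # Sorting by the (genre, surname) pair implements both steps in one pass.
    return sorted(books, key=lambda book: (book[2], book[0]))

for surname, title, genre in shelve(books):
    print(genre, "-", surname, "-", title)
```

Even this trivial case involves judgment calls (what counts as a genre? how do you handle co-authored books?), which is a small foretaste of the translation problems discussed below.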

But that’s just what an algorithm is in the abstract. In the modern digital and information age, algorithms have a very particular character. They lie at the heart of the digital network created by the internet of things, and the associated revolutions in AI and robotics. Algorithms are used to collect and process information from surveillance equipment, to organise that information and use it to form recommendations and action plans, to implement those action plans, and to learn from this process.

Every day we are exposed to the ways in which websites use algorithms to perform searches, personalise advertising, match us with potential romantic partners, and recommend a variety of products and services. We are perhaps less exposed to the ways in which algorithms are (and can be) used to trade stocks, identify terrorist suspects, assist in medical diagnostics, match organ donors to potential donees, and facilitate public school admissions. The multiplication of such uses is what gives rise to the phenomenon of ‘algocracy’, i.e. rule by algorithms.

All these algorithms are instantiated in computer code. As such, the contemporary reality of algorithm construction gives rise to two distinct translation problems:


First Translation Problem: How do you convert a given task into a human-language series of defined steps?

Second Translation Problem: How do you convert that human-language series of defined steps into code?


We use algorithms in particular domains in order to perform particular tasks. To do this effectively we need to break those tasks down into a logical sequence of steps. That’s what gives rise to the first translation problem. But then to implement the algorithm on some computerised or automated system we need to translate the human-language series of defined steps into code. That’s what gives rise to the second translation problem. I call these ‘problems’ because in many cases there is no simple or obvious way in which to translate from one language to the next. Algorithm-designers need to exercise judgment, and those judgments can have important implications.
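To make the two translations concrete, here is a trivial, made-up example (an imaginary library-fine policy). The task is first translated into a human-language ruleset (the pseudocode in the comments), and that ruleset is then translated into source code, with small judgment calls creeping in at each step:

```python
from datetime import date

# Task: flag overdue library loans and compute the fine owed.
#
# Pseudocode (first translation):
#   for each loan:
#     if the due date has passed, the loan is overdue
#     the fine is 10 cents per full day overdue
#
# Source code (second translation). Even here the designer must decide things
# the pseudocode leaves open: does "passed" include today? Should the fine be capped?

FINE_PER_DAY_CENTS = 10   # hypothetical policy

def overdue_fines(loans, today):
    fines = {}
    for borrower, due_date in loans.items():
        days_late = (today - due_date).days
        if days_late > 0:                 # one possible reading of "overdue"
            fines[borrower] = days_late * FINE_PER_DAY_CENTS
    return fines

print(overdue_fines({"Anna": date(2015, 7, 1), "Brian": date(2015, 8, 1)},
                    today=date(2015, 7, 20)))   # {'Anna': 190}
```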

Kitchin uses a nice example to illustrate the sorts of issues that arise. He discusses an algorithm which he had a role in designing. The algorithm was supposed to calculate the number of ‘ghost estates’ in Ireland. Ghost estates are a phenomenon that arose in the aftermath of the Irish property bubble. When developers went bankrupt, a number of housing developments (‘estates’) were left unfinished and under-occupied. For example, a developer might have planned to build 50 houses in a particular estate, but could have run into trouble after only fully completing 25 units, and selling 10. That would result in a so-called ghost estate.

But this is where things get tricky for the algorithm designer. Given a national property database with details on the ownership and construction status of all housing developments, you could construct an algorithm that sorts through the database and calculates the number of ghost estates. But what rules should the algorithm use? Is less than 50% occupancy and completion required for a ghost estate? Or is less than 75% sufficient? Which coding language do you want to use to implement the algorithm? Do you want to add bells and whistles to the programme, e.g. by combining it with another set of algorithms to plot the locations of these ghost estates on a digital map? Answering these questions requires some discernment and judgment. Poorly thought-out answers can give rise to an array of problems.
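Here is a minimal sketch of what such an algorithm might look like. The data fields, thresholds and sample records are all hypothetical; the point is simply to show where the designer's judgment enters:

```python
# Hypothetical records: (units planned, units completed, units occupied).
estates = {
    "Estate A": (50, 25, 10),
    "Estate B": (40, 40, 38),
    "Estate C": (30, 20, 5),
}

def is_ghost_estate(planned, completed, occupied,
                    completion_threshold=0.5, occupancy_threshold=0.5):
    """The thresholds are the designer's judgment call: change them and the
    national count of 'ghost estates' changes with them."""
    return (completed / planned < completion_threshold
            or occupied / planned < occupancy_threshold)

ghosts = [name for name, record in estates.items() if is_ghost_estate(*record)]
print(len(ghosts), ghosts)   # 2 ['Estate A', 'Estate C']
```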


2. The Philosophical Importance of Algorithms
Once we appreciate the increasing ubiquity of algorithms, and once we understand the two translation problems, the need to think critically about algorithms becomes much more apparent. If algorithms are going to be the lifeblood of modern technological infrastructures, if those infrastructures are going to shape and influence more and more aspects of our lives, and if the discernment and judgment of algorithm-designers is key to how they do this, then it is important that we make sure we understand how that discernment and judgment operates.

More generally than this, if algorithms are going to sit at the heart of contemporary life, it seems like they should be of interest to philosophers. Philosophy is divided into three main branches of inquiry: (i) epistemology (how do we know?); (ii) ontology (what exists?); and (iii) ethics/morality (what ought we do?). The growth of algorithmic governance would seem to have important repercussions for all three branches of inquiry. I’ll briefly illustrate some of those repercussions here, though it should be noted that what I am about to say is by no means exhaustive. (Note: Floridi discusses similar ideas under the banner of the philosophy of information.)

Looking first to epistemology, it is pretty clear that algorithms have an important impact on how we acquire knowledge and on what can be known. We witness this in our everyday lives. The internet and the attendant growth in data-acquisition have resulted in the compilation of vast databases of information. This allows us to collect more potential sources of knowledge. But it is impossible for humans to process and sort through those databases without algorithmic assistance. Google’s Pagerank algorithm and Facebook’s Edgerank algorithm effectively determine a good proportion of the information with which we are presented on a day-to-day basis. In addition to this, algorithms are now pervasive in scientific inquiry and can be used to generate new forms of knowledge. A good example of this is the C-Path cancer prognosis algorithm. This is a machine-learning algorithm that was used to discover new ways in which to better assess the progression of certain forms of cancer. IBM hope that their AI system Watson will provide similar assistance to medical practitioners. And if we believe Ginni Rometty (from the quote at the top of this post), the use of such systems will effectively become the norm. Algorithms will shape what can be known and will generate new forms of knowledge.
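To give a flavour of how an algorithm like Pagerank sorts information, here is a minimal power-iteration sketch of its basic idea: a page matters if pages that matter link to it. This is my own simplified reconstruction in Python, purely for illustration; Google’s production system is vastly more elaborate.

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outgoing links spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank


# Toy web of three pages: B ends up with the highest score
# because both A and C link to it.
toy_web = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
print(pagerank(toy_web))

The details don’t matter here; the point is that a handful of judgment-laden design choices (the damping factor, the treatment of link-less pages, and so on) end up shaping which sources of information get surfaced and which get buried.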

Turning to ontology, it might be a little trickier to see how algorithms can actually change our understanding of what kinds of stuff exist in the world, but there are some possibilities. I certainly don’t believe that algorithms have an effect on the foundational questions of ontology (e.g. whether reality is purely physical or purely mental), though they may change how we think about those questions. But I do think that algorithms can have a pretty profound effect on social reality. In particular, I think that algorithms can reshape social structures and create new forms of social object. Two examples can be used to illustrate this. The first example draws from Rob Kitchin’s own work on the Programmable City. He argues that the growth in so-called ‘smart’ cities gives rise to a translation-transduction cycle. On the one hand, various facets of city life are translated into software so that data can be collected and analysed. On the other hand, this new information then transduces the social reality. That is to say, it reshapes and reorganises the social landscape. For example, traffic modelling software might collect and organise data from the real world, and then planners will use that data to reshape traffic flows around a city.

The second example of ontological impact is in the slightly more esoteric field of social ontology. As Searle points out in his work on this topic, many facets of social life have a subjectivist ontology. Objects and institutions are fashioned into existence out of our collective imagination. Thus, for instance, the state of being ‘married’ is a product of a subjectivist ontology. We collectively believe in and ascribe that status to particular individuals. The classic example of a subjectivist ontology in action is money. Modern fiat currencies have no intrinsic value: they only have value in virtue of the collective system of belief and trust. But those collective systems of belief and trust often work best when the underlying physical reality of our currency systems is hard to corrupt. As I noted before, the algorithmic systems used by cryptocurrencies like Bitcoin might provide the ideal basis for a system of collective belief and trust. Thus, algorithmic systems can be used to add to or alter our social ontology.

Finally, if we look to ethics and morality we see the most obvious philosophical impacts of algorithms. I have discussed examples on many previous occasions. Algorithmic systems are sometimes presented to people as being apolitical, technocratic and value-free. They are anything but. Because judgment and discernment must be exercised in translating tasks into algorithms, there is much opportunity for values and biases to affect how they function. There are both positive and negative aspects to this. If well-designed, algorithms can be used to solve important moral problems in a fair and efficient manner. I haven’t studied the example in depth, but it seems like the matching algorithms used to facilitate kidney exchanges might be a good illustration of this. I have also noted, on a previous occasion, Tal Zarsky’s argument that well-designed algorithms could be used to eliminate implicit bias from social decision-making. Nevertheless, one must also be aware that implicit biases can feed into the design of algorithmic systems, and that once those systems are up and running, they may have unanticipated outcomes. A good recent example of this is the controversy created by Google’s photo app, which used an image-recognition algorithm to label photographs of some African-American people as ‘gorillas’.

Anyway, that’s all for this post. Hopefully the challenges of algorithm construction and the philosophical importance of algorithmic systems are now a little clearer.


Wednesday, July 15, 2015

How should you title an academic article?




I have two guiding presumptions about the nature of academic publishing. The first is that academics want their work to be read. Academia is, for better or worse, a popularity contest. Academics want their work to be popular among other academics, and among policy-makers and the general public (depending on their goals and the nature of their research). ‘Popular’ doesn’t necessarily mean respected or admired. It is, of course, better to be popular and right, or popular and interesting, or popular and thought-provoking. But if you can’t be any of these things, then being debated and discussed is probably better than being ignored (within reason: if you are so controversial or stupid that you are constantly ridiculed, harassed or threatened, it is unlikely to be pleasant; anonymity might be better in that case).

In saying this, I don’t mean to downplay the intrinsic merits or rewards of writing and research. There is a lot to be said for the process of thinking and puzzling out an issue; of gaining private insight into some important concept or truth. But if you are only in it for these intrinsic rewards, then you don’t need to publish at all. If you are publishing your work, then popularity must matter at some level. This is true even if you only care about publishing in terms of the material rewards it brings. In the modern academy, career advancement depends, to a large extent, on how popular your work is. Universities love popularity metrics (e.g. reputational rankings). And the importance of all this is reflected in the fact that most academic publishers now provide you with a variety of popularity metrics whenever you publish your work with them. These include things like the number of downloads, shares on social networking sites, and citation rates. Academics often reference these things when looking for promotion or employment (I know I do).

My second presumption about the nature of academic publishing is that attention spans are incredibly short, and probably getting shorter all the time. This is certainly true for me. The internet is a rich cornucopia of information, and academic papers are published at an alarming rate. Deciding which papers to read is like trying to drink from a firehose. This means that if you want your work to be read, you really need to grab the potential reader’s attention. But how can you do this? I have a tendency to use my own experience as a guide — based on the assumption that there is nothing abnormal or non-average about me. A more data-driven approach would be useful but I’m quite lazy on that front. In any event, based on my own experience, two things determine whether or not I will read an article: the first is the article title; the second is the article abstract.

Now, I have a pretty rigid set of views about what an article abstract should look like. I think it should provide a very clear summary of the argument (or arguments) that will be defended in the article. The reader should be left in no doubt about the position(s) you will end up with at the end of the article. I also have a preferred template or test I use when writing an abstract. I wrote about this on a previous occasion. But despite my well-ordered approach to writing article abstracts, my approach to article titles is completely haphazard. I come up with something that feels or looks intuitively adequate, and then I think about it no more.

But if the goal is to be read, then this is a pretty odd approach to take. In many ways, the title is likely to be more important than the abstract. The title is the first thing the reader sees. It will determine whether or not they even look at the abstract. So I really should be thinking about article titles in a more systematic manner. This post is a first step in that direction. I want to use it to catalogue some of my previous article-titling strategies, and to offer some reflections or thoughts on these strategies. And I also want to use it as a springboard for debate and discussion. It would be great if people could share their own thoughts and reflections on how to come up with article titles in the comments section.

I’ll start the ball rolling by describing my own approaches. As I just said, I’m pretty haphazard on this front. Nevertheless, there are some patterns and rules to what I do. The main rule is that I don’t like overly ‘clever’ or ‘funny’ titles. When I first started reading academic journal articles, I was enamoured with what I took to be funny or clever titles. I won’t name or shame anybody but you can imagine the kind of thing. Articles with titles like: ‘Bitch Better Have My Money: On the wisdom of debt forgiveness’. Over time I grew tired and suspicious of these titles. Maybe this is irrational, but I think titles of this sort have a tendency to obscure. My own preferred titling-strategies settle into four categories:


The Question Title: A title which contains a provocative question of some sort. Some people hate question-titles. There is a long-standing trope in journalism that any headline in the form of a question can always be answered ‘no’. But this isn’t true and I think question-titles have great merit. Questions can raise intriguing issues that pique a reader’s curiosity, and I think they can convey the subtle implication that the approach taken in the article will be inquisitive and non-ideological in nature (even if concrete conclusions are reached). I have only used a question in two of my article titles in the past — “Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised?” and “Hyperagency and the Good Life: Does Extreme Enhancement Threaten Meaning?” — but I think I will do it more often in the future.

The Propositional Title: A title which contains (implicitly or explicitly) a clear statement of the main proposition(s) that will be defended in the article. I think this is a good approach to take, provided that the propositions being defended are interesting and capable of being stated succinctly. Many of my article titles are implicitly propositional, but I think only one or two have been successful on this front. My article on AI risk was titled “Why AI Doomsayers are like Sceptical Theists and Why it Matters”, which sets out pretty clearly what I will attempt to argue in the main body of the article. And my article on the death penalty was titled “Kramer’s Purgative Rationale for Capital Punishment: A Critique”, which just about manages to imply what will be argued, though it doesn’t explain exactly what the problem with Kramer’s rationale is. I would like to experiment with more explicit propositional titles in the future.

The Descriptive-Triplet or -Doublet Title: A title which mentions the two or three key concepts or topics that will be addressed in the article. Descriptive titles definitely have their merits. I like them because they can be effective ways of conveying to the reader what the article is about, and they can allow readers to easily identify whether the concepts or topics covered are relevant to their own areas of research. I also think that doublets and triplets can be succinct, memorable and pleasing to the ear. Nevertheless, this is definitely a format that I tend to overuse, and I often fall back on it when desperate. For example, my last two articles have adopted the descriptive-triplet format — “Human Enhancement, Social Solidarity and the Distribution of Responsibility” and “Common Knowledge, Pragmatic Enrichment and Thin Originalism”. These seem dull and uninspiring to me now. I’m not sure I would have any interest in reading an article with a similar-sounding title.

The Ridiculous Title: A title which attempts to be provocative, descriptive or propositional but which fails due to length or obscurity. This is really just a catch-all category for the article titles I have come up with which seem — to my eyes — to fail miserably to provide an interesting hook for a reader. My favourite example of this from my own work is the article I published earlier this year on brain-based lie detection. For some unknown reason I thought the following would be a good title: “The Comparative Advantages of Brain-Based Lie Detection: The P300 Concealed Information Test and Pre-Trial Bargaining”. I think the idea was to provide a title that covered the main concepts and ideas, and then gave a sense of what the argument would be (something about the ‘comparative advantages’ of brain-based lie detection tests, whatever they are). But I think it fails miserably because it is replete with jargon (what is a “P300 Concealed Information Test”?) and is overly long. If I were given the chance, I would definitely re-title it to something like “Stopping the Innocent from Pleading Guilty: How Brain-Based Lie Detection Might Help” — which would give a clearer sense of what is being argued in the piece and why it is important.

So those are my strategies and thoughts. Do you have any thoughts on this topic? Do you know of any good data-based studies of academic article titles? (Someone must have looked into this in a systematic way). If so, please share in the comments section.

Tuesday, July 14, 2015

New Paper - Human Enhancement, Social Solidarity and the Distribution of Responsibility




I have a new paper coming out in the journal Ethical Theory and Moral Practice. This one deals with two objections to human enhancement. In both cases I first try to strengthen and clarify the objections before arguing why I think they ultimately fail. Fuller details are below. The official version of the paper won't be published for a couple of months, but you can access the final pre-publication drafts at the links I provide (philpapers is open access; academia.edu may require free sign-up in order to access).

Title: Human Enhancement, Social Solidarity and the Distribution of Responsibility
Journal: Ethical Theory and Moral Practice
Links: Philpapers; Academia; Official
Abstract: This paper tries to clarify, strengthen and respond to two prominent objections to the development and use of human enhancement technologies. Both objections express concerns about the link between enhancement and the drive for hyperagency (i.e. the ability to control and manipulate all aspects of one’s agency). The first derives from the work of Sandel and Hauskeller and is concerned with the negative impact of hyperagency on social solidarity. In responding to their objection, I argue that although social solidarity is valuable, there is a danger in overestimating its value and in neglecting some obvious ways in which the enhancement project can be planned so as to avoid its degradation. The second objection, though common to several writers, has been most directly asserted by Saskia Nagel, and is concerned with the impact of hyperagency on the burden and distribution of responsibility. Though this is an intriguing objection, I argue that not enough has been done to explain why such alterations are morally problematic. I try to correct for this flaw before offering a variety of strategies for dealing with the problems raised. 
 
 

Friday, July 10, 2015

The Case for a Marriage-Free State





The last couple of months have seen major victories for marriage equality. In May, Ireland voted to legalise same-sex marriage in a national referendum — the first country in the world to do so by popular vote. In June, the US Supreme Court issued a landmark 5-4 decision legalising same-sex marriage throughout the United States. These were important steps toward building a fairer and more just society. If marriage is to continue to exist as a legally-recognised relationship status, then it is important that it do so in an egalitarian and inclusive manner. I don’t think anyone should doubt this.

But there is something worth doubting in the midst of all these victories. Should marriage continue to exist as a legally-recognised relationship status? Think about what this means. We enter into relationships with other human beings all the time. These relationships tend to support a number of different functions or roles. Some are purely commercial or business oriented; some are concerned with friendship and sociality; some are sexually intimate; some are directed towards property-sharing; some are about rearing children; some are about mutual caregiving and support. Legal recognition of these relationship functions usually results in the parties to them gaining a number of legal rights and duties. The distinctive feature of marriage-recognition (in modern liberal societies) is that it focuses on a particular kind of relationship — viz. a monogamous relationship — which fulfils a number of these functions — typically caregiving, sexual intimacy and child-rearing — and bundles together a bunch of rights and duties that then attach to the members of that relationship. The question is whether this special status and bundling of rights should continue.

This is a question that has long exercised certain feminist theorists. They view marriage as a problematic institution with a number of troubling properties. Some are inclined to support alternative kinds of relationship-recognition. Clare Chambers’s article ‘The Marriage Free State’ offers an interesting perspective on this debate. She argues that the state should stop legally recognising marriage and should not replace marriage-recognition with some alternative type of relationship-recognition (e.g. civil unions). Instead, she argues that the state should regulate relationship functions on a piecemeal basis.

In the remainder of this post, I want to take a look at Chambers’s argument for the marriage-free state. Two caveats before I do so. First, as I understand it, the defence of the marriage-free state contained in the paper I read is an incomplete and imperfect overview of the case that she will present in a forthcoming book. Second, this is not a topic with which I am overly familiar. Reading and writing about Chambers’s paper is a way for me to feel my way into this debate. I’ll offer some of my own thoughts along the way, but these are very much preliminary and, no doubt, naive.


1. The Feminist Critique of Marriage
If you’re going to make the case for a marriage-free state, then it probably makes sense to first ask whether marriage-recognition is a bad thing. After all, societies have been affording married couples a special legal status for centuries (millennia even). If we are going to move away from this societal status quo, we’ll need some convincing (not that loyalty to the status quo is always a good thing; just that it usually takes something dramatic to push people away from it). Fortunately, many leading feminist theorists have already obliged on this front by providing a number of reasons to doubt the value of marriage-recognition.

The problem, as Chambers notes, is that there is a faintly paradoxical air to two of the standard critiques:

Critique One: Marriage is a deeply patriarchal and sexist institution that oppresses and harms women.

Critique Two: Marriage is an inegalitarian institution because it is (traditionally anyway) heterosexist and so excludes homosexual couples from the benefits enjoyed by heterosexual couples.

The second critique has obviously found favour, and marriage equality is now in the ascendant in many Western countries. But the second critique appears to be in tension with the first. The problem is this: the first critique seems to suggest that marriage is bad for some of the people who enter into it (specifically women); the second seems to suggest that marriage is good for the people who enter into it and should therefore be expanded to others. How can marriage be both of these things? We need to think in a little bit more detail about the alleged harms of marriage.

These harms can be divided into two main categories:

Practical Harms of Marriage: These are harms to the material or legal condition of the people who enter into the marriage. They result directly from being in this particular kind of relationship.

Symbolic Harms of Marriage: These are harms that result from the social meanings that attach to the institution of marriage. These harms need not result from being in this particular kind of relationship. Indeed, they are often felt by people who are not party to a marital relationship.

Historically, there were many practical harms to women who entered into marriage. The most obvious of these were legal. Married women lost legal status and effectively became the chattel of their husbands. This then exposed them to a number of potential material harms such as domestic abuse, marital rape, loss of opportunity, and increased burdens of carework and child-rearing. Obviously, the status of non-married women in such societies wasn’t exactly stellar either (though it did improve over time) so it may be difficult to say whether women were worse off when married, but this doesn’t detract from the fact that, historically, there were a number of practical harms clearly associated with the institution.

What’s the position nowadays? In most Western societies, the legal disbenefits of marriage have disappeared. Women are no longer their husbands’ chattel. They retain their independent legal status and associated legal rights. They also gain legal rights associated with inheritance, tax, and property-sharing (though this varies from jurisdiction to jurisdiction). Still, these changes have been costly: many women had to suffer in the process. Furthermore, many material harms of marriage persist, including in particular the disproportionate share of home/care-based work that is taken on by women. On the whole, though, it is probably very difficult to determine whether being married is, on net, practically bad for women. The effects are likely to vary greatly, depending on the woman, the partner, and the relevant social and cultural norms. (Also, though it is not mentioned in Chambers’s article, there are some studies suggesting various health benefits of marriage. They would presumably need to be factored into any overall assessment, though cause and effect are difficult to disentangle.)

The symbolic harms of marriage are rather different. Symbolically, marriage tends to reinforce a certain view of women and their role or status in society. This is clear in the symbolism of the traditional ‘white wedding’. Chambers describes it aptly:

The white wedding is replete with sexist imagery: the father ‘giving away’ the bride; the white dress symbolising the bride’s virginity (and emphasising the importance of her appearance); the vows to obey the husband; the minister telling the husband ‘you may now kiss the bride’ (rather than the bride herself giving permission, or indeed initiating or at least equally participating in the act of kissing); the reception at which, traditionally, all the speeches are given by men; the wife surrendering her own name and taking her husband’s. 
(Chambers 2012)

In addition to this, the social meaning that attaches to the institution of marriage has a number of indirect effects on women. Chambers does quite a good job outlining these, and I am loath to cut out all the details and examples she gives, but I don’t want to repeat everything she says so I’ll just cut to the bottom line: the social meaning tends to reinforce the view that being married is what women should ultimately aspire to; that not being married is to be in an inferior state of existence; and this has the effect of narrowing women’s aspirations and opportunities.

Now, it might be argued in reply that these symbolic effects have improved over time. Women don’t have to take on their husbands’ names, they don’t have to have traditional ‘white weddings’ (though the pressures are still there) and so on. But whether marriage can symbolically break with the negative features of its past is unclear. One of the reasons why marriage is socially valued and protected is because it is traditional. This means that its symbolic meaning is directly linked to the history of the institution. Consequently, marriage is much more likely to be symbolically tainted by its past meanings, and it is much more difficult to drain it of those meanings.

This brings us back then to the second critique of marriage: its heterosexist and inegalitarian nature. Why is it, if marriage is so bad, that many feminist and homosexual theorists and activists support marriage equality? The apparent paradox is easily resolved. What these theorists recognise is that (a) marriage-recognition does bring with it some practical (primarily legal) benefits and (b) the symbolic value of marriage would, if extended to homosexual couples, help foster a greater sense of social belonging and acceptance. Nevertheless, accepting these two things is consistent with believing in the practically and symbolically negative aspects of marriage too. In other words, the position can be that it would be better if marriage-recognition were extended to include homosexual couples, but better again if the state stopped legally recognising marriage altogether.


2. The Case for Piecemeal Relationship-Recognition
But if the state is going to stop recognising marriage, what is it going to do instead? Presumably, relationships will still happen and will still need to be regulated. This is where Chambers’s paper gets really interesting. To think about what happens in a marriage-free state, we need to recall what it means, legally speaking, to recognise marriage as a special relationship status. It means that you single out a particular kind of relationship (monogamous unions) for special recognition, you presume that this relationship brings together a number of important relationship functions, and you bundle together a bunch of rights and duties and apply them to the members of these relationships. When it comes to non-marital forms of relationship-recognition, this implies a choice between two options: either (i) we continue to bundle, or (ii) we don’t.

The difference is between holistic relationship recognition and piecemeal relationship recognition. In the former case, we establish a new unique relationship status that replaces marriage (e.g. a civil union) and assign a bundle of rights and duties to that relationship. In the latter case, we don’t establish a new unique relationship status. Instead, we look to the different relationship functions, and regulate those functions individually (i.e. on a piecemeal basis).

The case for alternative holistic regulation has been set out by others. Chambers points in particular to the work of Elizabeth Brake and Tamara Metz who both call for the state to provide special recognition for caregiving relationships in lieu of marriage. Metz argues for recognising intimate caregiving unions (ICGUs); Brake argues for minimal marriage, which is a relationship based on caregiving. In Brake’s case, it is argued that there should be no upper limit on the number of parties to such a relationship, nor any restriction on entry on the basis of sex/gender.

Chambers argues that there are two problems with these holistic approaches to relationship-recognition:

The Bundling Problem: The holistic approach assumes that most of the important functions of life can be satisfied in one core relationship. In other words, that we can get what we need in terms of property-sharing, intimacy, caregiving and child-rearing (among other things) in one special relationship. It also assumes that the state is well-placed to determine and regulate the nature and extent of that relationship. Bundling also has an exclusionary effect insofar as the rights and duties are only obtainable by those who are in such relationships.

The Opt-In Problem: Proposals for holistic regulation invariably assume that the special relationship status is one that people will opt into. On the one hand, this makes sense: people should be free to determine whether they want the bundle of rights and duties associated with that relationship status. On the other hand, the opt-in approach often works against weaker and more vulnerable relationship partners. People can be involved in factually equivalent relationships and yet not have the associated legal rights because they have not opted-in or because the status quo favours one of the relationship partners not opting in. This used to be a particular problem for non-married co-habiting couples, and still is in some jurisdictions, though more favourable rules are now in place.

In light of these problems, Chambers argues for a piecemeal approach to relationship recognition. This approach rejects bundling. It focuses instead on the different relationship-functions and regulates those individually. Thus, for example, there would be one set of regulations for the child-rearing function, another for the property-sharing function, another for the sexual-intimacy function and so on. There would be no particular ex ante restrictions on who could share these functions. The regulations for each function would have to be developed and argued for independently. More controversially, Chambers argues that the regulation of these functions should not be conducted on an opt-in basis. Instead, the rules should apply simply by virtue of the fact that people share those functions with others.

I have some resistance to this prima facie compulsory system of regulation, but there are three points worth bearing in mind. First, it is possible in many cases that people will consent to sharing those relationship functions with others and will be aware, in advance, of the rights and duties associated with doing so. Second, we already impose some relationship regulations on people without their explicit consent. For instance, many of the rights and duties associated with parenting and child-rearing now apply irrespective of the parents’ official marital status (though there are still many that depend on it). This is usually justified on the grounds that the child’s interests take precedence (i.e. that there is a greater good at stake). And third, Chambers suggests that in some cases the rules and regulations could apply on an opt-out basis. This would preserve liberty by an alternative means.



3. Conclusion
Anyway, that’s it for this post. To briefly recap, there is a standard feminist critique of the institution of marriage. This critique holds that marriage is heterosexist and oppressive to women for both practical and symbolic reasons. Finding some alternative form of relationship-recognition would, therefore, be welcome. When looking for an alternative, we have two options with which to contend. We can adopt a holistic approach and look for some alternative relationship status into which we bundle rights and duties, e.g. civil unions or intimate caregiving unions. The problem with this holistic approach is that it assumes most of the important life functions can be satisfied within one relationship status and that people should be free to opt into a bundle of rights and duties. This is often factually inaccurate, exclusionary and problematic for the more vulnerable members of a relationship. Consequently, Chambers thinks that we should regulate relationships on a piecemeal basis, focusing on the different relationship functions instead of one particular relationship status. I think this proposal is interesting and I look forward to seeing her flesh it out in more detail in her forthcoming book.