Tuesday, June 28, 2016

The Machine Made Me Do It: Human responsibility in an era of machine-mediated agency




[This is the text of a talk I'm delivering at the ICM Neuroethics Network in Paris this week]

Santiago Guerra Pineda was a 19-year-old motorcycle enthusiast. In June 2014, he took his latest bike out for a ride. It was a Honda CBR 600, a sports motorcycle with some impressive capabilities. Little wonder then that he opened it up once he hit the road. But maybe he opened it up a little bit too much? He was clocked at over 150mph on the freeway near Miami Beach in Florida. He was going so fast that the local police decided it was too dangerous to chase him. They only caught up with him when he ran out of gas.

When challenged to explain his actions, Pineda gave an interesting response:

The thing is when you ride the motorcycle you can’t let the motorcycle get control of you…and at that moment the motorcycle took control of me and I just kept going faster and faster.

As one journalist put it, Pineda was suggesting that the machine made him do it.

It’s a not-altogether unfamiliar idea. Many is the time I’ve wondered whether I was wielding, or yielding to, the technology that suffuses my daily life. But would this ‘machine made me do it’ excuse ever hold up in practice? Could we avoid the long arm of the law by blaming it all on the machine?

That’s the question I’ve been asked to answer. I cannot hope to answer it definitively, but I can hope to shed some analytical clarity on the topic and make some contribution to the debate. I’ll try to make two arguments: (i) that in order to answer the question you need to disambiguate the complexity of human-machine relationships; and (ii) that this complexity casts doubt on many traditional (and not-so-traditional) theories of responsibility. In this way, studying the complexity of human-machine relationships may be a greater boon to the hard incompatibilist school of thought than fundamental discoveries in human neuroscience will ever be. This last bit is controversial, but I’ll try to motivate it towards the end of this talk.


1. The Four Fundamental Relationships between Man and Machine
When we say ‘the machine made me do it’ what exactly are we saying? Consider three short stories, each of them based on real life events, each of them pointing to a different relationship between man and machine:

Story One - GPS Car Accident: In March 2015, a car being driven by the Chicagoan Iftikhar Hussain plunged 37 feet off the derelict Cline Avenue Bridge in Southwest Chicago. Mr Hussain survived the accident but his wife Zhora died of burns after the car burst into flames. At the time of the accident, Mr Hussain had been fixated on following his GPS and had not noticed the warning signs indicating that the bridge was closed.

Story Two - DBS-induced Johnny Cash Fan: Mr B suffered from Obsessive Compulsive Disorder. His doctors recommended that it be treated with an experimental form of deep brain stimulation therapy. This involved electrodes being implanted into his brain to modulate the activity in a particular sub-region. A control device could be used to switch the electrodes on and off. The treatment was generally successful, but one day while the device was switched on Mr B developed a strong urge to listen to the music of Johnny Cash. He bought all of Cash’s CDs and DVDs. When the device was switched off, the love of Johnny Cash dissipated.

Story Three - Robot Killer: In July 2015, at a Volkswagen production plant in Germany, a manufacturing robot killed a machine technician. The 22-year-old was helping to assemble the machine when it unexpectedly grabbed him and crushed him against a metal plate. The company claimed the accident was not the robot’s fault. It was due to human error. The story was reported around the world under the headline ‘Robot kills man’.

In the first story, one could argue that GPS made the man do it, but it sounds like a dodgy way of putting it. There was no coercion. Surely he shouldn’t have been so fixated? In the second story, one could argue that the DBS made the man like Johnny Cash, but there is something fuzzy about the relationship between the man and the machine. Is the machine part of the man? Can you separate the two? And in the third story, one could argue that the machine did something to the man, but again it feels like there is a complicated tale to be told about responsibility and blame. Was human error really the cause? Whose error? Can a machine ever ‘kill’ a man?

I think it’s important that we have an analytical framework for addressing some of this complexity. Human agency is itself a complex thing. It has mental and physical components. To exercise your agency you need to think about your actions: you need to translate desires into intentions by combining them with beliefs. You also need to have the physical capacity to carry out those intentions. It is the combination of these mental and physical components that is essential for responsibility. Indeed, its essentiality is embedded in the fundamental structure of a criminal offence. As every first-year law student learns (in the Anglo-American world at least), a crime consists of two fundamental elements: a guilty mind (mens rea) and a guilty act (actus reus). You need both for criminal liability.

This requires nuance in any conversations about machines making us do things. It forces us to ask the question: how exactly does the machine interfere with our agency? Does it interfere with the mental components or the physical ones? What I want to suggest here is that there are four specific types of machine ‘interference’ that can arise, and that these four types of interference settle into two general types of man-machine relationship. This might sound a little confusing at first, so let’s unpack it in more detail.

The two general relationships are (I) the outsourcing relationship; and (II) the integration relationship. The outsourcing relationship involves human beings outsourcing some aspect of their agency to a machine. In other words, they get the machine to do something on their behalf. This outsourcing relationship divides into two major sub-types: (a) the outsourcing of action-recommendations, i.e. getting the machine to decide which course of action would be best for you and then implementing that action through your own physical capacities (possibly mediated through some machine like a car or a bike) - this is an indirect interference with mental aspects of agency; and (b) the outsourcing of action-performances, i.e. you decide what the best course of action is and get a machine to physically implement it - this is an interference with physical aspects of agency. From the three stories given above, story one would seem to involve the outsourcing of action-recommendations: the GPS told the man where to go and the man followed the instructions. And story three would seem to involve the outsourcing of action-performances: somebody decided that an industrial robot was the fastest and most efficient way to assemble a car and designed it to perform certain actions in a controlled environment.

This brings us to the integration-relationship. This involves human beings integrating a machine into their own biology. In other words, it involves the fusion of their biological wet-ware with technological hard-ware. The second story clearly involves some form of human-machine integration. The DBS device is directly incorporated into the patient’s brain. But again, there are different forms of machine integration. The brain itself is a complex organ. Some brain activities are explicit and conscious — i.e. they are directly involved in the traditional mental aspects of agency — others are implicit and subconscious — they seem to operate on the periphery of the traditional mental aspects of agency. The changes made by the device could manifest in conscious reasoning and decision-making, or they could operate below the level of conscious reasoning and decision-making. This suggests to me that the integration-relationship divides into two major sub-types: (c) bypassing-integration, i.e. the machine integrates with the implicit, subconscious aspects of brain activity and so bypasses the traditional mental capacities of agency; and (d) enhancing-integration, i.e. the machine integrates with the explicit, conscious aspects of the brain and enhances traditional mental capacities of agency.* I suspect the story of Mr B involves bypassing-integration as opposed to enhancing-integration: the device presented him with a new desire. Although he was consciously aware of it, it was not something that he could rationally reflect upon and decide to endorse: its incorporation into his life was immediate and overwhelming.

This gives us a hierarchically-organised, somewhat complex taxonomy of human-machine relationships. I have tried to illustrate it in the diagram below. Note that my initial description of the relationships doesn’t even do justice to their complexity. There are important questions to be asked about the different ways in which a machine might bypass explicit mental processing and the different degrees of performance outsourcing. Some of this complexity will be teased apart in the discussion below. For now, this initial description should give you a sense of the framework I am proposing.


2. A Compatibilistic Analysis of the ‘Machine made me do it’ Excuse
I said at the outset that disambiguating the complexity of human-machine relationships would help us to analyse the ‘machine made me do it’ excuse. But how? I’ll make a modest proposal here. Each of the four relationships identified above involves machinic interference with human agency. Therefore, each of them falls — admittedly loosely — within the potential scope of the ‘machine made me do it’ label. By considering each type of interference, and its impact on responsibility, in turn, we can begin to approach an answer to our opening question: will the excuse ever work?

To help us do that, we need to think about the conditions of responsibility, i.e. what exactly is it that makes a human being responsible for their actions? There are many different accounts of those conditions. The classic Aristotelian position holds that there are two fundamental conditions of responsibility: (i) the volitional condition, i.e. the action must be a voluntary performance by the agent and (ii) an epistemic condition, i.e. the agent must know what they were doing. There has been much debate about the first of those conditions; relatively less about the second (though this is now beginning to change). In relation to the first, much of the debate has centred on the need for free will and the various different accounts of what it takes to have free will.

I cannot do justice to all the different accounts of responsibility that have been proposed on foot of this debate. For ease of analysis, I will limit myself to the standard compatibilistic theories of responsibility, i.e. those accounts that hold it is possible for an agent to voluntarily and knowingly perform an act even if their actions are causally determined. Compatibilistic theories of responsibility reject the claim that in order to be responsible for what you do you must be able to do otherwise - thus they are consistent with the deterministic findings of contemporary neuroscience. They argue that having the right causal relationship to your actions is all that matters. In an earlier post, I suggested that these were the four most popular compatibilistic accounts of responsibility:

Character-based accounts: An agent is responsible for an action if it is caused by that agent, and not out of character for that agent. Suppose I am a well-known lover of fine wines. One day I see a desirable bottle of red. I decide to purchase it. The action emanated from me and was consistent with my character. Therefore I am responsible for it (irrespective of whether my character has been shaped by other factors). The most famous historical exponent of this view of responsibility was David Hume. Some feel it is a little bit too simplistic and suggest various modifications.

Second-order desire accounts: An agent is responsible for an action if it is caused by the first-order desire of that agent and that first-order desire is reflectively endorsed by a second order desire. I have a first order desire for wine (i.e. I want some wine). I buy a bottle. My first order desire is endorsed by a second order desire (I want to want the wine). Therefore, I am responsible.

Reasons-responsive accounts: An agent is responsible for an action if it is caused by their decision-making mechanism (mental/neurological) and that decision-making mechanism is responsive to reasons. In other words, if the mechanism was presented with different reasons for action it would produce different results in at least some possible worlds. I like wine. This gives me a reason for buying it. But suppose that in some shops the wine is prohibitively expensive and buying it would mean I lose out on other important foodstuffs. This gives me a reason not to buy it. If I purchase wine in one shop, I can be said to be responsible for this decision if in some of the other shops where it is prohibitively expensive I would not have purchased wine. This is a somewhat complicated theory, particularly when you add in the fact that there are different proposals regarding how sensitive the decision-making mechanism needs to be.

Moral reasons-sensitivity accounts: An agent is responsible for an action if it is caused by their decision-making mechanism and that mechanism is capable of grasping and making use of moral reasons for action. This is very similar to the previous account, but places a special emphasis on the ability to understand moral reasons.


Each of these accounts fleshes out the connection an agent must have to their acts in order to be held responsible for those acts. They each demand a causal connection between an agent and an act; they then differ on the precise mental constituents of agency that must be causally involved in producing the act. What I would say — and I won’t have time to defend this claim here — is that they each ultimately appeal to some causal role for explicit, consciously represented mental constituents of agency. In other words, they all say that in order to be responsible you must consciously endorse what you do and consciously respond to different reasons for action. The link can be direct and proximate, or indirect and distal, but it is still there.**

With that in mind we can now ask the question: how would these different accounts of responsibility make sense of the four different machinic interferences with agency outlined above? Here’s my initial take on this.

The first type of interference involves the outsourcing of action-recommendations to a machine — just as Iftikhar Hussain outsourced his route-planning to his GPS. Initially, it would appear that this does nothing to undermine the responsibility of the agent. The machine is just a tool of the agent. The agent can take onboard the machine’s recommendation without any direct or immediate interference with character, rational reflection or reasons-responsivity. But things get a little bit more complicated when we think about the known psychological effects of such outsourcing. Psychologists tell us that automation bias and complacency are common. People get comfortable handing over authority to machines. They stop thinking. The effect has been demonstrated among pilots and other machine operators relying on automated control systems. The result is that actions are no longer the product of rational reflection or of a truly reasons-responsive decision-making mechanism. This might lend some support to the ‘machine made me do it’ line of thought.

The one wrinkle in this analysis comes from the fact that most compatibilistic theories accept that the link between our actions and our conscious capacities need not be direct. If you fall asleep while driving and kill someone as a result, your responsibility can be traced back in time to the moments before you fell asleep — when you continued to drive despite being drowsy. If we know that outsourcing action-recommendations can have these distorting effects on our behaviour, then our responsibility may be traced back in time to when we chose to make use of the machine. But what if such outsourcing is common from birth onwards? What if we grow up never knowing what it is like to not rely on a machine in this way? More on this later.

The second type of interference involves the outsourcing of action-performances to a machine. The effect of this on responsibility really depends on the degree of autonomy that is afforded to the machine. If the machine is a simple tool — like a car or a teleoperated drone — then using the machine provides no excuse. The actions performed by the machine are (presumably) direct and immediate manifestations of the agent’s character, rational reflection, or reasons responsive decision-making mechanism. If the machine has a greater degree of autonomy — if it responds and adapts to its environment in unpredictable and intelligent ways — then we open up a potential responsibility gap. This has been much discussed in the debate about autonomous weapons systems. The arguments there tend to focus on whether doctrines of command responsibility could be used to fill the responsibility-gap or, more fancifully, on whether machines themselves could be held responsible.

The third type of interference involves the bypassing-integration of a machine into the agent’s brain. This would seem to straightforwardly undermine responsibility. If the machine bypasses the conscious and reflective aspects of mental agency, then it would seem wrong to say that any resultant actions are causally linked to the agent’s character, rational reflection, reasons-responsivity and so on. So in the case of Mr B, I would suggest that he is not responsible for his Johnny Cash loving behaviour. The only complication here is that once he knows that the machine has this effect on his agency — and he retains the ability to switch the machine on and off — one might be inclined to argue that he acquires responsibility for those behaviours through his continued use of the machine. But this argument should be treated with caution. If the patient needs the machine to treat some crippling mental or physical condition, then he is faced with a very stark choice. Indeed, one could argue that patients facing such a stark choice represent pure instances of the ‘machine made me do it’ excuse. Their choices are coerced by the benefits of the machine.

The fourth type of interference involves the enhancing-integration of a machine into the agent’s brain. This might be the most straightforward case. If the machine really does have an enhancing effect, then this would not seem to undermine responsibility. If anything, it might make the agent more responsible for their actions. This is a line of argument that has been made by Nicole Vincent and her colleagues on the Enhancing Responsibility project. The major wrinkle with this argument has to do with the ‘locus of control’ for the machine in question. In the case of DBS, the patient can control the operation of the device themselves. Thus they have responsibility both for the initial use of the device and any downstream enhancing effects it may have (except in some extreme cases where they lack the capacities for responsibility prior to switching on the machine). In other words, in the DBS case the locus of control remains with the agent and so it seems fair to say they retain responsibility when the machine is being used. But what if the machine is not directly controlled by the agent? What if the switching on and off of the device is controlled by another agent, or by some automated, artificially intelligent computer program? In that case it would not seem fair to say that the agent retains responsibility, no matter what the enhancing effect might be. That seems like a classic case of manipulation.

Which brings me to my final argument…


3. The Machine-based Manipulation Argument
Everything I have just argued assumes the plausibility of the compatibilist take on responsibility. It assumes that human beings can be responsible for their actions, even if everything they do is causally determined. It suggests that machinic-interference in human agency can have complex effects on human responsibility, but it doesn’t challenge the underlying belief that humans can indeed be responsible for their actions.

This is something that hard incompatibilists dispute. They think that true moral responsibility is impossible. It is not compatible with causal determinism; nor is it compatible with indeterminism. One of the chief proponents of hard incompatibilism is Derk Pereboom. I covered his defence of the view on my blog in 2015. His main argument against the compatibilist take on responsibility is a variation on a manipulation argument. The starting premise of this argument is that if the action of one agent has been manipulated into existence by another agent, then there is no way that the first agent can be responsible for the action. So if I grab your hand, force you to take hold of a knife, and then use your arm to stab the knife into somebody else, there is no sense in which you are responsible for the stabbing.

Most people will accept that starting premise. Manipulation involves anomalous and unusual causation. In the example given, the manipulator bypasses the deliberative faculties of the agent. The agent is forced to do something without input from their agency-relevant capacities. How could they be responsible for that? (Even if they happen to like the outcome.)

Pereboom develops his argument by claiming that there is no difference between direct manipulation by another agent and other forms of external causation. He builds his argument slowly by setting out four thought experiments. The first thought experiment describes a case in which a neuroscientist implants a device into someone’s brain to manipulate their deliberation about some decision. This doesn’t bypass their deliberative faculties, but it nevertheless seems to undermine responsibility. The second thought experiment involves the same set up, only this time the neuroscientist manipulates the deliberative faculties at birth. Again, Pereboom argues that this seems to undermine responsibility. The third thought experiment is similar to the second, except this time the agent’s deliberative faculties are manipulated (effectively brainwashed) by the agent’s peers as they are growing up. It seems iffy to ascribe responsibility there too. Then the fourth thought experiment involves generic social, biological and cultural determination of an agent’s deliberative faculties. Pereboom challenges proponents of compatibilism to explain the difference between the fourth case and the preceding three. If they cannot, then the compatibilist take on responsibility is wrong.

Pereboom’s argument has been challenged. One common response is that his jump from the third thought experiment to the fourth is much too quick. It is easy to see how responsibility is undermined in the first two cases because there is obvious manipulation by an outside agent. It is also pretty clear in the third case. All three involve anomalous, manipulative causation. But there is nothing equivalent in the fourth case. To suppose that it involves something anomalous or responsibility-undermining is to beg the question.

Now, I’m not saying that this defence of compatibilism is any good. It would take a much longer article to defend that point of view (if you’re interested some of the relevant issues were thrashed out in my series of posts on Pereboom’s book). The point I want to make in the present context is relatively simple. If machinic interference with human agency becomes more and more common, then we are going to confront many more cases of anomalous causation. The lines between ordinary agency and manipulated agency will be much more blurry.

This could be a real threat to traditional theories of responsibility and agency. And it could be more of a threat than fundamental discoveries in the neuroscience of human behaviour. No matter how reductionistic or mechanistic explanations of human behaviour become, it will always be possible for philosophers to argue that they do nothing to unseat our commonsense intuitions about free will and responsibility — that our commonsense intuitions can be made to cohere with the findings of neuroscience. It is less easy to account for machinic interference in the same way. Machinic interference does not merely reveal something about the mechanisms of human behaviour; it actively intervenes in that behaviour.


* You may ask: why couldn’t the integration be disenhancing? It probably could be, but here I assume that most forms of integration are intended to be enhancing and that they succeed in this aim. If they were unsuccessful, that would add further complexity to the analysis, but the general line of argument would not change.

 ** I include the ‘indirect and distal’ comment in order to allow for the possibility of tracing responsibility for an unconscious act at T2 back in time to a conscious act at T1.

Monday, June 27, 2016

Episode #6 - Deborah Lupton on the Quantified Self


This is the sixth episode in the Algocracy and Transhumanism podcast. In this episode, I talk to Deborah Lupton about her book The Quantified Self (Polity Press 2016). Deborah is a Centenary Research Professor at the University of Canberra in Australia. She is a widely-published scholar. Her current research focuses on a variety of topics having to do with digital sociology and the impact of technology on human life. Our conversation is divided into three main topics: (i) what is the quantified self? (ii) how is the 'self' affected by self-tracking technologies? and (iii) what are the political and social consequences of self-tracking?

You can listen to the episode below. You can download the mp3 here. You can also subscribe via Stitcher and iTunes (RSS feed).



Show Notes

  • 0:00 - 0:30 - Introduction
  • 0:30 - 8:05 - What is the quantified self? Is 'self-tracking' a better term?
  • 8:05 - 11:30 - Are we all self-trackers?
  • 11:30 - 14:25 - What kinds of data are being tracked?
  • 14:25 - 16:20 - Who is attracted to the quantified self movement?
  • 16:20 - 21:20 - What is the link between self-tracking and gamification?
  • 21:20 - 26:10 - Does self-tracking help to promote autonomy and self-control?
  • 26:10 - 28:30 - Does self-tracking contribute to a culture of narcissism?
  • 32:00 - 43:13 - The metaphysics of the self in the QS movement: reductionism, dualism and cyborgification
  • 43:13 - 46:40 - Do the benefits of self-tracking help to normalise mass surveillance?
  • 51:00 - 54:00 - The Quantified Self and the Neoliberal State
  • 54:00 - 57:30 - Self-tracking and the Risk Society
  • 57:30 - End - The involuntary imposition of self-tracking
 
Sunday, June 19, 2016

Episode #5: Hannah Maslen on the Ethics of Neurointerventions

This is the fifth episode in the Algocracy and Transhumanism Podcast. In this episode I speak to Hannah Maslen. Hannah is a research fellow at the Uehiro Centre for Practical Ethics in Oxford and is affiliated with the Oxford Martin School. Her research focuses on ethical issues in general, but she has a particular interest in the ethics of neurointerventions and the philosophy of punishment. In this episode, we talk primarily about her work on neurointerventions.

We start by explaining what a neurointervention is and then look at three main issues: (i) how neurointerventions could be used to treat certain psychiatric disorders (specifically anorexia nervosa) and how that might impact on autonomy; (ii) how we might be able to enhance responsibility through neurointerventions like modafinil and (iii) the role of remorse in the criminal justice system and how we might be able to encourage people to feel remorse through neurointerventions.

You can listen to the podcast below. You can download the mp3 at this link. You can also subscribe on Stitcher and iTunes (via the RSS feed).




Show Notes:

0:00 - 0:30 - Introduction to Hannah
0:30 - 7:05 - What is a neurointervention?
7:05 - 11:40 - Do neurointerventions bypass our rational capacities? Do they treat us passively rather than actively?
11:40 - 17:45 - Using Deep Brain Stimulation to affect the motivation, control and affective responses of patients with anorexia nervosa.
17:45 - 23:30 - Can we alter someone's desires with DBS? The importance of the wanting/liking distinction
23:30 - 27:50 - How might the use of DBS affect someone's autonomy?
27:50 - 31:25 - Neurointerventions and value pluralism
31:25 - 34:50 - Could we enhance responsibility through the use of neurointerventions?
34:50 - 38:00 - Should some people be under a moral/legal duty to enhance (e.g. doctors and pilots)?
38:00 - 41:20 - Would responsibility-enhancement lead us to ignore systemic causes of disadvantage?
41:20 - 43:10 - Won't robots be doing all the responsible work anyway?
43:10 - 52:15 - What is remorse and what role does it play in the criminal justice system?
52:15 - 59:50 - Could we use neurointerventions to enhance remorse?
59:50 - End - Would enhanced remorse be less valuable?

Friday, June 17, 2016

The Ethics of Algorithmic Outsourcing: An Analysis




Our smart phones, smart watches, and smart bands promise a lot. They promise to make our lives better, to increase our productivity, to improve our efficiency, to enhance our safety, to make us fitter, faster, stronger and more intelligent. They do this through a combination of methods. One of the most important is outsourcing,* i.e. by taking away the cognitive and emotional burden associated with certain activities. Consider the way in which Google maps allows us to outsource the cognitive labour of remembering directions. This removes a cognitive burden and potential source of anxiety, and enables us to get to our destinations more effectively. We can focus on more important things. It’s clearly a win-win.

Or is it? Evan Selinger (whom I recently interviewed for my podcast) has explored this question in his work. He has his concerns. He worries that an over-reliance on certain outsourcing technologies may be corrosive of virtue. Indeed, in one case he goes so far as to suggest that it may be turning people into sociopaths. In this post I want to unpack the arguments Selinger advances in support of this view.


1. The Varieties of Outsourcing
Before we get into the arguments themselves, it is worth injecting some precision into our understanding of technological outsourcing. The term is being used in a stipulative fashion in this post. It obviously has economic connotations. In economic discussions ‘outsourcing’ is used to describe the practice whereby economic actors rely on agents or institutions outside of themselves to perform one of their key productive tasks. This is usually done in the interests of cost/efficiency savings. These economic connotations are useful in the present context. Indeed, in my interview with him, Selinger suggested that he liked the economic connotations associated with the term. But he also noted that it applies to many domains. Across all those domains it has a fundamental structure that involves accepting the assistance of another agent or thing and relying on that thing to perform some important labour on your behalf (for more details on this fundamental structure check out the podcast).

For the remainder of this post, we’ll focus solely on ‘technological outsourcing’. We’ll define this as the practice whereby people get their computers, smart phones, etc. to perform certain day-to-day tasks that they would otherwise have to perform themselves.

It is worth drawing a distinction between two major variants of such technological outsourcing:

Cognitive Outsourcing: Using a device to perform cognitive tasks that you would otherwise have to perform yourself, e.g. getting a calculator to perform arithmetical operations instead of doing them in your head.
Affective Outsourcing: Using a device to perform an affective task that you would otherwise have to perform yourself, e.g. getting an app to send ‘I love you’ texts to your partner at predetermined or random intervals.

Presumably there are many more variants that could be discussed. In previous posts I have talked about the decision-making and motivational outsourcing that is sometimes performed by technological devices. But I’ll stick with just these two variants for now. I do so because, as Selinger points out in one of his articles, there is a sense in which cognitive outsourcing has become ethically normalised. Outsourcing cognitive tasks like memorising phone numbers, directions, mathematical operations and so forth is utterly unexceptionable nowadays. And although people like Nicholas Carr may worry about the degenerative effects of cognitive outsourcing, most of the ethical concern is focused solely on the efficacy of such technologies: if they are better at performing the relevant cognitive task, then we are pretty comfortable with handing things over to them. Affective outsourcing might be more ethically interesting insofar as it has a direct impact on our interpersonal relationships (which is where most affective labour is performed).

Are there many apps that facilitate affective outsourcing? Selinger discusses some in his work. I’ll just mention two here. Both have to do with the affective labour performed within romantic relationships. As anyone who is in a relationship will know, there are many simple but regular affective tasks that need to be performed in order to maintain a smoothly functioning relationship. One of them is to send occasional messages to your partner (when you are apart) to remind them that you still care. The problem is that sometimes we get distracted or have other more important things to do. How can we perform this simple affective task at such times? Apps like Romantimatic and Bro App provide the answer. They allow you to send automatic text messages to your partner at appropriate times. Romantimatic seems fairly well-intentioned, and is primarily designed to remind you to send messages (though it does include pre-set messages that you can select with minimal effort). Bro App is probably less well-intentioned. It is targeted at men and is supposed to allow them to spend more time with the ‘bros’ by facilitating automated messaging. Jimmy Fallon explains in the clip below.



To some extent, these are silly little apps. Selinger speculates that the BroApp may just be a performance art piece. Nevertheless, they are still available for download, and they are interesting because they highlight a potential technological trend which, according to Selinger anyway, might have important repercussions for the ethics of interpersonal relationships.
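To make the mechanic concrete, here is a minimal sketch of what apps like Romantimatic or BroApp do at their core: dispatch pre-set messages at scheduled or random intervals. All the names, messages, and interval choices below are illustrative assumptions of mine; neither app publishes its internals.

```python
import datetime
import random

# Hypothetical pre-set messages of the kind such apps let users select.
PRESET_MESSAGES = [
    "Thinking of you!",
    "Hope your day is going well.",
    "Can't wait to see you later.",
]

def schedule_messages(start, count, min_gap_hours=3, max_gap_hours=8, seed=0):
    """Return (send_time, message) pairs at pseudo-random intervals.

    The interval bounds are assumptions for illustration; a seeded RNG
    is used so the schedule is reproducible.
    """
    rng = random.Random(seed)
    schedule = []
    t = start
    for _ in range(count):
        # Wait a random number of hours, then queue a pre-set message.
        t += datetime.timedelta(hours=rng.uniform(min_gap_hours, max_gap_hours))
        schedule.append((t, rng.choice(PRESET_MESSAGES)))
    return schedule

start = datetime.datetime(2016, 6, 28, 9, 0)
for when, msg in schedule_messages(start, count=3):
    print(when.strftime("%H:%M"), msg)
```

The point of the sketch is how little of the user's agency it involves: once the messages and intervals are chosen, the ‘affective labour’ runs without any further conscious input from the sender.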

Why is this? I detect three main lines of argument in his work. Let’s go through each of them in some detail.


2. The Agency and Responsibility Problem
The first major objection has to do with agency and responsibility. Selinger puts it like this in his article ‘Don’t outsource your dating life’:

The more hard and tedious work outsourcing can remove from our lives, the more tempting it will be to take advantage of it.
And yet, that’s exactly why we need to be vigilant.
Defenders of outsourcing believe the Do It Yourself (DIY) ethics has too much cultural prominence and is wrongly perceived as evidence of thrift or even moral virtue. They attribute this mistake to people having difficulty placing a proper value on their time.
Setting a value on time, however, is more complicated than outsourcing boosters lead us to believe.
First, outsourcing can impact responsibility. A calendar isn’t just a tool for offloading memorising commitments. It can be very helpful in organizing busy lives and ensuring we meet our obligations. But delegation can be negative. If you only think of a lover because your phone prompts you to, maybe you’re just not that into your lover.


There are a few things going on in this quote and I want to disaggregate them. The last line, in particular, seems to blend into the issue of deceptiveness (discussed below) since it suggests that dependency on an app of this sort highlights a lack of sincerity in your affective performances. We’ll get to that. For now, let’s just focus on the responsibility bit.

I take it that the idea motivating this objection is that responsibility is an important social and personal virtue. In other words, it is good for us to take responsibility for certain aspects of our lives. It would be wrong (and impossible) to take responsibility for everything (you can’t ‘do it all yourself’) but there are definitely some things for which you should take responsibility. One of those things is your interactions with your romantic partner. Reliance on outsourcing apps and devices undermines that responsibility. It creates a dependency. This means we no longer have the right responsibility-connection with certain reactions we encourage in our partners. This prevents us from taking responsibility for those reactions.

But what is the responsibility-connection? One could write an entire treatise on that topic, but in broad outline you are responsible for an outcome if (a) you cause that outcome through your actions; (b) you know about the important factual and moral properties of that outcome (the epistemic condition); and (c) you voluntarily willed that outcome (the volitional condition). The worry then is that reliance on outsourcing apps prevents one or more of these conditions from being satisfied. For example, you could argue that because the app automatically selects and sends the messages, you don’t cause the eventual outcome (and you may lack awareness of it) and hence you cannot be responsible for it.

Putting it more formally:

  • (1) Taking responsibility for certain outcomes is an important social and personal virtue.
  • (2) In order to take responsibility for certain outcomes you must cause, know and voluntarily will those outcomes.
  • (3) Reliance on outsourcing apps undermines one or more of these conditions.
  • (4) Therefore, reliance on outsourcing apps undermines responsibility (from 2 and 3)
  • (5) Therefore, reliance on outsourcing apps undermines an important social and personal virtue.


Is this argument any good? I have some fondness for it. I too think that responsibility is an important social and personal virtue, although I reject the desire to over-responsibilise personal failings (a desire that is commonplace in some economic/political ideologies). I think cultivating the power of agency and responsibility is important. For me, the good life doesn’t simply consist in being a passive recipient of the benefits of technology and progress; it requires taking an active role in that progress. And I sometimes worry that technology denies us this active role.

That said, I think the plausibility of this argument will always depend on the mechanics of the particular app. It is not obvious to me that an app like Romantimatic or even Bro App will always sever the responsibility-connection. Using a bit of technology to achieve an outcome that you both desire and intend does not undermine your responsibility. So if you genuinely want to make your partner happy by sending them text messages at certain intervals, and if you merely use the app to execute your intention, then I think you are still sufficiently connected to the outcome to take responsibility for it. Indeed, there is an argument to the effect that technological assistance of this sort actually increases your responsibility because it enhances your ability to perform tasks of this sort.

On the other hand, it does depend on how the app executes your desires and intentions. The more autonomous the app is, the greater the risk of undermining your responsibility. This is something that is widely discussed in the AI and robotics literature. If apps of this sort become more independent from their users — if they start generating and sending message-content that is well-outside the bounds of what their users actually desire and intend — then I think the responsibility argument could work. (For more on responsibility and automation, see my recent paper on robotics and the retribution gap).


3. The Deception/Inauthenticity Objection
A second argument against these apps is that they encourage us to be deceptive in our interpersonal relationships. This deceptive intent is clearly evinced by the makers of the BroApp. The rationale behind the app is that it helps you to maintain the pretense of communicating with your romantic partner when you are actually spending time with your friends (the ‘bros’). Selinger endorses this interpretation of these apps in his article ‘Today’s apps are turning us into sociopaths’**

…the reason technologies like BroApp are problematic is that they’re deceptive. They take situations where people make commitments to be honest and sincere, but treat those underlying moral values as irrelevant — or, worse, as obstacles to be overcome.

To put this into argumentative form:

  • (6) It is a bad thing to be deceptive in your interpersonal relationships.
  • (7) Affective outsourcing apps (such as BroApp) encourage deceptive interpersonal communications.
  • (8) Therefore, these apps are bad things.

The argument is too general in this form. Premise (6) needs to be finessed. There may be contexts in which a degree of deceptiveness is desirable in interpersonal relationships. The so-called ‘white lies’ that we tell to keep our partners happy may often be justifiable (I pass no judgment on that here). And premise (7) is problematic insofar as not all of these apps encourage ‘deceptiveness’, at least not within the strict definition of that term. Deceptiveness is usually taken to denote an active intent to mislead another as to the content or the truth of what you are saying. It’s not clear to me that automated messaging always involves that kind of active intent. I think someone could set up a pre-scheduled set of messages that sincerely and truthfully convey their feelings toward another person.

What might be really problematic is not so much that these apps encourage deceptiveness but that they undermine the real value of certain types of interpersonal communication. Selinger highlighted this in the interview I did with him. He suggested that there are some interpersonal communications in which the value of the communication lies in the fact that it is an immediate, deliberate and conscious representation of how you feel about another person. In other words, the real value of receiving an affectionate message from a loved one lies in the fact that the other person is really thinking about you at that moment — that they are being intentional and ‘present’ in the relevant communicative context. The problem is that the very logic of these apps — the automated outsourcing of communications — serves to corrode this ‘real’ value. It might make your partner feel good for a moment or two. But it does so at the expense of being consciously present in the communication:

  • (9) The real value of certain types of interpersonal communication is that they are immediate, conscious and intentional representations of how we feel.
  • (10) Affective outsourcing apps (by their very nature) create communications that are not immediate, conscious and intentional.
  • (11) Therefore, affective outsourcing apps undermine the real value of certain types of interpersonal communication.

This is a more formidable objection because it gets to the heart of what these apps are. It is, if you like, as close to an in-principle objection as you are likely to get. But there are several limitations to bear in mind.

First, it clearly doesn’t apply to outsourcing tout court — sometimes there is no value to the immediate conscious performance of an act. The value of dividing up a bill at a restaurant does not lie in the conscious performance of the arithmetical operation; it lies in getting the right result. That said, I think it might apply to more contexts than we first realise. For instance, I happen to believe that the real value of certain cognitive acts lies in the fact that they are immediate, conscious and intentional. That’s how I feel about the value of participation in deliberative political processes, and it is part of the reason why I worry about algorithmic outsourcing in political contexts.

Second, even in those contexts in which it does apply there may be important tradeoffs to consider. To say that the ‘real’ value of certain communications lies in their immediate, conscious and intentional nature is not to say that ‘all’ the value lies in those properties.

Finally, it is worth noting that technology is not the only thing that undermines this value. Internal automaticity (i.e. the unconscious/subconscious performance of certain acts) also undercuts this value and it can be prompted by many non-technological factors (e.g. mindless repetition of a task, stress or busyness at work).


4. The Virtue-Corrosion Objection
The final objection is simply a more general version of the preceding one. The idea is that the good life (i.e. the life of meaning and flourishing) is one in which people develop (and are encouraged to develop) virtuous character traits — e.g. thoughtfulness, generosity, charity, mercy, courageousness and so on. These are performative virtues. That is to say, the goodness lies in the ability to perform actions in a manner that exemplifies and enriches those virtuous traits. It is not enough for you to simply facilitate an outcome that is generous or charitable — you must be responsible for those outcomes through your own actions.

The worry about outsourcing apps — possibly of all kinds but particularly those involving affective performances (since affect is often central to virtue) — is that they discourage the cultivation of these performative virtues. Think about the logic underlying something like Romantimatic or BroApp. It is an instrumentalising logic. It assumes that the goal of interpersonal communication is to produce a desirable outcome in the mind of your romantic partner. But maybe that’s not all it is about. Maybe it is also about cultivating virtuous performances in interpersonal communication. By outsourcing the activity you completely miss this.

I’m not going to spell this out as a logical argument. As I say, I think it may just be a more general repackaging of the preceding objections. Nevertheless, I think it is a valuable repackaging insofar as it highlights once again how the good life involves more than just being the passive recipient of the benefits that technology brings. The virtues are constitutive of the good; they are not mere instruments of the good. If we create outsourcing apps that both allow and encourage us to bypass the virtues we could be in for trouble.

It goes without saying that this objection still needs to be treated with caution. Not all forms of outsourcing will corrode virtue. Many could free us up to become more virtuous by allowing us to focus on the skills and practices that matter most. Also, we must bear in mind what I previously said about responsibility. Outsourcing doesn’t necessarily entail the severing of the responsibility-connection that is needed for cultivating the virtues.

As with most technology-related debates, the issue is not black-and-white.


* You could just call this ‘automation’, but I want to stick with ‘outsourcing’ since it is the term used in the work of Evan Selinger.

** I have no idea whether Selinger was responsible for the article headlines. It’s quite possible (probable) that he was not.

Wednesday, June 15, 2016

Getting Better: From Naive to Deliberate Practice



I would like to be a better swimmer, a better runner, a better guitarist, a better singer, a better lecturer, a better writer, a better organiser, a better partner, and generally a better person. But how can I achieve all these things? I have no method. I approach things haphazardly, hoping that sheer repetition will lead to betterment. This hope is probably forlorn.

Here’s an example. Years ago, I decided I wanted to improve my swimming. I joined a swimming club. I took a few lessons. And I participated in some rigorous training sessions. Eventually I felt pretty good about myself. I was able to swim for long periods without getting tired. I swam a lot. Maybe 2.5–3km, three to four days a week. By the end of it, being able to stay in a swimming pool for a couple of hours on end was all I achieved. I never really improved my speed or the quality of my strokes. Indeed, I often didn’t bother to check if I’d improved. My feeling good about myself was enough. So after an initial ascent up the learning curve, I plateaued.

I have repeated the same pattern throughout my life. I have initial bursts of enthusiasm in which I try to develop some skillset, and once I achieve reasonable proficiency (or what seems like reasonable proficiency to me), I just repeat myself ad nauseam. According to Anders Ericsson’s new book Peak I’m not alone. I, like many others, have the wrong approach to improving my abilities. If I want to get really good at something, I need to move away from the naive belief that sheer repetition leads to improvement. I need to embrace what he calls deliberate practice.

I’m fascinated by Ericsson’s ideas for several reasons. Peak is a pop-science book. It exposes and distills the results of Ericsson’s empirical research with expert performers. Some of those results have already leaked into the popular consciousness, most infamously through Malcolm Gladwell’s formulation of the 10,000 hour rule (though both Ericsson and Gladwell claim that this has been misinterpreted). Cutting through this popular noise and hearing from the original source is a useful corrective. Also, I’m interested in how Ericsson’s ideas can be applied not just to the sports interests and hobbies that I happen to have, but also to my day-to-day work as an academic. Cal Newport — who I’ve written about before — thinks that academics have much to learn by incorporating the principles of deliberate practice into their work lives. I’m not entirely convinced, but I want to experiment with the idea over the coming months.

But to do that I need to have a clear sense of what deliberate practice entails. Strangely enough, this is something that is quite ‘hidden’ in Ericsson’s book. You have to wade through more than a hundred pages to get a summary of the key principles and even then Ericsson adds complications by highlighting forms of practice that are not quite deliberate but would be beneficial. Indeed, by my count, Ericsson identifies four different kinds of practice in the book. My goal in this post is to offer a useful one-stop summary of all four.

I’ll start with a synoptic view. The image below depicts the four kinds of practice. They are arranged along a spectrum. At the extreme left you have the weakest form of practice — i.e. the one that is least likely to improve your skillset — and at the extreme right you have the strongest form of practice — i.e. the one that is most likely to improve your skillset. Arranging the forms of practice in this manner gives us a useful starting principle: if you want to get better at something, try to move your practice style along the spectrum so you get as close as possible to the extreme right (deliberate practice). The principle is useful because, as we will see in a moment, deliberate practice is a very particular thing. It is not possible in every domain. So you can’t adopt the principle of engaging in deliberate practice all the time. But when it is not possible, you want to get as close to it as possible.




So much for the synoptic view. Now let’s get into the nitty-gritty of the individual practice styles. We’ll start with ‘Naive Practice’. That can be defined in the following manner:

Naive Practice: You start off with a general sense of what you want to achieve. You get some kind of instruction (either through independent research or from an actual coach). You practice until you reach a satisfactory level of performance — one that is more or less automatic. And then you plateau, i.e. you stay in a comfort zone.

This is obviously the kind of practice I engaged in when I wanted to improve my swimming. It is the kind of practice that most people engage in. Ericsson cites some interesting research suggesting that many forms of professional education result in this plateau-ing effect. For example, doctors have been found not to improve significantly in their abilities despite years of experience. They reach a plateau of performance.

‘Purposeful practice’ is an improvement on this. It can be defined in the following manner:

Purposeful Practice: A concerted effort to improve some skillset by doing four things:
(a) Practicing with some well-defined, specific goals in mind, i.e. targets for improvement. In the case of swimming the specific targets might be daily/weekly/monthly improvements in time. It is important that these goals are not vague and woolly, e.g. I want to get faster; they need to be specific, e.g. I want to improve my 100m freestyle by .05 of a second. As Ericsson puts it, ‘purposeful practice is about putting together a bunch of baby-steps to reach a longer-term goal’ (2016, 15).
(b) Being focused, i.e. giving the task your full attention while you are trying to reach your specific goal. This is important because you want to avoid the automaticity that is common in naive practice. Sticking with the swimming example, you want to make sure you are not just going through the motions. You are concentrating on your stroke and technique during your practice sessions.
(c) Using feedback to improve your performance, i.e. testing your performance to see whether or not you are getting better. So in the swimming example, this would mean actually recording your times, tracking how many strokes you take to get from one end of the pool to another, using a coach to give guidance on your technique and so on. The idea is that you can’t really improve unless you know whether your efforts have been successful.
(d) Getting out of your comfort zone, i.e. pushing yourself to improve. This is one of the main things that separates purposeful practice from naive practice. The plateau-ing effect results from people staying in a comfort zone. If you want to get off the plateau you need to move outside the comfort zone. In the case of swimming this would mean constantly trying to improve your times and your stroke rate — aiming to beat your personal best and so on.

In most cases of purposeful practice (and indeed the other forms of practice that we are about to discuss) it helps if you break the task you are trying to master down into a number of sub-tasks. You then try to master your technique at each of those subtasks. In the case of swimming this might mean working on breathing technique and legwork separately, honing those skills, and then knitting them back together into the overall performance. This ability to hone the essential, but oftentimes boring, sub-skills is one of the key attributes of expert performers.

The next step along the spectrum is ‘proto-deliberate practice’, but you can’t understand what that means until you know what deliberate practice is. So we’ll skip ahead to that. ‘Deliberate practice’ is the gold standard (according to Ericsson). It is the kind of practice you find among elite performers in fields like sport and music. It is similar to purposeful practice insofar as it involves focus, specific goals, moving outside your comfort zone and feedback. Where it differs is in the body of knowledge upon which it is based. It is possible in fields where there are clear objective (or semi-objective) standards of success and a well-developed knowledge base about effective training routines and methods. There are also usually expert coaches who can provide assistance to practitioners. In essence, deliberate practice is informed purposeful practice.

Ericsson argues that deliberate practice has seven key elements to it:

Deliberate Practice: Informed purposeful practice. It includes:
(a) Developing skills that others have figured out, i.e. relying on an established knowledge base about what training techniques and methods work.
(b) Practicing with well-defined, specific goals in mind (same as for purposeful practice)
(c) Consistently moving outside your comfort zone (same as for purposeful practice)
(d) Using full attention and conscious actions, i.e. similar to the ‘focus’ element of purposeful practice. Involves being fully present and engaged in your training.
(e) Using feedback and modification to reach your goals (same as for purposeful practice)
(f) Developing effective mental representations, i.e. developing new cognitive frameworks that allow you to master the skillset. It has been found in study after study of expert performers that they have developed advanced mental representations that allow them to overcome limitations faced by other performers. The classic example of this comes from the study of chess players and how they represent the pieces on the chessboard. Unlike novice chess players, they do not ‘see’ individual pieces on the board; instead, they see classic game sequences and patterns. This allows them to ‘chunk’ information into higher order representations and overcome limitations of working memory.
(g) Building upon your preexisting skillset, i.e. building newly acquired skills on top of previously acquired skills. This is the way that most learning is done and highlights the importance of mastering ‘foundational’ skills.

Skills like violin-playing and swimming lend themselves to deliberate practice. And it is these kinds of skills that Ericsson has focused on in his research. In both cases, there is a highly developed body of knowledge and reasonably clear objective standards of success or failure (opinions of experts in the case of music and times in the case of swimming). The problem is that not every domain is like that. Sometimes we are trying to develop skills in a relatively novel domain, where we lack well-developed pathways to success. In other cases, we may lack obvious objective standards of success. Indeed, many aspects of professional life are like this. I’d be hard pushed to come up with clear objective standards of success in academia (there are some metrics like number of papers published/cited, funding awards won and so on — whether they actually delineate what it means to be successful is another question).

Still, in cases like this, Ericsson thinks that we can approximate the gold standard of deliberate practice. I call these cases of ‘proto-deliberate practice’:

Proto-deliberate Practice: An attempt to approximate deliberate practice by adopting the purposeful approach and doing three things:
(a) Finding an expert (or experts) whose performance clearly outstrips that of others in that domain. Bear in mind that this will be difficult when there are no obvious objective standards of success and that your selection of the ‘best’ may be biased in various ways.
(b) Figuring out what they do differently. Again, this can be difficult. You need to know what other people are doing and how the expert performers differ from those norms. Trying to get them to unpack their mental representations can be a useful technique but bear in mind they may not even know what they do differently.
(c) Trying to develop training routines that allow you to follow their lead. This will involve some trial and error as you try to figure out how you can change your performance to approximate what they are doing. Again, copious use of feedback and modification is desirable at this stage.


So there you have it. A quick overview of the four main types of practice. It’s worth closing with two observations. First, by identifying the difference between deliberate practice and other less successful forms of practice, Ericsson is not suggesting that we should all try to approximate deliberate practice all the time. Far from it. There are many cases where naive practice is sufficient. In my case, this might be true of swimming. I doubt I’ll ever be a competitive swimmer (even at an amateur level). I don’t really need to hone my technique. I just need to be able to enjoy the experience. I’ve probably achieved that. You should save deliberate practice for the things you really want to get better at.

Second, there is an interesting relationship between deliberate practice and creativity. As described, you might think that deliberate practice is the antithesis of creativity. After all, it seems to be about copying the training techniques of others. It’s not about creating new styles of performance or developing wholly novel domains for human enjoyment. But this is probably wrong. Expert musicians and sports-stars are often quite creative. They simply build the creativity upon a strong foundational skillset. This argument requires elaboration, but I think it is right. To take an example that is close to my own heart, I think having good foundational knowledge of Standard English is useful if you want to adopt creative writing styles. In a sense, it’s only if you have mastered the conventional that you are able to appreciate the opportunities for creativity.

Monday, June 13, 2016

The Philosophical Merits of Effective Altruism (Index)




I recently completed a series of posts on the merits of effective altruism. The series was an extended analysis of Iason Gabriel's article 'Effective Altruism and its Critics' (originally titled 'What's wrong with effective altruism'). The series included a bonus guest post from Gabriel himself in which he expanded upon one of the arguments in the paper.

Anyway, I thought it might be worth indexing all the entries in this series in this post. I thought it might also be worth providing links to some of the key texts arguing for and against the principles of effective altruism. I make no claims respecting the comprehensiveness of this list; they are simply some of the resources I have read and found interesting. If you would like to suggest other links in the comments section, please feel free to do so.


Series Index




Further Reading






  • MacFarquhar, Larissa - Strangers Drowning (not a philosophical analysis or defence of EA but an interesting set of biographical sketches of people who try to do extreme moral good)

  • Krishna, Nakul - 'Add Your Own Egg', The Point Magazine (a tribute to Bernard Williams that includes some criticisms of EA)

  • Alexander, Scott - 'Beware Systemic Change', SlateStarCodex (defence of EA against the 'systemic change' objection)


Saturday, June 11, 2016

Episode #4 - Evan Selinger on Algorithmic Outsourcing and the Value of Privacy

evanselinger_wiredopinion

This is the fourth episode in the Algocracy and Transhumanism Podcast. In this episode I interview Evan Selinger. Evan is a Professor of Philosophy at the Rochester Institute of Technology. He is a widely published scholar in the ethics and law of technology. He is currently working on a book with Brett Frischmann entitled Being Human in the 21st Century which is due out with Cambridge University Press in 2017. In this interview we talk about two main topics: (i) the ethics of technological outsourcing and (ii) the value of privacy and the nature of obscurity.

You can listen to the podcast below. You can download the mp3 here. You can also subscribe via Stitcher and iTunes.



Show Notes

0:00 - 1:25 - Introduction to Evan  
1:25 - 8:50 - What is algorithmic outsourcing? The fundamental structure of outsourcing
8:50 - 14:50 - Technological and non-technological examples of outsourcing   
14:50 - 18:50 - Cognitive vs Affective Outsourcing  
18:50 - 28:00 - Outsourcing interpersonal communications  
28:00 - 32:20 - Objections to the outsourcing of interpersonal communications  
32:20 - 41:00 - Is this a problem with technology or something technology encourages?  
41:00 - 45:50 - What is privacy?  
45:50 - 53:45 - What is obscurity? How does it relate to privacy?  
53:45 - 1:02:20 - Is obscurity under threat?  
1:02:20 - End - Isn't privacy dead? Shouldn't we embrace total transparency?   

Links

  • Allo - Google Messaging App
  • Crystal - app for determining psychological profiles