Wednesday, April 29, 2015

Is Automation Making us Stupid? The Degeneration Argument Against Automation






This post continues my discussion of the arguments in Nicholas Carr’s recent book The Glass Cage. The book is an extended critique of the trend towards automation. In the previous post, I introduced some of the key concepts needed to understand this critique. As I noted then, automation arises whenever a machine (broadly understood) takes over a task or function that used to be performed by a human (or non-human animal). Automation usually takes place within an intelligence ‘loop’. In other words, the machines take over from the traditional components of human intelligence: (i) sensing; (ii) processing; (iii) acting and (iv) learning. Machines can take over some of these components or all of them; humans can be fully replaced or they can share some functions with machines.

This means that automation is a complex phenomenon. There are many different varieties of automation, and they each have a unique set of properties. We should show some sensitivity to those complexities in our discussion. This makes broad-brush critiques pretty difficult. In the previous post I discussed Carr’s claim that automation leads to bad outcomes. Oftentimes the goal behind automation is to improve the safety and efficiency of certain processes. But this goal is frequently missed due to automation complacency and automation bias. Or so the argument went. I expressed some doubts about its strength toward the end of the previous post.

In this post, I want to look at another one of Carr’s arguments, perhaps the central argument in his book: the degeneration argument. According to this argument, we should not just worry about the effects of automation on outcomes; we should worry about its effects on the people who have to work with or rely upon automated systems. Specifically, we should worry about its effects on the quality of human cognition. It could be that automation is making us stupider. This seems like something worth worrying about.

Let’s see how Carr defends this argument.


1. The Whitehead Argument and the Benefits of Automation
To fully appreciate Carr’s argument, it is worth comparing it with an alternative that defends the contrary view. One such argument can be found in the work of Alfred North Whitehead, the famous British philosopher and mathematician, probably best known for his collaboration with Bertrand Russell on Principia Mathematica. In his 1911 work, An Introduction to Mathematics, Whitehead made the following claim:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

Whitehead may not have had modern methods of automation in mind when he wrote this — though his work did help to inaugurate the computer age — but what he said can certainly be interpreted by reference to them. For it seems like Whitehead is advocating, in this quote, the automation of thought. It seems like he is saying that the less mental labour humans need to expend, the more ‘advanced’ civilization becomes.

But Carr thinks that this is a misreading, one that is exacerbated by the fact that most people only quote the line starting ‘civilization advances…’ and leave out the rest. If you look at the last line, the picture becomes more nuanced. Whitehead isn’t suggesting that automation is an unqualified good. He is suggesting that mental labour is difficult. We have a limited number of ‘cavalry charges’. We should not be expending that mental effort on trivial matters. We should be saving it for the ‘decisive moments’.

This, then, is Whitehead’s real argument: the automation of certain operations of thought is good because it frees us up to think the more important thoughts. To put it a little more formally (and in a way that Whitehead may not have fully endorsed but which suits the present discussion):


  • (1) Mental labour is difficult and finite: time spent thinking about trivial matters limits our ability to think about more important ones.

  • (2) It is good if we have the time and ability to think the more important thoughts.

  • (3) Therefore, it would be good if we could reduce the amount of mental labour expended on trivial matters and increase the amount spent on important ones.

  • (4) Automation helps to reduce the amount of mental labour expended on trivial matters.

  • (5) Therefore, it would be good if we could automate more mental operations.


This argument contains the optimism that is often expressed in debates about automation and the human future. But is this optimism justified?


2. The Structural Flaws in the Whitehead Argument
Carr doesn’t think so. Although he never sets it out in formal terms, I believe his reasons can be understood in light of the preceding version of Whitehead’s argument. Look again at premise (4) and the inference from premises (3) and (4) to the conclusion (5). Do you think this is a sound argument? You shouldn’t. There are at least two problems with it.

In the first place, it seems to rely on the following implicit premise:


  • (6) If we reduce the amount of mental labour expended on trivial matters, we will increase the amount expended on more important ones.


This premise — which is distinct from premise (1) — is needed if we wish to reach the conclusion; without it, the conclusion does not follow. And once this implicit premise is made explicit, you begin to see where the problems might lie. It could be that humans are simply lazy: if you free them from thinking about trivial matters, they won’t expend the excess mental labour on thinking the hard thoughts. They’ll simply double down on other trivial matters.

The second problem is more straightforward, but again highlights a crucial assumption underlying the Whitehead argument. The problem is that the inference to (5) assumes that automation will always be focused on the more trivial thoughts, and that machines will never be able to take away the higher forms of thinking and creativity. This assumption may also turn out to be false.

We have then two criticisms of the Whitehead argument. I’ll give them numbers and plug them into an argument map:


  • (7) In freeing us up from thinking trivial thoughts, automation may not lead to us thinking the more important ones: we may simply double-down on other trivial thoughts.

  • (8) Automation may not be limited to trivial matters; it may take over the important types of thinking too.




But this is to speak in fairly abstract terms. Are there any concrete reasons for thinking these implicit premises and underlying assumptions do actually count against the Whitehead argument? Carr thinks that there are. In particular, he thinks that there is some strong evidence from psychology suggesting that the rise of automation doesn’t simply free us up to think more important thoughts. On the contrary, he thinks that the evidence suggests that the creeping reliance on automation is degenerating the quality of our thinking.


3. The Degeneration Argument
Carr’s argument is quite straightforward. It starts with a discussion of the generation effect. This is something that was discovered by psychologists in the 1970s. The original experiments had to do with memorisation and recall. The basic idea is that the more cognitive work you have to do during the memorisation phase, the better able you are to recall the information at a future date. Suppose I gave you a list of contrasting words to remember:

HOT: COLD
TALL: SHORT

How would you go about doing it? Unless you have some familiarity with memorisation techniques (like the linking or loci methods), you’d probably just read through the list and start rehearsing it in your mind. This is a reasonably passive process. You absorb the words from the page and try to drill them into your brain through repetition. Now suppose I gave you the following list of incomplete word pairs, and then asked you to both (a) complete the pairs; and (b) memorise them:

HOT: C___
TALL: S___

This time the task requires more cognitive effort. You actually have to generate the matching pair in your mind before you can start trying to remember the list. In experiments, researchers have found that people who were forced to take this more effortful approach were significantly better at remembering the information at a later point in time. This is the generation effect in action. Although the original studies were limited to rote memorisation, later studies revealed that it has a much broader application. It helps with conceptual understanding, problem solving, and recall of more complex materials too. As Carr puts it, these experiments show us that ‘our mind[s] rewards us with greater understanding’ when we exert more focus and attention.

The generation effect has a corollary: the degeneration effect. If anything that forces us to use our own internal cognitive resources enhances our memory and understanding, then anything that takes away the need to exert those resources will reduce our memory and understanding. This is what seems to be happening in relation to automation. Carr cites the experimental work of Christof van Nimwegen in support of this view.

Van Nimwegen has done work on the role of assistive software in conceptual problem solving. Some of you are probably familiar with the Missionaries and Cannibals game (a classic logic puzzle about ferrying a group of missionaries across a river without their being eaten by the cannibals). The game comes with a basic set of rules, and you must get the missionaries across the river in the fewest number of trips while conforming to those rules. Van Nimwegen performed experiments contrasting two groups of problem solvers on this game. The first group worked with a simple software program that provided no assistance to those playing the game. The second group worked with a software program that offered on-screen prompts, including details as to which moves were permissible.
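As an aside, the puzzle itself is a nice illustration of mechanical problem solving: its state space is so small that a simple breadth-first search finds the shortest sequence of crossings. Here is a minimal sketch in Python (the state representation and function name are my own, for illustration only, and have nothing to do with the experiments described above):

```python
from collections import deque

def solve_missionaries_and_cannibals(total=3, boat_capacity=2):
    """Breadth-first search for the shortest sequence of river crossings.

    A state is (missionaries_on_left, cannibals_on_left, boat_on_left)."""
    def safe(m, c):
        # On each bank, missionaries must not be outnumbered by cannibals
        # (unless there are no missionaries on that bank at all).
        other_m, other_c = total - m, total - c
        return (m == 0 or m >= c) and (other_m == 0 or other_m >= other_c)

    start, goal = (total, total, True), (0, 0, False)
    # All ways of loading the boat: at least one person, at most its capacity.
    loads = [(dm, dc)
             for dm in range(boat_capacity + 1)
             for dc in range(boat_capacity + 1)
             if 1 <= dm + dc <= boat_capacity]
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []            # walk back up the parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]    # start -> ... -> goal
        m, c, boat = state
        sign = -1 if boat else 1  # a crossing moves people off the boat's bank
        for dm, dc in loads:
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= total and 0 <= nc <= total and safe(nm, nc):
                nxt = (nm, nc, not boat)
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
    return None  # no solution for these parameters
```

In the classic three-missionary, three-cannibal version, the shortest solution takes eleven crossings.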

The results were interesting. People using the assistive software could solve the puzzles, and performed better at first thanks to the on-screen prompts, but they faded in the long run. The unassisted first group emerged as the winners: they eventually solved the puzzles more efficiently and with fewer wrong moves. What’s more, in a follow-up study performed eight months later, it was found that members of that first group were better able to recall how to solve the puzzle. Van Nimwegen went on to repeat this result in experiments involving different types of task. This suggests that automation can have a degenerating effect, at least when compared to traditional methods of problem-solving.

Carr suggests that other evidence confirms the degenerating effect of automation. He cites a study of accounting firms using assistive software, which found that human accountants relying on this software had a poorer understanding of risk. Likewise, he gives the (essentially anecdotal) example of software engineers relying on assistive programs to clean up their dodgy first-draft code. In the words of one Google software developer, Vivek Haldar, this has led to “Sharp tools, dull minds.”

Summarising all this, Carr seems to be making the following argument. This could be interpreted as an argument in support of premise (7), given above. But I prefer to view it as a separate counterargument because it also challenges some of the values underlying the Whitehead argument:


  • (9) It is good if humans can think higher thoughts (i.e. have some complexity and depth of understanding).

  • (10) In order to think higher thoughts, we need to engage our minds, i.e. use attention and focus to generate information from our own cognitive resources (this is the ‘generation effect’).

  • (11) Automation inhibits our ability to think higher thoughts by reducing the need to engage our own minds (the ‘degeneration effect’).

  • (12) Therefore, automation is bad: it reduces our ability to think higher thoughts.





4. Concluding Thoughts
What should we make of this argument? I am perhaps not best placed to critically engage with some aspects of it. In particular, I am not best placed to challenge its empirical foundation. I have located the studies mentioned by Carr and they all seem to support what he is saying, and I know of no competing studies, but I am not well-versed in the literature. For this reason, I just have to accept this aspect of the argument and move on.

Fortunately, there are two other critical comments I can make by way of conclusion. The first has to do with the implications of the degeneration effect. If we assume that the degeneration effect is real, it may not imply that we are generally unable to think higher thoughts. It could be that the degeneration is localised to the particular set of tasks that is being automated (e.g. solving the missionaries and cannibals game). And if so, this may not be a big deal. If those tasks are not particularly important, humans may still be freed up to think the more important thoughts. It is only if the effect is more widespread that a problem arises. And I don’t wish to deny that this could be the case. Automation systems are becoming more widespread and we now expect to rely upon them in many aspects of our lives. This could result in the spillover of the degeneration effect.

The other comment has to do with the value assumption embedded in premise (9) (which was also included in premise (2) of the Whitehead argument). There is some intuitive appeal to this. If anyone is going to be thinking important thoughts I would certainly like for that person to be me. Not just for the social rewards it may bring, but because there is something intrinsically valuable about the act of high-level thinking. Understanding and insight can be their own reward.

But there is an interesting paradox to contend with here. When it comes to the performance of most tasks, the art of learning involves transferring the performance from the conscious realm to the sub-conscious realm. Carr mentions the example of driving: most people know how difficult it is to learn how to drive. You have to perform a sequence of smoothly coordinated and highly unnatural actions. This takes a great deal of cognitive effort at first, but over time it becomes automatic. This process is well-documented in the psychological literature and is referred to as ‘automatization’. So, ironically, developing our own cognitive resources may simply result in further automation, albeit automation that is internal to us.

The assumption is that this internal form of automation is superior to the external form that comes with outsourcing the task to a machine. But is this necessarily true? If the fear is that externalisation causes us to lose something fundamental to ourselves, then maybe not. Could the external technology not simply form part of ‘ourselves’ (part of our minds)? Would externalisation not then be ethically on a par with internal automation? This is what some defenders of the extended mind hypothesis like to claim, and I discuss the topic at greater length in another post. I direct the interested reader there for more.

That’s it for this post.

Monday, April 27, 2015

The Automation Loop and its Negative Consequences



I’m currently reading Nicholas Carr’s book The Glass Cage: Where Automation is Taking Us. I think it is an important contribution to the ongoing debate about the growth of AI and robotics, and the future of humanity. Carr is something of a techno-pessimist (though he may prefer ‘realist’) and the book continues the pessimistic theme set down in his previous book The Shallows (which was a critique of the internet and its impact on human cognition). That said, I think The Glass Cage is a superior work. I certainly found it more engaging and persuasive than his previous effort.

Anyway, because I think it raises some important issues, many of which intersect with my own research, I want to try to engage with its core arguments on this blog. I’ll do so over a series of posts. I start today with what I take to be Carr’s central critique of the rise of automation. This critique is set out in chapter 4 of his book. The chapter is entitled ‘The Degeneration Effect’, and it makes a number of arguments (though none of them are described formally). I identify two in particular. The first deals with the effects of automation on the quality of decision-making (i.e. the outputs of decision-making). The second deals with the effects of automation on the depth and complexity of human thought. The two are connected, but separable. I want to deal with them separately here.

In the remainder of this post, I will discuss the first argument. In doing so, I’ll set out some key background ideas for understanding the debate about automation.


1. The Nature of the Automation Loop
Automation is the process whereby any action, decision or function that was once performed by a human (or non-human animal) is taken over by a machine. I’ve discussed the phenomenon before on this blog. Specifically, I have discussed the phenomenon of algorithm-based decision-making systems. They are a sub-type of automated system in which a computer algorithm takes over a decision-making function that was once performed by a human being.

In discussing that phenomenon, I attempted to offer a brief taxonomy of the possible algorithm-based systems. The taxonomy made distinctions between (i) human in the loop systems (in which humans were still necessary for the decision-making to take place); (ii) human on the loop systems (in which humans played some supervisory role) and (iii) human off the loop systems (which were fully automated and prevented humans from getting involved). The taxonomy was not my own; I copied it from the work of others. And while I still think that this taxonomy has some use, I now believe that it is incomplete. This is for two reasons. First, it doesn’t clarify what the ‘loop’ in question actually is. And second, it doesn’t explain exactly what role humans may or may not be playing in this loop. So let’s try to add the necessary detail now with a refined taxonomy.

Let’s start by clarifying the nature of the automation loop. This is something Carr discusses in his book by reference to historical examples. The best of these is the automation of anti-aircraft guns during and after WWII. Early on in that war it was clear that the mental calculations and physical adjustments needed to fire an anti-aircraft gun effectively were too much for any individual human to undertake. Scientists worked hard to automate the process (though, at least as I understand the history, they didn’t fully succeed until after the war):

This was no job for mortals. The missile’s trajectory, the scientists saw, had to be computed by a calculating machine, using tracking data coming in from radar systems along with statistical projections from a plane’s course, and then the calculations had to be fed automatically into the gun’s aiming mechanism to guide the firing. The gun’s aim, moreover, had to be adjusted continually to account for the success or failure of previous shots. 
(Carr 2014, p 35)

The example illustrates all the key components in an automation loop. There are four in total:

(a) Sensor: some machine that collects data about a relevant part of the world outside the loop, in this case the radar system.

(b) Processor: some machine that processes and identifies relevant patterns in the data being collected, in this case some computer system that calculates trajectories based on the incoming radar data and issues instructions as to how to aim the gun.

(c) Actuator: some machine that carries out the instructions issued by the processor, in this case the actual gun itself.

(d) Feedback Mechanism: some system that allows the entire loop to learn from its previous efforts, i.e. allows it to collect, process and act in more efficient and more accurate ways in the future. We could also call this a learning mechanism. In many cases humans still play this role by readjusting the other elements of the loop.


These four components should be familiar to anyone with a passing interest in cognitive science and AI. They are, after all, the components in any intelligent system. That is no accident. Since automated systems are designed to take over tasks from human beings they are going to try to mimic the mechanisms of human intelligence.

Automation loops of this sort will come in many different flavours, as many as there are combinations of sensor, processor, actuator and learning mechanism (up to the current limits of technology). A thermostat is a very simple type of automation loop: it collects temperature data, processes it by converting it into instructions for turning the heating system on or off, and uses negative feedback to constantly regulate the temperature in a room (modern thermostats like the Nest have more going on). A self-driving car is a much more complicated type of automation loop: it collects visual data, processes it quite extensively by identifying and categorising relevant patterns, and then uses this to issue instructions to an actuating mechanism that propels the vehicle down the road.
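To make the loop components concrete, here is a toy thermostat-style loop in Python. It is purely illustrative: the setpoint, hysteresis band, and heating/cooling rates are invented numbers, and a real thermostat would involve actual sensors, scheduling, and much else.

```python
def thermostat_loop(setpoint=20.0, hysteresis=0.5, steps=50):
    """A toy sense-process-actuate loop with negative feedback.

    Illustrative only: the heating and cooling rates are made up."""
    temperature = 15.0   # the bit of the world the loop regulates
    heater_on = False
    for _ in range(steps):
        # (a) Sensor: read the current temperature.
        reading = temperature
        # (b) Processor: turn the reading into an on/off instruction,
        #     with a hysteresis band to avoid rapid switching.
        if reading < setpoint - hysteresis:
            heater_on = True
        elif reading > setpoint + hysteresis:
            heater_on = False
        # (c) Actuator: the heater warms the room; heat also leaks away.
        temperature += 0.4 if heater_on else -0.2
    return temperature
```

Note that the feedback here is the simple negative kind: the output (room temperature) feeds back into the sensor on the next pass. Component (d), a learning mechanism that retunes the loop over time, is absent, which is typical of the simplest loops.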

Humans can play a variety of different roles in such automation loops. Sometimes they might be sensors for the machine, collecting and feeding it relevant data. Sometimes they might play the processing role. Sometimes they could be actuators, i.e. the muscle that does the actual work. Sometimes they might play one, two or all three of these roles. Sometimes they might share these roles with the machine. When we think about humans being in, on, or off the loop, we need to keep in mind these complexities.

To give an example, the car is a type of automation device. Traditionally, the car just played the part of the actuator; the human was the sensor and processor, collecting data and issuing instructions to the machine. The basic elements of this relationship remain the same, although there is now some outsourcing and sharing of sensory and processing functions with the car’s onboard computers. So, for example, my car can tell me how close I am to an object by making a loud noise; it can keep me travelling at a constant speed when cruising down a motorway; and it can even calculate my route and tell me where to go using its built-in GPS. I’m still very much involved in the loop, but the machine is taking over more of the functions I used to perform myself.

Eventually, the car will be a fully automated loop, with little or no role for human beings. Not even a supervisory one. Indeed, some manufacturers want this to happen. Google, reportedly, want to remove steering wheels from their self-driving cars. Why? Because it is only when humans take over that accidents seem to happen. The car will be safer if left to its own devices. This suggests that full automation might be better for the world.


2. The Consequences of Automation for the External World
Automation is undertaken for a variety of reasons. Oftentimes the motivation is benevolent. Engineers and technicians want to make systems safer and more effective, or they want to liberate humans from the drudge work, and free them up to perform more interesting tasks. Other times the motivation might be less benevolent. Greedy capitalists might wish to eliminate human workers because it is cheaper, and because humans get tired and complain too much.

There are important arguments to be had about these competing motivations. But for the time being let’s assume that benevolent motivations predominate. Does automation always succeed in realising these benevolent aims? One of Carr’s central contentions is that it frequently does not. There is one major reason for this. Most people adhere to something called the ‘substitution myth’:


Substitution Myth: The belief that when a machine takes over some element of a loop from a human, the machine is a perfect substitute for the human. In other words, the nature of the loop does not fundamentally change through the process of automation.


The problem is that this is false. The automated component of the loop often performs the function in a radically different way and this changes both the other elements of the loop and the outcome of the loop. In particular, it changes the behaviour of the humans who operate within the loop or who are affected by the outputs of the loop.

Two effects are singled out for consideration by Carr, both of which are discussed in the literature on automation:

Automation Complacency: People get more and more comfortable allowing the machine to take complete control.

Automation Bias: People afford too much weight to the evidence and recommendations presented to them by the machine.

You might have some trouble understanding the distinction between the two effects. I know I did when I first read about them. But I think the distinction can be understood if we look back to the difference between human ‘in the loop’ and ‘on the loop’ systems. As I see it, automation complacency arises in the case of a human on the loop system. The system in question is fully automated with some limited human oversight (i.e. humans can step in if they choose). Complacency arises when they choose not to step in. Contrariwise, automation bias arises in the case of a human in the loop system. The system in question is only partially automated, and humans are still essential to the process (e.g. in making a final judgment about the action to be taken). Bias arises when they don’t second-guess or go beyond recommendations given to them by the machine.

There is evidence to suggest that both of these effects are real. Indeed, you have probably experienced some of these effects yourself. For example, how often do you second-guess the route that your GPS plans for you? But so what? Why should we worry about them? If the partially or fully automated loop is better at performing the function than the previous incarnation, then isn’t this all to the good? Could we not agree with Google that things are better when humans are not involved?

There are many responses to these questions. I have offered some myself in the past. But Carr clearly thinks that these two effects have some seriously negative implications. In particular, he thinks that they can lead to sub-optimal decision-making. To make his point, he gives a series of examples in which complacency and bias led to bad outcomes. I’ll describe four of them here.

I’ll start with two examples of complacency. The first is the case of the Royal Majesty, a 1,500-passenger ocean liner that ran aground on a sandbar near Nantucket in 1995. The vessel had been travelling from Bermuda to Boston and was equipped with a state-of-the-art automated navigation system. However, an hour into the voyage a GPS antenna came loose and the ship proceeded to drift off course for the next 30 hours. Nobody on board did anything to correct the mistake, even though there were clear signs that something was wrong. They didn’t think to challenge the wisdom of the machine.

A similar example of complacency comes from Sherry Turkle’s work with architects. In her book Simulation and its Discontents she notes how modern-day architects rely heavily on computer-generated plans for the buildings they design. They no longer painstakingly double-check the dimensions in their blueprints before handing the plans over to construction crews. This results in occasional errors. All because they have become reluctant to question the judgment of the computer program.

As for bias, Carr gives two major examples. The first comes from drivers who place excessive reliance on GPS route planners. He cites the 2008 case of a bus driver in Seattle: the top of his bus was sheared off when he collided with a concrete bridge with a nine-foot clearance. He was carrying a high-school sports team at the time, and twenty-one of the students were injured. He said he did not see the warning lights because he was busy following the GPS instructions.

The other example comes from the decision-support software that is nowadays used by radiographers. This software often flags particular areas of an X-ray scan for closer scrutiny. While this has proven helpful in routine cases, a 2013 study found that it actually reduces the performance of expert readers in difficult cases. In particular, it was found that the experts tend to overlook areas of the scans not flagged by the software, but which could be indicative of some types of cancer.

These four examples support the claim that automation complacency and automation bias can lead to inferior outcomes.


3. Conclusion
But is this really persuasive? I think there are some problems with the argument. For one thing, some of these examples are purely anecdotal. They highlight sub-optimal outcomes in certain cases, but they involve no proper control data. The Royal Majesty may have run aground in 1995, but how many accidents have been averted by the use of automated navigation systems? And how many accidents have arisen through the fault of human operators? (I can think of at least two high-profile passenger-liner accidents in the past couple of years, both involving human error.) Likewise, the bus driver may have crashed into the bridge, but how many people have gotten to their destinations faster than they otherwise would have through the use of GPS? I don’t think anecdotes of this sort are a good way to reach general conclusions about the desirability of automation systems.

The work on radiographers is more persuasive since it shows a deleterious comparative effect in certain cases. But, at the same time, it also found some advantages to the use of the technology. So the evidence is more mixed there. Now, I wouldn’t want to make too much of all this. Carr provides other examples in the book that make a good point about the potential costs of automation. For instance, in chapter five he discusses some other examples of the negative consequences of automation and digitisation in the healthcare sector. So there may be a good argument to be made about the sub-optimal nature of automation. But I suspect it needs to be made much more carefully, and on a case-by-case basis.

In saying all this, I am purely focused on the external effects of automation, i.e. the effects with respect to the output or function of the automated system. I am not concerned with the effects on the humans who are being replaced. One of Carr’s major arguments is that automation has deleterious effects for them too, specifically with respect to the degeneration of their cognitive functioning. This turns out to be a far more interesting argument and I will discuss it in the next post.

Thursday, April 23, 2015

The Ethics of Having Children: Deontological Arguments



The having and begetting of children is central to human life. For many, it is a natural and unqualified good. The belief that your life is somehow incomplete or inferior if you do not have children persists in many cultures. Most people never question whether it is ethical to have children. But when you think about it, this is pretty odd. A child is a sentient being who is highly dependent on the care of other human beings (typically its biological parents). So if you do have children, you are voluntarily taking on a significant moral responsibility, accepting into your care a being capable of suffering great moral harms. This is not something to be taken lightly.

Consequently, it seems legitimate to ask the question: is it (morally) right to have children? In other words, is the having and begetting of children morally permissible, impermissible, obligatory or supererogatory? These are questions addressed at considerable length in Christine Overall’s book Why Have Children? The Ethical Debate. One of the unique features of the book is how it takes seriously the special responsibilities and risks that procreation imposes on women. And one of the remarkable conclusions of the book is that most of the arguments that have been made in defence of the permissibility of having children are bad or unpersuasive. I want to see why this is the case.

I start today by looking at deontological arguments in favour of having children. As you know, deontological arguments are always premised on the notion that certain actions are intrinsically good/bad and hence it is permissible/impermissible to perform them, irrespective of their consequences. Or that we have a moral duty/obligation to perform certain acts. In the case of deontological arguments for having children, this means that the claim shared by all of the following arguments is that having children is intrinsically good or required by some obligation/duty. How can this be defended?
We’ll look at six possibilities, all discussed in Overall’s book. The first two claim that having and bearing children is intrinsically good; the other four focus on alleged duties to have children. All are problematic.


1. The Argument from the Intrinsic Heroism of Bearing Children
The simplest deontological argument for having children works from the claim that bearing children is itself an act exemplifying intrinsic goods. This is a tricky concept to get your head around — I have heard it said that claims about the intrinsic worthiness of particular actions are the last refuge of those with strong moral commitment but no actual argument. But it has been defended by at least some people.

Overall cites the example of Rosalind Hursthouse, who has argued that in bearing children women do something that is intrinsically morally heroic. She draws explicitly on an analogy between going into battle and bearing a child. Men have historically been praised for their courage and heroism in participating in warfare. Oftentimes the praise has little direct connection to the moral merits of the warfare itself. Thus, for example, although many agree that WWI was a hugely wasteful conflict, many still praise the heroism of those who participated, believing that their actions cultivated a number of key moral virtues.

Hursthouse wonders why similar praise is not heaped on women for the act of bearing children. She thinks that in performing this act, women also do something that cultivates moral virtues like courage, fortitude and endurance. In fact, she goes further and suggests that women have the great advantage of being born with the biological capacity to bear children and that failure to exercise that capacity may (tentatively) imply that one hasn’t done anything worthwhile with one’s life.

The suggested argument here is the following (this is my reconstruction, not Overall’s):


  • (1) It is morally permissible (perhaps praiseworthy) to engage in actions that cultivate moral virtues like courage, fortitude and endurance.
  • (2) In exercising their innate capacity to bear children, women engage in an action that cultivates moral virtues like courage, fortitude and endurance.
  • (3) Therefore, it is morally permissible (perhaps praiseworthy) to have children.


Overall thinks that Hursthouse’s argument is refreshing, insofar as it takes seriously the differential risks and responsibilities of procreation, but she finds it pretty unpersuasive nonetheless. Even if we grant that premise (2) is true (and it may not always be true), we are forced to consider the flaws in premise (1). It is simply not true that it is always permissible or praiseworthy to perform acts that cultivate moral virtues. The soldier example itself shows this, if we consider it in more depth. There may be some sense in which a Nazi officer exhibited the virtues of fortitude and endurance when performing his duties at a concentration camp, but the value of those virtues would never be sufficient to outweigh the harm that his actions caused. He ought not to have done what he did; he cannot be praised by simply ignoring the consequences of his actions.

The same reasoning applies to prospective mothers. However noble the act of bearing children might be, we cannot lose sight of the fact that it results in the creation of a sentient life. If the quality of life of that being is deficient, or if it involves great suffering, it automatically cancels out the goods associated with the act of bearing the child. Furthermore, we must remember that children cannot consent to their own creation, nor can we claim that we bear them for their own benefit: they do not exist prior to procreation and gestation, and so cannot be the recipients of moral gifts.

In the end, Overall suspects that Hursthouse’s commitment to the moral worthiness of child bearing relies implicitly on claims about the value of things other than the act of bearing children itself. The argument is thus unpersuasive, as it collapses into other reasons for having children.






2. The Argument from the Continuation of Lineage
So what might these other reasons be? One popular claim is that having children is intrinsically good because it helps to continue one’s family lineage. In this context, “family lineage” can be understood in a number of distinct ways. Three common understandings are of lineage as the continuation of a family name, the possession of family property, or the perpetuation of a certain shared genetic makeup.

The argument seems to work like this:


  • (4) It is intrinsically valuable to continue one’s family lineage (i.e. name, property possession and genetic makeup).
  • (5) Bearing and raising children ensures that one continues one’s family lineage.
  • (6) Therefore, bearing and raising children is intrinsically valuable.


We cannot doubt the popularity of this rationale for having children down through the ages, particularly in terms of continuing the family name and the possession of property. Anyone who follows period dramas set in aristocratic courts or the fantasy series Game of Thrones will have some sense of this. Also, I know from my own family history that the continued possession of farmland has been a powerful rationale for marriage and procreation. That said, in most of those cases it is not the continuation of the family lineage itself that seems to be valuable but rather the fact that continuation of the lineage is linked to other social goods (e.g. power and prestige). So it’s not clear that the continuation of lineage is valuable enough to offset the moral risks and responsibilities associated with procreation.

The other problem is that the social goods that are linked to the continuation of the family name or the possession of property are, to some extent, socially arbitrary. Historically, women could not continue their family name through procreation, only men could do that (though there is some cross-cultural variation here, e.g. continuation of Jewish identity is matrilineal not patrilineal). Even if we have some system that is less sexist, it still wouldn’t be clear why the continuation of a name is intrinsically valuable. Likewise, property inheritance regimes often denied ownership to some children and hence made their lives more difficult than they might otherwise have been. And, in any event, the question of inheritance arises after birth; it cannot be a reason for procreation in the first place. The arbitrariness of the link between procreation and these other social goods creates problems for both premise (4) and (5) of the argument. It creates problems for premise (4) insofar as it suggests that the continuation of one’s family lineage is not necessary for other goods or sufficient to offset risks and responsibilities arising from procreation. And it suggests problems for premise (5) insofar as it suggests that bearing and raising children may not guarantee these other social goods.

But this is to focus on continuation of lineage in terms of name and property possession. What about the continuation of a certain genetic lineage? This has also been cited as a rationale for procreation. It was perhaps most popular during the heyday of the eugenics movement. The motivating principle behind that movement was that those with a superior genetic makeup had a duty to perpetuate that makeup (and oftentimes also to stamp out ‘inferior’ genetic makeups). The association with eugenics may, in itself, be enough to damn this version of the argument. Most people will be familiar with the sordid history of that movement, though it should be noted that many of the more sordid aspects were associated with negative eugenics (i.e. the stamping out of ‘inferior’ genetic makeups) not with positive eugenics (i.e. the continuation of the ‘superior’ lines). Is there anything at all to be said in favour of the latter?

Probably not. As Overall notes, to believe that there are ‘superior’ genetic lines is itself problematic. It is often a mask for ableist, classist and racist assumptions. Added to this, to believe that one’s own genetic makeup is intrinsically worthy of continuation seems pretty conceited, and runs contrary to what we know about biological evolution. It is now pretty clear that the continual mixing of genetic lines is essential to ensuring the health and well-being of future generations. After all, if continuing a particular genetic line was the optimal choice, we should be favouring incestuous unions. The problem is that the mixing of genetic lines is going to undermine the preservation of any one line in particular. A final problem with this argument is that the desire to perpetuate a genetic lineage is often premised on the desire to preserve your own phenotypic traits, but there is, of course, no guarantee that your biological child will share those traits. And assuming that they will (or that they must) may create an unhealthy and unrealistic set of expectations.

For all these reasons, the continuation of family lineage argument seems problematic.




3. The General Argument from Duty to Others
All the remaining arguments focus on the duty to have children. We can start with a very general version of this argument and introduce specific variations as we go along:


  • (7) You ought to fulfil your moral duties to others.
  • (8) Having and raising children fulfils your moral duties to others.
  • (9) Therefore, you ought to have and raise children.


The first premise of this argument is straightforward, perhaps even tautologous. The second premise is the key. Who could one possibly owe a duty to that would require the having and raising of children? Obviously not the child, since they do not yet exist. So who else could it be?

Overall notes that there are many ‘pronatalist’ pressures in society. That is to say: pressures to have and raise children. These pressures emanate from many sources and can certainly create the feeling that one owes it to others to have children. A common source of these pronatalist pressures is one’s family, e.g. parents or grandparents. As she puts it:

Some people long to become grandparents. Such people may put pressure on their adult children to “start a family”. Pronatalist pressures are still ubiquitous, and the resulting tendency to define womanliness in terms of procreation and manliness in terms of begetting has not disappeared…Having children thereby becomes a means to conformity, a way of giving the community the gendered behavior it expects. Married persons who are childless…are then…bombarded by suggestions that they should “get busy and have a baby”. 
(Overall 2012, p. 64)

Within this quote there is already an objection to the general argument from duty to others. Put simply, there is no real moral duty to have children emanating from these social pressures. Instead, there is a pressure to conform to social norms, which may themselves have negative aspects (e.g. the perpetuation of gendered social dynamics). In addition to this, as Overall points out, having children out of a sense of duty to others will rarely be adequate to “sustain the great amount of commitment, work and devotion that go into child rearing”. Not focusing one’s attention on the real, sentient human being that will be created through the act of procreation is never a good start.

On the specific claim that there might be a duty to one’s parents, Overall is equally dismissive. Partly for the reasons just cited, but also because if the belief is that one needs to honour and respect one’s parents, then there are perfectly good ways to do that without incurring the same level of moral risk and responsibility. One could, for example, provide them with care when they need it; help them achieve and perpetuate other values; encourage their life endeavours and so on.

In sum, arguments from a duty to others face an uphill battle. But let’s focus on a few more specific variants.






4. The Argument from Promise to One’s Partner
The first more specific variation rests on the claim that one might have a duty to have children that emanates from a promise to one’s partner. This argument cannot apply to everyone; only to those who have made a relevant promise to their partner. It works something like this:


  • (10) You have a moral duty to fulfil your promises.
  • (11) You may have promised your partner that you would have a child with them.
  • (12) Therefore, you may have a moral duty to have a child with your partner.


This argument can be dispatched with relative ease. It only gets off the ground because people believe that promising X incurs a duty to X. This belief is problematic. We could grant that a promise to X gives rise to a prima facie duty to X; but we cannot grant that it gives rise to an all-things-considered duty to X. The promise to X must always be weighed against countervailing moral considerations. In this case, one’s lack of enthusiasm for the endeavour (if present), coupled with the moral risks and responsibilities that having a child entails, would always seem sufficient to outweigh the prima facie duty incurred by promising.

In fact, I would go further than this. Some people have this notion that promising gives rise to content-independent moral reasons for action. That is to say, that in promising X you have a reason to X, irrespective of the actual content of X. I think this is implausible. I think a promise to X only gives rise to a reason (possibly a duty) to X if X is itself morally permissible. If I promise my father that I will cut the grass, I may have a moral reason (possibly even a duty) to cut the grass. This is because grass cutting is a morally permissible act. But if I promise my father that I will kill another man, I cannot possibly have a moral reason (or duty) to kill another man. This is because killing another man is not morally permissible. In other words, promising is not a type of moral alchemy: it cannot convert a morally impermissible act into a duty.

To bring this back to the debate about children, the problem is that the moral permissibility of having children is itself under dispute. To argue that promising incurs a duty is, consequently, to presume what needs to be established.





5. The Argument from Religious Duty
Another class of specific duty arguments arises from certain religious beliefs and commitments. Many are familiar with the biblical injunctions to “be fruitful and multiply”, often taken to extremes by individual religious movements, e.g. the Quiverfull movement. Arguments in this vein are usually premised on a commitment to the Divine Command Theory of metaethics, which holds that the content of moral duties is determined by God’s command. The argument works like this:


  • (13) God specifies the content of your moral duties through commands.
  • (14) God has commanded you to have children.
  • (15) Therefore, you have a moral duty to have children.


I find this argument pretty difficult to evaluate in a short space of time. I have already written so much about the general problems with Divine Command Theory and the specific problems attached to the interpretation of alleged commands. I can’t really do justice to the full sweep of problems in this post. I direct interested readers elsewhere. I also find this to be one of the more frustrating passages of Overall’s chapter on deontological arguments insofar as I feel she doesn’t do complete justice to the issues raised by this type of argument.

Nevertheless, I will follow suit and say three things about this argument. First, I would point out that this argument will hold no appeal for the non-believer. The believer might argue that the duty applies irrespective of belief in God, but as I have discussed elsewhere, this is a dubious claim. Second, I would point out that, even if you are a theist, there are serious philosophical problems associated with the belief that God specifies the content of your duties via His commands. These problems arise because of classical and modern variants of the Euthyphro dilemma, as well as the existence of alternative, more plausible, metaethical theories. Finally, even if one were committed to the DCT, it is often difficult to interpret specific biblical passages as divine commands. The authorship of the biblical text may cause one to doubt that it contains genuine commands, and oftentimes what counts as a general command is disputed. Many sophisticated bible-believers hold that certain commands (e.g. the command to commit genocide against the Amalekites) must be understood in their proper context and do not apply to modern audiences. Maybe the same is true of the alleged biblical commands to procreate?




6. The Argument from Duty to the State
A final variation on the argument from duty to others focuses on one’s duties to the state (i.e. the social, political and/or legal authority under which one lives). Here, we seem to be drawn back into the sordid world of the eugenics movement which, particularly in its fascist guise, focused on the duty to procreate for the benefit of the state. This may, once again, be enough to impugn this argument, but let’s see if something more favourable can be said.

In doing so, we must distinguish between different state or society-centric arguments for having and rearing children. There are consequentialist arguments, which are concerned with the future benefits of one’s offspring to the society; and there are deontological arguments which are concerned with a specific duty one owes to the state or society. The latter could be forward-looking, in the sense that the duty might be thought to flow from the positive consequences of having children (a kind of rule utilitarianism). Or they could be backward-looking, in the sense that the duty might be thought to flow from the fact that one has been benefitted by the society already and one must now repay this debt.
Because she deals with consequentialist arguments elsewhere, Overall focuses her attention on the backward-looking variant of the argument in her chapter on deontological arguments. But it is difficult to formulate an actual argument here. Instead we seem to be left with a general principle to evaluate:


  • (16) If your society has benefitted you in the past (e.g. by providing you with an education, access to healthcare, a participative public space, employment and fulfillment etc), then you have a duty to repay that debt by contributing to the creation of new citizens.


There are lots of problems with this principle. For one thing, its conditional nature implies that there will be some people who don’t owe this duty. People who have not been benefitted by society, who have been victims of poverty, crime or war, will have nothing they need to repay. For another thing, it is very difficult to see why the mere fact that one benefitted should give rise to a specific procreative duty. A society might be good, but that doesn’t mean that there is a moral obligation to perpetuate it.

More importantly for Overall, there is the fact that if this duty were taken seriously, it would seem to convert women into “procreative serfs”. Women would be viewed by the state and other members of society as mere instruments for securing some social good. I think Overall might be slightly exaggerating the impact of accepting such a duty, but there are certainly dangers here that we should avoid. In any event, having a child out of a sense of duty to the state, when one really doesn’t want to, would raise the inadequate parent problem once more.




7. Conclusion
To sum up, in this post I have looked at a variety of deontological arguments for having children. These arguments have been premised on the belief that having and rearing children is intrinsically valuable, or on the belief that there is a moral duty to have children. As we have seen, most of the traditional arguments in this vein are unpersuasive.

Wednesday, April 22, 2015

The Ethics of Robot Sex: Interview on Robot Overlordz Podcast


Ex Machina

I had the good fortune to be asked back on to the Robot Overlordz podcast this week. I am the guest on episode #163 during which I chat with the hosts (Mike Johnston and Matt Bolton) about the ethical, legal and social implications of sex robots. We also talk about related issues from the world of AI and futurism.

You can listen to the full episode (approx 30 mins) here:

Some of the topics addressed include:


  • Will sex workers be replaced by sex robots? Just as factory workers have been replaced by robots, some think that human sex workers will be replaced by robotic equivalents. Indeed, some people think we should welcome this possibility due to the legal and ethical advantages of robotic sex workers over humans.
  • Is it a good idea to create robots that cater to sexual fantasies? For example, would it be a good idea to create robots that allow people to act out rape fantasies or act upon socially deviant sexual desires? Should technology be a playground in which we can act out the best and worst aspects of our moral characters?
  • The meaning of the film Ex Machina: Perhaps a slight digression from the main topic, but since there is a sex robot element to it, we also discuss the recent (and highly recommended) film Ex Machina and some of the philosophical lessons that can be learned from it.


Regular readers will know that I have written about some of these topics at length before. See for example my papers on (i) sex work and technological unemployment and (ii) robotic rape and robotic child sexual abuse.





Wednesday, April 15, 2015

Should libertarians hate the internet? A Nozickian Argument against Social Networks




My title is needlessly provocative, and may ultimately disappoint, but bear with me a moment. I’ve recently been reading Andrew Keen’s book The Internet is not the Answer. It is an interesting, occasionally insightful, but all too often hyperbolic, personalised and repetitive critique of the internet age. I recommend it, albeit in small doses. But this is a digression. I do not wish to give a full review here. Instead, I wish to dwell on one idea that struck me while I read it.

In the fourth chapter of the book, entitled “The Personal Revolution”, Keen launches into a scathing critique of the “Instagram”-generation. He excoriates them for being a selfie-obsessed, narcissistic and attention-seeking generation, increasingly parochial and disengaged from the world. But he reserves his major criticisms for the company itself, which is simply one of the many large-scale internet networks that provides a social space or platform in which we can upload, share and search one another’s content (Google, Facebook and Youtube being the other obvious examples - and yes I know Youtube is owned by Google). He argues that these networks are economically exploitative because they profit from our free labour.

Now, there is nothing particularly new in this argument. It is something others have written about before and at greater length, perhaps most notably Jaron Lanier in his book Who Owns the Future? (which I must now confess to not having read, though I am familiar with the basic thesis from his various online talks). Nevertheless, as I read through Keen’s critique, it occurred to me that his argument could be re-fashioned into the language of analytic political philosophy, specifically into the language of libertarianism. Doing so may give libertarians a reason to “hate” the infrastructure being created by the modern internet (though I have my doubts). Why? Because that infrastructure may represent the most significant, and unjust, violation of property rights in recent human history. I think this is interesting because libertarians often love the internet and many of the leading tech evangelists espouse a libertarian view. I also think it is interesting because, while it is relatively easy to craft an egalitarian argument against the infrastructure of the internet, crafting a libertarian one seems like more of a challenge.

In the remainder of this post, I outline the basic elements of that libertarian anti-internet argument. Just to be clear, this is very rushed and incomplete: I only provide the bones of an argument that needs more flesh. In particular, I know that the understanding of libertarianism that I present is pretty naive, relying as it does on a simple Nozickian conception of property rights. I know this has been disputed, endlessly, in the philosophical literature. But I think there is some value to assuming that simple conception arguendo (i.e. for the sake of argument) and seeing if anything interesting follows. Also, as you will see, my conclusion is somewhat deflationary. I don’t think there really is a strong libertarian argument against the internet. But I think it is something worth pondering nonetheless.

One other point before I begin: I don’t know if the argument I present has been put forward in these libertarian-esque terms before. It may well have been. My limited google-searching reveals no such presentation, but I haven’t researched the matter in depth. I don’t claim novelty for the insights (if any) contained in this post.


1. The Nozickian Conception of Property Rights
As mentioned, the libertarian view I am going to work with is a fairly unsophisticated version of that presented by Robert Nozick in his classic book Anarchy, State and Utopia. Consequently, I must start by outlining some of the core features of the political philosophy defended in that book. As many readers will know, Nozick’s book was written in response to Rawls’s classic A Theory of Justice. In the latter book, Rawls defended an egalitarian model of political justice that supported the redistribution of property (wealth) from rich to poor, provided certain fundamental principles of justice were complied with. This in turn provided support for a big government, collecting taxes and engaging in certain acts of social engineering.

Nozick rejected this view in favour of the robust protection of individual property rights and a minimal state. Central to this view was his conception of individual rights, specifically individual property rights. He derived this account partly from work done by classical liberals like John Locke and partly from Kantian moral theory. The details are fascinating and could be picked apart and debated at much greater length. But I won’t go into too many of the intricacies right now. Instead, I focus on three main features of Nozick’s account.

First, there is Nozick’s reliance on the concept of the separateness of persons. This is the Kantian element of his theory. It holds that human persons exemplify certain key moral properties: they are rational planners, they have free will, and they possess inherent dignity. They cannot be treated as mere objects, instruments or resources (as in, for example, slavery). They must be treated as ends in themselves and their dignity must be respected.

Second, there is Nozick’s conception of self-ownership and its attendant rights. Nozick argues that persons have rights to self-ownership. That is to say, they own themselves, their bodies, their skills and their talents, and the fruits of their labour. The latter is particularly important because it suggests a way in which individual agents can gain rightful ownership over elements of the natural world by mixing those elements with their own labour. Nozick argues that individual self-ownership implies a strong set of negative rights. It is wrong to interfere with someone’s ownership over their own property (which, remember, includes their bodies, skills, talents and the fruits of their labour).

Third, and building from the foundation provided by the other two features, there is Nozick’s entitlement theory of justice. This has to do with just ownership of the various types of property. Nozick argues that there are three ways in which people can acquire rightful ownership over property. The first is via just acquisition, which is where one acquires the property in the first instance through mixing it with one’s labour. The second is via just transfer, which is where the property is voluntarily transferred to one by its rightful owner. And the third is via just rectification, which is where property is transferred (forcibly) in order to correct for some historical injustice. This last method is Nozick’s major concession to some form of redistribution.

This leads to Nozick’s overarching principle of justice, which looks something like this:

Nozick’s Principle of Justice: A distribution of wealth/property in a given society is just if and only if everyone in that society is entitled to what he or she has, i.e. they have gotten what they have in accordance with the principles of just acquisition, transfer and rectification.

This is all we need to build the argument against the internet.


2. Why the Internet is a Nozickian Nightmare
So what is the Nozickian argument against the internet? To answer that, I need to tone down the hyperbole somewhat. This is not really an argument against the internet as a whole. It is an argument against one portion of the internet, albeit a sizeable portion. This is the portion that provides social networking and content-sharing platforms and uses those networks primarily as vehicles for selling advertising and some other services to commercial enterprises.

Such networks have become hugely influential, and have netted large fortunes for their creators. For example, the creators of Google and Facebook (Sergey Brin, Larry Page and Mark Zuckerberg) each have personal fortunes estimated at around $30 billion. Less impressive, but still astounding, would be the fortunes amassed by the likes of Kevin Systrom (Instagram - $400 million), and Brian Acton and Jan Koum (both of Whatsapp, worth $2.7 and $7.2 billion respectively).

Keen’s argument against these enterprises and their associated founders — as channeled through the prism of Nozick — is that the wealth amassed by these companies largely (though not entirely) emanates from an unjust transfer of wealth/property. To put it in formal terms:



  • (1) A distribution of wealth/property in a given society is just if and only if everyone in that society is entitled to what he or she has, i.e. they have gotten what they have in accordance with the principles of just acquisition, transfer and rectification (Nozick's Principle).

  • (2) The distribution of wealth that has resulted from the creation and success of companies like Google, Facebook, Instagram, Whatsapp, Youtube etc is the product of an unjust transfer (i.e. a transfer of wealth that was not undertaken in accordance with the three principles of just acquisition, transfer and rectification).

  • (3) Therefore, the distribution of wealth resulting from the creation and success of such companies is unjust (in Nozickian terms).



All the action lies with premise (2) of this argument. The premise would probably be a pretty easy sell if the argument rested on a more egalitarian principle of just distribution. The fact that it relies on the Nozickian principle makes it a tougher sell. The obvious riposte from any sensible defender of the free market will be that the fortunes amassed by these companies and their creators result from good, clean free market operations. Mark Zuckerberg (to take an example) used his personal talents and ingenuity to create a social networking platform, which he provided for free to users (the “free” point is one I shall return to), and then sold or rented out to investors and advertisers. These latter individuals voluntarily transferred their wealth to Facebook. Where is the injustice in any of this?

The answer lies in the precise details of the business model that makes the likes of Facebook so valuable. Facebook is not a traditional tech business like Apple or Microsoft. It did not create a product that it then sold to customers. It created a platform that it provided for free to end users, and then sold access to this platform to investors and commercial enterprises. The question is why these people are willing to pay so much for access to and control over the platform. The answer, according to the likes of Keen, lies in the fact that the Facebooks, Instagrams and Youtubes of this world profit from the unrewarded labour of their users.

In other words, it is we — the end users and content providers — that make the companies so valuable. It is our labour and talents that provide the valuable commodity that can be manipulated, packaged and sold to the investors and advertisers. This seems to be literally true. On Instagram, Facebook, Youtube and Google, it is our data (search terms, photos, instant messages) that is collected, mined, and sold. Indeed, these companies openly claim sweeping rights over all this data (read the terms of service for Instagram, for example). In short, they are little more than thieves (robber barons).

Or so Keen seems to suggest. Here’s what he has to say about the success of Instagram (and contrasting it with the fate of Kodak):

Instagram really did have just thirteen full-time employees when Facebook paid a billion dollars for the startup. Meanwhile, in Rochester, Kodak was closing 13 factories and 130 photo labs and laying off 47,000 workers…So who, exactly, is doing the work, providing all the labor, in a billion dollar startup that employed only thirteen people? We are. All 150 million of us are members of the Snap Nation…When Facebook and Twitter fought a bidding war to acquire Instagram, they weren’t competing for Kevin Systrom’s cheap, off-the-shelf photography or the code he and Mike Krieger slapped together in a few months. What they were paying for was you and me. They wanted us—our labor, our productivity, our network, our supposed creativity. 
(Keen 2015, 114)

And shortly after sharing these thoughts he is even more adamant about the unjust theft that is taking place:

Data factories [like Google, Facebook, Instagram, Snapchat etc.] are eating the world. But while this has created a coterie of boy plutocrats like Evan Spiegel, Kevin Systrom, and Tumblr’s twenty-seven-year-old CEO, David Karp, it certainly isn’t making the rest of us rich. You see, for the labor we invest in adding intelligence to Google, or content to Facebook, or photos to Snapchat, we are paid zero. 
(Keen 2015, 114)

What’s worse is that these companies, in turn, seduce us into thinking that the data we provide is ours — that the network is in our common ownership, but this is not true. Commenting again on Instagram (which seems to be his favourite example):

But the problem is that we don’t own any of it — the technology, the profit, maybe not even “our” billions of photographs. We work for free in the data factory and Instagram takes not only all the revenue from their business, but the fruits of our labor, too…the question of who actually owns Instagram’s content remains as fuzzy as its photos. As a July 2013 white paper by the American Society of Media Photographers noted, most Instagram users don’t “understand the extent of the rights that they are giving away”. The company’s “onerous” terms of use, the ASMP white paper reported, “gives Instagram perpetual use of photos and video as well as the nearly unlimited right to license the images to any and all third parties”. 
(Keen 2015, 116)

There is clearly a strong egalitarian bent to Keen’s criticisms in these passages, but there is also — to my ears — a strong Nozickian bent. The complaint that there is an unjust transfer of wealth taking place seems clear. But is this complaint any good?


3. Objections and Replies
Now, as I said previously, I think this argument would be more persuasive if it were made from an egalitarian standpoint. But I like pursuing the libertarian angle both because it seems to better capture the injustice highlighted by Keen (i.e. the exploitation of our talents and skills that seems to be taking place) and because it runs contrary to some of the dominant streams of thought in the tech world.

Nevertheless, I think it remains a pretty tough sell. There are some fairly general concerns, and then two major objections to it. In terms of the general concerns, there is the fact that Keen dwells heavily on one or two examples (particularly Instagram) without fully explaining or exploring the ownership rights that arise on other networks. I don’t currently know what the situation is. I’m sure I could find out, but I haven’t done so yet (the fact that I haven’t may be part of the problem Keen is alluding to, though). Also, I think Keen is overly dismissive of the labour and hard work done by some of the “boy plutocrats” (as he describes them). I don’t know that Kevin Systrom is justifiably worth $400 million (who is?), but I don’t think it is fair for Keen to describe Instagram’s code as having been “slapped together”.

But there are more serious criticisms to contend with. The first is that there is nothing unjust about the transfer taking place here because, even if it is uncompensated, it is wholly voluntary. Nobody is forcing us to use these services. We are free to disengage from Facebook, Instagram, Google and so forth at any time we like. If we continue to use them, we must accept their terms and conditions (which we usually do by ticking a box). Perhaps you could respond here by saying that the transfer of ownership rights over photographs on the likes of Instagram is not truly voluntary. Maybe this is because there is some subtle form of social coercion taking place, or because the companies deliberately exploit the known fact that most people don’t read the terms of service. But if you go down that road it will impugn many other, seemingly legitimate, businesses.

The second objection is that, in any event, these transfers may not be uncompensated at all. We receive one major benefit-in-kind, namely free use of the platform. And, on at least some platforms, users can leverage their content provision into a form of income. Thus, for instance, popular Youtube uploaders can take a cut of the money Google earns from advertising. This looks like a win-win. Now, you could complain here that free usage isn’t adequate recompense for the content provided, or that even amongst Youtube uploaders the bulk of the wealth goes to relatively few “big name” content providers, but arguing over the adequacy of the compensation received doesn’t seem like a very libertarian thing to do. This is much more at home in the egalitarian worldview (and in that case I have a lot of sympathy for it).

Thus, to sum up, I think it is intriguing to explore the libertarian (Nozickian) argument against the internet. I’m just not sure that I am able to come up with a persuasive version of it. Maybe someone else can do a better job.

New Paper - The Epistemic Costs of Superintelligence



I have a new paper coming out in Minds and Machines. It deals with the debate about AI risk, and takes a particular look at the arguments presented in Nick Bostrom's recent book Superintelligence. Fuller details are available below. The official version won't be out for a few weeks but you can access the preprint versions below.

Title: Why AI Doomsayers are Like Sceptical Theists and Why it Matters
Journal: Minds and Machines
Links: (Official; Academia; Philpapers)
Abstract: An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting is its potential implication. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could lead to either a reductio of the doomsayers’ position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.

Sunday, April 5, 2015

Bitcoin and the Ontology of Money


Image representing the decentralised Bitcoin network

Money is accordingly a system of mutual trust, and not just any system of mutual trust: money is the most universal and most efficient system of mutual trust ever created. 
(Yuval Noah Harari 2011, 180)

Money has long fascinated me, and not for the obvious reasons. Although I’d like to have more of it, my interest is largely philosophical. It is the ontology of money that has always disturbed me. Ever since I was a child, collecting old coins and hoarding my pocket money, I’ve wondered why it is that certain physical tokens can function as money and others cannot. What is money made from? What is it grounded in? Why do certain monetary systems fail and others succeed?

For many years, I set these questions aside, convinced that I had a basic grasp of how they could be answered. But in the past year they have re-emerged. I have started to teach a course on money and banking law. And, as is my wont, I cannot possibly teach it without trying to address some of the deeper philosophical issues. Some of these issues are purely political or ethical, but some are ontological. In particular, I have found the ontological questions to be pertinent when trying to assess the nature of cryptocurrencies like Bitcoin.

This post is my first attempt to write down my thoughts on the ontology of money, in general, and Bitcoin, in particular. It is pitched at an introductory level, reassembled from notes I prepared for my classes. It may well contain some misconceptions or misunderstandings. I welcome feedback. The post is an attempt to achieve some degree of insight through the process of writing; not an attempt to reproduce well-thought-out views in written form.

I’m going to break the discussion up into four main parts. First, I’ll talk about the social function of money. Second, I’ll discuss two different ontologies of money — the naive realist ontology and the more sophisticated subjectivist ontology. Third, I’ll talk about five steps in the social evolution of money. And then fourth, and finally, I’ll talk about Bitcoin and how it fits into my preferred subjectivist ontological theory.


1. The Social Functions of Money
I don’t think the ontology of money can be understood apart from the social function of money. So that’s where I shall start. I can’t remember where I picked up the idea (I think it might have been from this podcast) but I like the analogy between money and a computer's operating system. Money can be seen as the operating system for an economy: the software program that greases the engines of production and exchange (I guess that’s an awkward mix of metaphors).

Money has its origins in the importance of exchange. We all face the same basic problem of existence. We are mortal, fragile, biological beings. We need certain things in order to ensure our survival: food, clothing, shelter and so on. We also want certain things to make our existence more pleasant. How do we go about getting the things that we need and want? There are two basic strategies. The first is that of self-sufficiency, i.e. you, as an individual, can source and produce all the things you need to survive and (if you have any time left over) all the things you want in order to live a more pleasant life. The second is that of specialisation and exchange. Rather than spending all of your time catering to all of your needs and wants, you specialise in producing some particular good or providing some particular service, and then exchange with others who specialise in other areas.

While the self-sufficiency strategy holds appeal for some people, most societies have opted for the specialisation and exchange strategy. And this is where money comes into play. Although money is not strictly necessary for exchange (one could have systems of altruistic or communal sharing), it is the most common method of facilitating specialisation and exchange. This is because money performs three essential functions:

Medium of Exchange: It provides a medium for exchanging different goods and services. Put another way, it provides a medium for converting one type of good or service into another. This is perhaps the most important social function of money.

Store of value: The medium is one that retains (to some degree) its value over time. In other words, if money is put in your hands on Monday, it can still function as a useful medium of exchange on Friday, because people still accept that it has a roughly equivalent value. If money loses its value too rapidly — as happens in hyperinflationary cycles — it ceases to provide a useful medium of exchange. Similar negative effects occur if it increases in value too rapidly: people might stop using it as a medium of exchange and hoard it instead.

Unit of account: It provides a way of precisely accounting for the value of different goods and services. In other words, it is not too coarse-grained in how it accounts for their value. This is important insofar as an effective price system is thought to be essential not just for facilitating exchange, but for facilitating competition and the efficient use of resources.



This is the standard economic account of the functions of money. It will be familiar to all first year economics students. But it is connected to the ontological question that I find more interesting: what is the “stuff” that performs these three functions?



2. A Subjectivist Ontology of Money
Ontology is the study of what there is; what kinds of things exist and what are they made of. When it comes to the ontology of money, I tend to think that there are two schools of thought. One, which I think is clearly incorrect, is the naive realist theory of money. This is one that I, and I suspect others, found appealing in our youths. The other, which I think is correct, is the sophisticated subjectivist theory of money. Of course, the labels “naive” and “sophisticated” are value-laden, and I could perhaps do without them. Nevertheless, I think they are worth tacking on, mainly because I think there is a naive subjectivist theory that we should seek to avoid. More on this anon.

I start by offering a brief description of the naive realist ontology:

Naive Realist Ontology: Money is any physical commodity with intrinsic value, that can be used as a medium of exchange (i.e. it is a commodity that is valuable, but also portable, divisible, hard to fake etc.).

I call this a naive realist ontology because it maintains that money is “out there” in the natural world, the world beyond that of human imagination and culture. It corresponds, roughly, to other realist ontologies insofar as it thinks that the phenomenon of interest is not dependent on human observers for its existence (i.e. it thinks that money is mind-independent). This is a view that I think many people find attractive in their youths. I know I did. I used to think that coins, particularly those made of precious metals, counted as money because there was some mystical intrinsic value attached to those metals.

This view is naive and clearly false. History proves this point. Humans have used many commodities over the centuries, from tea leaves and cowry shells to pieces of paper and lines of computer code, as money. Many of these things lack intrinsic value (whatever that is). In fact, most of them lack instrumental value (beyond the value they provide as a medium of exchange). This is true of precious metals just as much as it is true of coloured printed paper. It is obvious that a piece of paper with numbers and symbols upon it has no intrinsic value, but, when you think about it, it is also pretty clear that gold and silver are devoid of intrinsic value. They merely have the value that particular cultures attach to them (e.g. for ornamentation and jewelry). If such a diverse, and clearly intrinsically valueless array of phenomena can count as money, the naive realist view must be false.

But this then raises the question: how is it that such a diverse array of phenomena can function as money? The answer lies in the sophisticated subjectivist ontology of money:


Sophisticated Subjectivist Ontology: Money is anything (physical or non-physical) that humans are collectively willing to represent and intend to function as a medium of exchange, store of value and unit of account.


To go back to the software analogy introduced earlier on, money isn’t some physical hardware that is “out there” in the real world, rather it is a software package that all participants in an economy simply agree to run on their brains. I believe it was Terence McKenna who once said that “what we call reality is, in fact, nothing more than a culturally sanctioned and linguistically reinforced hallucination”, and while I wouldn’t endorse this as a general description of reality, I think it is perfectly true when it comes to the reality of money. It really is just a collective hallucination.

That said, we must not fall into the trap of endorsing a naive subjectivist ontology. It is not the case that we can simply and easily hallucinate money into existence. It is far more complicated than that. Yuval Harari’s quote — which served as an epigram to this post — captures the difficulty. Money is a system of mutual trust, and societies have to go to extraordinary lengths to reinforce that system of mutual trust. As he puts it:

Why should anyone be willing to exchange a fertile rice paddy for a handful of useless cowry shells? Why are you willing to flip hamburgers, sell health insurance or babysit three obnoxious brats when all you get for your exertions is a few pieces of coloured paper? People are willing to do such things when they trust the figments of their collective imagination. Trust is the raw material from which all types of money are minted….What created this trust was a very complex and long-term network of political, social and economic relations. 
(Harari 2011, 180)

Indeed, it is when you realise how difficult it is to create a system of mutually reinforcing trust that something of the naive realist view of money starts to creep back in. It turns out that some physical (and social) phenomena are better at creating such a system than others. So we cannot completely ignore the features of the objective (mind-independent) reality when it comes to understanding the ontology of money. This point can be illustrated by considering the historical evolution of money.


3. Five Steps in the Evolution of Money
Unfortunately, history is full of messy and complex facts. I cannot pretend to know everything about how money came to be or how it changed over time. Nevertheless, I think I can point to five steps in the evolution of money. These five steps may represent something of a historical “just so” story — one that cannot be easily mapped onto a chronological sequence of historical events — but I think they do capture something of the truth.

(Note added after original publication: The historical problems with the following account have been pointed out to me in the comments. I accept that it is probably inaccurate and tried to hedge against this possibility in the original draft. Nevertheless, I think the steps outlined below do provide some potential conceptual (i.e. non-historical) truths when we adopt an exchange theory of money. This exchange theory is what my ontological theory covers. If we adopt alternative theories (e.g. the social relation theory) some aspects of that ontology would be affected, though I think the basic gist -- that money is an exercise in reinforced collective imagination -- would be retained. It is something I will be thinking about and may write an alternative ontology of money/bitcoin at a later date. For the time being, you can take this as the exchange ontology of money/bitcoin).

I’ll start by describing the five steps and explaining why they succeeded or failed to create an effective system of money. I’ll use this as a bridge toward the discussion of Bitcoin:

Step One - Barter: This is actually a pre-monetary step. It is what prevailed in economically simple ancient communities and what resurfaces at times of monetary crisis. It involves the direct exchange of one good or service for another, with no intervening medium of exchange. Its limitations are what spur the need for the mental creation of money. In a barter system, members of the community can struggle to store value in physical goods and services (the vegetables I grow in my garden will rot and go off), to find willing partners for exchange (the fisherman down the road may not want any of my vegetables) and to meaningfully account for the value of different items (how much is a freshly-caught salmon worth in sticks of celery?).
Step Two - Commodity-based Systems: This is really the first step in the development of proper money. It is what happens when a community picks some available commodity (barley grains, shiny stones, sea shells, salt, tea leaves etc) and collectively agrees (through practice and trial and error, not through formal contract) that this commodity will function as money. Many different societies have trialled many different types of commodity money. Some commodities are clearly better at doing this than others. The stone disks used as money on the island of Yap, for instance, have some limitations. They are not readily portable and divisible, and are unlikely to be accepted as valuable in other communities. They also face a trust problem: how do you ensure that people do not “double-spend” their stone disks?
Step Three - Coined Money: This is a sub-type of commodity money based on coins made from precious metals such as gold, silver and bronze. It was an effective system of commodity money and became popular in a number of different cultures. There were many reasons for this. Precious metals had some instrumental value across many different cultures (for making weapons, tools, jewelry etc) and so could facilitate larger trade networks. They were also highly portable, and capable of being melted and sub-divided into units representing different quantities of value. However, they too faced a trust problem. Initially their value was tied to the weight or quantity of the metal in the coin; that was part of the reason people were willing to accept them as currency. But it was always possible to introduce counterfeit or impure coins. Communities tried to counteract this by adding official stamps and signs to the official currency. Counterfeiters reacted by developing methods for copying these stamps and signs. The trust arms-race started to take off: governments introduced more and more elaborate mechanisms designed to restore trust in the monetary system (I ignore, for now, the fact that some governments also tried to decrease the value of their currencies through a practice known as seigniorage).
Step Four - Paper Money Backed by Coin/Precious Metal: Although coined money was relatively effective as a medium of exchange, it did have some inconvenient features. In addition to the counterfeiting problem, large sums of coined money were difficult to carry around and made one a target for thieves and bandits. As a convenience, goldsmiths started to offer safe storage facilities to people who wished to stash their coins. Depositors were issued with receipts, which they could exchange for coin. In time, people started to use these slips of paper to pay for goods and services, confident that the holder of the receipt could always trade it in for the prescribed amount of coin. Paper money was born. Paper money also had its trust problems, with counterfeiting being a serious issue (as it still is) and with occasional crises of confidence when the holders of coin reserves became involved in fractional reserve banking. Since these institutions would lend multiples of the actual sums of coined money they held in reserve, they were prone to “runs” when people worried that borrowers would not be able to pay back their loans. Centralisation of the banking and monetary systems solved some of these problems and led to perhaps the most famous system of paper money backed by precious metal: the Gold Standard.
Step Five - Fiat Money: [Skipping over lots of the important historical details] The next step in the evolution of money came when governments (who controlled the gold standard) realised that this system was expensive, and limited what they could do during times of national crisis (e.g. during WWI and the Great Depression) to restore confidence to the broader economy. They realised that it was possible to get people to trust the paper money system through force of law alone. All the government had to do was declare that X amount of money was in existence, and thus it was that X amount of money came into existence. This is money created by governmental fiat. This is the system that prevails today, and nowadays the majority of such money does not exist in paper or coined form. It exists as account balances on computer programs run by the banking system. This fiat system also clearly suffers from trust problems: it only works to the extent that people have faith in the stability and probity of government, and the prudence of the banking system.
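The fractional-reserve dynamic mentioned in step four can be made concrete with a toy calculation. The reserve ratio and starting deposit below are illustrative numbers, not historical figures:

```python
# Toy fractional-reserve expansion: each deposit is partly re-lent, and the
# loan comes back as a new deposit. With a 10% reserve ratio, an initial
# deposit ends up supporting roughly 10x as much money in deposits.
def total_deposits(initial_deposit, reserve_ratio, rounds=1000):
    total, lendable = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += lendable                # the new deposit is recorded
        lendable *= (1 - reserve_ratio)  # bank keeps the reserve, lends the rest
    return total

print(round(total_deposits(100, 0.10)))  # 1000: the 1/ratio "money multiplier"
```

This is why the "runs" mentioned above were so dangerous: the deposits vastly exceed the coin actually held in reserve, so the system only works while depositors trust that they will not all need their coin at once.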



As I say, this is a crude summary of the historical evolution of money. But I think it does show why our subjectivist ontology must be sophisticated rather than naive. We see now the problems that arise when trying to create a system of mutual trust that will provide the basis for money. For the majority of human history, these problems were solved by attaching value to commodities with the right kinds of properties (portable, divisible, widely perceived as having instrumental value, hard to fake etc.) and correcting for any trust deficits through the use of institutions and laws. In the most recent evolutionary step, money has been delinked from physical commodities. The mutual trust needed is fashioned from our trust in other social institutions (specifically governments, central banks and banks). These institutions are themselves fashioned out of our collective imaginations. Money has now truly become a collective hallucination, one that is maintained by a complex and esoteric network of rituals and laws.


4. The Ontology of Bitcoin
Where do Bitcoin and other cryptocurrencies fit within this ontological framework? Despite claims to the contrary made by its mysterious founder — Satoshi Nakamoto — Bitcoin is not a “trustless” monetary system. It is just as much an exercise in collective trust and imagination as is the fiat money system. But it does mark something of a move away from the pure fiat system and a return to the classic commodity-based systems. Only this time the commodity in question is not some physical object like silver or gold, but rather a digital program that runs on a decentralised network of computers. Bitcoin tries to solve the trust problem by getting us to attach value to a computer program with a number of unique properties.

I’m going to try to describe some of these properties now. But before I do so a caveat: the Bitcoin protocol solves the trust problem in a manner that also tries to implement a whole political and economic philosophy (roughly that of the Cypherpunk movement). In what follows, I try to ignore the features of that political and economic philosophy as much as possible. I do so not because I think they are unimportant or uninteresting — far from it: I want to write another post on whether Bitcoin is an economic and political failure at a later date — but because some of those details may distract from the purely ontological focus of this post.

So, anyway, what are the properties of the Bitcoin protocol that address the trust problem? This is not the place to launch into a full explanation of how Bitcoin works. To be honest, some of the technical details elude me (I have a basic grasp of how cryptographic hash functions work and how the mining competition operates, but I couldn’t claim to know everything). Fortunately, all I need to explain the ontology of Bitcoin are a few choice details. One of which is the basic gist of the idea. Bitcoin takes to the hilt the notion that pretty much anything can be used to represent value. It does this by encouraging people to attach value to digital tokens that represent numerical balances stored in personalised digital wallets. It then gets people to trust that these digital tokens and balances really can represent stores of value, and really can function as a medium of exchange, by creating a system that protects them from fraud and abuse (the digital tokens are bitcoin the currency; the protective system is Bitcoin the protocol).
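To make the idea of tokens-as-ledger-balances concrete, here is a toy sketch. The names, amounts and minting rule are invented for illustration; this is not the Bitcoin protocol, just the underlying idea that a “coin” is nothing more than an entry in a shared record:

```python
# Toy illustration: balances exist only as entries in a shared ledger.
# A "payment" is just an update to the record; the ledger, not any
# physical token, is the source of truth about who owns what.
def apply_transactions(transactions):
    balances = {}
    for sender, receiver, amount in transactions:
        # Reject spends that exceed the sender's recorded balance,
        # which is what rules out spending the same funds twice.
        if sender is not None and balances.get(sender, 0) < amount:
            raise ValueError(f"{sender} cannot spend {amount}")
        if sender is not None:
            balances[sender] = balances.get(sender, 0) - amount
        balances[receiver] = balances.get(receiver, 0) + amount
    return balances

ledger = [
    (None, "alice", 50),   # None = newly minted currency
    ("alice", "bob", 20),
]
print(apply_transactions(ledger))  # {'alice': 30, 'bob': 20}

# An overspend of the same funds is rejected:
ledger.append(("alice", "carol", 40))
# apply_transactions(ledger) would now raise ValueError
```

The hard question, of course, is who gets to maintain that shared record, and why anyone should trust them; that is what the four properties below are meant to answer.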

Every type of money is open to fraud and abuse. What types of fraud and abuse are unique to Bitcoin? One of the major problems with a digital currency is that its users are being asked to trust that the digital records of their balances and payments are accurate. This is a tough sell since it is so easy to create and copy digital files. A user will always be inclined to ask: Couldn’t somebody be using the same digital token to pay for many different things? Couldn’t a sophisticated programmer or hacker be altering the digital records, stealing currency from others or creating an endless supply for themselves?

Bitcoin solves these specific trust problems by creating a computer program with a number of distinctive properties. I’ll mention four of them here:

The Blockchain: This is probably the key property of the Bitcoin program. It is a decentralised digital ledger that contains a record of every bitcoin transaction that has ever taken place. This ledger is what gives people confidence in the system. It allows them to be sure that there are no double-spends or fake transactions. It is constantly growing, and is maintained and verified by a decentralised network of computers (in theory, anyone can contribute to this effort, provided they have the right equipment). This process of maintenance and verification occurs at ten-minute intervals.

Cryptographic Security: The program uses a variety of cryptographic methods to protect the anonymity of its users and to prevent fraud and abuse. The primary methods are public-key cryptography, cryptographic hash functions, and proof-of-work protocols. These methods are used, respectively, to enable transactions between buyers and sellers, to add transaction data to the decentralised ledger, and to verify that the correct transaction data is being added to the Blockchain. These are all essential to ensuring that people trust the system.

Competitive Mining: The people who maintain and verify the Blockchain are encouraged to do so through an interesting reward mechanism. During each (roughly ten-minute) round, the computers running the system compete for the right to add the next verified transaction block to the Blockchain. Whoever wins this competition is rewarded with some freshly minted bitcoin. This has nothing to do with creating and maintaining trust per se, but rather with attracting people to the currency in the first place. In other words, it gets them to buy into this particular collective hallucination.
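The competition itself is a ‘proof-of-work’ puzzle: miners race to find a number (a ‘nonce’) that, when hashed together with the block’s data, produces a hash below a target value. A toy Python version of the idea (real Bitcoin hashes an 80-byte block header with double SHA-256 against a vastly harder target; the string format and difficulty here are purely illustrative):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding a winning nonce takes, on average, many thousands of attempts...
nonce = mine("block of transactions", 4)
# ...but anyone can verify the winner's claim with a single hash
digest = hashlib.sha256(f"block of transactions:{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

The asymmetry is the point: the work is expensive to do but trivial to check, so honest participants can cheaply confirm that the winner really earned the right to extend the ledger.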

Radically Deflationary Currency: The final property of the Bitcoin program is that the currency is radically deflationary. That is to say, less and less of it will be created over time, until the maximum of (just under) 21 million bitcoin is reached. This stands in direct opposition to modern fiat money systems, which are, if anything, inflationary in nature. Again, the radically deflationary nature of the currency has nothing to do with maintaining trust in the system, but rather with encouraging people to get onboard. Why? Because if a radically deflationary currency takes off, the particular units of currency can be expected to go up in value over time. Thus, if you are an early adopter, you will be richly rewarded: the purchasing power of your units of currency will increase, rather than decrease, over time.
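The 21 million figure is not arbitrary; it falls out of the issuance schedule. The reward for mining a block began at 50 bitcoin and halves every 210,000 blocks, so total issuance is a (floored) geometric series. A quick check in Python, working in satoshis (hundred-millionths of a bitcoin) with integer division, as the protocol itself does:

```python
SATOSHI = 100_000_000           # 1 bitcoin = 100,000,000 satoshis
subsidy = 50 * SATOSHI          # initial block reward of 50 bitcoin
total = 0
while subsidy > 0:
    total += 210_000 * subsidy  # 210,000 blocks per reward era
    subsidy //= 2               # the reward halves each era

print(total / SATOSHI)  # just under 21,000,000
```

Because each halving era issues half as much as the one before, the series converges: the supply approaches, but never quite reaches, 21 million.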

These are the four properties of the Bitcoin program that I think have facilitated its acceptance as a type of money. But these four properties are not the end of the story. Many other people tried to create digital currencies with similar properties, and yet they all failed to take off. What was it about this particular version that captured the popular imagination? The answer does not lie in the features of the protocol alone. It lies in the particular socio-cultural milieu in which Bitcoin was launched.

It is probably no coincidence that Bitcoin was launched and took off in the aftermath of the Financial Crisis of ’08. This was a moment when our collective trust in the social institutions propping up the fiat money system was at a low ebb. It also helped that Bitcoin came along at a point in time when the internet — and the concept of digital transactions — had reached a certain level of maturity. People had already begun to believe in the notion of a digital payment system: a completely digital currency was less of a conceptual leap. Finally, it no doubt helped that the anonymity of Bitcoin facilitated a community of people who wanted to trade in illegal goods and services (such as the illegal narcotics that could be bought and sold on the Silk Road).


5. Conclusion
This post has gone on for too long. I’ll wrap up by summarising three key points. First, in terms of the ontology of money, I think we ought to subscribe to a sophisticated subjectivist ontology, according to which money is simply anything that people collectively agree can represent and function as a medium of exchange, store of value and unit of account. In short, money is a collectively reinforced hallucination.

Second, it is not easy to get people to collectively agree to a particular system of money. They need to trust that the system really will function as a medium of exchange, store of value and unit of account. Many different systems have been trialled over the course of human history, each with its own strengths and weaknesses. The two primary systems are the traditional commodity-based systems, which try to manufacture the necessary trust by utilising commodities with a unique set of properties; and fiat systems, which try to manufacture the necessary trust through a network of social institutions, norms and laws.

Third, Bitcoin fits within this general subjectivist ontology. It is not a trustless system. It tries to manufacture the necessary trust by utilising a computer program with a number of unique properties. These include: (i) the Blockchain, which provides an authoritative record of transaction data; (ii) the various methods of cryptographic security, which make it hard for hackers to fake or manipulate transaction data; (iii) the mining competition, which incentivises people to maintain the authoritative record; and (iv) the radically deflationary nature of the currency, which encourages people to opt into the system by holding out the prospect that, if it takes off, the units of currency will increase in value.