Saturday, October 22, 2016

Episode #13: Laura Cabrera on Human Enhancement, Communication and Values


In this episode I interview Dr Laura Cabrera. Laura is an Assistant Professor at the Center for Ethics and Humanities in the Life Sciences at Michigan State University, where she conducts research into the ethical and societal implications of neurotechnology. I ask Laura how human enhancement can affect interpersonal communication and values, and we talk about the issues in her recent book Rethinking Human Enhancement: Social Enhancement and Emergent Technologies.

You can download the show here or listen below. You can also subscribe on Stitcher and iTunes (click 'add to iTunes').

 Show Notes

  • 0:00 – 1:00 - Introduction
  • 1:00 – 11:15 - What is human enhancement? Definitions and translations
  • 11:15 – 13:35 - Discussing moral enhancement - Savulescu and Persson
  • 13:35 – 14:35 - Human enhancement and communication - discussing Laura’s paper with John Weckert
  • 14:35 – 28:40 - Shared lifeworlds, similar bodies, communication problems
  • 28:40 – 39:48 - Augmented reality and sensory perception
  • 39:48 – 46:20 - Cognitive capacity and memory – Oliver Sacks & Borges
  • 46:20 – 49:50 - Ethics – hermeneutic crises and empathy gaps
  • 49:50 – 52:30 - Can technology solve communication problems?
  • 53:32 – 1:00:00 - What are human values?
  • 1:00:00 – 1:08:20 - How does cognitive enhancement affect values?
  • 1:08:20 – 1:16:00 – Neoliberal values - pressures and competitiveness
  • 1:16:00 – End - How to prioritise values and see the positives in enhancement


Wednesday, October 19, 2016

Why Non-Natural Moral Realism is Better than Divine Command Theory

It’s been a while since I wrote something about theism and morality. There was a time when I couldn’t go more than two weeks without delving into the latest paper on divine command theory and moral realism. More recently I seem to have grown disillusioned with that particular philosophical joy ride. But last week Erik Wielenberg’s new paper ‘Euthyphro and Moral Realism: A Reply to Harrison’ managed to cross my transom. I decided I should read it.

I am glad I did. It is an interesting paper. As you might guess from the title, it is a reply to another paper by the philosopher Gerald Harrison. In that paper, Harrison took defenders of divine command theory (DCT) and non-natural moral realism (NNMR) to task. He argued that two of the standard objections to divine command theory — usually based on the infamous Euthyphro dilemma — apply equally well to non-natural moral realism. And since many defenders of the latter are harsh critics of the former, this should be a cause of some embarrassment.

Now, Wielenberg is a defender of NNMR and he doesn’t think he needs to feel any embarrassment. The goal of his paper is to explain why. I want to go through his main arguments in the remainder of this post.

1. Two Criticisms of DCT and NNMR
I have to start by going back to Harrison’s paper. The main purpose of that paper was to argue for something Harrison called the parity thesis. The essence of this thesis is that the main flaws associated with DCT apply equally well to NNMR. That is to say, they are similarly situated with respect to two major metaethical problems. What are those problems? Well, it all goes back to the Euthyphro dilemma. As you’ll no doubt recall, in Plato’s Euthyphro, Socrates and Euthyphro have a debate about the origin of moral properties like good/bad and right/wrong.* Euthyphro — who stands in for the modern defender of DCT — insists that those properties originate in the will (commands) of the gods. Socrates wonders whether those properties really originate in divine will or whether they are divinely willed because they have those properties. He thinks that both interpretations have their problems.

To take a simple example, suppose we all agree that it is wrong to torture an innocent child for fun. Where does the wrongness come from? Euthyphro and the proponent of DCT suggest that the wrongness comes from the command of God. It is wrong because God has prescribed against it. But Socrates suggests that this cannot be right. If it is only wrong because God commands against it, then we run into the problem that there is a possible world in which God did not issue that command. Indeed, there might even be a possible world in which God insists that we do torture innocent children for fun. That doesn’t sit right because there doesn’t seem to be anything that could make torturing innocent children for fun right. In other words, DCT implies that two act-tokens sharing all their non-moral properties (the innocence of the children, the nature of the torture, the intention for doing it) could nonetheless differ in their moral properties. But if you respond to the problem by saying that the wrongness doesn’t come from God, but rather that God commands against torturing innocent children for fun because it is wrong, you seem to sweep away the original rationale for the DCT, namely: to provide the foundation for wrongness. If God only commands that which is already right/wrong, then His command does not make the moral difference we would like it to make. That’s the Euthyphro dilemma.

Theists have some solutions to this dilemma. The most popular is that God’s nature sets limits on what he can and cannot command. I won’t go into those solutions here though because I have done so on previous occasions and I believe that ‘solutions’ of this sort simply push the dilemma back a further step.

The important point for present purposes is that the Euthyphro highlights two problems for the proponent of DCT:

Horrendous Deeds (HD): DCT is consistent with there being a possible world in which horrendous deeds are morally right (because they are commanded by God). In other words, there is a possible world in which horrendous deeds are morally acceptable.

Explanatory Inadequacy (EI): DCT seems to contravene what we believe about moral supervenience. It suggests that two acts that are completely the same with respect to their natural properties can nonetheless differ in their moral properties (because God’s commands add a brute moral fact to natural reality). Put another way, it suggests that an act token’s natural properties can never explain its moral properties.

Now how do these concerns apply to NNMR? The answer is complicated. We’ll get into the nitty gritty as we go through Wielenberg’s reply to Harrison. But the basic idea is this. Our moral intuitions tell us that any metaethical theory that permits HD and EI is unsatisfactory. We have a strong belief that horrendous deeds cannot be moral and that natural properties make some explanatory contribution to moral properties. But DCT is consistent with both HD and EI (according to the Euthyphro). Is NNMR also consistent with both? Harrison argues that it is. NNMR is the view that moral properties are not reducible to natural properties; moral properties are, instead, sui generis, brute facts about reality. It seems like this is consistent with EI: if moral facts do not reduce to natural facts then natural facts do not explain moral facts. It also seems like it is consistent with HD. Why? Because there could be a possible world in which the brute moral properties supervene on different sets of natural properties. Nothing in NNMR rules this out.

This gives us the parity thesis.

2. Problems for the Parity Thesis
There is an oddity to Harrison’s analysis. Many philosophical theists (I use the term ‘philosophical’ in order to contrast them with ordinary believers) are proponents of something called ‘perfect being theology’. They think God is the most perfect being that there can be. He is omnipotent, omniscient and omnibenevolent (or, if ‘omni’ is too strong, he is maximally powerful, knowledgeable and benevolent). Harrison isn’t a fan of perfect being theology. He favours something he calls ‘strange powerful creature’ theology. This claims that God is a powerful supernatural being, but not necessarily a perfect being.

The peculiarities of Harrison’s theology become important when we turn to Wielenberg’s critique. Wielenberg insists that the proponent of NNMR should not fear the parity thesis. But his insistence is strongest when it comes to the contrast between NNMR and the perfect being version of DCT. His criticisms might be less persuasive for the proponent of the strange powerful creature version of DCT. This, however, is not a major limitation since, as even Harrison acknowledges, most proponents of DCT favour the perfect being view. They do so because appealing to God’s maximal goodness is one popular way to resolve the Euthyphro dilemma.

Enough of this. What is Wielenberg’s critique of Harrison? It starts by distinguishing between two kinds of objection:

Objection A: Not-P is true; theory T entails P; therefore theory T is incompatible with a truth and hence T must be false.

Objection B: Not-P is true and not-P’s truth must have some explanation or other; theory T fails to provide an explanation for not-P’s truth.

At a purely abstract level, Objection A is a much more serious objection than Objection B. Objection A tells us that some proposed theory is inconsistent with a truth and therefore must be false. Objection B is simply telling us that some proposed theory fails to explain or account for a truth.
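To make the contrast explicit, the two objection-forms can be glossed semi-formally (the rendering is mine, not Wielenberg’s notation):

\[
\textbf{Objection A:}\quad T \models P,\;\; \neg P \;\;\therefore\;\; \neg T
\qquad
\textbf{Objection B:}\quad \neg P,\;\; T \text{ does not explain } \neg P
\]

Objection A is just modus tollens: a theory that entails a falsehood is false. Objection B asserts no entailment at all; it records an explanatory silence, which is why it only bites when there is a rival theory that does the explaining.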

Here is an example. We know for a fact that objects of different mass fall at equal rates in a vacuum. Now suppose we have two different theories of gravity. One theory tells us that objects of different mass should fall at different rates. That is to say: it is a logical derivation from the axioms of the theory that objects of different mass fall at different rates. The other theory says nothing about the rate at which objects should fall in a vacuum. It fails to explain or account for the fact that they fall at the same rate, but there is nothing in the axioms of the theory that contradicts or undermines that truth. The first theory thus falls foul of Objection A: it cannot be true because it contradicts something we know to be true. The second theory falls foul of Objection B: it cannot explain something we know to be true. This leaves the second theory in a much stronger position than the first. Objection B is still a problem for the theory, but it is not a fatal problem. It only becomes a serious problem if it turns out that there is some rival theory of gravity that does explain the fact that objects fall at the same rate. If there is such a theory, then you will need to conduct a theoretical comparison: assessing both theories on the totality of their merits. The rival theory may be better than the original one if it accounts for more truths, but if the rival theory only accounts for the truth of objects falling at the same rate in a vacuum, we may still prefer the original theory.
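As an aside, the equal-rates fact drops out of Newtonian mechanics in one line, on the standard assumption that gravitational and inertial mass are the same:

\[
F = mg, \qquad a = \frac{F}{m} = \frac{mg}{m} = g
\]

The mass cancels, so in a vacuum (no air resistance) every object accelerates at the same rate g, regardless of how heavy it is.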

The distinction is relevant to the present debate because, according to Wielenberg, the possibility of morally acceptable horrendous deeds (HD) and the explanatory incompleteness of natural facts (EI) function like Objection A against DCT, and like Objection B against NNMR. In other words, HD and EI are nearly fatal for DCT because DCT entails their truth; but they are mere flesh wounds for NNMR - puzzles that NNMR fails to explain, but which are only problems if there is some alternative, superior moral theory.

Let’s try to unpack this claim. The problem for the perfect being variant of DCT is that it seems to entail that horrendous deeds could be morally acceptable. If God is truly all powerful, and if one of God’s powers is to determine what is morally acceptable or not, it follows that there is a possible world in which horrendous deeds are morally acceptable. This contradicts what we intuit to be true. You might respond to this by suggesting we ditch our moral intuitions in favour of the theory, but it is very difficult to do this because our moral intuitions seem to be more epistemically secure than DCT.

By the same reasoning, the perfect being version of DCT also entails that natural properties do not contribute to the explanation of moral properties. Since in the actual world God does not command horrendous deeds, but there is a possible world in which God does command horrendous deeds, it follows that the natural properties of those horrendous deeds make no difference to their moral properties. The torturing of innocent children for fun can be morally acceptable in one world and impermissible in another. It is the presence or absence of God's command that makes the difference.

Contrariwise, NNMR entails neither of these problems. Harrison’s claim (to quote) about NNMR is that “if moral properties are not reducible [to natural properties], then there seems to be no principled way of ruling out the possibility of a naturalistically identical world to this one in which moral properties are differently arranged” (Harrison 2015, 110). This is fair, as far as it goes. But it doesn’t go very far. Although it is true that NNMR cannot rule out the possibility of a world in which horrendous deeds are morally acceptable, it doesn’t entail that there must be a possible world in which they are. Similarly, although NNMR does entail that natural properties fail to fully explain or account for moral properties, it doesn’t entail that they make no explanatory contribution to those moral properties. Indeed, it is consistent with the belief that they make some explanatory contribution.

All of this means that the parity thesis fails to find purchase. There are significant differences between NNMR and perfect being versions of DCT.

There is more to be said, of course, but I'll leave that to another day.

*Actually, if I recall correctly, the original Euthyphro debate was about ‘piety’, i.e. was X pious because it was willed by the gods, or was it willed by the gods because it was pious?

Wednesday, October 12, 2016

Pornography and Subordination: The Contextual View

Most mainstream pornography is targeted at heterosexual cisgendered men. Its visual focus is on women: their bodies, their behaviours. The men in mainstream pornography are largely hidden from view. The women are there to pleasure the men: they are objects to be used for sexual gratification. The usage is literal, not metaphorical: the typical consumer of pornography is trying to sexually gratify themselves while viewing it, not merely simulating sexual gratification.

This is oftentimes said to subordinate women. It identifies and labels them as inferior to men, and contributes to their systematic oppression and marginalisation within society. Let’s grant that this is possible. How exactly does this subordination work? That’s one of the questions asked in Matt Drabek’s recent article ‘Pornographic Subordination, Power and Feminist Alternatives’. There’s lots of interesting stuff in the article. I would recommend it to anyone with an interest in this topic. I want to focus on what I take to be its big idea: the contextual view of pornography.

(These are just my notes on the article; not a careful critical analysis.)

1. The Constitutive View vs. The Causal View
The big idea is that the subordinative effect of pornography (if and when it exists) is due to the material and social contexts in which it is produced and used, and not due to its content. I call this the ‘contextual view’. It is to be contrasted with two other views of pornography and subordination:

Constitutive View: Pornographic material is itself constitutive of female subordination, i.e. the publication and viewing of the material is itself an act of subordination.

Causal View: Pornographic material causes female subordination, i.e. those who watch and use pornography go on to subordinate women.

The constitutive view is a little bit obscure, but popular in certain feminist and philosophical quarters. It originates in the work of Catharine MacKinnon, and has been reinterpreted by philosophers like Rae Langton and Mary Kate McGowan (whose work I have written about before). These philosophical interpreters often use speech act theory to defend their views.

The central tenet of speech act theory is that words and symbols don’t simply report on how the world is; they also perform acts in the world. When a registered celebrant at a wedding says ‘I now declare you husband and wife’, they are not simply reporting a fact; they are making a fact. They are creating a marital state of affairs where none existed before.

Proponents of the constitutive theory argue that pornography does something similar. It doesn’t simply depict women in sexually explicit, possibly denigrating and degrading positions. It has performative dimensions as well. One of the more popular claims is that it serves to silence women: it creates socio-normative rules regarding how women are to behave and act. Thus the production and use of pornography is itself constitutive of subordination - not simply a cause of subordination. (If you’re interested, I wrote a much longer analysis of the constitutive view on a previous occasion.)

This is to be contrasted with the causal view. This is a more straightforward, less philosophically recondite theory. It holds that the production and use of pornography causes the subordination of women. There are various understandings of this causal link. Some argue that the men who view the pornography go on to subordinate women in their daily lives: the pornography encourages them to do so. This can often be difficult to prove: the empirical research on the effects of pornography is vast and disputed.

A slightly more sophisticated causal theory has been propounded by Anne Eaton. Several years back, Eaton wrote a paper called ‘A Sensible Antiporn Feminism’. In it, she suggested that the causal effects of pornography are (or are likely to be) quite complex. In particular, she suggested that there is probably a set of interlocking feedback loops between broader social structures and individual acts of production and use. The idea is that the individual acts of production and use are facilitated and justified by broader social structures of oppression, and they in turn serve to reinforce and reproduce those structures (in small ways). It’s a positive feedback loop: the structures support the pornographic works, which in turn support and reinforce the structures.

Although I have contrasted the constitutive and causal views here, it is important to note that they are not necessarily in tension with one another. Someone could support both at the same time. Nevertheless, they do make different claims about the subordinating powers of pornography.

2. It’s Context not Content that Matters
Drabek’s goal is to defend an alternative, contextual view of the subordinating powers of pornography. One thing that constitutivists and causalists usually share is the belief that the content of pornography matters when it comes to its subordinative powers. That is to say, certain kinds of images and depictions of women matter more. Sexually explicit material comes in many flavours. Some of it can be artful and thought-provoking. Some can be crass, violent and degrading. The generally shared view is that it is the latter — what Eaton calls ‘inegalitarian’ pornography — that is subordinating. Indeed, there is an interesting view among some feminists — Catharine MacKinnon being probably the most famous — which holds that only pornography with inegalitarian content counts as ‘pornography’.

Drabek pushes back against this view. He argues that when it comes to pornography, it is the material and social context of its production and its use that matters. Content has very little bearing on it. This puts him somewhere between the causal and constitutive camps. In certain contexts, the use of pornography might be constitutively subordinative; in others it might be causally subordinative; in still others it might not be subordinating at all.

Drabek’s claim that content doesn’t matter much is a strong one. It seems to fly in the face of common sense. To many people it just seems obvious that some sexually explicit material has more objectionable content than other sexually explicit material, and that this difference must have some bearing on its social effect. This commonsense view is taken on board by legislatures around the world. They often create specific legal bans for pornography that depicts violence or torture, for instance.

Perhaps this is the right approach, but there is an interesting philosophical point to be made about the meaning of any image or symbolic practice. Meaning is never inherent in content. The content-maker chooses particular symbols and images because they mean something in a particular social context; and the content-user interprets those symbols in light of the meaning imposed upon them from that broader social context. And social contexts are highly variable. I have discussed, previously, how the symbolic meaning of particular images and practices changes from culture to culture. Take, for example, the meaning attached to the act of paying for someone to mourn at your parents’ funeral. In some cultures, paid mourners are an insult to the memory of the dead; in others they are an acceptable and encouraged token of affection.

Drabek takes a similar view of the content of pornography. Its meaning is highly variable and this has an impact on its subordinating powers. Some ‘plain vanilla’ pornography, for example, could be highly subordinating:

Consider a case of seemingly banal or harmless sexually explicit materials, perhaps a short video of a man and woman engaged in uneventful sexual intercourse or the banal Playboy photos mentioned by [Gail] Dines…I think banal pornography often does contribute to gender subordination when it is produced and viewed in the right social and material contexts. Adolescent boys and girls who view this material internalize and advance assumptions about the body and about sex.
(Drabek 2016)

Contrariwise, some pornography with seemingly offensive content could be unobjectionable:

The genre of sadomasochist pornography, in particular, is full of material that is unabashedly inegalitarian, but often careful to incorporate discussion and enactment of explicit consent and positive sexual exploration among practitioners. The eroticization of violence, humiliation, and gender inequality are common, but often done in an affirmative way.
(Drabek 2016)

Again, it is the context that matters. If used and interpreted in the right way, banal sexually explicit material could reinforce gender oppression and violent or humiliating material could be empowering or affirmative. What matters is how the people producing, using and consuming the pornography understand it. For example, the ubiquity and banality of certain pornographic material might be key to its subordinating powers. It might be because it is so widely used and consumed (and socially accepted) that it has a strong subordinating effect, whereas the transgressive and uncommon nature of some pornography might be why it doesn’t have this power.

3. Conclusion: The Possibility of Feminist Pornography
Drabek uses his contextual account to argue for the possibility of truly feminist pornography (note: there already exists pornography that is classed as feminist). That is to say, sexually explicit material that does not have a subordinating effect but actually helps to reverse or break down sources of gender discrimination and oppression. His argument makes sense in light of the contextual account. If content doesn’t matter (or doesn’t matter that much) then it should be possible to create contexts in which pornographic materials are produced and used in a way that does not support subordination. Drabek gives some illustrations towards the end of his paper. He suggests that feminist pornography tends to have educative and political intentions and uses.

I find this idea interesting. I quite agree that context matters, but I have some resistance to the notion that content doesn’t have a significant part to play. To be clear, Drabek never says that content counts for nothing. He just says it isn’t the major driving force. What I wonder, however, is whether certain types of pornographic material have context-insensitive meanings. Or, to put it another way, whether certain social and material contexts are so widely-shared that content ends up playing a decisive role in their meaning and broader social significance.

This is something I explored in previous posts about virtual rape and virtual child sexual abuse. There, I looked at work done by Stephanie Patridge on the possibility of incorrigible social meanings. Her suggestion was that some content will have a meaning that is relatively invariant across contexts. Examples in her papers included racist jokes and images, as well as some sexually explicit material, e.g. material depicting racially motivated sexual violence or rape fantasies. The meaning attached to such content was, according to her, incorrigible (i.e. incapable of being revised or changed). Patridge’s argument was thought-provoking but in the end even she seemed to acknowledge that there were contexts in which such seemingly incorrigible meanings could be altered. For instance, somebody from a racial minority could use racist imagery to make a political point or to educate social peers. (It seems that Drabek’s vision of feminist pornography is somewhat similar.)

But Patridge refined her argument and suggested that although some people could use transgressive imagery and content in a positive way, those who used it for humour or sexual gratification could not. So if you are sexually gratified by images of rape or sexual violence, then we are probably always going to be entitled to make negative inferences about your moral character, or the character of the society that encouraged you, or the likely social effects of your being gratified. The upshot was that content and context both mattered. In some cases, the context is relatively invariant and hence the meaning that attaches to the content is also relatively invariant.

This doesn’t contradict or undermine the contextual view. It does, however, highlight an important consequence of it. If we can change the context in which pornography is produced and used, we might be able to change the meaning that attaches to its content. But if the context is difficult to change, then we may not. The reality is that the context in which pornography is consumed and used is one involving sexual gratification (Drabek accepts this in his paper). To put it bluntly: people try to get off on pornographic material. That aspect of the context is relatively invariant. Yes, sometimes pornographic material might be displayed for artistic and educative reasons, but those occasions are pretty infrequent. Given the relative fixity of the context, I suspect that the meaning attaching to particular kinds of pornographic content is also going to be relatively invariant. It will be very hard to reform the meaning (and hence the social effect) of certain sexually explicit materials. Only changes to the content will have that effect. This is what is striking to me about the S&M example given by Drabek. It’s only because that material includes, in his characterisation, documented discussions of affirmative consent and boundary-setting that it is saved from being problematic. If you stumbled upon images of women (seemingly) being tortured, raped and humiliated, and used that imagery for the purposes of sexual gratification without knowing about those discussions of affirmative consent and boundary setting, then there would be something problematic about it. The content of the material is what changes our moral perception.

In short, in some cases the context of usage is so fixed that content is the only thing that can make a difference. In those cases, content will count for a great deal.

Sunday, October 9, 2016

How do we Enhance Cognition through External Representations? Five Ways

I use pen and paper to do most of my serious thinking. Whether it is outlining blogposts or academic papers, taking notes or constructing arguments, I pretty much always take out my trusty A4 pad and pen when I run into a cognitive trough. To be sure, I often mull ideas over in my head for a long time beforehand, but when I want to move beyond my muddled and incoherent thoughts, I will grab for my pen and paper. I am sure that many of you do the same. There is something cognitively different about thinking outside your head: creating an external representation of your thoughts reveals their strengths and weaknesses in a way that internal dialogue never can.

In a recent post, I suggested that the humble pen and paper might be examples of truly complementary cognitive artifacts — artifacts we use to perform cognitive tasks that don’t simply supplement or provide useful short-term props for cognition but actually improve our thinking in a manner that would not be possible in their absence. I was pretty sketchy about how such artifacts might work their cognitive magic in that post, but I’ve just been reading David Kirsh’s paper ‘Thinking with external representations’ and he provides some insights.

The paper is not a huge amount of fun to read, but it does contain some good examples. Kirsh is a proponent of situated/embodied cognition. He believes that human cognition always involves some kind of dynamic interaction between our bodies and our environments. He holds that external representations (drawings, maps, diagrams, bits of writing etc.) are an important way to enhance our cognitive abilities. He goes on to identify several distinct ways in which this happens. I want to share those ways in this post.

1. The Important Principle: Externalisation often lowers the cost of cognition
In the abstract to the paper, Kirsh claims that there are seven distinct ways in which the use of external representations enhances cognition. To be honest, I couldn’t really identify seven in the body of his paper. I wasn’t helped by the fact that Kirsh doesn’t actually identify the distinctions by number in his paper: he leaves that up to the reader. I’m sure it depends on whether you are a lumper or a splitter, but by my estimation it’s more like there is one major way in which externalisation enhances cognition and then four others.

I’ll start with the major one. This is tied into Kirsh’s preferred theory of cognition. As I mentioned, Kirsh is a proponent of embodied/situated cognition. This is a theory within cognitive science. It holds that thinking (the act of cognition) is not a purely brain-based phenomenon. The mental concepts and categories we form are, according to proponents of the theory, shaped by other aspects of our biological nature and the way in which we relate to our environments. Indeed, there is probably no such thing as purely brain-based cognition. Even the ‘highest’ and most abstract forms of thought involve some dynamic interaction between our brains and the world outside our heads. Some proponents of embodied cognition, like Andy Clark, go further and argue that humans are natural born cyborgs, always extending their minds beyond the cramped confines of their skulls.

One way to get your head around this is to think in terms of a schematic diagram. We have an agent — a human thinker — embedded in an environment. Cognition is the result of information being processed from that environment. But the agent’s body affects how he perceives and processes that information. So on pretty much every occasion, cognition is not simply a brain-based phenomenon. It relies on some mediation between brain, body and environment.

It’s more interesting than that, of course. On some occasions (maybe even most) the agent will rely on some external artifact to assist in the performance of a cognitive task. When preparing to cook a meal, for example, they will open a cookbook to the relevant page, mark elements within that page, perhaps tracing the text with a finger. They will also arrange their ingredients into a workspace, setting up a logical order in which to prepare and assemble those ingredients. The external representations provide a scaffolding for cognition: without them it would be much more difficult to perform the task.

Now here’s the important idea: Cognition is costly. It uses up energy. There is, consequently, an incentive to make it cheaper. If you’re very brain-centric in your approach to cognition, you might think that the only way to reduce the cost of cognition is to improve the efficiency of your brain-based information processing. Learn and master the tasks you want to perform so that they use up less neural energy. But once you recognise the central insight of embodied cognition, you see that this is not the only way. You can also reduce costs through externalisation: by offloading some of the cognitive energy to external cognitive scaffolds. For example, it is often much ‘cheaper’ to keep track of a recipe in a book than to store it all in your head.

That’s the main reason why externalisation enhances cognition. There is an incentive for cognitive tasks to flow to the place in which they can be performed most cheaply and since cognition is not a purely brain-based phenomenon, the cheapest site of performance is not always going to be inside the head.

2. Four other reasons why externalisation enhances cognition
But it’s not the only reason why externalisation enhances cognition. Indeed, thinking solely in terms of ‘costs’ often means you miss important ways in which externalisation aids our thinking processes. Here are four more:

They create shareable objects of thought: Creating an external representation of a thought turns it into an object that others can perceive and interpret. This enables collaboration - the power of many minds working on a similar cognitive problem. In many ways, this is what scholarship and research is all about. Scholarly papers translate arguments and theories that might otherwise be stuck in people’s heads into shared objects of thought. Others can then come in and refine, critique and improve upon those arguments and theories. It’s possible to do some of this through dialogue, but the scope for collaboration is greatly increased through externalisation.

They facilitate rearrangement and new cognitive operations: Creating an external representation can allow you to bring together ideas that are distant in logical space and perform cognitive operations that would be difficult without representation. This might be the main advantage of externalisation. By putting thoughts and ideas down on paper — often through schematic diagrams or argument maps — I can literally ‘see’ how thoughts relate to one another and combine them in new ways. Kirsh gives the simple example of a jigsaw puzzle in his article. Imagine trying to assemble a jigsaw puzzle via purely mental representations and rotations of pieces. It would be practically impossible. Having external representations of the pieces, and being able to move them around in physical space, makes it practically possible.

They are often more stable and persistent than mental representations: By externalising your thoughts you usually encode them into an object that is more stable and persistent than a mental representation. This is a very important factor for me. It is well-known that memory is a fallible, dynamic and reconstructive phenomenon. Our brains are not like video recorders. They do not store perfect representations of real-world events. Rather, our memories get dynamically integrated into how we perceive and understand the world. The act of remembering involves reshaping the past. This dynamic integration has its advantages, but it also makes it more difficult to recover exactly what you were thinking on a previous occasion. Externalisation helps to address this problem. Indeed, I find that if I don’t get thoughts and ideas down on paper I often lose my ‘train’ of thought. This can be very frustrating when I previously thought I had hit upon some insight.

They enable re-representation: Once you create an external representation of a thought, you can often re-represent it as something simpler and more elegant. For instance, I might have the bones of an argument rattling around my head. When I first put it down on paper, it might consist of ten premises and two conclusions. But when I analyse those premises and conclusions in their represented format, I realise that a few of them are redundant. I can simplify the argument to something with only three premises and one conclusion. Not only that, I can then combine this newly simplified argument with another simplified argument, thereby chaining together two previously distinct (and complicated) representations. I can repeat this over and over, creating representations of significant complexity. What’s more, I can combine many different forms of representation (writing, music, images) to add new perspectives to the original set of thoughts.

3. Is Externalisation Indispensable?
I quite agree with Kirsh that externalisation improves the quality of our thought. But can we push the argument further and say that externalisation is, in some cases, essential to thinking? Kirsh makes that push toward the end of the article. I won’t evaluate the argument he offers in any great depth here, but I will share it because I think it is interesting.

In essence, Kirsh argues that there are certain kinds of thinking (more precisely, computation) that are too complex for purely mental representation. Indeed, there are certain kinds of computation that are too complex for most external representations. To engage in these forms of computation, you have to construct a particular kind of external representation (usually an analog model) to understand what is going on. The argument works like this:

  • (1) Some phenomena are irreducibly complex: they involve State A leading to State B, but there is no way to identify and describe the factors mediating the two states in a simple logical-mathematical formula (which could in turn be represented mentally).
  • (2) The only way to understand such irreducibly complex phenomena is to construct an external representation of the system which models the transition between State A and State B and run that model repeatedly.
  • (3) Implication: externalisation is essential to some forms of cognition.

This might seem a little bit obscure. Kirsh takes the idea from work done by John von Neumann, who argued that some natural phenomena may be too complex to be reduced to simple equations. The simplest way to understand them would be to create models of the relationships underlying the phenomena. A good example of this is the behaviour of an n-body system, e.g. the planets in the solar system. Kirsh suggests that an orrery — a simple analog model of the planets — is the easiest way to model and predict the movement of the planets.
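For illustration, here is a minimal sketch in Python of the kind of model being described. The masses, positions and step size are invented, and the integration method is deliberately crude; the point is the method, not the numbers:

```python
# Toy n-body simulator (illustrative sketch: masses, positions and step
# size are invented). There is no general closed-form solution for
# n >= 3 bodies, so the only way to predict the system is to build the
# model and run it forward, step by step: the digital analogue of
# turning the crank on an orrery.
import math

G = 1.0  # gravitational constant in arbitrary units

def step(bodies, dt):
    """Advance every (mass, x, y, vx, vy) tuple by one time step."""
    forces = []
    for i, (mi, xi, yi, _, _) in enumerate(bodies):
        fx = fy = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            f = G * mi * mj / (r * r)   # Newton's law of gravitation
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    return [
        (m, x + vx * dt, y + vy * dt, vx + fx / m * dt, vy + fy / m * dt)
        for (m, x, y, vx, vy), (fx, fy) in zip(bodies, forces)
    ]

# A central mass and two small satellites (made-up values)
bodies = [(100.0, 0.0, 0.0, 0.0, 0.0),
          (1.0, 10.0, 0.0, 0.0, 3.0),
          (1.0, -15.0, 0.0, 0.0, -2.5)]
for _ in range(1000):
    bodies = step(bodies, dt=0.01)
print(bodies[1][1:3])  # where is satellite 1? Only running the model tells us
```

Note that the code contains no formula for where satellite 1 ends up; the prediction exists only as the output of the run, which is exactly Kirsh’s point about analog models.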

Kirsh’s suggestion sounds plausible to me, but I’m not sufficiently expert in the mathematics of complex systems to question it. The important point is that external representations can, and frequently do, improve the quality of our thinking. If you want to ‘enhance’ cognition, you are usually going to be better off reaching for a pen and paper than for a box of modafinil.

Friday, October 7, 2016

Episode #12 - Rick Searle on the Dark Side of Transhumanism


In this episode I interview Rick Searle. Rick is an author living in Amish country in Pennsylvania. He is a prolific writer and commentator on all things technological. I get Rick to educate me about the darker aspects of the transhumanist philosophy. In particular, what Rick finds disturbing in the writings of Zoltan Istvan, Steve Fuller and the Neoreactionaries.

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (click 'add to iTunes').

Show Notes

  • 0:00 - 1:40 - Introduction
  • 1:40 - 4:40 - Rick's definition of Transhumanism
  • 4:40 - 10:10 - Zoltan Istvan and the Transhumanist Wager
  • 10:10 - 16:35 - The philosophy of teleological egocentric functionalism - Ayn Rand on steroids?
  • 16:35 - 22:30 - Steve Fuller's Humanity 2.0
  • 22:30 - 28:00 - Some disturbing conclusions?
  • 28:00 - 32:20 - The ontology and ethics of Humanity 2.0
  • 32:20 - 36:55 - Stalinism as Transhumanism
  • 43:25 - 47:00 - Transhumanism as religion
  • 47:00 - 56:30 - The neo-reactionaries of Silicon Valley
  • 56:30 - End - Is democracy fit for the future?


Wednesday, September 21, 2016

Pushing Humans off the Loop: Automation and the Unsustainability Problem

There is a famous story about an encounter between Henry Ford II (CEO of the Ford Motor Company) and Walter Reuther (head of the United Automobile Workers union). Ford was showing Reuther around his factory, proudly displaying all the new automating technologies he had introduced to replace human workers. Ford gloated, asking Reuther ‘How are you going to get those robots to pay union dues?’. Reuther responded with equal glee ‘Henry, how are you going to get them to buy your cars?’.

The story is probably apocryphal, but it’s too good a tale to let truth get in the way. The story reveals a common fear about technology and the impact it will have on human society. The fear is something I call the ‘unsustainability problem’. The idea is that if certain trends in automation continue, and humans are pushed off more and more productive/decision-making loops, the original rationale for those ‘loops’ will disappear and the whole system will start to unravel. Is this a plausible fear? Is it something we should take seriously?

I want to investigate those questions over the remainder of this post. I do so by first identifying the structure of the problem and outlining three examples. I then set out the argument from unsustainability that seems to follow from those examples. I close by considering potential objections and replies to that argument. My goal is not to defend any particular point of view. Instead — and as part of my ongoing work — I want to identify and catalogue a popular objection/concern to the development of technology and highlight its similarities to other popular objections.

[Note: This is very much an idea or notion that I thought might be interesting. After writing it up, I'm not sure that it is. In particular, I'm not sure that the examples used are sufficiently similar to be analysed in the same terms. But maybe they are. Feedback is welcome]

1. Pushing Humans off the Loop
Let’s start with some abstraction. Many human social systems are characterised by reciprocal relationships between groups of agents occupying different roles. Take the relationship between producers (or suppliers) and consumers. This is the relationship at the heart of the dispute between Ford and Reuther. Producers make or supply goods and services to consumers; consumers purchase and make use of the goods and services provided by the producers. The one cannot exist without the other. The whole rationale behind the production and supply of goods and services is that there is a ready and willing cadre of consumers who want those goods and services. That’s the only way that the producers will make money. But it’s not just that the producers need the consumers to survive: the consumers also need the producers. Or, rather, they need themselves to be involved in production, even if only indirectly, in order to earn an income that enables them to be consumers. I have tried to illustrate this in the diagram below.

The problem alluded to in the story about Ford and Reuther is that this loop is not sustainable if there is too much automation. If the entire productive half of the loop is taken over by robots, then where will the consumers get the income they need to keep the system going? (Hold off on any answers you might have for now — I’ll get to some possibilities later)

When most people think about the unsustainability problem, the production-consumption relationship is the one they usually have in mind. And when they think about that relationship, they usually only focus on the automation of the productive half of the relationship. But this is to ignore another interesting trend in automation: the trend towards automating the entire loop, i.e. production and consumption. How is this happening? The answer lies in the growth of the internet of things and the rise of ‘ambient payments’. Smart devices are capable of communicating and transacting with one another. The refrigerator in your home could make a purchase from the robot personal shopper in your local store. You might be the ultimate beneficiary of the transaction, but you have been pushed off the primary economic loop: you are neither the direct consumer nor the direct supplier.

It’s my contention that it is this trend towards total automation that is the really interesting phenomenon. And it’s not just happening in the production-consumption loop either. It is happening in other loops as well. Let me give just two examples: the automation of language production and interpretation in the speaker-listener loop, and the automation of governance in the governor-governed loop.

The production and interpretation of language takes place in a loop. The ‘speaker’ produces language with the aim of causing some effect in the mind of the ‘listener’ — without the presumption of a listener there is very little point to the act. Likewise, the ‘listener’ interprets the language based on the presumption that there is a speaker who wishes to be understood, and based on what they have learned about the meaning of language from living in a community of other speakers and listeners. Language lives and breathes in a vibrant and interconnected community of speakers and listeners, with individuals often flitting back and forth between the roles. So there is, once again, a symbiotic relationship between the two sides of the loop.

Could the production and interpretation of language be automated? It is already happening in the digital advertising economy. This is a thesis that Pip Thornton (the research assistant on the Algocracy and Transhumanism Project that I am running) has developed in her work. It is well known that Google makes its money from advertising. What is perhaps less well-known is that Google does this by commodifying language. Google auctions keywords to advertisers. Different words are assigned different values based on how likely people are to search for them in a given advertising area (space and time). The more popular the word in the search engine, the higher the auction value. Advertisers pay Google for the right to use the popular words in their adverts and have them displayed alongside user searches for those terms.
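To make the mechanics concrete, here is a stylised sketch of how an auction of this general kind might work. To be clear, this is an illustration of the textbook ‘generalized second-price’ model, not Google’s actual system, and the advertisers and bids are invented:

```python
# Stylised keyword auction (an illustration of the textbook 'generalized
# second-price' model, NOT Google's actual system; advertisers and bids
# are invented). Slots go to the highest bidders; each winner pays
# roughly the bid of the advertiser ranked just below them.

def run_auction(bids, n_slots=2, floor=0.01):
    """bids: dict mapping advertiser -> bid for a single keyword."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for slot in range(min(n_slots, len(ranked))):
        winner = ranked[slot][0]
        # price = next-highest bid, or a floor price if nobody is below
        price = ranked[slot + 1][1] if slot + 1 < len(ranked) else floor
        results.append((slot + 1, winner, price))
    return results

# Invented bids on the keyword 'insurance' (popular words attract high bids)
bids = {'AdCo': 4.50, 'BuyNow': 3.75, 'CheapIns': 1.20}
for slot, advertiser, price in run_auction(bids):
    print(f'slot {slot}: {advertiser} pays {price:.2f} per click')
```

The point is not the mechanics but the commodification: the ‘value’ of a word is set by bidding behaviour, which is in turn driven by search popularity.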

All of this might sound relatively innocuous and uninteresting at first glance. Language has always been commodified and advertisers have always, to some extent, paid for ‘good copy’. The only difference in this instance is that it is Google’s PageRank algorithm that determines what counts as ‘good copy’.

Where the phenomenon gets interesting is when you start to realise that this has resulted in an entire linguistic economy in which both the production and interpretation of language are slowly being taken over by algorithms. The PageRank algorithm functions as the ultimate interpreter. Humans adjust their use of language to match the incentives set by that algorithm. But humans don’t do this quickly enough. An array of bots is currently at work stuffing webpages with algorithmically produced language and clicking on links in the hope of tricking the ranking system. In very many instances neither the producers nor the interpreters of advertising copy are humans. The internet is filled with oddly produced, barely comprehensible webpages whose linguistic content has been tailored to the preferences of machines. Human web-surfers often find themselves in the role of archaeologists stumbling upon these odd linguistic tombs.

Automation is also taking place in the governor-governed relationship. This is the relationship that interests me most and is the centrepiece of the project I’m currently running. I define a governance system as any system that tries to nudge, manipulate, push, pull, incentivise (etc.) human behaviour. This is a broad definition and could technically subsume the two relationships previously described. More narrowly, I am interested in state-run governance systems, such as systems of democratic or bureaucratic control. In these systems, one group of agents (the governors) set down rules and regulations that must be followed by the others (the governed). It’s less easy to describe this as a reciprocal relationship. In many historical cases, the governors are rigidly separated from the governed and by necessity have significant power over them. But there is still something reciprocal about it. No one — not even the most brutal dictator — can govern for long without the acquiescence of the governed. The governed must perceive the system to be legitimate in order for it to work. In modern democratic systems this is often taken to mean that they should play some role in determining the content of the rules by which they are governed.

I have talked to a lot of people about this over the years. To many, it seems like the governor-governed relationship is intrinsically humanistic in nature. It is very difficult for them to imagine a governance system in which either or both roles becomes fully automated. Surely, they say, humans will always retain some input into the rules by which they are governed? And surely humans will always be the beneficiaries of these rules?

Maybe, but even here we see the creeping rise of automation. Already, there are algorithms that collect, mine, classify and make decisions on data produced by us as subjects of governance. This leads to more and more automation on the governor-side of the loop. But the rise of smart devices and machines could also facilitate the automation of the governed side of the loop. The most interesting example of this comes in the shape of blockchain governance systems. The blockchain provides a way for people to create smart contracts. These are automated systems for encoding and enforcing promises/commitments, e.g. the selling of a derivative at some future point in time. The subjects of these smart contracts are not people — at least not directly. Smart contracts are machine-to-machine promises. A signal that is recorded and broadcast from one device is verified via a distributed network of other computing devices. This verification triggers some action via another device (e.g. the release of money or property).
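A toy sketch might make the machine-to-machine logic clearer. This is purely illustrative (real smart contracts run as verified bytecode on a blockchain platform such as Ethereum, not as Python, and all the names here are invented): a sensor broadcasts a signal, distributed nodes verify it, and the verified signal automatically releases escrowed funds.

```python
# Toy sketch of a smart contract's machine-to-machine logic. Purely
# illustrative: real smart contracts run as verified bytecode on a
# blockchain platform (e.g. Ethereum), and all names here are invented.
# A sensor broadcasts a signal; once enough independent nodes verify it,
# the contract releases the escrowed funds with no human in the loop.

class ToyEscrowContract:
    def __init__(self, amount, required_confirmations=3):
        self.amount = amount
        self.required = required_confirmations
        self.confirmations = set()   # node ids, so no node counts twice
        self.released = False

    def confirm(self, node_id, signal):
        """A network node attests that it observed the triggering signal."""
        if signal == 'GOODS_DELIVERED' and not self.released:
            self.confirmations.add(node_id)
            if len(self.confirmations) >= self.required:
                self.released = True
                print(f'Contract: releasing {self.amount} to the seller device')

contract = ToyEscrowContract(amount=100)
for node in ['node-a', 'node-b', 'node-c']:   # distributed verification
    contract.confirm(node, 'GOODS_DELIVERED')
```

Note the design point: the ‘promise’ is enforced by the verification threshold, not by anyone’s goodwill, and humans can benefit from the transfer without being parties to it.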

As noted in other recent blog posts, blockchain-based smart contracts could provide the basis for systems of smart property (because every piece of property in the world is becoming a ‘smart’ device) and even systems of smart governance. The apotheosis of the blockchain governance ideal is the hypothetical distributed autonomous organisation (DAO), which is an artificial, self-governing agent, spread out across a distributed network of smart devices. The actions of the DAO may affect the lives of human beings, but the rules by which it operates could be entirely automated in terms of their production and implementation. Again, humans may be indirect beneficiaries of the system, but they are not the primary governors or governed. They are bystanders.

2. The Unsustainability Argument
Where will this process of automation bottom out? Can it continue indefinitely? Does it even make sense for it to continue indefinitely? To some, the trend toward total automation cannot be understood simply in terms of its causes and effects. To them, there is something much more fundamental and disconcerting going on. Total automation is a deeply puzzling phenomenon — something that cannot, and should not, continue to the point where humans are completely off the loop.

The Ford-Reuther story seems to highlight the problem in the clearest possible way. How can a capitalistic economy survive if there are no human producers and consumers? Surely this is self-defeating? The whole purpose of capitalism is to provide tools for distributing goods and services to the humans that need them. If that’s not what happens, then the capitalistic logic will have swallowed itself whole (yes, I know, this is something that Marxists have always argued).

I call this the unsustainability problem and it can be formulated as an argument:

  • (1) If automation trend X continues, then humans will be pushed off the loop.

  • (2) The loop is unsustainable* without human participation.

  • (3) Therefore, if automation trend X continues we will end up with something that is unsustainable*.

You’ll notice that I put a little asterisk after unsustainable. That’s deliberate. ‘Unsustainable’ in this context need not be understood in its colloquial sense, though it can be. Unsustainable* stands for a number of possible concerns. It could be literally unsustainable in the sense that the trend will eventually lead to some breaking point or crash point. This is common in certain positive feedback loops. For example, the positive feedback loop that causes the hyperinflation of currencies. If a currency inflates like it did in Weimar Germany or, more recently, Zimbabwe, then you eventually reach a point where the currency is worthless in economic transactions. People have to rely on another currency or have recourse to barter. Either way, the feedback loop is not sustainable in the long-term. But unsustainable* could have more subtle meanings. It may be that the trend is sustainable in the long-term (i.e. it could continue indefinitely) but that if it did so you would radically alter the value or meaning that attached to the activities in the loop. So much so that they would seem pointless or no longer worthwhile.
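A stylised bit of arithmetic (my illustration, not drawn from any of the sources discussed here) shows why such a loop has a crash point. If the price level in each period feeds back into the next with some positive gain \(\alpha\), then:

\[
p_{t+1} = (1+\alpha)\,p_t \quad\Longrightarrow\quad p_t = (1+\alpha)^{t}\,p_0
\]

For any \(\alpha > 0\) the price level grows exponentially, and the real value of the currency collapses; the loop cannot run indefinitely.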

To give some examples, the unsustainability argument applied to the producer-consumer case might involve literal unsustainability, i.e. the concern might be that it will lead to the capitalistic system breaking down; or it might be that it will radically alter the value of that system, i.e. it might force a change in the system of private property. In the case of the speaker-listener loop, the argument might be that automation misses the point of what a language is, i.e. that a language is necessarily a form of communication between two (or more) conscious, intentional agents. If there are no conscious, intentional agents involved, then you no longer have a language. You might have some form of machine-to-machine communication, but there is no reason for that to take the form of language.

3. Should the Unsustainability Problem Concern Us?
I want to close with some simple critical reflections on the unsustainability argument. I’ll keep these fairly general.

First, I want to talk a bit more about premise (1). There are various ways in which this may be false. The simple fact that there is automation of the tasks typically associated with a given activity does not mean that humans will be pushed off the loop. As I’ve highlighted on other occasions, the ‘loops’ referred to in debates about technology are complicated and break down into multiple sub-tasks and sub-loops. Take the production side of the producer-consumer relationship. Productive processes can usually be broken down into a series of stages, which often have an internal loop-like structure. If I owned a business that produced widgets, I would usually start the productive process by trying to figure out what kinds of widgets are needed in the world; I would then acquire the raw materials needed to make those widgets, develop some productive process, release the widgets to the consumers, and then learn from my mistakes/successes in order to refine and improve the process in the future. When we talk about the automation of production, there is a tendency to ignore these multiple stages. It’s rare for all of them to be automated; consequently, it’s likely that humans will retain some input into the loops.

Another way of putting this point is to say that technology doesn’t replace humans; it displaces them, i.e. changes the ecology in which they operate so that they need to do new things to survive. People have been making this point for some time in the debate about technology and unemployment. The introduction of machines onto the factory floors of the Ford Motor Company didn’t obviate the need for human workers; it simply changed what kinds of human workers were needed (skilled machinists etc.). But it is important that this displacement claim is not misunderstood. It doesn’t mean that there is nothing to worry about or that the displacement won’t have profound or important consequences for the sustainability of the relevant phenomenon. The human input into the newly automated productive or consumptive processes might be minimal: very few workers might be needed to maintain production within the factory, and there might be limited opportunity for humans to exercise choice or autonomy when it comes to consumer-related decisions. Humans may be involved in the loops but be reduced to relatively passive roles within them. More radically, and possibly more interestingly, the automation trends may subsume humans themselves. In other words, humans may not be displaced by technology; they may become the technology itself.

This relates to the plausibility of premise (2). This premise may also be false, particularly if unsustainability is understood in its literal sense. For example, I see no reason to think that the automation of language production and interpretation in online advertising cannot continue. It may prove frustrating for would-be advertisers, and it may seem odd to the humans who stand on the sidelines watching the system unfold, but the demand for advertising space and the scarcity of attention suggest to me that, if anything, there will be a doubling down on this practice in the future. This will certainly alter the activity and rob it of some of its value, but there will still be the hope that you can find someone who is paying attention to the process. The same goes for the other examples. They may prove sustainable with some changed understanding of what makes them worthwhile and how they affect their ultimate beneficiaries. The basic income guarantee, for instance, is sometimes touted as a way to keep capitalism going in the face of alleged unsustainability.

Two other points before I finish up. Everything I have said so far presumes that machines themselves should not be viewed as agents or objects of moral concern — i.e. that they cannot directly benefit from the automation of production, consumption, governance or language. If they can — and if it is right to view them as beneficiaries — then the analysis changes somewhat. Humans are still pushed off the loop, but it makes more sense for the loops to continue with automated replacements. Finally, as I have elaborated it, the unsustainability problem is very similar to other objections to technology, including ones I have covered in the recent past. It is, in many ways, akin to the outsourcing and competitive cognitive artifacts objections that I covered here and here. All of these objections worry about the dehumanising potential of technology and the future relevance of human beings in an automated world. The differences tend to come in how they frame the concern, not in its ultimate content.

Sunday, September 18, 2016

Competitive Cognitive Artifacts and the Demise of Humanity: A Philosophical Analysis

David Krakauer seems like an interesting guy. He is the president of the Santa Fe Institute in New Mexico, a complexity scientist and evolutionary theorist with a noticeable interest in artificial intelligence and technology. I first encountered his work — as many recently did — via Sam Harris’s podcast. On the podcast he articulated some concerns he has about the development of artificial intelligence, concerns which he also set out in a recent (and short) article for the online magazine Nautilus.

Krakauer’s concerns are of interest to me. They echo the concerns of others like Nicholas Carr and Evan Selinger (both of whom I have written about before). But Krakauer expresses his concerns using an interesting framework for thinking about the different kinds of cognitive artifact humans have created over the course of history. In essence, he argues that cognitive artifacts come in two flavours: complementary and competitive. We are creating more and more competitive cognitive artifacts (i.e. AI), and he thinks this could be a bad thing.

What I hope to do in this article is examine this framework in more detail, explaining why I think it might be useful and where it has some shortcomings; then I want to reconstruct Krakauer’s argument against competitive cognitive artifacts and subject it to critical scrutiny. In doing so, I hope to highlight the similarities between Krakauer’s argument and the others mentioned above. I believe this is important because the argument is incredibly common in popular debates about technology and is, I believe, misunderstood.

1. Complementary and Competitive Cognitive Artifacts
Krakauer takes his cue from Donald Norman’s 1991 paper ‘Cognitive Artifacts’. This paper starts by noting that one of the distinctive traits of human beings is that they can ‘modify the environment in which they live through the creation of artifacts’ (Norman 1991, quoting Cole 1990). When I want to dig a hole, I use a spade. The spade is an artifact that allows me to change my surrounding environment. It amplifies my physical capacities. Cognitive artifacts are artifacts that ‘maintain, display or operate upon information in order to serve a representational function’. A spade would not count as a cognitive artifact under this definition (though the activity one performs with the spade is clearly cognitively mediated) but much contemporary technology does.

Indeed, one of Norman’s main contentions is that cognitive artifacts are ubiquitous. Many of the cognitive tasks we perform on a daily basis are mediated through them. Paper and pen, map and compass, abacus and bead: these are all examples of cognitive artifacts. All digital information technology can be classified as such. They all operate upon information and create representations (interfaces) that we then use to interact with and understand the world. The computer on which I type these words is a classic example. I could not do my job — nor, I suspect, could you — without the advantages that these cognitive artifacts bring.

But there are different kinds of cognitive artifact. Contrast the abacus with a digital calculator. Very few people use abaci these days, though they are still common in some cultures. They are external scaffolds that allow human beings to perform simple arithmetical operations. Sliding beads along a wireframe, in different directions, with upper and lower decks used to identify orders of magnitude, can enable you to add, subtract, multiply, divide and so forth. Expert abacus users can often impress us with their computational abilities. In some cases they don’t even need the physical abacus. They can recreate its structure, virtually, in their minds and perform the same computations at speed. The artifact represents an algorithm to them through its interface — i.e. a ruleset for making something complex quite simple — and they can incorporate that algorithm into their own mental worlds.

The digital calculator is rather different. It also helps us to perform arithmetical operations (and other kinds of mathematical operation). It thereby amplifies our mathematical ability. A human being with a calculator could tell you what 1,237 x 456 was (564,072) in a very short period of time. But if you took away the calculator, the human probably wouldn’t be able to do the same thing on their own. The calculator works on an algorithmic basis, but the representation of the algorithms is hidden beneath the user interface. If you take away the calculator, the human cannot recreate — re-represent — the algorithm inside their own mind. There is no virtual analogue of the artifact.
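The contrast can be made vivid with a small sketch (my illustration, not Krakauer’s or Norman’s). The first function makes the schoolbook multiplication procedure explicit, step by step, the way an abacus makes its algorithm visible through its interface; a practised user could re-run those steps mentally. The second delegates to an opaque built-in operation, the way a calculator does.

```python
# 'Complementary' style: the procedure is explicit, so a practised user
# can internalise it and re-run the steps without the artifact.
def long_multiply(a: int, b: int) -> int:
    total, place = 0, 1
    while b > 0:
        digit = b % 10                 # lowest remaining digit of b
        total += a * digit * place     # partial product, shifted into place
        b //= 10
        place *= 10
    return total

# 'Competitive' style: the answer appears, but the method stays hidden
# behind the interface. There is nothing here to internalise.
def calculator(a: int, b: int) -> int:
    return a * b

print(long_multiply(1237, 456))   # 564072, built up from visible steps
print(calculator(1237, 456))      # 564072, with the 'how' invisible
```

The distinction, of course, is about what the user can take away from the artifact, not about the code; the point of the sketch is only that one interface exposes its algorithm and the other buries it.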

The difference between the abacus and the calculator is the difference between what Krakauer calls complementary and competitive cognitive artifacts. In the article I read, he isn’t terribly precise about the definitions of these concepts. Here’s my attempt to define them:

Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks, and, once the user has mastered the physical artifact, they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

Competitive Cognitive Artifacts: These are artifacts that amplify and improve our ability to perform cognitive tasks when we have use of the artifact, but when the artifact is taken away we are no better (and possibly worse) at performing the cognitive task than we were before, e.g. a digital calculator.

Put another way, Krakauer says that complementary cognitive artifacts are teachers whereas competitive cognitive artifacts are serfs (for now anyway). When we use them, they improve upon (i.e. compete with) an aspect of our cognition. We use them as tools (or slaves) to perform tasks in which we are interested; but then we become dependent on them because they are better than us. We don’t work with them to improve our own abilities.

Here’s where I must enter my first objection. I find the distinction Krakauer draws between these two categories both interesting and useful. He is clearly getting at something true: there are different kinds of cognitive artifact and they affect how we perform cognitive tasks in different ways. But the binary distinction seems simplistic, and the way in which Krakauer characterises complementary cognitive artifacts seems limiting. I suspect there is really a spectrum of different cognitive artifacts out there, ranging from ones that really improve or enhance our internal cognitive abilities at one end to ones that genuinely compete with and replace them at the other.

But if we are going to stick with a more rigid classification system, then I think we should further subdivide the ‘complementary’ category into two sub-types. I don’t have catchy names for these sub-types, but the distinction I wish to make can be captured by referring to ‘training wheels’-like cognitive artifacts and ‘truly complementary’ cognitive artifacts. The kinds of complementary artifact used in Krakauer’s discussion are of the former type. Remember when you learned to ride a bike. Like most people, you probably found it difficult to balance. Your parents (or whoever) would have attached training wheels to your bike initially as a balance aid. Over time, as you grew more adept at the physical activity of cycling, the training wheels would have been removed and you would eventually be able to balance without them. Krakauer’s reference to cognitive artifacts that can eventually be replaced by a virtual/mental equivalent strikes me as analogous. The physical artifact is like a set of training wheels; the adept user doesn’t need them.

But is there not a separate category of truly complementary artifacts? Ones that can’t simply be taken away or replaced by mental simulacra, and don’t compete with or replace human cognition? In other words, are there not cognitive artifacts with which we are genuinely symbiotic? I think a notepad and pen falls into this category for me. I could, of course, think purely ‘in my head’, but I am so much better at doing it with a notepad and pen. I can scribble and capture ideas, draw out conceptual relationships, and map arguments using these humble technologies. I would not be as good at thinking without these artifacts; but the artifacts don’t replace or compete with me.

2. The Case Against Competitive Cognitive Artifacts
I said at the outset that this had something to do with fears about AI and modern technology. So far the examples have been of a less sophisticated type. But you can probably imagine how Krakauer’s argument develops from here.

Artificial intelligences (narrow, not broad) are the fastest growing example of competitive cognitive artifacts. The navigational routing algorithms used by Google maps; the purchase recommendation systems used by Netflix and Amazon; the automated messaging apps I covered in my conversation with Evan Selinger; all these systems perform cognitive tasks on our behalf in a competitive way. As these systems grow in scope and utility, we will end up living in a world where things are done for us not by us. This troubles Krakauer:

We are in the middle of a battle of artificial intelligences. It is not HAL, an autonomous intelligence and a perfected mind, that I fear but an aggressive App, imperfect and partial, that diminishes human autonomy. It is prosthetic integration with the latter — as in the case of a GPS App that assumes the role of the navigational sense, or a health tracker that takes over decision-making when it comes to choosing items from a menu — that concerns me. 
(Krakauer 2016)

He continues by drawing an analogy with the story of the Lotus Eaters from Homer’s The Odyssey:

In Homer’s The Odyssey, Odysseus’s ship finds shelter from a storm on the land of the lotus eaters. Some crew members go ashore and eat the honey-sweet lotus, “which was so delicious that those [who ate it] left off caring about home, and did not even want to go back and say what happened to them.” Although the crewmen wept bitterly, Odysseus reports, “I forced them back to the ships…Then I told the rest to go on board at once, lest any of them should taste of the lotus and leave off wanting to get home.” In our own times, it is the seductive taste of the algorithmic recommender system that saps our ability to explore options and exercise judgment. If we don’t exercise the wise counsel of Odysseus, our future won’t be the dystopia of Terminator but the pathetic death of the Lotus Eaters. 
(Krakauer 2016)

This is evocative stuff. But the argument underlying it all is a little opaque. The basic idea appears to work like this:

  • (1) It is good (for us) to create and use complementary cognitive artifacts; it is bad (or could be bad) to create and use competitive cognitive artifacts.
  • (2) We are creating more and more competitive cognitive artifacts.
  • (3) Therefore, we are creating a world that will be (or could be) bad for us.

This is vague, but it has to be since the source material is vague. Clearly, Krakauer is concerned about the creation of competitive cognitive artifacts. But why? Their badness (or potential badness) lies in how they sap us of cognitive ability and leave us no smarter without them. In other words, their badness lies in our becoming too dependent on them. This affects our agency and responsibility (our autonomy). What’s not clear from Krakauer’s account is whether this is bad in and of itself, or whether it only becomes bad once the volume and extent of the cognitive competition crosses some threshold. For reasons I get into below, I assume it must be the latter rather than the former, because in certain cases it seems like we should be happy to replace ourselves with artifacts.

Now that the argument is laid bare, its similarities with other popular anti-AI and anti-automation arguments become obvious. Nicholas Carr’s main argument in his book The Glass Cage concerns the degenerative impact of automation on our cognitive capacities. Carr worries that over-reliance on automating, smart technologies will reduce our ability to perform certain kinds of cognitive task (including complex problem-solving). Evan Selinger’s anti-outsourcing argument is similar. It worries about the ethical impact of outsourcing certain kinds of cognitive labour to a machine (though Selinger’s argument is more subtle and more interesting, for reasons I explore in a moment). Krakauer’s argument is just another instance of this objection, dressed up in a different conceptual frame.

Is it any good?

3. The Changing Cognitive Ecology Problem
In a way, Krakauer’s argument is as old as Western Civilisation itself. In the Platonic dialogue The Phaedrus, Plato’s Socrates laments the invention of writing and worries about the cognitive effects that will result from the loss of oral culture:

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

Seems quaint and old-fashioned, doesn’t it? Critics of anti-automation arguments always point to this passage. They think it highlights how misguided and simplistic Krakauer’s (or Carr’s or Selinger’s) views are. No one now looks back and laments the invention of writing. Indeed, I think we can all agree that it has been enormously beneficial. It is far better at preserving culture and transmitting collective wisdom than oral traditions ever were. I think I can safely say that having access to high-quality written materials makes me a smarter, better person. I wouldn’t have it any other way (though I acknowledge some books have had a negative impact on society). I gain by having access to so much information: it enables me to understand far more of the world and to generate new and hopefully interesting ideas by combining bits and pieces of what I have read. Furthermore, books didn’t really undermine memory in the way that Socrates imagined. They simply changed what it was important to remember. There were still (until recently anyway) pressures to remember other kinds of information.

The problem with Krakauer’s view is deep and important. It is that competitive cognitive artifacts don’t just replace or undermine one cognitive task. They change the cognitive ecology, i.e. the social and physical environment in which we must perform cognitive tasks. This is something that Donald Norman acknowledged in his 1991 paper on cognitive artifacts. There, his major claim was that such artifacts neither amplify nor replace the human mind; rather, they change what the human mind needs to do. Think about the humble to-do list. This is an artifact that helps you to remember. But the cognitive act of remembering with a to-do list is very different from the cognitive act of remembering without one. With the to-do list, three separate tasks must be performed: creating the list, storing it, and looking it up when need be. Without the list, you just search your mind for the information (perhaps through the use of associative cues). The same net result is produced, but the ecology of tasks has changed.

These changes are not something that can be evaluated in a simple or straightforward manner. The process of changing the cognitive ecology may remove or eliminate an old cognitive task, but doing so can bring with it many benefits. It may enable us to focus our cognitive energies on other tasks that are more worthy uses of our time and effort. This is what happened with the invention of writing. The transmission of information via the written word meant we no longer needed to dedicate precious time and effort to the simple act of remembering that information. We could dedicate that time and effort to thinking up new ways in which the information could be utilised.
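Norman’s to-do list example can be rendered as a toy sketch (my illustration; the function names are mine, not Norman’s). The artifact doesn’t amplify the single act of recall; it replaces it with three smaller tasks while producing the same net result.

```python
# Without the artifact: one internal task, searching memory for the item.
def remember_internally(memory: set[str], cue: str) -> bool:
    return cue in memory                 # a single act of recall

# With the artifact: three smaller tasks replace the one internal task.
storage: dict[str, list[str]] = {}

def create_list(items: list[str]) -> list[str]:
    return list(items)                   # task 1: creating the list

def store_list(name: str, todo: list[str]) -> None:
    storage[name] = todo                 # task 2: storing it somewhere

def look_up(name: str, item: str) -> bool:
    return item in storage[name]         # task 3: looking it up when need be

store_list("errands", create_list(["buy milk", "post letter"]))
print(look_up("errands", "buy milk"))                 # True
print(remember_internally({"buy milk"}, "buy milk"))  # True: same net result
```

Same output, different task ecology: evaluating the artifact means evaluating the whole new ecology of tasks, not just the one task it displaced.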

The Canadian fantasy author R. Scott Bakker describes the ‘cognitive ecology’ problem well in his recent response to Krakauer. As he puts it:

What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology he has no way of evaluating the kinds of trade-offs they will force upon us. 
(Bakker 2016)

And therein lies the rub for Krakauer et al: why should we fear the growth of competitive cognitive artifacts when their effects on our cognitive ecology are uncertain, and when similar technologies have, in the past, been beneficial?

It is a fair point but I think the cognitive ecology objection has its limitations too. It may highlight problems with the generalised version of the anti-automation argument that Krakauer seems to be making, but it fares less well against more specific versions of the argument. For instance, Evan Selinger’s objections to technological outsourcing tend to be much more nuanced and focused. I covered them in detail before so I won’t do so again here. In essence Selinger argues that certain types of competitive cognitive artifact might be problematic insofar as the value of certain activities may come from the fact that we are present, conscious performers of those activities. If we are no longer present conscious performers of the activities — if we outsource our performance to an artifact — then we may denude them of their value. Good examples of this include affective tasks we perform in our interpersonal relationships (e.g. messaging someone to remind them how much you love them) as well as the performative aspects of personal virtues (e.g. generosity and courage). By tailoring the argument to specific cases you end up with something more powerful.

In addition to this, I worry about the naive use of historical examples to deflate concerns about present-day technologies. The notion that you can simply point to the Phaedrus, laugh at Socrates’ quaint preliterate views, and then warmly embrace the current wave of competitive cognitive artifacts seems wrong to me. There may be crucial differences between what we are currently doing with technology and what has happened in the past. Just because everything worked out before doesn’t mean everything will work out now. This point has been well thrashed out in the debate about technological unemployment (proponents of which are frequently ridiculed for believing that this time it will be different). The scope and extent of the changes to our cognitive ecology may be genuinely unprecedented (it certainly seems that way). The assumption behind the cognitive ecology objection is that humans will end up occupying a new and equally rewarding niche in the new cognitive ecology, but who is to say this is true? If technology is better than humans in every cognitive domain, there may be no niches left to find. Perhaps we are like flightless birds on some cognitive archipelago: we have no natural predators right now, but things could change in the not-too-distant future.

Finally, I worry about the uncertainty involved in the coming transitions. We must make decisions in the face of uncertainty — of course we must. But the notion that we should embrace rampant AI despite (or maybe because of) that uncertainty seems wrong to me. Commitment to technological change for its own sake seems just as naive as reactionary conservatism against it. There must be a sensible middle ground where we can think reasonably and rationally about the evaluative trade-offs that might result from the use of competitive cognitive artifacts, weigh them up as best we can, and proceed with hope and optimism. Throwing ourselves off the cliff in the hopes of finding some new cognitive niche doesn’t feel like the right way to go about it.