Saturday, October 22, 2016

Episode #13: Laura Cabrera on Human Enhancement, Communication and Values


In this episode I interview Dr Laura Cabrera. Laura is an Assistant Professor at the Center for Ethics and Humanities in the Life Sciences at Michigan State University, where she conducts research into the ethical and societal implications of neurotechnology. I ask Laura how human enhancement can affect interpersonal communication and values, and we talk about the issues raised in her recent book Rethinking Human Enhancement: Social Enhancement and Emergent Technologies.

You can download the show here or listen below. You can also subscribe on Stitcher and iTunes (click 'add to iTunes').


 Show Notes

  • 0:00 – 1:00 - Introduction
  • 1:00 – 11:15 - What is human enhancement? Definitions and translations
  • 11:15 – 13:35 - Discussing moral enhancement - Savulescu and Persson
  • 13:35 – 14:35 - Human enhancement and communication - discussing Laura’s paper with John Weckert
  • 14:35 – 28:40 - Shared lifeworlds, similar bodies, communication problems
  • 28:40 – 39:48 - Augmented reality and sensory perception
  • 39:48 – 46:20 - Cognitive capacity and memory – Oliver Sacks & Borges
  • 46:20 – 49:50 - Ethics – hermeneutic crises and empathy gaps
  • 49:50 – 52:30 - Can technology solve communication problems?
  • 53:32 – 1:00:00 - What are human values?
  • 1:00:00 – 1:08:20 - How does cognitive enhancement affect values?
  • 1:08:20 – 1:16:00 – Neoliberal values - pressures and competitiveness
  • 1:16:00 – End - How to prioritise values and see the positives in enhancement
 

Relevant Links

Wednesday, October 19, 2016

Why Non-Natural Moral Realism is Better than Divine Command Theory




It’s been a while since I wrote something about theism and morality. There was a time when I couldn’t go more than two weeks without delving into the latest paper on divine command theory and moral realism. More recently I seem to have grown disillusioned with that particular philosophical joy ride. But last week Erik Wielenberg’s new paper ‘Euthyphro and Moral Realism: A Reply to Harrison’ managed to cross my transom. I decided I should read it.

I am glad I did. It is an interesting paper. As you might guess from the title, it is a reply to another paper by the philosopher Gerald Harrison. In that paper, Harrison took defenders of divine command theory (DCT) and non-natural moral realism (NNMR) to task. He argued that two of the standard objections to divine command theory — usually based on the infamous Euthyphro dilemma — apply equally well to non-natural moral realism. And since many defenders of the latter are harsh critics of the former, this should be a cause of some embarrassment.

Now, Wielenberg is a defender of NNMR and he doesn’t think he needs to feel any embarrassment. The goal of his paper is to explain why. I want to go through his main arguments in the remainder of this post.


1. Two Criticisms of DCT and NNMR
I have to start by going back to Harrison’s paper. The main purpose of that paper was to argue for something Harrison called the parity thesis. The essence of this thesis is that the main flaws associated with DCT apply equally well to NNMR. That is to say, they are similarly situated with respect to two major metaethical problems. What are those problems? Well, it all goes back to the Euthyphro dilemma. As you’ll no doubt recall, in Plato’s Euthyphro, Socrates and Euthyphro have a debate about the origin of moral properties like good/bad and right/wrong.* Euthyphro — who stands in for the modern defender of DCT — insists that those properties originate in the will (commands) of the gods. Socrates wonders whether those properties really originate in divine will or whether they are divinely willed because they have those properties. He thinks that both interpretations have their problems.

To take a simple example, suppose we all agree that it is wrong to torture an innocent child for fun. Where does the wrongness come from? Euthyphro and the proponent of DCT suggest that the wrongness comes from the command of God: it is wrong because God has prescribed against it. But Socrates suggests that this cannot be right. If it is only wrong because God commands against it, then we run into the problem that there is a possible world in which God did not issue that command. Indeed, there might even be a possible world in which God insists that we do torture innocent children for fun. That doesn’t sit right, because there doesn’t seem to be anything that could make torturing innocent children for fun right. In other words, on this view two act-tokens that share all their non-moral properties (the innocence of the children, the nature of the torture, the intention behind it) could nonetheless differ in their moral properties. But if you respond to the problem by saying that the wrongness doesn’t come from God, but rather that God commands against torturing innocent children for fun because it is wrong, you seem to sweep away the original rationale for DCT, namely: to provide a foundation for wrongness. If God only commands that which is already right/wrong, then His command does not make the moral difference we would like it to make. That’s the Euthyphro dilemma.

Theists have some solutions to this dilemma. The most popular is that God’s nature sets limits on what he can and cannot command. I won’t go into those solutions here though because I have done so on previous occasions and I believe that ‘solutions’ of this sort simply push the dilemma back a further step.

The important point for present purposes is that the Euthyphro highlights two problems for the proponent of DCT:

Horrendous Deeds (HD): DCT is consistent with there being a possible world in which horrendous deeds are morally right (because they are commanded by God). In other words, there is a possible world in which horrendous deeds are morally acceptable.

Explanatory Inadequacy (EI): DCT seems to contravene what we believe about moral supervenience. It suggests that two acts that are completely the same with respect to their natural properties can nonetheless differ in their moral properties (because God’s commands add a brute moral fact to natural reality). Put another way, it suggests that an act token’s natural properties can never explain its moral properties.

Now how do these concerns apply to NNMR? The answer is complicated. We’ll get into the nitty-gritty as we go through Wielenberg’s reply to Harrison. But the basic idea is this. Our moral intuitions tell us that any metaethical theory that permits HD and EI is unsatisfactory. We have a strong belief that horrendous deeds cannot be moral and that natural properties make some explanatory contribution to moral properties. But DCT is consistent with both HD and EI (according to the Euthyphro). Is NNMR also consistent with both? Harrison argues that it is. NNMR is the view that moral properties are not reducible to natural properties; moral properties are, instead, sui generis, brute facts about reality. It seems like this is consistent with EI: if moral facts do not reduce to natural facts, then natural facts do not explain moral facts. It also seems like it is consistent with HD. Why? Because there could be a possible world in which the brute moral properties supervene on different sets of natural properties. Nothing in NNMR rules this out.

This gives us the parity thesis.


2. Problems for the Parity Thesis
There is an oddity to Harrison’s analysis. Many philosophical theists (I use the term ‘philosophical’ in order to contrast them with ordinary believers) are proponents of something called ‘perfect being theology’. They think God is the most perfect being that there can be. He is omnipotent, omniscient and omnibenevolent (or, if ‘omni’ is too strong, he is maximally powerful, knowledgeable and benevolent). Harrison isn’t a fan of perfect being theology. He favours something he calls ‘strange powerful creature’ theology. This claims that God is a powerful supernatural being, but not necessarily a perfect being.

The peculiarities of Harrison’s theology become important when we turn to Wielenberg’s critique. Wielenberg insists that the proponent of NNMR should not fear the parity thesis. But his insistence is strongest when it comes to the contrast between NNMR and the perfect being version of DCT. His criticisms might be less persuasive for the proponent of the strange powerful creature version of DCT. This, however, is not a major limitation since, as even Harrison acknowledges, most proponents of DCT favour the perfect being view. They do so because appealing to God’s maximal goodness is one popular way to resolve the Euthyphro dilemma.

Enough of this. What is Wielenberg’s critique of Harrison? It starts by distinguishing between two kinds of objection:

Objection A: Not-P is true; theory T entails P; therefore theory T is incompatible with a truth and hence T must be false.

Objection B: Not-P is true and not-P’s truth must have some explanation or other; theory T fails to provide an explanation for not-P’s truth.

At a purely abstract level, Objection A is a much more serious objection than Objection B. Objection A tells us that some proposed theory is inconsistent with a truth and therefore must be false. Objection B is simply telling us that some proposed theory fails to explain or account for a truth.

Here is an example. We know for a fact that objects of different mass fall at equal rates in a vacuum. Now suppose we have two different theories of gravity. One theory tells us that objects of different mass should fall at different rates. That is to say: it is a logical derivation from the axioms of the theory that objects of different mass fall at different rates. The other theory says nothing about the rate at which objects should fall in a vacuum. It fails to explain or account for the fact that they fall at the same rate, but there is nothing in the axioms of the theory that contradicts or undermines that truth. The first theory thus falls foul of Objection A: it cannot be true because it contradicts something we know to be true. The second theory falls foul of Objection B: it cannot explain something we know to be true. This leaves the second theory in a much stronger position than the first. Objection B is still a problem for it, but not a fatal one. It only becomes serious if there is some rival theory of gravity that does explain the fact that objects fall at the same rate. If there is such a theory, then you will need to conduct a theoretical comparison, assessing both theories on the totality of their merits. The rival theory may be better than the original one if it accounts for more truths; but if the rival theory only accounts for the truth about objects falling at the same rate in a vacuum, we may still prefer the original theory.
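(For anyone who wants the physics of the example spelled out: this is just standard Newtonian background to the analogy, nothing drawn from Wielenberg or Harrison. The reason objects of different mass fall at the same rate is a one-line derivation:

    F = GMm / r^2   and   a = F / m = GM / r^2

The falling object’s mass m cancels out, so its acceleration depends only on the Earth’s mass M and the distance r from its centre. In a vacuum, a feather and a hammer hit the ground together.)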

This is relevant to the present debate because, according to Wielenberg, the possibility of morally acceptable horrendous deeds (HD) and the explanatory inadequacy of natural facts (EI) function like Objection A against DCT, but only like Objection B against NNMR. In other words, HD and EI are close to fatal for DCT because DCT entails their truth; but they are mere flesh wounds for NNMR - puzzles that NNMR fails to explain, but which only become a problem if there is some alternative, superior metaethical theory.

Let’s try to unpack this claim. The problem for the perfect being variant of DCT is that it seems to entail that horrendous deeds could be morally acceptable. If God is truly all powerful, and if one of God’s powers is to determine what is morally acceptable or not, it follows that there is a possible world in which horrendous deeds are morally acceptable. This contradicts what we intuit to be true. You might respond to this by suggesting we ditch our moral intuitions in favour of the theory, but it is very difficult to do this because our moral intuitions seem to be more epistemically secure than DCT.

By the same reasoning, the perfect being version of DCT also entails that natural properties do not contribute to the explanation of moral properties. Since in the actual world God does not command horrendous deeds, but there is a possible world in which God does command horrendous deeds, it follows that the natural properties of those horrendous deeds make no difference to their moral properties. The torturing of innocent children for fun can be morally acceptable in one world and impermissible in another. It is the presence or absence of God's command that makes the difference.

Contrariwise, NNMR entails neither of these problems. Harrison’s claim (to quote) about NNMR is that “if moral properties are not reducible [to natural properties], then there seems to be no principled way of ruling out the possibility of a naturalistically identical world to this one in which moral properties are differently arranged” (Harrison 2015, 110). This is fair, as far as it goes. But it doesn’t go very far. Although it is true that NNMR cannot rule out the possibility of a world in which horrendous deeds are morally acceptable, it doesn’t entail that there must be a possible world in which they are. Similarly, although NNMR does entail that natural properties fail to fully explain or account for moral properties, it doesn’t entail that they make no explanatory contribution to those moral properties. Indeed, it is consistent with the belief that they make some explanatory contribution.

All of this means that the parity thesis fails to find purchase. There are significant differences between NNMR and perfect being versions of DCT.

There is more to be said, of course, but I'll leave that to another day.


*Actually, if I recall correctly, the original Euthyphro debate was about ‘piety’, i.e. was X pious because it was willed by the gods, or was it willed by the gods because it was pious?

Wednesday, October 12, 2016

Pornography and Subordination: The Contextual View




Most mainstream pornography is targeted at heterosexual cisgendered men. Its visual focus is on women: their bodies, their behaviours. The men in mainstream pornography are largely hidden from view. The women are there to pleasure the men: they are objects to be used for sexual gratification. The usage is literal, not metaphorical: the typical consumer of pornography is trying to sexually gratify themselves while viewing it, not merely simulating sexual gratification.

This is oftentimes said to subordinate women. It identifies and labels them as inferior to men, and contributes to their systematic oppression and marginalisation within society. Let’s grant that this is possible. How exactly does this subordination work? That’s one of the questions asked in Matt Drabek’s recent article ‘Pornographic Subordination, Power and Feminist Alternatives’. There’s lots of interesting stuff in the article. I would recommend it to anyone with an interest in this topic. I want to focus on what I take to be its big idea: the contextual view of pornography.

(These are just my notes on the article; not a careful critical analysis.)


1. The Constitutive View vs. The Causal View
The big idea is that the subordinative effect of pornography (if and when it exists) is due to the material and social contexts in which it is produced and used, and not due to its content. I call this the ‘contextual view’. It is to be contrasted with two other views of pornography and subordination:

Constitutive View: Pornographic material is itself constitutive of female subordination, i.e. the publication and viewing of the material is itself an act of subordination.

Causal View: Pornographic material causes female subordination, i.e. those who watch and use pornography go on to subordinate women.



The constitutive view is a little bit obscure, but popular in certain feminist and philosophical quarters. It originates in the work of Catharine MacKinnon, and has been reinterpreted by philosophers like Rae Langton and Mary Kate McGowan (whose work I have written about before). These philosophical interpreters often use speech act theory to defend their views.

The central tenet of speech act theory is that words and symbols don’t simply report on how the world is; they also perform acts in the world. When a registered celebrant at a wedding says ‘I now declare you husband and wife’, they are not simply reporting a fact; they are making a fact. They are creating a marital state of affairs where none existed before.

Proponents of the constitutive theory argue that pornography does something similar. It doesn’t simply depict women in sexually explicit, possibly denigrating and degrading positions. It has performative dimensions as well. One of the more popular claims is that it serves to silence women: it creates socio-normative rules regarding how women are to behave and act. Thus the production and use of pornography is itself constitutive of subordination - not simply a cause of subordination. (If you’re interested, I wrote a much longer analysis of the constitutive view on a previous occasion.)

This is to be contrasted with the causal view. This is a more straightforward, less philosophically recondite theory. It holds that the production and use of pornography causes the subordination of women. There are various different understandings of this causal link. Some argue that the men who view the pornography go on to subordinate women in their daily lives: the pornography encourages them to do so. This can often be difficult to prove. Empirical research on the effects of pornography is vast and disputed.

A slightly more sophisticated causal theory has been propounded by Anne Eaton. Several years back, Eaton wrote a paper called ‘Towards a sensible anti-porn feminism’. In it, she suggested that the causal effects of pornography are (or are likely to be) quite complex. In particular, she suggested that there is probably a set of interlocking feedback loops between broader social structures and individual acts of production and use. The idea is that the individual acts of production and use are facilitated and justified by broader social structures of oppression, and they in turn serve to reinforce and reproduce those structures (in small ways). It’s a positive feedback loop: the structures support the pornographic works which in turn support and reinforce the structures.

Although I have contrasted the constitutive and causal views here, it is important to note that they are not necessarily in tension with one another. Someone could support both at the same time. Nevertheless, they do make different claims about the subordinating powers of pornography.


2. It’s Context not Content that Matters
Drabek’s goal is to defend an alternative, contextual view of the subordinating powers of pornography. One thing that constitutivists and causalists usually share is the belief that the content of pornography matters when it comes to its subordinative powers. That is to say, certain kinds of images and depictions of women matter more. Sexually explicit material comes in many flavours. Some of it can be artful and thought-provoking. Some can be crass, violent and degrading. The generally shared view is that it is the latter — what Eaton calls ‘inegalitarian’ pornography — that is subordinating. Indeed, there is an interesting view among some feminists — Catharine MacKinnon being probably the most famous — which holds that only pornography with inegalitarian content counts as ‘pornography’.

Drabek pushes back against this view. He argues that when it comes to pornography, it is the material and social context of its production and its use that matters. Content has very little bearing on it. This puts him somewhere between the causal and constitutive camps. In certain contexts, the use of pornography might be constitutively subordinative; in others it might be causally subordinative; in still others it might not be subordinating at all.



Drabek’s claim that content doesn’t matter much is a strong one. It seems to fly in the face of common sense. To many people it just seems obvious that some sexually explicit material has more objectionable content than other sexually explicit material, and that this difference must have some bearing on its social effect. This commonsense view is taken on board by legislatures around the world. They often create specific legal bans for pornography that depicts violence or torture, for instance.

Perhaps this is the right approach, but there is an interesting philosophical point to be made about the meaning of any image or symbolic practice. Meaning is never inherent in content. The content-maker chooses particular symbols and images because they mean something in a particular social context; and the content-user interprets those symbols in light of the meaning imposed upon them from that broader social context. And social contexts are highly variable. I have discussed, previously, how the symbolic meaning of particular images and practices changes from culture to culture. Take, for example, the meaning attached to the act of paying for someone to mourn at your parents’ funeral. In some cultures, paid mourners are an insult to the memory of the dead; in others they are an acceptable and encouraged token of affection.

Drabek takes a similar view of the content of pornography. Its meaning is highly variable and this has an impact on its subordinating powers. Some ‘plain vanilla’ pornography, for example, could be highly subordinating:


Consider a case of seemingly banal or harmless sexually explicit materials, perhaps a short video of a man and woman engaged in uneventful sexual intercourse or the banal Playboy photos mentioned by [Gail] Dines…I think banal pornography often does contribute to gender subordination when it is produced and viewed in the right social and material contexts. Adolescent boys and girls who view this material internalize and advance assumptions about the body and about sex.
(Drabek 2016)

Contrariwise, some pornography with seemingly offensive content could be unobjectionable:

The genre of sadomasochist pornography, in particular, is full of material that is unabashedly inegalitarian, but often careful to incorporate discussion and enactment of explicit consent and positive sexual exploration among practitioners. The eroticization of violence, humiliation, and gender inequality are common, but often done in an affirmative way.
(Drabek 2016)

Again, it is the context that matters. If used and interpreted in the right way, banal sexually explicit material could reinforce gender oppression, and violent or humiliating material could be empowering or affirmative. What matters is how the people producing, using and consuming the pornography understand it. For example, the ubiquity and banality of certain pornographic material might be key to its subordinating powers. It might be because it is so widely used and consumed (and socially accepted) that it has a strong subordinating effect. Whereas the transgressive and uncommon nature of some pornography might be why it doesn’t have this power.


3. Conclusion: The Possibility of Feminist Pornography
Drabek uses his contextual account to argue for the possibility of truly feminist pornography (note: there already exists pornography that is classed as feminist). That is to say, sexually explicit material that does not have a subordinating effect but actually helps to reverse or break down sources of gender discrimination and oppression. His argument makes sense in light of the contextual account. If content doesn’t matter (or doesn’t matter that much), then it should be possible to create contexts in which pornographic materials are produced and used in a way that does not support subordination. Drabek gives some illustrations towards the end of his paper. He suggests that feminist pornography tends to have educative and political intentions and uses.

I find this idea interesting. I quite agree that context matters, but I have some resistance to the notion that content doesn’t have a significant part to play. To be clear, Drabek never says that content counts for nothing. He just says it isn’t the major driving force. What I wonder, however, is whether certain types of pornographic material have context-insensitive meanings. Or, to put it another way, whether certain social and material contexts are so widely-shared that content ends up playing a decisive role in their meaning and broader social significance.

This is something I explored in previous posts about virtual rape and virtual child sexual abuse. There, I looked at work done by Stephanie Patridge on the possibility of incorrigible social meanings. Her suggestion was that some content will have a meaning that is relatively invariant across contexts. Examples in her papers included racist jokes and images, as well as some sexually explicit material, e.g. material depicting racially motivated sexual violence or rape fantasies. The meaning attached to such content was, according to her, incorrigible (i.e. incapable of being revised or changed). Patridge’s argument was thought-provoking but in the end even she seemed to acknowledge that there were contexts in which such seemingly incorrigible meanings could be altered. For instance, somebody from a racial minority could use racist imagery to make a political point or to educate social peers. (It seems that Drabek’s vision of feminist pornography is somewhat similar.)

But Patridge refined her argument and suggested that although some people could use transgressive imagery and content in a positive way, those who used it for humour or sexual gratification could not. So if you were sexually gratified by images of rape or sexual violence, then we would probably always be entitled to make negative inferences about your moral character, or the character of the society that encouraged you, or the likely social effects of your being gratified. The upshot was that content and context both mattered. In some cases, the context is relatively invariant and hence the meaning that attaches to the content is also relatively invariant.

This doesn’t contradict or undermine the contextual view. It does, however, highlight an important consequence of it. If we can change the context in which pornography is produced and used, we might be able to change the meaning that attaches to its content. But if the context is difficult to change, then we may not. The reality is that the context in which pornography is consumed and used is one involving sexual gratification (Drabek accepts this in his paper). To put it bluntly: people try to get off on pornographic material. That aspect of the context is relatively invariant. Yes, sometimes pornographic material might be displayed for artistic and educative reasons, but those occasions are pretty infrequent. Given the relative fixity of the context, I suspect that the meaning attaching to particular kinds of pornographic content is also going to be relatively invariant. It will be very hard to reform the meaning (and hence the social effect) of certain sexually explicit materials. Only changes to the content will have that effect. This is what is striking to me about the S&M example given by Drabek. It’s only because that material includes, in his characterisation, documented discussions of affirmative consent and boundary-setting that it is saved from being problematic. If you stumbled upon images of women (seemingly) being tortured, raped and humiliated, and used that imagery for the purposes of sexual gratification without knowing about those discussions of affirmative consent and boundary setting, then there would be something problematic about it. The content of the material is what changes our moral perception.

In short, in some cases the context of usage is so fixed that content is the only thing that can make a difference. In those cases, content will count for a great deal.

Sunday, October 9, 2016

How do we Enhance Cognition through External Representations? Five Ways





I use pen and paper to do most of my serious thinking. Whether it is outlining blogposts or academic papers, taking notes or constructing arguments, I pretty much always take out my trusty A4 pad and pen when I run into a cognitive trough. To be sure, I often mull ideas over in my head for a long time beforehand, but when I want to move beyond my muddled and incoherent thoughts, I will grab for my pen and paper. I am sure that many of you do the same. There is something cognitively different about thinking outside your head: creating an external representation of your thoughts reveals their strengths and weaknesses in a way that internal dialogue never can.

In a recent post, I suggested that the humble pen and paper might be examples of truly complementary cognitive artifacts — artifacts we use to perform cognitive tasks that don’t simply supplement or provide useful short-term props for cognition but actually improve our thinking in a manner that would not be possible in their absence. I was pretty sketchy about how such artifacts might work their cognitive magic in that post, but I’ve just been reading David Kirsh’s paper ‘Thinking with external representations’ and he provides some insights.

The paper is not a huge amount of fun to read, but it does contain some good examples. Kirsh is a proponent of situated/embodied cognition. He believes that human cognition always involves some kind of dynamic interaction between our bodies and our environments. He holds that external representations (drawings, maps, diagrams, bits of writing etc.) are an important way to enhance our cognitive abilities. He goes on to identify several distinct ways in which this happens. I want to share those ways in this post.


1. The Important Principle: Externalisation often lowers the cost of cognition
In the abstract to the paper, Kirsh claims that there are seven distinct ways in which the use of external representations enhances cognition. To be honest, I couldn’t really identify seven in the body of his paper. I wasn’t helped by the fact that Kirsh doesn’t actually identify the distinctions by number in his paper: he leaves that up to the reader. I’m sure it depends on whether you are a lumper or a splitter, but by my estimation it’s more like there is one major way in which externalisation enhances cognition and then four others.

I’ll start with the major one. This is tied into Kirsh’s preferred theory of cognition. As I mentioned, Kirsh is a proponent of embodied/situated cognition. This is a theory within cognitive science. It holds that thinking (the act of cognition) is not a purely brain-based phenomenon. The mental concepts and categories we form are, according to proponents of the theory, shaped by other aspects of our biological nature and the way in which we relate to our environments. Indeed, there is probably no such thing as purely brain-based cognition. Even the ‘highest’ and most abstract forms of thought involve some dynamic interaction between our brains and the world outside our heads. Some proponents of embodied cognition, like Andy Clark, go further and argue that humans are natural born cyborgs, always extending their minds beyond the cramped confines of their skulls.

One way to get your head around this is to think in terms of a schematic diagram. We have an agent — a human thinker — embedded in an environment. Cognition is the result of information being processed from that environment. But the agent’s body affects how they perceive and process that information. So on pretty much every occasion, cognition is not simply a brain-based phenomenon. It relies on some mediation between brain, body and environment.

It’s more interesting than that, of course. On some occasions (maybe even most) the agent will rely on some external artifact to assist in the performance of a cognitive task. When preparing to cook a meal, for example, they will open a cookbook to the relevant page, mark elements within that page, perhaps tracking the text with a finger. They will also arrange their ingredients into a workspace, setting up a logical order in which to prepare and assemble those ingredients. The external representations provide a scaffolding for cognition: without them it would be much more difficult to perform the task.



Now here’s the important idea: Cognition is costly. It uses up energy. There is, consequently, an incentive to make it cheaper. If you’re very brain-centric in your approach to cognition, you might think that the only way to reduce the cost of cognition is to improve the efficiency of your brain-based information processing. Learn and master the tasks you want to perform so that they use up less neural energy. But once you recognise the central insight of embodied cognition, you see that this is not the only way. You can also reduce costs through externalisation: by offloading some of the cognitive energy to external cognitive scaffolds. For example, it is often much ‘cheaper’ to keep track of a recipe in a book than to store it all in your head.

That’s the main reason why externalisation enhances cognition. There is an incentive for cognitive tasks to flow to the place in which they can be performed most cheaply and since cognition is not a purely brain-based phenomenon, the cheapest site of performance is not always going to be inside the head.


2. Four other reasons why externalisation enhances cognition
But it’s not the only reason why externalisation enhances cognition. Indeed, thinking solely in terms of ‘costs’ often means you miss important ways in which externalisation aids our thinking processes. Here are four more:

They create shareable objects of thought: Creating an external representation of a thought turns it into an object that others can perceive and interpret. This enables collaboration - the power of many minds working on a similar cognitive problem. In many ways, this is what scholarship and research is all about. Scholarly papers translate arguments and theories that might otherwise be stuck in people’s heads into shared objects of thought. Others can then come in and refine, critique and improve upon those arguments and theories. It’s possible to do some of this through dialogue, but the scope for collaboration is greatly increased through externalisation.

They facilitate rearrangement and new cognitive operations: Creating an external representation can allow you to bring together ideas that are distant in logical space and perform cognitive operations that would be difficult without representation. This might be the main advantage of externalisation. By putting thoughts and ideas down on paper — often through schematic diagrams or argument maps — I can literally ‘see’ how thoughts relate to one another and combine them in new ways. Kirsh gives the simple example of a jigsaw puzzle in his article. Imagine trying to assemble a jigsaw puzzle via purely mental representations and rotations of pieces. It would be practically impossible. Having external representations of the pieces, and being able to move them around in physical space, makes it practically possible.

They are often more stable and persistent than mental representations: By externalising your thoughts you usually encode them into an object that is more stable and persistent than a mental representation. This is a very important factor for me. It is well-known that memory is a fallible, dynamic and reconstructive phenomenon. Our brains are not like video recorders. They do not store perfect representations of real-world events. Rather, our memories get dynamically integrated into how we perceive and understand the world. The act of remembering involves reshaping the past. This dynamic integration has its advantages, but it also makes it more difficult to recover exactly what you were thinking on a previous occasion. Externalisation helps to address this problem. Indeed, I find that if I don’t get thoughts and ideas down on paper I often lose my ‘train’ of thought. This can be very frustrating when I previously thought I had hit upon some insight.

They enable re-representation: Once you create an external representation of a thought, you can often re-represent it as something simpler and more elegant. For instance, I might have the bones of an argument rattling around in my head. When I first put it down on paper, it might consist of ten premises and two conclusions. But when I analyse those premises and conclusions in their represented format, I realise that a few of them are redundant. I can simplify the argument to something with only three premises and one conclusion. Not only that, I can then combine this newly simplified argument with another simplified argument, thereby chaining together two previously distinct (and complicated) representations. I can repeat this over and over, creating representations of significant complexity. What’s more, I can combine many different forms of representation (writing, music, images) to add new perspectives to the original set of thoughts.




3. Is Externalisation Indispensable?
I quite agree with Kirsh that externalisation improves the quality of our thought. But can we push the argument further and say that externalisation is, in some cases, essential to thinking? Kirsh makes that push toward the end of the article. I won’t evaluate the argument he offers in any great depth here, but I will share it because I think it is interesting.

In essence, Kirsh argues that there are certain kinds of thinking (more precisely, computation) that are too complex for purely mental representation. Indeed, there are certain kinds of computation that are too complex for most external representations. To engage in these forms of computation, you have to construct a particular kind of external representation (usually an analog model) to understand what is going on. The argument works like this:


  • (1) Some phenomena are irreducibly complex: they involve State A leading to State B, but there is no way to identify and describe the factors mediating the two states in a simple logical-mathematical formula (which could in turn be represented mentally).
  • (2) The only way to understand such irreducibly complex phenomena is to construct an external representation of the system which models the transition between State A and State B and run that model repeatedly.
  • (3) Implication: externalisation is essential to some forms of cognition.


This might seem a little bit obscure. Kirsh takes the idea from work done by John von Neumann, who argued that some natural phenomena may be too complex to be reduced to simple equations. The simplest way to understand them would be to create models of the relationships underlying the phenomena. A good example of this is the behaviour of an n-body system, e.g. the planets in the solar system. Kirsh suggests that an orrery — a simple analog model of the planets — is the easiest way to model and predict the movement of the planets.
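To make the ‘run the model repeatedly’ idea concrete, here is a minimal sketch of my own (not Kirsh’s or von Neumann’s, and digital rather than analog) of a toy n-body simulator. Instead of solving a closed-form equation for where the bodies will be, it encodes the pairwise gravitational relationships and steps the system forward in small increments; the only way to find out where the planets end up is to run it. All the numbers and units are made up for illustration.

```python
# Toy n-body simulator: no closed-form solution, just a model of the pairwise
# forces stepped forward in time over and over.
import math

G = 1.0  # gravitational constant in arbitrary units (assumed for the toy model)

# Each body is [mass, x, y, vx, vy]; values are illustrative, not real data.
bodies = [
    [1000.0, 0.0, 0.0, 0.0, 0.0],   # a heavy central "sun"
    [1.0, 10.0, 0.0, 0.0, 10.0],    # a light "planet" given a sideways kick
    [1.0, -15.0, 0.0, 0.0, -8.0],   # a second planet
]

def step(bodies, dt):
    """Advance the whole system by one small time step dt."""
    accelerations = []
    for i, (mi, xi, yi, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            dist = math.sqrt(dx * dx + dy * dy)
            # Newtonian gravity: acceleration of body i towards body j
            ax += G * mj * dx / dist**3
            ay += G * mj * dy / dist**3
        accelerations.append((ax, ay))
    for body, (ax, ay) in zip(bodies, accelerations):
        body[3] += ax * dt  # update velocity
        body[4] += ay * dt
        body[1] += body[3] * dt  # update position using the new velocity
        body[2] += body[4] * dt

# "Running the model": repeat the step many times and observe what happens.
for _ in range(10000):
    step(bodies, dt=0.001)
print(bodies[1][1:3])  # where the first planet ended up
```

An orrery does the same work in brass and gears: it encodes the relationships among the bodies and lets you crank the handle to see where they go.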

This sounds plausible to me, but I’m not sufficiently expert in the mathematics of complex systems to question it. The important point is that external representations can, and frequently do, improve the quality of our thinking. If you want to ‘enhance’ cognition, you are usually going to be better off reaching for a pen and paper than you are a box of modafinil.

Friday, October 7, 2016

Episode #12 - Rick Searle on the Dark Side of Transhumanism


In this episode I interview Rick Searle. Rick is an author living in Amish country in Pennsylvania. He is a prolific writer and commentator on all things technological. I get Rick to educate me about the darker aspects of the transhumanist philosophy. In particular, what Rick finds disturbing in the writings of Zoltan Istvan, Steve Fuller and the Neoreactionaries.

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (click 'add to iTunes').



Show Notes

  • 0:00 - 1:40 - Introduction
  • 1:40 - 4:40 - Rick's definition of Transhumanism
  • 4:40 - 10:10 - Zoltan Istvan and the Transhumanist Wager
  • 10:10 - 16:35 - The philosophy of teleological egocentric functionalism - Ayn Rand on steroids?
  • 16:35 - 22:30 - Steve Fuller's Humanity 2.0
  • 22:30 - 28:00 - Some disturbing conclusions?
  • 28:00 - 32:20 - The ontology and ethics of Humanity 2.0
  • 32:20 - 36:55 - Stalinism as Transhumanism
  • 43:25 - 47:00 - Transhumanism as religion
  • 47:00 - 56:30 - The neo-reactionaries of Silicon Valley
  • 56:30 - End - Is democracy fit for the future?
 

Relevant Links