Saturday, August 20, 2016

Phenomenological Coupling, Augmented Reality and the Extended Mind




Contrast these two scenarios. First, I’m in the supermarket. I want to remember what I need to buy, but I’m not the kind of guy who writes things down in lists. I just keep the information stored in my head and then jog my memory when I arrive at the store. If I’m lucky, the list of items immediately presents itself to my conscious mind. I remember what I need to buy. Second, I’m in the supermarket. I want to remember what I need to buy. But I’m hopelessly forgetful, so I have to write things down in a list. I take the list from my pocket and look at the items. Now I remember what I need to buy.

Is there any difference between these two scenarios? Proponents of the extended mind thesis (EMT) would argue that there isn’t any significant difference between them. Both involve functionally equivalent acts of remembering. In the first scenario, the functional mechanism is intra-cranial. No external props are used to access the content of the list: it is just immediately present in the conscious mind. In the second scenario, the functional mechanism is partly extra-cranial. An external prop (the list) is used to access the content. The information is then present in the conscious mind. The mechanisms are slightly different; but the overall functional elements and effects are the same.

But think about it again. There does seem to be something phenomenologically different about the two scenarios. That is to say, the two acts of remembering have a different conscious representation and texture. The first scenario involves immediate and direct access to mental content. The second scenario is more indirect. There is an interface (namely: the list) that you have to locate and perceptually represent before you can access the content.

This raises a question: could we ever have external mental aids that are phenomenologically equivalent to intra-cranial mental mechanisms? And if so, would this provide support for the extended mind thesis? I want to consider an argument from Tadeusz Zawidzki about this very matter. It comes from a paper he wrote a few years back called ‘Transhuman cognitive enhancement, phenomenal consciousness and the extended mind’. He claims that future technology could result in external mental aids that are phenomenologically equivalent to intra-cranial mental mechanisms. And that this does provide some support for the EMT.


1. The Basic Argument: The Need for Frictionless Access to Mental Content
To understand Zawidzki’s argument we have to start by formalising the argument for phenomenological difference that I sketched in the introduction and turning it into an objection to the EMT. The argument would go something like this:


  • (1) The phenomenology of intra-cranial remembering is characterised by a frictionless and transparent access to the relevant mental content (the list of shopping items)
  • (2) The phenomenology of extra-cranial remembering is characterised by a frictionful and non-transparent access to the relevant mental content: you have to engage with the physical list first.
  • (3) Truly mental acts of remembering (or mental cognitive acts more generally) are characterised by frictionless and transparent access to mental content.
  • (4) Therefore, extra-cranial remembering is not truly mental in nature.



This is rough and ready, to be sure. The focus on ‘friction’ and ‘transparency’ is intended to capture some distinctive mark of the mental. The idea is that mental acts are noteworthy because the semantic or intentional content (e.g. beliefs, desires etc.) that features in those acts is just immediately present in our minds. We don’t have to think about where it came from or how we gain access to it. It is just there. There is nothing between us and the mental content. This is formulated into a general principle (premise 3) and this then determines the rest of the argument.

Of course, these are really characteristics of conscious mental activity — something that the original proponents of the EMT (Clark and Chalmers) studiously avoided in their defence of an extended mind — not subconscious mental activity. So you could dispute the reliance on premise (3) in this argument. You could argue that frictionless and transparent access to mental content is not a necessary mark of the mental: mental activity can and does take place without those properties. And hence mentality can extend beyond the cranium without those properties.

That’s fine, insofar as it goes. But it doesn’t render this argument completely pointless. You can take this argument (and the remainder of this post) to be about the extension of conscious mental activity only. Indeed, focusing only on this type of mental activity arguably sets the bar higher for the proponent of the EMT. One of the frequent critiques of the EMT is that it doesn’t account for the distinctive nature of conscious mental activity that is mediated through intra-cranial mechanisms (I discussed this in a previous post). If you can show some phenomenological equivalence between mental content accessed intra-cranially and mental content accessed extra-cranially, then you will show something significant.

And that’s exactly what Zawidzki tries to do. He tries to show how content that is accessed with the help of extra-cranial props can be both frictionless and transparent.


2. The Possibility of a Technological Metacortex
Zawidzki uses examples drawn from Charles Stross’s novel Accelerando to illustrate the possibility. The novel is about the social and personal consequences of rapidly evolving technologies. It is infused with the singularitarian ethos: i.e. the belief that technological progress is accelerating and will have radical consequences for humanity. One example of this is how our interactions with the world will be affected by the combination of augmented reality tech and artificial intelligence. In the novel, individual humans are equipped with augmented reality technology that displays constant streams of information to them about their perceived environments. This information is updated and presented to their conscious minds with the help of artificially intelligent assistants.

Zawidzki describes the future depicted in Stross’s novel in the following terms:

Swarms of on-line, virtual agents constantly and automatically “whisper” pertinent information in one’s ear in real time, or display it in one’s visual field, as one experiences a passing scene. For example, the microscopic video cameras mounted on one’s spectacles provide a constant video stream of one’s point of view as one walks about. This information is constantly processed by the swarm of on-line agents — appropriately called one’s “metacortex” by Stross — which search the Internet for information relevant to what one is visually experiencing and provide continuous updates on it. All of this happens automatically: users need not deliberately initiate searches about the persons with whom they are currently interacting, or the environs they are currently exploring. The information is displayed for them, through earphones or on virtual screens projected by the spectacles, as though it were being unconsciously retrieved from memory. 
(Zawidzki 2012, 218)

The last part of this passage (‘as though it were being unconsciously retrieved from memory’) is critical to Zawidzki’s argument. His point is that the metacortical technologies depicted in the novel involve truly frictionless and transparent access to mental content. There is no separation or disjunction between you and the presentation of the information in your mind. You don’t have to follow a series of steps or program instructions into some user-interface. The information is just there: immediately present in your conscious mind.

To make the point more intuitive consider the following. Last night I was watching a film. There was an actor in it who I knew I had seen in some other films but whose name I could not recall. So I took out my smartphone and looked up the name of the film on IMDB. I then scrolled down through the cast list until I came across the actor I was interested in. I then clicked on her profile to see what else she had been in. In this manner, knowledge of her past triumphs and failures as an actor made its way into my conscious mind.

I’m sure many people have had a similar experience. They are doing something — watching a film, having a conversation — and they want to know something else that is either critical to, or would improve the quality of, that activity. They don’t have the information in their own heads. They have to go elsewhere for it. Smartphones and the internet have made this much easier to do. But they haven’t made the process frictionless and transparent. To get the information displayed on your phone, you have to follow a series of steps. Furthermore, when you are following those steps you are acutely aware of the fact that the information is presented to you via a user-interface. There is considerable phenomenological distance between you and the information you desire.

Now imagine if instead of having to look up the information on my smartphone, I had something akin to the metacortex depicted in Stross’s novel. As I was watching the film with my AR glasses, a facial recognition algorithm would automatically identify the actor and display in my visual field information about them. There would no longer be any friction. The information would just be there. And although it would be displayed to me on a user-interface, the likelihood is that as I became more used to the device, my awareness of that interface would fade away. There would be no separation between me and the cognitive information. Something analogous already happens to elite musicians: as they improve in their musical abilities the phenomenological distance between themselves and the instrument they are playing evaporates. What is to stop something similar happening between us and the metacortex?
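The frictionless loop described above can be sketched in code. This is purely illustrative: all names here (recognize_face, KNOWN_ACTORS, the frame format) are hypothetical stand-ins, not a real AR or facial-recognition API. The point of the sketch is structural: the annotation is attached to the perceived scene automatically, with no user-initiated search step.

```python
# A purely illustrative sketch of a Stross-style "metacortex" loop:
# perception flows in, an automatic recognizer annotates it, and the
# annotation is overlaid without the user ever initiating a lookup.

KNOWN_ACTORS = {  # stands in for an internet-scale knowledge base
    "face_sig_001": {"name": "Jane Doe", "known_for": ["Film A", "Film B"]},
}

def recognize_face(frame):
    """Stub for a facial-recognition model: maps a video frame to a face signature."""
    return frame.get("face_signature")

def metacortex_overlay(frame):
    """Automatically annotate a frame; no deliberate search is performed."""
    sig = recognize_face(frame)
    info = KNOWN_ACTORS.get(sig)
    if info is None:
        return frame  # nothing recognised; the scene passes through unchanged
    return {**frame, "overlay": f"{info['name']} ({', '.join(info['known_for'])})"}

# The overlay simply appears as part of the perceived scene:
annotated = metacortex_overlay({"face_signature": "face_sig_001"})
print(annotated["overlay"])
```

Contrast this with the smartphone case, where the equivalent of metacortex_overlay is a sequence of manual steps (open app, type query, scroll, click) that the user must consciously perform and perceive.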

The reason this is an interesting question to ask is because the kinds of technologies needed to create a Strossian metacortex don’t seem all that far-fetched. Indeed, they seem eminently feasible. Sophisticated AR technologies are being created, and the advances in AI in the past decade have been quite impressive. It seems like it is really a matter of when, not if, we will build a metacortex.


3. The Phenomenologically Extended Mind
And when we do, what will have happened? Will we have proven the extended mind thesis to be correct? Will we have established once and for all that the human mind is not confined within the blood-brain barrier?

Not so fast. There are many critics of this view. Rupert (2009) argues that phenomenological arguments of this sort fail because they leap from this sort of claim:


  • (P) The phenomenology of interacting with the extra-cranial world reveals no cognitively relevant boundary between the organism and the extra-organismic world.


To this sort of claim:


  • (C) There is no cognitively relevant boundary between the organism and the extra-organismic world.


That’s obviously an illogical leap. Just because something seems (phenomenologically) to be one way does not mean that it actually is that way. Our perceptions of the world can be misleading.

Consider the rubber hand illusion (wherein stroking a rubber hand repeatedly while looking at it results in the phenomenological feeling of having one’s hand stroked). Does this prove that the rubber hand is actually ours? That there is no relevant difference between the rubber hand and our own? Of course not. The same could be true of the metacortex: it might seem to be part of our minds, but that doesn’t mean it actually is.

Zawidzki has responses to this. He says that his phenomenological argument is only intended to complement other arguments for the extended mind; not replace them. When you add all the lines of argument together, you end up with a more robust case for extended cognition. Furthermore, he insists he is talking about a particularly advanced form of technology that has not been invented yet.

But, in some ways, the technical debate about the extended mind is by-the-way. What really matters is what will happen once we do create metacortical technologies with frictionless and transparent phenomenological integration. Suddenly we will all start to feel as though our minds are intimately integrated into or dependent upon our metacortices. But presumably the technologies underlying those metacortices will not belong solely to us? The machine learning algorithms upon which we rely will probably be used by many others too. What will happen to our sense of individuality and identity? Will it just be like everyone reading the same book? Or will a more radical assault on individuality result? Philosophers may restrain us and say that there is an important philosophical difference between the intra-cranial and extra-cranial. But it won’t feel that way to many of us. It will all feel like it is part of a seamless cognitive whole.

It's worth thinking about the consequences of such a reality.
