Monday, April 27, 2015

The Automation Loop and its Negative Consequences



I’m currently reading Nicholas Carr’s book The Glass Cage: Where Automation is Taking Us. I think it is an important contribution to the ongoing debate about the growth of AI and robotics, and the future of humanity. Carr is something of a techno-pessimist (though he may prefer ‘realist’) and the book continues the pessimistic theme set down in his previous book The Shallows (which was a critique of the internet and its impact on human cognition). That said, I think The Glass Cage is a superior work. I certainly found it more engaging and persuasive than his previous effort.

Anyway, because I think it raises some important issues, many of which intersect with my own research, I want to try to engage with its core arguments on this blog. I’ll do so over a series of posts. I start today with what I take to be Carr’s central critique of the rise of automation. This critique is set out in chapter 4 of his book. The chapter is entitled ‘The Degeneration Effect’, and it makes a number of arguments (though none of them are described formally). I identify two in particular. The first deals with the effects of automation on the quality of decision-making (i.e. the outputs of decision-making). The second deals with the effects of automation on the depth and complexity of human thought. The two are connected, but separable. I want to deal with them separately here.

In the remainder of this post, I will discuss the first argument. In doing so, I’ll set out some key background ideas for understanding the debate about automation.


1. The Nature of the Automation Loop
Automation is the process whereby any action, decision or function that was once performed by a human (or non-human animal) is taken over by a machine. I’ve discussed the phenomenon before on this blog. Specifically, I have discussed the phenomenon of algorithm-based decision-making systems. They are a sub-type of automated system in which a computer algorithm takes over a decision-making function that was once performed by a human being.

In discussing that phenomenon, I attempted to offer a brief taxonomy of the possible algorithm-based systems. The taxonomy made distinctions between (i) human in the loop systems (in which humans were still necessary for the decision-making to take place); (ii) human on the loop systems (in which humans played some supervisory role); and (iii) human off the loop systems (which were fully automated and prevented humans from getting involved). The taxonomy was not my own; I copied it from the work of others. And while I still think that this taxonomy has some use, I now believe that it is incomplete. This is for two reasons. First, it doesn’t clarify what the ‘loop’ in question actually is. And second, it doesn’t explain exactly what role humans may or may not be playing in this loop. So let’s try to add the necessary detail now with a refined taxonomy.

Let’s start by clarifying the nature of the automation loop. This is something Carr discusses in his book by reference to historical examples. The best of these is the automation of anti-aircraft missiles after the end of WWII. Early in that war it became clear that the mental calculations and physical adjustments needed to fire an anti-aircraft missile effectively were too much for any individual human to undertake. Scientists worked hard to automate the process (though, as I understand the history, they didn’t succeed fully until after the war):

This was no job for mortals. The missile’s trajectory, the scientists saw, had to be computed by a calculating machine, using tracking data coming in from radar systems along with statistical projections from a plane’s course, and then the calculations had to be fed automatically into the gun’s aiming mechanism to guide the firing. The gun’s aim, moreover, had to be adjusted continually to account for the success or failure of previous shots. 
(Carr 2014, p 35)

The example illustrates all the key components in an automation loop. There are four in total:

(a) Sensor: some machine that collects data about a relevant part of the world outside the loop, in this case the radar system.

(b) Processor: some machine that processes and identifies relevant patterns in the data being collected, in this case some computer system that calculates trajectories based on the incoming radar data and issues instructions as to how to aim the gun.

(c) Actuator: some machine that carries out the instructions issued by the processor, in this case the actual gun itself.

(d) Feedback Mechanism: some system that allows the entire loop to learn from its previous efforts, i.e. allows it to collect, process and act in more efficient and more accurate ways in the future. We could also call this a learning mechanism. In many cases humans still play this role by readjusting the other elements of the loop.
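
For readers who like to see the structure spelled out, here is a minimal sketch, in Python, of how these four components might be wired together. The class and function names are my own invention for illustration; they are not drawn from Carr or from any real anti-aircraft or navigation system.

```python
# A minimal, hypothetical sketch of the four-part automation loop.
# The names (Sensor, Processor, Actuator, Feedback) are illustrative only.

class Sensor:
    """Collects data about the relevant part of the world (e.g. radar returns)."""
    def read(self):
        raise NotImplementedError

class Processor:
    """Identifies the relevant pattern in the data and issues an instruction."""
    def decide(self, observation):
        raise NotImplementedError

class Actuator:
    """Carries out the instruction (e.g. aims and fires the gun)."""
    def act(self, instruction):
        raise NotImplementedError

class Feedback:
    """Adjusts the loop in light of previous successes and failures."""
    def update(self, observation, instruction, outcome):
        raise NotImplementedError

def run_loop(sensor, processor, actuator, feedback, steps=10):
    """One pass through the loop, repeated: sense -> process -> act -> learn."""
    for _ in range(steps):
        observation = sensor.read()
        instruction = processor.decide(observation)
        outcome = actuator.act(instruction)
        feedback.update(observation, instruction, outcome)
```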


These four components should be familiar to anyone with a passing interest in cognitive science and AI. They are, after all, the components of any intelligent system. That is no accident. Since automated systems are designed to take over tasks from human beings, they are going to try to mimic the mechanisms of human intelligence.

Automation loops of this sort come in many different flavours: as many as there are different types of sensor, processor, actuator and learning mechanism (up to the current limits of technology). A thermostat is a very simple automation loop: it collects temperature data and processes it by converting it into instructions for turning the heating system on or off. It then makes use of negative feedback to constantly regulate the temperature in a room (modern thermostats like the Nest have more going on). A self-driving car is a much more complicated automation loop: it collects visual data, processes it quite extensively by identifying and categorising relevant patterns, and then uses this to issue instructions to an actuating mechanism that propels the vehicle down the road.
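
To make the thermostat case concrete, here is a toy version of such a loop in Python. Everything in it (the Room model, the numbers, the set-point) is invented for illustration; real thermostats are engineered quite differently, but the negative-feedback structure is the same: the actuator’s last action changes what the sensor reads next.

```python
import random

class Room:
    """A toy model of a room whose temperature the loop regulates."""
    def __init__(self, temperature=16.0):
        self.temperature = temperature
        self.heater_on = False

    def step(self):
        # Negative feedback: the actuator's last action changes the very
        # temperature the sensor will read on the next pass through the loop.
        if self.heater_on:
            self.temperature += 0.5
        else:
            self.temperature -= 0.3
        self.temperature += random.uniform(-0.1, 0.1)  # ambient noise

def thermostat_loop(room, target=20.0, steps=50):
    for _ in range(steps):
        reading = room.temperature         # sensor: read the temperature
        room.heater_on = reading < target  # processor: compare to the set-point
        room.step()                        # actuator: heat (or not), feeding back
    return room.temperature

print(round(thermostat_loop(Room()), 1))  # ends up oscillating near the target
```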

Humans can play a variety of different roles in such automation loops. Sometimes they might be sensors for the machine, collecting and feeding it relevant data. Sometimes they might play the processing role. Sometimes they could be actuators, i.e. the muscle that does the actual work. Sometimes they might play one, two or all three of these roles. Sometimes they might share these roles with the machine. When we think about humans being in, on, or off the loop, we need to keep in mind these complexities.

To give an example, the car is a type of automation device. Traditionally, the car just played the part of the actuator; the human was the sensor and processor, collecting data and issuing instructions to the machine. The basic elements of this relationship remain the same, although there is now some outsourcing and sharing of sensory and processing functions with the car’s onboard computers. So, for example, my car can tell me how close I am to an object by making a loud noise; it can keep the car travelling at a constant speed when cruising down a motorway; and it can even calculate my route and tell me where to go using its built-in GPS. I’m still very much involved in the loop, but the machine is taking over more of the functions I used to perform myself.
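
A partially automated loop of this kind might be sketched as follows. Again, the function names and numbers are hypothetical: the point is simply that the car supplies the sensing and the actuation, while the human still occupies the processing role.

```python
# Hypothetical sketch of a partially automated loop: the car senses and acts,
# but the human still fills the processing role (a "human in the loop").

def parking_sensor():
    """The car as sensor: an invented proximity reading, for illustration."""
    return {"distance_to_object_m": 0.8}

def human_decision(observation):
    """The human as processor: the machine only advises, the driver decides."""
    if observation["distance_to_object_m"] < 1.0:
        print("Beep! Object close.")  # the car's advisory warning
    return input("Brake or continue? ")

def car_controls(instruction):
    """The car as actuator: it carries out whatever the driver decides."""
    print(f"Carrying out: {instruction}")

if __name__ == "__main__":
    observation = parking_sensor()
    car_controls(human_decision(observation))
```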

Eventually, the car will be a fully automated loop, with little or no role for human beings. Not even a supervisory one. Indeed, some manufacturers want this to happen. Google, reportedly, want to remove steering wheels from their self-driving cars. Why? Because it is only when humans take over that accidents seem to happen. The car will be safer if left to its own devices. This suggests that full automation might be better for the world.


2. The Consequences of Automation for the External World
Automation is undertaken for a variety of reasons. Oftentimes the motivation is benevolent. Engineers and technicians want to make systems safer and more effective, or they want to liberate humans from the drudge work, and free them up to perform more interesting tasks. Other times the motivation might be less benevolent. Greedy capitalists might wish to eliminate human workers because it is cheaper, and because humans get tired and complain too much.

There are important arguments to be had about these competing motivations. But for the time being let’s assume that benevolent motivations predominate. Does automation always succeed in realising these benevolent aims? One of Carr’s central contentions is that it frequently does not. There is one major reason for this. Most people adhere to something called the ‘substitution myth’:


Substitution Myth: The belief that when a machine takes over some element of a loop from a human, the machine is a perfect substitute for the human. In other words, the nature of the loop does not fundamentally change through the process of automation.


The problem is that this is false. The automated component often performs the function in a radically different way, and this changes both the other elements of the loop and its outcome. In particular, it changes the behaviour of the humans who operate within the loop or who are affected by its outputs.

Two effects are singled out for consideration by Carr, both of which are discussed in the literature on automation:

Automation Complacency: People get more and more comfortable allowing the machine to take complete control.

Automation Bias: People afford too much weight to the evidence and recommendations presented to them by the machine.

You might have some trouble understanding the distinction between the two effects. I know I did when I first read about them. But I think the distinction can be understood if we look back to the difference between human ‘in the loop’ and ‘on the loop’ systems. As I see it, automation complacency arises in the case of a human on the loop system. The system in question is fully automated with some limited human oversight (i.e. humans can step in if they choose). Complacency arises when they choose not to step in. Contrariwise, automation bias arises in the case of a human in the loop system. The system in question is only partially automated, and humans are still essential to the process (e.g. in making a final judgment about the action to be taken). Bias arises when they don’t second-guess or go beyond recommendations given to them by the machine.

There is evidence to suggest that both of these effects are real. Indeed, you have probably experienced some of these effects yourself. For example, how often do you second-guess the route that your GPS plans for you? But so what? Why should we worry about them? If the partially or fully automated loop is better at performing the function than the previous incarnation, then isn’t this all to the good? Could we not agree with Google that things are better when humans are not involved?

There are many responses to these questions. I have offered some myself in the past. But Carr clearly thinks that these two effects have some seriously negative implications. In particular, he thinks that they can lead to sub-optimal decision-making. To make his point, he gives a series of examples in which complacency and bias led to bad outcomes. I’ll describe four of them here.

I’ll start with two examples of complacency. The first is the case of the 1,500-passenger ocean liner Royal Majesty, which ran aground on a sandbar near Nantucket in 1995. The vessel had been travelling from Bermuda to Boston and was equipped with a state-of-the-art automated navigation system. However, an hour into the voyage a GPS antenna came loose and the ship proceeded to drift off course for the next 30 hours. Nobody on board did anything to correct the mistake, even though there were clear signs that something was wrong. They didn’t think to challenge the wisdom of the machine.

A similar example of complacency comes from Sherry Turkle’s work with architects. In her book Simulation and its Discontents, she notes how modern-day architects rely heavily on computer-generated plans for the buildings they design. They no longer painstakingly double-check the dimensions in their blueprints before handing the plans over to construction crews, and this results in occasional errors, all because they have become reluctant to question the judgment of the computer program.

As for bias, Carr gives two major examples. The first comes from drivers who place excessive reliance on GPS route planners. He cites the 2008 case of a bus driver in Seattle whose bus had its top sheared off when he collided with a concrete bridge with a nine-foot clearance. He was carrying a high-school sports team at the time, and twenty-one of the students were injured. He said he did not see the warning lights because he was busy following the GPS instructions.

The other example comes from the decision-support software that is nowadays used by radiographers. This software flags particular areas of an X-ray scan for closer scrutiny. While this has proven helpful in routine cases, a 2013 study found that it actually reduces the performance of expert readers in difficult cases. In particular, it found that the experts tend to overlook areas of the scans not flagged by the software, but which could be indicative of some types of cancer.

These four examples support the claim that automation complacency and automation bias can lead to inferior outcomes.


3. Conclusion
But is this really persuasive? I think there are some problems with the argument. For one thing, some of these examples are purely anecdotal. They highlight sub-optimal outcomes in certain cases, but they involve no proper control data. The Royal Majesty may have run aground in 1995, but how many accidents have been averted by the use of automated navigation systems? And how many accidents have arisen through the fault of human operators? (I can think of at least two high-profile passenger-liner accidents in the past couple of years, both involving human error.) Likewise, the bus driver may have crashed into the bridge, but how many people have gotten to their destinations faster than they otherwise would have through the use of GPS? I don’t think anecdotes of this sort are a good way to reach general conclusions about the desirability of automation systems.

The work on radiographers is more persuasive since it shows a deleterious comparative effect in certain cases. But, at the same time, it also found some advantages to the use of the technology. So the evidence is more mixed there. Now, I wouldn’t want to make too much of all this. Carr provides other examples in the book that make a good point about the potential costs of automation. For instance, in chapter five he discusses some other examples of the negative consequences of automation and digitisation in the healthcare sector. So there may be a good argument to be made about the sub-optimal nature of automation. But I suspect it needs to be made much more carefully, and on a case-by-case basis.

In saying all this, I am purely focused on the external effects of automation, i.e. the effects with respect to the output or function of the automated system. I am not concerned with the effects on the humans who are being replaced. One of Carr’s major arguments is that automation has deleterious effects for them too, specifically with respect to the degeneration of their cognitive functioning. This turns out to be a far more interesting argument and I will discuss it in the next post.
