Human and other factors: systems, situations and awareness

‘The blame of loss, or murder,
Is laid upon the man.
Not on the Stuff—the Man!’

The Hymn of Breaking Strain, Rudyard Kipling, 1935

An insight by the celebrated economist John Maynard Keynes about the savage politics of the Great Depression is also relevant to the statistically safe, but inherently dangerous, world of aviation.

‘The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood,’ Keynes wrote in the modestly named General Theory of Employment, Interest and Money. ‘Indeed the world is ruled by little else.’

In short, ideas rule the world. There’s an academic struggle taking place now which, as Keynes’s maxim predicts, will eventually spill into the mainstream world of aviation. It will go from lecture rooms and seminar rooms to crew rooms, board rooms and flying club rooms. It’s a battle for the soul of human factors as a discipline. This might sound like a dry academic argument, conducted by professional obscurantists, but the implications for the future safety of flight are profound, and may determine whether accident rates continue to decline.

The battlefield is the concept of situational awareness, an idea linked to human factors training by crew resource management. But Sidney Dekker says the term situational awareness has become no more than a new way of allocating blame. In philosophical terms it is a circular argument, he argues in a 2015 paper, quoting a chain of questions from John Flach to demonstrate the point.

‘Why did you lose situation awareness?
 Because you were complacent.
 How do we know you were complacent?
 Because you lost situation awareness.’

Then he really gets going: ‘“loss of situation awareness” is analytically nothing more than a post hoc (after the fact) judgment that says we know more about the situation now than other people apparently did back then,’ Dekker thunders.

‘With such scientific legitimation, we can hardly blame lay people for seeing such a construct as a “convenient explanation that [they] easily grasp and embrace”.’

Dekker does not dispute that situational awareness techniques are something that crews should be taught and attempt to practise, within the limits of human cognition and physiology. But he is unconvinced it offers a safety panacea and he has strong disdain for it being used as an explanation.

‘Constructs such as situation awareness lock human factors into a hopelessly old-fashioned dualist ontology where there is a world and a mind, and the mind is merely the (imperfect) mirror of the world,’ he says.

University of the Sunshine Coast professor of human factors, Paul Salmon, instead suggests that situational awareness can be a useful concept, but only when it is viewed through the appropriate ‘systems’ lens. To demonstrate this, Salmon and British academics Guy Walker and Neville Stanton wrote a paper published last year arguing that ‘situation awareness’ is a distributed property, held by the system as a whole and shared between its human and non-human elements.

‘Many recent crash investigation findings reflect “a trend in which human operators are implicated for their role in accidents due to having ‘lost situation awareness’ or because they had ‘poor situation awareness’”. This focus on human operators and their own awareness remains, despite systems thinking now being widely accepted as the most appropriate approach to accident investigation,’ they say.

Their study used network analysis to map the interactions between crewmembers and between crew and aircraft systems.

‘Situation awareness is an emergent property that is held by the overall system and is built through interactions between “agents”, both human (e.g. human operators) and non-human (e.g. tools, documents, displays),’ they say.
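
To make the idea concrete, the kind of network the study describes can be sketched in a few lines of code. The agent names, transactions and choice of library below are illustrative assumptions, not material from the paper; the point is simply that awareness-carrying information flows can be mapped and analysed as a graph.

```python
# A sketch of distributed situation awareness modelled as a network of agents,
# in the spirit of the Salmon, Walker and Stanton analysis. The agent names,
# transactions and use of the networkx library are illustrative assumptions,
# not data or code from their study.
import networkx as nx

# Nodes are human and non-human agents; directed edges are information
# transactions between them.
agents = ["captain", "first_officer", "pitot_tubes", "air_data_computer",
          "autopilot", "primary_flight_display"]

transactions = [
    ("pitot_tubes", "air_data_computer", "airspeed"),
    ("air_data_computer", "autopilot", "airspeed"),
    ("air_data_computer", "primary_flight_display", "airspeed"),
    ("primary_flight_display", "first_officer", "attitude and airspeed"),
    ("first_officer", "captain", "verbal call-out"),
    ("captain", "first_officer", "verbal instruction"),
]

network = nx.DiGraph()
network.add_nodes_from(agents)
for sender, receiver, information in transactions:
    network.add_edge(sender, receiver, information=information)

# Degree centrality gives a crude indication of which agents the system's
# awareness depends on; losing a well-connected non-human agent degrades
# awareness for the whole network, not just for the pilots.
print(nx.degree_centrality(network))

# Every route by which airspeed information can reach the captain.
print(list(nx.all_simple_paths(network, "pitot_tubes", "captain")))
```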

Examining a contemporary air transport accident, they argue ‘the initial transaction that led to the incident beginning was entirely between non-human agents (e.g. the pitot tubes and the cockpit systems), whereby the pitot tubes and eventually the cockpit systems lost awareness of the plane’s airspeeds’. They also note the inability of the aircraft systems to clearly inform the pilots of what was going on.

‘This included why the autopilot had disconnected, what the appropriate procedure was, what actions the pilot flying was taking in response to the situation, and the status and associated risk of key flight parameters.

‘It is inappropriate to point to a loss of awareness on behalf of the aircrew only,’ they say.

Like Dekker, they argue that the most serious problem with blaming dead crews for crashes is that it obscures other factors.

Salmon and his colleagues say excessive focus on situational awareness leads to a distortion in training. ‘Particularly problematic is the fact that focusing on individual cognition during accident investigation inevitably leads to counter-measures which focus on fixing human operators through avenues, such as retraining and reprisals, an approach that has long been known to be inappropriate,’ they write.

‘What makes this state of affairs more worrying is that, in ignoring advances in the human factors knowledge base and returning to individual operator-focused concepts, our discipline may no longer be doing what it should—supporting the design of safe socio-technical systems in which humans are viewed as assets rather than the source of problems.’

Hands, minds and machines—the ergonomic approach

Yvonne Toft is Associate Professor of Human Factors & Systems Safety at Central Queensland University. Human factors, she says, is far from the sole domain of psychology: it is another name for the broad and multifaceted discipline of ergonomics, which predates aviation by hundreds, if not thousands, of years. Its name dates from 1857, when it was coined by Polish polymath Wojciech Jastrzębowski.

‘Human factors (or ergonomics) is about optimising interactions between people and other components of the system. It’s a design science about systems and artefacts within systems that enable people to perform optimally. It’s not about tweaking people,’ she says.

Problems with human-device interaction can often be better solved by changing the device rather than attempting to change the human, she says.

‘There are conventions (norms) we all work with. When you turn something to the right you expect it to increase; when you move a control up you expect more volume, or light, or whatever.

‘If you design against those, you can train people to be counter-intuitive, but there’s a price for that. And in an emergency the price is highest, because people revert to their norms,’ she says.

‘If we design things that take account of these norms, we have less requirement for training and retraining because things work the way people expect them to. It breaks down the chance of so-called errors.’
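
A trivial sketch makes the point. The control names and scale factor below are invented for illustration; the contrast is between a mapping that follows the convention and one that fights it.

```python
# Sketch of the control-display compatibility norm Toft describes: clockwise
# rotation (or upward movement) is expected to increase the controlled value.
# The function names and scale factor are illustrative assumptions.

def adjust_volume(current: float, knob_degrees_clockwise: float) -> float:
    """Follows the norm: turning the knob to the right increases the value."""
    return max(0.0, min(100.0, current + 0.2 * knob_degrees_clockwise))

def adjust_volume_inverted(current: float, knob_degrees_clockwise: float) -> float:
    """Violates the norm: clockwise turns decrease the value. People can be
    trained to cope, but under stress they tend to revert to the convention."""
    return max(0.0, min(100.0, current - 0.2 * knob_degrees_clockwise))

print(adjust_volume(50.0, 90.0))           # 68.0 - matches user expectation
print(adjust_volume_inverted(50.0, 90.0))  # 32.0 - counter-intuitive mapping
```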

Toft works largely in rail transport and power generation safety, but says the parallels with aviation are vivid. As an example, she compares the Zanthus, Western Australia, rail accident of 1999, where an operator switched track points at the wrong moment, to the SpaceShipTwo crash of 2014, where a test pilot unlocked an aerodynamic control prematurely. ‘In neither case was there any interlock or defence against a slip, which can be considered an inevitable human error, no matter how well-trained the operator. In both cases the human was set up to fail.’

Associate Professor Geoff Dell, Discipline Leader of Accident Investigation and Forensics at Central Queensland University, says aviation has had an interest in design science since the days of Alphonse Chapanis, who did pioneering work in World War II.

‘So while one of the drivers in aviation has been design science, the flip side has been long product life cycles, with basic designs in production for decades and in service for decades after production ends,’ he says.

‘The result is operators are left with situations where it’s very difficult, if not impossible, to tweak or retrofit systems, so the tendency is to try and tweak, or perfect, the human operator.’

Most modern aircraft have better ergonomics than older designs, with landing gear and flap levers widely separated and controls mostly mirroring the operation of the system they are attached to. Earlier types did not always follow this convention. The British Hawker Siddeley Trident had a droop retract lever that moved forward, the opposite direction to the way the leading edge droop travelled when it retracted. This lever, although differently sized, sat next to the flap lever, which was correctly mirrored with the flaps, moving back to extend and forward to retract. Whether this was a factor in the crash of a Trident, BEA flight 548, in 1972 cannot be said for sure, although other Tridents experienced unexplained droop retractions from time to time. These may have been purely mechanical failures or design-induced errors.

But some acknowledged ergonomic flaws remain in modern aviation. A 2001 paper by Sidney Dekker and Gideon Singer looked at research into one of the most fundamental flight instruments, the attitude indicator. The findings of human factors research into this instrument are unambiguous: western-built aircraft are using the second-best system. In most of the world, an attitude indicator uses a so-called inside-out display, in which the horizon moves and the nominal aircraft stays fixed. The former Soviet Union developed a different ‘outside-in’ display, still used in some Russian aircraft, in which the horizon stays fixed and the display aircraft moves.

Reviewing tests of the inside-out (western) and outside-in (Russian) displays, Dekker and Singer say, ‘What links this research is the consistent superiority of an outside-in ADI representation: pilots perform better with it; they themselves prefer it; they have been shown to use the outside-in mental concept even when flying with an inside-out ADI, and no amount of overlearning on the inside-out ADI can overcome their preference for an outside-in configuration.’
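
The distinction between the two philosophies comes down to what rotates on the screen for a given bank angle. The sketch below is a simplification for illustration only; real attitude indicators also show pitch and other cues, and the sign conventions here are assumptions.

```python
# What moves on each attitude indicator type for a given bank angle.
# Sign conventions (positive = right bank, positive rotation = clockwise on
# screen) are assumptions made for this sketch.

def inside_out_display(bank_deg: float) -> dict:
    """Western convention: the horizon line rotates, the aircraft symbol is fixed."""
    return {"horizon_rotation_deg": -bank_deg, "aircraft_symbol_rotation_deg": 0.0}

def outside_in_display(bank_deg: float) -> dict:
    """Soviet/Russian convention: the aircraft symbol rotates, the horizon is fixed."""
    return {"horizon_rotation_deg": 0.0, "aircraft_symbol_rotation_deg": bank_deg}

# A 30-degree right bank:
print(inside_out_display(30.0))   # the horizon tilts the other way on screen
print(outside_in_display(30.0))   # the aircraft symbol banks right, horizon level
```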

Dekker and Singer then did original research into the best display system for the sky pointer used in glass panel attitude indicators.

They assessed the roll recovery performance of 13 commercial pilots in 390 trials, using an experimental set-up that tested three different sky pointer configurations: one commercial (roll index on top, slaved to the horizon), one general aviation (roll index on top, slaved to the aircraft), and one military (roll index at bottom, slaved to the horizon).
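
For a given bank angle, the three set-ups differ in where the roll index sits and whether it moves with the horizon or with the aircraft. A rough sketch of that difference follows; the positions and sign conventions are assumptions for illustration, not the experimental apparatus itself.

```python
# Sketch of the three sky-pointer configurations compared by Dekker and Singer.
# An index 'slaved to the horizon' counter-rotates against the bank on screen;
# one 'slaved to the aircraft' rotates with it. Signs are assumptions.

def roll_index(config: str, bank_deg: float) -> dict:
    """Return the roll index position and its on-screen rotation for a bank angle."""
    if config == "commercial":        # index on top, slaved to the horizon
        return {"position": "top", "index_rotation_deg": -bank_deg}
    if config == "general_aviation":  # index on top, slaved to the aircraft
        return {"position": "top", "index_rotation_deg": bank_deg}
    if config == "military":          # index at bottom, slaved to the horizon
        return {"position": "bottom", "index_rotation_deg": -bank_deg}
    raise ValueError(f"unknown configuration: {config}")

for config in ("commercial", "general_aviation", "military"):
    print(config, roll_index(config, 30.0))
```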

While their sample may have been small, their results were clear-cut. ‘The commercial set-up (roll index on top of the ADI, slaved to the horizon) is almost five times more likely to produce roll reversal errors than the military or general aviation configuration,’ they found.

‘The source of error probably lies in more basic features of the commercial ADI itself—the violation of the control-display compatibility principle being a strong contender on the basis of the results from our study.’

Toft says pilots are their own worst enemies when it comes to recognising design errors.

‘They’re a trained user group that accepts responsibility for mistakes and doesn’t blame design. They do so well for the majority of the time that it’s hard to build the evidence base to argue for change. It’s a very different situation to the automotive industry, where a defect that might be linked, not necessarily causally, to a driver error is grounds for a mass recall. We’ve seen those from Toyota and General Motors, over things like floor mats that might obstruct accelerator pedals, and an ignition key barrel that could fail if a large bunch of keys was hung from it.’

The contrast with aviation can be frustrating, she says. She gives the example of fuel caps on general aviation and sport aircraft. On many types it is possible to leave the cap sitting on the ground after refuelling, and once this happens the placement of the fuel filler means airflow over the wing drains fuel from the tank.

‘I mentioned this to RA-Aus and they said, “that’s why we have very stringent procedures”, but in one case I know of, the pilot was just exemplary in his procedures on every flight, except the one where he was preoccupied by weather conditions and left the cap off,’ she says. ‘He made a mistake because he was human, that’s all.’

Beyond human: error and new technologies

While well-trained and disciplined pilots minimise the consequences of poor design with current aviation technologies, future technologies may be beyond human ability to work around. Advanced technologies with a high degree of autonomy, but also of complexity, offer abundant potential for design-induced error and confusion.

‘We know that human errors belong to every part of the system,’ Toft says. ‘The decisions made by people far from the sharp end influence the outcome at the sharp end.’

‘The question we have to face when considering over-the-horizon technologies is, what will be the impacts of new systems being guided by more complex technologies, if design process errors have not been removed?’

Using their distributed situational awareness model, Salmon and his colleagues attempt to set out rules for interaction with complex systems. They focus on creating simplicity and clear communication.

‘Countermeasures should focus on enhancing the transactions required during both routine and non-routine flight situations. For example, what information is required, how best it can be communicated in high-workload situations, and who or what it should be communicated by are important considerations.’

‘It is systems, not individuals, that lose situation awareness and therefore those systems, not individuals, should be the focus when attempting to improve performance following adverse events.’
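
One way to make those ‘transactions’ concrete is to catalogue, for each piece of information, which agent sends it, which agent must act on it, and under what workload it arrives, so that redesign effort goes to the system rather than the individual. The fields and example entries below are illustrative assumptions, not a scheme proposed in the paper.

```python
# Sketch of cataloguing information transactions between agents so that
# countermeasures target the system rather than individuals. The fields and
# example values are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Transaction:
    sender: str        # human or non-human agent originating the information
    receiver: str      # agent that must act on it
    information: str   # what is being communicated
    phase: str         # routine or non-routine flight situation
    workload: str      # expected workload when the transaction occurs

catalogue = [
    Transaction("autopilot", "pilot_flying", "reason for disconnection",
                phase="non-routine", workload="high"),
    Transaction("pilot_flying", "pilot_monitoring", "control inputs being made",
                phase="non-routine", workload="high"),
    Transaction("pilot_monitoring", "pilot_flying", "flight parameter status",
                phase="routine", workload="low"),
]

# Non-routine, high-workload transactions are the obvious candidates for
# redesign: clearer annunciations, dedicated alerts or cross-cockpit feedback.
for t in catalogue:
    if t.phase == "non-routine" and t.workload == "high":
        print(f"review design of: {t.sender} -> {t.receiver}: {t.information}")
```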

Tweaking the system—ergonomic lessons from a shipping container

Australian-born NASA researcher Alan Hobbs says the hard-won experience of military drone pilots teaches a bitter lesson in the importance of ergonomics in system design.

‘The control station is no longer a cockpit—doesn’t even look like a cockpit. In many cases it looks more like a control room of a nuclear power station,’ Hobbs told the Australian Society of Air Safety Investigators annual conference.

Hobbs said it was something of a misnomer to think of a large drone as unmanned. ‘In some ways there are more crew than on a conventional aircraft. And it can turn very rapidly from having one or two people in the control station, to having a great number of people, all of them wanting to have their say,’ he said.

Images: NASA Dryden Flight Research Photo Collection | Ken Ulbrich; Tony Landis

He said too little attention was paid to how concepts evolved and stretched between conventional and remotely piloted aviation. ‘When you talk to an airline pilot about flight termination they mean pulling up to the terminal, shutting down and collecting their bag. When you talk to an unmanned aircraft pilot about flight termination it means they’ve got a problem so severe they are going to destroy the aircraft or ditch’, he said.

‘It changes the decision making enormously. You may not want to save your million dollar investment if it means risking people on the ground.’

Reviewing a 2006 accident with an MQ-9 drone flown by US Customs and Border Protection he said, ‘Sometimes accidents reveal a lot about systems.’

‘The remote flight deck was essentially a shipping container, with two seats for the pilot and sensor operator. They had identical consoles. On this flight they had a lock-up, a screen freeze—not an unusual thing.

‘The pilot decided to fly from the identical console on the right that was being used by the sensor operator. He asked the sensor operator to stand up so he could switch control across. But the aircraft uses multi-function controls on each side. When you’re operating in pilot mode, a particular lever controls fuel flow to the engine, but in sensor operator mode the same lever controls the iris setting of the camera.

‘The lever was set to a particular iris setting, so that when control was switched over it shut off fuel to the engine. When they switched across they were in a rush and decided not to use the checklist. The engine stopped and the aircraft started descending. At this point they didn’t know what was going on—they thought it was some sort of technical problem.

‘An auditory alarm went off—but there was only one auditory alarm tone—it could have meant any number of things. They didn’t realise that the engine had stopped, so they intentionally disconnected the radio link. That would normally send the aircraft into its automatic lost link mode, where it would climb to 15,000 feet.

‘The aircraft was equipped with an auto-ignition system that could have re-started the engine, but this system would only work when there was an active satellite link. But the aircraft was designed to conserve battery power when there was no engine power, and one of the ways it did this was by shedding the satellite link. So the aircraft glided to an impact with the ground.’
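
The trap in this sequence is a single physical lever whose meaning depends on which mode the console is in. A minimal sketch of that mapping, and of the checklist step the crew skipped, might look like this; the mode names, scales and threshold are assumptions for illustration, not the MQ-9’s actual control logic.

```python
# Sketch of a mode-dependent control mapping of the kind Hobbs describes: the
# same physical lever commands fuel flow in pilot mode but the camera iris in
# sensor mode. Names, scales and the checklist check are illustrative assumptions.

def lever_effect(console_mode: str, lever_position_pct: float) -> dict:
    """Interpret one physical lever position according to the console's mode."""
    if console_mode == "pilot":
        return {"fuel_flow_pct": lever_position_pct}   # 0.0 cuts fuel to the engine
    if console_mode == "sensor":
        return {"camera_iris_pct": lever_position_pct}
    raise ValueError(f"unknown console mode: {console_mode}")

def handover_checklist_ok(current_fuel_flow_pct: float,
                          receiving_lever_pct: float) -> bool:
    """The defence: match the receiving console's lever to the current engine
    setting before transferring control."""
    return abs(receiving_lever_pct - current_fuel_flow_pct) < 1.0

# The sensor operator's lever was near closed for an iris setting - harmless
# in sensor mode.
sensor_lever_pct = 0.0
print(lever_effect("sensor", sensor_lever_pct))

# Switching that console to pilot mode without the checklist carries the lever
# position across, but its meaning changes: the same position now cuts fuel.
print(handover_checklist_ok(current_fuel_flow_pct=60.0,
                            receiving_lever_pct=sensor_lever_pct))  # False
print(lever_effect("pilot", sensor_lever_pct))  # engine fuel shut off
```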

Comments

  1. Systems obscurity/complexity and human interaction have long been implicated in disasters. The more we build in system integration and supposed ‘daylighting’, the more likely it is that the “system” will behave “according to design”… except the complexity is obscured from even the most knowledgeable and capable savants. As the system gains in capability (i.e. designed integration with backstops) the pilot/operator becomes more and more distant from the inner workings. Regardless of pilot training and capability, expectation biases become significant in the event initiation, recognition, response and correction sequence. SA (situational awareness) is always challenging, and even more so as the inanimate side of the ‘system’ becomes increasingly capable. “Why did it do THAT???” is increasingly less likely to be sorted out in the intense heat of an exigency. Essentially, my takeaway from this thoughtful article is that ‘SA’ is the leprechaun’s mythical pot o’ gold. Seek it at the end of the rainbow, but don’t expect to find it.
