Final destination: Birmingham

Image: Hal Yeager, AP Photo

The time has come
What lays behind
Will always outweigh
What awaits me
Tipping Point, David Kush

Adrian Park analyses the combination of acts, omissions and circumstances that saw a freight aircraft hit the ground short of the runway, killing its crew.

On 14 August 2013, in pre-sunrise darkness, the crew of UPS 1354 prepared their Airbus A300 for its cargo flight over the southern United States from Louisville to Birmingham, Alabama. The crew had been on this flight many times before. However, on this night, a convergence of numerous small divergences would bring the crew to the tipping point.

In researching accident reports I’ve regularly wondered something: how can it be that on one particular evening a crew flies fatigued, or makes a programming error, or misses weather information without consequence, and yet on another night the flight ends in disaster? More personally, why is it that I’ve sometimes flown with a measure of fatigue, or misprogrammed a GPS, or missed important weather information, and not crashed? The sobering reality is that small deviations are not small when they are part of a network of effects. In some cases small, ‘regular’ deviations collaborate in new ways to bring about a sudden and catastrophic accident.

So it was with UPS 1354. As the aircraft was prepared for flight, the captain and first officer discussed an inconsistent fatigue management policy:

Rockford (Illinois) is only fourteen hours [to] rest. So you figure a thirty minute ride [to the] hotel….

I know by the time you go to sleep you are down to about twelve [hours rest]. Wow.

This is where the passenger side, you know, the new rules they’re gonna make out…

Yeah, we need that too.

I mean I don’t get that. You know it should be one level of safety for everybody…

The pilots were critiquing a fatigue-related policy that allowed passenger-carrying crews greater duty-free time than the crews of cargo aircraft flying similarly fatiguing routes. Commenting on her own feelings of fatigue, the first officer remarked:

‘When my alarm went off… I mean I’m thinkin’ I’m so tired …’

She was indeed tired. The investigation would show she had not fully utilised her off-duty period (and therefore her sleep opportunity) to actually sleep. Instead, phone records and witness accounts showed she’d been awake for much of the duty-free period, either on her phone or visiting friends.

The captain was better rested, but a few weeks earlier he’d commented to a colleague about the demanding day-to-night roster changes …

‘I can’t do this until I retire … it’s killing me.’

Both crew members had flown with some degree of fatigue on other nights; however, those other nights had not had as many tipping-point factors at work. Still, they would have been forgiven for thinking they were all right as long as there were no ‘surprises’ on the flight. Unfortunately, there were indeed surprises—more than a few.

As the crew proceeded with the start-up, they read the dispatcher’s weather reports. The reports were generated by an automated system, and therein another ‘small’ factor began its work.

For years the automated system had been deleting the ‘comments’ line of the METAR—the aviation routine weather report. Neither UPS pilots nor UPS’s director of flight operations were aware of this apparently inconsequential omission, and admittedly, up to this point, the missing remarks section had not been an issue. Not this time—the missing remarks stated Birmingham was experiencing variable ceilings down to 600 ft. This meant it was unlikely the aircraft would be able to break visual at the minimums unless the approach was made to the main runway. Again though, this one factor needn’t have been a major issue. As long as there were no surprises, as long as the main runway at Birmingham was available, the missing remarks section would be just as irrelevant as it had been on so many flights before.
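To make the omission concrete (an illustrative example, not the actual report text): in US METAR practice, ceiling variability is reported only in the remarks, coded as a ‘CIG’ group. A report ending in something like ‘RMK AO2 CIG 006V010’ tells the crew the ceiling is varying between 600 and 1000 ft, information that appears nowhere in the body of the report. Delete the remarks line and that variability simply disappears from the crew’s picture.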

The crew took off, heading towards Birmingham unaware of the lowering ceilings and unaware of the next surprise. A notice to airmen, given to the pilots by the dispatcher before departure, showed the main runway closed until 0500. Their arrival was scheduled for 0451. The crew noticed the runway outage only on receiving the aerodrome terminal information just prior to arrival.

First officer: Well. Did you hear any of Papa? [the terminal information]

Captain: I didn’t hear any of it.

First officer: They’re sayin’ six and two-four [the main runway] is closed. They’re doin’ the localiser to one eight [the much shorter secondary runway].

Captain: Localiser to one eight. It figures …

With their original estimate of 0451, only nine minutes before the runway was to reopen, the crew decided to make a non-precision approach to the shorter runway—still unaware of the lowered ceilings.

Normally, on a surprise-free night, the crew would have picked up this important information from the terminal weather, but on this particular night the air traffic controller who prepared the terminal information had not included the variable 600 ft ceilings in the remarks. Even without this information, however, if the crew had looked more closely at the published approach plate for the runway 18 localiser, they would have seen it clearly stated that approaches of this type were ‘NA’ at night; that is, ‘not available’. (This was later acknowledged by Jeppesen to be an error; the approach was in fact certified for night use. The crew did not know this at the time, however, which meant the plan to fly the approach should at least have been questioned.) The restriction on the plate was never briefed, nor actioned, and the crew continued their preparation for a non-precision approach—an approach neither pilot was well practised in. It was then that another surprise entered the mix—a small thing called ‘finger trouble’.

An aviation colloquialism, ‘finger trouble’ refers to a human-induced programming error. It could be finger trouble with frequencies, waypoints or other data, but it always produces a ‘garbage in, garbage out’ condition. As UPS 1354 began its final approach, both crew members were unaware their autopilot profile information was erroneous: the first officer had inadvertently neglected to sequence the flight management computer to an approach waypoint—it was still set to the aerodrome waypoint. The glide-slope indicators were consequently ‘pegged out’ uselessly at the top of the scale. The fatigued crew of UPS 1354 was now set to conduct one of the most demanding types of instrument approach without any glide-slope assistance, on a dark night, with cloud below the minimums.

UPS 1354 received its clearance from air traffic control and began its final approach. Inside the cockpit, the crew complained about another surprise: ATC had kept them high. To compensate, the crew initiated a high descent rate and began what is colloquially called a ‘dive-and-drive’ approach: the aircraft is flown at a higher-than-recommended descent rate to reach each step-down altitude as soon as possible, as opposed to the more acceptable method of following a constant, more controllable (and therefore easier to judge) glide path.
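For a rough sense of the numbers (illustrative figures, not taken from the report): a common rule of thumb for a constant three-degree descent path is groundspeed in knots multiplied by five, which gives the required rate of descent in feet per minute. At a typical approach groundspeed of around 140 kt that works out to roughly 140 × 5 = 700 ft/min. A dive-and-drive profile abandons that steady figure for bursts at twice the rate or more, interspersed with level segments flown progressively closer to the ground.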

The National Transportation Safety Board had long recommended against such approaches after not a few ‘dive-and-drive’ incidents. In 2006 it made the following safety recommendation:

Require … operators to incorporate the constant-angle-of-descent technique into non-precision approach procedures and to emphasise the preference for that technique where practicable.

On other flights, in a less fatigued state, the dive-and-drive approach would probably have been flown successfully—but not tonight. The first dive (then drive) saw UPS 1354 level out successfully above the descent step. The second was more problematic. As UPS 1354 began its final ‘dive-and-drive’ towards runway 18, the first officer noticed the aircraft’s descent rate was under the manual control of the captain. Earlier, on discovering the autopilot would not engage in VNAV (vertical nav) mode because of the programming error, the captain had, unannounced, switched the autopilot to vertical speed mode. The first officer did not assertively question this, nor did she question the excessive descent rate that was about to come.

First officer: Let’s see you’re in … vertical speed … okay.

Captain: Yeah I’m gonna do vertical speed. Yeah, he kept us high.

First officer: Kept ya high. Could never get it over to ‘profile…’

The first officer’s comment ‘could never get it over to profile’ referred to the fact that the autopilot would not engage in the vertical nav ‘profile’ mode, and she was attributing this to being kept high. She was still unaware of the programming error. UPS 1354’s rate of descent had by now increased to 1500 feet per minute under the captain’s manual control—nearly double the normal rate. The crew’s workload increased dramatically, while the time available to make an appropriate decision at the minimum descent altitude decreased. This was not a good situation even for an alert crew in the middle of the day, let alone a crew that had been up nearly all night. Still, even now, if the crew had made the appropriate decision to abort the approach at the minimums, all would have been well. But they did not. The next call from the crew came well below the minimums and still in cloud:

First officer: ‘There’s a thousand feet, instruments cross-checked, no flags.’

The minimum descent altitude was 1200 ft. This should have been the moment for the clarion call to abort the approach and go around. Instead, the aircraft was 200 ft below the minimums and only 10 seconds from ground impact, yet the cockpit tone remained conversational.

First officer: ‘It wouldn’t happen to be “actual”…’ [chuckle]

In mentioning ‘actual’ she was referring to the fact that they were still in cloud, and probably expressing her sentiment that, since there had been so many other surprises, of course there would be cloud at the minimums. Shortly afterwards UPS 1354’s sink rate warning went off. Still there was no response from the crew. At last, mere seconds before ground impact, the captain said he could see the runway. It was too late. They were well and truly beyond the tipping point, which, in a fast-moving aircraft, is also the impact point.

At 4:47 in the morning, UPS 1354 struck trees 1600 m short of the runway, ingesting wood fragments into its engines. The CVR recorded the captain maintaining his ‘bedside manner’ all the way to impact. Even as the trees thudded into the wings and fuselage he said, still almost conversationally, ‘Oh, did I hit something?’ Shortly afterwards UPS 1354 collided violently with the ground, ploughing its way to a catastrophic, fiery end.

Blunt force injuries killed the crew on impact, and the broken back of the aircraft surreally disgorged neatly packaged UPS parcels over hundreds of metres. A few minutes after impact, security cameras showed the main runway—now reopened—illuminated by UPS 1354’s fireball.

How is it that on one night an aircraft is accident-free, but on another it crashes? The answer lies in the dangerous alliance of fatigue with a collaboration of the unexpected: lowered ceilings, a closed runway, omitted weather remarks and finger trouble.

This collaboration of factors pushed the crew of UPS 1354 past the tipping point. When I’ve wondered about these things I’ve also been disturbed—disturbed that these relatively ‘small’ divergences in major accidents are ones we pilots have probably all experienced at one point or another in our careers. Flying a little tired? Tick. Missed relevant operational data? Tick. Unexpected runway closures and/or cloud at minimums? Tick. Approach flown a little ugly? Tick. And yet here we all are reading someone else’s catastrophe rather than them reading ours.

The biggest lesson for me from UPS 1354 is that movement towards the tipping point doesn’t require a big effect—just a collaboration of small ones. If we want to avoid other pilots reading about our own untimely tipping point, it is not enough to see small issues in isolation: wherever we can, we must see them as part of a network of effects. And even if we can’t see the big picture, we should at least be a whole lot more pedantic about correcting the small things, knowing they can conspire against us just as easily as they did against the crew of UPS 1354. I know one thing: I’m going to be a lot more careful about how I manage my fatigue before flying. I’m also going to be a heap more attentive to the fine detail in weather reports, a whole lot more careful with data entry, and a whole lot more eager to go around at the minimums if it’s all looking ugly.
