"Nice weather" and other threats.
It may seem odd to classify "good weather" as a threat to air safety, but many accidents and incidents occur when the crew have been lulled into a false sense of security by "it's a nice day" (or worse, "a nice night"). The IFR flight plan ends in an instrument approach, but "going visual" early is easier for both ATC and pilots, and both crew members become focussed on the landing to the exclusion of other factors.
Examples of this include making approaches to, or actually landing at, the wrong airport. These events often happen when a similarly oriented runway is located fairly close to the proper destination, and they occur with embarrassing frequency. Two highly publicised examples, within a few weeks of each other in late 2013 and early 2014, were the landing of a B747 freighter in Wichita and of a Southwest Airlines B737 intended for Branson, Missouri. The latter actually landed on a dangerously short runway at Hollister, 7 miles away. The crew had abandoned their intended IFR approach even though "The captain later reported that this was his first flight into Branson Airport while the first officer had only flown once into Branson during daylight."
More recently, in April 2016 an Embraer 170 of Aeromexico landed on a 1500m runway at Monterrey North, only half the anticipated 3000m length of the correct, parallel runway at Monterrey International, 9 miles away. Three months later, in July 2016, a Delta Air Lines A320 landed shortly after dusk at Ellsworth Air Force Base instead of Rapid City, South Dakota - as had a Northwest Airlines A319 some 12 years previously, in June 2004.
These incidents generally have a high survivability rate, but are still indicative of a major crew failure. They are potentially very hazardous as well as being extremely commercially damaging for the operator concerned. By definition, the aircraft has landed on a runway for which absolutely no planning or preparation has been made, either in the cockpit or on the ground. As the actual runway dimensions and surface conditions are unknown to the crew, there is a major risk of a runway excursion.
The normal means of assuring separation from other air traffic (or ground vehicles) are useless. Generally in such events the runway actually used will not be at a major airport, and the airport will not be prepared to handle the consequences of the landing. For example, a crew realising late that the runway is far shorter than they had expected may apply such heavy braking as to generate a brake fire, on a strip where firefighting capability is either inadequate for the size of aircraft, unready for action, or nonexistent.
In other words, all aspects of the safety infrastructure have been compromised, and the safe outcome of such an event is due purely to good luck. Had the SWA B737 touched down at Hollister only 2 seconds further down the runway, it would have exited the paved surface into a gully, with no rescue or firefighting available. Typically, even if no damage or injuries are sustained, the event will be career-limiting for the crew members concerned.
In almost all such cases, however, as one such incident investigation report noted, "had the approach been flown in IMC, there is little doubt that the operating crew would have flown the ILS to Decision Altitude and landed, without incident, at [their intended destination]."
Other "inverted" threats - factors that would normally be seen as reducing rather than increasing risk - include familiarity with the airport. Examples of these in recent events include an A340 near loss of control during Cat 3 approach (landing at home base).
A basic characteristic of PicMA procedures is that if the pilot in charge has to delegate the flying during the approach, this will in itself tend to generate a greater degree of caution in both pilots, leaving them (and especially the Captain) better placed to deal with adverse changes. Having someone else do a job that you know you are personally both good at and are used to being responsible for is a pretty good recipe for increased vigilance and vocalisation of detected errors - as many spouses have found when driving, for example!
Drama or degradation?
Dramatic "problem" events are sometimes used as examples for CRM training, with an event such as an uncontained engine failure being used to illustrate the benefits of what is being advocated. In early years, a frequent example was the UAL DC10 accident at Sioux City: today perhaps the Qantas A380 Singapore event is used. What these incidents, and those like the Hudson river A320 ditching, actually show is that pilots can deal with the most improbable and unexpected events that failures by others in the system can throw at them. (They also illustrate the naivety of much argument that automation will make pilots redundant on airliners in the near future.)
But most "crew caused" accidents and incidents do not result from a dramatic single event like those quoted. Instead, there was a slippery slope where relatively minor threats and errors accumulated, eventually overwhelming the pilots' ability to cope.
NASA Tech Memo 78482: the start of CRM?
In 1979 a series of NASA full-mission simulations of routine airline operations was carried out by a team led by Pat Ruffell-Smith, John Lauber and others. It was followed by a 3-day conference at Ames Moffett Field which in many ways heralded the formal beginning of "Crew Resource Management". These trials started each flight with no unusual problems, added realistic ones as the trip progressed, and actually measured the way in which circumstances could overload crew members.
The trials showed that when the pilot-in-charge ("P1") continued to act as PF, he could easily overload the other crew members without realising it, as well as showing reduced performance in handling the aircraft. Drawing on these lessons, good CRM training now advises the Captain to delegate aircraft handling to the co-pilot, and to use as much of the autoflight system as possible, while the crew are trying to resolve problems, in order to reap the benefits in crew performance and decision-making.
The NASA report noted that "When P1s were aware of the situation and delegated their P2s to controlling the aircraft in its flight path, immediate benefits were evident. Great differences were also evident in the way in which decisions were arrived at by P1s ... because then he was able to give full attention to assimilating the information from documents, ATC, and other crew members and to use these data to make unhurried decisions."
One subject pilot also remembers a researcher saying, after observing one such mission in which the Captain retained the PF role as problems developed, that the First Officer's gathering of information, evaluation of alternatives, and suggestions for options and priorities to the Captain meant that, in effect, the First Officer had informally assumed command of the aircraft. The Captain had ended up merely "rubber-stamping" the First Officer's suggested strategy.
While the basic lessons of these experiments are well known and often incorporated into CRM courses, two questions are rarely covered:
- How do you determine when the situation has become serious enough to warrant this delegation?
- When should the basic PF/PM allocation of duties be abandoned because it does not give "the pilot" enough capacity to see and resolve the bigger problems?
At the same time, of course, CRM training emphasises the need to stick to SOPs. So there is a conflict: the SOPs say that handling should be done by the "Pilot" (in charge), but good CRM indicates that it should be delegated to the co-pilot as a lower-priority task.
In fact, many real-world accident and incident reports show with hindsight that an incremental breakdown of the pilot's ability to "see the overall picture" occurred. Minor events pile up pressure and frequently overwhelm the pilot's ability to realise that a "break point" has been reached, where transferring the handling workload (and departing from the basic SOP) has really become essential to deal with the bigger problems. Accident reports frequently note that the crew's, and particularly the Captain's, situational awareness had become seriously degraded.
While the need to deal with this is recognised in studies such as these links on "Threat and Error Management", effective protection requires that the delegation is achieved BEFORE the degradation sequence starts and the inevitable deterioration of judgment under pressure occurs.
This is one of the fundamental aspects of the PicMA concept: as the CFIT Training Aid indicates, it is a precise and unambiguous way to ensure proper management of crew workload. If the approach is started with the co-pilot as PF, the pilot in charge is better placed to deal immediately with any problems and threats that materialise during the approach.
"Where did they go wrong?" on this website contains a more detailed analysis of this problem based on a fictional accident scenario.