A human perspective.
If you don't accept the validity of comparing two human pilots to one mechanical system, there's an alternative "humanistic" way of approaching the issue. “Failure to adhere to SOPs” is the most common element in crew error accident causes. To start with, it usually results from either
- inadequate reception of available information, or
- misinterpretation of information which has been received.
The psychological factors which lead pilots into either of these conditions in situations of stress and high workload are well known. If either results in an inappropriate response by the pilot flying, it is directly converted into action which affects the aircraft situation and/or flight path. Less commonly, though unfortunately far from unknown, pilots may realise that an SOP exists but knowingly ignore it. It is extremely rare, though, for a pilot to say in advance "the SOP is (X), but we will not do (X), we will do (Y)".
Once the initial mistake is made, the issue becomes one of error recognition and recovery, and the longer this takes, the less likely it is to succeed. Recognition and recovery are critical to achieving error tolerance, and anything which inhibits them is a major safety hazard. Almost all SOPs therefore also incorporate statements requiring the pilot monitoring to intervene to correct a hazardous action by the pilot flying. Failure to do so is itself another non-adherence to the SOP - a monitoring or challenging failure.
The fundamental characteristic of the basic PF/PM SOP is that it allocates the initiative for all action to the pilot in charge, normally the Captain. The task of error recognition and correction - monitoring - is allocated to a subordinate, the First Officer. Once the First Officer has recognised the risk, the degree to which he or she is able to achieve the necessary corrections is driven by his or her ability to overcome the difference in authority between the parties.
This is the “Cross-cockpit Authority Gradient” (CCAG), a term first developed by Dr Elwyn Edwards of Aston University in the early 1970s. If the PM can't overcome this gradient, a monitoring and challenging failure occurs, resulting in an unwanted event - in the extreme case, a catastrophic accident such as many of those listed.
Danger recognition.
It's very easy to write SOPs, like the Indian Government one, that require the PM (generally the First Officer) to intervene forcibly if the safety of the flight is threatened by the PF's actions or inaction. But doing so does not overcome the fundamental problem: deciding whether there really is a "threat" that requires such drastic action is a judgement across many interacting parameters. If it were simply a matter of a single parameter not being adhered to, life would be simple, but it isn't. Making this judgement is the infamous "co-pilot's dilemma".
A January 2014 FSF AeroSafety World article noted from LOSA data that 4% of flights involved an unstable approach, but only 3% of those unstable approaches resulted in a go-around; three times as many (10%) resulted in an unsatisfactory landing, either short, long or off-centreline. The vast majority of crews decided to continue with the landing, even though they knew they were not within the specified parameters.
Since these numbers are based on airlines which take CRM, LOSA and similar safety concerns seriously, it's not unreasonable to surmise that the situation is LESS satisfactory in many other operations. IATA's 2013 safety report showed similar trends, with (unless the data has been incorrectly analysed in the graph) a worrying increase in the proportion of unstable approaches which are continued to landing.
In the real world, it's always a judgement call whether a situation is hazardous. Stabilised approach criteria are the result of "big picture" judgements by management pilots and other specialists, looking at operations overall. But in practice, whether or not they are strictly applied is a separate judgement subsequently made by the pilots involved in the unique circumstances of a specific approach.
A 2006 paper on the same issue asked whether this was due to operational pressure such as wanting to save time and fuel, poor airmanship, or foolish bravado - or a combination of all three. Whatever the answer, all of these amount to a judgement by the PF that his or her actions would adequately resolve the situation in the time available, even though the strict criteria were not met.
Scaling that LOSA data to 100,000 approaches, there would be 4,000 unstable approaches, of which 400 would end in unsafe landings, 120 in go-arounds and 3,480 in uneventful arrivals. So MOST of the time, the PF's judgement that the probability of an accident wasn't high enough to warrant a go-around was "correct", in the sense that an uneventful landing followed.
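For clarity, that scaling can be checked directly. The minimal Python sketch below simply applies the LOSA percentages quoted above (4% unstable, 3% go-arounds, 10% unsatisfactory landings); the variable names are illustrative, not from the article:

```python
# Scale the LOSA percentages quoted above to a notional 100,000 approaches.
TOTAL_APPROACHES = 100_000

UNSTABLE_RATE = 0.04     # 4% of approaches flown are unstable
GO_AROUND_RATE = 0.03    # 3% of unstable approaches end in a go-around
BAD_LANDING_RATE = 0.10  # 10% of unstable approaches end in an unsatisfactory landing

unstable = TOTAL_APPROACHES * UNSTABLE_RATE        # 4,000
go_arounds = unstable * GO_AROUND_RATE             # 120
bad_landings = unstable * BAD_LANDING_RATE         # 400
uneventful = unstable - go_arounds - bad_landings  # 3,480

print(f"Unstable approaches: {unstable:,.0f}")
print(f"Go-arounds:          {go_arounds:,.0f}")
print(f"Unsafe landings:     {bad_landings:,.0f}")
print(f"Uneventful arrivals: {uneventful:,.0f}")
```

The striking point is the last figure: on these numbers, continuing an unstable approach "works" nearly nine times out of ten, which is exactly what makes the PF's judgement call so seductive.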
But was the judgement of both pilots on those 4% of approaches the same? Or did they have different views? Was one silently sweating while the other congratulated himself? Or was the PF thinking "I'm never doing that again"? By definition, the situation is ambiguous, and in reaching their individual judgements, each pilot applied many factors, including their experience on the type and their experience of the environment (e.g. weather, terrain, facilities, airport).
Added to that would be factors like fatigue, commercial pressure, and personality. In some accidents and incidents where monitoring failures occurred, CVR evidence makes it likely that there was indeed a difference in judgement. In those events, the monitoring pilot correctly saw the danger, but failed to impose a change of course on the PF.