
Tuesday, March 1, 2022

AVIATION ACCIDENT ACCORDING TO A SINGLE CAUSE - The Main Cause

 



 AVIATION ACCIDENT INVESTIGATION 

Sources:

School of Public and Environmental Affairs, Indiana University, Bloomington, IN, USA

Clinton V. Oster Jr.

 

Mason School of Business, College of William and Mary, Williamsburg, VA, USA

John S. Strong

 

School of Public and Environmental Affairs, Indiana University, 1315 E. Tenth St., Bloomington, IN, USA

C. Kurt Zorn


 

A LEGAL APPROACH

Classify each aviation accident according to a SINGLE CAUSE after dissecting the chain of failure events.

 

For decades, the term LOSS OF CONTROL (in flight) has been misused to describe the real causes of many aviation accidents in Final Investigation Reports. The term overstates the Pilot's burden and legally prejudices the flight deck crew.

 

That is because categorizing every accident into a SINGLE CAUSE (the first event of the chain) is now an absolute requirement, judicially. Investigators have a responsibility to find the "atomic particle" that led to the LOSS OF CONTROL, not merely to write a generalized "accusation" without determining the first cause in the chain of events that culminated in the accident.


Widespread "prosecution" of pilots by using the term LOSS OF CONTROL, without determining the first event that started the chain resulting in the loss of control, is illegal. Accidents concluded as LOSS OF CONTROL could have had another main cause, such as a maintenance failure, an engineering design flaw, a mechanical failure, or a software routine error.


If the Final Aviation Accident Investigation Report does NOT specify the first event that caused the outcome, the Report must NOT summarize the cause as LOSS OF CONTROL.
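The single-cause rule argued for above can be sketched in code: given an ordered chain of failure events, the classification is always the category of the first event, never the generic final outcome. This is a minimal illustration; the event names and categories are hypothetical, not drawn from any official taxonomy.

```python
# Minimal sketch of the single-cause rule: the cause assigned to an
# accident is the category of the FIRST event in the chain of failures,
# not the generic final outcome ("loss of control").
# All event names and categories below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    category: str  # e.g. "mechanical failure", "crew error"

def single_cause(chain: list[Event]) -> str:
    """Return the category of the first event in the failure chain.

    A report that cannot identify this initiating event should not
    conclude "loss of control" as the cause.
    """
    if not chain:
        raise ValueError("cannot assign a cause to an empty event chain")
    return chain[0].category

chain = [
    Event("engine failure on takeoff", "mechanical failure"),
    Event("improper crew response", "crew error"),
    Event("loss of control in flight", "loss of control"),
]
print(single_cause(chain))  # -> mechanical failure, not "loss of control"
```

Note that the same chain classified by the "last point at which the accident could be prevented" approach (discussed later in this post) would instead return "crew error", which is exactly the disagreement at issue.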

 

“Protected Disclosure”

Protected Disclosure means any good faith communication that discloses

• suspected improper governmental activity (IGA), or

• any significant threat to public/employee health or safety

 


 


 

Purpose of the Written Report

• To evidence a timely and impartial institutional response

• To accurately document the investigation conducted

• To provide the decision-maker with the facts needed to decide the matter

• To ensure a successful investigation

• To best defend the investigation

 

The TOP-SET headings:

T ime, Sequence and History

O rganisation / Control / Responsibility

P eople and their involvement

S imilar events

E nvironment and its effects

T echnology, equipment & processes

 

Loss of life should not be summarized by a mere wide-ranging technical aviation term like LOSS OF CONTROL. That is a vague statement.

 

Even a vague suggestion of criminal conduct may be defamatory per se.

 

Vagueness refers to a lack of clarity in meaning. For example, “Go down the road a ways and then turn right” is vague because “a ways” does not precisely explain how far one should go down the road.

 

Ambiguity is when there is more than one clear meaning, and it is difficult to choose which meaning was intended. For example, “Paul went to the bank” is ambiguous because bank could mean a river bank or a financial institution. “He was cut” could mean he was cut from the team or he was cut by a sharp object.

 

Another example: “The stool is in the garden” is ambiguous because stool could mean poop or chair.

 


 

QUESTIONS THE REPORT MUST ANSWER

- Who was involved in the accident?

- What actually happened?

- When did it happen?

- Where did it happen? And

- Why did the first failure event take place?

 

 


 Accidents are usually the culmination of a sequence of events, mistakes, and failures.

 

When planes crash, we want to know what happened. The good news is that there’s technology available today that would give us the answers. The bad news is that the Federal Aviation Administration (FAA) has not mandated that aircraft operators install it, citing privacy, security, cost, and other concerns.

 

Commercial airliners are required to have only flight data recorders and cockpit voice recorders, commonly called “black boxes”, but the NTSB has long called for cockpit image recorders, as well. Such video would have been extremely helpful in determining flight crew actions in recent crashes in Texas, Indonesia, and Ethiopia.

 

Part 121 and Part 135 regulations

Airline passenger service in aircraft with more than 30 seats has always been provided under Part 121 regulations. Traditionally, scheduled commuter service with aircraft with fewer than 30 seats and on-demand air taxi service has been provided under Part 135 regulations.

 

The goal of the analysis

 

Not all accidents are investigated by organizations with the resources or technical expertise of the National Transportation Safety Board in the United States, the Air Accidents Investigation Branch in the United Kingdom, or the Bureau of Enquiry and Analysis for Civil Aviation Safety in France.


 

For example, consider an engine failure during takeoff where the crew fails to take the actions needed to land the plane safely, resulting in an accident.

 

If more information is available for accidents in some sectors of aviation than in others, or in some countries than in others, then there may be a tendency to find more errors in the accidents with more information available, which could give those accidents more weight in aggregate statistics.

 

The analysis of the example above should consider both the engine failure and the improper crew response as causes.

 

Approach to classify each accident according to a SINGLE CAUSE

 

An advantage of this simplification is that it is possible to compare a much broader range of accidents.

 

There are two basic approaches to assigning a cause or causes to an accident.

 

FIRST - Why did the engine fail?

SINGLE: Engine failure would be identified as the cause of the accident.

Absent the factor that initiated the chain of events resulting in the accident, the accident could have been avoided.

 

SECOND - Why didn't the crew respond properly?

SINGLE: The cause is the factor that initiated the sequence of events that culminated in the accident.

I've called it "the atomic particle" influencing crew error.

That is an “unforced” pilot error rather than a failure to respond properly to an emergency.

 

One approach would be to assign as the cause the last point at which the accident could have been prevented. In the example above, pilot error would be identified as the cause of the accident.

 

An example of the approach of assigning multiple causes to an accident is the Human Factors Analysis and Classification System (HFACS) developed originally for the Department of Defense and more recently applied to civilian aviation accidents (Shappell & Wiegmann, 2000).

 


HFACS has focused on aircrew behavior but could also be applied to human factors in maintenance, air traffic management, cabin crew, and ground crew.

 

In a re-examination of the link between an airline’s profitability and its safety record, Madsen (2011, p. 3) suggests that the “strikingly inconsistent results” in the existing empirical literature are due to an inflection point in the relationship between profitability and safety. His analysis “demonstrates that safety fluctuates with profitability relative to aspirations, such that accidents and incidents are most likely to be experienced by organizations performing near their profitability targets” (Madsen, 2011, p. 23).

 

If an airline is slightly below its profitability target, it has an incentive to increase its risk of accidents by spending less on safety. Or, if it is slightly above its target, a reduction in spending on safety can have a significant effect on its ability to remain above the profitability target. Conversely, when an airline is substantially above or below its profitability target, the incentive to reduce spending on safety is considerably less.

 

Investigating the link between maintenance and aviation safety

Marais and Robichaud (2012) look at the effect that maintenance has on aviation passenger risk. They found a small but significant impact of improper or inadequate maintenance on accident risk.

 

The effect that aging aircraft may have on accidents and overall safety levels

 

In an investigation of the effect the adoption of strict product liability standards has had on the general aviation industry, it was found that liability insurance costs for new planes increased significantly (Nelson & Drews, 2008).

 

HFACS builds on the concept of latent and active failures and considers four levels of failure:

1) unsafe acts

2) preconditions for unsafe acts

3) unsafe supervision

4) organizational influences

 

Individual error categories within each causal category (Wiegmann et al., 2005)

 

In one study of human error in commercial aviation accidents, the results were reported aggregated into 18 causal categories (Shappell et al., 2004). Not all accidents were included in the analysis, only those where there was some error by the aircrew. The results were reported as the number of accidents in the data set that were associated with one or more of the error categories that make up each causal category.
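The aggregation rule described above (an accident counts toward a causal category if one or more of its error categories belong to that category) can be sketched as a simple tally. The group and error-category names below are illustrative placeholders, not the actual 18 HFACS categories from the study.

```python
# Sketch of the HFACS-style aggregation described above: each accident
# carries a set of error-category tags; it counts once toward a causal
# category if at least one of its tags belongs to that category.
# Group and tag names here are illustrative assumptions, not the real
# HFACS taxonomy.
from collections import defaultdict

CAUSAL_GROUPS = {
    "unsafe acts": {"skill-based error", "decision error", "violation"},
    "preconditions": {"adverse mental state", "crew resource mismanagement"},
}

accidents = [
    {"skill-based error", "adverse mental state"},
    {"decision error"},
    {"violation", "crew resource mismanagement"},
]

counts = defaultdict(int)
for tags in accidents:
    for group, members in CAUSAL_GROUPS.items():
        if tags & members:      # one or more matching error categories
            counts[group] += 1  # the accident counts once per group

print(dict(counts))  # -> {'unsafe acts': 3, 'preconditions': 2}
```

Because one accident can match several causal categories, the per-category counts sum to more than the number of accidents, which is why such results cannot be read as mutually exclusive shares.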

 

Part 135 air carriers operate smaller aircraft in both scheduled (often referred to as commuter) and nonscheduled (often referred to as on-demand) service typically into and out of smaller airports than those served by Part 121 air carriers.

 

Within the Part 135 industry, the distributions of accident causes for scheduled and nonscheduled services are very similar, so they are not presented separately. However, the distributions of causes for Part 135 accidents in Alaska are noticeably different from those for service outside of Alaska.

 

For Part 135 service in Alaska, pilot error is even more prominent, accounting for 83 percent of both accidents and fatalities. The reasons for these differences are also not understood.

 


The Swiss cheese theory

The “Swiss cheese effect,” also known as the “cumulative act effect,” comes from the work of James Reason, a British psychologist who analyzed systemic failure in terms of four levels of human error:

1. Unsafe supervision

2. Preconditions for unsafe acts

3. The unsafe acts themselves

4. Organizational influences.

 

 

The Swiss cheese model of accident causation

The Swiss cheese model of accident causation illustrates that, although many layers of defense lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur.

 

The Swiss cheese model of safety

The theory is that the multiple layers of cheese represent a process safety system. If several slices of cheese are stacked on top of each other, the holes would not align; the stack would shield a beam of light, preventing a hazard from passing through the layers (and resulting in catastrophe).
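The Swiss cheese idea can be made quantitative with a small Monte Carlo sketch: an accident occurs on a trial only when the "hole" in every defensive layer lines up at once. The per-layer failure probabilities below are arbitrary assumptions chosen for illustration.

```python
# Monte Carlo sketch of the Swiss cheese model: a hazard becomes an
# accident only when EVERY defensive layer fails on the same trial
# (all the holes align). Layer failure probabilities are illustrative.
import random

def accident_probability(layer_hole_probs, trials=100_000, seed=42):
    rng = random.Random(seed)
    accidents = 0
    for _ in range(trials):
        # The hazard passes only if every layer fails independently.
        if all(rng.random() < p for p in layer_hole_probs):
            accidents += 1
    return accidents / trials

# Four layers, per Reason: organizational influences, unsafe supervision,
# preconditions for unsafe acts, and the unsafe acts themselves.
layers = [0.1, 0.1, 0.1, 0.1]
est = accident_probability(layers)
print(f"estimated accident probability: {est:.5f}")
```

With independent 10% holes in each of four layers, the analytic accident probability is 0.1**4 = 0.0001, which the simulation approximates; removing any single layer multiplies the risk tenfold, which is the model's core argument for defense in depth.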

 

Human error theory

In human error theory, a violation occurs when an individual deliberately and knowingly chooses not to follow a guideline or rule. Errors, by contrast, are cognitive failures, due either to actions not going as planned (slips/lapses) or to plans being inadequate to achieve the objective (mistakes).




Wednesday, August 4, 2010

A MUST for All Lovers of Aviation Literature - The Limits Of Expertise book

Ashgate Studies in Human Factors for Flight Operations

Key Dismukes NASA Ames Research Center
Ben Berman and Loukia Loukopoulos San Jose State University/NASA Ames Research Center
CRM/HF Conference
Denver, Colorado

Learning from the book

R. Key Dismukes
Benjamin A. Berman
Loukia D. Loukopoulos


On February 9, 1998 at 09:54 central standard time, American Airlines flight 1340, a Boeing 727, crashed short of the threshold of runway 14R at O'Hare International Airport, Chicago, Illinois, after deviating below the glideslope while conducting an autopilot-coupled instrument landing system (ILS) approach. The airplane struck the ground hard, shearing off its landing gear and damaging the fuselage and wings. It bounced onto the runway surface, then slid off the right side of the runway and came to a stop in the grass. The airplane was destroyed in the accident. Of the 116 passengers and six crew members aboard, 22 passengers and one flight attendant received minor injuries.

The weather at O'Hare at the time of the accident was ½-mile visibility in freezing fog and a 100-foot overcast cloud ceiling; both temperature and dewpoint were 28 degrees Fahrenheit. Winds were calm. The runway visual range (RVR) for runway 14R was variable between 1,400 and 1,800 feet.
Both pilots were highly experienced, but the captain had qualified as a Boeing 727 pilot-in-command only within the past year and had accumulated 424 hours in that position. The first officer had been flying the 727 for seven years and had 3,731 hours of second-in-command experience in that aircraft type. The flight engineer, too, was well experienced in his role, with five years and 1,550 hours as a 727 flight engineer at the airline.
After experiencing a gate hold because of air traffic in Chicago, flight 1340 departed from Kansas City, Missouri nearly one hour behind schedule. The flight was routine through the en route portion and descent into the Chicago area. The first officer was the flying pilot and the captain was performing the monitoring pilot duties. The weather in Chicago continued to be poor as the flight arrived in the area, with visibility below the standard (Category I) ILS minimum of 1,800 feet RVR. Consequently, the crew chose to perform a Category II ILS approach, which requires special ground facilities, cockpit equipment, and crew training in order to use lower weather minimums for landing (1,200 feet RVR). In this case, a Category II approach required the crew to operate the airplane under autopilot control at least until they could see the environment (the runway surface, lighting, and approach light systems).
The flight proceeded normally as the airplane was vectored onto the final approach course. Analysis of radar and FDR data by the NTSB revealed that the flight then proceeded along the centerlines of the localizer and glideslope courses until reaching approximately 200 feet above ground, ½ mile from the runway. At that point, the autopilot caused the airplane to deviate increasingly above and below the proper glidepath to the runway. Comparing flight simulations with the actual descent path of flight 1340, the NTSB found that these deviations were consistent with an excessively sensitive response by the autopilot to the glideslope signal.
The autopilot-induced oscillations caused the airplane to enter a steep descent when it was very close to the ground. In the last seconds before impact the crew noticed that the airplane was descending toward the approach lights and attempted to recover, but the airplane struck the ground short of the runway. Concluding that the crew should have been able to prevent this undershoot of the runway, the NTSB determined that the probable cause of the accident was "the failure of the flight crew to maintain a proper pitch attitude for a successful landing or go-around".
Contributing to the cause of the accident were "the divergent pitch oscillations of the airplane, which occurred during the final approach and were the result of an improper autopilot desensitization rate" (NTSB, 2001, p. 26).

Beginning at 0923:52, while the airplane was at cruise altitude and entering the Chicago area, the captain conducted a thorough briefing about the ILS approach to runway 14R and the Category II procedures that the weather conditions necessitated. According to the company's Category II guidelines, this type of approach must be flown by the first officer using the autopilot. When the airplane nears the decision height, 110 feet above runway elevation in this case, the captain attempts to acquire visual contact with the runway environment. If the captain is able to identify the required visual cues prior to decision height, he or she announces: "I've got it", displaces the first officer's hand from the throttles, and lands the airplane. If the captain does not make this call by the time the airplane reaches decision height, the first officer disengages the autopilot and executes a missed approach. Consistent with the company's Category II procedures and 727 operating limitations, the captain briefed the crew that after taking over the flying pilot duties he planned to use the autopilot to continue the descent until slightly below decision height. He would disconnect the autopilot, in accordance with the company-established minimum altitude for autopilot use under the existing conditions, prior to reaching 80 feet above the ground.

At 0936:51 the flight crew contacted the arrival controller, who advised them to expect the ILS to runway 14R and that the RVR was 1,600 feet. This RVR observation confirmed to the crew that the visibility was too low for Category I approaches but adequate for Category II. At 0948:32, when flight 1340 was 18 miles from the airport, the controller cleared the flight for the ILS approach. With the autopilot engaged, the flight intercepted and tracked both the localizer and glideslope courses. The flight had been operating in clear skies above a solid layer of clouds that obscured the ground. At this time the crew noted that some of Chicago's tall buildings were visible above the clouds, suggesting that the tops of the obscuration were low. As the descent continued through 500 feet above the ground (less than one minute from the planned touchdown), the airplane entered the clouds and the first officer removed his sunglasses. The captain, who was monitoring the first officer's execution of the approach and the autopilot's control of flight parameters at this time, continued to wear his sunglasses. The crew later reported that the autopilot was tracking the localizer and glideslope courses perfectly as the descent continued through 500 feet. FDR data indicated that the approach was normal until the airplane descended below approximately 200 feet, 9 seconds prior to impact.

According to FDR and radar data the airplane began to deviate about ½ dot (one quarter scale) below the glideslope at approximately 170 feet above runway elevation. The autopilot then increased the airplane's pitch attitude by more than 3 degrees, causing the airplane to fly up to and then above the glideslope, following which the autopilot began to decrease the airplane's pitch attitude in response to the fly-down indications of the glideslope signal. At about 5 seconds before impact the airplane was ½ dot above glideslope, 136 feet above the ground, and pitching down through 2 degrees below the horizon. In contrast, the normal pitch attitude for a steady descent on an ILS glideslope would have been slightly above the horizon.
The CVR did not record any comments from the crewmembers on these excursions below and above the glideslope, and it is not known whether they noticed the excursions initially. Company procedures for the Category II ILS approach required the captain to monitor outside the cockpit for the first visual indications of the runway environment while the aircraft approached decision height, so there is a good chance that he would not notice small transient excursions from the glideslope during this period. As flying pilot, the first officer was responsible for monitoring the instruments and making callouts of altitudes, flight parameters, and course deviations. The first officer in fact made the required callout at 500 feet for altitude, sink rate and airspeed, and he continued to call altitude at 100-foot intervals as required.
We do not know whether the first officer noticed the deviations from the glideslope that occurred after he made the 500-foot callout, or whether he would have found them remarkable without foreknowledge of what was to happen in the seconds that followed. After the accident he did not recall these initial deviations that remained within ½ dot. Review of the airline's manuals and procedures suggests that the company had not established specific limits for glideslope deviation that would require either a verbal challenge from the pilots or a missed approach. Company pilots interviewed after the accident verified that there were no specific limits for continuing the approach or calling out deviations; however, a company check airman told investigators that he had been trained to execute a missed approach if a glideslope deviation of greater than ½ dot occurred. A company line pilot who was interviewed stated that a ½-dot glideslope deviation should result in a verbal challenge from monitoring pilots. But the company's Category II Operations Study Guide from the B727 Flight Training Manual suggested a greater deviation limit: "Normally a landing can be made if the aircraft is displaced ... no more than one dot from the center of the glideslope" (reproduced in NTSB, 1995c). Thus it appears that the initial glideslope excursions of flight 1340 bordered on values that warranted action; however, it is not clear what the company expected of pilots in this situation or what significance pilots would attach to deviations of this magnitude.
The pitch excursions from 3 degrees above the horizon to more than 2 degrees below the horizon during this period also provided a cue, reflected on the pilots' attitude indicators, that something might be amiss; however, this airline, like most others, did not provide pilots with guidance to use pitch excursions of this magnitude as a criterion for discontinuing an autopilot-coupled approach. Because the first officer was probably actively monitoring the autopilot's execution of the approach, he may have noticed the glideslope course and pitch deviations that began below 200 feet but found them unremarkable, in which case he would have had no reason to mention them at the time or to recall them later.

At 0953:49 (5 seconds before impact) the captain stated: "I got it", indicating that he had acquired visual contact with the runway environment and, per procedure, was taking over the role of the flying pilot (he later recalled seeing the sequence flashers of the approach light system on the ground at this point). The first officer confirmed relinquishing flying responsibility to the captain by stating: "You got it". The captain continued the descent with the autopilot engaged, while he focused on the view through his windshield. According to company procedures the first officer (now performing the monitoring pilot role) was required to continue monitoring the autopilot and the cockpit instruments for any system malfunctions or flightpath deviations.
When the captain took control of the airplane it was descending through approximately 25 feet above decision height (135 feet above ground level), positioned ½ dot above the glideslope centerline, and pitching down to 2 degrees below the horizon as the autopilot attempted to bring the airplane back to the center of the glidepath. During the next 2 seconds, the airplane continued pitching down to 6 degrees below the horizon and began to sink rapidly below the glideslope. Investigators later determined that the autopilot commanded this large pitch-down because it was oversensitive to glideslope signals and was overcorrecting for the small oscillations it had created moments before. At the time of the accident this airline, and others, had not implemented a service bulletin that the aircraft manufacturer previously issued that would have desensitized the autopilot's response to glideslope deviations.
It was around this time that the first officer recalled feeling "a pitch-down". He told investigators that he glanced up from the radar altimeter, which he had been focusing on in preparation for calling out the decision height to the captain, and he saw the approach lights through the windshield and the "nose pointed short of the runway". The CVR did not record any verbal utterance by the first officer at this time. The flight engineer recalled that the airplane "nosed over" at about 150 feet. He saw the "windshield full of approach lights". He recalled that about 1 second elapsed after seeing the lights before he could tell that the airplane was in an incorrect attitude and position. At 0953:51, the CVR recorded the flight engineer stating: "Oooh, nose uh". In the captain's post-accident interviews he recalled that "in a heartbeat", his view of the approach lights went from "normal" to "all around us".
The flight lasted only 2 seconds longer. FDR and CVR data indicate that at 0953:52 the autopilot disengaged. The captain did not recall disengaging the autopilot, but he did recall positioning his finger next to the disengage button earlier; thus it is possible that he disengaged the autopilot in response to the aircraft's pitch-down motion, which would have been appropriate. At this time the first officer called out: "100 feet", the airplane's ground proximity warning system annunciated: "Sink rate", and the flight engineer said: "Nose up, nose up". At approximately the same time, the captain added a substantial amount of thrust (he later described his throttle inputs as "cobb[ing] the power", a "healthy fist worth") and pulled back on the elevator control. The airplane responded to the captain's elevator and power inputs, and its pitch attitude increased to 5 degrees above the horizon. However, the steeply descending flightpath could not be arrested quickly enough. At 0953:54, the airplane struck the ground 314 feet short of the runway threshold at a sink rate of 1,260 feet per minute.
The NTSB concluded that "... the flight crew did not react in a proper and timely manner to excessive pitch deviations and descent rates by either initiating a go-around or adjusting the pitch attitude and thrust to ensure a successful landing ..." (NTSB, 2001, p. 24). At issue here is how quickly airline pilots might be expected to react reliably and appropriately to the indications available to the crew of flight 1340. At 0953:51 - 2 seconds after the captain took control and 3 seconds before impact - the airplane was approximately on the center of the glideslope; however, the abnormal pitch attitude and the rapid rate of nose-down attitude change revealed by the outside visual scene alerted the captain to the danger. His responses to correct the situation (adding power and pulling back the yoke) occurred about 1 second later, which is consistent with the range of normal response times for humans to initiate a complex response to an unexpected stimulus (see, for example, Summala, 2000). (In general, humans can respond much more quickly to an expected stimulus than to an unexpected one, and they can respond more quickly to a simple stimulus than to a complex stimulus that requires interpretation; for a review of the reaction time literature, see Wickens and Hollands, 2000, pp. 340-9). Thus, the captain's reactions after recognizing the problem were what would be expected of a skilled pilot.
Is it reasonable to expect airline pilots to reliably recognize an abnormal pitch-down attitude more quickly than this captain did? No data exist to address this question directly. Only 2 seconds elapsed between the captain assuming the controls, at which time the flightpath seemed to be within acceptable limits, and the time at which the crew recognized that the pitch attitude had become dangerous. During this brief period the captain was shifting his attention from the cockpit instruments to the outside world to acquire visual reference to the runway. Generally, appreciable time is required to make this transition to using outside visual references to control the airplane's flightpath and attitude, and this period of adjustment increases if the available visual cues are incomplete or ambiguous because of weather, as in this case. Further, the outside visual cues first noticed by the captain were the approach light system's sequence flashers, which provide no direct information about the aircraft's attitude or descent path. In fact, there is a visual illusion that is known to occur in which pilots tend to descend into approach lights because of the absence of visual cues to the horizon - in effect the brain incorrectly treats the approach lights as the horizon line (this was dubbed the "black-hole approach" by Gillingham and Previc, 1996).
We have no way of knowing how much time elapsed before better visual cues emerged from the fog to allow the captain to judge attitude and flightpath. The NTSB noted that the captain was at increased risk of visual illusions from reduced visibility because he did not remove his sunglasses when the airplane entered the clouds; however, it cannot be determined whether this appreciably slowed the captain's recognition of the airplane's flightpath deviation.
Considering the inherent limitations of human reaction time to unexpected events that require recognition, analysis, and response selection, the rapidity of
the large pitch-down at the moment the captain was transitioning to outside visual references, and the initial incompleteness of visual information available from the runway environment, it is not at all surprising that the captain did not respond quickly enough to prevent the accident. Although pilots might sometimes respond quickly enough to such a sudden deviation from flightpath, it is unrealistic to assume that this would happen reliably.
CVR, FDR, and post-accident flight crew interview data indicate that the first officer did not challenge the airplane's steeply descending flightpath after the captain took control. The airline's procedures required the first officer to continue monitoring the instruments after transfer of control and to call out decision height (which he did) as well as any significant deviation from glidepath (interpreted by some company training personnel to be greater than ½-dot deviation from the glideslope centerline). However, the final glideslope deviation did not reach ½ dot below centerline until about two seconds before impact, at which time the captain was already attempting to recover. Therefore, glideslope indications would not have enabled the first officer to warn the captain quickly enough to hasten his response.

During this period the first officer would have been monitoring several instruments on his panel, but some of the information from those instruments was misleading or incomplete. Sink rate, in principle, might have provided the first officer with an indication of the problem sooner than the glideslope deviation information; however, this aircraft was equipped with a non-instantaneous vertical speed indicator that lagged the actual sink rate. In post-accident interviews the first officer partially attributed his delay in challenging the flightpath deviation to the inherent lags in the instrument's indications. Also, pitch changes displayed on the attitude indicator provided a nearly instantaneous indication of the developing problem. However, without specific attitude targets to help pilots judge what they see on the indicator, attitude data require more interpretation, thereby increasing response times. More important, we suggest that monitoring pilots generally scan the radar altimeter, barometric altimeter, glideslope deviation indicator, vertical speed indicator, and airspeed indicator during the final stages of an instrument approach, but in the very last seconds of the approach they devote substantial attention to the radar altimeter because that instrument is necessary to determine when decision height is reached. It is likely that during the 1-second period before flight 1340 reached decision height the first officer was concentrating mainly on the radar altimeter in order to be able to make his required callout at that altitude. The large pitch-down occurred during this same period, and the first officer probably was not able to monitor the attitude indicator frequently enough to catch the pitch-down indication instantly. In fact, we doubt that other pilots in this situation would perform differently, other than by chance, with high reliability.
After the accident the first officer recalled that he was first alerted to the large pitch-down by his body's vestibular responses, which caused him to look outside and see that the aircraft was descending short of the runway; by that time, though, a verbal callout would have been too late. Thus, as with the
captain, it is unrealistic to assume that pilots in the situation of the first officer can reliably intervene quickly enough to prevent an accident if an autopilot quickly pitches down so close to the ground.
Company records indicate that the pilots of flight 1340 were trained and qualified to perform the Category II ILS procedure. Training included a study guide, ground school, and simulator training. Crews were also required to demonstrate Category II ILS procedures during qualification check rides, including both landings and missed approaches. In the simulator pilots experienced system malfunctions such as failure of the autopilot to arm, but they were not exposed to pitch oscillations at low altitude on short final. Instructors demonstrated below-minimum visibility conditions and demonstrated the appearance of the approach lights on a normal Category II approach. Apparently they did not demonstrate how the approach lights appear if the airplane is not on the glideslope or at the proper pitch attitude.5 We also note that while crews were trained in the flying pilot functions for the Category II approach (including the transfer of control from the first officer to the captain prior to reaching decision height), there was no evidence of specific training for the instrument and flightpath monitoring functions required of both the flying and non-flying pilot in this type of approach. Such training, which would help pilots respond in the situation of this accident, might include practice in effective scan patterns, practice in identifying hazardous malfunctions, and realistic experience with the pace of events and inherent time pressure of monitoring the critical phases of the approach.
Company pilots told investigators that they typically performed only one or two Category II approaches per year in regular line operations. We also note that, although the captain of flight 1340 was a highly experienced pilot, he was a relatively new 727 captain, and the accident flight was his first actual Category II approach in this aircraft type. Thus although airline crews are trained to monitor for certain types of equipment malfunctions on instrument approaches, this captain had not encountered or been trained for an autopilot-induced flightpath deviation of the type that occurred. When the airplane abruptly pitched down 2 seconds after the captain took control, it presented him with a picture that did not match anything in his previous experience. It is possible that previous exposure to this situation in a simulator might have allowed the captain to react more quickly, although as we have noted the captain's response was rapid compared to the expected human response time to react to an unexpected event. Similarly, if the first officer had encountered pitch oscillations during Category II training he perhaps would have been better primed to recognize and call attention to a potential threat. However, airlines obviously cannot anticipate and train for all possible malfunctions, and even a thoroughly trained crew would remain subject to the cognitive limitations and vulnerabilities that we have discussed.
The investigation revealed that several company 727 pilots had experienced pitch oscillations on instrument approaches before the accident. A check airman who had trained the captain of flight 1340 told investigators that he had experienced an autopilot-induced pitch oscillation at 300 feet above ground. He also related that in his experience, as a test pilot conducting post-maintenance functional evaluation flights, approximately three quarters of 727s leaving the company's heavy maintenance base required adjustments to correct for autopilot pitch oscillations. Another line captain reported that he had experienced "porpoising" on a Category I approach with the autopilot engaged. He noticed the pitch oscillations below 1,000 feet and addressed the problem by disconnecting the autopilot in order to stop the oscillations. At the time he assumed that the oscillations were caused by a vehicle or other aircraft violating the ILS protected area on the airport surface. Neither he, nor any other line pilot interviewed by investigators, was aware of the company test pilots' seemingly routine experiences with autopilot pitch oscillations following maintenance. We note that all of these instances of pitch oscillation occurred at higher altitudes than those of flight 1340. Thus these other flight crews had the benefit of much more time and space to recover.
Apparently, the information about pilots' experiences with 727 autopilot-induced pitch oscillations was not widely disseminated among line pilots at this time.
We suggest that if this information had been common knowledge, it might have prompted the crew of flight 1340 to be more skeptical about autopilot reliability, which in turn might have made them more likely to notice and respond to the small initial glideslope deviations on this flight. Better dissemination of information about the problem of autopilot-induced pitch oscillation might also have led the airline, manufacturer, and regulator to address the problem before it led to this accident.

Concluding discussion

This accident situation allowed the crew only a few seconds to recognize and respond to a situation they had never encountered previously or been trained for - at a time when their attention was focused on the demands of executing a Category II ILS approach. Under Category II, approaches may be flown with lower cloud ceilings (decision height is as little as 100 feet above the runway, in contrast to the 200 feet of Category I approaches) and lower visibility (minimum RVR is 1,000-1,200 feet, in contrast to 1,800 feet for Category I approaches). Deviation tolerances for airplane attitude and flightpath are quite small, and when an airplane breaks out of the clouds at 100-foot minimums, the crew has only seconds to decide whether the airplane is in a position to land or to recognize deviations or malfunctions. Recognizing that Category II operations are inherently challenging, with narrow margins for equipment failure or human error, the FAA requires special equipment, training, and performance capabilities for Category II. This accident illustrates those narrow margins. Under the much more frequently flown Category I procedures, the crew of flight 1340 would have already established full visual contact with the runway by the time the autopilot pitched the nose down; or, in the weather conditions that existed on the day of the accident, the flight would not have been allowed to attempt to land and would have been executing a missed approach.
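The narrowness of those margins can be put in rough numbers. Assuming a representative stabilized-approach sink rate of 700 ft/min (an illustrative figure, not one taken from the accident report), the time remaining between decision height and the ground works out as follows:

```python
# Time from decision height (DH) to the ground at a typical approach sink
# rate. 700 ft/min is an assumed, representative value for a stabilized
# jet-transport ILS approach, not a figure from this accident.

SINK_RATE_FPM = 700.0

def seconds_below_dh(dh_feet, sink_rate_fpm=SINK_RATE_FPM):
    """Seconds available between decision height and touchdown."""
    return dh_feet / (sink_rate_fpm / 60.0)

cat_i = seconds_below_dh(200.0)   # Category I decision height
cat_ii = seconds_below_dh(100.0)  # Category II decision height

print(f"Cat I:  {cat_i:.1f} s")   # about 17.1 s
print(f"Cat II: {cat_ii:.1f} s")  # about 8.6 s
```

Halving the decision height halves the time available to assess the landing picture and react, which is why an abrupt autopilot pitch-down near a 100-foot decision height leaves so little room for human response.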
The NTSB cited the cause of the accident as the crew's failure to maintain proper pitch attitude following the autopilot malfunction. However, in its report on this accident, the agency did not provide a rationale for whether and how crews might be expected to reliably react in time to correct the situation that the crew faced in the critical moments after they reached decision height. We suggest that it is unreasonable to assume that airline pilots, no matter how skilled and conscientious, can respond quickly and accurately enough to this situation to avert an accident with the level of reliability required for passenger operations. Although no data are available on airline pilots' responses in this exact situation, it is well known that humans cannot instantly detect, interpret, and respond appropriately to an unfamiliar and extremely rare perturbation of a normal visual scene. Therefore, although the flight crew's inability to recover in time was the most proximate cause of the crash, we argue that this is a classic "systems accident", caused by a known equipment deficiency, organizational failure to correct the deficiency and disseminate information about it, and unrealistic assumptions about human performance capabilities.
Modern autopilot systems developed after the 727 have dual- and triple-redundant autopilots in which the individual systems monitor each other, reject incorrect control inputs, or disengage safely in the event of a malfunction of one of the autopilots. They are much more reliable, and when these modern systems fail they do so in ways that are easier for pilots to manage. Yet these advanced systems are currently required only for the even more demanding Category III autopilot-coupled operations; the less reliable autopilots such as those installed on flight 1340 can still be used for Category II operations, although the older equipment involved in this accident is being phased out in most US airline fleets. The vulnerability revealed by flight
1340 suggests that the industry should systematically review adverse interactions between equipment malfunctions in Category II operations and human perceptual and cognitive limitations in responding to these malfunctions. More broadly, it would be useful for the airline industry to carefully review all critical operating situations in which tolerances for equipment failures and human error are small to ferret out unrealistic assumptions about human performance embedded in the design of operating procedures and equipment. Although safeguards in the airline industry for the most part work extremely well, periodic reviews of this sort are essential to uncover latent threats to safety before they eventually cause accidents.

Notes

1 The NTSB conducted a major investigation of this accident but did not produce a major accident report. A summary of factual information and analysis was published in an Aircraft Accident Brief (NTSB, 2001a). We obtained information for this review from that report and the following elements of the public docket: Operations/Human Performance Group Chairman's Factual Report (November 24, 1998), Aircraft Performance Group Chairman's Factual Report and Addendum 1 (February 5, 2001), Flight Data Recorder Group Chairman's Factual Report (May 26, 1998) and Cockpit Voice Recorder Group Chairman's Factual Report (March 1, 1998).
2 Under FAA procedures and terminology, operator compliance is optional for a manufacturer-issued service bulletin (SB) but mandatory for an FAA-issued Airworthiness Directive (AD). No AD was issued in this instance. Typically, air carrier engineering departments evaluate each SB to ascertain whether the carrier will comply with the bulletin and, if so, the timing for compliance.
3 The exact instant of the captain's responses to the excessive pitch-down is difficult to determine. The FDR did not provide usable data for the elevator position. Engine pressure ratios (EPR), which slightly lag throttle inputs, began to rise, and pitch attitude began to increase about 2 seconds before impact.
4 The NTSB did not evaluate the potential effects of lags in vertical speed indication in this accident, but NTSB investigators did raise this issue in a later accident (see Chapter 18) involving a below-glideslope excursion.
5 The FAA does not require this to be included in training, but some other air carriers do include it in their Category II ILS training program. One instructor at this airline told investigators that he exposed students to a situation in the simulator in which an increasing crosswind moved the airplane beyond the lateral deviation limits for a Category II operation, prompting the students to execute a missed approach. However, this was not a required simulator scenario so not all students at the airline might have received it, and the scenario also did not involve glidepath deviations.