Abstract

Competency-Based Training and Assessment (CBTA) is a broad umbrella concept in vocational training that has spread across numerous training domains and industries within the US over the last half century. It is referenced by several labels, including but not limited to competency-based, performance-based, proficiency-based, and outcomes-based training. 

Within the domain of pilot training, it draws from the competency tradition embodied in Evidence-Based Training (EBT) as well as the performance- or proficiency-based tradition embodied in the Advanced Qualification Program (AQP). While AQP focuses more on the training and assessment of the narrow individual competencies that are the components of flight tasks, EBT focuses more on the broad, high-level competencies those flight tasks represent. In the end, however, both approaches may be considered hybrid programs. 

This paper reviews the history of these traditions, explains why the US has, to date, preferred to maintain the proficiency tradition embodied in AQP, and offers more detailed guidance regarding the use of competencies within the AQP framework.

Defining Competency-Based Training and Assessment (CBTA)

ICAO Document 9868 (2020) currently provides an extremely broad and inclusive definition of CBTA: “Training and assessment that are characterized by a performance orientation, emphasis on standards of performance and their measurement, and the development of training to the specified performance standards.” This very open-ended approach to defining CBTA is shared with other domains, such as medicine: “Competency-based medical education is defined as an outcomes-based approach to the design, implementation, assessment and evaluation of a medical education program using an organizing framework of competencies, with competencies being defined as the essential minimal set of a combination of attributes, such as applied knowledge, skills, and attitudes, that enable an individual to perform a set of tasks to an appropriate standard efficiently and effectively” (Frank et al. 2010, Carraccio et al. 2016, Sultan et al. 2019).  

The International Labor Organization (2020) defines Competency-Based Training (CBT) simply as “…a structured training and assessment system that allows individuals to acquire skills and knowledge in order to perform work activities to a specified standard.”  Note this definition includes assessment as a component of CBT, making it essentially a definition of CBTA. 

Finally, in the academic domain, Cleary (2015) defines CBT by stating that “CBT is outcome-based and assesses learner’s attainment of competencies determining whether learners demonstrate their ability to do something.”

AQP supports a similar process. AC 120-54A (2017) indicates that “AQPs are systematically developed, continuously maintained, and empirically validated proficiency-based training systems. They allow for the systematic analysis, design, development, implementation, progressive evaluation, and maintenance of self-correcting training programs that include integrated Crew Resource Management (CRM), improved instructor / evaluator standardization, scenario-based evaluation, and a comprehensive data-driven quality assurance system.” Here proficiency refers to the attainment of established standards.

What all these definitions seem to have in common is an emphasis on: 

(1) the actual demonstrated performance of real-world tasks, 

(2) the establishment of real-world task standards, and 

(3) the development of training and evaluation protocols that assure job holders are able to perform according to standards. 

In other words, the tightest possible alignment among training requirements, checking requirements, and job performance requirements.

Within the aviation domain, Kearns, Mavin and Hodge (Kearns et al. 2016) include the following four programs under the rubric of competency-based training: 

  • Multi-Crew Pilot License (MPL), 
  • Advanced Qualification Program (AQP), 
  • Evidence-Based Training (EBT), and 
  • Advanced Training and Qualification Program (ATQP).  

But the authors refer to AQP as merely “similar” to competency-based, rather than truly competency-based. “The difficulty with AQP as a model for a competency framework is that it is task and procedure oriented and may not fully consider, integrate or assess non-technical skills” (p.59). 

This paper presents a counterargument: that AQP simply derives from a different competency tradition than the one adopted by ICAO, and that AQP was the first aviation training system to fully integrate CRM into all portions of pilot training. It also pioneered the Line Operational Evaluation (LOE), the gold standard for the assessment and evaluation of integrated CRM.

Because the ICAO definition of CBTA is extremely broad, in keeping with definitions from other training domains, the organization had to provide more detailed guidance through its definition of “competency” and its 10 Principles of Competency-Related Training and Assessment (ICAO, 2020). These map nicely onto AQP, but the definition of competency itself does not.

Defining Competency

ICAO currently defines competency as “A dimension of human performance that is used to predict successful performance on the job. Competency is manifested and observed through behaviors that mobilize the relevant knowledge, skills, and attitudes to carry out activities or tasks under specified conditions.” 

This definition parallels the definition of the highly respected International Board of Standards for Training, Performance, and Instruction (IBSTPI), which published a widely used, educationally focused series of competency standards for Instructional Designers (2000), Training Managers (2001), Instructors (2003) and Evaluators (2006) in the early 2000s. IBSTPI (2008) defines competency as “a set of related knowledge, skills and attitudes that enable an individual to effectively perform the activities of a given occupation or job function to the standard expected in employment.” IBSTPI reviewed the most relevant competency models of the day and concluded that “Most of the definitions of competency focus on knowledge, skills and attitudes as the main components of competencies” (Russ-Eft et al., 2006). 

This is true of most contemporary models as well, but not all. A recent example is the 2021 edition of the Dangerous Goods Regulations, developed by the IATA CBTA Center (IATA, 2021), which includes skills, knowledge, attitudes, and experience. Such one-off definitions of competency are common throughout the history of CBTA, which demonstrates the flexibility of the approach in accommodating different industries and the organizations within them.

Addressing competency-based education (CBE) in aviation, Kearns et al. (2016) take a contemporary and applied approach to the realities of competencies in today’s aviation environment. Competencies are “the negotiated and agreed written statements (texts) that attempt to represent the ability to participate fully in a social practice.  Competencies embody assumptions about (1) the nature of competence, (2) the number of discrete written statements (texts) required to represent competence, (3) the most appropriate language for representing competence” (p. 4). This approach looks at CBE through a much wider lens than many of the older competency models. 

Those second and third points capture a major philosophical difference between EBT and AQP when it comes to CBTA. EBT rests on eight extremely broad competencies, which are meant to capture and represent the hundreds of individual competencies (KSAs) that underlie the tasks that, in turn, represent or “manifest” the broader eight competencies. 

While this approach addresses those underlying KSAs through the training and assessment of ground and flight tasks, it also groups tasks into high-level competencies and uses those as the organizing principle for curriculum design. AQP employs competencies (KSAs) as well, but does not take the next step of consolidating tasks into a high-level competency scheme, other than when developing LOFT, SPOT and LOE scenarios. In that case, AQP focuses on the same broad CRM competencies that EBT does.  

As indicated earlier, both EBT and AQP are hybrids of the task-based and competency-based models. AQP works like EBT when it trains non-technical tasks, and EBT works like AQP during the maneuvers training phase.

This difference of traditions explains why the AQP Advisory Circular (120-54A, 2017, Change 1) does not define the term competency. Instead, it defines the terms skill, knowledge, and attitude individually. It refers to competency in terms of competency analysis, which it equates to learning analysis, KSA analysis, or hierarchical analysis as the subdivision of tasks and subtasks into their component KSAs, or competencies.

Macro versus Micro Competencies

Training Broad (Macro) Competencies Indirectly through Task Training

The EBT approach to CBTA is most convincingly traced back to the competency modeling conducted by Dr. David McClelland (1973) in the late 1960s and early 1970s (Russ-Eft et al., 2008). He pioneered the notion of using competencies, rather than aptitude or intelligence scores, to measure and predict job performance. This is the approach adopted by IBSTPI and endorsed by the American Society for Training and Development (2013), now the Association for Talent Development (ATD). Call these macro competencies.

This approach has been employed across the globe for decades, primarily by educational institutions, and is the approach that will surface most frequently when using the keywords “competency-based training” in internet searches, as opposed to the words “performance-based training” or “proficiency-based training” or “outcomes-based training”. But it is not the approach that has been embraced by the US armed forces or major Fortune 100 companies over the decades (Dick et al. 2022, Taylor & Ellis, 1991), or by the Federal Aviation Administration.  

Training Narrow (Micro) Competencies (KSAs) Directly and Organizing Curricula by Tasks

The AQP approach to CBTA dates to WWII, particularly to the work of research psychologist Dr. Robert Gagne, who was hired to improve the training of Army Air Corps pilots. To increase the training throughput of soldiers, aviators, sailors, and marines by factors of 10, 20 and even 30 during WWII, the US military had to abandon a suddenly outdated apprenticeship model and move to a model that mandated ruthless efficiency and effectiveness. 

[Photo: Dr. Robert Gagne, one of the founders of modern ISD]

Soldiers were to receive all the training they required to perform their jobs, but absolutely nothing extra. Young psychologists such as Dr. Gagne, who went on to become one of the founders of modern instructional design practice in the US, advised the services to use a job task analysis-based approach to assure a tight match between job requirements, training requirements, and training outcomes. He went on to propose the use of individual skill, knowledge, and attitude components not only as prerequisites to and components of job tasks, but also as a tool to select tasks for training, assign media, and recommend instructional strategies (Gagne et al., 2005).

By the early 1960s all branches of the US military and much of the large business community, such as AT&T, Bank of America and IBM (Bowsher, 1989A; Bowsher, 1989B; Markle, 1967; Grossman, 1972), had adopted this systems approach to training (SAT), later referred to as the instructional systems development (ISD) process (Taylor & Ellis, 1991). In their Handbook for Designers of Instructional Systems (Department of the Air Force, 1974), the US Air Force defined their ISD process as “A deliberate and orderly process for planning and developing instructional programs which ensure that personnel are taught the knowledges, skills and attitudes essential for successful job performance.” 

This tradition is often referred to as “performance-based training” versus “competency-based training”, but the concepts are clearly identical. This is the proverbial distinction without a difference. The focus is on mastering the skills, knowledge and attitudes required for acceptable job performance. A later entry in the Dictionary of Training (1993) builds on the Air Force definition and expands it by stating that “ISD should be a deliberate and orderly, but flexible, process for planning and developing instructional programs that ensure personnel are taught in a cost-effective way the knowledge, skills and attitudes essential for successful job performance.” 

This focus on KSAs is mirrored in the AQP approach to competencies, where the goal of training is not to address all tasks, but only enough tasks to exhaust all the necessary underlying competencies. Call these micro competencies.

The ADDIE (Analysis, Design, Development, Implementation, Evaluation) Framework

Kearsley (1984) suggested that “Of all the soft technologies that have influenced the [training] domain, the methodology of Instructional Systems Development (ISD) is undoubtedly the most pervasive and important.” The most universal framework for the application of the ISD process is the well-known five-step ADDIE framework. ICAO (2020) provides a step-by-step guide for building a CBTA program that “makes use of the ICAO competency framework and the ADDIE model.” (p. 1-2-C-1).

AQP, like most branches of the US military, has a specific step wherein tasks and subtasks are further broken down into skills, knowledge, and attitudes. This process is variously referred to as learning analysis, competency analysis, KSA analysis, or, in the case of the US Army (Department of the Army 2004, 2018), “Identify the skills and knowledge required to perform each step.” Note the absence of attitudes in the Army process. (This is discussed later in the paper.) 
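To make the direction of this analysis concrete, the following sketch (in Python, purely illustrative; the task numbers, subtask titles, and KSA statements are invented rather than drawn from any operator's documents) shows the kind of hierarchical record a learning / competency analysis produces: each task broken into subtasks, and each subtask into its component skills, knowledge, and attitudes.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical labels; operators define their own task numbering and KSA wording.
@dataclass
class KSA:
    kind: str          # "skill", "knowledge", or "attitude"
    statement: str

@dataclass
class Subtask:
    number: str
    title: str
    ksas: List[KSA] = field(default_factory=list)

@dataclass
class Task:
    number: str
    title: str
    subtasks: List[Subtask] = field(default_factory=list)

# Example decomposition of a single (hypothetical) flight task.
task = Task(
    number="4.2",
    title="Conduct a non-precision approach",
    subtasks=[
        Subtask(
            number="4.2.1",
            title="Brief the approach",
            ksas=[
                KSA("knowledge", "Recall approach minima and the missed-approach procedure"),
                KSA("skill", "Configure and verify the FMS approach setup"),
                KSA("attitude", "Invite cross-checking and challenge from the other pilot"),
            ],
        )
    ],
)

# A flat listing of KSAs: the "micro competencies" the paper refers to.
for sub in task.subtasks:
    for ksa in sub.ksas:
        print(task.number, sub.number, ksa.kind, "-", ksa.statement)
```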

Back to the Beginning

Although the earliest roots of performance-based training date back to the massive military training expansion in the US between 1939 (when the Army was 100,000 strong) and 1945 (when the Army was over 8 million strong), the core concepts have been operationalized in increasingly sophisticated ways over the years. The basic notion of providing the student soldiers with “everything they need and nothing they don’t” has matured into organized bodies of tasks, competencies and standards that are systematically defined, taught, tested, and revised in response to increasingly complex and meaningful databases, reflecting a dynamic training and safety environment. But for all this additional complexity, the underlying goal remains the same as it was in WWII: the efficient production and maintenance of competent professionals.

Assumptions

All Pilot Training is Evidence-Based

All pilot training, including Traditional Training (TT), is evidence-based training.  Programs differ significantly according to the quality, quantity, accuracy, timeliness, and organization of the safety data used as evidence, but none is evidence-free.

TT programs rely primarily on accident and incident data that is often dated and not necessarily relevant to preventing future safety events. That data also tends to be insensitive to the make, or even the generation, of aircraft involved in the accidents used as referents. The approach is reactive, while AQP and EBT rely heavily on proactive safety data sets. Yet in their defense, within the US these traditional programs continue to produce reasonably well-prepared pilots despite these shortfalls.

AQP relies on international, national, and organization-specific safety data to maintain its curricula. It also collects, analyzes, and integrates training performance data, both on individual pilot performance and the performance of the training system itself.

EBT relies on data like that of AQP, but with additional focus on the generation of aircraft from which the data was collected on the safety side, and a focus of data collection on the eight competencies on the training performance side. 

On the AQP side, the program avoids the necessity of examining generations of aircraft by making programs specific to the make, model, series, and variant of aircraft. And as far as data collected on the eight competencies, all AQP operators collect data on reason codes, which are broad categories of capabilities that are similar, and in some cases identical, to the eight competencies. (More on reason codes later.)

All Curricula are Task-Based, Including EBT                                   

All curricula, including EBT, are constructed of the same sorts of flight tasks and procedures that have been the backbone of flight training for decades. While EBT is primarily scenario based, these scenarios themselves consist either directly of flight tasks, or of conditions and contingencies presented for the purpose of eliciting the appropriate flight tasks from the crews. 

Scenarios represent a carefully integrated series of flight tasks designed to cue specific behaviors from the flight crew (Hamman, 2010; Curtis & Jentsch, 2010). This is perfectly illustrated in Table 7-5 of the FAA’s current (FAA, 2015) Advisory Circular on LOS, entitled “the event set assessment / grade sheet” (p. 47). It provides two examples of LOE event sets, each set composed of over half a dozen events. Each event is keyed to the appropriate task number from the operator’s job task analysis. 
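A minimal sketch of that linkage, using invented event and task identifiers (the real keys come from the operator's own job task analysis and the grade-sheet format in the advisory circular):

```python
# Each LOE event set bundles several events; each event is keyed to a task
# number from the operator's job task analysis (JTA). All identifiers are invented.
event_set = {
    "name": "Event Set 2: Departure with degraded automation",
    "events": [
        {"event": "E1", "jta_task": "2.3", "description": "Reject automation, fly raw data"},
        {"event": "E2", "jta_task": "5.1", "description": "Manage the abnormal checklist"},
        {"event": "E3", "jta_task": "7.4", "description": "Coordinate a revised clearance with ATC"},
    ],
}

# Grading attaches at the event level, so every rating traces back to a JTA task.
for ev in event_set["events"]:
    print(ev["event"], "-> JTA task", ev["jta_task"], "-", ev["description"])
```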

In EBT, the competencies are taught and assessed indirectly by teaching and assessing the flight tasks that represent (manifest) examples of each competency.

Traditional Training, EBT & AQP as Task Selection Methodologies

In a sense, TT, EBT and AQP may be considered three different task selection methodologies for constructing curricula. None of these approaches, even TT, include all possible tasks, so each has a somewhat different approach to determine what tasks to include and what to exclude from the curriculum. All three approaches are based on both safety and training considerations. In that sense, all three are evidence-based training. It is the quality, quantity and timeliness of the safety data that differentiates traditional training from EBT and AQP.

Traditional training draws on safety data, but that data is severely limited, dated, and primarily reactive. It is heavily skewed towards older accidents and incidents that are unlikely to recur. The approach appears to take training factors into consideration through such traditional selection criteria as difficulty, complexity, frequency, criticality, and immediacy of task performance.  

EBT is powerful and unique in that a non-profit organization is dedicated to providing EBT operators with current safety data on an ongoing basis. And beyond this, the data is tailored to the four generations of aircraft around which its taxonomy of aircraft safety revolves. It is also unique in utilizing the eight broad competencies as the primary organizing principle of its task selection process. Other than that, it uses the same broad array of internal and external safety databases available to AQP operators. Because AQP programs are unique to the make, model, series and variant of aircraft, there is no overarching taxonomy, such as the four generations of aircraft, to filter the data; data is tailored to the aircraft, not to the generation of aircraft, and that tailoring is the responsibility of the operator.  

AQP collects pilot training data organized by qualification standards, which are based on tasks, subtasks, and KSAs, drawn from a job task analysis and competency analysis that are maintained throughout the life of the training system. This ensures that students are taught and assessed on current job requirements. AQP uses traditional task selection criteria, but also uses competencies to determine what to include or exclude from the curriculum.   

While EBT competencies are broad and few, AQP KSAs are deep and many. The use of competencies in the task selection process of AQP is aimed primarily at consolidating tasks that require the same underlying competencies. In that way, if four tasks require mastering the same competencies, only one of those tasks need be taught. According to the AQP Advisory Circular (2017), “TPO’s [Terminal Proficiency Objectives] and SPO’s [Supporting Proficiency Objectives] having common knowledge, skill, attitude and/or CRM factors may be consolidated to avoid duplication” (p. 20). So, in the end, both AQP and EBT aim to select tasks for training that will exhaust not all the possible tasks, but all of the relevant competencies required to perform critical flight tasks.
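The consolidation logic can be pictured as a simple covering problem: choose a subset of tasks whose combined KSAs cover every competency of interest. The greedy sketch below is only an illustration of that idea, not a procedure prescribed by the advisory circular; the task names and KSA labels are invented.

```python
from typing import Dict, Set

# Hypothetical mapping of candidate tasks to the KSAs they exercise.
task_ksas: Dict[str, Set[str]] = {
    "Engine failure after V1": {"abnormal checklist use", "workload management", "single-engine handling"},
    "Rejected takeoff":        {"abnormal checklist use", "workload management"},
    "Non-precision approach":  {"raw-data flying", "approach briefing"},
    "Circling approach":       {"raw-data flying", "approach briefing", "visual maneuvering"},
}

# Every KSA the curriculum is meant to exhaust.
required = set().union(*task_ksas.values())

selected, covered = [], set()
while covered != required:
    # Greedily pick the task that adds the most not-yet-covered KSAs.
    best = max(task_ksas, key=lambda t: len(task_ksas[t] - covered))
    selected.append(best)
    covered |= task_ksas[best]

print("Tasks selected for training:", selected)
# Tasks whose KSAs are fully duplicated elsewhere drop out, which is the
# consolidation the advisory circular describes.
```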

EBT and AQP

Having traced the historical roots of the two very different approaches to CBTA embraced by AQP and EBT, we can discuss some of the significant differences between the two programs, and the two approaches to CBTA. Some of these differences relate to the different competency models, while others simply relate to general differences in scope and approach.

Scope

One of the most obvious differences between the two programs is the difference in scope. EBT currently applies only to pilot recurrent training, while AQP is far more comprehensive. It includes all safety-related airline training for pilots (indoctrination, qualification, continuing qualification, transition, conversion, upgrade, requalification, refresher, etc.), as well as all safety-related training for dispatchers and flight attendants. This requirement to standardize training concepts across multiple curriculums and work groups helps explain some of the additional analyses and paperwork that AQP requires compared to EBT.

As will be explained later, EBT overlays existing regulatory requirements, which are much fewer for recurrent training than for initial equipment training. It is very likely that the EBT methodology will have to expand to accommodate the future development of the range of curriculums currently covered by AQP.

 

The Black Swan Problem

ICAO (2013) argues that “Mastering a finite number of competencies should allow a pilot to manage situations in flight that are unforeseen by the aviation industry and for which the pilot has not been specifically trained” (p. 1-1-1). AQP makes the same assumption. But whereas “finite” refers to eight macro competencies in EBT, it refers to hundreds of micro competencies in AQP. This argument is not historically associated with CBTA programs in other domains, but it is viewed as a major driver of ICAO CBTA.

If this assumption proves to be a powerful factor in preparing pilots for the unexpected, the FAA and the AQP operators will undoubtedly respond and modify their programs accordingly. In the meantime, because the EBT competencies are so few and so basic, it seems unlikely that any rigorous AQP continuing qualification (recurrent) program would exclude any of them over a series of training cycles. Hence AQP did not initially modify its approach to training based on the Black Swan challenge, although the FAA has sponsored research on resilience training in the past. If data supporting these advantages emerges, the AQP community will act. AQP was not simply designed to be new. It was designed to be renewable. 

If the AQP community is cautious about change, it is in part because of the spectacular success it has had with taming pilot error. With approximately 90% of US airline pilots in AQP, it has been over 15 years since any AQP operator has had a single fatality attributed to pilot error. The EBT training program will need to demonstrate this level of success in non-technical skill performance before the AQP community is likely to embrace more of its tenets.

In other domains, CBTA is compared to traditional training and found to have significant advantages. And within the pilot training domain, CBTA also has significant advantages relative to traditional training. But how significant are those advantages over a program like AQP, which includes the use of narrowly defined competencies to narrow the range of tasks to be taught?  

When EBT is compared with AQP rather than with traditional training, some of the traditional arguments for the resilience advantage seem more muted. AQP is essentially a hybrid system of competency-based and task-based training, which probably explains this outcome.

It is argued that task-based training will need to be expanded year over year as new tasks are added to the pilot’s inventory of responsibility. Yet this has not occurred. AQP is three decades old, and the training footprints are no longer than they were decades ago when the program began. They are, in most cases, less than the notional training footprints proposed by ICAO for EBT. This is because of the regulatory flexibility of AQP and the data-driven protocols for maintaining reasonable curriculum footprints. The AQP rule and advisory circular list no required maneuvers, tasks, or topics for training pilots.  Instead, they include a process for developing curricula. Training intervals may be stretched, tasks may be taught in lower-level devices, tasks may be sampled or even excluded.

It is also argued that EBT focuses more on training and less on evaluation than AQP. Yet the programs have identical evaluation footprints. Each has a single evaluation event, an LOE, administered at the beginning of an EBT training session and at the end of an AQP training session. AQP has what it calls a maneuvers validation, but this is a train-to-proficiency event, similar or identical to the maneuvers training phase of EBT.

Another argument is that tasks are trained in isolation, even though AQP was the first pilot training program to mandate the integration of CRM and technical skills not only for training but for evaluation as well. AQP also encourages the use of LOFT and SPOT, where appropriate. These features also argue against the notion that AQP does not prepare pilots for complex and evolving environments. Again, these arguments hold well against a traditional training program but do not hold well against a hybrid system like AQP.

 

Data Sources 

The primary sources of safety data may be simplified to include: 

* International sources (Boeing, Airbus, NTSB, the Data Report for EBT, etc.),

* National sources (the Aviation Safety Information Analysis and Sharing (ASIAS) system, the Aviation Safety Reporting System (ASRS), etc.), and

* Airline sources (voluntary and mandatory safety and reporting systems).  

All these systems provide valuable information to the training program and should be reviewed on a regular basis. Several training departments have commented that when it comes to actionable intelligence for the ongoing maintenance of courseware, it is the operator’s own safety data that results in the most changes. Operators are therefore encouraged to participate in as many operator safety programs as possible, whether AQP or EBT.


Student Data

EBT encourages operators and regulators to select their own data collection schemes. The little EBT data the authors have seen included pass / fail scores for sessions, 1-5 scores for competencies, and 1-5 scores for tasks, elements or maneuvers that fall below standard (so 1s and 2s only). Because AQP offers many different programs with different grading schemes, the only apples-to-apples comparison is between the rating scheme for pilot continuing qualification (AQP recurrent) and that of EBT.

Pilot continuing qualification requires data collection on maneuvers validation (a train-to-proficiency event), an LOE and a line check (for First Officers as well as Captains). Overall events are graded pass / fail and individual tasks, maneuvers, procedures, and event sets are rated on a 1-4 or 1-5 scale. This results in over 50 grades and ratings being submitted in a de-identified format to the regulator on every pilot every training cycle. Airlines are required to collect and analyze additional performance data above and beyond what they submit to the regulator, but this additional data is not specified, only required, providing operators additional flexibility.

This suggests that a major difference between EBT and AQP data collection is the additional collection of data at the competency level under EBT. Several AQP operators have resolved this difference by simply replacing their reason codes with the eight competencies. Reason codes are broad categories of explanations for higher or lower ratings, such as knowledge of procedures, execution of maneuvers, workload management or communication. AQP requires the annotation of reason codes for sub-standard ratings but encourages annotation of all scores. By adding the eight competencies to their existing list of reason codes, or by replacing their existing list altogether with the EBT competencies, an AQP operator can easily collect data on the same eight competencies found in EBT. Some have already completed this transition.
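A rough sketch of what that substitution looks like in practice, with invented reason codes, grade records, and competency labels (actual code lists, rating scales, and competency wording vary by operator):

```python
from collections import defaultdict

# Hypothetical crosswalk from an operator's legacy reason codes to EBT-style competencies.
REASON_TO_COMPETENCY = {
    "knowledge of procedures": "Application of Procedures",
    "execution of maneuvers": "Flight Path Management, Manual Control",
    "workload management": "Workload Management",
    "communication": "Communication",
}

# Invented de-identified grade records: (event, rating on a 1-4 scale, reason code).
records = [
    ("LOE event set 2", 2, "workload management"),
    ("Maneuvers validation: steep turns", 4, "execution of maneuvers"),
    ("LOE event set 3", 3, "communication"),
]

# Roll ratings up by competency, the same competency-level view EBT reports.
by_competency = defaultdict(list)
for event, rating, reason in records:
    by_competency[REASON_TO_COMPETENCY[reason]].append(rating)

for competency, ratings in by_competency.items():
    print(competency, "mean rating:", sum(ratings) / len(ratings))
```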

Baseline EBT is a Program, AQP is a System 

One of the cornerstones of AQP / ISD is that, once established, it is a self-correcting training system. Gagne et al. (2005) said of the ISD process, “Once the training program is implemented, other phases [analysis, design, development] don’t simply end; they are continually repeated on a regular basis to see if further improvements can be made.” 

It appears that the Baseline EBT process uses the traditional front-end learning analysis protocols primarily to develop the initial curriculum, and then moves beyond them. Their currency is not maintained. All changes to the curriculum moving forward are based on training and safety data, which are then used to modify the curriculum as appropriate. There is no avenue to return to the job task analysis or other foundational analysis documents to maintain their currency; their work is done once the initial curriculum is complete. This is true of baseline EBT, but it should not necessarily be true of an enhanced EBT, depending on the version of the ADDIE framework it employs.

AQP does have some temporary processes that are conducted for the sole purpose of establishing the first version of the curriculum, only to be modified when data either validates or fails to validate the original assumptions. Criticality and currency analyses rest on assumptions made without hard data, simply to get the initial curriculum up and running until performance feedback becomes available. Once data is flowing, those assumptions are no longer required.

When performance data suggests a need to modify the training system, the AQP / ISD / ADDIE process revisits each foundational document in turn to see if it requires modification. Do we simply go back to the curriculum outline? Must we go further back to the qualification standards? Further still, to the competency analysis? All the way back to the job task analysis? We go back as far as we need to go, make the changes, and then let the modifications ripple through the succeeding stages, repeating the process by which the original version of the training system was developed.
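As a rough illustration of that "go back as far as necessary, then ripple forward" logic, the sketch below orders the foundational documents named above and reworks everything downstream of the deepest one affected; the trigger-selection rule itself is invented for the example.

```python
# Foundational documents ordered from earliest (deepest) to latest in the AQP / ISD chain.
PIPELINE = [
    "job task analysis",
    "competency (KSA) analysis",
    "qualification standards",
    "curriculum outline",
    "courseware",
]

def revise(trigger_stage: str) -> list:
    """Return the stages to rework: the deepest affected stage and everything downstream."""
    start = PIPELINE.index(trigger_stage)
    return PIPELINE[start:]

# Example: performance data reveals a missing subtask, so the revision reaches
# all the way back to the job task analysis and ripples forward from there.
print(revise("job task analysis"))
# Example: only the sequencing of lessons needs to change.
print(revise("curriculum outline"))
```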

Individualized Training

Increased individualization was a strategic goal of the pioneers of AQP. The long game was to move from a training system that provided all Part 121 and 135 pilots with essentially the same training, regardless of aircraft or route structure or other variables, to a training system tailored to the needs of individual pilots or crews. While an AQP must be tailored to the make, model, series, and variant of aircraft, to the route structure of the operator, and so on, this represents tailoring to smaller and smaller groups of pilots, not down to an individual pilot or crew.

There is a considerable amount of individualization in an AQP continuing qualification program. Whereas the EBT curriculum is initiated with a diagnostic LOE as the first event, AQP operators begin with an optional systems validation to assess the strengths and weaknesses of each pilot. These validations are not required by policy for continuing qualification, but in practice most operators have implemented them as a form of diagnostic tool to assess individual strengths and weaknesses. The instructor / evaluators also have access to the students’ previous training records and are free to deviate from the approved curriculum when required for an individual pilot’s needs.

While AQP uses the Line Operational Evaluation (LOE) as the final check at the end of recurrent training, EBT uses the LOE as an initial personal diagnostic tool to purposefully shape the remainder of the pilot or crew’s training. While the need to customize training on the fly in this manner puts an extra burden on the instructors and evaluators, it should improve the training for the individual pilots and crews if implemented rigorously.

AQP also has another form of diagnosis. These are the First Look Maneuvers, which are performed without prior training or briefing, giving the instructor or evaluator their “first look” at pilot performance since the previous training cycle. While these maneuvers are primarily for the diagnosis of the training system itself, they obviously provide the individual instructor / evaluator with diagnostic data regarding the individual pilot’s strengths and weaknesses.

The developers of EBT specifically allowed for the use of the LOE at the end of the curriculum, to allow AQP programs to sync up with EBT practices in other ways. While the A4A committee that developed the processes for constructing LOEs back in the early 1990s was tasked specifically to develop a post-test rather than a pre-test (ATA, 1994), it is not unusual in educational programs for pretests and post-tests to be identical. This is an area for AQP operators to explore. What is extremely unusual is to have a course of study with a pre-test and no post-test, as EBT appears to do.

Phased Implementation

Both EBT and AQP have recommended a phased implementation strategy for the operator’s initial entry into the new training paradigm. EBT recommends operators exercise a baseline EBT for 24 months prior to converting to an enhanced EBT program (ICAO, 2013). AQP originally employed a Single Visit Exemption process that allowed operators to enhance their traditional programs with a subset of critical AQP features, such as First Look, Scalar Data, LOFT, and data analysis (FAA, 2017). This exemption also lasted 24 months, with options for extension.

In addition, AQP is divided into five sequential phases (Application, Curriculum Development, Small Group Tryout, Initial Operations, Continuing Operations), each of which requires FAA approval. 

The Special Case of Attitudes

While the research community has validated in numerous studies that knowledge, skills, and attitudes are the three central components of a competency, many organizations are reluctant to employ the term attitude. The US Army (2004) refers only to skills and knowledge, and the FAA (AC 120-54A, 2017) still allows attitudes to be optional, although they are defined in the glossary, and most AQP operators employ them. 

Historically, attitudes have carried an emotional loading that skills and knowledge do not. They have often been viewed as too subjective to be used for evaluation. To work around this dilemma, a number of AQP operators have substituted surrogate terms for attitude to make the concept more acceptable to the workforce. Words like professionalism, judgment, and motivation are not exactly on the mark, but they are probably close enough to substitute if the word attitude is not palatable to the pilots or their union.

Organizational Support

The initial Program Office for AQP was managed by a Ph.D. in human factors and included a Ph.D. in Instructional Systems as well as a data analyst with 20 years of experience in the federal government. According to Kearns et al. (2016), “To integrate AQP in the United States, the Federal Aviation Administration (FAA) appointed a human factors scientist to administer the program (Weitzel & Lehrer, 1992).” This has been an advantage for the US integration of AQP compared with Europe, where “there is no expert sitting in the center with a background in ISD directing [the program].

In much of the aviation industry worldwide, AQP is poorly understood. The background is just not there in [ISD] training - certainly in aviation flight training - people don’t understand what you’re talking about, therefore they cannot see the benefits or understand the need” (Pilot #8).

Baseline EBT is an attempt to overcome the need for such specialized expertise and to develop an off-the-shelf competency-based program that can be implemented by the average airline’s training department. While this seems to work for the baseline program, it seems highly unlikely that an enhanced program can be truly optimized without specialized expertise like that required for AQP. 

Regulator Support for Safety and Training Integration

Not only did the FAA put a team of specialists in charge of the AQP program, but they also integrated the six voluntary safety programs and AQP into a single administrative unit, called the Voluntary Safety Programs Branch (Farrow, 2010). By the late 1990s, multiple AQP operators were actively integrating safety data into their curricula, using teams commonly referred to as Flight Data Analysis Groups (FDAGs). 

In response to requests from the operators, the FAA sponsored a series of three-day conferences dedicated to the integration of safety and training data. These began in 2002 and were referred to as the Shared Vision of Safety Conferences. So long before Safety Management Systems (SMS) and long before Evidence-Based Training, the FAA facilitated the integration of safety data into AQP curricula.

One large AQP operator observed that while its training department welcomed the later integration of SMS, SMS had little impact on the training operation. The operator’s SMS simply did not offer any data or processes that the training department did not already have in use.

The FAA’s efforts to integrate the data from multiple safety programs, operating from a single administrative unit, have been adopted as a best practice by other safety organizations (Mills, 2010; Transportation Research Board, 2011).  

The Role of the Regulator

ICAO guidance is developed for the regulator, while AQP guidance was written by a regulator. ICAO guidance regarding EBT is therefore significantly less detailed than the guidance each CAA will develop for its operators. 

In the case of AQP, the level of interaction is high. Operators must develop and maintain three separate management documents and three separate curriculum documents. They must also submit de-identified pilot performance data to the FAA monthly. While these documents and data submissions provide unprecedented levels of transparency to the agency, they also require specialized knowledge and skills on the part of the inspector workforce.

Cherry-Picking EBT

Because EBT was developed over two decades after the implementation of AQP, its developers had the opportunity to cherry-pick from AQP some newer instructional concepts not found in traditional training, such as evaluator calibration, CRM integration, scenario-based evaluation, scalar grading, enhanced instructor / evaluator training, training modifications based on safety and training data, and so on. There is an old joke among instructional designers that ISD stands for Imitate, Steal and Duplicate. The FAA has encouraged AQP operators to reverse this process by examining EBT features and processes for possible inclusion in their own programs.

Notes, Cautions and Warnings

Evidence-Based Training and Instructional Design

To a professional instructional developer, the term evidence-based training was part of the jargon of the field long before ICAO decided to adopt that title for its competency-based pilot recurrent training program. In the literature of instructional design, evidence-based training, like evidence-based medicine before it, refers to the use of empirical studies to determine best practice, rather than a heavy reliance on committees of experts and case studies (Clark, 2010). Hence it references training research data rather than safety data.

 

The EBT Competence Model May Be Customized 

ICAO (2020) encourages each operator to customize its own listing of competencies. This seems advisable. EBT with eight competencies, and the Multi-Crew Pilot License (MPL) with nine, are the only competency models outside of ICAO (2020) these authors have seen with a single-digit list of competencies. The previously referenced IBSTPI series lists 14 competencies for evaluators, 16 for instructional developers and 18 for instructors. The American Society for Training and Development (2013) lists 15 for the Training and Development Profession. The limited range of competencies in EBT could conceivably limit its effectiveness.

 

Transferability of Competencies

ICAO (2020) indicates that “Competency-based training and assessment is based on the concept that competencies are transferrable” (p. 1-2-A-2). Based on research sponsored by the FAA in the early 1990s, this statement may be more applicable to technical skills than to non-technical skills. 

The A4A committee (ATA, 1994) that developed the concept of the LOE was surprised to see crews performing well on a CRM marker like workload management (one of the eight EBT competencies) in one scenario, and poorly on workload management in another. Consequently, this statement regarding transferability is probably stronger for the three “technical” competencies than it is for the five “non-technical" competencies, which seem more reactive to specific contexts.

 



ABOUT DOUG FARROW 

Douglas R. Farrow, Ph.D., served his entire 25-year FAA career with the AQP Program Office at FAA Headquarters. He served as the Instructional Development Specialist throughout his tenure, as well as the national AQP Program Manager for his last 10 years. He also served 15 years as the Research and Development Coordinator for the FAA Air Transportation Division. 

Since his retirement from the FAA at the end of 2016, he has worked several additional projects for that same office as a contractor and is now a partner in a small aviation consulting firm (Nuovo | Partnering for Safety & Training Results).

Over the 40-year span of his aviation training career Dr. Farrow has seen, and overseen, significant evolution in the instructional approaches, computer technologies, data analysis capabilities and training media choices available to the industry. He was first introduced to the concepts of competency-based training 50 years ago when he began his graduate studies in Instructional Systems. He has traced, and in some cases directed, its evolution ever since. 

Prior to his FAA career, he spent 10 years as a civilian contractor for the US military, which included leading the contractor teams that developed the initial aircrew training programs for the F-16 and C-17 aircraft. As a graduate student at Florida State University in the early 1970s, he supported the development of the Interservice Procedures for Instructional Systems Development model, which later evolved into the ADDIE framework for Instructional Development. In addition, he represented the FAA at a series of meetings of the Training and Qualification Initiative, the team that produced the IATA and ICAO EBT documents. 

 

ABOUT KD VANDRIE

KD VanDrie is a professional in the fields of pilot training, human factors, data analysis, and risk management. With a career spanning over four decades, KD’s expertise goes beyond the cockpit, extending to a genuine passion for improving communications and helping others achieve their goals.

VanDrie is the Founder of Volant Systems Aviation Consulting and was Manager, AQP, for US Airways. She is the co-author, with Brock Bocher, of the book Risk, Safety, Expertise: A Pilot's Journey Into Risk and Resource Management.

Her love of flying started in a small town in northeastern Ohio by watching clouds and the traffic from a nearby airport. At the age of 14 she finally convinced her father to drive her to the airport for her first flying lessons.

In college, after earning her flight instructor certificates, she extended her interests to instructional system design and human performance. Her subsequent work with general aviation, researchers, airline operators, the FAA, and the military provided the opportunity to explore the intricate interplay between technology, human behavior, and operational excellence.

This experience, coupled with VanDrie’s ability to facilitate information from teams of experts, led to a profound transformation in the way aviation professionals perceive and manage risk. From shaping pilot training programs with a human factors lens, to leveraging data analysis for risk management and continuous improvement, her contributions have rippled through military and civilian aviation. Her methodology involves not just addressing technical aspects, but also considers the complex psychological and physiological factors that influence decision-making in high-stakes scenarios.

With an innate ability to connect with audiences, VanDrie not only imparts knowledge but also motivates personal commitments for continuous improvement and innovation. Her ability to communicate complex concepts with clarity has made her expertise accessible to aspiring professionals, enriching the industry's talent pool with well-rounded, safety-conscious individuals including the “hidden heroes” who conduct analysis, develop documents, and implement training systems. 

 

References

Air Transport Association, AQP Subcommittee. 1994. Line Operational Simulation: LOFT Scenario Design and Validation. Washington, DC: Air Transport Association.

American Society for Training and Development (ASTD), now Association for Talent Development (ATD). 2013. ASTD Competencies for the Training and Development Profession. Alexandria, VA: ASTD/ATD.

Bowsher, J.E. 1989A. Educating America: Lessons Learned for the Nation’s Corporations. New York, NY: John Wiley.

Bowsher, J.E. 1989B. Speaking with Jack E. Bowsher, Director of Education, IBM. Instructional Delivery Systems, May-June 1989, pp. 10-14.

Brown, A. and Green, T.D. 2006. The Essentials of Instructional Design: Connecting Fundamental Principles with Process and Practice. Columbus, OH: Pearson Prentice Hall.

Carraccio, C., Englander, R., Van Melle, E., ten Cate, O., Lockyer, J. and Chan, M.K. 2016. Advancing Competency-Based Medical Education: A Charter for Clinician Educators. Academic Medicine, Volume 91, Number 11, pp. 645-649.

Clark, R. C. 2010. Evidence-Based Training Methods: A Guide for Training Professionals, 2nd edition. Alexandria, VA: American Society for Training and Development.

Cleary, M.N. 2015. Faculty and Staff Roles and Responsibilities in the Design and Delivery of Competency-Based Education programs. http://works.bepress.com/navarrecleary/14.

Curtis, M. and Jentsch, F. 2010. Line Operations Simulation Tools. In: Kanki, B., Helmreich, R. and Anca, J. (Eds.), Crew Resource Management, Second Edition. San Diego, CA: Academic Press, pp. 265-284.

Department of the Air Force. 1974. Handbook for Designers of Instructional Systems. Author. Washington, DC: United States Air Force.

Department of the Air Force. 1994. Instructional Systems Development. Air Force Manual 36-2234. Author. Washington, DC: United States Air Force.

Department of the Army. 2004. Systems Approach to Training Analysis. Training and Doctrine Command (TRADOC) Pamphlet 350-70-6. Author. Fort Monroe, VA.

Department of the Army. 2018. Army Educational Processes. Training and Doctrine Command (TRADOC) Pamphlet 350-70-7. Author. Fort Monroe, VA.

Dick, W.O., Carey, L. and Carey J.O. 2022. Systematic Design of Instruction (9th edition). Boston, MA: Allyn & Bacon.

Farrow, D.R. 2010. A Regulatory Perspective II. In: Kanki, B., Helmreich, R. and Anca, J. (Eds.), Crew Resource Management, Second Edition. San Diego, CA: Academic Press, pp. 361-378.

Federal Aviation Administration. 2015. Flight Crew Member Line-Operational Simulations: Line-Oriented Flight Training, Special Purpose Operational Training, Line Operational Evaluation (Advisory Circular AC 120-35D). Author, Washington, DC.

Federal Aviation Administration. 2017. Advanced Qualification Program (Advisory Circular 120-54A, Change 1). Author, Washington, DC.

Frank, J.R., Snell, L.S., Cate, O.T., Holmboe, E.S., Carraccio, C. and Swing, S.R. 2010. Competency-Based Medical Education: Theory to Practice. Medical Teacher. Volume 32, Number 8, pp. 638-645.

Gagne, R.M. 1965. The Conditions of Learning. New York, NY: Holt, Rinehart and Winston.

Gagne, R.M., Wager, W.W., Golas, K.C. and Keller, J.M. 2005. Principles of Instructional Design, fifth edition. Belmont, CA: Wadsworth/Thomson Learning.

Grossman, J.R. 1972. Deriving Skill and Knowledge Requirements for Task Analysis in a System Design Environment. Proceedings of the Conference on Uses of Task Analysis in the Bell System, AT&T Company Human Resources Laboratory, Training Research Group.

Hamman, W.R. 2010. Line Oriented Flight Training (LOFT): The Intersection of Technical and Human Factor Crew Resource Management (CRM) Team Skills. In: Kanki, B., Helmreich, R. and Anca, J. (Eds.), Crew Resource Management, Second Edition. San Diego, CA: Academic Press, pp. 233-263.

International Air Transport Association. 2021. Dangerous Goods Regulations, Appendix H - Dangerous Goods Training Guidelines: Competency-Based Training and Assessment Approach. International Air Transport Association.

International Board of Standards for Training, Performance, and Instruction. 2000. Instructional Design Competencies: The Standards. Batavia, IL.

International Board of Standards for Training, Performance, and Instruction. 2001. Training Manager Competencies: The Standards. Batavia, IL.

International Board of Standards for Training, Performance, and Instruction. 2003.  Instructor Competencies: The Standards. Volumes I and II. Batavia, IL.

International Board of Standards for Training, Performance, and Instruction. 2006. Evaluator Competencies: The Standards. Batavia, IL.

International Civil Aviation Organization (ICAO). 2013. Manual of Evidence-Based Training (Document 9995, first edition). Author. Montreal, Canada.

International Civil Aviation Organization (ICAO). 2020. Training (Document 9868, third edition). Author. Montreal, Canada.

International Labor Organization. 2020. Competency-Based Training (CBT): An Introductory Manual for Practitioners. Amman, Jordan: International Labor Organization.

Kearns, S.K., Mavin, T.J. and Hodge, S. 2016. Competency-Based Education in Aviation: Exploring Alternative Training Pathways. Burlington, VT: Ashgate.

Kearsley, G.P. 1984. Training and Technology: A Handbook for Professionals. Reading, MA: Addison-Wesley.

Klein, J.D., Spector, J.M., Grabowski, B. and de la Teja, I. 2004. Instructor Competencies: Standards for Face-to-Face, Online, and Blended Settings, 3rd edition. Greenwich, CT: Information Age Publishing.

Markle, D.G. 1967. The Development of the Bell System First Aid and Personnel Safety Course. Palo Alto, CA: American Institutes for Research, AIR-E81-4/76-FR.

McClelland, D.C. 1973. Testing for Competence Rather than for Intelligence. American Psychologist, Volume 28, Number 1, pp. 1-14.

Mills, R.W. 2010. The Promise of Collaborative Voluntary Partnerships: Lessons from the Federal Aviation Administration. Washington, DC: IBM Center for The Business of Government.

Russ-Eft, D.F., Bober, M.J., de la Teja, I., Foxon, M. and Koszalka, T.A. 2008. Evaluator Competencies: Standards for the Practice of Evaluation in Organizations. San Francisco, CA: Jossey-Bass/Wiley.

Sultan, S., Morgan, R.L., Murad, M.H., Falck-Ytter, Y., Dahm, P., Schunemann, H.J. and Mustafa, R.A. 2019. A Theoretical Framework and Competency-Based Approach to Training in Guideline Development. Journal of General Internal Medicine, Volume 35, Number 2, pp 561-567.

Taleb, N.N. 2005. Fooled by Randomness: The Hidden Role of Chance in Life and in Markets, 2nd edition. New York: NY: Random House.

Taleb, N.N. 2010. The Black Swan: The Impact of the Highly Improbable, 2nd edition. New York, NY: Random House. 

Taleb, N.N. 2012. Anti-Fragile: How to Live in a World We Don’t Understand. New York, NY: Penguin Books.

Taylor, B. and Ellis, J. 1991. An Evaluation of Instructional Systems Development in the Navy. Educational Technology, Research and Development, Volume 39, Number 1, pp. 93-103. 

Transportation Research Board of the National Academies. 2011. Improving Safety-Related Rules Compliance in the Public Transportation Industry. Washington, D.C.: Federal Transit Administration.

Weitzel, T.R. and Lehrer, H.R. 1992. A Turning Point in Aviation Training: The AQP Mandates Crew Resource Management and Line Operational Simulations. Journal of Aviation/Aerospace Education & Research. Volume 3, Number 1, pp. 14-20.
