Optimal Acquisition and Assessment of Proficiency on Simulators in Surgery

Dimitrios Stefanidis, M.D., Ph.D., FACS, FASMBS, examines issues supporting the use of simulators in medical training.

The present paradigm shift from resident training using the Halstedian model, “see one, do one, teach one,” to training using surgical simulators results from concerns for patient safety, limited resident work hours, pressures on teaching faculty for more productivity, the cost of operating room training, and the need for objective assessment of trainees’ skill in an environment of constantly changing technology. The effectiveness of simulator-based training depends mainly on the quality of the curriculum, which brings the simulator to life and ensures learning. Designing optimal skills curricula requires an understanding of how manual skills are best acquired and of the best ways to measure performance on simulators, both of which have a profound impact on learning. This article discusses the application of motor learning theory to simulator learning, important curricular elements for skill acquisition, and methods of simulator performance assessment that maximize learning and clinical skill transfer.

Motor Learning Theories

According to Fitts and Posner, complex manual skills are acquired in three stages: the cognitive stage (the trainee reads about and watches demonstrations of the task), the associative stage (the trainee translates knowledge into task performance by associating cognitive elements with musculoskeletal maneuvers), and the autonomous stage (task completion requires minimal attentional resources). Learning progresses sequentially through these stages, but the learning rate can vary substantially.

Specific to simulators, Gallagher et al. described eight steps important to surgical skills curricula: 1. Didactic learning of relevant knowledge; 2. Information about the steps to task completion; 3. Illustration of common errors; 4. Testing of previously acquired knowledge; 5. Technical skills training on the simulator; 6. Immediate feedback about errors; 7. Summative feedback about errors; and 8. Repeated training sessions, with progress illustrated at the end of each session against defined proficiency goals.

McClusky and Smith developed another sequential, progressive approach to curriculum development. The cognitive elements of the task are taught first, followed by testing of the trainees’ innate abilities. Simulator-based training follows to translate cognitive skills into motor skills. Initial instructor-guided training with feedback is followed by independent practice until predefined proficiency criteria are reached. Skills increase in complexity until simulation training is complete. After performance benchmarks are achieved in the skills laboratory, trainees transition to the operating room.

Similarly, Aggarwal et al. proposed a competency-based assessment system. Acquisition of procedure-specific knowledge is followed by testing. The task is deconstructed into its key components to facilitate learning, a procedure video is provided, and tools for objective performance assessment are defined. Training models are developed and validated; proficiency-based training takes place in the skills laboratory, and the acquired skills are then transferred to the operating room.

Factors Affecting the Effectiveness of the Curriculum

Although the above theories provide a framework for simulation curricular development, several factors should be considered to optimize skill acquisition and transfer.

Deliberate Practice and Learner Motivation

Deliberate practice, in which the learner monitors his or her performance and reacts to immediate feedback, is essential for skill acquisition. Internal and external motivation for deliberate practice is needed, and factors that positively affect motivation should be incorporated into skills training. Internal motivation, the most important force for learning, varies among learners and is difficult to modify. Effective external motivators include dedicated, protected practice time away from other responsibilities, healthy competition among residents encouraged by setting performance goals, awards for accomplishments, and simulator requirements that must be met before participation in the clinical environment is allowed. Mandatory participation in skills training may be the most effective motivator for surgical residents, who face many external factors that discourage voluntary practice.

Performance Feedback

For manual skills, feedback is performance-related information provided to the trainee.  Feedback can be intrinsic or extrinsic/augmented.

Intrinsic feedback consists of performance-related information directed to the sensory system of the trainee (visual, auditory, or haptic perceptions during task performance).  Research indicates that interventions that enhance the learner’s internal feedback may improve skill acquisition.


Extrinsic or augmented feedback is performance-related information provided to the performer by an external source that augments intrinsic feedback to improve performance. Augmented feedback motivates the learner to continue the effort needed to achieve the skill.  In medical education, it is an informed, non-evaluative, performance appraisal by a teacher to reinforce strengths and foster improvement by providing information about actions and consequences and the differences between the intended and actual results.  Several authors have provided level 1 evidence that augmented feedback during simulator training results in improved skill acquisition and retention independent of the task.  The quality, timing, and frequency of augmented feedback are very important.  Augmented feedback can be provided in formative (during the performance of the task) or in summative (at the end of the task performance) form. The frequency and duration of either type of feedback can vary and possibly influence performance.  It is important that learners have practice time without feedback to develop learning strategies that can be enhanced by appropriately timed, good quality feedback. Although a reduced feedback frequency appears to benefit skill acquisition, the optimal frequency is unknown and task specific. These areas need further study. Little is known about appropriate delivery methods of performance feedback and the optimal training methods for instructors who provide this feedback.

Task Demonstration

Effective task demonstration shows learners the intricacies of a task and assists them in forming a mental model for how to accomplish it. Video-based education is effective for the acquisition of surgical skills on simulators. Video tutorials provided before and during training are superior to those provided before training only.

Practice Distribution

Practice distribution refers to learning a task in several training sessions separated by periods of rest and has been shown in several studies to be superior to massed practice, in which all training occurs in one session. However, the size of the effect is task dependent and is influenced by the interval between training sessions, and several studies have reported conflicting findings. Because the impact of practice distribution and the inter-training interval on skill acquisition is task specific, additional study is needed to identify the optimal training sequence for various surgical skills. The optimal duration of each training session should also be re-examined, because at present it is chosen arbitrarily.

Task Difficulty and Practice Variability

The literature suggests that skill acquisition is influenced by practice variability and by training under increasing levels of difficulty. On high-fidelity medical simulators, learning improved when trainees practiced at progressive levels of difficulty. On a minimally invasive surgery (MIS) virtual reality simulator, training at the medium difficulty level improved skill acquisition compared with training at the easy level. Progressively increasing difficulty has also been shown to be optimal for curricular design.

Contextual interference refers to the learning effect of random versus blocked practice. Learning is increased when different tasks are practiced in random order rather than in a fixed sequence and when training incorporates practice variability; however, the effect is not consistent and depends on task complexity and other factors.

Proficiency-based Training

Proficiency-based simulator training improves operative performance and is considered by many experts the ideal training paradigm on simulators. Proficiency-based curricula set training goals derived from expert performance and give learners a defined performance target to achieve. By providing targets and immediate feedback, such curricula allow learners to compare their performance with the target, which promotes deliberate practice, boosts motivation, and enhances skill acquisition. Proficiency-based training also tailors training to the individual learner’s needs and produces uniform skill levels because all learners must meet the same objective goals.

Traditional training paradigms such as time-based curricula (which set a specific training duration) and repetition-based curricula (which set a minimum number of repetitions) do not take individual learning differences into account and use arbitrary training endpoints. Given that learners have different baseline abilities, experience, and motivation, such curricula can lead to inadequate training or overtraining. Empirical evidence supports the superiority of proficiency-based training. The way goals are defined may also be important for skill acquisition.
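To make the contrast concrete, the sketch below shows one way a proficiency-based endpoint might be encoded: training continues until expert-derived time and error benchmarks are met on a set number of consecutive repetitions, rather than ending after an arbitrary duration or repetition count. The benchmark values, the consecutive-repetition rule, and the function and field names are illustrative assumptions, not a published standard.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    time_sec: float  # task completion time in seconds
    errors: int      # errors committed during the attempt

def met_proficiency(attempts: list[Attempt],
                    expert_time: float,
                    expert_errors: int,
                    consecutive: int = 2) -> bool:
    """Return True once expert-derived time and error benchmarks are met
    on the required number of consecutive repetitions."""
    streak = 0
    for a in attempts:
        if a.time_sec <= expert_time and a.errors <= expert_errors:
            streak += 1
            if streak >= consecutive:
                return True
        else:
            streak = 0  # a failed attempt resets the streak
    return False

# Hypothetical example: expert benchmark of 120 seconds with no errors,
# required on two consecutive repetitions.
history = [Attempt(160, 2), Attempt(130, 0), Attempt(118, 0), Attempt(115, 0)]
print(met_proficiency(history, expert_time=120, expert_errors=0))  # True
```

Under this kind of rule, a fast but inconsistent learner keeps practicing until performance is stable, while a learner who reaches the benchmark early stops without overtraining.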

Performance Assessment

Many questions regarding expertise in surgery need to be answered using high quality studies. How should expertise in surgery be defined and measured? How should proficiency levels based on expert performance be established? How should acquired expertise on simulators be detected in learners? Should expert levels be used as simulator-training endpoints or are less difficult performance levels more appropriate? Only with appropriate metrics to identify superior performance can these questions be answered.

Traditionally, skills curricula have used the easily obtainable metrics of task duration and errors for performance assessment; however, these metrics provide no insight into the effort invested to achieve the performance goal or into whether learning is complete, and they can produce a misleading picture of the trainee’s readiness to transition to the more stressful clinical environment. Several studies have shown that surgical trainees achieved expert-level performance based on time and error metrics on simulators, but their performance fell short of expert skill in the stressful conditions of the operating room. This shortfall may reflect the difficulty of determining when simulator learning is truly complete. More sensitive performance metrics such as limb kinematics, global rating scales, psychophysiologic measures, and measures of mental workload may provide complementary performance assessment that augments skill acquisition and transfer.

Observer Ratings

Surgical performance can be reliably assessed by an experienced observer using global rating scales, visual analog scales, checklists, or a combination of these. These instruments are versatile, and some can be applied across similar tasks. However, because the assessment relies on subjective ratings, the instruments must have proven reliability and validity before they are used for assessment. Global rating scales are superior to checklists for technical skill evaluation when completed by experts. Validated rating scales for technical skill assessment and for global operative assessment of laparoscopic skills should be incorporated into simulator training. However, the relationship between this type of assessment and other, more objective performance metrics is not well studied.

Automaticity

Automaticity is the ability to perform motor acts automatically, leaving enough attentional capacity to engage in other activities. Automaticity is a characteristic of expert performers and has been used in the literature to confirm learning by novices. Novices practicing a new task often operate at maximum attentional capacity and cannot attend to other stimuli in their environment. To accurately measure automaticity and spare attentional capacity, a secondary task must compete for the same attentional resources as the primary task. Performance on the secondary task then reflects how much attentional capacity remains after the demands of the primary task are met. Several studies show that the traditional metrics of time and errors are good performance measures during the early learning stages; however, more sensitive metrics such as secondary-task performance are needed for a complete assessment of performance.
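As a simple illustration of the secondary-task approach, the sketch below expresses secondary-task performance under dual-task conditions as a fraction of single-task baseline performance. The index itself, the prompt-detection secondary task, and the numbers used are hypothetical illustrations rather than a validated metric.

```python
def spare_capacity_index(secondary_dual: float, secondary_baseline: float) -> float:
    """Express secondary-task performance under dual-task conditions as a
    fraction of single-task baseline performance. Values near 1.0 suggest
    ample spare attention; values near 0 suggest the primary task consumes
    nearly all attentional capacity."""
    if secondary_baseline <= 0:
        raise ValueError("baseline performance must be positive")
    return secondary_dual / secondary_baseline

# Hypothetical example: a trainee detects 18 of 30 visual prompts while
# suturing on the simulator, versus 28 of 30 when responding to prompts alone.
print(round(spare_capacity_index(18 / 30, 28 / 30), 2))  # 0.64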

Workload Assessment

Learner performance can be influenced by task workload, performance anxiety, and stress. Task workload, which can increase operator fatigue and frustration and compromise attention span, is higher during early learning and decreases with experience. With high workload, the ability to deal with unexpected demands can be impaired and performance errors may increase.

The National Aeronautics and Space Administration Task Load Index (NASA-TLX), first used in flight simulation, is a validated tool for workload self-assessment. It measures a task’s mental, physical, and temporal demands and the trainee’s effort, frustration, and perceived performance on a 20-point visual analog scale. Evidence suggests that the NASA-TLX provides a reliable measure of workload, task difficulty, and learner comfort during simulator training and during the transition to the operating room. Because workload affects performance, and because this tool provides performance information not otherwise available to learners and trainers, it should be incorporated into simulator-based surgical skills assessment.
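For illustration, the sketch below computes an unweighted ("raw") NASA-TLX score by averaging the six subscale ratings and expressing the result as a percentage of the scale maximum; the full instrument also supports pairwise weighting of the subscales, and the example ratings shown are hypothetical.

```python
def raw_tlx(mental: float, physical: float, temporal: float,
            performance: float, effort: float, frustration: float,
            scale_max: float = 20.0) -> float:
    """Average the six NASA-TLX subscale ratings into a single raw workload
    score, expressed as a percentage of the scale maximum."""
    ratings = [mental, physical, temporal, performance, effort, frustration]
    if any(r < 0 or r > scale_max for r in ratings):
        raise ValueError("each rating must lie within the rating scale")
    return sum(ratings) / len(ratings) / scale_max * 100

# Hypothetical ratings after a simulated laparoscopic suturing session.
print(round(raw_tlx(16, 12, 14, 13, 17, 15), 1))  # 72.5 (high workload)
```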

Performance Anxiety and Physiologic Measures of Performance

Mental stress is a possible contributor to technical errors and inferior performance by surgeons, perhaps more so during minimally invasive than during open procedures. Because stress is difficult to evaluate subjectively, several studies have successfully used physiologic measures, such as heart rate, to indicate stress objectively. Aviation studies have concluded that heart rate is the most useful psychophysiologic variable for assessing pilot workload and mental strain. A recent study linked incomplete transfer of simulator-acquired skills to the operating room with a significant increase in the surgical trainee’s heart rate in the operating room compared with the simulator. Because these metrics may provide additional information on learner performance, they should also be considered during simulator training.

Summary

Many factors have been shown to optimize surgical skills curricula and the trainee’s learning so that learner proficiency can be achieved. Available studies regarding the best performance-assessment methods suggest that the incorporation of additional, more sensitive performance metrics may improve skill transfer. Simulator curricula that take into account all the factors discussed here can optimize skill acquisition and learner readiness for success in the operating room.

Editor’s note: Dr. Dimitrios Stefanidis is Director, Carolinas Simulation Center, Carolinas HealthCare System, Charlotte, North Carolina. He may be contacted at Dimitrios.Stefanidis@carolinashealthcare.org. Dr. Stefanidis attended Aristotelian University of Thessaloniki, Greece, completed a General Surgery Residency at the University of Texas, San Antonio, and completed Fellowships at Tulane University in Minimally Invasive Surgery and at the Carolinas Laparoscopic and Advanced Surgery Program.
