Don Combs, Ph.D., Vice President and Dean of the School of Health Professions at Eastern Virginia Medical School, provides a groundbreaking examination of the return on investment for expenditures in medical simulation.

Eastern Virginia medical student reviews patient history in a simulation center. (Photo: Author)

The emerging discipline of Modeling and Simulation (M and S) offers substantial opportunities to improve professional training in health care. The predominant training model in the medical and health professions remains a centuries-old pedagogical combination of lectures, small group discussions, and apprenticing, wherein learners with uncertain competence practice on real patients under the supervision of more experienced mentors. In this model, learning and mistakes occur in real time and affect real patients.

The use of simulation in its various training manifestations is growing and offers a new paradigm for providing medical training across many contexts. A paradigm in which trainees deliberately practice cognitive decision-making and procedural techniques in simulated health care settings, achieving competence, if not mastery, before they encounter real patients, has great appeal. Rosen and others (2012) elaborated on this point in MEdSim issue 1, 2012. Most educational institutions and many hospital systems that train medical and health professionals are exploring the use of M and S technology. They believe it can facilitate effective training that reduces costs, reduces reliance on animal and live-tissue training, and minimizes risks across the range of professional and organizational practice.


The promise of M and S is great, as attested through many other articles in MEdSim and the broader literature on medical simulation. If more tangible proof of the increasing enthusiasm for medical simulation is needed, the growth of medically oriented exhibits and attendees at the 2011 I/ITSEC and 2012 IMSH conferences for the simulation community should suffice. About 80 medical simulation organizations exhibited at each conference and the breadth of available simulators was stunning, as was the ever-growing number of conference attendees. That said, it is reasonable to ask for evidence that a new approach, such as simulation, is effective in fulfilling training objectives and, once implemented, is more cost-effective than current or other approaches to training.


M and S technologies represent a significant investment of capital, time, and other resources, and not all M and S training systems yield the same results. The increasing reliance on M and S technologies as an alternative approach to medical training makes it essential that we be able to assess the expected return on investment (ROI) in the context of specific training objectives and, conversely, to facilitate the design of training systems that yield the best ROI within given cost and resource constraints. ROI is a summary performance measure used to evaluate the efficiency of an investment and to compare the relative efficiencies of different investments. For example, if the ROI is 50 percent for one investment and 75 percent for another, the presumption is that the second investment is more efficient, that is, better. The basic formula for calculating ROI is: ROI = [(Gain from Investment - Cost of Investment) / Cost of Investment] x 100.
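The arithmetic of the basic formula is easily sketched in a few lines of Python; the dollar figures below are hypothetical, chosen only to illustrate the calculation:

```python
def roi_percent(gain_from_investment, cost_of_investment):
    """ROI as a percentage: net benefit divided by cost, times 100."""
    net_benefit = gain_from_investment - cost_of_investment
    return net_benefit / cost_of_investment * 100.0

# A hypothetical training program costing $40,000 that yields $70,000
# in monetized benefits returns 75 percent on the investment.
print(roi_percent(70_000, 40_000))  # 75.0
```

As the article notes, the hard part is not this division but arriving at a defensible dollar value for "gain from investment" in the first place.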

Although the concept appears to be straightforward, the implementation of the calculus in a particular setting, with multiple stakeholders and value judgments is, or can become, complicated.

Jack Phillips provides a comprehensive, yet not too complex, discussion of ROI in his 2003 book, Return on Investment in Training and Performance Improvement Programs. He describes the process of calculating ROI as a fifth level of the evaluation framework that has been widely accepted in the education and training domains since first published by Kirkpatrick in 1959. This framework, as modified by Phillips, has five levels of evaluation that range from the basic “How do you feel about the training?” to the comprehensive evaluation of ROI, as shown in Table 1.


Table 1

Definitions of Evaluation Levels

1. Reaction and Planned Action: Measures participants’ reaction to the program and outlines specific plans for implementation.
2. Learning: Measures changes in skills, knowledge, or attitudes.
3. Application and Implementation: Measures changes in behavior on the job and specific applications and implementations.
4. Business Impact: Measures the business impact of the program.
5. Return on Investment: Compares the monetary value of the results with the costs of the program, usually expressed as a percentage.

The evaluation process is not really complete until ROI is calculated. The basic ROI model as explained by Phillips (2003) involves the conversion of a variety of evaluation criteria into monetary terms as shown in Table 2. The challenge is, of course, immediately apparent. For the model to work correctly, a multitude of measures of satisfaction, learning, performance in practice, and effectiveness in the business of providing health care have to be converted appropriately to dollar values and the measures have to be isolated from the influence of other environmental factors. Therein is the rub of calculating ROI: How sensible is the conversion of multiple variables, some quantitative and some qualitative, and how confident are the users in the conversion process?


The balance of this article provides an overview of M and S within the medical context and of the most common methods of calculating the ROI of simulations. (The term simulation is intended to incorporate a training event that employs one or more simulators as a means to fulfill training objectives—that is, the calculus of an ROI may include the costs of one or more individual simulators.)


What is Medical Simulation and What are its Attributes?

A model is a physical, mathematical or logical representation of a system, entity, phenomenon, or process. A simulation is the implementation of a model over time. The U.S. Department of Defense (DoD), possibly the world’s largest user of simulation, has established three classes of simulation—virtual, constructive, and live. Virtual simulations represent systems both physically and electronically—think Wii and Kinect games. Constructive simulations represent systems through mathematical and decision-based models—think computer programs with drop-down menus. Live simulation uses real people and machines. Although this widely accepted schema is helpful, most readers will find Chakravarthy’s (2011) categorization of five types of medical simulation to be more helpful:

  • Standardized Patients (real actors who simulate medical conditions and scenarios);
  • Partial-task Trainers (devices that simulate tasks such as insertion of catheters and endoscopic procedures);
  • Mannequins (high fidelity, computer-based patient simulators);
  • Screen-based Computer Simulators (video games focused on medical issues);
  • Virtual Reality Simulators (a combination of all of the above within a specified context—that is, an operating room, a battlefield setting, etc.)

Achieving wide consensus on an agreed-upon taxonomy of medical simulations is not merely an academic exercise. Determining the ROI of medical modeling and simulation begins with assigning a simulation to a category that is being investigated. All simulations share a broadly comparable set of core cost factors, benefits, system capabilities and metrics. Each category, however, also contains a set of costs, benefits, system capabilities, and metrics unique to that class of simulation. Such a schema enables fair comparisons of simulations within a specific class, such as comparing one part-task trainer to another, and comparisons across simulation classes, e.g., determining whether virtual patients are more efficient than standardized patients.


Identifying the category of a medical simulation (and thus its attributes) is an important step in the calculus of ROI. Gaba (2004) developed a description of medical simulation applications that consists of eleven categories. The applications described by Gaba (including parenthetical examples of some of the dimensions of each application category) are:


  • Aims and Purposes of the Simulation Activity (education, training, performance assessment, clinical rehearsal, research);
  • Unit of Participation (individual, team);
  • Experience Level of Participants (college, continuing education);
  • Health Care Domain (procedural surgery, dynamic high-hazard ICU);
  • Professional Discipline of Participants (physician, nurse, physician assistant);
  • Type of Knowledge, Skills, Attitudes or Behaviors Addressed (decision-making, technical skills);
  • The Simulated Patient’s Age (neonates, elderly);
  • Technology Applicable or Required (see Chakravarthy, 2011);
  • Site of Simulation (home, work unit);
  • Extent of Direct Participation (remote viewing, immersive participation);
  • Method of Feedback Used (none, contemporaneous, video-based post hoc).

A modified version of Gaba’s applications might be used to specify the relevant attributes of a simulation that can then be assigned a weighted value for the purposes of determining ROI.


How is ROI Calculated?

A comprehensive model for calculating ROI requires making a value judgment on a variety of attributes such as those described by Gaba and those other factors included in Table 2. In the terminology of analysis, this involves using a Multi Attribute Decision Making (MADM) model. That is, the method for calculating ROI has to allow for the systematic weighting of a broad variety of attributes. Although there are numerous MADM models to choose among, the three most commonly used are Simple Additive Weighting (SAW), the Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS).


Each method begins with the specification of the criteria that will be used to evaluate the simulation. Subject matter experts (SMEs) such as physicians, nurses, and educators need to determine the specific criteria that are appropriate for the evaluation of a particular simulation.


In an organization, there are multiple stakeholders, each with a different perspective that needs to be taken into account. One way of conceptualizing the levels and types of stakeholders whose values affect the determination of ROI, ranging from the program level (an operating room) to the societal level (outcomes across the population), follows:

  • Program stakeholders focus on the usability of a simulation;
  • Community stakeholders focus on managing M and S within specific areas, such as effecting a reduction in medical errors across an emergency department;
  • Enterprise stakeholders focus on M and S capabilities that apply across the entire spectrum of activities of the organization;
  • Federal stakeholders focus on M and S as it affects the operations and outcomes of activities funded by the U.S. Government;
  • Society stakeholders focus on the role and impact of M and S on the costs and outcomes of the health care system as a whole.

A complete analysis of ROI needs to incorporate the criteria and value weighting of each of these types of stakeholders.


Once the criteria are established, they must be weighted because each criterion has an importance that needs to be reflected in the ROI calculus. SMEs are generally used to determine the value weights associated with each criterion. Once the weights are established, summing all weights and then dividing each individual weight by the sum will normalize each weight. The next step is to calculate a score for each criterion for each simulation. The three most common methods of calculating ROI are summarized in the following paragraphs.
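The normalization step just described is straightforward to express in code. A minimal sketch, using hypothetical criteria and hypothetical SME-assigned raw weights:

```python
def normalize_weights(raw_weights):
    """Divide each SME-assigned weight by the sum of all weights so that
    the normalized weights total 1.0."""
    total = sum(raw_weights.values())
    return {criterion: weight / total for criterion, weight in raw_weights.items()}

# Hypothetical raw importance ratings elicited from subject matter experts:
raw = {"learning gain": 8, "acquisition cost": 5, "patient safety": 7}
weights = normalize_weights(raw)  # e.g., "learning gain" -> 8/20 = 0.4
```

Normalizing in this way ensures that the weighted scores produced later are comparable across simulations, regardless of the rating scale the SMEs happened to use.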


The first method of calculating ROI to be considered is Simple Additive Weighting (SAW), also known as the weighted linear combination or scoring method. It is simple and is the most often used multi-attribute decision-making technique (Afshari et al., 2010). The SAW method has developed over time and was initially used only with quantitative criteria. A weight would be determined for each criterion and then multiplied by the quantitative value of the criterion. The products would then be summed to determine the overall score of the alternative. Several major drawbacks to the SAW method are readily apparent; for example, SAW cannot easily be used with qualitative criteria (i.e., professional judgment). Even among quantitative criteria, significant differences in the weighting of measures can cause extraordinary differences in results. In response to these concerns, the SAW method has evolved and can now support both quantitative and qualitative criteria. The advantage of this method is that it is a linear transformation of raw data (Afshari et al., 2010). Thus, some of the key characteristics of SAW are that it is the best known and most widely used method, it uses all the attribute values of an alternative, and it uses the relatively simple mathematical operations of multiplication and addition (Sopadang, 2011).
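A minimal sketch of the SAW calculation, assuming two hypothetical simulators whose criterion scores have already been placed on a common 0-to-1 scale:

```python
def saw_score(scores, weights):
    """SAW: multiply each criterion score by its normalized weight and sum."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

# Hypothetical normalized weights (summing to 1.0) and criterion scores:
weights = {"fidelity": 0.5, "cost": 0.2, "throughput": 0.3}
simulator_a = {"fidelity": 0.9, "cost": 0.4, "throughput": 0.7}
simulator_b = {"fidelity": 0.6, "cost": 0.9, "throughput": 0.8}

# The alternative with the higher weighted sum is preferred.
print(saw_score(simulator_a, weights))  # approximately 0.74
print(saw_score(simulator_b, weights))  # approximately 0.72
```

The multiplication-and-addition simplicity noted by Sopadang is evident: the whole method is a single weighted sum per alternative.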


The Analytical Hierarchy Process (AHP) was created due to the limitations of SAW. AHP allows users to consider and compare both qualitative and quantitative criteria when making multi-attribute decisions. AHP is similar to SAW in that it is based on a weighted average that is calculated for each alternative by multiplying the value of the alternative by the weighted value of the criterion. The major difference is AHP uses pair-wise comparisons to calculate both the weights of the criteria and the score for each alternative for each criterion. AHP has also evolved over time, although it still uses pair-wise comparisons to populate its matrices. In fact, one of the advantages of AHP is that the pair-wise comparison process allows a user to generate the weights in a consistent manner (McCafrey, 2005).


Another method for making multi-attribute decisions is the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). TOPSIS has been widely used to solve various multiple attribute decision-making and multiple criteria decision-making problems (Pie et al., 2010). Unlike SAW and AHP, which determine a summary weighted score for each alternative, TOPSIS is based on the concept that the chosen alternative should have the shortest distance from the positive-ideal solution and the longest distance from the negative-ideal solution (Yoon et al., 1995).
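A minimal sketch of the TOPSIS scoring step, assuming for simplicity that every criterion is a benefit criterion (higher is better) and using hypothetical scores for two simulators:

```python
import math

def topsis_scores(matrix, weights):
    """Score each alternative by its relative closeness to the positive-ideal
    solution; all criteria are treated as benefit criteria (higher is better)."""
    n_alt, n_crit = len(matrix), len(weights)
    # Vector-normalize each criterion column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    weighted = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
                for i in range(n_alt)]
    ideal = [max(col) for col in zip(*weighted)]       # positive-ideal solution
    anti_ideal = [min(col) for col in zip(*weighted)]  # negative-ideal solution
    scores = []
    for row in weighted:
        d_pos = math.dist(row, ideal)       # distance to positive-ideal
        d_neg = math.dist(row, anti_ideal)  # distance to negative-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical scores for two simulators on three criteria:
ranking = topsis_scores([[7, 9, 8], [8, 7, 6]], [0.5, 0.3, 0.2])
# Each score falls between 0 and 1; closer to 1 means closer to the ideal.
```

In a real evaluation, cost-type criteria (where lower is better) would need to be inverted before scoring, a detail omitted from this sketch.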


As the reader may surmise, the calculus of ROI is complicated, involving many judgments and many variables and thus many opportunities for error. The basic take-home message is that the “user should be wary.”


What difference does ROI make?

The ultimate decision on whether to proceed with an investment in simulation often depends most heavily on the financial analysis. If an investment does not appear to add measurable value to an organization in terms of benefits outweighing costs, then a project rarely gains approval. The merits of any business venture always boil down to the “numbers” and, as illustrated above, there are several valuation tools and methodologies that describe how to calculate the return on investment.

Assessing the value of simulation has long been described in terms of verification (Is a simulation built correctly?), validation (Was the right thing built?), and accreditation (Does it meet the needs of the trainers?). As substantial additional resources are shifted toward the emerging paradigm of modeling and simulation, the calculation of ROI, comparing M and S approaches to one another and to the current approach for training, will become an essential fourth question — What do we get out of it?


The real challenge is to take a simple concept, ROI, and a very complicated, assumption-based, and error-prone multi-attribute calculus and do something that is helpful to decision makers. No manager or health professional wants to make an investment that costs more than it returns. At the same time, most organizations are not willing to spend the 5 percent premium that Phillips estimates a comprehensive ROI analysis costs, and analysts have not yet developed user-friendly software that does for the complicated chore of calculating the ROI of medical simulation what TurboTax does for the equally daunting U.S. Tax Code.


About the Author

Dr. Donald Combs is Vice President and Dean of the School of Health Professions at Eastern Virginia Medical School. He also oversees the National Center for Collaboration in Medical Modeling and Simulation and the Sentara Center for Simulation and Immersive Learning.