To publish an innovation, health professions educators first have to conduct a high-quality evaluation of something designed to improve an aspect of continuing education, including faculty development, whether a longstanding program that has been evaluated over time or a leading-edge innovation that addresses a continuing education challenge. This editorial is a guide for educators who want to conduct a program evaluation of an innovation and then successfully publish their findings in JCEHP. It is important to understand what JCEHP considers an innovation, what a program evaluation is and is not, whom a program evaluation is for, and what guiding considerations need to be made. All of these elements are part of an evaluation plan. Once the plan is in place and the evaluation has been conducted, attention can turn to writing the report, which needs to be descriptive and informative, helping readers advance their understanding of the innovation.
WHAT DOES AN EVALUATOR NEED TO KNOW?
So what is an innovation, really? It starts with a program or phenomenon of interest: an object of evaluation. It can be a workshop, a course, or even the implementation of an entire curriculum. Is it something novel and of interest to others? Perhaps a new model or approach to teaching. Then, there is the evaluation of the program: essentially, does the program work or not, and why? Evaluation is not to be confused with assessment and research. Although these terms are similar, and often confused, there are subtle differences.1 Assessment refers to the measurement of knowledge, skills, attitudes, perceptions, behaviors, or attributes of individuals, groups, teams, or environments. Assessment data can be quantitative or qualitative and can be used for different purposes. Assessment data are almost invariably used in both research and evaluation, but assessment alone does not equate with an evaluation or research study. Research shares many attributes of evaluation, including methodology. However, the goal of research is broader: to establish new knowledge, which includes developing new theories or gaining new understandings of current conditions. If the purpose of your study is to establish new knowledge in health professions continuing education, then you should be submitting your article to the original research or short-report sections of JCEHP. All research studies must first receive approval from one or more research ethics boards (REBs) or institutional review boards (IRBs). Evaluation refers to evaluating innovations designed to produce desired changes in knowledge, skills, behaviors, or the implementation of guiding principles. In evaluation, we make observations and sometimes judgments about innovations, depending on the type of evaluation undertaken. Evaluation can also be used to inform education policy.2
Evaluation studies should receive formal REB or IRB approval if the intent is to publish findings. In some cases, a formal REB or IRB exemption can be sought. Evaluation plans focus on underlying program needs, inputs, processes, and outcomes to help stakeholders make informed decisions related to program planning, improvement, expansion, or termination. To ensure this process is done well, I argue that we need to show the sustainability and longevity of an innovation.
Sustaining Innovations
The successful implementation of an innovation over time is not often explored. This is usually difficult to do because continuing health care education innovations are nestled in complex ecosystems. Most models tend to focus on point-in-time data collection during the implementation of an innovation rather than evaluating the trajectory of an innovation over time, which is important for showing meaningful, lasting change and impact.3 In recent years, Hamza and Regehr have proposed an eco-normalization model to evaluate the longevity of an innovation in a complex environment, giving consideration to the innovation itself, the system in which the innovation is embedded, and the people doing the work. Their model requires investigators or evaluators to ask six critical questions about the innovation (see Figure 1). Innovation submissions to JCEHP that show longevity and sustainability have a greater chance of being sent for peer review. Therefore, the use of the eco-normalization framework is strongly encouraged.
Answering these evaluation questions postimplementation requires evaluators to reflect critically on how the innovation interacts with the principles and values of the system and the people in which it is embedded, allowing us to move beyond the initial implementation. It frames the innovation as a dynamic, iterative set of practices.3 There are different types of evaluation because there are different reasons for conducting an evaluation of an innovation. The eco-normalization model, for example, is a framework well suited to developmental evaluation.
Types of Evaluation
Developmental evaluation positions the evaluator subjectively, as part of the innovation's design and development process. Michael Quinn Patton refers to developmental evaluators as having “skin and soul in the game.”4 The evaluator collects information and provides informal feedback on an ongoing basis to the team members (including stakeholders) on the progress of the innovation or program being designed before, during, and after implementation. Developmental evaluators tend to ask questions such as: How do teams design innovations to meet diverse needs? How are innovations and systems changed over time?
Formative evaluation5 is conducted for the purpose of improving or strengthening a program or innovation, typically during the early to middle phases of implementation. Similarly, process evaluation6 is also conducted in the early to middle phases of implementation; it focuses on how innovations are being implemented and seeks to determine the lessons learned to facilitate successful implementation. In other words, what are the processes involved in implementation?
Summative evaluation5 is implemented in a way that leads to a final evaluative judgment. It is usually conducted at the end of implementation. Outcomes evaluation7 refers to evaluation that determines whether the outcomes, usually longer range, are achieved. Sometimes outcomes evaluation is referred to as impact evaluation. Finally, there is accountability or impact evaluation, which is the type of evaluation funders are interested in because it deals with how inputs, such as resources, are used to develop activities and how the outputs or indicators are accomplished.8
It is important to remember that it is the purpose of the innovation that determines what type of evaluation you are going to do. The purpose and intended use of an evaluation are equivalent to goal setting for an (educational) program.5 Regardless of the type of evaluation you undertake, there are approaches, theories, and frameworks that need to be used to guide the evaluation and data collection methods.
Evaluation Approaches and Frameworks
An evaluation approach or framework provides an orientation or blueprint for what needs to be evaluated. It serves as the background for an evaluation plan.1 Fortunately, there are numerous approaches and frameworks used in health professions education. Before adopting an evaluation approach, you want to consider the intent of the evaluation, the stakeholders involved, and the resources available. Theory explains how and why an innovation or program is expected to succeed. I discuss a few of the more common theoretical approaches and frameworks/models used in health professions continuing education: the utilization-focused approach;5 participatory/collaborative approaches; the context, inputs, processes, and products (CIPP) model;9 Donald Moore's outcomes framework for planning and assessing CME activities;10,11 and the use of a logic model.
The utilization-focused approach focuses the evaluation on the intended use of the results or recommendations by the intended users—usually the decision-makers. The evaluator works with the intended users to determine what kind of evaluation they need. The most appropriate type of evaluation and the methods used are based on the needs of the users. Refer to Michael Quinn Patton's Essentials of Utilization-Focused Evaluation.5
Alternatively, participatory/collaborative approaches involve all those who have a stake in the program, including targeted users (students, residents, faculty, and perhaps patients), not just funders and policy makers. All stakeholders can be involved in designing the evaluation questions and collecting and evaluating data. The advantage of this approach is that the recommendations of the evaluation have a high likelihood of being implemented, and all participants will feel a sense of empowerment that builds evaluation capacity within the organization.2,12
Stufflebeam's CIPP model is a tool commonly used to evaluate education programs large and small.9,13,14 The CIPP model is used not to prove whether a program works, but rather to improve it. If you can say that the inputs are appropriate and the processes are being implemented, you should see that the products are successful. If this is not the case, then you need to work backward and ask yourself questions such as: “Were our products too ambitious?” “Were our processes not implemented entirely as planned?” “Were we under-resourced (inputs)?” or “Did we underestimate what needed to be done (outputs)?”
In 2009, Moore et al11 published a revised framework based on the original seven-level outcomes model, with levels ranging from participation, satisfaction, learning, competence, and performance to patient health and population health. These levels reflect learning, the application of learning in the workplace, and the impact on patient health. This framework is commonly used to guide continuing professional development endeavors and has since been revised10 to emphasize the planning of learning and continuous assessment.
Logic models are blueprints that graphically represent all the components, activities, and anticipated outcomes of a program. They are tools for conducting theory-driven evaluation (as opposed to specifically using a theoretical evaluation approach or model as previously described).2 Logic models identify the problem or need for the evaluation, the inputs or resources needed for the program or innovation, the activities of the program, the outputs (what the program does with the inputs), the outcomes (short-term, intermediate, and long-term benefits to the learners), and finally the impact (system change and even societal benefit, such as improved health care outcomes). An example and description of a logic model can be found in an Academic Medicine Last Page.15
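For readers who find a structured outline helpful, here is a minimal sketch in Python of the logic model components just described, populated with a hypothetical faculty development workshop. The component names mirror the paragraph above; every example entry is an assumption for illustration only, not drawn from any published program.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Minimal representation of the logic model components described above."""
    problem: str                                          # need the innovation addresses
    inputs: List[str] = field(default_factory=list)       # resources required
    activities: List[str] = field(default_factory=list)   # what the program does
    outputs: List[str] = field(default_factory=list)      # what the activities produce
    outcomes: List[str] = field(default_factory=list)     # short/intermediate/long-term benefits
    impact: List[str] = field(default_factory=list)       # system or societal change

# Hypothetical example: a faculty development workshop on feedback skills
model = LogicModel(
    problem="Clinical teachers report low confidence giving feedback",
    inputs=["facilitator time", "teaching space", "funding"],
    activities=["three half-day workshops", "peer observation of teaching"],
    outputs=["40 faculty trained", "feedback checklist distributed"],
    outcomes=["improved feedback skills (short-term)",
              "more frequent, specific feedback to learners (intermediate)"],
    impact=["stronger institutional feedback culture"],
)
```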
Evaluation Methods and Sources of Evidence
Research and evaluation share many data collection and analysis methods. The key point about data collection in any program evaluation is to use multiple methods to get a complete picture from as many sources as possible and to be rigorous in your approach. This may require you to conduct interviews and/or focus groups, administer questionnaires and rating tools, carry out external audits and chart reviews, make site visits and observations, and even use consensus methods for the evaluation of education events or innovations.2 There are many resources available to help guide you—a number can be found in issues of JCEHP and other health professions education journals.16–21
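As a small illustration of drawing on multiple methods and sources, the following hedged sketch (in Python, with entirely hypothetical questions, sources, and methods) pairs each evaluation question with more than one source of evidence so that single-source questions are easy to spot.

```python
# Hypothetical mapping of evaluation questions to (source, method) pairs,
# used only to illustrate building a complete picture from multiple sources.
evidence_plan = {
    "Did the workshop change teaching behavior?": [
        ("faculty participants", "post-workshop questionnaire"),
        ("learners", "focus groups"),
        ("trained observers", "direct observation of teaching"),
    ],
    "Was the innovation implemented as planned?": [
        ("program records", "document and chart review"),
        ("facilitators", "semi-structured interviews"),
    ],
}

# Flag any question that relies on a single source of evidence.
for question, sources in evidence_plan.items():
    if len(sources) < 2:
        print(f"Only one source of evidence for: {question}")
```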
HOW TO CONDUCT AN EVALUATION
Before conducting an evaluation, it is a good idea to have an evaluation plan. Much of this section comes from Incorporating IPCP Teamwork Assessment into Program Evaluation—Practical Guide: Volume 5.1 You need to consider (a) the context, (b) design decisions, and (c) implementation logistics.
Context
Innovation rationale—This is simply your elevator pitch. Why is your innovation needed? What issues/challenges/problems are you trying to address?
Description—Include details such as the length or duration of the activity (if applicable), goals, objectives, materials, and assessments.
Theory—What assumptions guide the evaluation?
Evaluation purpose—What do you need to know about the innovation that you do not know already? What decisions need to be made based on the findings of the evaluation? Please read the scoping review on the importance of context in the health professions in this journal issue.22
Design Decisions
Evaluation questions—What are you trying to find out? Example questions could be “Is your training innovation different at rural and urban training sites?” “What are the barriers and facilitators in implementing your innovation?” and “What aspects of this faculty development innovation had a positive impact on teaching at your institution?”
Required information—What information will best answer your evaluation questions?
Information source—Who is the best source for the required information?
Study methods—Who will collect the information and how? If you are new to evaluation and do not have research experience, you may want to consult with an expert in your locale.
Analytic procedures—How will the data you collect be analyzed? A good evaluator will determine the analysis a priori. Again, talk with a local researcher to help you with the analysis and the methods. Qualitative data analysis generated from interviews and focus groups, for example, can include thematic content analysis, frequency analysis, and coding (both inductive and deductive). Quantitative data analysis from surveys, tests, and other measurement tools can include descriptive and inferential statistics. Consult a researcher or statistician if available. One possible quantitative analysis is sketched below.
Interpretation—Good evaluators will discuss the findings and results with their stakeholders. Many times interpreting data can be subjective, but it is possible to implement benchmarks or standards ahead of time.
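The sketch below illustrates the kind of quantitative analysis mentioned under Analytic procedures: descriptive statistics followed by a simple inferential test on paired pre/post scores. The data, the 1-to-5 scale, and the choice of a paired t-test are assumptions made purely for illustration; your own analysis should be planned a priori with a researcher or statistician, as noted above.

```python
from statistics import mean, stdev
from scipy.stats import ttest_rel  # paired t-test suits pre/post designs

# Hypothetical pre/post self-assessment scores (1-5 scale) from 8 participants
pre  = [2.5, 3.0, 2.0, 3.5, 2.5, 3.0, 2.0, 2.5]
post = [3.5, 3.5, 3.0, 4.0, 3.0, 4.0, 3.0, 3.5]

# Descriptive statistics, decided a priori
print(f"Pre:  mean={mean(pre):.2f}, sd={stdev(pre):.2f}")
print(f"Post: mean={mean(post):.2f}, sd={stdev(post):.2f}")

# Inferential statistics: paired t-test on the pre/post change
result = ttest_rel(post, pre)
print(f"Paired t-test: t={result.statistic:.2f}, p={result.pvalue:.3f}")
```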
Implementation Logistics
Management of the evaluation—You should identify who in your evaluation team will do what. Think of the components of an evaluation: design, ethical study conduct, budget, management, creation and implementation of the evaluation plan, and dissemination of the findings.
Timelines—Just as important as who will do what is when. Drawing up a timeline is very important. Be sure all stakeholders have input into the timeline.
Reporting—To whom and how should you share the findings and recommendations of the report? Consult the Communication and Reporting chapter of The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users.23 It offers many great tips, such as scheduling informal and formal reporting during the evaluation according to information users' needs and selecting or creating guidelines to ensure formal reports are accurate.
HOW TO WRITE YOUR INNOVATION REPORT FOR PUBLICATION IN JCEHP
In conclusion, I want to provide some tips for success in publishing your innovations in JCEHP, based on Curt Olson and Lori Bakken's editorial, “Evaluations of educational interventions: getting them published and increasing their impact.”24 In fact, I strongly encourage authors who are considering designing an evaluation or writing an evaluation paper for JCEHP or any other health professions education journal to read their editorial. A special note here: a quality improvement project may lead to an innovation that can be sustained over time. When this happens and the innovation has been rigorously evaluated according to Olson and Bakken's criteria, then Innovations can be the target section for submission.
TIP 1—Describe Your Context Well
Most innovations and evaluations are situated in a local context, which makes them difficult to publish, so be sure to describe the context well and make a case for how the innovation contributes to the literature. A good description includes why others need to know about this innovation and why an education innovation was needed to address the problem. I refer all authors to Simon Kitto's JCEHP editorial, “What is an Educational Problem? Revisited.”25 Furthermore, it is essential that authors describe the innovation development process and the context in which the innovation is developed and implemented so that readers (and reviewers) can better understand the study.
TIP 2—Use Many Sources of Evidence
Mixed methods are good if performed well. This requires the inclusion of enough detail in the methods section of your paper for readers to be able to determine how rigorous the evaluation is. The key point is to build a complete picture of the innovation from many perspectives. As well, if possible, do not rely entirely on self-report measures. There is value in using objective measures to show change in behavior. A special note here: even if you have received an exemption from your local IRB/REB, you should still describe how ethical issues were considered and addressed.24
TIP 3—Tell an Informative Story
You have spent a great deal of time and energy planning and evaluating your innovation. Take the time when writing your Innovations manuscript to entice readers who may want to replicate your innovation in their own context. To do this, make it descriptive and informative, perhaps even entertaining. You have up to 3000 words to work with. A special note here is to be critical. Remember that educational innovations are situated in complex environments. Please refer to Olson and Bakken's advice on writing the results and discussion sections of your paper. Best of luck on planning, conducting, and reporting your next educational innovation. Readers look forward to your story.
REFERENCES
1. Schmitz C, Collins L, Michalec B, Archibald D. Incorporating IPCP Teamwork Assessment into Program Evaluation Practical Guide. Vol 5. Minneapolis, MN: National Center for Interprofessional Practice and Education, University of Minnesota; 2017.
2. Lovato C, Peterson L. Programme evaluation. In: Swanwick T, Forrest K, O'Brien B, eds. Understanding Medical Education: Evidence, Theory, and Practice. 3rd ed. Oxford: John Wiley & Sons, Ltd; 2019:443–455.
3. Hamza DM, Regehr G. Eco-normalization: evaluating the longevity of an innovation in context. Acad Med. 2021;96:S48–S53.
4. Patton MQ. Blue Marble Evaluation: Premises and Principles. Routledge & CRC Press. Available at: https://www.routledge.com/Blue-Marble-Evaluation-Premises-and-Principles/Patton/p/book/9781462541942. Accessed March 27, 2023.
5. Patton MQ. Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: SAGE; 2012.
6. Ellard D, Parsons S. Process evaluation: understanding how and why interventions work. In: Thorogood M, Coombes Y, eds. Evaluating Health Promotion: Practice and Methods. Oxford: Oxford University Press; 2010.
7. Centers for Disease Control and Prevention. Program Evaluation Guide—Step 3. 2022. Available at: https://www.cdc.gov/evaluation/guide/step3/index.htm. Accessed April 4, 2023.
8. Preskill HS. Building Evaluation Capacity: Activities for Teaching and Training. 2nd ed. Thousand Oaks, CA: SAGE; 2016.
9. Stufflebeam DL. The CIPP model for evaluation. In: Stufflebeam DL, Madaus GF, Kellaghan T, eds. Evaluation Models. Evaluation in Education and Human Services, vol 49. Dordrecht: Springer; 2000.
10. Moore DE, Chappell K, Sherman L, Vinayaga-Pavan M. A conceptual framework for planning and assessing learning in continuing education activities designed for clinicians in one profession and/or clinical teams. Med Teach. 2018;40:904–913.
11. Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29:1–15.
12. Cousins JB, Earl L. The case for participatory evaluation: theory, research, practice. In: Cousins J, Earl L, eds. Participatory Evaluation in Education: Studies in Evaluation Use and Organizational Learning. Falmer; 1995:3–18. Available at: http://legacy.oise.utoronto.ca/research/field-centres/ross/ctl1014/Cousins1995.pdf. Accessed June 29, 2020.
13. Stufflebeam DL. The CIPP model for program evaluation. In: Madaus GF, Scriven MS, Stufflebeam DL, eds. Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Evaluation in Education and Human Services. Springer Netherlands; 1983:117–141.
14. Stufflebeam DL. The CIPP model for evaluation. In: Kellaghan T, Stufflebeam DL, eds. International Handbook of Educational Evaluation. Kluwer International Handbooks of Education. Springer Netherlands; 2003:31–62.
15. Van Melle E. Using a logic model to assist in the planning, implementation, and evaluation of educational programs. Acad Med. 2016;91:1464.
16. Lin E, Gobraeil J, Johnston S, et al. Consensus-based development of an assessment tool: a methodology for patient engagement in primary care and CPD research. J Contin Educ Health Prof. 2022;42:153–158.
17. Artino AR, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36:463–474.
18. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. 2012;34:e288–e299.
19. Stalmeijer RE, McNaughton N, Van Mook WNKA. Using focus groups in medical education research: AMEE Guide No. 91. Med Teach. 2014;36:923–939.
20. Katz-Buonincontro J. How to Interview and Conduct Focus Groups. 1st ed. American Psychological Association; 2022.
21. Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. Hoboken, NJ: Wiley; 2014.
22. Thomas A, Rochette A, George C, et al. Definitions and conceptualizations of the practice context in the health professions: a scoping review. J Contin Educ Health Prof. 2023. [Epub ahead of print].
23. Yarbrough D, Shulha L, Hopson R, Caruthers F. The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users. 3rd ed. Thousand Oaks, CA: Sage Publications; 2010.
24. Olson CA, Bakken LL. Evaluations of educational interventions: getting them published and increasing their impact. J Contin Educ Health Prof. 2013;33:77–80.
25. Kitto S. What is an educational problem? Revisited. J Contin Educ Health Prof. 2019;39:223–224.