Eighteen professionals with experience in implementation science, health economics and/or digital health were invited to participate in the study. Fourteen expressed interest in participating, but two were lost prior to Round 1. The final expert panel of twelve consenting participants provided sufficient representation of the desired expertise: 50% had expertise in implementation science, 50% in health economics and 58% in digital health (Additional file 4: Table 1). Participants included two implementation scientists, one health economist, three digital health specialists and six with experience across multiple fields (Additional file 4: Fig. 1). Participants worked across a range of healthcare disciplines, clinical areas and settings, including nursing, surgery, maternal health, nutrition and dietetics, lung cancer, infectious disease, clinical excellence, and digital health (including telehealth and artificial intelligence). Most participants were female (n = 8, 67%), worked in academic contexts (n = 11, 92%), and were located in Australia (n = 7, 58%) (Additional file 4: Table 2).
In Round 1, consensus was reached on almost all questions; the exceptions were a question asking whether research activities should be considered an implementation cost (Additional file 4: Table 3—question 2.3.1: 42% agreement) and two questions regarding the supporting material ‘Appendix C: Common activities and resources to operationalise implementation strategies’ (Additional file 4: Table 3—question 3.5.1: 42% agreement and question 3.5.2: 50% agreement). Percentage agreements for all questions in Round 1 can be found in Additional file 4: Table 3. Feedback and comments from Round 1 resulted in changes to the costing instrument (discussed below). Round 2 of the e-Delphi was used to obtain consensus on the components that did not reach consensus in Round 1, as well as on additional questions regarding updates made to the instrument in response to Round 1.
The costing instrument was updated in response to the feedback from Round 1, summarised in Additional file 4: Table 4, and consensus was achieved on these updates in Round 2 (Additional file 4: Table 3—question 4.1.1: 100% agreement; question 4.2.1: 92% agreement; question 4.2.2: 92% agreement). Integral questions related directly to the design and components of the instrument, whereas non-integral questions related indirectly to the instrument, such as its use and users. As consensus was reached on these integral questions, the e-Delphi process was terminated after Round 2. Three non-integral items did not reach consensus in Round 2; these are described below. Percentage agreements for all questions in Round 2 can be seen in Additional file 4: Table 3.
Areas of non-consensus
The nature of research costs
The responses from Round 1 indicated that the inclusion of research costs as an implementation cost depends on the study type and rationale. Research costs may include preparing study protocols and ethics applications, recruiting participants, obtaining consent, managing research data, and disseminating research findings. Research costs incurred for the purpose of furthering implementation science knowledge may not be relevant when quantifying implementation costs, as these costs would not extend to other institutions or sites considering the implementation of a particular innovation. Conversely, research costs may be relevant to include as an implementation cost when conducting quality improvement studies or when the intervention would otherwise not be implemented without local evidence to support its safety, efficacy, or cost-effectiveness. As a result of this feedback, it was decided to acknowledge research costs as a potentially relevant implementation cost within the costing instrument, with an explanation that the relevance of research costs is context-specific and should be determined by the user of the instrument. Consensus was achieved on this update to the costing instrument in Round 2 (Additional file 4: Table 3—question 2.1.1: 100% agreement).
The user’s prior implementation science knowledge
The initial implementation costing instrument prototype included supporting material designed to provide reference explanations for the user on implementation science concepts, including implementation phases and common implementation strategies, activities, and resources (Additional file 2: Appendix A, page 3; Appendix B, page 9; Appendix C, page 10). The information on implementation phases reached consensus (Additional file 4: Table 3—question 2.1.2: 75% agreement), but some respondents felt it gave a linear impression of implementation, when such processes are often iterative. Providing examples of common implementation strategies reached consensus (Additional file 4: Table 3—question 3.3.1: 75% agreement), but some respondents suggested including references to key implementation science articles for those lacking foundational knowledge. The purpose of providing examples of common activities and resources was not clear to participants and (as mentioned above) did not reach consensus (Additional file 4: Table 3—question 3.5.1: 42% agreement and question 3.5.2: 50% agreement). The research team considered that the mixed responses to these supporting materials were likely due to ambiguity in the scope of the instrument.
In response to the feedback from Round 1, the research team decided to refine the purpose and content of the instrument to align more clearly with the intended aim of providing practical, user-friendly templates to assist in the collection of appropriate costing data. It was determined that reference explanations intended to educate users on implementation science phases and strategies were beyond the scope of this costing instrument. Hence, the supporting education-related materials were removed from the costing instrument (Additional file 2: Appendix A, page 3; Appendix B, page 9; Appendix C, page 10). This information was replaced with references to key studies within the implementation science literature to assist users in deepening their understanding as required. These updates were made to the costing instrument in Round 2, and consensus was achieved on both the removal of supporting material (Additional file 4: Table 3—question 4.4.2: 83% agreement) and the refined scope of the instrument (Additional file 4: Table 3—question 3.1.1: 92% agreement).
Through this refinement, the research team recognised an implicit assumption that the user would likely have some prior understanding of implementation science, which we contend is reasonable given the intention to use and cost implementation strategies. In Round 2, participants were asked whether it is appropriate to assume that users of the costing instrument will have some level of prior implementation science knowledge; this statement did not reach consensus (Additional file 4: Table 3—question 3.1.3: 67% agreement).
Specificity to digital health solutions
The costing instrument was initially framed for application in digital health contexts, and feedback from Round 1 indicated that more digital health-specific examples would be helpful. Given the refinements to the overall instrument scope (outlined above), the research team was also prompted to consider making the instrument more generic to allow for potential application beyond digital health contexts. The rationale was that the costing categories for implementation strategies (as opposed to specific interventions or technologies) used for digital health solutions may be transferable across settings. Although consensus was not reached on this update to the costing instrument in Round 2 (Additional file 4: Table 3—question 5.1.2: 67% agreement), most participants recognised it was plausible that the instrument could be generic. The research team concluded that subsequent piloting would be required to confirm or refute the extent to which the instrument was generalisable beyond digital health.
Additional digital formats
In response to the feedback from Round 1, the digital functionality of the costing instrument was improved. An electronic version of the data collection templates was created in Microsoft Excel, including the use of ‘drop-down’ options where possible to optimise data quality. The Excel file included two additional summary tables that automatically populated with data entered into the templates. Consensus was achieved on this update to the costing instrument in Round 2 (Additional file 4: Table 3—question 6.1.1: 92% agreement; question 4.1.2: 92% agreement). Participants were satisfied with the Microsoft Excel version and did not indicate interest in any of the additional digital formats suggested, including REDCap, Microsoft Word, or PDF (Additional file 4: Table 3—question 6.1.3: 33% agreement).
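As an illustration of this kind of entry restriction, the following minimal sketch shows how a drop-down list could be attached to a template column using Python and openpyxl; the worksheet name, column label and list values are hypothetical and are not taken from the Cost-IS Excel file.

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

# Hypothetical workbook standing in for a Cost-IS data collection template
wb = Workbook()
ws = wb.active
ws.title = "Template 2A"
ws["C1"] = "Strategy"  # column whose entries should be restricted

# Restrict column C to a fixed list of strategy names (illustrative values only)
strategies = '"Conduct educational meetings,Audit and provide feedback"'
dv = DataValidation(type="list", formula1=strategies, allow_blank=True)
ws.add_data_validation(dv)
dv.add("C2:C100")  # apply the drop-down to the data entry rows

wb.save("cost_is_sketch.xlsx")
```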
The final implementation costing instrument
The final Costing Implementation Strategies (Cost-IS) instrument is presented through a worked example below and in Additional file 5. A health system perspective was taken in the worked example. The aim and scope of the instrument are to collect data on the costs associated with implementation strategies for digital health solutions. The instrument comprises three data collection templates and can be found online at https://cost-is.github.io/instrument/.
Cost-IS template 1: planning
The purpose of Template 1 is to help identify the specific data items that need to be collected, allowing for comprehensive and targeted data collection later in the costing process. In Template 1, users document the relevant implementation strategies and then outline the activities needed to operationalise each strategy. Both the labour and non-labour resources used to deliver the activities are listed in the final column. Table 1 provides a worked example of Template 1, including four implementation strategies with their associated activities and resources.
Table 1 Cost-IS template 1 worked example
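To make the structure of Template 1 concrete, the sketch below models its rows as simple records mapping each implementation strategy to its activities and to the labour and non-labour resources required. The field names and example entries are illustrative assumptions and are not drawn from the Cost-IS worked example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlanningRow:
    """One row of a Template 1-style planning table (illustrative structure)."""
    strategy: str                      # implementation strategy being costed
    activities: List[str]              # activities that operationalise the strategy
    labour_resources: List[str] = field(default_factory=list)
    non_labour_resources: List[str] = field(default_factory=list)

# Hypothetical planning entries
plan = [
    PlanningRow(
        strategy="Conduct educational meetings",
        activities=["Develop training content", "Deliver ward in-services"],
        labour_resources=["Nurse educator", "Project officer"],
        non_labour_resources=["Printed handouts", "Room hire"],
    ),
    PlanningRow(
        strategy="Audit and provide feedback",
        activities=["Extract usage reports", "Present results to clinicians"],
        labour_resources=["Data analyst"],
        non_labour_resources=["Reporting software licence"],
    ),
]
```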
Cost-IS templates 2A/B: data collection
Templates 2A and 2B are used to collect the data necessary to quantify implementation costs; Template 2A collects data on the labour resources associated with the implementation strategies, while Template 2B collects data on non-labour resources. In the worked example of Template 2A (Table 2), all activities associated with the hypothetical implementation were recorded. Each activity instance was given a specific index number because some activities occurred more than once. Similarly, a purpose was recorded for each activity to distinguish it from other similar activities. The implementation strategy related to the respective activity was documented in the same row. Personnel involved in the activity were documented, with each personnel type/role recorded on a separate row and roles distinguished by wage rate or title classification. For each activity, the number of personnel in each role and the time spent on the activity by that role were recorded. The digital version of this template includes two additional columns that automatically calculate labour costs when the columns presented in Table 2 are completed. In the digital version, entries in the ‘Activity’, ‘Strategy’ and ‘Role’ columns are restricted by drop-down menus containing the items listed in Template 1. Template 1 can be completed iteratively as required by the project.
Table 2 Cost-IS template 2A worked example
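The automatic labour-cost columns themselves are not reproduced in Table 2. The sketch below assumes, as a common costing convention rather than a documented Cost-IS formula, that each row's labour cost equals the number of personnel multiplied by the time spent and an hourly wage rate for that role; the wage rates and row values are hypothetical.

```python
# Hypothetical hourly wage rates per role (illustrative, not Cost-IS values)
WAGE_RATES = {"Nurse educator": 68.0, "Project officer": 55.0, "Data analyst": 62.0}

def labour_cost(role: str, n_personnel: int, hours: float) -> float:
    """Labour cost for one Template 2A-style row: personnel x hours x hourly rate."""
    return n_personnel * hours * WAGE_RATES[role]

# Example rows: (activity index, activity, strategy, role, personnel, hours)
rows = [
    (1, "Develop training content", "Conduct educational meetings", "Nurse educator", 1, 6.0),
    (2, "Deliver ward in-services", "Conduct educational meetings", "Nurse educator", 2, 1.5),
    (3, "Extract usage reports", "Audit and provide feedback", "Data analyst", 1, 3.0),
]

for idx, activity, strategy, role, n, hrs in rows:
    print(idx, activity, role, f"${labour_cost(role, n, hrs):.2f}")
```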
Summary table examples
Summary tables can be readily created from the data in the completed templates, in whatever form the analyst finds most meaningful. The templates were designed to collect data at varying levels of detail because of the wide range and adaptable nature of implementation projects. Table 3 and Fig. 1 demonstrate how implementation costs from the worked example can be summarised by both role and implementation strategy.
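As one way of producing such a summary from completed Template 2A data, the sketch below aggregates row-level labour costs by role and implementation strategy using pandas; the column names and figures are illustrative and follow on from the hypothetical rows above.

```python
import pandas as pd

# Hypothetical completed Template 2A rows with labour costs already calculated
df = pd.DataFrame({
    "strategy": ["Conduct educational meetings", "Conduct educational meetings",
                 "Audit and provide feedback"],
    "role": ["Nurse educator", "Nurse educator", "Data analyst"],
    "cost": [408.0, 204.0, 186.0],
})

# Summarise implementation costs by role and implementation strategy
summary = df.pivot_table(index="role", columns="strategy",
                         values="cost", aggfunc="sum", fill_value=0)
summary["Total"] = summary.sum(axis=1)
print(summary)
```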
Table 3 Cost-IS summary table worked example (role and strategy)
Fig. 1 Cost-IS summary figure worked example (role and strategy)